Black-sky thinking – A distinguished astronomer sees evidence of extraterrestrial life | Books & arts – The Economist

Feb 13th 2021

Extraterrestrial. By Avi Loeb. Houghton Mifflin Harcourt; 240 pages; $27. John Murray; £20

THE OBJECT came hurtling in from deep space, from the direction of Vega, a star 25 light-years away. It crossed the orbital plane of the solar system, within which the Earth and the other planets revolve around the sun, on September 6th 2017. Now under the influence of the sun's gravitation, the object accelerated to around 200,000mph as it made its closest approach to the star on September 9th. Its trajectory then took it out of the solar system. A month after the object had arrived, it was well on its way back to interstellar space, moving towards the constellation of Pegasus.

As it catapulted past the sun and began to head off, no one on Earth had any idea of the object's existence. Astronomers at the Haleakala Observatory in Maui only discovered it on October 19th; it was hidden in the data collected by their network of telescopes, as a point of light that travelled too fast to be trapped by the sun's gravity. They gave it a name: Oumuamua.

In the weeks after this discovery, astronomers quickly confirmed that Oumuamua (which loosely means "scout" in Hawaiian) was the first interstellar object recorded as having passed through the solar system. Initially it was thought most likely to have been an asteroid or a comet; but as 2017 drew to a close, the available data continued to puzzle scientists. Their analyses indicated that Oumuamua was small (around 400 metres long) and shiny (perhaps ten times shinier than any asteroid or comet seen before). It seemed to have an elongated, cigar-like shape, at least five to ten times longer than it was wide. (Later it was generally deemed to have been flatter, like a pancake.) Astronomers had never seen anything like it.

In addition to these physical peculiarities, Oumuamua had travelled along a path through the solar system that could not be explained by the gravity of the sun alone. "This, for me, was the most eyebrow-raising bit of data we accumulated over the roughly two weeks we were able to observe Oumuamua," writes Avi Loeb, an astronomer, in Extraterrestrial, his account of the interstellar visitation. "This anomaly about Oumuamua would soon lead me to form a hypothesis about the object that put me at odds with most of the scientific establishment."

For, after studying the available evidence, Mr Loeb concluded that the simplest explanation for the exotic strangeness of Oumuamua was that it had been created by an intelligent civilisation beyond Earth.

By definition, scientists are meant to follow wherever the evidence leads them. Personal biases and prejudices can cloud the judgments of those seeking to understand the rules of nature, but the methods of modern scientific research, developed over hundreds of years and keenly honed in the past century, seek to reduce the impact of subjective human factors that could otherwise impede progress.

Observations and data are the material on which scientists build their hypotheses. Those hypotheses are then ritually torn apart by other scientists and, if they can withstand sustained critiques and are not contradicted by further evidence from the real world, they might lay claim to being true. In science, changing your mind in the light of fresh information is seen as a good thing. If a new conjecture gathers supporting evidence and eventually supplants years of previous thinking on a topic, scientists are duty-bound to abandon the defunct ideas and embrace the new ones. The more radically an idea diverges from the mainstream, however, the greater the scrutiny it will inevitably face. Carl Sagan, an American astronomer, once summed this up: "Extraordinary claims require extraordinary evidence."

That is the theory, at least. But, as in any profession, the course of scientific research can be influenced (both positively and negatively) by fashions and personalities, which can also determine who receives funding and which ideas get heard. Take the search for extraterrestrial intelligence, commonly known as SETI. Since the 1960s astronomers have been listening to the skies for any signs of radio signals sent out by technologically capable life beyond Earth. For most of its existence, though, SETI has been marginalised, dismissed as a lesser use of time and resources than the more prestigious study of black holes, subatomic particles, stars, galaxies and other "real" physics. The steadfastly radio-silent skies have not burnished SETI's image as a discipline to be taken seriously.

Mr Loeb says he has always found the hostility to SETI bizarre. Modern mainstream theoretical physicists, he points out, accept the study of spatial dimensions beyond the three (length, breadth and depth) with which people are familiar. Experimental evidence for these dimensions, however, does not exist. Similarly, many leading cosmologists think that this universe is one among an infinite number of others that exist together in a multiverse. But, again, experimental evidence for that proposition does not exist. String theory, the putative theory of everything that is meant to bind together the physics of the cosmos with that of subatomic particles, is considered scientific even though there is no direct evidence to prove it is real.

Compared with these abstract theories, the notion that there could be life elsewhere in the universe, when it is known to exist on Earth, should not seem so radical a subject of study. Mr Loeb thinks resistance to it comes from two sources. First, the laughable popular narratives in which aliens lay waste to Earth's cities and possess superhuman wisdom. He is no fan of science fiction that ignores the laws of physics.

But the more important reason, he says, is a conservatism within science, which is sustained by the desire of individual scientists to keep risk low and funding high:

By limiting interpretations or placing blinders on our telescopes, we risk missing discoveries...The scientific community's prejudice or closed-mindedness, however you want to describe it, is particularly pervasive and powerful when it comes to the search for alien life, especially intelligent life. Many researchers refuse to even consider the possibility that a bizarre object or phenomenon might be evidence of an advanced civilisation.

The fact that accusations of conservatism in mainstream science are being levelled by an astronomer situated at the very heart of the scientific establishment may seem ironic. Mr Loeb has, after all, spent most of his career at prestigious American institutions, including a recent spell as the head of the astronomy department at Harvard University. He is also chairman of the board on physics and astronomy of the US National Academies.

His prominent status in astronomy circles has ensured that Mr Loeb's radical hypothesis has attracted widespread attention. All the same, and as he reports in his book, it would be putting the matter mildly to say that his idea has been met with disapproval by his scientific colleagues. Writing in Nature Astronomy in July 2019, a research team assembled by the International Space Science Institute concluded that it had found no compelling evidence to favour an alien explanation for Oumuamua. It dismissed Mr Loeb's theory as one not based on fact.

This is not his first brush with scientific celebrity. In 2016 he was the astrophysical brain behind Breakthrough Starshot, a $100m project funded by Yuri Milner, an Israeli-Russian tech billionaire, the goal of which is to dispatch a fleet of tiny probes called Starchips to Alpha Centauri, the nearest star to the sun. They are to be equipped with cameras able to relay any signs of life they might find back to Earth. Mr Loeb worked out that it might be possible to accelerate a Starchip to around 20% of the speed of light if it were fitted with an ultra-thin sail and a 100-gigawatt laser were directed towards it for a few minutes. So launched, the Starchips would in theory make the 4.4-light-year journey to Alpha Centauri in between 20 and 30 years.
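
As a rough consistency check on those figures (an illustration, not Mr Loeb's own published calculation), a probe cruising at 20% of the speed of light covers the distance to Alpha Centauri in

\[ t \approx \frac{4.4\ \text{light-years}}{0.2\,c} = 22\ \text{years}, \]

which sits within the quoted 20-to-30-year window.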

The Breakthrough Starshot project was announced a year before the discovery of Oumuamua. The hunt for life elsewhere may well have been on Mr Loeb's mind when he was contemplating the object's most intriguing anomaly: the weird way it had moved past the sun.

In June 2018 scientists reported that Oumuamua's trajectory had deviated slightly from the one it might have been expected to follow if it had been determined purely by the sun's gravitational attraction. As it passed the sun the object was pushed away by an unexplained force. Comets sometimes behave like this when they get close to the sun, but in their case the force is easy to explain: a tail of dust and gas is ejected from the ball of ice as it is heated by the sun, which gives the object a rocket-like push. Yet no such tail was detected near Oumuamua.

Mr Loeb had another hypothesis: perhaps sunlight was bouncing off the object's surface like the wind off a thin sail. A thin, sturdy, light sail, of the sort that he had himself proposed for the Breakthrough Starshot project, would be technically feasible for a more advanced civilisation. In any case, such a sail could not occur naturally; it would have to be engineered by intelligent beings.
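
The physics behind that suggestion can be sketched with a standard radiation-pressure estimate; this is an illustration of the general idea, not Mr Loeb's own published calculation. A thin sheet of area \(A\), mass \(m\) and reflectivity \(\epsilon\), at a distance \(r\) from the sun, picks up an acceleration of roughly

\[ a_{\mathrm{rad}}(r) \approx \frac{(1+\epsilon)\,L_{\odot}\,A}{4\pi r^{2}\,c\,m}, \]

where \(L_{\odot}\) is the sun's luminosity and \(c\) the speed of light. Like gravity, this push falls off as \(1/r^{2}\); only a body with a very large area-to-mass ratio, in other words something extremely thin, would feel an effect big enough to show up in the tracking data.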

He may or may not be right about Oumuamua. But that hardly seems to make much difference to what is ultimately the main thesis of his book. Conservatism may not be unique to astrophysics, he argues, but it is depressing and concerning given the huge number of anomalies still perceived in the universe. Mr Loeb is surely correct that scientists studying the vastness of the cosmos should entertain risky ideas more often, for the universe is undoubtedly more wild and unexpected than any extremes conjured by the human imagination. Extraterrestrial considers the possibility of intelligent life elsewhere, but its core message, an update to Sagan's maxim, is aimed squarely at life on Earth: "Extraordinary conservatism keeps us extraordinarily ignorant."

This article appeared in the Books & arts section of the print edition under the headline "Black-sky thinking"

DST astronomers trace huge optical flare from supermassive black hole discovered in the 1960s – Firstpost

Press Trust of India | Feb 17, 2021 10:12:47 IST

Indian astronomers have reported one of the strongest flares from a feeding supermassive black hole or blazar called BL Lacertae, analysis of which can help trace the mass of the black hole and the source of this emission, the Department of Science and Technology said on Saturday. Such analysis can provide a lead to probe mysteries and trace events at different stages of evolution of the Universe, it said. Blazars or feeding supermassive black holes in the heart of distant galaxies receive a lot of attention from the astronomical community because of their complicated emission mechanism. They emit jets of charged particles travelling nearly at the speed of light, making them one of the most luminous and energetic objects in the known universe.

"BL Lacertae blazar is 10 million light-years away and is among the 50 most prominent blazars that can be observed with the help of a relatively small telescope. It was among the 3 to 4 blazars that was predicted to be experiencing flares by the Whole Earth Blazar Telescope (WEBT), an international consortium of astronomers," the statement said.

Illustration of a shock wave (bright blob in the upper jet) following a spiral path (yellow) as it moves away from the black hole and through a section of the jet where the magnetic field (light blue curved lines) is wound up in a coil. Image: Cosmovision/Instituto de Astronomia

A team of astronomers led by Alok Chandra Gupta of the Aryabhatta Research Institute of Observational Sciences (ARIES), an institute of the Department of Science & Technology, which has been following the blazar since October 2020 as part of an international observational campaign, detected the exceptionally strong flare on January 16 with the help of the Sampurnanand Telescope (ST) and the 1.3 m Devasthal Fast Optical Telescope, both located in Nainital.

The data collected from the observed flare will help in calculating the black hole's mass, the size of the emission region, and the mechanism of the emission from one of the oldest astronomical objects known, hence opening a door to the origin and evolution of the Universe, it added.

Farthest known object in the solar system identified – EarthSky

This illustration imagines what the distant object nicknamed Farfarout might look like in the outer reaches of our solar system. The most distant object yet discovered in our solar system, Farfarout is 132 astronomical units from the sun, which is 132 times farther from the sun than Earth is. Estimated to be about 250 miles (400 km) across, Farfarout is shown in the lower right, while the sun appears in the upper left. The Milky Way stretches diagonally across the background. Image via NOIRLab/ NSF/ AURA/ J. da Silva.

Back in January 2018, astronomers detected a faint object in our solar system so far away from the sun that they nicknamed it Farfarout. After a couple of years of additional observations, the astronomers are now, as of February 10, 2021, ready to declare that this object, with the formal designation 2018 AG37, is indeed the farthest object in the solar system yet discovered.

Because Farfarout is so, well, far out, it has a huge orbit around the sun that takes it a long time, more than a millennium, to complete. Because of this slow movement along its orbit, astronomers must take many observations over a long period of time to truly determine how far away it is.

The Subaru Telescope in Hawaii made the initial observation, and followup observations with the Gemini North telescope in Hawaii and the Magellan Telescopes in Chile have allowed astronomers to calculate Farfarout's current distance of 132 astronomical units from the sun. An astronomical unit (AU) is defined as the distance of Earth from the sun, with 1 AU equaling 93 million miles (150 million km). Therefore, Farfarout is 132 times farther away from the sun than Earth is from the sun.
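
The conversion is simple arithmetic:

\[ 132\ \text{AU} \times 93\ \text{million miles per AU} \approx 12.3\ \text{billion miles} \ (\text{roughly } 19.7\ \text{billion km}), \]

which is the "more than 12 billion miles" quoted in the bottom line below.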

As delightful as the name Farfarout is, the object will get an official name in coming years after more observations provide more information. Astronomers estimate the dim solar system body to be about 250 miles across, which puts it at the small end of the size range for a dwarf planet.

This artist's concept depicts the most distant object yet found in our solar system, nicknamed Farfarout, in the lower right. In the lower left, a graph shows the distances of the planets, dwarf planets, candidate dwarf planets, and Farfarout from the sun in astronomical units (AU). One AU is equal to Earth's average distance from the sun. Farfarout is 132 AU from the sun. Image via NOIRLab/ NSF/ AURA/ J. da Silva.

A quirky fact about Farfarout is that it isn't always that far away. The object's orbit is so stretched out that at its farthest point, it reaches 175 AU from the sun, while at its closest point it's 27 AU. This would occasionally put it inside Neptune's orbit. In fact, a close encounter with Neptune is probably what flung the object out into the far reaches of the solar system in the first place.
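
A quick check of the "more than a millennium" orbital period mentioned above: taking the semi-major axis as the average of the quoted perihelion and aphelion and applying Kepler's third law (period in years, semi-major axis in AU) gives

\[ a \approx \frac{27 + 175}{2} = 101\ \text{AU}, \qquad P \approx a^{3/2} \approx 1{,}000\ \text{years}, \]

consistent with the slow, stretched-out orbit described here. (This is a back-of-the-envelope estimate, not the discovery team's fitted orbit.)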

How far out is Farfarout compared to distant solar system objects? Neptune is 30 AU from the sun, and Pluto is, on average, 39 AU from the sun. Voyager 1 and Voyager 2, spacecraft launched in the 1970s, are the furthest manmade objects in space and lie at 152 and 126 AU, respectively.

The previous record holder for farthest object in the solar system was nicknamed Farout (no surprise) and was estimated to be 124 AU from the sun. But these far-out objects can't compete with the hypothesized Oort Cloud of comets, which is believed to lie between 2,000 and possibly as far as 50,000 AU from the sun, or 4/5 of a light-year away. So while Farfarout is indeed a very long way away, there are still things farther out to be discovered. Maybe a future Farfarfarout is in the offing.

This is a side-by-side comparison of the discovery images of Farfarout (2018 AG37) taken on January 15, 2018, and January 16, 2018. Note how the object moves in reference to background stars and galaxies. Image via Scott Sheppard/ NOIRLab.

Scott Sheppard, an astronomer from the Carnegie Institution for Science, co-discovered both Farout and Farfarout. He sees the newest record holder as a starting point, not an end:

The discovery of Farfarout shows our increasing ability to map the outer solar system and observe farther and farther towards the fringes of our solar system. Only with the advancements in the last few years of large digital cameras on very large telescopes has it been possible to efficiently discover very distant objects like Farfarout. Even though some of these distant objects are quite large, the size of dwarf planets, they are very faint because of their extreme distances from the sun. Farfarout is just the tip of the iceberg of objects in the very distant solar system.

Bottom line: Farfarout is the nickname given to the farthest known object in the solar system, which is currently at 132 AU, or more than 12 billion miles from the sun.

Homan: Surge at border will be ‘astronomical’ without Title 42 in Mexico – Yahoo News

The Telegraph

New Zealand Prime Minister Jacinda Ardern has accused Australia of "exporting its problems" for cancelling the citizenship of a dual-national Australian-New Zealander who reportedly joined the Islamic State in Syria. On Monday Turkey's Defence ministry said a 26-year-old New Zealand Daesh terrorist was being deported with her two children after Turkish border staff caught them crossing illegally from the northwest Syrian province of Idlib. Media reports identified the woman as Suhayra Aden, who moved to Australia from New Zealand when she was six years old and lived in Melbourne before travelling to Syria on her Australian passport in 2014 to live under the so-called Islamic State.

On Tuesday an irate Ms Ardern said she had spoken with Australian Prime Minister Scott Morrison about the dual national in 2019, after she was detained with her two children when Western-backed Syrian Kurdish forces retook the final sliver of IS territory in Syria. Mr Morrison then revoked Ms Aden's citizenship without telling Ms Ardern, leaving New Zealand to deal with the dilemma alone. "You can imagine my response," she said, after learning the next year that Australia had acted unilaterally. "Our very strong view on behalf of New Zealanders was that this individual was clearly most appropriately dealt with in Australia. That is where their family reside, that is where their links reside, and that is the place they departed for Syria," she said.

Ms Ardern said the welfare of Ms Aden's surviving children, aged five and two, was paramount. "These children were born in a conflict zone through no fault of their own," said Ms Ardern. Ms Aden reportedly had a third child who died of pneumonia, after marrying twice in Syria to Swedish nationals who also both died.

Ms Ardern said Australia had abdicated responsibility for Ms Aden, who spent most of her life in Australia. "New Zealand, frankly, is tired of having Australia exporting its problems," Ms Ardern said. "If the shoe were on the other foot we would take responsibility, that would be the right thing to do and I ask Australia to do the same."

But an uncontrite Mr Morrison said his only concern was the safety of Australians. "It's my job as Australia's prime minister to put Australia's national security interests first," he told a press conference. Australian legislation to automatically cancel citizenship for dual nationals determined to have engaged in terrorism has been used against at least 17 people who reportedly joined IS.

The case highlights the unresolved issue of tens of thousands of prisoners left in limbo following the territorial defeat of IS. Most are held in squalid conditions in the Al-Hol camp near the Iraqi border, though following hundreds of escapes from the sprawling camp, authorities last year moved dozens of Western prisoners to the smaller and more secure Roj camp. At one time up to 66 Australians, including 44 children, were believed to be in the camps, though the Australian government repatriated eight children in June 2019, and others may have escaped.

One New Zealand man is known to be detained in northeast Syria. Mark Taylor, who became known as the "Bumbling Jihadi" for revealing his location in posts calling for attacks on New Zealanders, has been held in a Kurdish jail since surrendering in late 2018. Earlier this month a group of United Nations experts called on the 57 governments who are believed to have nationals in the camps to repatriate their citizens, following reports that 20 people were murdered in Al-Hol in January.

Radio Astronomer F. Peter Schloerb Named to ARCS Foundation Alumni Hall of Fame – UMass News and Media Relations

At its January national board meeting, ARCS Foundation Inc. announced that radio astronomer and planetary scientist F. Peter Schloerb has been selected as the 2021 inductee into the ARCS Alumni Hall of Fame.

Hall of Fame inductees are ARCS Scholar Alumni who have made outstanding contributions to the advancement of science and increased our nation's scientific competitiveness. Selection is by a panel composed of ARCS Foundation national board members and advisors and is based on alumni contributions in the areas of scientific innovation, discovery, economic impact, development of future scientists, and enhancement of US scientific superiority.

Schloerb is the UMass director of the Large Millimeter Telescope (LMT) Alfonso Serrano, a joint project between UMass and the Instituto Nacional de Astrofísica, Óptica y Electrónica in Mexico. He is also a professor and director of the Five College Radio Astronomy Observatory at UMass.

ARCS Foundation has recognized Schloerb for his leadership and vision for the concept, design and construction of the LMT, which is a crucial component in a network of eight strategically placed telescopes around the globe known as the Event Horizon Telescope (EHT). The international collaboration, which includes hundreds of scientists in twenty countries, forms an Earth-sized virtual telescope with unprecedented sensitivity and resolution.

In April 2019, the EHT partnership, including Schloerb and the LMT, publicly revealed the first ever image of a supermassive black hole at the center of a galaxy 55 million light-years away, also marking a powerful confirmation of Albert Einstein's theory of general relativity.

The image brought numerous honors for Dr. Schloerb and members of the EHT, including the 2020 Breakthrough Prize in Fundamental Physics.

Schloerb was an ARCS Scholar in 1977 while attending the California Institute of Technology, where he received his Ph.D. in planetary science. Upon accepting his induction into the ARCS Alumni Hall of Fame, he thanked ARCS for its encouragement of his scientific studies, which culminated in the historic observations he shares with the world today.

"ARCS Foundation means a great deal to me because they supported me during my graduate education. The support of young scientists is very important to the advancement of science in the US, and more generally, around the world."

"It is truly our pleasure to welcome Dr. F. Peter Schloerb into the distinguished ARCS Alumni Hall of Fame," said Sherry Lundeen, national president of ARCS Foundation. "His prolific impact in US planetary space studies embodies ARCS Foundation's historic beginnings and our mission to award outstanding scholars who will promote US competitiveness in STEM fields. We are extremely proud of Dr. Schloerb's work, as it continues to provide significant images that will be studied for years to come."

As a member of the ARCS Alumni Hall of Fame, Schloerb joins the company of eleven other outstanding alumni who also received ARCS Foundation funding to support their education in past years.

The benefits and risks of neural interfaces – Bangkok Post

This week is dedicated to the brain-computer interface, or BCI. For some time now, sci-fi movies and TV series have presented the idea of a mind-to-computer interface that controls technology, retrieves information and displays it on virtual screens. Meanwhile, in the background, a number of companies have been working on this and the technology is close to realising some of the outcomes only seen in fiction so far.

- The concept is not a new one. I've seen experiments from many years ago where a test subject moved a pointer around on a screen using a passive interface. These days, there are many places like the University of Pittsburgh Medical Center conducting experiments with implanted chips. Around the time Covid-19 popped up, one test subject, Nathan Copeland -- paralysed from the waist down -- took home an advanced brain-computer interface device that allows him to control on-screen actions using only his mind. This device uses a multi-electrode array chip that was implanted inside him in 2015 to control a robotic arm that allows him to play computer games, including supporting the fine motor control required to play complex games like Final Fantasy XIV.

- The game developer Valve has also put a lot of effort into this technology over the past few years but with a passive, rather than an implanted, interface. Valve halted production of their next generation of virtual goggles to investigate this further. According to studio president Gabe Newell: "We're way closer to The Matrix than people realise." In practical terms, the goal is to have a headset where the controls are directed by the wearer's mind. Neurable, a start-up gaming company, already had such a device back in 2017 that could control an escape game using sensors in a cap and a Vive virtual reality headset. That company has since moved on to military applications but the technology still remains for the gaming market.

- In the case of Nathan Copeland, the signals don't just go in one direction. Sensors in the arm also trigger responses like tingling, pressure, warmth, tapping and vibrations -- elements of the sense of touch. However, this has also raised questions and ethical concerns. Could you stimulate a craving, or addiction, or a preference so that your behaviour could be modified? Any game manufacturer's marketing and sales departments would love it if their products became addictive, no matter what disclaimers they might publish. Imagine for example getting positive physical feedback, triggered directly by the game, for each level you pass or win. There are already people spending thousands of dollars for levelling up in games like Genshin Impact without that direct link. Add physical feedback to the equation and you can see where this might lead. Now, add in the sharing of this data with a corporation like Google and this technology potentially starts to get scary.

- Will there be principles on the permissible use and misuse of neurotechnology and user rights? Typically, government policy lags technology, so the creation of a technological bill of rights will be well behind the potential misuse. I'd be happy to start with a basic on/off switch for data transmission to an outside entity like Google.

- Personally, I'm waiting for the neurofeedback device from Mendi that I signed up for on the equivalent of Kickstarter. It is a passive device that is billed to improve mental well-being, performance and overall health via "brain enhancement training" at home. I'll let you know how it goes when I get it.

- Elon Musk recently revealed that his Neuralink brain implant has been successfully implanted in a monkey's brain, allowing it to play video games. Musk also claimed that he's not an unhappy monkey and that a US Department of Agriculture representative sent to check the facility said it was the nicest she'd seen in her entire career. So happy monkey aside, the aim of the Neuralink is to improve and speed up human-machine communication. Musk pointed out that the bandwidth between your cortex and your smartphone is slow. He estimates a direct connection would speed things up by a factor of 1,000 -- three orders of magnitude -- a lot. If two people had a Neuralink, it might even be like telepathy because of the compressed exchange. "Sort of like a Fitbit in your skull with tiny wires that go to your brain," he added, before promising more updates in a couple of months.

- To end the article this week, we have some artificial intelligence news. Samsung is planning to test an autonomous ship in August this year. The 133m-long, 9,200 tonne training vessel will make a five-and-a-half hour trip under automation. The aim is to sell automation kits for ships by 2022. What could possibly go wrong with a 200,000 tonne vessel under computer-only control? Back in South Korea, Hyundai and Kia have announced that despite the rumours, they are not working on an Apple car.

JAMA Highlights Success of BrainScope’s EEG-based Concussion Index As Reliable Indicator of Concussion – Herald-Mail Media

BETHESDA, Md., Feb. 16, 2021 /PRNewswire/ --JAMA Network Open has published "Validation of a Machine Learning Brain Electrical Activity-Based Index to Aid in Diagnosing Concussion Among Athletes," a ground-breaking study on the accuracy of the BrainScope FDA-cleared biomarker, the Concussion Index, to indicate the likelihood and severity of concussive brain injury and to aid in evaluating an athlete's readiness to return to play.

"What this shows us is that for the first time we have a point of care and objective marker that can rapidly identify the likelihood of concussive injury and can be used to follow patients from baseline through recovery," said lead author Dr.Jeffrey Bazarian, Department of Emergency Medicine, University of Rochester School of Medicine. "With its demonstrated accuracyand ease of use in the athletic environment, the Concussion Index has great potentialto be incorporated intoexisting standard assessments of concussion to aid in objective clinical diagnosis andin determination of readiness to return to play."

In the study, male and female (age 13-25 years) athletes with concussion and athlete "controls" (without concussion) were assessed through a variety of methods, including EEG, cognitive testing and symptom inventories within 72 hours of the injury, when they returned to play, and 45 days after they returned to play. Specific variables from the multi-modal assessment were used to generate a Concussion Index at each time point, with EEG having the largest contribution.

The study was conducted between 2017 and 2019 with referrals from 49 high schools, colleges, and concussion clinics in the US. Of the 580 eligible participants in the analysis, 207 had concussion and 373 were control athletes without concussion, and a total of 1,318 evaluations, including follow-ups, were included in the analyses. The Concussion Index had a sensitivity of 86%, specificity of 71%, and negative predictive value of 90%. These results support the high accuracy of the Concussion Index in the identification of the likelihood of concussion, with performance above that reported in the literature for concussion assessment tools, which are largely subjective and show poor replicability. The study can be accessed here.
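
For readers unfamiliar with these diagnostic metrics, they have standard definitions in terms of true and false positives (TP, FP) and true and false negatives (TN, FN):

\[ \text{sensitivity} = \frac{TP}{TP+FN}, \qquad \text{specificity} = \frac{TN}{TN+FP}, \qquad \text{NPV} = \frac{TN}{TN+FN}. \]

Read against the reported figures, 86% of concussed athletes were flagged as positive, 71% of non-concussed controls were correctly cleared, and a negative result was correct about 90% of the time.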

"The results of this study are an independent demonstration of the power and reliability of BrainScope's Concussion Index as an objective marker in the clinical assessment of concussions at the time of injury and as a reliable indicator of change over time," said Dr. Leslie Prichep, Chief Scientific Officer of BrainScope. "Importantly, the study targeted patients of high school and college age who are at great risk for both short and long-term consequences of concussion, as the brain is still developing."

The Concussion Index can be used for baselining (particularly with athletes and military recruits), at the time of injury, and to aid in decision-making on readiness to return to activity or duty. The addition of BrainScope's AI-derived Concussion Index, which received FDA clearance in late 2019, will complement the previously FDA cleared triage algorithms for assessing the likelihood of brain bleeds (99% sensitivity) and assessing the severity of functional impairment already on the device. The assessment takes less than 20 minutes from prep to results. The hand-held BrainScope device and disposable headsets are available today for sale through the company and must be used with a physician's order. The company is currently taking orders on the enhanced BrainScope device and expects the first customer placements and trainings to begin in March.

This material is based upon work supported by the US Army Contracting Command, Aberdeen Proving Ground, Natick Contracting Division, under Contract No. W911QY-14-C-0098. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the US Army Contracting Command, Aberdeen Proving Ground, Natick Contracting Division. This work was further supported by an award received as a Grand Prize Winner of the GE-NFL Head Health Challenge to advance the development of technologies that can detect early-stage mild traumatic brain injuries and improve brain protection.

About BrainScope Company, Inc.

BrainScope is a medical neurotechnology company that is improving brain health by providing objective, diagnostic insights that enable better patient care. BrainScope is leading the way in the rapid and objective assessment of brain-related conditions, starting with mild traumatic brain injury (mTBI), utilizing multiple integrated assessment capabilities, artificial intelligence (AI), and digitization. The company's technology supports the American College of Emergency Physicians (ACEP) Choosing Wisely campaign to avoid CT scans of the head in emergency department patients with minor head injury. BrainScope's innovative EEG-based, AI-driven platform empowers physicians to quickly make accurate head injury assessments, addressing the full spectrum of traumatic brain injuries from structural (brain bleed) to functional (concussion) injuries, providing for the first time a full picture of the injury, and doing so in less time and without radiation. For more information, please visit http://www.brainscope.com.

Media Contact:

Cherie Lucier, BrainScope Company Inc.

VP Brand Experience

cherie.lucier@brainscope.com

215-805-0131

SOURCE BrainScope Company, Inc.

Connecting the unconnected in rural America – Brookings Institution

A majority of Americans believe the COVID-19 pandemic has made the internet essential. Once a "nice to have" service for enjoying streaming videos or playing games, broadband connections are now a "must have". Reaching online services to work and learn from home, visit the doctor, and even order toilet paper has never been more important.

But there are up to 42 million Americans for whom this essential network is not available, and millions more for whom it is available but unaffordable.

The Biden FCC is inheriting a two-part broadband challenge: an accessibility gap of those (largely in rural areas) for whom there is no network, and an adoption gap (more prevalent in urban areas) of those unable to afford the network. This paper addresses the accessibility problem and how it was aggravated by the Trump FCC's substitution of PR for progress and the mismanagement of the most promising effort to connect unserved Americans. The next paper in this series will address the adoption gap.

One day before leaving office, and only nine months after its previous annual report, the Trump Federal Communications Commission (FCC) took one last turn at the political spin machine with a self-congratulatory analysis that proclaimed significant progress in expanding access to the internet. The report pegged the number of Americans lacking access to high-speed internet connections as having decreased to 14.5 million. An independent study of the previous regularly scheduled report, however, found the FCC was consistently underreporting reality: up to 42 million Americans were unserved by broadband.

"Since my first day as Chairman of the FCC, my number one priority has been closing the digital divide and bringing the benefits of the Internet age to all Americans," Trump FCC Chairman Ajit Pai proclaimed. In a 14-page list of "FCC Accomplishments Under Chairman Ajit Pai" that he ordered be placed on the FCC website, "Bridging the Digital Divide" consumes the first four pages. It is a triumph of self-promotion, but a laundry list devoid of significant measurable results.

The Biden FCC's challenge begins with correcting the charades that inflated internet coverage by deflating the definition of broadband and overstated the number of homes connected.

Defining broadband down

Any assessment begins with what is being counted. In this case, the Trump FCC's spin was to define broadband unrealistically. In 2015 the Obama FCC, over the objection of then-commissioner Pai, increased the network speed definition for broadband to 25 megabits per second (mbps) into the home and 3 mbps out of the home. It was adequate for the time, but the simple evolution of the internet in the intervening years, including the proliferation of in-home internet devices even before COVID, subsequently rendered those speeds inadequate.[1]

The Trump FCC, realizing that as broadband speeds went up, coverage calculations went down, refused to update the old definition. Mignon Clyburn, the sole Democrat on the Commission when the decision was made, called this out, wisely identifying what would become manifest during COVID as multiple family members tried to get online simultaneously. The Trump FCC's decision, she objected, "does not even consider that multiple devices are likely utilizing a single fixed connection, or the multiple uses of a mobile device."

The coverage charade

There is no national address-by-address database of internet coverage. Instead, the FCC relies upon data voluntarily provided by the industry on the agency's Form 477. That process is fatally flawed because it looks at census blocks rather than street addresses. Under this approach, if there is one internet-accessible household in the census block, the entire area is deemed to have broadband access.

Telecom providers argued that requiring them to collect more accurate data was unduly burdensome. That argument may have worked in the days when the internet was a "nice to have" service; but with broadband now an essential service, such flawed data is unacceptable. For its entire four years, the Trump FCC failed to resolve this mapping problem. Finally, Congress passed a law in 2020 ordering the FCC to reform its data collection. The Trump FCC failed to meet the congressionally-established deadline to adopt procedures for the mapping effort. It is not unreasonable to expect that when the real facts replace the Trump-era charade, the number of unserved Americans will increase. By slow-rolling that result, the Trump FCC avoided such a reckoning on its watch.

There is an old management maxim: if you can measure it, you can manage it. An early challenge of the Biden FCC will be to measure broadband penetration based on the facts rather than the spin.

Decades ago, the FCC adopted a strategy of subsidizing rural telephone companies for the higher-than-normal costs created by lower-than-normal population densities. By subsidizing rural telecommunications carriers, the program provided rural customers with services comparable in cost and quality to their urban counterparts.[2]

As the internet age developed, the FCC refocused the program from telephone service to broadband. By one estimate, the period from 2013 to 2018 saw multiple federal efforts provide $22 billion in support to rural telecommunications companies.[3] Yet the number of Americans without access to a broadband internet connection remains unacceptably high.

In an effort to overhaul an underperforming subsidy system, the Obama FCC developed a reverse auction plan for the distribution of broadband subsidies. To avoid the shortcomings of previous efforts, the plan allowed providers to bid for unserved areas and allocated subsidies to the company committing to provide broadband service for the lowest subsidy. In 2018 the Trump FCC held a reverse auction for what was then called the Connect America Fund (the CAF-II auction). In 2020 the Trump FCC renamed the reverse auction concept the Rural Digital Opportunity Fund (RDOF).
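
The core allocation idea can be sketched in a few lines of code. This is a deliberately simplified, hypothetical illustration: one sealed bid per provider per area, lowest requested subsidy wins, subject to an overall budget. The actual CAF-II and RDOF auctions were multi-round descending-clock auctions with weighting for promised performance tiers, which this sketch does not attempt to model.

# Simplified reverse-auction sketch (hypothetical, not the FCC's actual rules):
# one sealed bid per provider per area; the lowest requested subsidy wins,
# subject to an overall budget cap.
def allocate(bids, budget):
    """bids: list of (area, provider, requested_subsidy); returns winners per area."""
    winners = {}
    spent = 0.0
    for area, provider, subsidy in sorted(bids, key=lambda b: b[2]):
        if area in winners:
            continue                      # area already awarded to a cheaper bid
        if spent + subsidy > budget:
            continue                      # this award would exceed the overall budget
        winners[area] = (provider, subsidy)
        spent += subsidy
    return winners

example_bids = [
    ("Area 1", "Fiber Co", 4.0), ("Area 1", "Wireless Co", 2.5),
    ("Area 2", "Fiber Co", 6.0), ("Area 2", "Satellite Co", 3.0),
]
print(allocate(example_bids, budget=10.0))
# {'Area 1': ('Wireless Co', 2.5), 'Area 2': ('Satellite Co', 3.0)}

Even in this toy version the article's concern is visible: the cheapest bid wins each area regardless of the quality or maturity of the technology behind it.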

Allocation via reverse auction is a good idea. Poor management by the Trump FCC, however, has limited its effectiveness. The Biden FCC has an opportunity to fix some of that poor management and thus deliver more broadband quicker.

A budget not spent is broadband not built

"I'm thrilled at the success of this auction," Chairman Pai crowed about the RDOF results; "our strategy worked." In fact, the strategy failed, and as a result fewer unserved areas will be receiving broadband than would have in a well-designed auction.

Instead of an auction designed to make its $16 billion budget serve as many areas as possible, the Trump FCC built an auction that would spend as little as possible. In this case $6.8 billion of the $16 billion budgeted was not put to work. The Chairman applauded the savings.

While saving money would seem a good thing, in this situation every dollar unspent translates into a rural area unserved. There is a high probability that an auction that prioritizes a race-to-the-bottom for subsidies will produce a similar race-to-the-bottom in service. The Trump FCC turned an auction intended to prioritize delivery of quality service to unserved areas into an auction that disincentivized delivery in favor of a quick win, even if the bidders and their technology were unproven. As one observer noted, such a policy cheated huge numbers of people out of getting fiber and the future-proof high-speed connections that optical fiber offers.[4]

As a strategy to add to the Chairman's List of Accomplishments, the RDOF auction was a success. As a method to deliver quality broadband service to the most people as quickly as possible, RDOF missed the mark and left a mess for the Biden FCC to clean up.

Failing the smell test

When something looks too good to be true, it often is. The rules the Trump FCC established for RDOF invited mischief and malfeasance. Consider just a few of the warning signals:

The Biden FCC inheritance

In order to speed the process to its declaration of victory, the Trump FCC put off any meaningful due diligence on bidders until after the auction (and, coincidentally, after the presidential election). The Trump FCC could have decided, for instance, that any technology not currently available on a commercial basis at the speeds and the capacity required should simply not be eligible for the auction. Instead, the principal check on bidders' qualifications and the feasibility of their technology was simply the completion of the so-called short form, in which the bidders indicated interest in participating in the auction.

The lack of due diligence allowed companies to bid for billions of dollars without any meaningful vetting of whether their technology would work in the areas for which they were bidding, or any validation of operational or technical capabilities to deliver on their bids. Following the auction, a bipartisan, bicameral letter from 160 senators and representatives focused on this problem, telling the FCC to increase its scrutiny of RDOF winners: "We urge you to thoroughly vet the winning bidders to ensure they are capable of deploying and delivering the services they committed to providing."

The Biden FCC will be responsible for the long-form process that collects and assesses the information the Trump FCC should have collected and assessed in the first place. The letter from Congress that identified the work the Trump FCC did not do asked the agency to redouble its efforts to review the long-form applications to validate that each provider has the technical, financial, managerial, operational skills, capabilities, and resources to deliver services that they have pledged.

By shirking its responsibilities over such obvious matters, the Trump FCC not only rushed out an under-performing product, but also left behind for the Biden FCC a political trap built on a game of chicken.

If the Biden FCC demands and inspects the technical details of the winning bidders, such as whether wireless can provide gigabit speeds to the promised areas, or whether a small fixed wireless company that has never deployed fiber-to-the-home can now reliably do so across hundreds of thousands of homes in multiple states, it risks triggering defaults on the winners' bids before a dime has been put to work. Alternatively, if it proceeds without such investigation, it risks wasting subsidy dollars on networks that don't perform, an answer that, because of the buildout schedule, could be years off.

The Trump FCC created a situation in which a bidder could over-promise to win the bid and then play chicken with the FCC to force the agency to renegotiate to more favorable terms. The Trump FCC knew of this problem because it caved to such a strategy in the CAF-II auction. When a bidder in that auction could not deliver on what it promised, the FCC retroactively changed the rules. This established a "bid what it takes now and negotiate a settlement later" precedent that by all indications was adopted as a primary strategy by many RDOF participants.

The haunting challenge the Biden FCC faces with the RDOF legacy is that Americans could be cheated out of meaningful, future-proofed broadband service. If subsidies are paid for technologies that cannot work or that perform below promises, potentially millions of Americans will lose. If selecting inexperienced companies or unproven technology over companies that bid based on reality and experience does not work, Americans can lose again. If the federal government ends up spending public monies on transitional technologies that are better than today's, but below what should be built for long-term performance, America can lose yet again.

Because the previous FCC failed to develop a process to validate bidders and technology, it now falls to the Biden FCC to fix the Trump FCC's RDOF mismanagement in the long-form process that will soon begin. It will not be an easy process.

As it cleans up the Trump FCC's debacle, the Biden FCC will fortunately have its own opportunity to deploy the unspent funds to build out unserved areas. The RDOF was budgeted to spend $16 billion but only spent $9.2 billion. The Biden FCC thus has almost $7 billion to allocate correctly. Add this residual to the $4.4 billion that has previously been identified for the next round, and there could be $11 billion for the effective support of rural broadband.

Why any new program should follow a fiber-first strategy comes down to a simple fact: once it is installed, optical fiber is future-proofed, with ever-expanding capacity.

Everyone knows of Moore's Law and the growth of processing power of microchips, but a similar (and not unrelated) acceleration has happened with optical fiber as well. When the Trump FCC decided to make RDOF "technology neutral", it hewed to Republican orthodoxy but ignored technological reality. The Biden FCC has the opportunity to run its own auction in a manner that promotes high-speed, future-proofed results by embracing a Moore's Law-like technology that will keep Americans up to speed long into the future.

[1] The amount of bandwidth or speed needed varies by household. Using Broadband Now's Bandwidth Calculator, a family of four, each with their own mobile device and laptop, sharing three TVs and two gaming consoles, two tablets and a smart home device (e.g., Alexa) requires 179 megabits per second (mbps) service.
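
The way such a calculator arrives at a figure like 179 mbps can be illustrated with a toy version. The per-device numbers below are assumptions made for the sketch, not Broadband Now's actual model; the point is simply that simultaneous use adds up well past the 25 mbps benchmark.

# Toy household bandwidth estimate (illustrative per-device figures, not
# Broadband Now's actual model): sum what the family might run at once.
PER_STREAM_MBPS = {
    "4K video stream": 25,
    "HD video stream": 5,
    "video call": 4,
    "online gaming session": 10,
    "general browsing device": 2,
    "smart home device": 1,
}

household = {                  # roughly the family of four described above
    "4K video stream": 1,
    "HD video stream": 2,
    "video call": 4,
    "online gaming session": 2,
    "general browsing device": 6,
    "smart home device": 1,
}

total = sum(PER_STREAM_MBPS[k] * n for k, n in household.items())
print(f"Estimated simultaneous demand: {total} mbps")   # 84 mbps with these assumptions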

[2] To fund this, the FCC assesses a fee on carriers (the Universal Service Fee) and gives them the ability to pass that fee on to subscribers as a surcharge on every telephone bill.

[3] I was Chairman of the FCC during many of those years. The failure to more rapidly expand broadband was one of my greatest disappointments.

[4] Without a doubt, there are areas where alternatives to fiber should be utilized; but those should be subsidized only if fiber is impossible.

The first photo of Mars delivered by the UAE’s Hope probe …

The first image of Mars snapped by the Al Amal, or Hope, spacecraft. The photo was captured at a distance of 15,500 miles from the planet's surface.

Mars is the place to be this month. Two spacecraft have already entered orbit around the red planet: China's Tianwen-1 got there on Feb. 10. And a day earlier, the United Arab Emirates made history by sliding the Al Amal (Hope) spacecraft into Martian orbit and becoming just the fifth country to reach Earth's dusty, barren neighbor.

The first-ever Arab interplanetary mission has snapped a couple of images of Mars during its journey so far, but nothing quite like what it delivered early Sunday. From a distance of about 15,500 miles (25,000 kilometers), the probe's camera -- officially known as the Emirates eXploration Imager (EXI) -- captured a picturesque view of Mars as a yellowed semicircle against the black curtain of space.

Some of Mars' most famous features are visible in the image. Olympus Mons, the biggest volcano in the solar system, peeks out at the terminator, where the sunlight wanes, while the three volcanoes of the Tharsis Montes dazzle under a mostly dust-free sky.

Olympus Mons is barely visible at the terminator, where night meets day. It's circled here, in red.

The picture was shared in a tweet by Sheikh Mohamed bin Zayed Al Nahyan, de facto ruler of the UAE.

"The transmission of the Hope Probe's first image of Mars is a defining moment in our history and marks the UAE joining advanced nations involved in space exploration," he tweeted Sunday.

The Al Amal mission hopes to provide the most complete picture of the Martian atmosphere yet. Its suite of instruments includes the EXI camera and both an ultraviolet and an infrared spectrometer. Detailed observations will allow researchers to determine how particles escape from the gravity of Mars and reveal the mechanisms of global circulation in the lower atmosphere.

You can find previous images from the Hope probe at the Emirates Mars Mission website.

NASA Mars mission 2021: Perseverance rover landing date, time

NASA's Mars Perseverance rover is on the cusp of landing on the Red Planet after a seven-month journey. Here's what happens next. USA TODAY

On the surface, Mars presents itself as a world on the verge of inhospitality.

Average temperatures that hover around negative 81 degrees Fahrenheit. A thin, carbon dioxide-rich atmosphere sometimes rendered opaque by planet-wide dust storms that can even be seen from Earth. Gravity that's just one-third of what humans have evolved to tolerate.

But the Red Planets features tell a different story.

Looking at photos captured by satellites in orbit, it doesn't take much imagining to see Mars was likely once home to rivers of running water and enormous crater-lakes. With the right conditions, perhaps this planet that gets its rusty color from iron oxide-rich rocks could once have been suitable for life, or at least life as we know it.

This dichotomy has left experts asking one of the most difficult-to-answer questions in science today: What happened to Mars, and can the same thing happen here on Earth?

"We know that Mars had a bad past," said Thomas Zurbuchen, associate administrator of NASA's Science Mission Directorate. "We used our Spirit and Opportunity rovers (2003) to follow the water in search of answers as to why this once ocean world is now dry and desolate. Following those missions came our Curiosity rover, which landed on Mars in 2012 and is still operating."

Now it's time for NASA's next robotic explorer, Perseverance, to follow in the dusty tracks of its predecessors. After a 293-million-mile trek across the expanse since its July 2020 launch from Cape Canaveral Space Force Station, the upgraded rover is slated to land on the Red Planet at 3:55 p.m. EST Thursday.

Its target: Jezero Crater, a harsh surface feature that was likely once a deep lake fed by rivers of running water.

"Perseverance is our robotic astrobiologist, and it will be the first rover NASA has sent to Mars with the explicit goal of searching for signs of ancient life," Zurbuchen said.

But before it can begin roving its targeted landing site at a breakneck 0.1 mph, Perseverance has to pull off a series of risky landing maneuvers all by itself.

In this animation, NASA's Perseverance rover is seen during its "Seven Minutes of Terror," or the entry, descent, and landing process. Using a unique "Sky Crane Maneuver," the 10-foot rover will land on Mars on Feb. 18, 2021. Florida Today

Getting to Mars with help from a United Launch Alliance Atlas V rocket and interplanetary cruise stage was one thing, but slowing down from thousands of miles an hour to a soft 1.7 mph at landing is another.

This seven-minute process, from 3:48 p.m. to 3:55 p.m., is known as the "seven minutes of terror." Because signals take 11 minutes to reach Earth, human input in the event of a mishap is impossible. Perseverance is on her own.
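
The 11-minute figure is just the one-way light travel time over the Earth-Mars distance at the time of landing. As a rough check,

\[ d \approx c \times t \approx (3.0\times 10^{5}\ \text{km/s}) \times (11 \times 60\ \text{s}) \approx 2.0\times 10^{8}\ \text{km} \approx 123\ \text{million miles}, \]

roughly the separation between the two planets in February 2021.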

The nail-biting entry is made even more tense by the fact that once mission managers at NASA's Jet Propulsion Laboratory in California get the first confirmation of entry, Perseverance will have already landed or crashed in real-time. The unavoidable signal delay, however, is a short hurdle for teams that have been waiting for this moment for a decade.

"Landing on Mars is really all about finding a way to stop and land in a safe place," said Al Chen, NASA's entry, descent, and landing lead at the Jet Propulsion Laboratory.

As it approaches Mars' thin atmosphere, the heat shield affixed to the front of Perseverance's protective capsule will bear the brunt of fiery entry while also acting as an airbrake of sorts. A massive 70-foot parachute then automatically deploys, further slowing down the 2,200-pound rover.

"While coming down on the parachute, Perseverance needs to figure out where it is," Chen said. "It'll jettison the heat shield that protected us during entry, and it will use a radar and a new system we call Terrain-Relative Navigation to figure out where it is."

After the newly exposed radar and cameras have a lock on Perseverance's location and landing prospects, it's time for the riskiest part: dropping out of the protective capsule with a web of machinery and eight retrorockets, which begin firing to slow the rover down.

About 65 feet from the surface, the still-firing retrorockets slow Perseverance's approach to 1.7 mph. The descent stage then kicks off the Sky Crane Maneuver, which uses strong nylon cords to slowly lower the rover to the ground. After confirmation of touchdown, the sky crane severs the cords and flies off to put distance between itself and the rover.

Perseverance is expected to begin transmitting photos of its new surroundings immediately after landing.

NASA's Perseverance rover is seen on Mars in this rendering by the agency. The 10-foot robotic vehicle will touch down on the surface on Feb. 18, 2021, after a series of complicated maneuvers.(Photo: NASA)

NASA's 10-foot-long, $2.4 billion Perseverance rover is equipped with suites of technologies designed to aid in the hunt for life.

Sixteen engineering and science cameras support safe navigation and help observe the surface, from extreme close-ups to far away. Some of these are part of larger scientific systems, like an ultraviolet spectrometer and another that uses X-rays.

A 7-foot arm attached to the front of Perseverance includes a powerful drill that can pull core samples from rocks that interest scientists. The samples can then be sealed and stored in tubes inside the rover's main body for more analysis later.

Perseverance also has the capability to remove the stored samples and leave them in designated spots around Jezero Crater. A future mission yet to be scheduled could one day land on the Red Planet, pick up the tubes and then fly off to return them to scientists on Earth.

Unlike older Mars rovers, Perseverance and its sibling Curiosity rely on nuclear power. Essentially carrying a nuclear battery, both rovers use energy generated by the decay of plutonium to charge onboard lithium batteries during dormancy. While the Department of Energy-provided hardware can power Perseverance for up to 14 years, the rover's mission is currently set to last at least one Martian year (two Earth years).

Perseverance even has a friend hitching a ride for this mission: Ingenuity. This 4-pound drone will attempt the first-ever powered flight on another planet during a roughly monthlong window. Though Ingenuity has no science hardware, two cameras will help steer the drone and teach NASA engineers how to fly on a world with an atmosphere just 1% as dense as Earth's.

But why look for life, past or present, in the first place? For Manasvi Lingam, a professor of astrobiology, aerospace, physics, and space sciences at Florida Tech, it's the ultimate journey.

"Any sign of life will of course be one of the most momentous discoveries in the entire history of humanity," Lingam said. "Even if it is extinct life, just knowing that there was something out there is certainly Nobel Prize-level."

Lingam acknowledges that getting even a hint of an answer usually leads to more questions.

Would finding life on Mars inform our perception of how common it is elsewhere in the universe? If life on Mars and Earth appear to be similar, could the millennia-old theory of panspermia (that life can spread via asteroids or comets, for example) see a resurgence? Or what if the discovery is so foreign that it doesn't appear to rely on the building blocks of life we're used to, like DNA and RNA?

"All of these questions are really fascinating," Lingam said. "If you find something very alien, that's great and we can try to understand what it is."

"It might even have some practical implications, because humans learn from biology all the time. That's in fact how we've made a lot of drugs: we looked at actual organisms and borrowed ideas from them," he said.


No follow-up rovers are solidly planned after Perseverance. Nicknamed "Percy" by her JPL mission managers, she's on her own in Jezero Crater for the foreseeable future.

But what about dropping sample tubes for pickup by a separate mission? That's still in the works at NASA.

Lingam said a sample return mission has two advantages for scientists: the breadth and number of instruments available on Earth vastly outclass what's available on Perseverance; and despite technological advances, having a human eye look at samples is still the preferred method.

For his research, Lingam would like to see more missions to Venus, a planet that hasn't seen enough investigation surrounding its potential for life, he said. Missions like Perseverance, combined with upcoming investigations of other parts of our solar system, will ultimately provide a more holistic view of the history of life.

"There's definitely part of me that wants to believe there's life in the oceans of Europa, that there was life on Mars, and potentially even in the clouds of Venus," Lingam said. "It's always more tempting to think of a cosmos that is filled with all kinds of weird and wonderful life, because that would mean we're not alone."

"One should not allow that belief to cloud one's mind about the data and the scientific method. But I do hope that there is life out there."

Contact Emre Kelly on Twitter at @EmreKelly.


The Mars Relay Network Connects Us to NASA’s Martian Explorers – Jet Propulsion Laboratory

Science Operations

Of course, communications don't stop after landing. That's when the complicated task of sending commands to Perseverance and receiving the rover's huge science data output will begin.

During its mission, the rover will have all of the orbiters in the Mars Relay Network for support, including NASA's MRO, MAVEN and Odyssey, and ESA's TGO, which has been playing a key role in the network for the past few years. Even ESA's Mars Express orbiter will be available for emergency communications should the need arise. While the NASA orbiters communicate exclusively with the DSN, the ESA orbiters also communicate via the European Space Tracking network and ground stations located in Russia.

Although the Mars Relay Network has expanded to include more spacecraft and more international partners, with every new surface mission comes added complexity when scheduling the relay sessions for each orbiter flyover.

"Curiosity and InSight are near enough to each other on Mars that they are almost always visible to the orbiters at the same time when they fly over. Perseverance will land far enough away that it can't simultaneously be seen by MRO, TGO, and Odyssey, but sometimes MAVEN, which has a larger orbit, will be able to see all three vehicles at the same time," added Gladden. "Since we use the same set of frequencies when communicating with all three of them, we have to carefully schedule when each orbiter talks to each lander. We've gotten good at this over the last 18 years as rovers and landers have come and gone, including collaborating with ESA, and we're excited to see the Mars Relay Network set new throughput records as it returns Perseverance's huge data sets."
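A toy illustration of the scheduling constraint Gladden describes is sketched below: relay sessions that would double-book either an orbiter or a surface asset during overlapping flyovers are skipped. This is a minimal greedy example with invented time windows, not JPL's actual relay-planning software.

```python
# Minimal sketch of conflict-free relay-session selection. A "session" pairs an
# orbiter with a surface asset over a time window; because the rovers and
# landers share frequencies, we assume each orbiter and each surface asset can
# be in only one session at a time.
from typing import NamedTuple

class Session(NamedTuple):
    orbiter: str
    lander: str
    start: int   # minutes from an arbitrary epoch
    end: int

def overlaps(a: Session, b: Session) -> bool:
    return a.start < b.end and b.start < a.end

def schedule(candidates: list) -> list:
    """Greedy: take sessions in order of start time, skipping any that
    would double-book an orbiter or a lander."""
    chosen = []
    for s in sorted(candidates, key=lambda c: c.start):
        conflict = any(
            overlaps(s, c) and (s.orbiter == c.orbiter or s.lander == c.lander)
            for c in chosen
        )
        if not conflict:
            chosen.append(s)
    return chosen

flyovers = [
    Session("MRO", "Curiosity", 0, 8),
    Session("MRO", "InSight", 5, 12),        # same orbiter, overlapping window
    Session("MAVEN", "Perseverance", 3, 15),
    Session("Odyssey", "Curiosity", 9, 16),
]
for s in schedule(flyovers):
    print(s)
```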

Ultimately, this communications endeavor connecting Earth and Mars will enable us to see high-resolution images (and hear the first sounds) captured by Perseverance, and scientists will be able to further our knowledge about the Red Planet's ancient geology and fascinating astrobiological potential.

More About Perseverance

A key objective of Perseverance's mission on Mars is astrobiology, including the search for signs of ancient microbial life. The rover will characterize the planet's geology and past climate, pave the way for human exploration of the Red Planet, and be the first mission to collect and cache Martian rock and regolith.

Subsequent missions, currently under consideration by NASA in cooperation with ESA, would send spacecraft to Mars to collect these sealed samples from the surface and return them to Earth for in-depth analysis.

The Mars 2020 mission is part of a larger NASA initiative that includes missions to the Moon as a way to prepare for human exploration of the Red Planet. Charged with returning astronauts to the Moon by 2024, NASA will establish a sustained human presence on and around the Moon by 2028 through the agency's Artemis lunar exploration plans.

JPL, which is managed for NASA by Caltech in Pasadena, California, built and manages operations of the Perseverance rover.

For more about Perseverance:

mars.nasa.gov/mars2020/

nasa.gov/perseverance

For more information about NASA's Mars missions:

https://www.nasa.gov/mars

More About DSN

The Deep Space Network is managed by JPL for NASA's Space Communications and Navigation (SCaN) program, which is located at NASA's headquarters within the Human Exploration and Operations Mission Directorate.

More About the Mars Relay Network

The Mars Relay Network is part of the Mars Exploration Program, which is managed at JPL on behalf of NASA's Planetary Science Division within its Science Mission Directorate.


Have You Played… Sayonara Wild Hearts? – Rock Paper Shotgun

Have You Played? is an endless stream of game retrospectives. One a day, every day, perhaps for all time.

I've always found that Ben & Jerry's Cookie Dough helps heal a broken heart, but the main character from Sayonara Wild Hearts prefers battling a three-headed wolf robot that spits out spiky yoyos. Whatever works. And, to be fair, it's better that developer Simogo went down this route. A game about a 30-year-old, head first in a bucket of ice cream, probably doesn't have as much appeal to the masses.

Although, auto-runners weren't exactly in vogue when this was released. What made this one stand out from the pack was how complete it was as a package. Everyone focuses on the music, and I get it: the 23-track offering is full of wonderful, upbeat pop, wispy shoegazing, and driving trance. Songs flow from one to the next, with any harsh deviations feeling like a deliberate jolt to the listener.

But I've always seen Sayonara Wild Hearts as more of an elaborate stage show than a tightly-written 60-minute album. The seafoam greens, the trippy blues, the electric pinks: the chosen palette works so well with Simogo's penchant for twirling shots and smash zooms. Yes, the music is a huge part of the presentation, but it is exactly that: one part. Everything comes together to create what is a joyous overload on the senses. And you ride a turquoise vomit wave at one point, too, which is also good fun.

For all intents and purposes, this is a score attack game, where you can revisit levels to collect more shiny objects in order to better your score. But I feel a bit wrong if I cherry-pick a stage to play. Like any good musical performance, Sayonara Wild Hearts is one that should always be experienced in full. From start to finish.


Pawri Ho Rahi Hai: This Little Girl’s Version of Viral ‘Pawri’ Video is The Cutest Thing Ever | Check Hilario – India.com

New Delhi: Unless you have been living under a rock, you must have heard of the viral video clip of a social media influencer from Pakistan, Dananeer Mobeen. In the video, the 19-year-old social media influencer can be seen having fun with her friends and saying, "Yeh hamari car hei aur yeh hum hein, aur yeh hamari pawri horahi hei" (This is our car, this is us, and this is our party).

Not only is the video being shared extensively, netizens have gone all gaga over it and have also created their own hilarious versions with the hashtag #pawrihoraihai on Twitter. In one such awwdorable video, a little girl can be heard saying, "Ye hamara bed hai, ye hamara takiya hai aur yahan sone ki tayyari ho rahi hai" (This is our bed, this is our pillow, and here we are getting ready to sleep).


In yet another hilarious video, a little boy says, "Ye mai hoon, ye mera washroom hai aur yahan potty ho rahi hai" (This is me, this is my washroom, and here a potty is happening).

The rib-tickling viral meme has also been remixed by none other than music composer Yashraj Mukhate, who earned fame for his viral tracks Rasode Mein Kaun Tha, Biggini Shoot and Tuadda Kutta Tommy. He gave the clip a musical twist by adding music and beats to the video of the pawri girl. Sharing his creation on Instagram, Mukhate captioned the video: "Aajse me party nahi karunga. Sirf pawri karunga. Kyuki party karneme wo mazaa nahi jo pawri karneme hai" (From today I will not party, I will only pawri, because there is more fun in pawri than in a party).


The viral video was posted as a joke by Dananeer while she was vacationing with her friends in a hilly area of Pakistan. She had shared the video with her followers with the caption: "No one: When borgors visit northern areas: yeh hamari pawri horai haai. This is the gold content you guys signed up for. 10/10 meme material" (sic).

Dananeer, in an Instagram live video, said that she is overwhelmed by the response she got and she is happy that people are enjoying the video so much.


The Evolution of Automation Technologies – Security Boulevard

Automation is making waves in many industries worldwide and encompasses a wide range of technologies including endpoint management, robotic process automation (RPA), artificial intelligence (AI) and machine learning (ML).

Automation allows organizations to get more things done at a lower cost. It increases the efficiency of individuals, teams and companies as a whole, and is likely to revolutionize the way organizations function.

Let's take a look at a simple three-level hierarchy of automation technologies.

The foundational layer of all automation technologies is the automation of IT processes. This involves the automation of routine IT tasks and workflows such as server maintenance, software patch management and software updates. It also includes auto-remediation of IT incidents and associated service tickets.

IT processes are automated by running scripts or what we call Agent Procedures. With Kaseya VSA, our unified remote monitoring and management (URMM) software, you can customize pre-built automation scripts or create your own in the Agent Procedure Editor.

Auto-remediation of incidents involves first setting up monitoring on the endpoint and then automatically executing the script(s) when an alert occurs. This process can involve both the endpoint management tool and your service desk solution where you manage tickets. Tickets can automatically be created by the endpoint management tool. Then, those tickets can be resolved using automated workflows in the service desk to run the scripts.
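A minimal sketch of that monitor-ticket-remediate loop is shown below. The alert fields and helper functions are hypothetical placeholders, not Kaseya VSA's Agent Procedure API:

```python
# Generic sketch of the monitor -> ticket -> remediation-script flow described
# above. Remediation actions are stand-in Python callables for illustration.
def restart_service(endpoint: str) -> bool:
    print(f"Restarting service on {endpoint}")            # placeholder action
    return True

def clear_temp_files(endpoint: str) -> bool:
    print(f"Clearing temp files on {endpoint}")            # placeholder action
    return True

REMEDIATIONS = {"service_down": restart_service, "disk_space_low": clear_temp_files}

def create_ticket(alert: dict) -> int:
    print(f"Ticket opened: {alert['type']} on {alert['endpoint']}")
    return 1001                                            # placeholder ticket id

def handle_alert(alert: dict) -> None:
    ticket_id = create_ticket(alert)
    fix = REMEDIATIONS.get(alert["type"])
    if fix and fix(alert["endpoint"]):
        print(f"Ticket {ticket_id} auto-resolved")

handle_alert({"type": "disk_space_low", "endpoint": "server-42"})
```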

IT process automation brings a number of benefits, from lower costs and fewer manual errors to faster incident resolution.

The major challenge with automating IT processes is creating the scripts that automate the tasks. With the Kaseya Automation Exchange, you can get started quickly and easily. Kaseya Automation Exchange offers more than 800 pre-built automation scripts, monitor sets and reports that can be used to reduce the time it takes for you to implement automation.

Research firm Gartner predicted that global robotic process automation (RPA) market revenue will reach $1.89 billion in 2021, an increase of 19.5 percent from 2020.

RPA is the business process layer. In RPA, simple, repeatable rules are followed by bots or virtual systems to automate manual computing steps or simple business processes. For example, in the insurance industry, RPA allows companies to automate the processing of claims without manual intervention. In any organization, the process of onboarding/offboarding of employees can be done with RPA.

Another use case for RPA is in the hiring process. It enables human resource executives to automate some or all of the process. This means a job candidate can be hired or rejected automatically based on a certain set of rules.
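In code, that kind of rules-driven screening can be as small as a few conditionals. The fields and thresholds below are invented for illustration only, not a recommended hiring policy:

```python
# Toy rule-based screening of the sort an RPA bot might apply. All fields and
# cutoffs are hypothetical.
def screen_candidate(candidate: dict) -> str:
    if candidate["years_experience"] < 2:
        return "reject"
    if not candidate["required_certification"]:
        return "reject"
    if candidate["assessment_score"] >= 80:
        return "advance_to_interview"
    return "manual_review"

print(screen_candidate(
    {"years_experience": 5, "required_certification": True, "assessment_score": 85}
))  # advance_to_interview
```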

Hyperautomation brings together artificial intelligence/machine learning (AI/ML) tools with RPA to enable the automation of complex business processes. Hyperautomation can lead to real digital transformation of the business. As one of the top 10 strategic trends of 2021 touted by Gartner, the idea of hyperautomation is to automate anything that can be automated.

Implementing hyperautomation requires streamlining entire processes in an organization, getting rid of legacy applications and enforcing lean, optimized and interconnected processes.

One of the hyperautomation use cases is automating all banking processes. Right from regulatory reporting, marketing, bank servicing, payment and lending operations to customer support, all functions can be automated for a swift banking experience.

The major challenge with hyperautomation is actually automating complex processes. Grappling with process complexity can cause frustration and increase implementation costs.

Create a roadmap for the processes to be automated. Start with the smaller ones and expand to the most complicated ones.

For automation to be highly beneficial, companies must choose which activities to automate, the level of automation to be implemented and the technologies to adopt based on their business needs. Having a strong foundation in IT process automation is the best place to start.

Learn about the 7 Processes to Automate to Improve Productivity and Reduce IT Costs in our checklist.


*** This is a Security Bloggers Network syndicated blog from Blog Kaseya authored by John Emmitt. Read the original post at: https://www.kaseya.com/blog/2021/02/16/the-evolution-of-automation-technologies/


Automation in the Age of COVID-19 – Advanced Manufacturing

Collaborative robots are easier to implement in part because they can be trained by manually dragging the robot head where you want it to go.

The longstanding trend toward manufacturing automation has understandably been accelerated by the COVID-19 pandemic. Roughly a year into the crisis, it's a good time to ask about longer-term impacts. Has the pandemic pushed automation into new areas? Have suppliers made automation more flexible? Easier to implement?

By their nature, collaborative robots (cobots) are easier to implement than their traditional cousins. Because cobots are designed to share a workspace with their human counterparts as a matter of routine, they often don't require guarding and can fit into relatively small, occupied spaces (when confirmed by a safety risk assessment). "They're also easier to set up," said Dick Motley, director of FANUC America Corp.'s authorized system integrators group in Rochester Hills, Mich. He explained that a user can, in part, train FANUC's CRX series cobots by literally grabbing the arm and leading it around. "This makes setup of a simple application really straightforward and intuitive. You've got robotic automation that can be deployed quickly." He added that there's a growing ecosystem of peripheral suppliers for grippers and pedestals for the robot to sit on, and different provisions to easily address the utilities that go out to the robot's end-of-arm tool.

The bad news is that cobots are slow. Even though they're built with sensors that limit the force they'll impart if they come into contact with something, "it's pretty tough to meet safety standards at high speed," explained Motley. "Because regardless of how sensitive your contact sensing system is, you're trying to defy physics if you've got something moving really fast and then need to immediately bring it to rest." So although you might think cobots would be taking over the world of automation (and COVID-19 era sales have been explosive), their applicability has limits.

Motley referred to a relatively low-speed palletizing operation at the end of a customer's manufacturing lines as a good fit for a cobot. "They were making two to four cases of product a minute, with an incredible density of conveyor lines feeding these products down to the end to be palletized. They didn't have the physical space to do a traditional robot implementation with the stopping distance calculations and all the things that go into a traditional robot cell. Their only automation option was to put in a cobot. If you go slow enough, a cardboard box is probably not going to hurt you."

This is not to say that traditional robots can't operate in proximity to people, or that they're very difficult to set up. Nor is it to say that slow-moving cobots can operate without guarding if they're handling something sharp or otherwise dangerous. To address these concerns, FANUC and other OEMs have systems that restrict either the motion range or the speed of the robot to allow for intermittent interaction with a human, explained Motley. FANUC calls its safety architecture Dual Check Safety, or DCS. "Maybe you want to establish a keep-out zone on one side of the robot while an operator loads parts or something that the robot is going to retrieve. You enable a software constraint to keep the robot from going there, typically reinforced by a light curtain or a safety mat or a scanner. But then, once the person leaves that loading zone, the robot can be right back up to full speed."
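A simplified sketch of that idea, a software keep-out zone plus a speed cap while a person is nearby, might look like the following. The zone geometry and speed values are assumptions, and this is not FANUC's DCS interface:

```python
# Illustrative keep-out zone and speed cap in the spirit of the safety
# behavior described above; coordinates are in assumed metres.
from dataclasses import dataclass

@dataclass
class Zone:
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

KEEP_OUT = Zone(0.0, 0.5, 0.0, 0.5)      # hypothetical operator loading area
FULL_SPEED, REDUCED_SPEED = 1.0, 0.25    # fractions of maximum speed (assumed)

def allowed_speed(target_x: float, target_y: float, operator_present: bool) -> float:
    if KEEP_OUT.contains(target_x, target_y):
        raise ValueError("Target lies inside the keep-out zone")
    return REDUCED_SPEED if operator_present else FULL_SPEED

print(allowed_speed(1.2, 0.8, operator_present=True))   # 0.25
print(allowed_speed(1.2, 0.8, operator_present=False))  # 1.0
```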

Motley also pointed out that adjustments to DCS can be made in FANUC's Roboguide offline programming and simulation package. They can be done with a laptop attached to the robot, or with one of FANUC's user interface devices, whether it's the iPendant or the new Teach Pendant tablet. The new tablet-based interface is particularly easy to use, said Motley. "It's a whole new programming style that's not even close to computer language. It's a drag-and-drop icon timeline."

At Promess Inc., Brighton, Mich., Director of Application John Lytle reported that the pandemic has accelerated his company's effort to make its Electric Press Workstations perform additional functions. Promess had already made it easy to add its units to a production line by making them compact and self-contained, with an integrated light curtain to prevent injury to the operator. "They automate assembly with sensing that determines if it made a good part, and you can put them wherever you want and adapt to things as they change in the plant." Promess has added ancillary functions and enhanced the information the units are able to communicate to the rest of the factory. For example, in addition to pressing two parts together and confirming that the measured force and travel were as expected and the assembly is therefore good, the workstation might also take dimensional measurements and pass that information along. This eliminates the need for a separate gaging station.

Performing such measurements requires cameras and/or lasers, and Promess integrates the technology such that the end user still gets a stand-alone, plug-and-play unit. As Lytle explained, "We're focused on making it simple for the end user, so it's not like a big science project requiring a camera technician, a PLC expert, a high-level integrator, and so on. We have a software team of 20 people here, working every day to make it easy. So when the customer gets a Work Station, it's already set up. They're just entering parameters." He added that cameras can do more than measure parts. They can also be used for part orientation. This enables more complex arrangements, like being able to automatically pick from multiple parts on a pallet. Plus, Promess has integrated cobots for automatic part load/unload. The end result, as Lytle sees it, is a multi-function workstation that simplifies the transfer line and contributes to social distancing, while also sending data to the other equipment in the plant via the internet or intranet to make a decision about what to do.

According to Joe Chudy, general manager of ABB Robotics USA, Auburn Hills, Mich., all industries, in large and small installations, are looking for ways to remove people from their processes. The biggest increase in demand, a boost that can be tied directly to the pandemic, is in medical manufacturing, packaging/logistics, and food processing. The latter two are particularly challenging, given their need for extreme speed in the face of inconsistent inputs. As Chudy put it, "it's no secret that Amazon can't hire enough people and can't automate quickly enough. The same is true of Walmart and everyone else in that space. But the quantity and diversity of the items you have to pick up and sort quickly forces you to implement some form of AI (artificial intelligence)."

Chudy said the food processing industry is also hard pressed and driving automation innovation. He described meat cutting and packing as an inherently miserable environment for humans to work in, with the pandemic only adding to the woes. And the protein (as the industry refers to its product) varies from piece to piece. "We asked ourselves if we could debone a chicken. What could we do with the wings? Things like that," recounted Chudy.

Given the inconsistencies in the form of the protein, meeting this challenge required both a smart camera system and AI to orient the robot grippers. There's also a quick payback on limiting the protein lost in making the cut, added Chudy. "So the vision technology you use, with water knives or other techniques to cut this material as close as you can, is a big deal. Learning how the protein is presented to the robot, where the vision system should go, and how you should orient it, all factors into the application."

Long term, Chudy thinks, advances in AI driven by these challenges will also be applied to the metalcutting world. For example, random bin picking capability is improving due to improvements in AI, he observed, as are de-palletizing and some logistics tasks. "Acquisition times are really what matters in random bin picking. How quickly can I locate that part? How fast can I go get it? It's the same in the logistics market." Chudy believes speed is important, and processing the volume of image data required to pick the parts has until now limited those applications. "[Now] we're seeing those applications flourishing in this market as the technology grows and becomes stronger and less maintenance-intense," Chudy said.

He said fixtureless welding and smart welding are also being studied, in which "you're presenting the piece and part to a camera and it's deciding how to weld it, and the tolerances, and measuring the gaps, and doing all the things a traditional programmer would do. It works well in the lab, but we have not implemented it in production yet."

Vision technology is central to these systems. And as Chudy put it, that used to make a lot of guys in the metal industry nervous, because vision systems previously required specialized technicians to install and maintain. Now these newer applications are able to self-monitor.

Motley also spoke of vision technology as a tremendous enabler that has gotten easier to implement. "When you give a robot eyes to be able to adapt to the environment, that enables all kinds of things in terms of reduced cost for setting an application up, reduced changeover time, fewer changed parts, better flexibility, higher reliability, and in-process inspection." He said FANUC's new 3D vision technology, called 3DV, offers a versatile, compact sensor that can easily be built into end-of-arm tooling on a robot. The robot literally carries its eyes around with it rather than relying on just a stationary sensor, although stationary sensors still have their place. And a 3D point cloud provides the robot with much more information than a flat 2D image, he added. With more information about what's in front of it, the robot's control is better able to decide what to do next.

Handling packaging and protein is a lot different than a typical metalworking application. So, as Dave Suica, president of Fastems LLC, West Chester, Ohio, explained, gripper technology is changing. "We have started going to servo-controlled grippers. A lot of the parts are deformable. A regular power gripper can apply too much force, more than what's needed to overcome the friction factor to lift the part. With servo control, you go to a position, and then it has an override for how much pressure it applies." More generally, Suica said people are tending toward higher-end automation that doesn't require manually changing grippers in order to switch jobs. "With smart automation and automatic gripper changes, and computer control versus a PLC, you can make it dynamic. You can switch from part A to part B to part C without any people there. We have systems that run for 72 hours autonomously."
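The position-then-force behavior Suica describes can be sketched as a simple control loop. The step size, sensor model and numbers below are made up for illustration:

```python
# Toy sketch of a servo gripper that closes toward a target width but stops
# squeezing once a force limit is reached; the force sensor is simulated.
def close_gripper(target_mm: float, max_force_n: float, read_force) -> float:
    """Close in small steps until the target width or the force limit is hit."""
    position = 100.0                        # jaws fully open, in assumed mm
    while position > target_mm:
        position -= 0.5                     # small servo step
        if read_force(position) >= max_force_n:
            break                           # deformable part: stop squeezing
    return position

# Fake sensor: force ramps up after the jaws contact the part at 42 mm.
fake_sensor = lambda pos: max(0.0, (42.0 - pos) * 10.0)
print(close_gripper(target_mm=30.0, max_force_n=20.0, read_force=fake_sensor))  # 40.0
```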

While Fastems is best known for large flexible manufacturing systems (FMS), it handles the entire range of robot and pallet handling configurations. Suica said the pandemic has caused some companies to buy and automate single machines just to quickly re-shore a certain product. But whether Fastems delivers a large FMS or a robot for a single machine, it still has a full line of manufacturing management software, said Suica. "It still has scheduling. It still has the capacities to run different parts at different times." Fastems prides itself on being able to integrate with a company's ERP system for predictive and dynamic scheduling. "So, as the ERP system changes the requirements, we automatically change the sequence of what part gets made when, such that you maintain your flow without building inventory."

Returning to the packaging and assembly challenge, Motley said FANUC's automotive roots have served it well. "Our ability to dispense a bead of sealant to seal up a car body prior to painting, and coordinating that very tightly with robot motions such that when you go fast, you don't get a thin spot in the sealant bead, and when you go slow around a corner you don't get a big, thick pile-up, contributed directly to being able to dispense adhesive in assembly operations."

The increase in automation and social distancing driven by the pandemic has in turn highlighted the need for remote monitoring capabilities. Remote monitoring isn't new, and virtually all control, automation, and machine manufacturers offer such solutions. But Fagor Automation USA, Elk Grove Village, Illinois, recently went a step further by accelerating the release of an HTML5-based control architecture. As General Manager for North America Harsh Bibra explained, HTML5 is not only browser based, it's consistent across multiple browsers. "One person might be using an iPhone. Another might have a Google device. A third person could be using Windows 10 on a laptop. With an HTML5-based interface on the machine, they would all see the same thing in a similar way. HTML5 makes your machine platform independent."

HTML5 provides better mobile access to business intelligence too, said Bibra, along with geolocation. With geolocation, he pointed out, you can limit remote connections to devices that are in specific locations, thereby improving security. For example, you could limit a remote connection to an employee's house, but not elsewhere, to prevent access if the employee lost his phone. What's more, added Bibra, HTML5 is without limits. "Depending on the power of the logic you write, or the power of the human-machine interface you create, it can provide the Nth degree of freedom in the sense that the person on the other end can have access to anything." That means you could empower a remote connection not just to monitor activity, but also to enter machine commands, like cycle start or cycle stop. In other words, remote control is the ultimate in social distancing.
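Bibra's geolocation example boils down to a distance check before a session is allowed. The coordinates, the radius and the way the client's location is obtained are all assumptions in this sketch:

```python
# Sketch of geofenced remote access: only allow a remote-control session when
# the client's reported coordinates fall within an approved radius.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

APPROVED_SITE = (42.03, -87.93)   # hypothetical plant (or employee home) location
ALLOWED_RADIUS_KM = 25            # assumed allowance

def remote_access_allowed(client_lat, client_lon) -> bool:
    return haversine_km(client_lat, client_lon, *APPROVED_SITE) <= ALLOWED_RADIUS_KM

print(remote_access_allowed(42.10, -87.90))   # True: within the approved area
print(remote_access_allowed(34.05, -118.24))  # False: a lost phone in another city
```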


Analysts need advanced automation tools to reduce fear of missing incidents – Help Net Security

Security analysts are becoming less productive due to widespread alert fatigue resulting in ignored alerts, increased stress, and fear of missing incidents, according to an IDC survey of 350 internal and MSSP security analysts and managers.

To improve job satisfaction and effectiveness, the report also uncovered the top activities analysts felt would be best to automate to better secure their Security Operations Centers (SOCs).

"Security analysts are being overwhelmed by a flood of false positive alerts from disparate solutions while growing increasingly concerned they may miss a true threat," said Chris Triolo, VP of Customer Success at FireEye.

"To solve these challenges, analysts are asking for advanced automation tools, like Extended Detection and Response, which can help reduce the fear of missing incidents while strengthening their SOC's cybersecurity posture."

False positives create alert fatigue: While analysts and IT security managers receive thousands of alerts every day, respondents indicated that 45 percent of the alerts are false positives, making in-house analysts' jobs less efficient and slowing workflow processes. To manage alert overload in the SOC, 35 percent of this group said that they ignore alerts.

MSSPs spend even more time sifting through false positives, and they ignore more alerts: MSSP analysts indicated that 53 percent of the alerts they receive are false positives. Meanwhile, 44 percent of analysts at managed service providers said they ignore alerts when their queue gets too full, which could lead to a breach involving multiple clients.

As analysts experience more challenges managing alerts manually, their worry of missing an incident also increases: Three in four analysts are worried about missing incidents, and one in four worry a lot about missing incidents.

Yet, this FOMI is plaguing security managers even more than their analysts: More than 6 percent of security managers reported losing sleep due to fear of missing incidents.

Less than half of enterprise security teams are currently using tools to automate SOC activities: Respondents shared the top tools they use to investigate alerts, showing that less than half use artificial intelligence and machine learning technologies (43 percent), security orchestration automation and response (SOAR) tools (46 percent), security information and event management (SIEM) software (45 percent), threat hunting (45 percent), and other security functions.

In addition, only two in five analysts use artificial intelligence and machine learning technologies alongside other tools.

To manage their SOCs, security teams need advanced automated solutions to reduce alert fatigue and improve success by focusing on more high-skilled tasks like threat hunting and cyber investigations: When ranking the activities that are best to automate, threat detection was the highest (18 percent) on the analysts' wish list, followed by threat intelligence (13 percent) and incident triage (9 percent).


Notable Health seeks to improve COVID-19 vaccine administration through intelligent automation – TechCrunch

Efficient and cost-effective vaccine distribution remains one of the biggest challenges of 2021, so it's no surprise that startup Notable Health wants to use its automation platform to help. Initially started to address the nearly $250 billion in annual administrative costs in healthcare, Notable Health launched in 2017 to use automation to replace time-consuming, repetitive tasks in health industry admin. In early January of this year, the company announced plans to use that technology to help manage vaccine distribution.

"As a physician, I saw firsthand that with any patient encounter, there are 90 steps or touch points that need to occur," said Notable Health medical director Muthu Alagappan in an interview. "It's our hypothesis that the vast majority of those points can be automated."

Notable Health's core technology is a platform that uses robotic process automation (RPA), natural language processing (NLP) and machine learning to find eligible patients for the COVID-19 vaccine. Combined with data provided by hospital systems' electronic health records, the platform helps those qualified to receive the vaccine set up appointments and guides them to other relevant educational resources.
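Conceptually, that workflow is a filter over patient records followed by an outreach step. The sketch below uses invented record fields, an invented eligibility rule and a placeholder SMS function; it is not Notable Health's actual platform logic:

```python
# Minimal eligibility-then-outreach sketch. All fields, criteria, and the
# send_sms helper are hypothetical.
from datetime import date

def is_eligible(patient: dict, min_age: int = 65) -> bool:
    age = (date.today() - patient["birth_date"]).days // 365
    return age >= min_age and not patient["vaccinated"]

def send_sms(phone: str, message: str) -> None:
    print(f"SMS to {phone}: {message}")      # stand-in for a real SMS gateway

def outreach(patients: list) -> None:
    for p in patients:
        if is_eligible(p):
            send_sms(p["phone"], "You are eligible for a COVID-19 vaccine. "
                                 "Tap the link to pick an appointment time.")

outreach([
    {"birth_date": date(1948, 3, 2), "vaccinated": False, "phone": "555-0101"},
    {"birth_date": date(1990, 7, 9), "vaccinated": False, "phone": "555-0102"},
])
```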

"By leveraging intelligent automation to identify, outreach, educate and triage patients, health systems can develop efficient and equitable vaccine distribution workflows," said Notable Health strategic advisor and Biden transition COVID-19 Advisory Board member Dr. Ezekiel Emanuel in a press release.

Making vaccine appointments has been especially difficult for older Americans, many of whom have reportedly struggled to navigate scheduling websites. Alagappan sees that as a design problem. "Technology often gets a bad reputation, because it's hampered by the many bad technology experiences that are out there," he said.

Instead, he thinks Notable Health has kept the user in mind through a more simplified approach, asking users only for basic and easy-to-remember information through a text message link. "It's that emphasis on user-centric design that I think has allowed us to still have really good engagement rates even with older populations," he said.

While the startup's platform will likely help hospitals and health systems develop a more efficient approach to vaccinations, its use of RPA and NLP holds promise for future optimization in healthcare. Leaders of similar technology in other industries have already gone on to have multibillion-dollar valuations and continue to attract investors' interest.

Artificial intelligence is expected to grow in healthcare over the next several years, but Alagappan argues that combining it with other, more readily available intelligent technologies is also an important step toward improved care. "When we say intelligent automation, we're really referring to the marriage of two concepts: artificial intelligence, which is knowing what to do, and robotic process automation, which is knowing how to do it," he said. That dual approach is what he says allows Notable Health to bypass administrative bottlenecks in healthcare, instructing bots to carry out those tasks in an efficient and adaptable way.

So far, Notable Health has worked with several hospital systems across multiple states in using its platform for vaccine distribution and scheduling, and it is now using the platform to reach out to tens of thousands of patients per day.


Salesforce: Combining AI and automation can give us superpowers and make us more productive – ZDNet

At Dreamforce 2020, Salesforce unveiled Einstein Automate, an automation platform designed to help customers automate workflows and connect applications using low-code or no-code tools. Robotic process automation is a key part of enterprise digital transformation strategies, and I had a chance to speak with John Kucera, SVP of Product Management for Automation at Salesforce, about Einstein Automate, how the company is using AI as part of its automation solution, the kinds of tasks Salesforce customers are automating, whether he believes automation will lead to fewer jobs and when we'll be able to talk to Tableau like we do a smart speaker. The following is a transcript of our interview, edited for readability.


Bill Detwiler: Automation, process automation has been part of the digital transformation efforts and plans of companies for a while now. I'd love to get your take since you talk to Salesforce customers, and you're talking to companies about those plans, where does automation figure in the mix right now? Is it work that they're still doing? Is it work that is more prevalent now than it maybe was a few years ago? Talk a little about that.

John Kucera, SVP Product Management, Automation, Salesforce

John Kucera: Sure. And of course COVID is the big subtext. We found all of our customers, they had to do two years of digital transformation in two weeks or two months when this entire new way of working was thrust upon so many different people. And so that created this massive need and demand for automation. You had things like the paycheck protection program. It didn't exist. It wasn't on a bank's backlog. It wasn't in the IT team's group to say, "We need a mortgage or a loan application process next week for thousands of new loans."

And so this created this huge, huge need for our customers to think about how can we adapt to this new way to work? And so we're so happy and thankful that we've been able to help them with this transformation due to the power of all these tools. And so fundamentally, I think of automation as taking away the tedious, the drudgery, the inaccuracies from our processes, which frees people up to do the work that we really like to do. Nobody wants to be copying and pasting things on a form 100 times a day. And so we've actually found a surprising insight: automation has helped with employee satisfaction and retention. Not what some people thought the impact might be.


Bill Detwiler: Well, let's drill down on that because I think that's a great place to start. That's one of the fears that workers have with automation, whether it was robotics in the early days in the auto industry, or whether it's automated manufacturing or it's automation like we're talking about more in process and behind the scenes. Talk a little bit about, expand on that, how automation is really augmenting companies, existing workforces, and is it really leading to just a reduction of staff?

John Kucera: Sure. So fundamentally we people are really good at what we do. People are the only things that can build these relationships, that can delight customers and make judgment decisions. That's what we do every day. We figure out, okay, which direction should we go? Should we prioritize this? Or should we do that? I wanted to help make this customer satisfied and successful. I need to build that relationship so that we can have a mutual understanding of the problem, strategize, form my hypothesis of what to do and then work towards it.

And so automation, I see, as really a digital assistant that gives us superpowers. It takes away those tedious, those silly things that we don't want to have to do. Nobody wants to go between the two different systems and move data. They don't want to have to look between places and do those updates.

What it really does is frees us up to what we are uniquely gifted at, making hard judgment decisions and building relationships. And so we've seen that this has been a huge boon for our customers and that a lot of the fears that I've seen out there in the press are not really justified. As much as technology software people might say that it's going to improve productivity and it does. Fundamentally, it doesn't replace people, it frees them up to do these higher level tasks that also they like doing. So they're happier in their jobs and retention rates go up.


Bill Detwiler: So let's talk about Einstein Automate then. Let's get into the nitty gritty of what it is, what it does and how the latest evolution of automation within Salesforce platforms. It was announced at Dreamforce in December in 2020. So give me a rundown on Einstein Automate.

John Kucera: Sure. And we chose the name because we're bringing the best of AI, Einstein, and the best of automation. And basically they go together like peanut butter and chocolate. They're the best combination. And so fundamentally there are two sets of things that we're really, really focused on. How can we take all of those repetitive, tedious things that people are doing and have software free them up?

And to do that, you need to do this in a very cost-effective way. You need to have, basically, non-developers be able to do things that usually only developers could do, which drastically reduces the cost of it. You need to be able to integrate across all of those systems so you can have all the data in one place to make that happen. And then you need to unify these processes and move the work efficiently between people so that they can all have this concerted way to make these judgment calls, making customers and employees happy. And so that's fundamentally our founding thesis is, how can we take away the tedious? And then how can we also unify and make people smarter in the decisions they are making to make for a more harmonious way to do work?


Bill Detwiler: And so tell me about some of the functionality that it has, that it allows customers to do. How can customers integrate it into and use it in their existing either Salesforce, architecture or find new ways to use it with their existing systems?

John Kucera: Sure. And so Salesforce customers today already have Flow. Flow Builder is this fantastic tool for process automation. You can kick it off by a trigger. You can do a batch job, and then you can have it do all of this fancy stuff. You can update data in Salesforce. You can take actions in another system like Stripe for doing payments. You can then also have it do pretty complex logic. And then it also integrates with the number one integration platform in the world, MuleSoft, so that you can connect to any system. It's not just the popular web apps out there. One of the things that is MuleSoft's secret power is handling all of those arcane legacy protocols. So that if you have an old system, it can do it.

Salesforce Einstein Automate: Flow Builder

And then further we're partnering with the best in breed RPA vendors like Automation Anywhere, like Blue Prism. So that if you have disconnected systems, you can further have this integration, this automation take action in those, too. So really we're unifying it all the way across all of those. And then we're also introducing Flow Orchestrator, which is to help organize the people processes where we could have this person make the decision on, okay, the underwriter has to decide, is this an appropriate risk? Then they pass it off to the loan officer who has to make a step and review it.

We want to make that visible, easy to monitor and unified with all of the rich integrations so that people can automate end to end processes in an easy to use way through all these tools. And last but not least of course is MuleSoft Composer, which is the non-developer way to integrate across systems. And so we're making all of these work well together with the best in breed systems to have this really unique value proposition in the market.

Bill Detwiler: So let's talk about that portion of it, which is the low code, no code, clicks not code mantra that Salesforce has always had, but that we see more and more today, which is, as you pointed out, people who maybe don't have a development background or they aren't coders by trade, but they understand how the systems work. Talk about the interface maybe just to a little bit or about how those people can use Einstein Automate to accelerate or to accomplish those digital transformation plans.

John Kucera: Yeah. We want to meet all of the personas where they are. So you have, let's say a sales manager. Hey, I just want to create something simple. I click the save button. If it's a big deal, I want to alert my team. And so inside a flow, we make it so that they can do that easily. That happens by the way, literally a trillion times a month that people will click the save button and it kicks off those basic triggers that then send emails and do field updates.

Salesforce Einstein Automate: OmniScript

Then you have your ops person, your person who is comfortable with Excel. They can build a macro. They can basically do a pivot table. These people can then make more complicated types of solutions, so they can automate solutions for the entire department. They can use Omni Studio to put together a guided web form that you can integrate into a website. They can create a self-service chatbot to do customer self-service. They can do more of these guided workflows for a call center rep to do, say, password resets in a super-fast way.

And so, fundamentally, this is transforming how people build, because it's reducing the expertise required and therefore increasing productivity, and allowing all these processes that before couldn't be automated to be automated cost-effectively. But even better, these work with all those pro-code building blocks. So the developers can avoid writing a wizard. Nobody as a developer wants to write a wizard to move people through things. So they can put a best-in-class web component, like a custom UI component or an Apex piece of logic, a pro-code building block, inside of these tools and extend them without limits. And so it's really this unique value prop of helping each person do what they need to make their jobs, their department, their company, more effective.

Bill Detwiler: So you've given me a few examples. You talked about a call center. We talked about creating a flow for help desk reps to be able to do password resets. You talked about some payment options. Give me a really good example of how one of your customers is using Einstein Automate now to change one workflow, something that people who are watching and listening might identify with and say, "You know what, I could use that in my work."

John Kucera: Sure. I'll try to rattle off probably five or so. One is just creating a support ticket. So usually there's a bunch of questions in a call center that are coming in and you need to jump between places. Using a guided flow for creating a support ticket is really important. The number one request that comes in for a lot of software vendors is "reset my password." Making it easy to do that, either self-service or in a guided way, because often you can't log in to actually get to that UI, is really, really important. For salespeople, it's renewals management. I need to make sure I have a structured process so that when I'm 30 days or 60 days out from people buying more, something expiring, I can have that process in place. I can guide people through it.

And then it's also things like in government. So we had all the unemployment insurance requests. And so a lot of these governments had to basically scale about 100x in the course of a week or two. And so they used Einstein chatbots to help people get the solutions they needed really quickly. Then you also have things in finance like mortgage processes. This is a really, really hairy, gnarly process with about a hundred separate sub-processes.

We have our customers using this solution to move the work between the right people, making the customers aware of where things are at in the individual processes and more. And so there's so many use cases across so many people that one of our problems is making people aware of these are all the great things you can do through this great suite.

Bill Detwiler: So I love those examples and I think people, at a fundamental level, if they've been involved in designing a system, whether from a coding or from an admin perspective, or even from a product perspective or an owner perspective, right? Where does AI come into that process? Where does the Einstein component come into the automation part? What role does it play?

John Kucera: So fundamentally what I think about what AI does and I'm going to undersell it a little bit, is it helps make predictions and recommendations. And fundamentally it's like an assistant to people. So people are really good at the judgment decisions and building relationships. AI gives us recommendations to help make those decisions. And then if we're confident enough in those predictions or recommendations, we then say, okay, AI, great, just do that.

Salesforce Einstein Automate: AppExchange

And so some examples of that are Einstein chatbots. We have best-in-class AI that can figure out what you typed and say, what did you actually mean? You're saying, okay, I need to check on the order for this. We can figure out, okay, you needed to get the order status. Then we're going to transfer that over and then go fetch the order status, bring it back to you.

So this is really that peanut butter and chocolate story where the AI can figure out, whether it's extracting the text, whether it's looking at an image and figuring out the form fields on it, or whether it's helping make a recommendation to somebody of what to do, based on all of this other stuff that we have in Einstein Next Best Action, that we have in call coaching and more. So really the AI is helping us make those better decisions. And when we have real good confidence in it, as we do in chatbots, actually taking those actions to make these better experiences.
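Stripped to its skeleton, that "figure out the intent, then take the action" pattern looks something like the sketch below, with simple keyword matching standing in for the real language model and a placeholder order-status lookup rather than any Salesforce API:

```python
# Toy intent classification and dispatch; keywords, intents, and actions are
# all invented for illustration.
INTENT_KEYWORDS = {
    "order_status": ["order", "shipped", "tracking", "where is"],
    "reset_password": ["password", "locked out", "log in"],
}

def classify(message: str) -> str:
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "handoff_to_human"

def fetch_order_status(message: str) -> str:
    return "Your order shipped yesterday."      # placeholder lookup

ACTIONS = {"order_status": fetch_order_status,
           "reset_password": lambda m: "A reset link has been sent."}

msg = "I need to check on the order for this"
print(ACTIONS.get(classify(msg), lambda m: "Connecting you to an agent.")(msg))
```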

Bill Detwiler: So, I mean, I love the analogy of the peanut butter and chocolate there. The two things being complementary. I'm curious, and I like the examples, too, about how AI is manifesting with the chatbots and actually trying to gauge user intent, not just what the words are but what they mean. What do you think the next step for AI and automation is? What's that next place we'll be moving to with this combination?

John Kucera: There's so much more. So as the models get smarter, as it gets easier, one of the things that's really transformative is when you don't have to be a data scientist yet you can use all of these insights. Like collecting all the data into more places or into a single place to run that against, that unlocks all of these possibilities. So there's really unique solutions that are popping up there as our data is consolidated in one place and the predictions get smarter.

So maybe 10 years ago, reading a form with confidence wasn't good. The technology just wasn't really there. Now we're able to introduce things like a form reader, where you can put in a driver's license or a 1099, or any type of different PDF that you have, and say with confidence, this is exactly what the people have written in there, even if it's a hand scribble. And what's transformative about that is that's a tedious job somebody had to do before. They had to manually retype that while looking at the form. And so then it frees those people up to make those judgment calls and make those decisions.
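As a rough illustration of that kind of form reading, off-the-shelf OCR plus a pattern match can pull a single field out of a scan. This assumes the pytesseract and Pillow packages and a local Tesseract install; the file name, form layout and regular expression are hypothetical and far simpler than a production form reader:

```python
# Sketch: OCR a scanned form and extract one field with a regular expression.
import re
from typing import Optional

import pytesseract
from PIL import Image

def read_license_number(image_path: str) -> Optional[str]:
    text = pytesseract.image_to_string(Image.open(image_path))
    match = re.search(r"DL\s*NO[:\s]*([A-Z0-9-]+)", text, re.IGNORECASE)
    return match.group(1) if match else None

print(read_license_number("drivers_license_scan.png"))  # hypothetical file path
```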

I think voice is another frontier that we've talked a lot about. And one of the things with service cloud voice is transcription in real time. We now have high enough accuracy to figure out what you are saying, make text out of that and then further have more AI on top of the text, kind of like bots to figure out the intent behind what people are saying. Then that lets us say, hey, you might want to suggest saying this. They might be objecting to this concern. Or they sound kind of upset, you might want to be careful about what you say next. And so it's really helping superpower people and have AI be the digital assistant for building these relationships and making people more productive.

Bill Detwiler: So, eventually you're telling me I'll be able to talk to Tableau and ask it for a certain report and it'll give it to me without me having to understand how to do query language and how to build that.

John Kucera: Exactly. It's your natural language search for everything. I think the ultimate that all the smart people in the movies figure out is we want our Jarvis from Iron Man. We want to just be able to ask questions of this really smart assistant, have it figure out what we're saying, do the legwork to collect and organize that info and bring it back to us so that we can make decisions. That's where the future is and that's the vision that we keep marching towards.

Bill Detwiler: And I think what some people don't realize is how difficult that is to do. I mean, especially with different languages and with different meanings, different pronunciations of words. It is not a simple task for a machine to understand human intent. I mean, it's not a simple task for humans to understand human intent sometimes.

I'd love to close things out. We've touched on this a little bit, but talk about where you see process automation, and automation in general, going in the next few years. What are your customers asking you to automate for them that maybe we can't do now, because of limitations in technology or because we just aren't there yet?

John Kucera: One of the things that's a big focus is integration. You're starting to see this with all of the different systems: how can we talk to all of them? How can we get data from one, bring it to this system, and vice versa? How can we trigger off of changes there? There's been a huge amount of innovation in this area, and there's a lot of progress, but there are still so many different protocols, so many different places to unify. So just getting all of the data into one place is really hard.

And so that's one of the things I think is going to be really transformative over the next few years: you're going to increasingly see all of these different systems brought together. What this unlocks is a way to automate across those systems, across teams, across departments, to do these company-wide types of processes in a unified way, where you can monitor where the bottlenecks are and analyze what's working and what's not. Then you can really take that productivity gain to the next level.
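As a minimal sketch of the cross-system automation pattern described here, the snippet below listens for a change event from one hypothetical system and forwards it to another; the endpoint names and payload fields are assumptions, not any particular vendor's API.

```python
# A minimal event-driven integration sketch: when system A reports a change,
# push the relevant fields into system B.
from flask import Flask, request
import requests

app = Flask(__name__)
DOWNSTREAM_URL = "https://example.com/api/records"  # assumed target system

@app.route("/webhooks/record-changed", methods=["POST"])
def record_changed():
    """Receive a change event and forward a trimmed payload downstream."""
    event = request.get_json(force=True)
    payload = {"id": event.get("id"), "status": event.get("status")}
    resp = requests.post(DOWNSTREAM_URL, json=payload, timeout=10)
    return {"forwarded": resp.ok}, 200

if __name__ == "__main__":
    app.run(port=5000)
```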

So I think integration is a massive, massive piece there. And then of course, on the AI side, it's going to keep getting smarter. One of the funny things people don't know is that we ourselves, when we're talking to another human, only catch about 95% of the words that are spoken. So we need the AI to get that high, and higher. We're basically at the cusp of understanding what people are saying almost as well as a human, I believe. And soon I think we'll surpass it, which will be really interesting.
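For context, speech-recognition accuracy is commonly scored as word error rate (WER); the sketch below computes it with word-level edit distance, so the 95% figure mentioned above corresponds roughly to a WER of 5%. The example sentence is invented.

```python
# Word error rate: edit distance between reference and hypothesis transcripts,
# divided by the number of reference words.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # classic dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("please cancel my order", "please cancel my odor"))  # 0.25
```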

And we might have aids where the AI basically tells us what we missed when we were talking to people, like when your Zoom connection drops and the audio gets fuzzy. So I think AI getting above those accuracy thresholds is really, really interesting, because it opens up all these different use cases, whether it's voice and talking to people, or the text transcription you have for bots, call coaching, and so many more.


Read the original post:

Salesforce: Combining AI and automation can give us superpowers and make us more productive - ZDNet

3 Ways Intelligent Process Automation (IPA) is Knocking at the Doors of the Recruitment World – BBN Times


Intelligent process automation (IPA) will change the way recruiters hire candidates.

It will allow talent acquisition teams to automate their recruitment workflows, right from candidate sourcing to resume screening to interview scheduling to automated data entry, thereby benefiting recruiters by increasing their efficiency at work.

The digital era has enabled the merging of sci-fi fantasies and sophisticated technologies. While consumers enjoy readily available tech-powered services that let them perform just about any task in a few clicks, this has also opened up new challenges for organizations, regardless of their domain, to stay ahead of market trends. To keep up with ever-changing digital trends, it is essential that companies leverage new-age technologies as needed. However, adopting modern technologies is easier said than done. To make a technological implementation a success, companies need a high-impact, skilled, and experienced workforce. As the war for talented candidates intensifies, hiring managers are moving to recruitment automation in the hope of matching the right talent to the right jobs.

From classifieds to job boards to mass recruitment campaigns to social recruiting, and now recruitment automation, recruitment as a process has indeed come a long way, modernizing and streamlining hiring efforts. Recruitment automation is meant to eliminate the pain points that prick recruiters. By taking away the cumbersome tasks, recruitment automation will improve the ratio of high-performing talent in organizations. Intelligent process automation (IPA) is one such recruitment automation tool that can prove its effectiveness on par with human intelligence, or perhaps even beyond it.

The definition of IPA

Automation lies at the heart of almost every organization today. To take advantage of double or triple-digit returns, companies are already striving to automate and replace repetitive manual jobs that are highly prone to mistakes. Understanding and realizing the importance of automation in the recruitment landscape, companies are now making a seismic shift towards recruitment automation.

While we have already introduced one of these recruitment automation tools, IPA, it's now time to understand what exactly it means and its impact on companies trying to achieve digital transformation. Before IPA, the one tool that had assisted organizations in their digital transformation journey was robotic process automation (RPA). RPA, also referred to as software robots, is a technology-powered application intended to perform rule-based, mundane, and time-intensive tasks. Offering dramatic gains in accuracy, productivity, optimization, and efficiency, RPA soon established a place in the majority of companies. But just like any other sophisticated technology, RPA had its own set of flaws. RPA lacked the ability to sense, analyze, and make decisions on its own. For RPA to deliver accurate results, the tool had to be fed inputs in an understandable format; it failed to analyze and understand unstructured data, which makes up the majority of the data organizations collect. To address this problem, experts carried out study, research, analysis, and experimentation. Arriving at probably the best fix for the flaw, they augmented the automation tool with AI and ML so that it not only handles business activities intelligently but also learns and makes decisions on its own. That is where the concept of IPA came from. Augmented with sophisticated technologies like AI, big data analytics, smart workflow management, and natural language generation, RPA turned into IPA, which can comprehend a given situation and act suitably. It would not be wrong to say that IPA is an enhancement of RPA in terms of comprehension, sophistication, and intelligence.

The role of IPA in recruitment automation

To begin with, IPA is a tool that can help new-age HR teams bring down operational costs, free up time, and focus on strategic roles. Here are a few use cases for IPA in recruitment automation:

One of the initial yet important steps to staying ahead in this hyper-competitive market is proactively searching for talented candidates to fill both current and future open positions. While there will be dozens of candidates interested in an opening, engaging with every candidate individually is neither practical nor effective. IPA can not only scan the online world to gather details on potential active and passive candidates, but also augment outreach by sending them automated emails. Once interested candidates send their resumes, IPA can sense, scan, and understand the details provided. If the details match the job criteria, IPA will check the recruiter's schedule by accessing their calendar. Accordingly, IPA will draft another email proposing an interview date and time.
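A simplified sketch of that scheduling step is shown below; the calendar data model, slot length, and email wording are hypothetical and stand in for whatever an actual IPA product would integrate with.

```python
# Find the recruiter's first free slot and draft an interview invitation.
from datetime import datetime, timedelta

def first_free_slot(busy: list[tuple[datetime, datetime]],
                    start: datetime, slot: timedelta) -> datetime:
    """Return the earliest start time that does not overlap any busy interval."""
    candidate = start
    for begin, end in sorted(busy):
        if candidate + slot <= begin:
            break
        candidate = max(candidate, end)
    return candidate

def draft_invite(candidate_name: str, when: datetime) -> str:
    return (f"Hi {candidate_name},\n\n"
            f"Thanks for applying. Could we speak on {when:%A %d %B at %H:%M}?\n\n"
            f"Best regards,\nRecruiting Team")

# Usage with a hypothetical busy calendar block from 09:00 to 11:00.
busy = [(datetime(2021, 2, 15, 9), datetime(2021, 2, 15, 11))]
slot = first_free_slot(busy, datetime(2021, 2, 15, 9), timedelta(minutes=45))
print(draft_invite("Alex", slot))
```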

Manually screening resumes is one of the most repetitive, time-consuming, and dull tasks for recruiters. Job applications pile up in recruiters' inboxes every minute, leaving them with hardly any time to read every detail on a resume. Left with no other option, recruiters spend an average of 5-7 seconds going through the data in a resume, which means there is a high chance of missing out on potential candidates. Today, that is no longer the case. With IPA, manual resume screening goes out of the picture, leaving recruiters with spare time to strategize for business growth. By reading and comprehending a candidate's experience, skills, and qualifications, IPA makes its own decision about whether the applicant is a fit for the vacant position. The tool will automatically send an email to shortlisted candidates, notifying them about further rounds in the selection process, while candidates who didn't match the profile will be sent a rejection letter via email.
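To make the screening decision concrete, here is a minimal keyword-matching sketch; real IPA tools use far richer parsing and models, and the skill list and threshold here are assumptions for illustration.

```python
# Score a resume by the share of required skills it mentions and decide
# whether to advance the candidate or send a rejection email.
REQUIRED_SKILLS = {"python", "sql", "machine learning"}  # assumed job criteria
MIN_MATCH_RATIO = 0.6                                    # assumed cut-off

def screen_resume(resume_text: str) -> dict:
    text = resume_text.lower()
    matched = {skill for skill in REQUIRED_SKILLS if skill in text}
    ratio = len(matched) / len(REQUIRED_SKILLS)
    return {
        "matched_skills": sorted(matched),
        "match_ratio": round(ratio, 2),
        "advance": ratio >= MIN_MATCH_RATIO,
    }

print(screen_resume("5 years of Python and SQL, some machine learning work"))
```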

Once resume screening is done, IPA will schedule interviews with the shortlisted candidates. Recruiters can use video-interviewing platforms to conduct the interviews. By recording the digital interviews, IPA can help recruiters decide whether candidates are a good fit. The technology can analyze, gauge, and assess a candidate's speech, choice of words, behavioral patterns, and body language. Indeed, IPA is a technology that can not only enhance the quality of hire but also reduce the time it takes recruiters to fill open positions.

Right from candidate sourcing to outreach to resume screening to interviews, IPA's promise to automate the recruiting workflow is compelling. However, organizations must implement IPA after considering their existing infrastructure, the changes that will be required, and the elements of IPA that are beneficial to their business. To get the most out of IPA, companies should build a well-crafted, holistic strategy covering the points required to embrace innovation and change. Having a comprehensive strategy in place is the first and foremost step in the digitalization of recruitment, giving recruiters a clear path to success.

See more here:

3 Ways Intelligent Process Automation (IPA) is Knocking at the Doors of the Recruitment World - BBN Times

Automation in Retail to scale new heights and pave way for dynamism – Latest Digital Transformation Trends | Cloud News – Wire19

The automated future will be all about eCommerce businesses of every scale embracing smart technology and diving into new levels of efficiency. Interest in the trend has accelerated because it promises to replace human staff with machines for a superior customer experience. The indelible effect of the pandemic on the retail sector will, in no small measure, pave the way for a pragmatic option to limit physical contact. This is exactly where automated retail comes into the picture to tackle consumer issues in the longer run.

Let us now understand what is meant by automated retail, without any technical jargon or unnecessary digressions. Automated retail is a streamlined process in which automation replaces legacy manual systems in the retail sphere. The pandemic calls for clear navigation through a maze of unprecedented and turbulent crises, so it is important to bolster the consumer experience with automation and reshape the eCommerce sector.

There is a legion of retailers who believe that seasonal spikes in demand will put margins under pressure. In such a hypercompetitive environment, automated retail solutions are needed for the longer run. To overcome business inertia, companies need to invest in shelf-scanning robots, electronic labels, advanced automated backroom unloading, and automated check-out terminals. Inching towards innovation with an omnichannel set of strategies, enterprises will need optimized assortments along with ambitious expansion plans. This is exactly where automation in retail comes into the picture to enhance the overall customer and employee journey.

Let us look at the tangible benefits offered by an advanced automated retail system, which raises the bar of convenience in the longer run:

Customer retention: Automated retail companies promise a structured, efficient workflow that spurs profits through customer retention. Put simply: the more streamlined the process, the better the productivity, and thus the more time your team has to understand your customers' requirements. This approach helps retain a one-to-one connection with the customer while the technology takes care of the endless, tiring tasks.

Resilient systems and intelligent operations: With automated retail, your business will never be caught out by a spike in demand, since order management and inventory are taken care of by a resilient, advanced system. Integrating technology into your business is more likely to build a robust future for intelligent operations, opening up promising business opportunities.
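As a small illustration of the kind of rule such a system applies, the sketch below reorders stock when projected inventory falls below a safety threshold during the supplier lead time; all parameters are assumed.

```python
# Classic reorder-point rule used by automated inventory systems.
def should_reorder(on_hand: int, daily_demand: float,
                   lead_time_days: int, safety_stock: int) -> bool:
    """Reorder when stock on hand falls to the reorder point or below."""
    reorder_point = daily_demand * lead_time_days + safety_stock
    return on_hand <= reorder_point

# Example: 120 units on hand, 30 sold per day, 3-day lead time, 50 safety units.
print(should_reorder(on_hand=120, daily_demand=30, lead_time_days=3, safety_stock=50))  # True
```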

Holistic technology stack: Automation in retail eases administration with modern infrastructure and a holistic technology stack. To give a real picture of the business outlook, the automated systems surface delivery data, business performance metrics, and data-transformation insights.

Enhanced customer experience: Enhanced customer experience is one of the big competitive differentiators that will set you apart from the legion of market players. Companies can rate the caliber of their most productive people and yet feel that their customer experience is merely average. To lift performance and meet support demand, tailored automated retail solutions have come to the rescue. Streamlined operations meet the dynamic demands of the customer without hassle, while also addressing the uncertainties of the sales channel.

Anticipating challenges ahead of time: Staying ahead always requires stitching together solutions for an unforeseen bag of challenges. Picking automated retail solutions means resolving problems even before they arise; in simple terms, it means staying ahead of the curve. Bespoke, curated automated technology will fill the gaps left by employees, recover costs, and deliver efficiency and resources at a faster pace.

Data-driven processes reinforce the concept of an advanced customer journey built on cutting-edge technologies. The aim is to integrate business intelligence, bridge the gaps created by legacy systems, and deliver smoothly executed retail with enhanced agility. Intuitive automated retail promises to bring seamless experiences to the table, backed by comprehensive expertise. It is time to scale new heights and grow the profit graph with the automated retail revolution.

The post Automation in Retail to scale new heights and pave way for dynamism appeared first on NASSCOM Community | The Official Community of Indian IT Industry.

See more here:

Automation in Retail to scale new heights and pave way for dynamism - Latest Digital Transformation Trends | Cloud News - Wire19