Maniyanpilla Raju's son rubbishes fake news on actor's health, says he recovered from COVID – Mathrubhumi English

Thiruvananthapuram: Actor Maniyanpilla Raju's son Niranjan has rubbished the fake news about his father's health. He appealed to all not to circulate rumours about his father's condition. Maniyanpilla Raju, who was hospitalised after testing COVID positive, has recovered from the disease. He is currently resting at home.

The actor, who is also a producer, is expected to resume work once his health improves.

Maniyanpilla Raju was hospitalised for more than two weeks after contracting pneumonia along with a COVID-19 infection. When this news was reported, many people circulated fake reports about his health condition.

His son Niranjan has now taken to his Facebook page to respond to the fake news.

"I kindly ask everyone and those medias to stop publishing fake news about my father. He recovered over two weeks ago and is doing well and fine at home. Thank you!" wrote Niranjan on his Facebook page.


Kevin Durant blasts Shannon Sharpe for sharing fake news about him: ‘You go on tv in front of everybody pushing fake s**t’ – Basketball Network

Kevin Durant didn't take kindly to some of the recent comments Shannon Sharpe made on Undisputed, the show he co-hosts with Skip Bayless. Sharpe went on a rant, saying that Kevin Durant stated he is no longer chasing championship rings after joining the Brooklyn Nets. What bothers Durant most is that Sharpe built the story around a quote Durant apparently never said.

"If LeBron James is the GOAT, I beat the GOAT twice, hit the shots in his building, what does that make me?"

Durant went on his Twitter account and called out Sharpe for lying and saying things that weren't true. That resulted in Sharpe blocking Durant on social media, which added more fuel to the fire, and Durant called him sensitive. Apparently, Sharpe reached out to Durant to discuss everything in a private and civilized way, but Durant is not buying it.

This is not the first time a media member has gotten into an altercation with a player, and it is most definitely not the last. These confrontations occur fairly frequently, and if Durant is right that Sharpe misquoted him to make a story out of it, that is unprofessional, and Durant has every right to be frustrated. We'll see if he accepts Sharpe's apology and they squash everything, but knowing Durant, he will probably not let Sharpe off the hook so easily.


View On Astronomy: April means it’s time to say farewell to winter constellations – The Independent

Weather-wise, spring has hopefully sprung as we begin April. But did you know that many of the prominent winter constellations can still be observed? If the cold and snowy conditions of February and March prevented you from exploring the winter sky, it's now time to say farewell to some star patterns that often get overlooked during mid-winter in our region of the country. With public night viewing at Seagrave Observatory and Ladd Observatory closed due to COVID-19, even I did not observe my winter sky friends. It's difficult to set up a telescope in one's backyard with 18 inches of snow on the ground.

Once this column is published, I want you to scan the western sky after sunset to bid goodbye to many of the sky's brightest star patterns. See the accompanying star map. Start with Perseus toward the northwest, then move your gaze south (to the left). Here you will encounter Taurus, with the prominent star clusters named the Pleiades (aka the Seven Sisters) and the Hyades. While a binocular view of the Pleiades is a fine sight, the ideal view is through a telescope under low magnification, so the entire cluster fits into the field of view. In a dark, moonless sky, the Pleiades remind me of sparkling diamonds scattered upon black velvet.

Above Perseus and Taurus, you'll find Auriga. To the south (left) of Taurus you'll encounter Orion, the Mighty Hunter. And farther to the left will be Canis Major, home of Sirius, the brightest star we can see in our sky other than the Sun. You'll find Gemini, the Twins, above Orion, and this star pattern will be the last of the winter constellations to set below the horizon.

If you don't explore anything else in this region of the heavens before the constellations set, make an effort to observe the Orion Nebula if you haven't already done so this past winter season. Usually, the local observatories would have focused on this beautiful region of stellar dust and gas for many weeks, but closures prevented that activity during the winter of 2020-2021. In past columns over the years, I have highlighted this remarkable region of space where new stars are in the process of being born. Following is a brief description.

The grandeur of Orion resides in the region of his sword. Using binoculars, you'll see a wispy, hazy patch of green light enshrouding the stars. A telescope, even under low magnification, will reveal a greenish-tinged nebula of dust and gas: the magnificent Orion Nebula. I never tire of observing this vast dust cloud, often imagining what this region of space will look like when upwards of 1,000 stars are born here.

Mars Still Visible

Due to the orbital paths of Mars and the Earth, when Mars is visible it remains so for an extended period of time. The contrary is also true: when Mars disappears from view, it remains hidden for an extended period. Right now, you can still observe Mars as it resides in the constellation of Taurus. See the star map. You may recall that Mars and the Earth had a close encounter last Oct. 6, when our two worlds were only 38.6 million miles from one another. At that time Mars was a very bright pumpkin orange in the night sky, and its disk was large enough to show a wealth of detail through a telescope. On April 1 that distance will be 164.5 million miles, and Mars will be much dimmer than it was back in October. In fact, it will now be fainter than Taurus' brightest star, Aldebaran. While the planet will appear small even in a modest-sized telescope, perfect seeing conditions may allow one to discern a Martian surface feature or two. It doesn't hurt to try.
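The shrinking disk follows from simple geometry: a planet's apparent angular size scales inversely with its distance. A rough sketch of the effect, using an approximate value of 4,212 miles for Mars' diameter (our assumption; the distances are the ones quoted above):

```python
ARCSEC_PER_RADIAN = 206265  # arcseconds in one radian

def angular_diameter_arcsec(diameter_miles, distance_miles):
    # Small-angle approximation: angle (radians) = diameter / distance
    return diameter_miles / distance_miles * ARCSEC_PER_RADIAN

MARS_DIAMETER_MILES = 4212  # approximate

# Close approach of Oct. 6, 2020: 38.6 million miles
print(round(angular_diameter_arcsec(MARS_DIAMETER_MILES, 38.6e6), 1))   # about 22.5 arcseconds
# April 1: 164.5 million miles
print(round(angular_diameter_arcsec(MARS_DIAMETER_MILES, 164.5e6), 1))  # about 5.3 arcseconds
```

Between October and April the disk shrinks by a factor of roughly four, which is why only the largest surface features remain within reach of a backyard telescope.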

April Lyrids Meteor Shower

I always look forward to a decent display of shooting stars. While the upcoming April Lyrids meteor display on the night of April 21-22 is not a blockbuster event, one can potentially observe upwards of 20 meteors per hour in a dark country sky. The Lyrids appear to radiate outward from an area of sky on the Lyra-Hercules border near the bright star Vega, which will be about 45 degrees (halfway between the horizon and zenith) above the eastern horizon at midnight and well-placed for observing.

A bright waxing gibbous Moon, about 70% illuminated, will somewhat reduce the visibility of the fainter meteors. However, it will be located more than 100 degrees away to the west in the constellation of Leo, to the right of the backwards-question-mark asterism. While still a nuisance light source, the Moon shouldn't compromise your observing session. Try to block its brightness using a building or some trees.

These swift and bright meteors disintegrate after hitting our atmosphere at a moderate speed of 29.8 miles per second. They often produce luminous trains of dust that can be observed for several seconds. The Moon will set just before 4 a.m. EDT, leaving a little more than an hour of moonless sky before dawn's early light will begin to overwhelm the stars and the meteors.

Best of luck in all your observing endeavors.

The author has been involved in the field of observational astronomy in Rhode Island for more than 35 years. He serves as historian of Skyscrapers Inc., the second oldest continuously operating amateur astronomical society in the United States.


Astronomers find the ‘safest place’ to live in the Milky Way – Space.com

Astronomers have searched the entire Milky Way to identify the safest places to live. It turns out, we're in a pretty good spot.

But if the past year has made you feel ready to relocate to another planet, you might want to look toward the center of the galaxy, according to the new research.

The new findings were made by a group of Italian astronomers, who studied locations where powerful cosmic explosions may have killed off life. These explosions, such as supernovas and gamma-ray bursts, spew high-energy particles and radiation that can shred DNA and kill life. By this logic, regions that are more hospitable to life will be the ones without frequent explosions, the astronomers reasoned.

"Powerful cosmic explosions are not negligible for the existence of life in our galaxy throughout its cosmic history," said the new study's lead author, Riccardo Spinelli, an astronomer at the University of Insubria in Italy. "These events have played a role in jeopardizing life across most of the Milky Way."

Related: 11 fascinating facts about our Milky Way galaxy

In addition to finding the deadliest hotspots, the astronomers also identified the safest places throughout the galaxy's history, going back 11 billion years. The results show that we're currently at the edge of a wide band of hospitable real estate. But in the Milky Way's youth, the galaxy's edges were a safer bet.

Many factors make a planet habitable. For instance, planets need to be in a Goldilocks zone, where heat and activity from the host star is neither too much nor too little, but just right. But in addition to these local conditions, life also has to contend with harmful radiation coming from interstellar space.

Powerful cosmic events, such as supernovas and gamma-ray bursts, stream dangerous, high-energy particles at nearly the speed of light. Not only can these particles kill all the lifeforms we know about, but they can also strip entire planets of their atmospheres. After such an event, the scientists believe, planets in nearby star systems would be wiped clear of life.

Related: The 9 real ways Earth could end

"For planets very close to the stellar explosion it is plausible that there is a complete sterilization," Spinelli told Live Science. "In those far away, a mass extinction is more likely."

The authors wrote in the study that a nearby gamma-ray burst may have played a leading role in the Ordovician mass extinction event around 450 million years ago, the second largest in Earth's history. While there is no concrete evidence linking a specific gamma-ray burst to this extinction event, the authors think it is plausible, given Earth's position in the galaxy.

Using models of star formation and evolution, the astronomers calculated when specific regions of the galaxy would be inundated with killer radiation. Early on in the galaxy's history, the inner galaxy out to about 33,000 light-years was alight with intense star formation, which rendered it inhospitable. At this time, the galaxy was frequently rocked by powerful cosmic explosions, but the outermost regions, which had fewer stars, were mostly spared these cataclysms.

Until about 6 billion years ago, most of the galaxy was regularly sterilized by massive explosions. As the galaxy aged, such explosions became less common. Today, the mid regions, forming a ring from 6,500 light-years from the galaxy's center to around 26,000 light-years from the center, are the safest areas for life. Closer to the center, supernovas and other events are still common, and in the outskirts, there are fewer terrestrial planets and more gamma-ray bursts.

Luckily for us, our galactic neighborhood is getting more and more life-friendly. In the long-term galactic future, there will be fewer extreme events nearby that could cause another mass extinction.

The new paper's conclusions seem reasonable at first glance, Steven Desch, an astrophysicist at Arizona State University, told Live Science.

"I'm pleased to note that they do seem to put [the research] in a rigorous framework and have realistic expectations about what a gamma-ray burst would do, and account for factors that sometimes people forget," such as how the energy and material released by gamma-ray bursts isn't equal in all directions, said Desch, who was not involved with the new work. "I haven't gone through their numbers in detail, but at first glance it's reasonable."

The new research, published in the March issue of the journal Astronomy and Astrophysics, might one day help astronomers decide where to search for habitable exoplanets. But for now, technology limits astronomers to searching only nearby areas, Desch said.

Originally published on Live Science.


The Vera C. Rubin Observatory and Women of Chilean Astronomy – National Air and Space Museum

In March 2020, the Vera C. Rubin Observatory sat partially erected, perched on Chile's Cerro Pachón in the foothills of the Andes Mountains. The observatory had halted construction of the 8.4-meter telescope and its associated buildings due to the coronavirus pandemic. By October 2020, with safety precautions in place, construction teams began slowly returning to the mountain. Earlier this month, just one year after its unexpected closure, the Rubin Observatory reached a major milestone when crews used a crane to lower the top end of the telescope, weighing approximately 28 tons and measuring 10 meters in diameter, through the observatory's open dome and into place on the telescope. This was one of the last remaining heavy pieces to be added as the project nears completion and looks toward beginning regular observations in 2022.

Once in operation, the Rubin Observatory will survey the sky above it, capturing images every few nights to create a catalog of data and a map of the visible universe. Astronomers will use this accumulation of roughly 20 terabytes of data each night, enough to hold the equivalent of four million of your favorite songs, to push our scientific understanding of the structure and evolution of the universe.
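The songs comparison checks out with quick arithmetic, if one assumes a typical compressed song of roughly 5 megabytes (our assumption, not a figure from the observatory):

```python
NIGHTLY_DATA_BYTES = 20 * 10**12   # 20 terabytes per night, in decimal units
SONG_SIZE_BYTES = 5 * 10**6        # assumed ~5 MB per compressed song

songs_per_night = NIGHTLY_DATA_BYTES // SONG_SIZE_BYTES
print(f"{songs_per_night:,}")  # 4,000,000
```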

Initially called the Large Synoptic Survey Telescope, the Vera C. Rubin Observatory was renamed to honor a pioneer in astronomy, particularly in the field of dark matter, one of the many mysteries the new observatory is expected to help probe. Beginning in the 1960s, Dr. Vera Rubin used a new instrument designed by Kent Ford to study the motion of galaxies. Rubin discovered that the stars in the galaxies she observed orbited faster than expected. One explanation for this discrepancy was that there was more mass in the galaxy than could be seen in the stars alone. Rubins observations helped provide the best observational evidence that the universe is not only composed of ordinary matter, but is actually dominated by dark matter.

In 2019, two members of the U.S. House of Representatives, Eddie Bernice Johnson and Jenniffer González-Colón, introduced the congressional bill to rename the observatory, the text of which noted Rubin's pioneering astronomical work but also the barriers she faced because of her gender. Princeton University, Rubin's preferred choice for graduate work, did not allow women to apply to its programs, and the astronomical community largely ignored Rubin's research early in her career. Eventually she succeeded in securing a position at the Carnegie Institution of Washington and became the first woman to officially observe at the Palomar Observatory, which was home to the world's largest telescope. Before her death in 2016, Rubin served as a mentor to other women astronomers and fought for better gender parity in astronomy.

Rubin observed the universe with some of the largest telescopes available during the late twentieth century, including those in Chile at the newly established Cerro Tololo Inter-American Observatory and the Las Campanas Observatory. When Rubin began her astronomical career, Chile held a small fraction of the world's telescopes. However, largely due to the nearly perfect dry and clear conditions, particularly in the Atacama Desert in Chile's northern region, today Chile contains the vast majority, around 70%, of the world's large ground-based telescopes.

Most Chilean observatories constructed in the last 60 years are operated by North American and European nations. For their access to Chile's pristine skies, these international collaborators agreed to reserve 10% of observing time for Chilean astronomers, a percentage that many argue is not adequate. The number of Chilean universities offering PhD degrees in astronomy has increased in the last decade, and the number of professional astronomers working in Chile has tripled in that decade alone. At the Vera C. Rubin Observatory, all of the data will be made available to both Chilean and U.S. astronomers, which should aid the growing number of astronomers in Chile. However, women still account for only 15% of Chile's astronomers, about half their representation worldwide. Placing Rubin's name on a new observatory and providing greater access to its data is a recognition of her incredible accomplishments and tireless efforts, but it is also a reminder of the continued marginalization of women in astronomy and the further inequity across race and nationality.

While the number of women astronomers in Chile remains low, women have succeeded in extending our knowledge of the universe. Dr. María Teresa Ruiz broke through her own barriers as she worked to become a trailblazer for women in Chilean astronomy. Born in Santiago, Ruiz was the first woman to earn a degree in the newly formed astronomy program at the University of Chile. When she graduated, there were no PhD-granting astronomy programs in Chile, so she traveled to the United States to attend Princeton University, the same institution where, two decades earlier, Rubin had not been permitted to apply. In 1975, Ruiz became the first woman to earn a PhD in astrophysics at Princeton. Ruiz eventually returned to Chile and helped to rebuild and foster the university system. In 1997, she discovered one of the first free-floating brown dwarfs using the European Southern Observatory's La Silla Observatory. Brown dwarfs are star-like objects that are too small to fuse hydrogen but too large to be planets. Their discovery and subsequent study refuted the hypothesis that brown dwarfs might account for a significant amount of the dark matter in the universe. For her long and accomplished career in astronomy, Ruiz was awarded Chile's National Prize for Exact Sciences and remains a leader for science in Chile.

Ruiz paved the way for younger scientists to follow in her footsteps. Dr. Bárbara Rojas-Ayala began her astronomical studies under Ruiz and continues to research dwarf stars at the University of Tarapacá. Dr. Maritza Soto has already impressed with the discovery of three planets, the first of which she found in 2011 while a graduate student at the University of Chile. Soto continues her research while hoping to normalize careers in astronomy, particularly for women. In 2019, Soto said she hoped to impart that astronomy "is not alien stuff that only two people in the world do; it's really a career path. It's something you can do, that anyone can do, if you work a lot for it. It's not impossible, you don't have to be a genius. You can just be a normal person."

By the time the Vera C. Rubin Observatory begins operations in 2022, followed by other large telescopes being built along the Chilean Andes, we can hope that the number of women astronomers using those facilities will continue to rise. To accomplish this, major steps still need to be taken, and enforced, to make the astronomy community more inviting and more supportive of women, particularly in the places that host the world's telescopes.


Light pollution from satellites ‘poses threat’ to astronomy – The Guardian

Artificial satellites and space junk orbiting the Earth can increase the brightness of the night sky, researchers have found, with experts warning such light pollution could hinder astronomers' ability to make observations of our universe.

There are more than 9,200 tonnes of space objects in orbit around the Earth, ranging from defunct satellites to tiny fragments, according to the European Space Agency (ESA). Now it seems space junk not only poses a collision risk but, together with other space objects, is contributing to light pollution.

Writing in the Monthly Notices of the Royal Astronomical Society, researchers describe how sunlight that is reflected and scattered from space objects can appear as streaks in observations made by ground-based telescopes.

"Because the streaks are often comparable to or brighter than objects of astrophysical interest, their presence tends to compromise astronomical data and poses the threat of irretrievable loss of information," the team writes.

But for some instruments, the impact could be greater still. "When imaged with high angular resolution and high sensitivity detectors, many of these objects appear as individual streaks in science images," they write. "However, when observed with relatively low-sensitivity detectors like the unaided human eye, or with low-angular-resolution photometers, their combined effect is that of a diffuse night sky brightness component, much like the unresolved integrated starlight background of the Milky Way."

Calculations in the report suggest this glow could reach up to 10% of the natural night sky brightness, a level of light pollution previously set by the International Astronomical Union (IAU) as the acceptable limit at astronomical observatory sites.

While the researchers say the idea of a natural level of brightness has its own difficulties, they stress further research is necessary, adding that the situation could become worse as further satellites, including mega-constellations, are launched.

Greg Brown, a Royal Observatory astronomer who was not involved in the study, said light pollution was a big problem for astronomers.

"Telescopes like the soon-to-be-operational Vera C Rubin Observatory are expecting vast contamination of their images from just the mega-constellations expected in the next few years, which will be difficult and costly to compensate for and do seriously risk scientists missing out on key scientific discoveries," he said.

While Brown said it was unclear whether the assumptions made in the study held true, given changes in satellite design and the difficulty of estimating small space debris, he said astronomical observations would be increasingly affected by such light pollution.

"This is definitely the time to be concerned about the future of both professional and amateur astronomy," he said.

Prof Danny Steeghs of the University of Warwick said there was a balance to be struck between the benefits of satellites and their impact on our ability to study the night sky, but agreed light pollution was likely to be a growing, and escalating, problem.

"We can, as astronomers, remove or reduce the direct impact on our data somewhat by employing image processing techniques, but of course it would be a lot better if they are not there for starters," he said.

Fabio Falchi, from the Light Pollution Science and Technology Institute in Italy, said the problem was global. "The distribution of the space debris is fairly uniform around our planet, so the contamination is already present everywhere," he said, suggesting those responsible for the problem should help to solve it.

"Maybe Elon Musk can put his engineers at work to find out a solution, at least to counterbalance a little the damage that his Starlink mega-constellation of satellites is going to make to the starry sky," he said.

While projects have recently begun to clean up space junk, Steeghs said one difficulty was that small fragments could be tricky to sweep up yet could nonetheless contribute to the light pollution.

Chris Lintott, a professor of astrophysics at the University of Oxford, also stressed the need for action. "It does seem that simple efforts like building satellites out of darker materials might be very helpful, and I hope operators will take such steps as soon as possible," he said.


If Astronomers see Isoprene in the Atmosphere of an Alien World, There's a Good Chance There's Life There – Universe Today

It is no exaggeration to say that the study of extrasolar planets has exploded in recent decades. To date, 4,375 exoplanets have been confirmed in 3,247 systems, with another 5,856 candidates awaiting confirmation. In recent years, exoplanet studies have started to transition from the process of discovery to one of characterization. This process is expected to accelerate once next-generation telescopes become operational.

As a result, astrobiologists are working to create comprehensive lists of potential biosignatures, chemical compounds and processes that are associated with life (oxygen, carbon dioxide, water, etc.). But according to new research by a team from the Massachusetts Institute of Technology (MIT), another potential biosignature we should be on the lookout for is a hydrocarbon called isoprene (C5H8).

The study describing their findings, "Assessment of Isoprene as a Possible Biosignature Gas in Exoplanets with Anoxic Atmospheres," recently appeared online and has been accepted for publication in the journal Astrobiology. For the study, the MIT team looked at the growing list of possible biosignatures that astronomers will be on the lookout for in the coming years.

To date, the vast majority of exoplanets have been detected and confirmed using indirect methods. For the most part, astronomers have relied on the Transit Method (Transit Photometry) and the Radial Velocity Method (Doppler Spectroscopy), alone or in combination. Only a few have been detectable using Direct Imaging, which makes it very difficult to characterize exoplanet atmospheres and surfaces.
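To illustrate why transiting rocky planets are so hard to characterize, consider the depth of a transit: the fraction of starlight blocked equals the square of the planet-to-star radius ratio. A minimal sketch using approximate published radii (these numbers are ours, not the article's):

```python
def transit_depth(planet_radius_km, star_radius_km):
    # Fraction of the star's light blocked during transit: (Rp / Rs) ** 2
    return (planet_radius_km / star_radius_km) ** 2

SUN_RADIUS_KM = 695_700       # approximate
EARTH_RADIUS_KM = 6_371       # approximate
JUPITER_RADIUS_KM = 69_911    # approximate

# An Earth twin dims a Sun-like star by only ~84 parts per million...
print(f"{transit_depth(EARTH_RADIUS_KM, SUN_RADIUS_KM) * 1e6:.0f} ppm")
# ...while a Jupiter twin produces a dip of roughly 1%.
print(f"{transit_depth(JUPITER_RADIUS_KM, SUN_RADIUS_KM) * 100:.1f}%")
```

The hundred-fold difference between those two dips is one reason most confirmed exoplanets to date are large.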

Only on rare occasions have astronomers been able to obtain spectra that allowed them to determine the chemical composition of a planet's atmosphere. This was either the result of light passing through an exoplanet's atmosphere as it transited in front of its star, or in the few cases where Direct Imaging was possible and light reflected from the exoplanet's atmosphere could be studied.

Much of this has had to do with the limits of our current telescopes, which do not have the necessary resolution to observe smaller, rocky planets that orbit closer to their star. Astronomers and astrobiologists believe that it is these planets that are most likely to be potentially habitable, but any light reflected from their surfaces and atmospheres is overpowered by the light coming from their stars.

However, that will change soon as next-generation instruments like the James Webb Space Telescope (JWST) take to space. Sara Seager, the Class of 1941 Professor of Physics and Planetary Sciences at MIT, leads the research group responsible (aka the Seager Group) and was a co-author on the paper. As she told Universe Today via email:

"With the upcoming October 2021 launch of the James Webb Space Telescope we will have our first capability of searching for biosignature gases, but it will be tough because the atmospheric signals of small rocky planets are so weak to begin with. With the JWST on the horizon, the number of people working in the field has grown tremendously: studies such as this one coming up with new potential biosignature gases, and other work showing potential false positives even for gases such as oxygen."

Once it is deployed and operational, the JWST will be able to observe our Universe at longer wavelengths (in the near- and mid-infrared range) and with greatly improved sensitivity. The telescope will also rely on a series of spectrographs to obtain composition data, as well as coronagraphs to block out the obscuring light of parent stars. This technology will enable astronomers to characterize the atmospheres of smaller rocky planets.

In turn, this data will allow scientists to place much tighter constraints on an exoplanet's habitability and could even lead to the detection of known (and/or potential) biosignatures. As noted, these biosignatures include the chemical indications associated with life and biological processes, not to mention the types of conditions that are favorable to it.

These include oxygen gas (O2), which is essential to most forms of life on Earth and is produced by photosynthetic organisms (plants, trees, cyanobacteria, etc.). These same organisms metabolize carbon dioxide (CO2), which oxygen-metabolizing life emits as a waste product. There's also water (H2O), which is essential to all life as we know it, and methane (CH4), which is emitted by decaying organic matter.

Since volcanic activity is believed to play an important role in planetary habitability, the chemical byproducts associated with volcanism (hydrogen sulfide (H2S), sulfur dioxide (SO2), carbon monoxide (CO), hydrogen gas (H2), etc.) are also considered biosignatures. To this list, Zhan, Seager, and their colleagues wished to add another possible biosignature: isoprene. As Zhan explained to Universe Today via email:

"Our research group at MIT focuses on using a holistic approach to explore all possible gases as potential biosignature gases. Our prior work led to the creation of the all small molecules (ASM) database. We proceed to filter the ASM database to identify the most plausible biosignature gas candidates, one of which is isoprene, using machine learning and data-driven approaches," said Dr. Zhuchang Zhan.

Like its cousin methane, isoprene is an organic hydrocarbon molecule produced as a secondary metabolite by various species here on Earth. In addition to deciduous trees, isoprene is produced by a diverse array of evolutionarily distant organisms such as bacteria, plants, and animals. As Seager explained, this makes it promising as a potential biosignature:

"Isoprene is promising because it is produced in vast quantities by life on Earth, as much as methane production! Furthermore, a huge variety of life forms (from bacteria to plants and animals), those that are evolutionarily distant from each other, produce isoprene, suggesting it might be some kind of key building block that life elsewhere might also make."

While isoprene is about as abundant as methane here on Earth, it is destroyed by interaction with oxygen and oxygen-containing radicals. For this reason, Zhan, Seager, and their team chose to focus on anoxic atmospheres: environments predominantly composed of H2, CO2, and nitrogen gas (N2), similar to the composition of Earth's primordial atmosphere.

According to their findings, a primordial planet (where life is beginning to emerge) would have abundant isoprene in its atmosphere. This would have been the case on Earth between 4 and 2.5 billion years ago, when single-celled organisms were the only life and photosynthetic cyanobacteria were slowly converting Earth's atmosphere into one that was oxygen-rich.

By 2.5 billion years ago, this culminated in the Great Oxygenation Event (GOE), which proved toxic to many organisms (and to metabolites like isoprene). It was also during this time that complex lifeforms (eukaryotes and multi-celled organisms) began to emerge. In this respect, isoprene could be used to identify planets that are in the midst of a major evolutionary shift, laying the groundwork for future animal phyla.

But as Zhan noted, teasing out this potential biosignature will be a challenge, even for the JWST:

"The caveats with isoprene as a biomarker are that: 1. 10x-100x the Earth's isoprene production rate is needed for detection; 2. Detecting the near-infrared isoprene spectral feature can be hindered by the presence of methane or other hydrocarbons. Unique detection of isoprene will be challenging with JWST, as many hydrocarbon molecules share similar spectral features in near-infrared wavelengths. But future telescopes that focus on the mid-IR wavelengths will be able to detect isoprene spectral features uniquely."

Beyond the JWST, the Nancy Grace Roman Space Telescope (successor to the Hubble mission) is also expected to launch by 2025. This observatory will have "the power of one hundred Hubbles," and its recently upgraded infrared filters will allow it to characterize exoplanets on its own and through collaborations with the JWST and other great observatories.

There are also several ground-based telescopes currently being built here on Earth that will rely on sophisticated spectrometers, coronagraphs, and adaptive optics (AOs). These include the Extremely Large Telescope (ELT), the Giant Magellan Telescope (GMT), and the Thirty Meter Telescope (TMT). These telescopes will also be able to conduct direct-imaging studies of exoplanets, and the results are expected to be ground-breaking.

Between improved instruments, rapidly improving data-analysis techniques, and improvements in our methodology, the study of exoplanets is only expected to accelerate. In addition to having tens of thousands more exoplanets available for study (many of which will be rocky and Earth-like), the unprecedented views we will have of them will let us see just how many habitable worlds are out there.

Whether or not this will result in the discovery of extraterrestrial life within our lifetimes remains to be seen. But one thing is clear. In the coming years, when astronomers start combing through all the new data they will have on exoplanet atmospheres, they will have a comprehensive list of biosignatures to guide them.

Seager and Zhan's previous work includes a concept for a Martian greenhouse that could provide all the necessary food for a crew of four astronauts for up to two years. This greenhouse, known as the Biosphere Engineered Architecture for Viable Extraterrestrial Residence (BEAVER), took second place in the 2019 NASA BIG Idea Challenge.

Further Reading: arXiv


Astronomers see a ghostly ‘radio jellyfish’ rise from the dead in the southern sky – Livescience.com

Galaxy clusters are the largest structures in the universe bound together by gravity. They can contain thousands of galaxies, enormous oceans of hot gas, invisible islands of dark matter and sometimes the glowing ghost of a jellyfish or two.

In the galaxy cluster Abell 2877, located in the southern sky about 300 million light-years from Earth, astronomers have discovered one such jellyfish. Visible only in a narrow band of radio light, the cosmic jelly is more than 1 million light-years wide and includes a large lobe of supercharged plasma, dripping with tentacles of hot gas.

The structure's jelly-like appearance is both "ghostly" and "uncanny," according to the authors of a new paper published March 17 in the Astrophysical Journal. However, even more astonishing than the space jelly's shape is how quickly the structure vanishes from view, the authors said.


"This radio jellyfish holds a world record of sorts," lead study author Torrance Hodgson, of the International Centre for Radio Astronomy Research (ICRAR) in Perth, Australia, said in a statement. "Whilst it's bright at regular FM radio frequencies, at 200 megahertz the emission all but disappears. No other extragalactic emission like this has been observed to disappear anywhere near so rapidly."

The universe is swimming with energetic structures that are only visible in radio wavelengths, like the mysterious X-shaped galaxies cartwheeling through space, or the twin blobs at the center of the Milky Way. However, no structure this large has ever been observed in such a narrow band of the radio spectrum.

According to the researchers, that likely means this cosmic jellyfish is actually an odd bird known as a "radio phoenix."

Like the mythical bird that died in flame and rose again from the ashes, a radio phoenix is a cosmic structure that's born from a high-energy explosion (like a black hole outburst), fades over millions of years as the structure expands and its electrons lose energy, then finally gets reenergized by another cosmic cataclysm (such as the collision of two galaxies).

To create a radio phoenix, that last cosmic event must be powerful enough to send shockwaves surging through the dormant cloud of electrons, causing the cloud to compress and the electrons to spark with energy again. According to the study authors, that could cause a structure like the jellyfish cluster to glow brightly in certain radio wavelengths, but dim rapidly in others.

"Our working theory is that around 2 billion years ago, a handful of supermassive black holes from multiple galaxies spewed out powerful jets of plasma," Hodgson said.

That plasma's energy faded over millions of years, until "quite recently, two things happened: the plasma started mixing at the same time as very gentle shock waves passed through the system," Hodgson said. "This has briefly reignited the plasma, lighting up the jellyfish and its tentacles for us to see."

The researchers used a computer simulation to show that this explanation is a plausible origin story for that big jellyfish in the sky, though several big questions, such as where the "gentle shockwaves" came from, remain unanswered. The team hopes to take a closer look at the jellyfish in the future, following the completion of the Square Kilometre Array, a network of hundreds of radio telescope antennas planned for construction in the Australian Outback.

Originally published on Live Science.


Astronomers Have Captured the Most Detailed Photo of a Black Hole EverSee the Magnetic Fields That Power It Here – artnet News

Two years ago, astronomers managed to photograph a black hole for the very first time. The team behind the Event Horizon Telescope project was awarded the Breakthrough Prize, known as the "Oscars of science," for their effort, and New York's Museum of Modern Art acquired the image in the form of an inkjet print.

Now, the same astronomers have captured the most detailed photograph to date of a black hole, one of the universes most enigmatic features, which was once thought to be unobservable.

Seen in polarized light, the fuzzy ring of light in the original image is now in focus, with crisp lines swirling in toward the center of what appears to be a bottomless pit, sucking in anything and everything within its grasp.

"It's like putting on a pair of polarized sunglasses on a bright sunny day; all of a sudden you can see what's going on," astronomer Sheperd Doeleman of the Harvard-Smithsonian Center for Astrophysics told the New York Times.

The Event Horizon Telescope was designed to capture images of a black hole. The image shows the light around the black hole's boundary. Image courtesy of the Event Horizon Telescope.

A black hole is a field of matter so dense that not even rays of light can escape its gravitational pull. But as the black hole inexorably draws in gas, dust, and stars, some light is actually propelled outward in jets of energetic particles.

"This jet process is totally amazing: something the size of our solar system can shoot out a jet that pierces through entire galaxies and even galaxy neighborhoods," Event Horizon Telescope team member Sara Issaoun told IGN.

The new image shows the black hole's vortex, and the magnetic field lines at its inner edge, illustrating how the magnetic field both feeds the black hole's insatiable hunger and powers the intergalactic fireworks show that surrounds it.

This image shows the jet in the M87 galaxy in polarized light. It is 6,000 light-years long. Image courtesy of ALMA (ESO/NAOJ/NRAO), Goddi et al.

"The main finding is that we not only see the magnetic fields near the black hole as expected, but they also appear to be strong. Our results indicate that the magnetic fields can push the gas around and resist being stretched. The result is an interesting clue to how black holes feed on gas and grow," Jason Dexter, a professor at the University of Colorado Boulder, told Space.com.

"We are now seeing the next crucial piece of evidence to understand how magnetic fields behave around black holes, and how activity in this very compact region of space can drive powerful jets that extend far beyond the galaxy," said Monika Mocibrodzka, coordinator of the EHT Polarimetry Working Group, in a statement.

The galaxy Messier 87, in the constellation Virgo, as captured by the European Southern Observatory's Very Large Telescope. Photo courtesy of the European Southern Observatory.

The findings from the new image are the subject of three papers published last week in the Astrophysical Journal Letters, two by the Event Horizon Telescope Collaboration and one by lead author Ciriaco Goddi of Radboud University in the Netherlands.

The black hole captured by the telescope lies 55 million light-years away, at the heart of Messier 87, a supergiant elliptical galaxy in the constellation Virgo. At 6.5 billion times the mass of our sun, it is unimaginably supermassive; the surrounding circular field of electrified gas, or plasma, captured in the image measures about 30 billion miles across, or four times the width of Pluto's orbit.

The Atacama Large Millimeter/submillimeter Array (ALMA), part of the Event Horizon Telescope Collaboration, set against the Milky Way. Photo by European Southern Observatory Photo Ambassador Babak Tafreshi.

Capturing the image was a global effort. The Event Horizon Telescope collaboration is powered by eight ground-based radio telescopes in Chile, Mexico, Spain, Hawaii, Arizona, and the Antarctic, overseen by an international team of radio astronomers that synchronize their observations by atomic clock. Together, the sites essentially create a planet-sized telescope.

The project's name comes from the point of no return around a black hole. Beyond the event horizon, no light or matter can escape.

Watch a video approaching the Messier 87 galaxy black hole below.


Terrascope: The Whole Earth Telescope – SYFY WIRE

One of the biggest problems in astronomy is a simple one: not enough light. One of the reasons we make telescopes so big is to collect enough light to see faint things. The analogy is a bucket in the rain: the wider the bucket, the more rain you collect. Photons fall from the sky, and the bigger your mirror, the more light you collect.

It's hard to make big telescopes. Supporting a mirror bigger than 8 meters is hard, though making them segmented (like the James Webb Space Telescope) helps. Still, the cost is huge, and to make scopes bigger than what we have now you have to start thinking in the billions of dollars. Ouch.

So astronomers get clever in ways to collect light. One of the most clever astronomers I know is David Kipping. I've written about his work a few times, including about his search for exomoons. Like so many others, he's a photon-starved astronomer, and he recently published an amazing concept to collect more light from cosmic objects: The Terrascope.

It's called that because it uses the Earth as a gigantic lens to focus light.

Yes, seriously. It's the Whole Earth Telescope.

He made a video explaining it:

A lens bends the path light takes (this is called refraction), so that a photon that would otherwise miss your camera gets directed into it. Again with a rain analogy: A raindrop that falls a meter away from you misses you, but if you could deflect (refract!) the path of that drop a little bit while it's still up high, it'll be aimed right at you, and you get wet.

In the case of the Terrascope, the lens is actually Earth's atmosphere. When light moves from one medium to another (like air to water, or space to air), its path bends a little bit. The amount it bends depends on the angle it enters and the stuff (what we usually call the medium) it's passing through. This is why the Sun (and Moon) looks flattened when it sets; the top part of the Sun is passing through less air than the bottom part, and in effect the light from those parts of the Sun gets squeezed together, making the Sun look smaller vertically.
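This bending follows Snell's law. A minimal sketch in Python, assuming a sea-level refractive index of about 1.000293 for air (the exact value varies with pressure, temperature, and wavelength):

```python
import math

def snell_refraction(theta1_deg, n1=1.0, n2=1.000293):
    """Angle of refraction (degrees) for light crossing into a denser
    medium, via Snell's law: n1*sin(theta1) = n2*sin(theta2)."""
    theta1 = math.radians(theta1_deg)
    return math.degrees(math.asin(n1 * math.sin(theta1) / n2))

# A ray entering sea-level air at 80 degrees from the normal is
# deflected slightly toward the normal:
incoming = 80.0
outgoing = snell_refraction(incoming)
print(f"{incoming:.1f} deg -> {outgoing:.3f} deg")
```

The deflection per boundary is tiny; the degree-scale bending quoted below accumulates along a long grazing path through progressively denser air.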

So a beam of light coming from a distant star, say, passes through Earth's air and its path bends. The most it bends is about a degree (twice the size of the Sun or Moon in the sky). Now imagine you're floating in space, with the Earth between you and that star. You can't see the star because the Earth is in the way. BUT if you are at just the right distance, the Earth's air will bend the light of the star right at you. The distance from Earth you need to be for this is about 360,000 kilometers, nearly all the way to the Moon.
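That distance can be sanity-checked with small-angle geometry: a ray grazing Earth's limb and deflected by about one degree crosses the Earth-star axis at roughly R_Earth / tan(1°). A quick sketch (the radius and bend angle are the round numbers used above):

```python
import math

R_EARTH_KM = 6371      # mean Earth radius
MAX_BEND_DEG = 1.0     # peak atmospheric deflection quoted above

# A ray grazing the limb and bent by ~1 degree converges on the axis
# at roughly R / tan(bend angle):
focal_km = R_EARTH_KM / math.tan(math.radians(MAX_BEND_DEG))
print(f"{focal_km:,.0f} km")  # roughly 365,000 km, near the lunar distance
```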

Now think about this: From there, the Earth looks like a disk, and the air around it a thin ring. Any photon from that star hitting Earth's air at any point in that ring will get bent toward you. All those photons would miss you otherwise, but with Earth's air bending them you see lots of photons. That's exactly how a lens works.

I'll note that the header image of his Twitter account is a fanciful drawing of how this works.

This same effect also creates a phenomenon called a central flash, when an object with an atmosphere passes directly in front of a much more distant object and you get a sudden brightening of light when this eclipse (really, called an occultation) is at its midpoint. It's exactly the same effect I'm talking about here. It's been seen with Neptune's moon Triton blocking a distant star, as well as with Saturn's moon Titan doing the same thing.

So faint stars will appear much brighter because of the extra photons. The light amplification can be huge: For a one-meter telescope, it can be as much as a factor of 80,000! That's incredible. That's 10.5 magnitudes, for those who know their astronomy. A telescope like that could see objects down to Hubble-like faintness even better than Hubble.

It turns out there's a lot of physics involved here, which I've glossed over. For example, the air above the Earth gets less dense (and in general colder) with height, and that affects refraction. It depends on the wavelength of light you're looking at as well. And placing a one-meter telescope that far from Earth has other issues, too. The Moon will get close enough that you'll have to do a lot of station keeping every month, for one. Clouds in Earth's atmosphere block light, too, which is kind of a pain and greatly reduces the light you can see.

Also, you can only look at whatever is behind the Earth, so either you just look at stuff in the sky that happens to be there (and that changes all the time, repeating once a year as the Earth orbits the Sun) or you'll have to move your telescope a loooong way. Kipping goes into some of those problems in his paper.

He finds that a lot of these issues can be overcome simply by moving the telescope farther out. Put it about 1.5 million kilometers from the Earth, and the light that passes about 14 km above Earth's surface gets bent into focus. That avoids clouds, which is great, and most infrared light can pass through the air at that height, allowing that part of the spectrum to be observed as well.

At that distance, a one-meter telescope has the equivalent collecting power of a 150-meter telescope! Yegads. That's a lot of light, and the more light you collect from an object the better your data are. Which reminds me: You'll have to block the very bright Earth from your detector as well, but that's something we know how to do (using a coronagraph), and it's likely just an engineering problem.
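That equivalence is just a ratio of collecting areas, which scale with the square of the aperture diameter; a one-line check:

```python
def light_gain(effective_m, aperture_m=1.0):
    """Collecting-area ratio between an effective aperture and the real one
    (areas scale with diameter squared)."""
    return (effective_m / aperture_m) ** 2

# A 1 m telescope acting like a 150 m one gathers (150/1)^2 times the light:
print(light_gain(150))  # 22500.0
```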

Is a Terrascope actually possible? Well, even Kipping is skeptical in his video, and by that I don't mean cynical; I mean scientifically skeptical, being careful to acknowledge the plusses and minuses of the concept. He notes there's a long way to go to figure out all the problems here, but it's very much worth proposing this idea to the community and seeing what happens. And if it does work, it'll be the coolest telescope flying.


Astronomy: The Moon, the most romantic object in the sky – RTL Today

Earth's only natural satellite, the Moon, is by far the most romantic and inspiring celestial object in our Solar System. Since the beginning of time, adults and children alike have been lifting their eyes to the sky to gaze at this magnificent celestial body in awe and amazement.

For millennia, the Moon has sparked the imagination of writers and poets, including James Joyce ("What counsel has the hooded moon / Put in thy heart, my shyly sweet, / Of Love in ancient plenilune, / Glory and stars beneath his feet") and Shakespeare ("th'inconstant moon, That monthly changes in her circled orb"), among many others.

In music, the Moon has captivated the spirit of great artists like The Rolling Stones (Moon Is Up), Pink Floyd (The Dark Side of the Moon), The Beatles (Mr Moonlight) and The Police (Walking on the Moon). Neil Young, the author's favourite, sings a tribute to his wife, picturing the couple dancing under a Harvest Moon, the closest full Moon to the autumn equinox.

And what better representation than art? For centuries, The Moon has played a central role on the canvases of great painters such as Michelangelo (Creation of the Sun, Moon, and Plants), Edvard Munch (Moon Light), Vincent Van Gogh (White House at Night) and Henri Rousseau (La Encantadora de Serpientes).

But our natural satellite is not only a muse for artists and poets. At an average distance of 384,400 km (238,855 miles), the Moon has a strong influence on our planet. It moderates Earth's wobble on its axis, which stabilises the climate, making life possible and favouring agriculture. It also drives the tides, a rhythm that certain species of crabs, worms and fishes follow for their reproduction.

Sadly, the Moon is constantly drifting away from us at a rate of approximately 3.8 cm (1.5 in) each year, similar to the rate at which our fingernails grow. But have no fear: it will take billions of years before the effects become noticeable.

From Earth, we only ever see one face of the Moon. This phenomenon is called tidal locking: the Moon rotates on its own axis at the same rate that it orbits Earth, thus always showing us the same face. A complete orbit around Earth takes 27 days, but because Earth is also moving along its own orbit around the Sun, from our perspective the Moon appears to complete its cycle of phases every 29 days.
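The 27-day and 29-day figures are related by a standard formula: the synodic period (full moon to full moon) follows from the sidereal month and the length of the year. A quick check, using rounded textbook values:

```python
SIDEREAL_MONTH = 27.32   # days for one lunar orbit relative to the stars
YEAR = 365.25            # days for Earth's orbit around the Sun

# The synodic month is longer because Earth advances along its own orbit
# while the Moon circles us, so the Moon must "catch up" to realign with
# the Sun: 1/synodic = 1/sidereal - 1/year.
synodic = 1 / (1 / SIDEREAL_MONTH - 1 / YEAR)
print(f"{synodic:.2f} days")  # ~29.53
```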

The surface of the Moon is marked by long-inactive volcanoes, impact craters, and lava flows. The bright areas of the surface are called highlands, whereas the dark features are called maria (from the Latin mare: sea). The latter are impact basins that were filled with lava from the volcanoes between 4.2 and 1.5 billion years ago. Some of the maria are so marked that they are easily visible to the naked eye.

The surface temperature can reach peaks of approximately 130°C (266°F) when lit by the Sun, dropping to -170°C (-274°F) in darkness. Initial samples returned to Earth by the Apollo missions did not detect any signs of liquid water. In 2008, however, the Indian mission Chandrayaan-1 detected hydroxyl molecules spread across the lunar surface and concentrated at the poles.

Later missions went even further and proved that the surface holds high concentrations of water ice in the permanently shadowed regions of the lunar poles. Finally, in October 2020, NASA confirmed for the first time that it had found water on the sunlit surface of the Moon as well.

This discovery is of significant importance, as it makes the Moon a little more hospitable and simplifies life for NASA's scientists in view of the establishment of a stable colony on our satellite, and also for future missions such as the Artemis Program, which aims to land the next man and the first woman on the Moon by 2024.

Among other things, the mission will serve as an experiment before sending astronauts to Mars. As a great man once said: That's one small step for a man, one giant leap for mankind.


An astronomer’s animation shows how Earth and the moon both orbit a spot 3,000 miles from the true center of the planet – Yahoo News

The Deep Space Climate Observatory (DSCOVR) captured a view of the moon as it passed between the spacecraft and Earth. DSCOVR EPIC team

The moon orbits Earth - right? The answer is actually a little more complicated than that.

The moon is circling a point about 3,000 miles from our planet's center, just below its surface. Earth is wobbling around that point, too, making its own circles.

That spot is the Earth-moon system's center of mass, known as the barycenter. It's the point of an object (or system of them) at which it can be balanced perfectly, with the mass distributed evenly on all sides.

The Earth-moon barycenter doesn't line up exactly with our planet's center. Instead, it's "always just below Earth's surface," as James O'Donoghue, a planetary scientist at the Japanese space agency (JAXA), explained on Twitter.

It's hard to imagine what that looks like without seeing it for yourself. So O'Donoghue made an animation to demonstrate what's going on. It shows how Earth and the moon will move over the next three years.

The distance between Earth and the moon is not to scale in the animation, but O'Donoghue used NASA data, so the positions over time are accurate.

"You can pause the animation on the present date to figure out where the Earth and moon physically are right now," O'Donoghue said.

Every planetary system - including the star or planet that appears to be at the center - orbits an invisible point like this one. Our solar system's barycenter is sometimes inside the sun, sometimes outside of it. Barycenters can help astronomers find hidden planets circling other stars: A star's wobbling motion allows scientists to calculate mass they can't see in a given system.


O'Donoghue made a similar animation of Pluto and its moon, Charon. In this system, the barycenter is always outside of Pluto.

That's because Charon's mass is not that much smaller than Pluto's, so the system's mass is more evenly distributed than Earth and our moon.

Because the barycenter is outside of Pluto, O'Donoghue said, you could actually consider this to be a "double (dwarf-)planet system" rather than a dwarf planet and its moon.
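The barycenter positions described above follow from the standard two-body formula, r = d · m_small / (m_small + m_big). A sketch using approximate published values (mass ratios, separations, and radii are rounded):

```python
def barycenter_offset(separation_km, m_small, m_big):
    """Distance of the two-body barycenter from the larger body's center:
    r = d * m_small / (m_small + m_big)."""
    return separation_km * m_small / (m_small + m_big)

# Earth-Moon: mass ratio ~0.0123, separation 384,400 km, Earth radius 6,371 km.
em = barycenter_offset(384_400, 0.0123, 1.0)
print(f"Earth-Moon: {em:,.0f} km from Earth's center (inside the planet)")

# Pluto-Charon: mass ratio ~0.122, separation ~19,600 km, Pluto radius ~1,188 km.
pc = barycenter_offset(19_600, 0.122, 1.0)
print(f"Pluto-Charon: {pc:,.0f} km from Pluto's center (outside Pluto)")
```

The Earth-Moon result (~4,700 km, or about 2,900 miles) lands below Earth's surface, while the Pluto-Charon result (~2,100 km) exceeds Pluto's radius, matching the two cases in the article.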

In his free time, O'Donoghue has also made animations to explain why leap years are necessary, why you've probably never seen a model of the solar system to scale, and how incredibly slow the speed of light is.

Read the original article on Business Insider


Building Competitive Advantage With Automation – Broadband Communities

ISPs can use autonomous technologies to help networks grow with companies as their business needs and markets change.

Since the introduction of the first public switched telephone network, networks have continually evolved. Through the various stages of development, from fixed endpoints in the early days of the internet to today's broadband networks that connect mobile users to massive data centers and bandwidth behemoths such as Netflix, Amazon and Facebook, networks have adjusted to accommodate new demands. The once-static infrastructure is now undergoing a more profound transformation than ever before.

The latest incarnation is autonomous networks, a trend that has been building for some time. An autonomous network runs without much human intervention; it can configure, monitor and maintain itself independently. As business and residential subscribers' networks become more complex than ever, internet service providers (ISPs) can harness this evolution of networking by committing to providing networks that can grow with companies as their business needs and markets change.

In addressing the needs of tomorrow, ISPs must understand that today's world is hyper-connected, reliant on complex infrastructures and multicloud environments connected through a mesh of networks. But modern network demands create new challenges for ISPs. To advance, teams need to move away from manual efforts and start harnessing the power of artificial intelligence (AI) and machine learning (ML) to drive automation and self-healing networks.

To succeed with AI and ML, companies must have full network visibility. The networking community hungers for disruptive ideas to address the unsustainable economics of present-day networks. Today, operational complexity is increasing exponentially as traffic continues to explode and new devices proliferate. Meanwhile, rising operational costs and slower time to revenue squeeze margins for traditional service providers.

The answer to this problem is taking shape in the form of AI-driven networks, a new model that will eliminate operational complexity regardless of the type and volume of network traffic. ISPs must act quickly to incorporate disruptive technology that advances AI and ML concepts to transform static networks into dynamic, programmable environments that are predictive, proactive and automated.

The reality is that to be successful, companies cannot build a new future on old technologies. The days of closed, proprietary networks and vendor lock-in are over; the market demands new solutions that are open, intelligent, agile and secure. The investment companies make into any new technology also requires that they leverage their resources to quickly learn and understand the power of automating workflows. Fortunately, the ability for ISPs to automate and optimize operations on the fly and build sustainably within a standards-based approach is becoming the new norm, which is exactly what data-driven ISPs need today.

AI-driven networks will take the tedious job of data mining out of the equation, focusing on proactive problem resolution. As ISPs get into more complex territory, where people don't really understand all the correlations, AI can help draw those correlations in a fraction of the time it would take network operations teams. The future network will self-configure, monitor, manage, correct, defend and analyze with little human intervention, leaving more time for service providers to innovate their businesses.

Traffic spikes on today's networks can pose goliath challenges in determining the problem, which can range from a new video game release to widespread streaming of national events to distributed denial of service (DDoS) attacks. Luckily, ML algorithms are becoming more intelligent, interpreting vast amounts of network traffic behavior data to predict performance issues before subscribers are affected. The reality of networking software today is that ISPs need tools that intelligently analyze and adapt, providing immediate security during DDoS attacks and increased bandwidth to support traffic surges.

AI-driven networks powered by ML algorithms will be the end state of a progressive journey beginning with data collection and visualization, leading to automated event correlation and programmability and allowing networks to run autonomously.

The ability for ISPs to leverage their software investments to automatically intervene and correct issues that they identify before they become noticeable to subscribers will be key to addressing the digital customer experience revolution.

Advancements in automation and AI technologies often invoke fears of job displacement. Conversely, the introduction of AI will free network staff from repetitive manual tasks, meaning customer support personnel will spend less time troubleshooting performance issues and running networks and more time working strategically and developing innovation that secures businesses and drives them forward.

As the internet of things (IoT) gathers steam, these emerging software tools will be in high demand to make sense of the deluge of incoming data. For ISPs and technology vendors, it will be imperative to implement ML algorithms that filter out the normal and allow service providers to focus on the anomalous, the unexpected and the dangerous.
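The "filter out the normal" idea can be illustrated with a deliberately simple baseline: flag traffic samples that sit far above a trailing-window average. This is a hypothetical sketch, not any vendor's product or a production DDoS detector:

```python
from statistics import mean, stdev

def flag_anomalies(traffic, window=5, threshold=3.0):
    """Flag samples more than `threshold` standard deviations above the
    trailing-window mean -- a minimal anomaly filter that passes over
    'normal' variation and surfaces only the unexpected."""
    flagged = []
    for i in range(window, len(traffic)):
        history = traffic[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (traffic[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady traffic with one sudden surge (e.g. a DDoS burst or a game launch):
samples = [100, 102, 98, 101, 99, 100, 103, 97, 500, 101]
print(flag_anomalies(samples))  # [8] -- only the spike is surfaced
```

Real systems replace the rolling z-score with learned models of seasonal and per-subscriber behavior, but the operational goal is the same: suppress the expected so operators see only the anomalous.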

Companies considering a vendor with AI and ML claims should be sure to investigate the company's longevity in the market, along with its strategic software partners, to evaluate whether a long-term customer relationship offers an easy ability to scale over time. Those that make the early investment in AI capabilities need to understand that in the not-so-distant future, applications and hardware technology will become less artificial and more intelligent. When selecting a partner to define a future network, companies should consider three critical components:

Where ML shows its real value is in the ability to rely less on vast amounts of data and more on top-down reasoning that more closely mimics how humans approach problems and tasks. ML products will have more efficient reasoning, ready expertise and common sense.

As technology vendors continue to substantially invest in AI and ML development to set the foundation for autonomous networks, the feedback from early adopters will form the basis of AI-driven tools for the next five years. Transforming networks into dynamic, programmable environments that are predictive, proactive and automated will be key for service providers of the future.


What Do We Mean by Automation? – Forbes

BROOKLINE - DECEMBER 14: There are numerous self-checkout aisles available at the Brookline Stop and Shop. (Photo by John Tlumacki/The Boston Globe via Getty Images)

I suspect I'm not alone in having less-than-satisfactory experiences with self-checkout machines at grocery stores. Many of us have dealt with the frustration of the machine blaring at us to remove a mysterious item from the bagging area, only to then wait for a human employee to assist us anyway. I'm not just put off by the technology as a customer but also as a professor who specializes in the field of automation, and for one particular reason: even though self-checkouts are labeled as automation, they're actually not.

According to Merriam-Webster, automation is the automatically controlled operation of an apparatus, process, or system by mechanical or electronic devices that take the place of human labor. Increasingly, there is a tendency to label any new technology as such, especially if it appears to do the job of a human, but perceptions can be deceiving. In the case of self-checkout machines, a robot hasn't replaced the cashier's job of scanning or bagging groceries; a human is still performing those tasks, but it's now the customer rather than the cashier. In other words, nothing has actually been automated. Self-checkouts have not only degraded the customer experience, making it more inconvenient and less enjoyable, but also displaced jobs, many of which are incorrectly blamed on automation.

In order to improve public understanding, as well as guide how we utilize automated technologies moving forward, engineers and other stakeholders need to consider the following. First, we must find ways to better communicate what automated technologies are and what they aren't, and highlight how they improve people's lives. Secondly, it is incumbent on us to determine why we are designing these technologies. Is the goal to improve quality of life and work? Or human productivity? Both, or neither? If automation is going to continue permeating society, it's crucial that we, both those creating it and those affected by it, have a fuller, more thoughtful approach to how we think and talk about automation.

So-so automation

While our lives may seem more automated, which is cause for alarm for some, the reality is that we've been living with automation for decades without fully appreciating it. For instance, an appliance many people use daily provides an understated example: your oven. You set the temperature you want to cook at, and the oven maintains that temperature on its own. It's a simple directive, with the oven performing its function without user involvement.
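The oven's behaviour can be sketched as a simple on/off (bang-bang) control loop. This is an illustrative sketch only, not any manufacturer's firmware; the function name, temperatures and hysteresis band are made up for the example.

```python
def thermostat_step(current_temp, setpoint, heater_on, hysteresis=5.0):
    """One control step of a bang-bang oven thermostat (illustrative).

    Turns the heating element on when the temperature falls below the
    setpoint minus a hysteresis band, and off once the setpoint is
    reached. The user only supplies the setpoint; the loop then runs
    without further involvement.
    """
    if current_temp < setpoint - hysteresis:
        return True   # too cold: heat
    if current_temp >= setpoint:
        return False  # hot enough: idle
    return heater_on  # inside the band: keep the current state
```

Called once per sensing interval, a loop like this holds the temperature near the setpoint with no user involvement beyond the initial directive, which is what makes the oven genuinely automated.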

But other forms of automation aren't always as helpful or productive. A recent New York Times article about changes to corporate workplaces due to COVID-19 addressed "so-so automation": technology that is "just barely good enough to replace human workers, but not good enough to create new jobs or make companies significantly more productive," according to the article. At best, it's a neutral addition and at worst, a negative one; the output is the same, and little to nothing new comes out of it. The article cites self-checkouts as an example, but also notes that white-collar jobs are increasingly at risk, and at a faster pace than ever before. If automation simply displaces humans, rather than improving what they can do, it's no surprise that it will be met with hesitation, if not opposition.

In other cases, true automation is deployed without full consideration of its impact on the people it touches. The New York Police Department recently began using a dog-like robot called Digidog during patrols. On one hand, Digidog can help police surveil dangerous situations and cover more ground. But Jay Stanley, a senior policy analyst with the American Civil Liberties Union, raised concerns to the Times about bias, surveillance and privacy over using a robot for this type of work. Technology like Digidog can have an adverse effect on the communities it is used in, particularly if those communities are not educated about the technology and its capabilities and limits. This can further alienate people from automation.

The case for automation

I conduct research and teach courses in the field of control systems, which are at the heart of automated technologies, and I was drawn to this field because of the positive impact it has had, and can continue to have, on society. Nonetheless, I understand why automation is resisted by those who are most negatively affected by it, whether in the workplace or in their daily lives.

The solution is to move away from so-so automation and toward innovation that actually moves things forward for people, workers, businesses, and society more holistically. There are already plenty of examples of automated technologies that we engage with regularly, such as autofill text on mobile devices, robot vacuums, smart thermostats, and adaptive cruise control in passenger vehicles, with plenty more on the way. We intrinsically understand how all of these types of automation make our lives easier, just as we fundamentally recognize when a technology does not.

If both users of automation and its creators have a clear-eyed view of what the technology is and how they can use it, we will be able to leverage it more effectively. This may require improving how we educate students, and the general public, about technology and its impact on humanity. Automation is sure to have a significant impact on our lives, so let's make sure it's a positive one.

The rest is here:

What Do We Mean by Automation? - Forbes

NG automation to catapult productivity and profitability with edge computing – FutureIoT

You may associate automation more with manufacturing in Western countries, but automation also has a significant history in Asia-Pacific.

From the Japanese car factories of the 1980s to today's advanced manufacturing floors using imaging and AI for quality control, the region has seen some of the most impressive gains from leveraging technology.

The difference this time is that the automation that is transforming manufacturing will go hand in hand with more intelligence gained from data sensing and analytics.

A tire manufacturer, for example, may be able to detect mere millimetres of error in a product and have that data fed back into the system to make constant near-real-time adjustments.

Not only is this more cost-efficient, saving staff and materials costs, but the system also delivers a whole new level of quality control. Quality control used to be a checking process performed after the goods were manufactured. With this next-generation automation, quality control is built right into the process while the goods are being manufactured.

With constant refinement, the manufacturer may even be able to make new products that were not possible without this feedback loop continually driving improvement.
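The feedback loop described above can be sketched as a small proportional correction: a thickness measurement comes back from a sensor and nudges a machine setting toward the target. All names, units and gain values here are hypothetical, purely to illustrate in-process control.

```python
def adjust_parameter(setting, target_mm, measured_mm, gain=0.5, tolerance=0.05):
    """Feed a thickness measurement back into a machine setting.

    If the measured thickness is outside the tolerance band (in mm),
    nudge the setting proportionally toward the target; otherwise
    leave it unchanged. A hypothetical stand-in for the near-real-time
    adjustments described in the article.
    """
    error = target_mm - measured_mm
    if abs(error) <= tolerance:
        return setting                 # within spec: no change
    return setting + gain * error      # proportional correction
```

Run every production cycle, a correction like this is what turns after-the-fact inspection into quality control built into the process itself.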

In 2021, we can expect this trend to grow steadily. With Industry 4.0 on the agenda, industry leaders across different verticals are fast-tracking their transformation efforts with foundational technologies.

Among those surveyed by McKinsey in 2020, 39% have implemented a nerve-centre, or control-tower, approach to increase end-to-end supply-chain transparency. Around a quarter are fast-tracking automation programs to stem worker shortages arising from Covid-19.

To get there, of course, you need to have the right tools. This is where edge computing will play an increasingly important part in the years ahead.

In the tire manufacturer example, what is needed is fast analysis of the data that is constantly being produced by the sensors inside the tire-making machine.

For this to be analysed on the spot, a round trip to a data centre at a centralised location may involve too much latency. That's not to mention the quality of broadband connections, which may vary greatly in different parts of a country.

The data eventually has to be stored in a data centre, but the important analysis that is carried out in the field has to be accurate and timely. For that, you need adequate computing power at the edge to digest the data and to make parameter changes in real-time for optimal production.
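A rough back-of-envelope latency budget illustrates why the round trip matters. The numbers below (per-kilometre propagation in fibre, per-hop switching delay, processing time) are illustrative assumptions, not measurements of any real network.

```python
def round_trip_ms(distance_km, per_hop_ms=0.5, hops=1, processing_ms=2.0):
    """Rough round-trip latency estimate (illustrative assumptions).

    Fibre propagation is roughly 5 microseconds per km each way
    (~0.005 ms/km, doubled for the return leg), plus per-hop
    switching delay in each direction and processing time.
    """
    propagation = 2 * distance_km * 0.005
    return propagation + 2 * hops * per_hop_ms + processing_ms

# An on-premises edge node a few hundred metres away vs. a data
# centre 500 km away (hop counts and delays are assumptions):
edge = round_trip_ms(0.3, hops=1)
cloud = round_trip_ms(500, hops=8)
```

Even with generous assumptions, the centralised round trip is several times slower, and that is before accounting for variable broadband quality; for a machine making adjustments every cycle, that gap decides where the analysis has to run.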

Indeed, there are many other ways in which the edge will make a difference. Besides running data analysis, it could be used to orchestrate and operate complex machines remotely, a scenario that the pandemic has forced on many manufacturers. The ability to operate remotely has tremendous value and companies are allocating more budget to make edge orchestration a corporate priority.

Edge computing resources could also help drive the adoption of AI on the manufacturing floor.

While a simple sensor or camera can give you the raw image data, what is needed is a compute unit right next to the sensor, or on the edge, to analyse that. It also has to complete this task quickly because there could be hundreds or thousands of devices to be checked in a short period of time.

Let's not forget automated guided vehicles (AGVs), either. While each of these smart vehicles can navigate its way around a warehouse with its own sensors and onboard processors, they still need to relay information, say on stock levels, to human operators.

You still need a capable compute unit located near to the action to make sense of the data from these AGVs and present a coherent picture of what is happening on the ground. Again, this is where the edge has an advantage relative to the cloud.

Not every edge computing platform will do, of course. What is needed is a setup that not only brings the compute performance but also the robustness to work in a tough environment.

Another quality to look for in an edge computing device is ease of maintenance. Are the units easy to maintain, say, by operators who are not IT-savvy?

After all, with factories often distributed across a country, it might take an IT team hours or even days to get to a site to fix a simple maintenance issue.

Security is of utmost importance as well. Any edge computing unit that is connected in the field has to have security baked in from the start, not added on as an afterthought. It is essential to have a host-based firewall that allows users to blacklist or whitelist specific IP addresses, domain names, protocols, or ports. In addition, all data should be sent through secure, encrypted channels.

Like many other technologies that came to the forefront during the pandemic, edge computing has seen an acceleration in terms of adoption.

This is the foundation that many businesses will build on as they boost their automation efforts in the years ahead. The good news for those that have invested early is that they will be more ready for the recovery, better prepared to scale up when demand returns, and able to take more market share from the competition.

Here is the original post:

NG automation to catapult productivity and profitability with edge computing - FutureIoT

How automating and streamlining can provide a treasure trove of data – Smart Business Network

If your business is still doing financial reporting the traditional way, manually entering data and reconciling reports each month, you're likely missing out on a tremendous opportunity to use that data to move your company forward.

"Traditional reporting is time-consuming, mistake-prone, produces numbers that can be difficult to understand and reports numbers after the fact," says Matt Long, Client Advisory Services principal at Rea & Associates. "Automating and streamlining reporting provides real-time, actionable data that is easy to understand and can be customized depending on what is important to your business. You're already collecting the data. So why not do it more effectively?"

Smart Business spoke with Long about how automating your financial reporting can save time and money, and move you ahead of the competition.

How is the automation of financial reporting changing the pace of business?

In the past, businesses waited until the end of the month, made sure all activities had been posted, then spent several days reconciling. It was an exercise that reported what happened the previous month, with no end game in mind beyond having the data for audit or tax purposes. There was no direction as to why you were doing it, what you were getting out of it or what you could take away from it. Now, there has been a monumental shift as companies realize they need live information, not static reports at month's end.

How has automation moved data beyond the accounting department?

Traditionally, the accounting department manually input information, and no one else understood or cared about the numbers. Automation takes the data out of accounting and allows people in all departments to look beyond the traditional profit and loss statements to look at dashboards and graphs. And you dont need to be an accountant to interpret the data.

The data itself is not valuable; it's how you're using it. Departments can access the numbers they need every day. Every industry has its own metrics to consider, whether it's a manufacturer looking at how many hours a machine is running and what that's costing, a restaurant looking at food costs, or a dentist looking at the numbers at a procedural level.
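As a hypothetical sketch of the idea, the same transaction records an accountant would reconcile at month's end can be rolled up into a live metric, here cost per machine-hour for the manufacturer example. The record schema is invented purely for illustration.

```python
from collections import defaultdict

def machine_hour_cost(transactions):
    """Roll raw cost transactions up into cost per machine-hour.

    Each transaction is a dict with 'machine', 'cost' and 'hours'
    keys -- a hypothetical schema. Recomputed on every new record,
    this becomes a live dashboard number rather than a month-end one.
    """
    totals = defaultdict(lambda: {'cost': 0.0, 'hours': 0.0})
    for t in transactions:
        totals[t['machine']]['cost'] += t['cost']
        totals[t['machine']]['hours'] += t['hours']
    return {m: v['cost'] / v['hours']
            for m, v in totals.items() if v['hours'] > 0}
```

The point is not the arithmetic but the timing: because the rollup runs continuously against incoming records, anyone in any department can read the number today rather than waiting for a reconciled report.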

This is the future, with more automation and fewer human hours going into compiling data. The days of having a full-time accounting department manually entering data are over. And the pandemic has accelerated that trend, forcing businesses to prioritize what is important, opening them up to new opportunities to evaluate what is being done and how its being done.

How can a business start the automation process?

Evaluate how you are currently doing things. An outside adviser can help you take a step back and get out of the weeds. What is happening day to day, and how does that flow into the transactional level, the accounting system?

Look at your processes and procedures, your systems, where you can find areas to improve and where there is better technology available. Talk through logistics and timing and how you can make an impactful change. And while some companies want to change very quickly, that can be a mistake. Be methodical and intentional. Do it the right way to get systems in place that will last and that you can build on as the business grows.

It's very important to spend the time up front and do it right, or you'll constantly be chasing your tail and playing catch-up. It's not a one-time thing; you don't go through the transition to automation and then forget it. There's room for improvement in any system, some of it incremental and some very impactful. The landscape is changing so quickly, almost daily, that you need to constantly re-evaluate.

And if you're not looking at how to do things more effectively, your competition is. You may have the edge now because of the quality of your products and services, but eventually someone more strategic is going to catch up, make more impactful decisions, and pass you by.

INSIGHTS Accounting is brought to you by Rea & Associates.

Visit link:

How automating and streamlining can provide a treasure trove of data - Smart Business Network

Addressing great need for automation of composites – Advanced Manufacturing

The aerospace industry is heading toward a massive surge in demand for industrial capacity to produce parts in the near future. Pre-COVID, Boeing and Airbus estimated that 40,000 commercial aircraft will be required over the next 20 years (compared with about 25,000 in service today). The U.S. military is also researching concepts of swarms of unmanned aircraft with limited life and very low cost. Finally, the first air taxis are projected to be in service in 2023. All of these concepts will require significant use of composites to meet their range and speed requirements, which will stress today's industrial base.

Today, the composites industrial base is predicated on hand layup or current automated layup machines that achieve only 20-50 percent machine utilization. It is also still more analog than digital. Companies are dabbling in digital to solve specific pain points, such as asset tracking, but few manufacturers have a true enterprise-wide Industry 4.0 environment.

A state change must happen for the aerospace industry to achieve production rates for future aircraft. First, automation of composites must be a major component to enable the coming production wave. Done right, automation will alleviate the need for significant capital investment and lessen the number of new skilled workers for manufacturing aircraft. Second, the composites community needs to embrace Industry 4.0 concepts to get actionable insights from data generated during fabrication and assembly. Key themes for future research to make automation and Industry 4.0 available for aerospace composites production include:

Potential solutions: Integrate AI/machine learning, automation, data, analytics, manufacturing and products. Use an integrated, computer-based system comprised of simulation, 3D visualization, analytics and collaboration tools to develop a virtual representation of the entire manufacturing process.

Potential solution: Move from inspection to measurement. This requires manufacturing simulation and in-process measurement, as well as in-service structural simulation and measurement, at a level good enough to satisfy regulatory requirements. Use Industry 4.0 tools to provide an understanding of the state of the part or assembly, not just to track it.

SME has launched a Technical Community on Composites Automation to address these needs. I am part of the technical community, and I welcome input for the discussions we are beginning.

Link:

Addressing great need for automation of composites - Advanced Manufacturing

NASAs small business picks take on automation in space – TechCrunch

NASA's SBIR program regularly doles out cash to promising small businesses and research programs, and the list of awardees is always interesting to sift through. Here are a dozen companies and proposals from this batch that are especially compelling or suggest new directions for missions and industry in space.

Sadly, these brief descriptions are often all that is available. These things are often so early stage that there's nothing to show but some equations and a drawing on the back of a napkin, but NASA knows promising work when it sees it. (You can learn more about how to apply for SBIR grants here.)

Autonomous deorbiting system

Martian Sky Technologies wins the backronym award with Decluttering of Earth Orbit to Repurpose for Bespoke Innovative Technologies, or DEORBIT, an effort to create an autonomous clutter-removal system for low Earth orbit. It is intended to monitor a given volume and remove any intruding items, clearing the area for construction or occupation by another craft.


Ultrasonic additive manufacturing

There are lots of proposals for various forms of 3D printing, welding, and other things important to the emerging field of on-orbit servicing, assembly, and manufacturing, or OSAM. One I found interesting uses ultrasonics, which is odd at first glance because, in space, there's no atmosphere for ultrasound to travel through (I'm going to guess they thought of that). But this kind of counterintuitive approach could lead to a truly new technique.

Robots watch each others backs

Doing OSAM work will likely involve coordinating multiple robotic platforms, something that's hard enough on Earth. TRAClabs is looking into a way to enhance perceptual feedback and decrease the cognitive load on operators by autonomously moving robots not in use to positions where they can provide useful viewpoints of the others. It's a simple idea and fits with the way humans tend to work: if you're not the person doing the actual task, you automatically move out of the way and into a good position to see what's happening.

3D printed Hall effect thrusters

Hall effect thrusters are a highly efficient form of electric propulsion that could be very useful for certain types of in-space maneuvering. But they're not particularly powerful, and it seems that existing manufacturing techniques will not suffice to build larger ones. Elementum 3D aims to accomplish this by developing a new additive manufacturing technique and a cobalt-iron feedstock that should let them make these thrusters as big as they want.

Venusian batteries

Venus is a fascinating place, but its surface is extremely hostile to machines as they're built here on Earth. Even hardened Mars rovers like Perseverance would succumb in minutes, or even seconds, in the 800°F heat. Among the many ways they would fail: the batteries they use would overheat and possibly explode. TalosTech and the University of Delaware are looking into an unusual type of battery that would operate at high temperatures by using atmospheric CO2 as a reactant.

Neuromorphic low-SWaP radio

When you're going to space, every gram and cubic centimeter counts, and once you're out there, every milliwatt does as well. That's why there's always a push to switch legacy systems to low size, weight, and power (low-SWaP) alternatives. Intellisense is taking on part of the radio stack, using neuromorphic (i.e., brain-like, but not in a sci-fi way) computing to simplify and shrink the part that sorts and directs incoming signals. Every gram saved is one more that spacecraft designers can put to work elsewhere, and they may get some performance gains as well.

Making space safer with lidar

Astrobotic is becoming a common name in NASA's next few years of interplanetary missions, and its research division is looking at ways to make both spacecraft and surface vehicles like rovers smarter and safer using lidar. One proposal is a lidar system narrowly focused on imaging single small objects in a sparse scene (e.g. scanning one satellite from another against the vastness of space) for the purposes of assessment and repair. The second involves a deep learning technique applied to both lidar and traditional imagery to identify obstacles on a planet's surface. That team is currently also working on the VIPER water-hunting rover aiming for a 2023 lunar landing.

Monitoring space farms

Bloomfield does automated monitoring of agriculture, but growing plants in orbit or on the surface of Mars is a little different than here on Earth. It's hoping to expand to Controlled Environment Agriculture, which is to say the little experimental farms we've used to see how plants grow under unusual conditions like microgravity. It plans to use multi-spectral imaging, and deep learning analysis thereof, to monitor the state of plants constantly so astronauts don't have to write "leaf 25 got bigger" in a notebook every day.
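One common multi-spectral measure of plant health is the Normalised Difference Vegetation Index (NDVI), computed per pixel from red and near-infrared reflectance. Bloomfield's actual pipeline is not public, so this is a generic sketch of the kind of signal such monitoring can extract.

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index for one pixel.

    NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1. Healthy
    vegetation reflects strongly in near-infrared and absorbs red,
    so higher values suggest healthier leaves. A generic index, not
    any company's proprietary method.
    """
    if nir + red == 0:
        return 0.0  # avoid division by zero on dark pixels
    return (nir - red) / (nir + red)
```

Tracked per pixel over time, an index like this lets software flag a declining leaf automatically, which is exactly the kind of bookkeeping the article says astronauts should not be doing by hand.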

Regolith bricks

The Artemis program is all about going to the Moon to stay, but we haven't quite figured out that last part. Researchers are looking into how to refuel and launch rockets from the lunar surface without bringing everything involved with them, and Exploration Architecture aims to take on a small piece of that: building a lunar launchpad literally brick by brick. It proposes an integrated system that takes lunar dust, or regolith, melts it down, then bakes it into bricks to be placed wherever needed. It's either that or bring bricks from Earth, and I can tell you that's not a good option.

Several other companies and research agencies proposed regolith-related construction and handling as well. It was one of a handful of themes, some of which are a little too in the weeds to go into.

Another theme was technologies for exploring ice worlds like Europa. Sort of like the opposite of Venus, an ice planet will be lethal to ordinary rovers in many ways and the conditions necessitate different approaches for power, sensing, and traversal.

NASA isn't immune to the new trend of swarms, be they satellites or aircraft. Managing these swarms takes a lot of doing, and if they're to act as a single distributed machine (which is the general idea), they need a robust computing architecture behind them. Numerous companies are looking into ways to accomplish this.

You can see the rest of NASA's latest SBIR grants, and the technology transfer program selections too, at the dedicated site here. And if you're curious how to get some of that federal cash yourself, read on below.

See the original post here:

NASAs small business picks take on automation in space - TechCrunch

Cockpit automation leading to airline industry complacency – Airline Ratings

Cockpit automation is leading to airline industry complacency, warns a new study from the Royal Aeronautical Society Flying Operations Group, which says many crashes would never have happened if pilots had been capable of basic piloting skills and standards had been higher.

As the 12th anniversary of AF447 looms, the chorus of concern over the lack of progress in addressing the degradation of flying standards has now been joined by the highly respected RAeS flying group, a body hardly noted for exaggeration, with a study that calls for skill-based training, an urgent raising of regulatory standards and oversight, and the application of uniform standards of compliance. They are not alone in this view.

In 2009, just before the loss of AF447, University of Southern California engineering professor and aviation safety expert Najmedin Meshkati suggested that the aviation industry and its regulators had become star-struck by technological solutions. He surmised that this was due to the fatality-free record at that time of aircraft such as the A330 and 777 in commercial service.


"We have become complacent by thinking that technology will solve all the problems," he said just four days before AF447 plunged into the mid-Atlantic.

Meshkati warned at the time that he believed that all the wizardry of modern technology was masking a deterioration and de-skilling in basic flying ability and that the lessons learned by generations of pilots may be lost to the new breed of pilots.

Fast forward 12 years to 2021, and Captains John Leahy, Robert Scott, and Alex Fisher, assisted by other experienced members of the RAeS Flight Operations Group, are issuing a similar blunt warning in a new paper. They emphasize that now, given the "COVID pause" of much-reduced aviation activity, is the moment for decisive industry action.

Their study, "Airline Pilot Training: It is time to revisit the basics," warns that many recent airline accidents, whether fatal crashes or devastating hull losses without fatalities, have shown clear evidence of a common cause. That common cause is the inability of the pilots, in far too many cases, to cope with the situation they faced. Sometimes it was when the automatic systems failed, requiring them to fly manually. In others, they were trying to deal with what should have been a relatively benign situation and they simply did not cope.

It adds that pilot training is currently a combination of learning basic handling skills and the ability to manage complex automated systems but with an ever-increasing emphasis on the latter. So, although management of automation has improved, much less time is now spent on developing and maintaining the basic skills that are so necessary when automation fails or causes confusion.

The problem is not new

Tragically, the problem is not new and reflects the findings of a comprehensive 1995 NASA Research Centre study, based on Royal Air Force Institute of Aviation Medicine data, by Dr. Marianne Rudisill, who surveyed more than 1,000 pilots from 20 airlines and aircraft manufacturers about pilots' attitudes and experience with flight deck automation.

The strength of that study was that most respondents had flown aircraft ranging from basic cockpit types like the 727 through to Glass 2 types such as the A320 and 747-400. It found the general consensus was that safety is increased with automation, but automation may lend a false sense of security, particularly with inexperienced pilots.

Very concerning was that pilots reported a higher sense of insecurity during an automation failure and a general temptation to ignore raw information and follow the green/magenta line. The most worrying aspect was that pilots said their colleagues were becoming complacent and relying too much on automation, but that was often because airline SOPs mandated reliance on automation.

And that aspect has not changed since, according to a 2019 International Air Transport Association pilot survey, which found that only 36 per cent of respondents said that their airline policy supported manual flying without restrictions.

In the same survey, 92 per cent of respondents said they believed that training should put more emphasis on the unexpected transition from automatic flight to manual flying and vice versa.

In its 2019 Aircraft Handling and Manual Flying Skills report, IATA said that continuous use of automation does not strengthen pilots' knowledge and skills in manual flight operations and in fact could lead to degradation of a pilot's ability to quickly recover the aircraft from an undesired state.

It added that poor manual techniques are flagged by a number of accident analyses that cite inappropriate or erroneous control inputs by the flight crew in response to abnormal events. Although the overall Loss of Control in Flight (LOC-I) accident rate has decreased, this accident category continues to outpace other factors as the leading cause of fatal accidents. A number of these accidents may have had a different outcome if the pilots had shown a higher level of monitoring and manual flying skills. Poor manual techniques may also lead to other events such as hard landings, unstable approaches, runway excursions, and others.

A tragic example is the loss of the Atlas Air Boeing 767-300 at Trinity Bay, Texas, in February 2019, where the NTSB found that the probable cause of the accident was the inappropriate response by the first officer, as the pilot flying, to inadvertent activation of the go-around mode, which led to his spatial disorientation and nose-down control inputs that placed the airplane in a steep descent from which the crew did not recover.

It added that contributing to the accident was the captain's failure to adequately monitor the airplane's flight path and assume positive control of the airplane to effectively intervene. Also contributing were systemic deficiencies in the aviation industry's selection and performance measurement practices, which failed to address the first officer's aptitude-related deficiencies and maladaptive stress response.

The RAeS paper expresses concern that pilot training has changed greatly, particularly in the last two decades. In a recent interview with AirlineRatings.com, its authors said pilots need to be trained to fly on instruments (IF skills) once again, in all conditions of weather, g forces and distracting illusionary factors, and to have complete confidence in their ability to do this. That skill deficit is the one most in need of urgent reform.

The RAeS paper states that pilot training is now shorter in duration, with less flight time on real aircraft, less exposure to the resultant stress of actual flight and much of it, computer-based. Full Flight (Level 4) simulator time is reduced in many cases to the minimum required to satisfy regulatory requirements.

That concern is supported by George Snyder, president and CEO of GHS Aviation Group, LLC, who is troubled by the dichotomy between OEMs' suggested automation use and the need for operators to ensure that flight crews are equally proficient in both manual flying and use of automation.

Mr. Snyder, former chief pilot at US Air and the man responsible for turning Korean Air's flight standards around just over 20 years ago, adds: "I am now seeing it show up in the General Aviation sector, most notably in flight schools. I brought out my E6B (circular computer) in front of some primary students recently and it was like they were looking at a fossil. I understand the need for enhanced use of automation and technology, but a balance remains a critical flight safety issue."

Of particular concern, says the RAeS paper, is the high percentage of accidents that fall into the LOC-I category. Often this has nothing to do with extraordinary and demanding circumstances but instead with the pilots' failure to cope with the most fundamental activity: using the aircraft flight controls to manage the flight path when the automation fails them. Although the actual loss of control was sometimes triggered by external factors such as adverse weather, it was often compounded by human factors such as poor decision making, technology mode confusion, and inadequate communications between the pilots. In most cases, LOC-I resulted in a non-survivable accident.

The fact is LOC-I has been the number one cause of fatalities in aircraft accidents for many years, yet the training required to address this most fundamental pilot activity is obviously not being carried out to the extent that it should. A strong safety culture with an emphasis on pilot performance excellence must be supported at the highest level in any airline, whatever the perceived financial costs.

One reason, says Captain Fisher of the RAeS group, may be that accident reports rarely get to the fundamental problem of the pilots' inability to fly on instruments. Instead, they ascribe the loss of control to sensory illusions and g forces. But overcoming such illusions is the number one task of any instrument-rated pilot; blaming LOC-I on sensory illusion is like blaming the crash on gravity.

The RAeS paper asks: "How is the [safety] risk evaluated and thus the amount of training which is considered appropriate? It would appear that, in some airlines, senior management equates the industry's low accident rate to low risk, and thus any training that extends beyond satisfying regulatory requirements and the recommendations of the airplane manufacturer is an unnecessary expense. This completely overlooks the fact that this decision may then restrict the knowledge and skills of the pilots concerned to a narrow range and make it extremely difficult for them to cope with unforeseen events."

And the ability to handle unforeseen events was front and center for the crew of Qantas flight QF32 led by Capt. Richard de Crespigny, which suffered a catastrophic failure of its number 2 engine shortly after take-off from Singapore in 2010.

The crew had to nurse their crippled A380 for two hours while dealing with multiple system failures and sometimes confusing information before landing back safely at Singapore.

The Australian Transport Safety Bureau, which conducted the accident investigation, noted that the well-trained crew had saved the A380.

Discussing that event and wider training issues with the author in a 2011 interview, Captain Dave Evans, who was checking another check captain on that flight, noted that some young prospective pilots lacked basic skills because they had learned to fly using Flight Simulator without ever using rudder pedals, so their flying skills were degraded once they got into the real world.

"Flight Simulator with a single screen also narrowed their perspectives, and they lacked peripheral skills," Capt. Evans noted.

Those disturbing observations resonated with Captain Robert Sumwalt III, former chairman of the Human Factors and Training Group of the Air Line Pilots Association International and now chairman of the NTSB, who co-authored a 2002 paper, "Enhancing Flight-crew Monitoring Skills Can Increase Flight Safety," which found that effective crew monitoring and cross-checking can literally be the last line of defense.

At the time, Sumwalt cited the NTSB's examination of 37 accidents, which found that 84 per cent involved inadequate monitoring or challenging of fellow cockpit crew.

"The NTSB has found that lack of monitoring of instruments is still a major factor in accidents," Sumwalt said.

An example of this was the loss of a Pakistan International Airlines A320, and 97 souls, at Karachi in May 2020, where the preliminary report found multiple failures by the crew.

A Wide Disparity in Standards

Another area of concern in the RAeS paper is that, on the one hand, many airlines, such as Qantas, Cathay Pacific Airways and British Airways, have an excellent safety record acquired over many decades, while on the other, far too many do not.

It is the view of the group that this is not just a matter of luck. It is abundantly clear from our many years of working in the industry that some airlines set a high standard that far exceeds the minimum required by the regulations. This may include such measures as only recruiting superior candidates from the best training colleges, and training them to the highest standards, not only at the entry stage but on an ongoing basis during their time with the airline.

That claim is backed up by a former Airbus check and training captain, who told Airlineratings.com that top-line airlines such as Cathay Pacific and Singapore Airlines would often double the required training hours for their pilots, whereas other airlines would not even pay for basic Airbus training because it was too expensive.

The situation is complicated, says the RAeS paper, by the fact that there are different training methods in use globally and there isn't universal agreement on which are better than others. In some cases, culture influences the decision on which to use; in others, the decision can be financial or even based on the recommendations of regulators or OEMs. A wide variety of modern advanced training tools and techniques (such as AQP, CBTA, MPL and EBT) are popular in the industry but have gained different traction in different operating environments.

The authors of the RAeS paper are concerned about the wide disparity in safety standards among airlines, and among their pilots, together with a similar disparity among regulators and in regulatory oversight. Alignment of training methodology on a global scale would lead to better exchanges of ideas, better understanding of each other's operational challenges, and an overall increase in operating standards.

It warns that, as the accident rate is relatively low compared to the past, it could be assumed that all is well in the industry. However, it is not the number of accidents that is the concern; it is their cause and severity. The lack of essential skills and handling ability that has led to many recent accidents is evidence of the low proficiency of some of the pilots currently flying public transport aircraft.

A major factor is poor training. Many pilots are rushed through inadequate ab initio training courses, with the emphasis on getting them through quickly and as inexpensively as possible. Thus, they are poorly prepared for the challenges of converting onto larger, more complex aircraft in an airline setting, a conversion carried out in accordance with a significantly reduced training model.

This environment does not encourage the development and maintenance of the skills and knowledge that are essential to safely and competently operating today's complex aircraft, something that is borne out by the accident reports.

This issue is highlighted in IATA's 2020 safety report, with Captain Rubén Morales, chair of the IATA Accident Classification Technical Group, warning that "when we look at the contributing factors present in 2020 accidents, manual handling is at the top of the contributing factors associated with flight crew errors. Other areas of concern are deficient safety management systems, regulatory oversight, and selection systems, all of them latent conditions present in the system before the accident happened. These latent conditions have been present consistently year after year, highlighting the need for improvement in these areas."

The IATA safety report also states that, for effective safety leadership in aviation, airline executives should set a leadership mindset that enables safety-focused behaviors and embeds a positive organizational safety culture. Applied globally, this should be supported by clearly defined safety accountabilities to enable an effective safety culture to exist within each and every aviation service provider around the world.

The role of the regulators is coming under much greater scrutiny, with global compliance in the ICAO country audits ranging from only 56.32 per cent for Accident Investigation to 81.23 per cent for Airworthiness, with an overall average of just 69.8 per cent. The eight audit areas are Legislation, Organization, Licensing, Operations, Airworthiness, Accident Investigation, Air Navigation Services, and Aerodromes.

A disturbing 23 countries were below average in all eight audit areas, while a further 22 achieved only one or two above-average passes.

These poor results underscore the RAeS paper's concerns and are borne out in the IATA 2020 safety report, which found that Regulatory Oversight was a major latent factor in aircraft accidents, at a 45 per cent contribution, just behind Safety Management at 47 per cent.

Among flight crew errors for 2020, Manual Handling/Flight Controls topped the list with a 39 per cent contribution, with SOP Adherence/SOP Cross Verification at 29 per cent.

Disturbingly, the five-year figures are much the same.

Far worse, however, is the role these latent issues played in Fatal Accidents over the last five years with Safety Management at 71 per cent and Regulatory Oversight at 65 per cent.

Again, among flight crew errors, SOP Adherence/SOP Cross Verification and Manual Handling/Flight Controls were the top contributors to fatal accidents over the past five years, at 56 per cent and 50 per cent respectively.

These regulatory issues also play out in the EU and FAA banned lists, with 98 airlines from 24 countries banned from European skies, and all airlines from 24 countries banned from the USA under the FAA's International Aviation Safety Assessment program.

Faustian bargain

Prof. Meshkati considers the aviation industry's exuberant, blind embrace of ever more automation, promoted for the sake of efficiency and cost-saving without considering its serious unwanted consequences and contingencies, what if it fails, and what to do next, to be a rendition of a Faustian bargain.

He has an additional and equally serious concern: when an utterly unexpected, unfamiliar, non-routine event occurs that was unforeseen by the automation system's designers, the flight crew, having exhausted all options by trying every emergency operating procedure, has to resort to problem-solving (instead of simple decision-making) and improvisation.

In this phase, in order to save the day, the crew has to have a shared mental model of the situation, be equipped with good technical knowledge of the interacting subsystems and their safety margins, and finally be able to operate at the knowledge-based level, according to the late Professor Jens Rasmussen's taxonomy of levels of cognitive control.

Prof. Meshkati cites the 2009 emergency water landing and safe evacuation of US Airways Flight 1549 as a great example of successful improvisation once the emergency operating procedures had run out. "This shows particularly in the non-verbal communication between Captain Sullenberger and First Officer Jeff Skiles, who, although they did not have time to exchange words, knew that they were on the same page through observation and hearing. At the NTSB hearings, Captain Sullenberger mentioned the critical role of a dedicated, well-experienced, highly trained crew that can overcome substantial odds, working together as a team," Prof. Meshkati said.

Resilience

In his article "Resilience: Recovering pilots' lost flying skills", Capt. de Crespigny said that many pilots across the industry have lost confidence in flying manually because automation has taken much of their hand-flying skill away.

Capt. de Crespigny says that pilots of older aircraft such as the 707 and 747 had excellent flying skills. They usually flew their approaches and landings without using autopilots and autothrust because these systems were often too inaccurate or unreliable. "These pilots built a mental body model that included their aircraft; they wore and manipulated their aircraft like a fitted glove," Capt. de Crespigny said.

He warns that modern cockpit designs insidiously induce pilots to focus on just the green and magenta targets (airspeed, attitude, altitude, and track) at the expense of awareness of the underlying raw data.

He stresses that it takes more effort to operate modern jet aircraft than the older jets. Capt. de Crespigny asserts that the benefits of automation come at the expense of learning these complex mechatronic systems. Pilots must have a deep understanding of the core systems on their aircraft, because when the automated systems fail, and they will fail, the pilot must be able to land the aircraft with the remaining systems.

Pilots of today's computerized aircraft must understand complicated software logic rules and procedures. "If you don't, it could be a disaster. You never want to fly an aircraft that takes control away from you. So you must know when to trust automation and respect its limitations, when to be skeptical, and when to reject it and take manual control."

Paradigm Shift

Captain Robert Scott of the RAeS Flight Operations Group sums up the situation like this:

"Over the last two decades, we have seen what can only be described as a dumbing down of the airline pilot. The intellectual and physical skills once required of the pilot have largely been replaced by an emphasis on soft skills and automation management. The pilot who once cynically challenged sources of information now readily accepts information from a variety of sources, many computer-generated, without question. We know from bitter experience that when this information is flawed it is often not recognized as being useless to safe flight path management.

"It seems hardly surprising, therefore, that many pilots lack the technical knowledge of their forebears and may thus feel they are on the periphery of the operation, rather than in charge of it. Regrettably, events often indicate that improvements in human skills have not matched improvements in technology, and until they do, the human operator will continue to make mistakes due to a misunderstanding of the technology or, more commonly, complacency due to over-reliance on the automated systems."

An editorial comment in a major aviation publication laid the blame for the current problems on regulators. However, this is an inaccurate and unfair assessment. Many CEOs, Directors of Operations, and Flying Training Managers have been seduced by the idea that modern aircraft are so reliable that traditional skills and knowledge can be reduced to the absolute minimum and replaced by mere management of the automatic systems. Consequently, pilots often receive the minimum amount of training, which is borne out by some recent accidents. Regrettably, while the names and reputations of the pilots involved in some aircraft accidents will always be associated with incompetence, the people who bear much of the responsibility for their lack of skills, the CEOs, Directors of Operations, and Flying Training Managers, will enjoy comfortable anonymity. The RAeS paper states that the days when the senior management team could avoid scrutiny are now over.

Is there a solution?

The authors of the RAeS paper believe there is, and moreover, that there is now a one-off opportunity to act. Capt. Leahy says: "This unique period of relative inactivity provides what is probably the last chance to make a major correction in the trajectory of this juggernaut of an industry; the objective must be to get pilots back to a level of skill that permits them to understand and oversee the automatics, yet still be able to take over when they fail. This chance should not be wasted."

So now that we know the problem, is there a realistic solution?

The RAeS authors say the list is long, but they suggest six immediate actions.

If those six were to be addressed, it would make a massive difference to the quality of future generations of flight crew, say the RAeS authors.

The last words go to Capt. de Crespigny, who warns in his book QF32: "There is one potential problem with automation: that it will be accompanied by complacency and ignorance."

Excerpt from:

Cockpit automation leading to airline industry complacency - Airline Ratings

HAL in Permian Automation Breakthrough and More – Rigzone News

Here are some of Rigzone's top upstream stories during the last week, just in case you missed them

Halliburton Claims Permian Automation Breakthrough

Halliburton announced that it has given a major operator real-time automated control of fracture placement while pumping on a multi-well pad in the Permian Basin. The name of the operator was not revealed, however.

Biden Plan Gives Oil Sector Surprise Boost

Bloomberg noted that U.S. President Joe Biden plans to set off one more oil-sector boom before shadows descend on fossil fuels.

Report Predicts Oilfield Job Losses to Robots

Rystad Energy revealed that at least two out of every ten oil workers globally in drilling, operational support, and maintenance could be replaced by automation over the next decade.

Aker Solutions Wins Large ConocoPhillips Deal

Aker Solutions announced that it has won a large contract from ConocoPhillips to provide a subsea production system for the Eldfisk North development offshore Norway. The company defines a large contract as being worth between $139.4 million (NOK 1.2 billion) and $232.4 million (NOK 2 billion).

OPEC+ Poised to Provide Dose of Bullish Medicine

In a statement sent to Rigzone on Wednesday, Rystad Energy oil markets analyst Louise Dickson noted that the expectation was that the OPEC+ group would not increase oil output from May onwards. The group did, however, agree to raise oil production gradually from May to July.

DUC Backlog is Rapidly Shrinking

Bloomberg noted that a backlog of pre-drilled shale wells is rapidly shrinking as oil prices rise, signaling that producers are ready to put drilling crews back to work. Drilled uncompleted wells are expected to fall to less than 5,000 by the final three months of 2021, Bloomberg highlighted.

To contact the author, email andreas.exarheas@rigzone.com

More here:

HAL in Permian Automation Breakthrough and More - Rigzone News