Daily Archives: April 26, 2020

Robots, AI, and the road to a fully autonomous construction industry – VentureBeat

Posted: April 26, 2020 at 6:45 pm

Built Robotics executives are fond of saying that their autonomous system for construction equipment, like dozers and excavators, might be further along than many autonomous vehicles. In fact, CEO Noah Ready-Campbell insists you'll see autonomous vehicles in controlled industrial environments like construction sites before you see level 5 driverless cars on public roads. That may be in part because autonomous construction equipment often operates on privately owned land, while public roads face increased regulatory scrutiny.

"There's a quote that 'Cold fusion is 20 years in the future and always will be,'" Ready-Campbell told VentureBeat. "I think there's a chance that that might be true for level 5 self-driving cars as well."

That might have seemed like an absurd thing to say back when autonomous driving first entered the collective imagination and companies declared their intention to solve AI's grand autonomous vehicle challenge. But Waymo now takes billions from outside investors, and the delays of major initiatives like GM's Cruise robotaxi service and Ford's autonomous driving program call into question the progress automakers have made on autonomous vehicles.

One thing Ready-Campbell credits autonomous vehicle companies with is generating excitement around AI for use in environments beyond public roads, like on construction sites.

"We were the beneficiaries of that when we did our series B last year," he said. "I definitely think construction benefited from that."

From computer vision systems and drones to robots walking and roving through construction projects, Built Robotics and a smattering of other companies are working in unstructured industrial environments like mining, agriculture, and construction to make autonomous systems that can build, manage, and predict outcomes.

To take a closer look at innovation in the field, the challenges ahead, and what it's going to take to create fully autonomous construction projects in the future, VentureBeat spoke with startups that are already automating parts of their construction work.

Built Robotics creates control systems for existing construction equipment and is heavily focused on digging, moving, and placing dirt. The company doesn't make its own heavy construction equipment; its solution is instead a box of tech mounted inside heavy equipment made by companies like Caterpillar, Komatsu, and Hyundai.

Built Robotics VP of strategy Gaurav Kikani told VentureBeat that the company started with autonomous skid steers, the little dozers that scoop up and transport sand or gravel on construction sites. Today, Built Robotics has autonomous systems for bulldozers and 40-ton excavators.

"We have a software platform that actuates the equipment, that takes all the data being read by the sensors on the machine every second and then makes decisions and actuates the equipment accordingly," Kikani said.

Built Robotics focuses on earthmoving projects at remote job sites in California, Montana, Colorado, and Missouri, far removed from human construction workers. Autonomous heavy equipment monitored by a human overseer tills the earth in preparation for later stages of construction, when human crews arrive to do things like build homes or begin wind or solar energy projects. In the future, the startup, which raised $33 million last fall, wants to help with more infrastructure projects.

Kikani and Built Robotics CEO Ready-Campbell say the company is currently focused on projects where there's a lot of dirt to move but not a lot of qualified operators of heavy machinery.

Calling to mind John Henry versus the machine, Kikani said human operators can go faster than a Built-controlled excavator, for example, but machine automation is meant to provide consistency and maintain a reliable pace to ensure projects finish on schedule.

Built Robotics combines lidar with cameras for perception and to recognize humans or potential obstacles. Geofencing keeps machinery from straying outside the footprint of a construction site. Excavators and dozers can work together, with dozers pushing material away or creating space for the excavator to be more productive.
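The containment half of that safety story is conceptually simple. Below is a minimal sketch of the kind of point-in-polygon test a geofence implies, using the standard ray-casting algorithm; this is our illustration of the general technique, not Built Robotics' code, and the coordinates are invented.

```python
# Ray casting: count how many polygon edges a horizontal ray from the
# point crosses; an odd count means the point is inside the fence.
def inside_geofence(point, polygon):
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

site = [(0, 0), (100, 0), (100, 60), (0, 60)]  # site footprint, in meters
print(inside_geofence((50, 30), site))   # True: machine may keep working
print(inside_geofence((120, 30), site))  # False: stop and alert the overseer
```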

"The fleet coordination element here is going to be critical. In Built [Robotics]' early days, we really focused on standalone activities, where you have one piece of equipment just on its own taking care of the scope. But realistically, to get into the heart of construction, I think we're going to start to coordinate with other types of equipment," Kikani said. "So you might have excavators loading trucks [and] autonomous haulage routes where you have fleets of trucks that are all kind of tracking along the same route, talking to each other, alerting each other to what they see along the route if conditions are changing."

"I think the trickiest thing about construction is how dynamic the environment is: building technology that is pliable or versatile enough to account for those changing conditions and being able to update the plan in real time to accommodate them. I think that is really going to be the key here," he said.

Equipment operated by systems from companies like Built Robotics will also need computer vision to recognize utility lines, human remains, or anomalies like archeological or historically important artifacts. It's not an everyday occurrence, but construction activity in any locale can unearth artifacts that lead to work stoppage.

Drones that can deploy automatically from a box are being developed for a variety of applications, from fire safety to security to power line inspection. Drones hovering above a construction site can track project progress and could eventually play a role in orchestrating the movement of people, robotic equipment, and heavy machinery.

In a nod to natural systems, San Francisco-based Sunflower Labs calls its drones "bees," its motion and vibration sensors "sunflowers," and the box its drones emerge from a "hive."

Sensors around a protected property detect motion or vibrations and trigger the drones to leave their base station and record photos and video. Computer vision systems working with sensors on the ground guide the drone to look for intruders or investigate other activity. The autonomous flight systems are fitted with sensors on all four sides to guide where the drone flies.

Sunflower Labs CEO Alex Pachikov said his company's initial focus is on the sale of drones-in-a-box for automated security at expensive private homes. The company is also seeing growing interest from farmers of high-value crops, like marijuana.

Multiple Sunflower Labs drones can also coordinate to provide security for a collection of vacation homes, acting as a kind of automated neighborhood watch that responds to disturbances during the months of the year when the homes attract few visitors.

Stanley Black & Decker, one of the largest security equipment providers in the United States, became a strategic investor in Sunflower Labs in 2017 and then started exploring how drones can support construction project security and computer vision services. Pachikov said Sunflower's security is not intended to replace all other forms of security, but to add another layer.

The company's system of bees, hives, and sunflowers is an easy fit for construction sites, where theft and trespassing at odd hours can be an issue, but the tools can do a lot more than safeguard vacant sites.

When a Sunflower Labs drone buzzes above a construction site, it can deploy computer vision-enabled analytics tools for volumetric measurement to convert an image of a pile of gravel into a prediction of total on-site material.
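Once photogrammetry has turned those drone images into a height map, the volume estimate itself is a simple integration. Here is a minimal sketch assuming a gridded elevation model and a flat base plane; both are assumptions for illustration, since Sunflower Labs' actual pipeline is not public.

```python
import numpy as np

cell_area = 0.25 * 0.25                    # 25 cm ground sampling, in m^2
heights = np.random.rand(200, 200) * 3.0   # stand-in for a photogrammetry DEM
base = 0.0                                 # assumed flat ground plane
# Integrate height above the base plane over every grid cell
volume = np.sum(np.maximum(heights - base, 0.0)) * cell_area
print(f"Estimated stockpile volume: {volume:.1f} m^3")
```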

Tools from computer vision startups like Pix4D, Stockpile Reports, and DroneDeploy can then provide object detection, 3D renderings of properties for tracking construction progress, and other image analysis capabilities.

Companies like Delair take a combination of data from IoT sensors, drone footage, and stationary cameras from a construction project to create a 3D rendering that Delair calls a digital twin. The rendering is then used to track progress and identify anomalies like cracks or structural issues.

Major construction companies around the world are increasingly turning to technology to reduce construction project delays and accident costs. The 2019 KPMG global construction survey found that within the next five years, 60% of executives at major construction companies plan to use real-time models to predict risks and returns.

Indus.ai is one of a handful of companies making computer vision systems for tracking progress on construction sites.

"We can observe and use a segmentation algorithm to basically know for every pixel what material it is, and therefore we know the pace of your concrete work, your rebar work, your formwork, and [can] start predicting what's happening," Indus.ai CEO Matt Man told VentureBeat in a phone interview.
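Turning a per-pixel segmentation mask into a progress metric takes only a few lines. In this sketch, the label set and the mask are stand-ins for illustration, not Indus.ai's model output.

```python
import numpy as np

LABELS = {0: "background", 1: "concrete", 2: "rebar", 3: "formwork"}
mask = np.random.randint(0, 4, size=(1080, 1920))  # stand-in for model output
total = mask.size
for label, name in LABELS.items():
    share = np.count_nonzero(mask == label) / total
    print(f"{name}: {100 * share:.1f}% of pixels")
# Tracking these shares frame over frame gives the pace of each trade's work.
```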

He envisions robotic arms being used on construction sites to accomplish a range of tasks, like creating materials or assembling prefabricated parts. Digitization of data with sensors in construction environments will enable various machine learning applications, including robotics and the management of environments with a mix of working humans and machines.

For large projects, cameras can track the flow of trucks entering a site, the number of floors completed, and the overall pace of progress. Computer vision could also follow daily work product and help supervisors determine whether the work of individuals and teams follows procedure or best trade practices.

"Imagine a particular robotic arm can start putting drywall up, then start putting tiles up, all with one single robotic arm. And that's where I see the future of robotics [...] to be able to consolidate various trades together to simplify the process," Man said. "There could be armies of robots building things, but then there is an intelligent worker or supervisor who can manage five or 10 robotic arms at the same time."

Man thinks software for directing on-site activity will become more critical as contractors embrace robotics, and he sees a huge opportunity for computer vision to advance productivity and safety in industrial spaces.

Stanford University engineers have explored the use of drones for construction site management, but such systems do not appear to be widely available today or capable of coordinating human and robotic activity.

"Having all these kinds of logistical things run together really well, it's something I think AI can do. But it's definitely going to take some time for the whole orchestration to be done well, for the right materials to get to the right place at the right time for the robot to pick it up and then to do the work, or react if some of the material gets damaged," Man said. "In the current construction methodology, it's all about managing surprises, and there are millions of them happening over the course of the whole construction plan, so being able to effectively manage those exceptions is going to be a challenge."

Boston Dynamics, known for years as the maker of cutting-edge robots, also entered construction sites last year as part of its transition from an R&D outfit to a commercial company.

Like Sunflower Labs' drones, Boston Dynamics' four-legged Spot, with a robotic grasping arm, acts as a sensor platform for 360-degree video surveys of construction projects. Capable of climbing stairs, opening doors, and regaining its balance, the robot can also be equipped with other sensors to track progress and perform services that rely on computer vision.

An event held by TechCrunch at the University of California, Berkeley last month was one of the first opportunities Bay Area roboticists have had to convene since the pandemic precipitated an impending recession. Investors focused on robotics for industrial or agricultural settings urged startups to raise money now if they could, to be careful about costs, and to continue progress toward demonstrating product-market fit.

Speaking on a panel that included Built Robotics CEO Ready-Campbell, startup executives debated whether there will be a dominant platform for construction robotics. Contrary to others on the panel, Boston Dynamics construction technologist Brian Ringley said he believes platforms will emerge to coordinate multiple machines on construction sites.

"I think long-term there will be enough people in the markets that there will be more competition, but ultimately it's the same way we use lots of different people and lots of machines on sites now to do these things. I do believe there will be multiple morphologies on construction sites, and it will be necessary for them to work together," Ringley said.

Tessa Lau is cofounder and CEO of Dusty Robotics, a company that makes an automated building layout device called FieldPrinter. She said there's a huge opportunity for automation and human labor augmentation in an industry that currently has very little automation. Systems may emerge that are capable of doing the work of multiple trades or of managing on-site activity, but Lau said there can be nearly 80 different building trades involved in a construction site. Another problem: construction sites are by definition in fairly constant change, with none of the set, static state you might find in a factory.

"I think the flip side is if you look at a typical construction site, it's chaos, and anyone with a robotics background who knows anything about robotics knows it's really hard to make robots work in that kind of unstructured environment," she said.

One thing the TechCrunch panelists agreed on is that robots on construction sites won't succeed unless the people working alongside them want them to. To help ensure that happens, Lau suggested startups slap googly eyes on their robots, because people want to see things that are cute or beloved succeed.

"Our customers are rightfully concerned that robots are going to take their jobs, and so we have to be careful about whether we are building a robot or building a tool," Lau said. "And, in fact, we call our product a FieldPrinter. It's an appliance, like a printer. It uses a lot of robotic technology: sensors and path planning and AI and all the stuff that powers robotics today. But the branding and marketing is really around the functionality. Nobody wants to buy a robot; they want to solve a problem."

Built Robotics CEO Ready-Campbell wholeheartedly agreed, arguing that even a thermostat can be considered a robot if the only requirement to meet that definition is that it's a machine capable of manipulating its environment.

Last month, just before economic activity began to slow and shelter-in-place orders took effect, the International Union of Operating Engineers, which has over 400,000 members, established a multi-year training partnership with Built Robotics. Executives from Built Robotics say its systems operate primarily in rural areas that experience skilled labor shortages, but Ready-Campbell thinks it's still a good idea to avoid the term "robot" because it scares people. Opposition to construction robotics could also become an issue in areas that see high levels of unemployment.

"That's how we position Built [Robotics] in the industry, because when people think of robots, it kind of triggers a bunch of scary thoughts. Some people think about The Terminator, some people think about losing jobs," he said. "It's an industry that really depends on using advanced machinery and advanced technology, and so we think that automation is just the next step in the automation of that industry."


Pre & Post COVID-19 Market Estimates-Artificial Intelligence (AI) Market in Retail Sector 2019-2023| Increased Efficiency of Operations to Boost…

Posted: at 6:45 pm

LONDON--(BUSINESS WIRE)--The artificial intelligence (AI) market in the retail sector is expected to grow by USD 14.05 billion during 2019-2023. The report also covers the market impact and new opportunities created by the COVID-19 pandemic. The impact is expected to be significant in the first quarter but to gradually lessen in subsequent quarters, with a limited effect on full-year economic growth, according to the latest market research report by Technavio.

Companies operating in the retail sector are increasingly adopting AI solutions to improve efficiency and productivity of operations through real-time problem-solving. For instance, the integration of AI with inventory management helps retailers to effectively plan their inventories with respect to demand. AI also helps retailers to identify gaps in their online product offerings and deliver a personalized experience to their customers. Many such benefits offered by the integration of AI are crucial in driving the growth of the market.

To learn more about the global trends impacting the future of market research, download a free sample: https://www.technavio.com/talk-to-us?report=IRTNTR31763

As per Technavio, the increased applications in e-commerce will have a positive impact on the market and contribute to its growth significantly over the forecast period. This research report also analyzes other significant trends and market drivers that will influence market growth over 2019-2023.

Artificial Intelligence (AI) Market in Retail Sector: Increased Applications in E-commerce

E-commerce companies are increasingly integrating AI into various applications to gain a competitive advantage in the market. The adoption of AI-powered tools helps them analyze their catalogs in real time to serve customers with similar and relevant products. This improves both sales and customer satisfaction. E-commerce companies are also integrating AI with other areas, such as planning and procurement, production, supply chain management, in-store operations, and marketing, to improve overall efficiency. Therefore, the increasing application areas of AI in e-commerce are expected to boost the growth of the market during the forecast period.

"Bridging offline and online experiences and the increased availability of cloud-based applications will further boost market growth during the forecast period," says a senior analyst at Technavio.


Artificial Intelligence (AI) Market in Retail Sector: Segmentation Analysis

This market research report segments the artificial intelligence (AI) market in the retail sector by application (sales and marketing; in-store; planning, procurement, and production; and logistics management) and geographic landscape (North America, APAC, Europe, MEA, and South America).

North America led the artificial intelligence (AI) market in the retail sector in 2018, followed by APAC, Europe, MEA, and South America. During the forecast period, North America is expected to register the highest incremental growth due to factors such as the early adoption of AI and rising investments in R&D, start-ups, and new technologies.

Technavio's sample reports are free of charge and contain multiple sections of the report, such as the market size and forecast, drivers, challenges, trends, and more.

Some of the key topics covered in the report include:

Market Drivers

Market Challenges

Market Trends

Vendor Landscape

About Technavio

Technavio is a leading global technology research and advisory company. Its research and analysis focus on emerging market trends and provide actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions.

With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Its client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.


One Supercomputer's HPC And AI Battle Against The Coronavirus – The Next Platform

Posted: at 6:45 pm

Normally, supercomputers installed at academic and national laboratories get configured once, acquired as quickly as possible before the money runs out, installed and tested, qualified for use, and put to work for a four- or five-year, or possibly longer, tour of duty. It is a rare machine that is upgraded even once, much less a few times.

But that is not the case with the Corona system at Lawrence Livermore National Laboratory, which was commissioned in 2017, the year North America had a total solar eclipse, hence its nickname. While this machine, procured under the Commodity Technology Systems (CTS-1) contract not only to do useful work but also to assess the CPU and GPU architectures provided by AMD, was not named after the coronavirus pandemic that is now spreading around the Earth, it is being upgraded one more time to be put into service as a weapon against the SARS-CoV-2 virus, which causes the COVID-19 illness that has infected at least 2.75 million people (confirmed by test, with the number very likely being higher) and killed at least 193,000 people worldwide.

The Corona system was built by Penguin Computing, which has a long-standing relationship with Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories (the so-called Tri-Labs that are part of the US Department of Energy and that coordinate on their supercomputer procurements). The initial Corona machine installed in 2018 had 164 compute nodes, each equipped with a pair of Naples Epyc 7401 processors, which have 24 cores each running at 2 GHz with an all-core turbo boost of 2.8 GHz. The Penguin Tundra Extreme servers that comprise this cluster have 256 GB of main memory and 1.6 TB of PCI-Express flash. When the machine was installed in November 2018, half of the nodes were equipped with four of AMD's Radeon Instinct MI25 GPU accelerators, which had 16 GB of HBM2 memory each and delivered 768 gigaflops of FP64 performance, 12.29 teraflops of FP32 performance, and 24.6 teraflops of FP16 performance. The 7,872 CPU cores in the system delivered 126 teraflops at FP64 double precision all by themselves, and the Radeon Instinct MI25 GPU accelerators added another 251.9 teraflops at FP64 double precision. The single-precision performance of the machine was obviously much higher, at 4.28 petaflops across both the CPUs and GPUs. Interestingly, this machine was equipped with 200 Gb/sec HDR InfiniBand switching from Mellanox Technologies, which was obviously one of the earliest installations of this switching speed.

In November last year, just before the coronavirus outbreak (or at least we think that was before the outbreak; that may turn out not to be the case), AMD and Penguin worked out a deal to install four of the much more powerful Radeon Instinct MI60 GPU accelerators, based on the 7 nanometer Vega GPUs, in the 82 nodes in the system that didn't already have GPU accelerators in them. The Radeon Instinct MI60 has 32 GB of HBM2 memory and delivers 6.6 teraflops of FP64 performance, 13.3 teraflops of FP32 performance, and 26.5 teraflops of FP16 performance. Now the machine has 8.9 petaflops of FP32 performance and 2.54 petaflops of FP64 performance, a much more balanced 64-bit to 32-bit ratio, and that makes these nodes more useful for certain kinds of HPC and AI workloads. Which turns out to be very important to Lawrence Livermore in its fight against the COVID-19 disease.
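Those FP64 figures are easy to sanity-check with back-of-the-envelope arithmetic. In the sketch below, the only assumption of ours is 8 FP64 operations per core per cycle for the Naples core; everything else comes from the numbers quoted above.

```python
cores = 164 * 2 * 24                  # 164 nodes x 2 sockets x 24 cores = 7,872
cpu_tf = cores * 2.0e9 * 8 / 1e12     # at the 2 GHz base clock -> ~126 TF
mi25_tf = 82 * 4 * 0.768              # 328 MI25s x 768 GF -> ~252 TF
mi60_tf = 82 * 4 * 6.6                # 328 MI60s x 6.6 TF -> ~2,165 TF
total_pf = (cpu_tf + mi25_tf + mi60_tf) / 1000
print(f"{cpu_tf:.0f} + {mi25_tf:.0f} + {mi60_tf:.0f} TF = {total_pf:.2f} PF")
# -> 126 + 252 + 2165 TF = 2.54 PF, matching the quoted FP64 total
```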

To find out more about how the Corona system and others are being deployed in the fight against COVID-19, and how HPC and AI workloads are being intertwined in that fight, we talked to Jim Brase, deputy associate director for data science at Lawrence Livermore.

Timothy Prickett Morgan: It is kind of weird that this machine was called Corona. Foreshadowing is how you tell the good literature from the cheap stuff. The doubling of performance that just happened late last year for this machine could not have come at a better time.

Jim Brase: It pretty much doubles the overall floating point performance of the machine, which is great because what we are mainly running on Corona is both the molecular dynamics calculations of various viral and human protein components and then machine learning algorithms for both predictive models and design optimization.

TPM: That's a lot more oomph. So what specifically are you doing with it in the fight against COVID-19?

Jim Brase: There are two basic things we're doing as part of the COVID-19 response, and this machine is almost entirely dedicated to this, although several of our other clusters at Lawrence Livermore are involved as well.

We have teams that are doing both antibody and vaccine design. They are mainly focused on therapeutic antibodies right now. They are basically designing proteins that will interact with the virus or with the way the virus interacts with human cells. That involves hypothesizing different protein structures and computing what those structures actually look like in detail, then computing using molecular dynamics the interaction between those protein structures and the viral proteins or the viral and human cell interactions.

With this machine, we do this iteratively to basically design a set of proteins. We have a bunch of metrics that we try to optimize: binding strength, the stability of the binding, stuff like that. And then we do detailed molecular dynamics calculations to figure out the effective energy of those binding events. These metrics determine the quality of the potential antibody or vaccine that we design.

TPM: To wildly oversimplify, this SARS-CoV-2 virus is a ball of fat with some spikes on it that wreaks havoc as it replicates using our cells as raw material. This is a fairly complicated molecule at some level. What are we trying to do? Stick goo to it to try to keep it from replicating or tear it apart or dissolve it?

Jim Brase: In the case of antibodies, which is what we're mostly focusing on right now, we are actually designing a protein that will bind to some part of the virus, and because of that the virus then changes its shape, and the change in shape means it will not be able to function. These are little molecular machines; they depend on their shape to do things.

TPM: There's not something that will physically go in and tear it apart, like a white blood cell eats stuff.

Jim Brase: No. That's generally done by biology, which comes in after this and cleans up. What we are trying to do is make what we call neutralizing antibodies. They go in and bind, and then the virus can't do its job anymore.

TPM: And just for a reference, what is the difference between a vaccine and an antibody?

Jim Brase: In some sense, they are the opposite of each other. With a vaccine, we are putting in a protein that actually looks like the virus but doesn't make you sick. It stimulates the human immune system to create its own antibodies to combat that virus. And those antibodies produced by the body do exactly the same thing we were just talking about. Producing antibodies directly is faster, but the effect doesn't last. So it is more of a medical treatment for somebody who is already sick.

TPM: I was alarmed to learn that for certain coronaviruses, immunity doesn't really last very long. With the common cold, the reason we get them is not just because they change every year, but because if you didn't have a bad version of it, you don't generate a lot of antibodies and therefore you are susceptible. If you have a very severe cold, you generate antibodies and they last for a year or two. But then you're done and your body stops looking for that fight.

Jim Brase: The immune system is very complicated, and for some things it creates antibodies that remember them for a long time. For others, it's much shorter. It's sort of a combination of what we call the antigen (the thing, the virus or whatever, that triggers it) and the immune system's memory function together that causes the immunity not to last as long. It's not well understood at this point.

TPM: What are the programs you're using to do the antibody and protein synthesis?

Jim Brase: We are using a variety of programs. We use GROMACS, we use NAMD, we use OpenMM stuff. And then we have some specialized homegrown codes that we use as well that operate on the data coming from these programs. But it's mostly the general, open source molecular mechanics and molecular dynamics codes.
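For a sense of what driving one of these open source codes looks like, here is a minimal OpenMM script that runs a short GPU-accelerated simulation. The input file, force field choice, and run length are placeholders, and this is generic OpenMM usage rather than Lawrence Livermore's workflow.

```python
from openmm.app import PDBFile, ForceField, Simulation, PME, HBonds
from openmm import LangevinIntegrator, Platform
from openmm.unit import kelvin, picoseconds, nanometer

pdb = PDBFile('protein.pdb')                      # placeholder structure
ff = ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')
system = ff.createSystem(pdb.topology, nonbondedMethod=PME,
                         nonbondedCutoff=1*nanometer, constraints=HBonds)
integrator = LangevinIntegrator(300*kelvin, 1/picoseconds, 0.002*picoseconds)
platform = Platform.getPlatformByName('CUDA')     # the GPU does the heavy lifting
sim = Simulation(pdb.topology, system, integrator, platform)
sim.context.setPositions(pdb.positions)
sim.minimizeEnergy()                              # relax steric clashes first
sim.step(50000)                                   # ~100 ps of dynamics
```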

TPM: Let's contrast this COVID-19 effort with something like the SARS outbreak in 2003. Say you had the same problem. Could you have even done the things you are doing today with SARS-CoV-2 back then with SARS? Was it even possible to design proteins and do enough of them to actually have an impact, to get the antibody therapy or develop the vaccine?

Jim Brase: A decade ago, we could do single calculations. We could do them one, two, three. But what we couldn't do was iterate it as a design optimization. Now we can run enough of these fast enough that we can make this part of an actual design process, where we are computing these metrics, then adjusting the molecules. And we have machine learning approaches now that we didn't have ten years ago that allow us to hypothesize new molecules, and then we run the detailed physics calculations against those, and we do that over and over and over.

TPM: So not only do you have a specialized homegrown code that takes the output of these molecular dynamics programs, but you are using machine learning as a front end as well.

Jim Brase: We use machine learning in two places. Even with these machines (and we are using our whole spectrum of systems on this effort), we still can't do enough molecular dynamics calculations, particularly the detailed molecular dynamics that we are talking about here. What does the new hardware allow us to do? It basically allows us to do a higher percentage of detailed molecular dynamics calculations, which give us better answers, as opposed to more approximate calculations. You can decrease the granularity size, and we can compute whole molecular dynamics trajectories as opposed to approximate free energy calculations. It allows us to go deeper on the calculations and do more of those. So ultimately, we get better answers.

But even with these new machines, we still can't do enough. If you think about the design space on, say, a protein that is a few hundred amino acids in length, and at each of those positions you can put in 20 different amino acids, you are looking at on the order of 20^200 possible proteins to evaluate by brute force. You can't do that.
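For scale, rewriting that count in powers of ten (a standard back-of-the-envelope, not part of the interview):

$$20^{200} = 10^{200 \log_{10} 20} \approx 10^{260},$$

vastly more candidates than the roughly $10^{80}$ atoms in the observable universe, which is why exhaustive search is off the table.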

So we try to be smart about how we select where those simulations are done in that space, based on what we are seeing. And then we use the molecular dynamics to generate datasets that we then train machine learning models on so that we are basically doing very smart interpolation in those datasets. We are combining the best of both worlds and using the physics-based molecular dynamics to generate data that we use to train these machine learning algorithms, which allows us to then fill in a lot of the rest of the space because those can run very, very fast.

TPM: You couldn't do all of that stuff ten years ago? And SARS did not create the same level of outbreak that SARS-CoV-2 has done.

Jim Brase: No, these are all fairly new ideas.

TPM: So, in a sense, we are lucky. We have the resources at a time when we need them most. Did you have the code all ready to go for this? Were you already working on this kind of stuff and then COVID-19 happened or did you guys just whip up these programs?

Jim Brase: No, no, no, no. We've been working on this kind of stuff for a few years.

TPM: Well, thank you. I'd like to personally thank you.

Jim Brase: It has been an interesting development. It has happened in both the biology space and the physics space, and those two groups have set up a feedback loop back and forth. I have been running a consortium called Advanced Therapeutic Opportunities in Medicine, or ATOM for short, to do just this kind of stuff for the last four years. It started up as part of the Cancer Moonshot in 2016 and focused on accelerating cancer therapeutics using the same kinds of ideas, where we are using machine learning models to predict properties, using mechanistic simulations like molecular dynamics combined with data, but then also using it the other way around. We also use machine learning to actually hypothesize new molecules: given a set of molecules that we have right now, whose computed properties aren't quite what we want, how do we tweak those molecules a little bit to adjust their properties in the directions that we want?

The problem with this approach is scale. Molecules are atoms that are bonded with each other. You could just take out an atom, add another atom, change a bond type, or something. The problem with that is that every time you do that randomly, you almost always get an illegal molecule. So we train these machine learning algorithms (these are generative models) to actually be able to generate legal molecules that are close to a set of molecules that we have, but a little bit different and with properties that are probably a little bit closer to what we want. And so that allows us to smoothly adjust the molecular designs to move towards the optimization targets that we want. If you think about optimization, what you want are things with smooth derivatives. And if you do this in sort of the discrete atom-bond space, you don't have smooth derivatives. But if you do it in what we call learned latent spaces, which we get from generative models, then you can actually have a smooth response in terms of the molecular properties. And that's what we want for optimization.
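The optimization loop Brase describes can be sketched abstractly. In the toy PyTorch code below, encoder, decoder, and property_model stand in for trained networks, for example the two halves of a variational autoencoder plus a property predictor; none of this is ATOM's actual code.

```python
import torch

def optimize_molecule(encoder, decoder, property_model, mol, steps=100, lr=0.1):
    # Map the molecule into the learned latent space, where moves are smooth
    z = encoder(mol).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -property_model(z)  # ascend the (differentiable) property surface
        loss.backward()
        opt.step()
    # Decode back to a molecule; a well-trained decoder emits legal structures
    return decoder(z)
```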

The other part of the machine learning story here is these new types of generative models: variational autoencoders, generative adversarial models, the things you hear about that generate fake data, and so on. We're actually using those very productively to imagine new types of molecules with the kinds of properties that we want for this. And so that's something we were absolutely doing before COVID-19 hit. We have taken these projects, like the ATOM cancer project and other work we've been doing with DARPA and other places focused on different diseases, and refocused those on COVID-19.

One other thing I wanted to mention is that we haven't just been applying this to biology. A lot of these ideas are coming out of physics applications. One of our big things at Lawrence Livermore is laser fusion. We have 192 huge lasers at the National Ignition Facility to try to create fusion in a small hydrogen-deuterium target. There are a lot of design parameters that go into that. The targets are really complex. We are using the same approach. We're running mechanistic simulations of the performance of those targets, and we are then improving those with real data using machine learning. So we now have a hybrid model that has physics in it and machine learning data models, and we are using that to optimize the designs of the laser fusion target. That has led us to a whole new set of approaches to fusion energy.

Those same methods are actually the things we're also applying to molecular design for medicines. And the two actually go back and forth and sort of feed on each other and support each other. In the last few weeks, some of the teams that have been working on the physics applications have actually jumped over onto the biology side and are using some of the same sort of complex workflows on these big parallel machines that they've developed for physics, applying those to some of the biology applications and helping to speed up the applications on this new hardware that's coming in. So it is a really nice synergy going back and forth.

TPM: I realize that machine learning software uses the GPUs for training and inference, but is the molecular dynamics software using the GPUs, too?

Jim Brase: All of the molecular dynamics software has been set up to use GPUs. The code actually maps pretty naturally onto the GPU.

TPM: Are you using the CUDA variants of the molecular dynamics software, and I presume that it is using the Radeon Open Compute, or ROCm, stack from AMD to translate that code so it can run on the Radeon Instinct accelerators?

Jim Brase: There has been some work to do, but it works. It's getting to be pretty solid now. That's one of the reasons we wanted to jump into the AMD technology pretty early: because, you know, any time you do first-in-kind machines it's not always completely smooth sailing all the way.

TPM: It's not like Lawrence Livermore has a history of using novel designs for supercomputers. [Laughter]

Jim Brase: We seldom work with machines that are not Serial 00001 or Serial 00002.

TPM: What's the machine learning stack you use? I presume it is TensorFlow.

Jim Brase: We use TensorFlow extensively. We use PyTorch extensively. We also work with the DeepChem group at Stanford University, which develops an open source chemistry package built on TensorFlow.

TPM: If you could fire up an exascale machine today, how much would it help in the fight against COVID-19?

Jim Brase: It would help a lot. There's so much to do.

I think we need to show the benefits of computing for drug design, and we are concretely doing that now. Four years ago, when we started up ATOM, everybody thought this was nuts: the general idea that we could lead with computing rather than experiment, and do the experiments to focus on validating the computational models rather than the other way around. Everybody thought we were nuts. As you know, with the growth of data, the growth of machine learning capabilities, more accessibility to sophisticated molecular dynamics, and so on, it's much more accepted that computing is a big part of this. But we still have a long way to go.

The fact is, machine learning is not magic. It's a fancy interpolator. You don't get anything new out of it. With the physics codes, you actually get something new out of it. So the physics codes are really the foundation of this. You supplement them with experimental data, because they're not necessarily right, either. And then you use the machine learning on top of all that to fill in the gaps, because you haven't been able to sample that huge chemical and protein space adequately to really understand everything at either the data level or the mechanistic level.

So that's how I think of it. Data is truth, sort of, though what you also learn is that data is not always the same as you go through this. But data is the foundation. Mechanistic modeling allows us to fill in where we just can't measure enough data: it is too expensive, it takes too long, and so on. We fill in with mechanistic modeling, and then above that we fill in with machine learning. We have this stack of experimental truth, mechanistic simulation that incorporates all the physics and chemistry we can, and then machine learning to interpolate in those spaces to support the design operation.

For COVID-19, there are a lot of groups doing vaccine designs. Some of them are using traditional experimental approaches, and they are making progress. Some of them are doing computational designs, and that includes the national labs. We've got 35 designs done, and we are experimentally validating those now and seeing where we are with them. It will generally take two to three iterations of design, then experiment, and then adjusting the designs back and forth. And we're in the first round of that right now.

One thing we're all doing, at least on the public side of this, is putting all this data out there openly. The molecular designs that we've proposed are openly released. Then the validation data that we are getting on those will be openly released. This is so our group, working with other lab groups, university groups, and some of the companies doing this COVID-19 research, can contribute. We are hoping that by being able to look at all the data that all these groups are producing, we can learn faster how to narrow in on the vaccine designs and the antibody designs that will ultimately work.


[The Future of Viewing] Innovative Sound Technologies, Powered by AI – Samsung Global Newsroom

Posted: at 6:45 pm

Now is the age of home entertainment. The concept of the modern home goes way beyond being merely a residential space; it has become a place for relaxation, for recreation, and for quality time with others. At the center of this change are the near-three-dimensional content experiences granted by today's ultra-large, ultra-high definition, ultra-fine pixel TVs. Of course, high quality audio provides the finishing touch to such experiences.

With its 2020 QLED 8K TVs and AI sound technologies, Samsung has raised the bar for TV audio experiences. Object Tracking Sound+ uses AI-based software to match the movement of audio with movement on-screen; Active Voice Amplifier (AVA) tracks the user's audio environment; and Q-Symphony creates a more realistic, three-dimensional sound.

Samsung Newsroom sat down with the sound developers of Samsung Electronics Visual Display Business to learn more about their extensive capabilities and the journey to fostering innovation in sound.

(From left to right) Youngtae Kim (Sound Lab), Jongbae Kim (Sound Lab), Yoonjae Lee (Sound Device Lab) and Sunmin Kim (Sound Lab)

Action movies with amazing sound arrangements provide the most realistic experiences when watched at movie theaters. This is because movie theaters have multi-channel speakers with 3D sound placed on almost all the walls (including the ceiling), as well as around the screen. Compared with two-channel sound that features speakers only on the left- and right-hand sides, the multi-channel speakers in a movie theater deliver a more refined sense of realism. So how can this realism be recreated in the home? Samsung's sound developers came up with Object Tracking Sound+ technology, in which sound follows on-screen movement through six speakers built into the TV.

Thanks to this technology, a video's audio follows the action on-screen in real time. When a car moves from the left to the right-hand side of the screen, so will the sound it makes; and when a heavy object drops from the top to the bottom of the screen, so will the audio.

When developing the 2020 QLED 8K TVs, Samsung's TV developers increased the number of QLED TV speakers from two to six in order to realize sound that can mimic action. "By placing two speakers on each side of the screen, as well as on the top and bottom, we enabled the dissemination of sound in all directions from a total of six speakers," explained Jongbae Kim. "The distance between the two main speakers has been widened as much as possible, and the additional speakers have been installed in order to maximize sound across all axes to be as three-dimensional as possible. For example, we placed speakers on the upper side of the screen to enable the movement of sound in a vertical direction for a more immersive sound experience." Additionally, Kim highlighted how, despite the complex nature of a TV structure that includes six embedded speakers, the team managed to keep the design of the TV slim and minimal.

In order to ensure audio will follow on-screen movements accurately, it is important to understand the original intentions of content creators. The role of a sound engineer is to increase the consistency between the action on-screen and its accompanying audio track when mixing. "The location information of sound in a piece of video content, including sound panning information, is subsequently audio-signaled into the audio channel by the sound engineer, something we must be able to track in order to reproduce the location and movement of a content's audio accurately," noted Jongbae Kim. "Our Object Tracking Sound+ technology analyzes the location information contained in these audio signals as originally placed during mixing. This means the TV can then effectively distribute the sound amongst its six speakers by distinguishing between sound orientations and whether or not the audio source is on-screen, off-screen, close-up or distant."
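As a toy illustration of the signal side of this, the left/right energy balance of a short stereo frame already yields a crude horizontal position for steering sound between speakers. This is our reconstruction of the general idea; the production system works from the mixing metadata described above.

```python
import numpy as np

def pan_position(left, right):
    """Return -1.0 (hard left) .. 0.0 (center) .. +1.0 (hard right)."""
    e_l = np.sum(left.astype(float) ** 2)   # energy in the left channel
    e_r = np.sum(right.astype(float) ** 2)  # energy in the right channel
    return (e_r - e_l) / (e_l + e_r + 1e-12)
```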

When that crucial scene in the show you are watching is overwhelmed by the sound of a mixer, or an important breaking news report is obscured by loud thunder, the act of reaching for the remote to adjust the TV's volume can come too late. This is why the team developed its AVA technology, which recognizes exterior noises and increases the volume of voices in content accordingly if surrounding conditions become too loud.

The way it works is intelligent. The TV's sound sensor, attached to the bottom middle of the TV, takes in audio from the content on-screen as well as from its surrounding environment. AI technology then compares the volume levels of the two types of sound, and if external sounds are found to be louder than the TV's content, it will selectively raise the TV's volume. "The system does not have one set definition of noise," explained Sunmin Kim. "It considers any and all elements that disturb enjoyment of content as noise. When exterior sounds persist above a certain decibel level, that is when the system registers it as noise."

However, AVA technology does not just raise the volume of the TV when it recognizes a louder environment, as this would only contribute to a boisterous room experience. "The system harnesses AI to keep sound effect and background audio levels consistent and to only raise the volume of voice audio," highlighted Sunmin Kim. "Our research showed us that the majority of content is dialogue-heavy, so we believe that enhancing the delivery of dialogue would be most beneficial to aid comprehension."
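A minimal sketch of that logic, assuming the dialogue has already been separated into its own stem: compare the loudness of the room against the loudness of the content and compute a boost for the voice stem only. The thresholds are invented for illustration; Samsung's algorithm is not public.

```python
import numpy as np

def rms_db(signal):
    return 20 * np.log10(np.sqrt(np.mean(signal ** 2)) + 1e-12)

def voice_gain(ambient, content, threshold_db=3.0, max_boost_db=6.0):
    excess = rms_db(ambient) - rms_db(content)
    if excess <= threshold_db:       # room is quiet enough: leave levels alone
        return 1.0
    boost = min(excess - threshold_db, max_boost_db)
    return 10 ** (boost / 20)        # linear gain applied to the dialogue stem
```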

One of the key elements to achieving realistic sound during content playback on a TV is three-dimensionality, which encompasses both horizontal and vertical audio characteristics. Until recently, these perspectives had been developed separately by the TV and Soundbar teams. However, with the inclusion of upper-side speakers on Samsung's 2020 QLED 8K TVs, the team developed an all-inclusive solution that utilizes the capabilities of both the TV and the soundbar in perfect harmony. Q-Symphony is a feature that plays audio using both the TV speakers and the soundbar at the same time, and as an industry-first achievement, the Q-Symphony technology was even recognized with a Best of Innovation Award at CES 2020.

The core of Q-Symphony, which manages sound playback harmoniously using speakers with different characteristics, is technology that follows sound playback rules determined in advance and exchanges the necessary information between the TV and the soundbar when they are connected. "This approach allows for a superior reproduced sound experience," explained Yoonjae Lee. "A key element of the technology is a special algorithm that we created which divides and harmonizes sounds seamlessly between the TV's speakers and the soundbar."

During development, a challenge arose regarding the quality of dialogue reproduction. When both the soundbar and the TV speakers played dialogue simultaneously, the sound quality was noticeably diminished. However, the sound development team was able to resolve this issue by separating the main sound track, including dialogue, from the entire signal and assigning the different tracks to the TV speakers and soundbar respectively. "In the 2020 QLED 8K TV range, the voice signal is extracted and removed from the sound being reproduced by the TV's embedded speakers, which are then assigned to play ambient sound signals such as sound effects," explained Lee. "The soundbar then reproduces the main sound involving any dialogue. With this technology, Q-Symphony harnesses the advantages of both the TV speakers and the soundbar in order to deliver the best, and most harmonious, sound experience to users."
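A classic, much simpler relative of that separation is mid/side decoding, which splits a stereo mix into a correlated "mid" stem (where dialogue usually sits) and a decorrelated "side" stem. The sketch below shows only that baseline idea, not Samsung's extraction.

```python
def split_stems(left, right):
    mid = [0.5 * (l + r) for l, r in zip(left, right)]   # mostly dialogue
    side = [0.5 * (l - r) for l, r in zip(left, right)]  # mostly ambience
    return mid, side  # e.g. route mid to the soundbar, side to the TV speakers
```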

The sound development team agreed that realizing the addition of speakers, the new placement of speakers, and the AI harmonization with the soundbar on the 2020 QLED 8K TV range was possible because of close coordination with a variety of other teams. "When developing new TVs, all areas need to be in sync with their innovations," noted Youngtae Kim. "We came together in suggesting various solutions to overcome each and every technological hurdle with an open mind."

Youngtae Kim (left) and Sunmin Kim introduce the innovative sound technologies of the 2020 QLED 8K TV range

The sound development team has always been, and always will be, dedicated to developing the best possible audio experiences for users. As well as working with Samsung's Audio Lab in the U.S. on future audio technology, the team also works with Samsung Research's R&D centers, universities, and start-up experts around the world. "We want to bring about sound experiences that are as natural and as real as possible," explained Youngtae Kim. "To achieve this, we will continue to work hard to understand the end-to-end process of sound and realize those sounds in our TVs."

Samsung's sound innovation also helps to realize its vision of "Screens Everywhere." "In the future, we will bolster the use of AI so that users do not even need to use a remote control to find the perfect audio balance when enjoying their favorite content," said Sunmin Kim. "As time goes by, TV installation environments, lifestyles, and age groups will diversify. We want users to enjoy the sound of their content as intended, regardless of content type or listening environment."


Why Software Matters in the Age of AI? – Eetasia.com

Posted: at 6:45 pm

Article by: Geoff Tate

Inference chips typically have lots of MACs and memory but actual throughput on real-world models is often lower than expected. Software is usually the culprit.

Inference accelerators represent an incredible market opportunity, not only for chip and IP companies but also for the customers who desperately need them. As inference accelerators come to market, a common comment we hear is: "Why is my inference chip not performing like it was designed to?"

Oftentimes, the simple answer is the software.

Software is key

All inference accelerators today are programmable because customers believe their models will evolve over time. This programmability will allow them to take advantage of enhancements in the future, something that would not be possible with hard-wired accelerators. However, customers want this programmability in a way that gets them the most throughput for a certain cost and a certain amount of power. This means they have to use the hardware very efficiently. The only way to do this is to design the software in parallel with the hardware to make sure the two work together to achieve maximum throughput.

One of the biggest problems today is that companies find themselves with an inference chip that has lots of MACs and tons of memory, but actual throughput on real-world models is lower than expected because much of the hardware is idling. In almost every case, the problem is that the software work was done after the hardware was built. During the development phase, designers have to make many architectural tradeoffs, and they can't possibly do those tradeoffs without working with both the hardware and software, which needs to happen early on. Chip designers need to closely study the models, then build a performance estimation model to determine how different amounts of memory, MACs, and DRAM bandwidth would change throughput and die size, and how the compute units need to coordinate for different kinds of models.
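A performance estimation model of the sort described can start as a few lines of roofline arithmetic: an inference pass is either compute-bound or bandwidth-bound, and the slower of the two limits throughput. All parameter values below are invented for illustration.

```python
def estimate_ips(macs, clock_hz, dram_gbps, model_macs, model_bytes):
    compute_s = model_macs / (macs * clock_hz)   # time if compute-bound
    memory_s = model_bytes / (dram_gbps * 1e9)   # time if bandwidth-bound
    return 1.0 / max(compute_s, memory_s)        # inferences per second

# A hypothetical 4,096-MAC accelerator at 1 GHz with 25 GB/s of DRAM,
# running a model with 5 GMACs and 60 MB of weight/activation traffic:
print(estimate_ips(4096, 1e9, 25, 5e9, 60e6))    # ~417/s, bandwidth-bound here
```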

Today, one of the highest-volume applications for inference acceleration is object detection and recognition. That is why inference accelerators must be very good at mega-pixel processing using complex algorithms like YOLOv3. To do this, it is critical that software teams work with hardware teams throughout the entire chip design process, from performance estimation to building the full compiler to generating code. Once the chip RTL is done, the only way to verify it at the top level is to run entire layers of models through the chip with mega-pixel images. You need the ability to generate all the code (or bit streams) that control the device, and that can only be done when software and hardware teams work closely together.

Today, customer models are neural networks and they come in ONNX or TensorFlow Lite. Software takes these neural networks and applies algorithms to configure the interconnect and state machines that control the movement of data within the chip. This is done in RTL. The front end of the hardware is also written in RTL. Thus, the engineering team that is writing the front-end design is talking a similar language to the people that are writing the software.
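The first step of such a software stack, ingesting the customer's network and inventorying its operators before mapping them onto hardware, can be sketched with the standard onnx Python package ('model.onnx' is a placeholder).

```python
from collections import Counter
import onnx

model = onnx.load("model.onnx")                     # customer-supplied network
ops = Counter(node.op_type for node in model.graph.node)
for op, count in ops.most_common():
    print(f"{op}: {count}")
# A real inference compiler would then lower each operator to MAC-array
# schedules and emit the interconnect/state-machine configuration above.
```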

Why software also matters in the future

Focusing on software is not only critical early on, but will also be critical in the future. Companies that want to continue delivering improvements will need their hardware teams to study how the software is evolving and how emerging models are shifting in a certain direction. This will enable chip designers to make changes as needed, while the company also improves its compiler and algorithms to better utilize the hardware over time.

In the future, we expect companies to continue bringing very challenging models to chip designers, with the expectation that new inference accelerators can deliver the performance needed to handle those models. As we see today, many chip companies may try to cut costs and development times by focusing more on the hardware initially. However, when the chips are done and delivered to market, it's going to be the ones that focused on software early on that offer the best performance and succeed.

Geoff Tate is CEO of Flex Logix Technologies.


The Sky This Week from April 24 to May 1 – Astronomy Magazine

Posted: at 6:44 pm

Tuesday, April 28

At magnitude 8.4, Vesta is within easy reach of most binoculars. To find it, locate Aldebaran, the brightest star in Taurus, and draw an imaginary line northeast. First, you'll hit the open star cluster NGC 1647, which contains several dozen scattered 8th- to 11th-magnitude stars. Continue that line roughly the same distance to the northeast and begin scanning for Vesta, which is slowly advancing through a region with few background stars. Try this exercise two or three nights in a row to find the point of light that has moved; that's the asteroid you're looking for.

Wednesday, April 29

Mars remains an ideal morning target to catch before sunrise. The Red Planet glows at magnitude 0.4 in the southeastern sky, positioned midway between two 4th-magnitude stars: Iota (ι) and Gamma (γ) Capricorni. Mars is nearly 20° above the horizon an hour before sunrise.

Mars also stands at the center of a planetary gathering. Look west to find Saturn nearly 19° away, with Jupiter just 5° farther in the same direction. These two solar system giants shine at magnitude 0.6 and –2.4, respectively. Telescopic observers and imagers can add a dwarf planet to the mix: Pluto is just 2° southwest of Jupiter, glinting faintly at magnitude 14.

Turn your telescope 30° east of Mars to glimpse magnitude 8 Neptune. The ice giant is still low on the eastern horizon, rising higher as the sky brightens with the coming dawn. See how long you can track it before the bright sky hides it from view.

Thursday, April 30

First Quarter Moon occurs at 4:38 P.M. EDT. An hour after sunset, our satellite stands high in the southwestern sky in the faint constellation Cancer the Crab. In the moonlit sky, you might have better luck spotting Gemini the Twins and their bright luminaries, Castor and Pollux, to the west. Look east of the Moon to find Leo the Lion, with his brightest star Regulus, and follow the ecliptic farther east to reach Virgo the Maiden, whose brightest star is Spica. This blue-white magnitude 1 star is not one star, but two; however, the pair is so close that it cannot be split visually. Instead, astronomers discovered Spica's dual nature by noticing that as one star orbits the other, gravity's effects shift the light we see from the star slightly red and then blue over time.

The larger of the two, Spica A, is roughly seven times wider than our Sun and 10 times as massive. Most of the light we see from the star comes from this component. The smaller Spica B is a little less than four times wider than the Sun and seven times as massive.

Friday, May 1

The Eta Aquariids have been slowly ramping up since last week and will peak in another few days. It's not one of the year's best meteor showers, due to its low-altitude radiant in the Northern Hemisphere and a low predicted rate of just 10 meteors per hour at its peak. But with Mars hanging nearby and a still-crescent Moon in the sky, it's worth trying to catch a few shooting stars this morning.

Find the darkest skies possible and spend some time scanning overhead. Try concentrating on a spot away from the constellation Aquarius, where the shower's meteors originate. You may only see five or so Eta Aquariid meteors an hour, but this is also a great chance to relax beneath the stars and get to know the morning sky much better.

Follow this link:

The Sky This Week from April 24 to May 1 - Astronomy Magazine

Posted in Astronomy | Comments Off on The Sky This Week from April 24 to May 1 – Astronomy Magazine

Astronomers Find a Six-Planet System Which Orbit in Lockstep With Each Other – Universe Today

Posted: at 6:44 pm

To date, astronomers have confirmed the existence of 4,152 extrasolar planets in 3,077 star systems. While the majority of these discoveries involved a single planet, several hundred star systems were found to be multi-planetary. Systems that contain six planets or more, however, appear to be rarer, with only a dozen or so cases discovered so far.

This is what astronomers found after observing HD 158259, a Sun-like star located about 88 light-years from Earth, for the past seven years using the SOPHIE spectrograph. Combined with new data from the Transiting Exoplanet Survey Satellite (TESS), an international team reported the discovery of a six-planet system in which all the planets orbit in near-perfect rhythm with one another.

The international team responsible for this discovery was led by Dr. Nathan Hara, a postdoctoral researcher at the University of Geneva (UNIGE), a member of the Swiss PlanetS institute, and a Fellow with the European Space Agency's (ESA) CHaracterising ExOPlanets Satellite (CHEOPS) mission. The study that describes their findings recently appeared in the journal Astronomy & Astrophysics.

Using SOPHIE, astronomers have been conducting radial velocity measurements of many stars in the northern hemisphere to determine whether they host exoplanets. This method, known as the Radial Velocity Method (or Doppler Spectroscopy), consists of measuring the spectrum of a star to see if it is wobbling in place, an indication that the gravitational pull of one or more planets is tugging on it.
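
To get a feel for the signal being measured, the wobble a planet induces has a radial-velocity semi-amplitude K that follows from Kepler's third law. Here is a minimal sketch of that standard formula; the example inputs are illustrative round numbers for a compact super-Earth, not the published parameters of any HD 158259 planet.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
M_EARTH = 5.972e24  # Earth mass, kg

def rv_semi_amplitude(period_days, m_planet_earths, m_star_suns,
                      inclination_deg=90.0, ecc=0.0):
    """Radial-velocity semi-amplitude K in m/s induced by one planet."""
    P = period_days * 86400.0
    mp = m_planet_earths * M_EARTH
    ms = m_star_suns * M_SUN
    return ((2 * math.pi * G / P) ** (1 / 3)
            * mp * math.sin(math.radians(inclination_deg))
            / (ms + mp) ** (2 / 3)
            / math.sqrt(1 - ecc ** 2))

# A ~2 Earth-mass planet on a ~2.2-day orbit around a Sun-like star
# produces a wobble of roughly 1 m/s, the precision regime in which
# spectrographs like SOPHIE operate.
print(f"{rv_semi_amplitude(2.2, 2.0, 1.0):.2f} m/s")
```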

Interestingly enough, it was SOPHIE's predecessor (the ELODIE spectrograph) that made one of the earliest exoplanet discoveries in 1995: the hot Jupiter 51 Peg b (Dimidium). After examining HD 158259 for seven years, SOPHIE obtained high-precision radial velocity measurements that revealed the presence of a six-planet system.

This system consists of an innermost large rocky planet (a super-Earth) and five small gas giants (mini-Neptunes) with exceptionally regular spacing between them. As François Bouchy, a professor of astronomy at UNIGE and the coordinator of the observation program, explained in a UNIGE press release:

"The discovery of this exceptional system has been made possible thanks to the acquisition of a great number of measurements, as well as a dramatic improvement of the instrument and of our signal-processing techniques."

These planets range from about twice as massive as Earth (the innermost super-Earth) to six times as massive (the mini-Neptunes). The system is also very compact: all six planets orbit close to the star, with the outermost just 0.38 times as distant from its star as Mercury is from the Sun. This places the planets well interior to the star's habitable zone (HZ), which means none are likely to have liquid water on their surfaces or atmospheres dense enough to support life.

Meanwhile, TESS monitored HD 158259 for signs of transits (aka the Transit Method) and observed a decrease in the star's brightness as the innermost planet passed in front of it. According to Isabelle Boisse, a researcher at the Marseille Astrophysics Laboratory and co-author of the study, the TESS readings, combined with the radial velocity data, allowed the team to further constrain the properties of this planet (HD 158259 b).

"The TESS measurements strongly support the detection of the planet and allow us to estimate its radius, which brings very valuable information on the planet's internal structure," she said. But as noted earlier, the most impressive feature of this system is its regularity: the planets follow an almost exact 3:2 orbital resonance.

This means that for every three orbits the innermost planet completes, the second completes about two. In the time it takes the second planet to complete three orbits, the third completes about two, and so on: the same ratio applies down the whole chain of six planets, which came as quite a surprise to Hara and his colleagues.
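
A quick way to see what "almost exact 3:2" means is to print each consecutive period ratio. The periods below are approximate, illustrative values of the order reported for HD 158259 (roughly 2 to 17 days), not the exact published figures:

```python
# Approximate orbital periods in days, innermost planet first.
# Illustrative values only, not the exact published figures.
periods = [2.17, 3.43, 5.20, 7.95, 12.0, 17.4]

# A perfect 3:2 resonance would make every ratio exactly 1.5; the
# system sits near, but not exactly on, that value.
for inner, outer in zip(periods, periods[1:]):
    print(f"{outer:5.2f} d / {inner:5.2f} d = {outer / inner:.3f}")
```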

When describing the planets' orbits, Hara compared them to an orchestra playing music, though the arrangement is not quite perfect:

"This is comparable to several musicians beating distinct rhythms, yet who beat at the same time at the beginning of each bar. Here, 'about' is important. Besides the ubiquity of the 3:2 period ratio, this constitutes the originality of the system."

Resonances, even imperfect ones, interest astronomers because they offer hints about a star system's formation and evolution. In astronomical circles, there is still considerable debate about how star systems come together and change over time. A particularly contentious point is whether planets form close to their final position in the system or shift their orbits after forming.

This latter scenario (known as planetary migration) has gained traction in recent years thanks to the discovery of exoplanets like hot Jupiters, leading many astronomers to ask whether planetary shake-ups are common. This theory would appear to explain the formation of the six planets in the HD 158259 system. Said Stéphane Udry, a professor of astronomy at UNIGE:

"Several compact systems with several planets in, or close to, resonances are known, such as TRAPPIST-1 or Kepler-80. Such systems are believed to form far from the star before migrating towards it. In this scenario, the resonances play a crucial part."

The fact that HD 158259's planets are close to a 3:2 resonance, but not exactly in one, suggests that they were trapped in the resonance in the past, then underwent synchronous migration and drifted away from it. According to Hara, that's not all this system can tell us.

"Furthermore, the current departure of the period ratios from 3:2 contains a wealth of information," he said. "With these values on the one hand, and tidal effect models on the other hand, we could constrain the internal structure of the planets in a future study. In summary, the current state of the system gives us a window on its formation."

The more we learn about this multi-planet system and others like it, the better we understand how star systems like our own came to be. Resolving these and other questions about the formation and evolution of planetary systems will put us one step closer to knowing how life can emerge (and perhaps where to look for it!).

Further Reading: University of Geneva, Astronomy & Astrophysics

Continue reading here:

Astronomers Find a Six-Planet System Which Orbit in Lockstep With Each Other - Universe Today

Posted in Astronomy | Comments Off on Astronomers Find a Six-Planet System Which Orbit in Lockstep With Each Other – Universe Today

Astronomers Have Watched a Nova Go From Start to Finish For The First Time – ScienceAlert

Posted: at 6:44 pm

A nova is a dramatic episode in the life of a binary pair of stars. It's an explosion of bright light that can last weeks or even months. And though they're not exactly rare - there are about 10 each year in the Milky Way - astronomers have never watched one from start to finish.

Until now.

A nova occurs in a close binary star system, when one of the stars has gone through its red giant phase. That star leaves behind a remnant white dwarf. When the white dwarf and its partner become close enough, the massive gravitational pull of the white dwarf draws material, mostly hydrogen, from the other star.

That hydrogen accretes onto the surface of the white dwarf, forming a thin atmosphere. The white dwarf heats the hydrogen until the gas pressure becomes extreme enough for fusion to ignite. Not just any fusion: rapid, runaway fusion.

Artist's impression of a nova eruption, showing the white dwarf accreting matter from its companion. (Image: K. Ulaczyk, Warsaw University Observatory)

When the rapid fusion ignites, we can see the light, and the new hydrogen atmosphere is expelled away from the white dwarf into space. In the past, astronomers thought these new bright lights were new stars, and the name "nova" stuck.

Astronomers now call these types of nova "classical" novae. (There are also recurrent novae, when the process repeats itself.)

This is an enormously energetic event that produces not only visible light but gamma rays and X-rays too. The end result is that some stars that could previously be seen only through a telescope become visible to the naked eye during a nova.

All of this is widely accepted in astronomy and astrophysics. But much of it is theoretical.

Recently, astronomers using the BRITE (BRIght Target Explorer) constellation of nanosatellites were fortunate enough to observe the entire process from start to finish, confirming the theory.

BRITE is a constellation of nanosatellites designed to "investigate stellar structure and evolution of the brightest stars in the sky and their interaction with the local environment," according to the website.

They operate in low-Earth orbit and have few restrictions on the parts of the sky that they can observe. BRITE is a coordinated project between Austrian, Polish, and Canadian researchers.

This first-ever observation of a nova was pure chance. BRITE had spent several weeks observing 18 stars in the Carina constellation. One day, a new star appeared. BRITE Operations Manager Rainer Kuschnig found the nova during a daily inspection.

"Suddenly there was a star on our records that wasn't there the day before," he said in a press release. "I'd never seen anything like it in all the years of the mission!"

Werner Weiss is from the Department of Astrophysics at the University of Vienna. In a press release, he emphasized the significance of this observation.

Panel A shows bright V906 Carinae, labelled with a white arrow. Panels B and C show the star before and after the nova. (Image: A. Maury and J. Fabrega)

"But what causes a previously unimpressive star to explode? This was a problem that has not been solved satisfactorily until now," he said.

The explosion of Nova V906 in the constellation Carina is giving researchers some answers and has confirmed some of the theoretical concepts behind novae.

V906 Carinae was first spotted by the All-Sky Automated Survey for Supernovae (ASAS-SN). Fortunately, it appeared in an area of the sky that BRITE had been observing for weeks, so the entire event was captured in BRITE's data.

"It is fantastic that for the first time a nova could be observed by our satellites even before its actual eruption and until many weeks later," says Otto Koudelka, project manager of the BRITE Austria (TUGSAT-1) satellite at TU Graz.

V906 Carinae is about 13,000 light years away, so the event is already history. "After all, this nova is so far away from us that its light takes about 13,000 years to reach the earth," explains Weiss.

The BRITE team reported their findings in a new paper titled "Direct evidence for shock-powered optical emission in a nova," published in the journal Nature Astronomy. The first author is Elias Aydi of Michigan State University.

"This fortunate circumstance was decisive in ensuring that the nova event could be recorded with unprecedented precision," explains Konstanze Zwintz, head of the BRITE Science Team, from the Institute for Astro- and Particle Physics at the University of Innsbruck.

Zwintz immediately realised "that we had access to observation material that was unique worldwide," according to a press release.

Novae like V906 Carinae are thermonuclear explosions on the surface of white dwarf stars. For a long time, astrophysicists thought that a nova's luminosity is powered by continual nuclear burning after the initial burst of runaway fusion. But the data from BRITE suggests something different.

In the new paper, the authors show that shocks play a larger role than thought. The authors say that "shocks internal to the nova ejecta may dominate the nova emission."

These shocks may also be involved in other events like supernovae, stellar mergers, and tidal disruption events, according to the authors. But up until now, there's been a lack of observational evidence.

"Here we report simultaneous space-based optical and gamma-ray observations of the 2018 nova V906 Carinae (ASASSN-18fv), revealing a remarkable series of distinct correlated flares in both bands," the researchers write.

Since those flares occur at the same time in both bands, they imply a common origin in shocks.

"During the flares, the nova luminosity doubles, implying that the bulk of the luminosity is shock powered." So rather than continual nuclear burning, novae are driven by shocks.

"Our data, spanning the spectrum from radio to gamma-ray, provide direct evidence that shocks can power substantial luminosity in classical novae and other optical transients."

In broader terms, shocks have been shown to play some role in events like novae. But that understanding is largely based on studying timescales and luminosities. This study is the first direct observation of such shocks, and is likely only the beginning of observing and understanding the role that shocks play.

In the conclusion of their paper the authors write: "Our observations of nova V906 Car definitively demonstrate that substantial luminosity can be produced - and emerge at optical wavelengths - by heavily absorbed, energetic shocks in explosive transients."

They go on to say that: "With modern time-domain surveys such as ASAS-SN, the Zwicky Transient Facility (ZTF) and the Vera C. Rubin Observatory, we will be discovering more - and higher luminosity - transients than ever before. The novae in our galactic backyard will remain critical for testing the physical drivers powering these distant, exotic events."

This article was originally published by Universe Today. Read the original article.

See the original post:

Astronomers Have Watched a Nova Go From Start to Finish For The First Time - ScienceAlert

Posted in Astronomy | Comments Off on Astronomers Have Watched a Nova Go From Start to Finish For The First Time – ScienceAlert

Five Essay Collections to Read in Quarantine – Willamette Week

Posted: at 6:43 pm

Make It Scream, Make It Burn, Leslie Jamison

Leslie Jamison knows how to write a good personal essay because she doesn't assume you want to read about her personally. This was true in her first collection, The Empathy Exams, and it is true in her second, Make It Scream, Make It Burn, which pieces together the things that interest Jamison most. In "Sim Life," Jamison examines our e-companions, those virtual characters we find ourselves strangely invested in. In "The Quickening," she reflects on the anxieties of pregnancy, at times addressing her unborn daughter directly, drawing the reader into the most private spaces of pre-parenthood. Each essay is an exercise in thoughtful restraint, never allowing itself to be confused for the work of a diarist.

Black Is the Body, Emily Bernard

On its most superficial level, Black Is the Body is a collection about storytelling within the family: as Bernard lays out in the subtitle, these are 12 stories from her grandmother's time, her mother's time, and her own. Beneath that, Black Is the Body is an expertly crafted collection about blackness in America, as only Bernard has lived it. One essay, "Interstates," documents the time when Bernard, her parents, and her white fiancé pulled over to change a flat tire, exposing the family to every prejudice that might pass them on the highway. Other stories examine the relationship between white and black life in the American South, two experiences "ensnared in the same historical drama."

Interior States, Meghan O'Gieblyn

There are some writers who leave the worlds of devout religion (worlds that are at once large, and impossibly small) and spare them no second thoughts, rejecting both the baby and the bathwater. Meghan O'Gieblyn's debut collection leaves no thoughts behind, turning to her upbringing in conservative evangelicalism for a series of essays offering razor-sharp cultural criticism on the state of American life. "Ghost in the Cloud," a particularly strong entry, sews together the parallel theologies of transhumanism (technology that works to avoid death) and Christian millennialism (salvation that works to avoid death). O'Gieblyn is unapologetic in her takes, producing wholly original commentary suited to these times.

Human Relations and Other Difficulties, Mary-Kay Wilmers

Mary-Kay Wilmers, one of the founders of the London Review of Books and its sole editor since 1979, has a lot to say about writing, and women, and the ways women write for themselves and for men. Human Relations and Other Difficulties is the product of a veteran career in book reviewing, and it shows: the essays are clever, frank, and delightfully readable. Some provide the literary commentary Wilmers is known for (on Joan Didion, Alice James, and Jean Rhys), while others turn inward, looking to Wilmers' own life as a child and a parent. "There's nothing magical about a mother's relationship with her baby," Wilmers writes of early motherhood. "Like most others, it takes two to get it going."

Upstream, Mary Oliver

If there were ever a time to renew your love for the natural world, as the late poet Mary Oliver did throughout her career, it's now. Upstream, a collection of essays published three years before Oliver's death, is the author in her purest form: reflecting on the beauty of codfish, grass, and seagulls on the beach. Life, as she writes about it, is precious in all things, without ever dipping into sentimentality. Oliver's meditation on her literary counterparts, including Walt Whitman, a childhood "friend," gives rare insight into the making of the poet, while other essays invite the reader to observe the outdoors with new eyes.

See the original post here:
Five Essay Collections to Read in Quarantine - Willamette Week

Posted in Transhumanist | Comments Off on Five Essay Collections to Read in Quarantine – Willamette Week

‘Ion Fury’ Coming to Consoles – Exclusively Games

Posted: at 6:43 pm

Developer Voidpoint, LLC and publisher 3D Realms took to Twitter to tease that their well-received retro-inspired action title Ion Fury will be coming to Nintendo Switch. When asked in follow-up tweets, they also confirmed the title will make its way to PlayStation 4 and Xbox One. They did not state a release date, though a May launch window has been floated in various places. Ion Fury was previously released in August 2019 on PC via Steam.

If you're unfamiliar with Ion Fury, it follows Shelly "Bombshell" Harrison, who earned her code name defusing bombs for the Global Defense Force. When an evil transhumanist mastermind by the name of Dr. Jadus Heskel unleashes the members of his cybernetic cult onto the streets of Neo DC, Shelly decides it's time to start causing explosions instead of preventing them. Shelly's quest leads players down a path full of carnage, gigantic explosions, levels with multiple pathways, tons of secrets to discover, and inhuman enemies out to stop you. There is no regenerative health; players will instead need to take cover and rely on run-and-gun tactics to defeat enemies. Other features are as follows:

Are you excited for Ion Fury coming to more platforms? Will you be picking it up? For those who've played it on PC, would you recommend it? For other indie titles to keep an eye on, make sure to check out VirtuaVerse, Shop Titans, and Jet Lancer. To stay up to date on Ion Fury, follow the developers on Twitter, Facebook, YouTube, and their official website.

Read more here:
'Ion Fury' Coming to Consoles - Exclusively Games

Posted in Transhumanist | Comments Off on ‘Ion Fury’ Coming to Consoles – Exclusively Games