

Construction robotics is changing the industry in these 5 ways – Robot Report

The SAM100 bricklaying robot at the Brighton Health Center South site of the University of Michigan Hospitals and Health Centers. Source: Construction Robotics

Until recently, construction was one of the least digitized and automated industries in the world. Many projects could be completed more efficiently with the help of the right construction robotics, mainly because the related tasks are incredibly repetitive.

While manual labor will likely always be a huge component of modern construction, technology has been steadily improving since the first pulleys and power tools. Robots, drones, autonomous vehicles, 3D printing, and exoskeletons are beginning to help get the work done. With low U.S. unemployment and shortages of skilled labor, automation is key to meeting demand and continued economic growth.

Construction robots may be involved in specific tasks, such as bricklaying, painting, loading, and bulldozing. "We expect hundreds of AMRs in the next two years, mainly doing haulage," said Rian Whitton, an analyst at ABI Research. These robots help protect workers from hazardous working environments, reduce workplace injuries, and address labor shortages.

Many potential solutions rely on artificial intelligence and machine learning to deliver unprecedented levels of data-driven support. For instance, a driverless crane could transport materials around a worksite, or an aerial drone could gather information on a worksite to be compared against the plan.

Here are just a few examples of how robotics is transforming construction.

An example of how construction robotics is revolutionizing the industry can be seen in the Hadrian X bricklaying machine from Australia-based FBR Ltd. (also known as Fastbrick Robotics). It employs an intelligent control system aided by CAD to calculate the necessary materials and movements for bricklaying.

Hadrian also measures environmental changes, such as movement caused by wind or vibrations, in real time. This data is then used to improve precision during the building process.

While Hadrian does require the use of proprietary blocks and adhesive, FBR noted that the related materials are 12 times bigger than standard house bricks and are lighter, stronger, and more environmentally sustainable.

Robots like Hadrian and SAM100 from Victor, N.Y.-based Construction Robotics promise to reduce operating costs and waste, as well as provide safer work environments and improve productivity. Hadrian can build the walls of a house in a single day, which is much faster than conventional methods.

While the major automakers and technology companies are working on self-driving cars, autonomous vehicles are already part of construction robotics.

Such equipment can transport supplies and materials. For instance, Volvo has been working on its HX2, an autonomous and electric load carrier that can move heavy loads without additional input. It has no driver cab and instead uses a digital logistics-driven control technology backed by what Volvo calls a vision system to detect humans and obstacles while on the move.

Another company, Built Robotics, which last month raised $33 million, offers autonomous bulldozers and excavators. AI guidance systems direct the equipment to their destinations and ensure that the necessary work is completed safely and accurately.

Autonomous vehicles and construction robotics are not intended to replace manual labor entirely, but to augment it and enhance efficiency. Safety improves as well, since automation reduces the potential for human error.

Construction robotics and drones using sensors such as lidar along with Global Positioning System technologies can provide vital information about a worksite. Combined with AI, this data can help predict what tasks are required.

Doxel Inc. makes a small tread-based robot that does exactly that. It scans and assesses the progress of a construction project by traversing the site. The information it collects is used to detect potential errors and problems early.

Doxel's data is stored in the cloud, where it's filtered through a deep-learning algorithm to recognize and assess more minute details. For example, the system might point out that a ventilation duct is installed incorrectly, and the early detection can allow for the proper correction well before costly revisions are needed.
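
The core of this approach is a geometric comparison between the as-built scan and the design model. A minimal sketch of that idea, assuming both are available as Nx3 point clouds in a shared coordinate frame (illustrative only, not Doxel's actual pipeline):

```python
# Sketch of plan-vs-scan deviation checking (illustrative; not
# Doxel's actual pipeline). Both clouds are Nx3 points in the
# same coordinate frame.
import numpy as np
from scipy.spatial import cKDTree

def flag_deviations(plan_points, scan_points, tolerance_m=0.05):
    """Return scanned points lying farther than tolerance_m from any
    point of the planned model -- likely misplaced elements."""
    dists, _ = cKDTree(plan_points).query(scan_points)
    return scan_points[dists > tolerance_m]

plan = np.array([[0.0, 0.0, 3.0], [1.0, 0.0, 3.0]])   # where ducts belong
scan = np.array([[0.0, 0.0, 3.0], [1.0, 0.2, 3.0]])   # one duct 20 cm off
print(flag_deviations(plan, scan))                     # -> [[1.  0.2 3. ]]
```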

Humans are still in the loop for much of construction robotics, combining the strengths of human supervision with multiple technologies. The Internet of Things, additive manufacturing, and digitization are contributing to the industry's growth, noted Caterpillar.

Painting drones are an excellent example: they can be controlled from a tablet or smartphone app, and the data they gather can be analyzed in the cloud.

Remote-control technology can also be applied to semi-autonomous vehicles. Project managers can use it to deliver instructions and orders to their workforce instantly.

Barcelona-based Scaled Robotics offers construction robotics that can be remotely controlled by mobile devices. The company's Husky unmanned ground vehicle can roam a construction site and capture critical information via multiple sensors. The data is transferred to the cloud, where it's used for building information modeling (BIM) of the project.

Before, during, and after a construction project, many assessments require the review of a worksite and surrounding area. Limited surveillance is also necessary for supervising workers and securing the site. In addition, project managers and supervisors must walk the site to conduct final inspections. Construction robotics and drones can help all of these processes.

Aerial drones and ground-based robots can survey a worksite and gather multiple types of data, depending on the sensors used. Augmented reality and virtual reality can enable operators to get a realistic and real-time feel for what the drones are seeing.

While donning a VR headset, for instance, viewers can see a live feed of captured video from the drone. More importantly, that immersive experience is provided remotely, so project managers don't even have to be on the job site to get an accurate assessment. The video feed is also recorded for playback at a later time, providing yet another resource.

Companies are already using drone technology to this end. In 2018, Chinese drone maker DJI announced a global partnership with Skycatch for a fleet of 1,000 high-precision custom drones to create 3D site maps and models of project sites.

The global market for construction robotics also represents a huge opportunity for developers and suppliers. It could grow from $22.7 million in 2018 to $226 million by 2025, predicts Tractica. Research and Markets estimates that the market will grow to $126.4 million by 2025.

According to the International Federation of Robotics and the Robotic Industries Association, the construction robotics market will experience a compound annual growth rate (CAGR) of 8.7% between 2018 and 2022. Research firm IDC is more bullish, predicting a CAGR of 20.2%.
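
For readers checking the arithmetic, a compound annual growth rate simply multiplies a starting value by (1 + rate) each year; note that the forecasts above come from different analysts and imply quite different rates. A quick illustrative calculation:

```python
# Illustrative CAGR arithmetic; the analyst forecasts above assume
# different starting points and growth rates.
def project(value_millions, cagr, years):
    return value_millions * (1 + cagr) ** years

# IDC's 20.2% CAGR applied to Tractica's 2018 estimate of $22.7M:
print(round(project(22.7, 0.202, 7), 1))  # -> 82.3 ($M by 2025)
```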

Automation and digitization are driving a revolution in the construction industry, which has historically been slow to adopt new technologies. From design through final inspection and maintenance, the full benefits of construction robotics have yet to be realized.

Follow this link:

Construction robotics is changing the industry in these 5 ways - Robot Report

First graders construct hand with robotics class – The Columbian

WASHOUGAL - Columbia River Gorge Elementary first graders and Jemtegaard Middle School students had the opportunity to dive into engineering when they worked together to construct a moving hand. The students were studying the human body, including the skeletal and muscular systems, and Jemtegaard Middle School science teacher Greg Lewis's robotics class helped. "Using paper hands along with string, straws and tape to represent muscles, bones and tendons, older students helped the younger students examine how these systems work together to make a hand move," Allison McGranahan, a first-grade teacher at Columbia River Gorge Elementary, said in a news release. "It is exciting to see these first-graders looking deeper into the study of a body part," first-grade teacher Sydney Termini said.

See more here:

First graders construct hand with robotics class - The Columbian

Waxahachie FIRST Robotics Team takes 2nd place at tournament – Waxahachie Daily Light

Daily Light report

Saturday, Oct. 19, 2019, at 11:33 AM

ROCKWALL - Students from the Career & Technical Student Organization, Waxahachie FIRST Robotics Team, represented the Waxahachie Independent School District at the NTX Tournament of Robots on Oct. 12-13 in Rockwall, Texas.

FIRST Robotics (For Inspiration and Recognition of Science and Technology) serves students enrolled in CTE courses aligned with the engineering and manufacturing STEM careers cluster. Participants enjoy the experience of applying classroom and laboratory lessons in hands-on activities and competitive events.

Waxahachie Robotics is a WISD district team, open to all high school students within the Waxahachie school district.

Students participating this season so far are all from Waxahachie Global High School: Camile Condron, Jacob Mendoza, Steven Cloud, Eddie Almaguer, Cole Shelby, Evan Ford, Brendon Blankenship, Talon Wilderman, Ashauntee Fairley, Conner Teague, Carl Bicada, Katherine Keys and Miles Charpentier.

The NTX Tournament of Robots consisted of 29 teams from three states, and Waxahachie Robotics took second place overall in this years contest.

Students will be traveling again in February and March to Dallas and Greenville to test their skills against competitors from across the nation and around the world. WISD proudly supports these students, teachers and organizations.

For more information about Waxahachie Robotics, contact Waxahachie Global High School at 972.923.4761 or email swarren@wisd.org or dmathiesen@wisd.org.

Read the rest here:

Waxahachie FIRST Robotics Team takes 2nd place at tournament - Waxahachie Daily Light

UT robotics team working to make new home for the Army Futures Command – KXAN.com

by: Russell Falcon, KXAN Staff

AUSTIN (KXAN) - The University of Texas is working to make a new home for the Army Futures Command on the Forty Acres.

UT is remaking a space inside the historic Anna Hiss Gym. Right now, robotics is spread out among four different departments.

"What the University of Texas here has created is a space where the faculty who are doing robotics, no matter what department, can come together and work together," said Mitch Pryor, director of the Robotics Center of Excellence.

The center will allow faculty to focus their efforts, as they work with the Army Futures Command on their robotics partnership.

The new lab is set to be completed in Spring 2021.

Visit link:

UT robotics team working to make new home for the Army Futures Command - KXAN.com

[Hearing from an AI Expert 5] At the Intersection of Robotics and Innovation – Samsung Newsroom

There is much anticipation these days around the field of robotics with its immense potential and promising future applications. However, a large gap exists between public expectations and what is actually deemed technically feasible by scientists and engineers today. Fortunately, Samsung's New York AI Center is buoyed by the presence of a team of highly skilled researchers, led by robotics and AI expert Dr. Daniel D. Lee, who are working to close this gap. Samsung Newsroom spoke with Dr. Lee about the work being done at the center, as well as the facility's ability to foster collaboration in a range of areas and attract top talent.

Asked about his center's mandate, Lee explains that the New York AI Center focuses on fundamental research at the intersection of AI, robotics and neuroscience. The center's objective is to solve challenging problems at this intersection, and one good example is the problem of robotic manipulation.¹

Put simply, robots need to become far more skillful before they are ready to help humans with physical tasks in their daily lives. The first step involves endowing robots with the intelligence to perceive and understand their surroundings. Next, they must be able to make swift decisions in unpredictable situations. Finally, robots should be dexterous and nimble enough to perform the appropriate actions. However, it is impossible for robot designers to anticipate every contingency robots will encounter in real world environments. Thus, robots need to be able to learn from experience just as humans do.

At this time, most common machine learning methods are not suitable for teaching robots since enormous amounts of training data are required. Lee explained that there are several challenges that need to be addressed regarding machine learning for robotics.

"Dealing with the physical world is much more difficult for AI than playing video games or Go," he explains. "We are currently developing AI learning methods that can deal with the uncertainty and diversity of the physical world so that robots become more prevalent in homes and workplaces. I would compare the state of robots today to computers in the 1980s during the transformation from mainframes to personal computers."

The New York AI Center is addressing such challenges to provide a richer AI and robotics experience. For instance, the center has recently developed novel AI methods that are able to efficiently teach robots using limited data. One recently-developed method trains a neural network to generate motion trajectories for a robot arm directly from camera images.
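
To make the idea concrete, here is a toy stand-in in PyTorch: a small network that maps one camera frame straight to a short sequence of joint targets. All sizes are assumptions for illustration; this is not Samsung's actual model:

```python
# A toy stand-in for the trajectory-from-images idea (sizes are made
# up; not Samsung's model): pixels in, joint waypoints out.
import torch
import torch.nn as nn

TRAJ_STEPS, JOINTS = 10, 7           # assumed trajectory length and arm DOF

net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(32 * 4 * 4, TRAJ_STEPS * JOINTS),
)

image = torch.randn(1, 3, 96, 96)                 # one RGB camera frame
trajectory = net(image).view(TRAJ_STEPS, JOINTS)  # joint angles per step
print(trajectory.shape)                           # -> torch.Size([10, 7])
```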

In order to allow robots to handle things for people, robots need to learn how to touch, grasp, and move a variety of everyday objects. Lee explains how the problem of dexterous robotic manipulation is an area of focus for the New York AI Center.

Lee comments that the ability of humans and some animals to manipulate household objects is currently unmatched by machines. "That's why we are investigating how AI-based solutions can be applied to make breakthroughs in this area." Extrapolating further, Lee explains that dexterous robotic manipulation requires the ability to precisely and robustly handle objects exhibiting uncertain material properties.

"Manipulation is relatively easy if the objects and environments are carefully controlled, such as on a factory floor," Lee reports. "But it becomes much more difficult in unknown, cluttered environments when faced with a diverse array of objects."

By way of an example, Lee lays out the capabilities that would be required for a robot to serve a chilled glass of wine in a restaurant. "How heavy is the glass, and how slippery is it due to condensation?" He adds, "It's impossible to completely model all the possible physical characteristics of the glass of wine, so machine learning is critical in training robots to handle the difficult situations."

As the AI sector has grown more sophisticated, it has become increasingly clear that collaborative solutions are critical for researchers to overcome the challenges they face. "In an area as complex and multi-faceted as robotic manipulation, contributions from and collaborations with the world's best and brightest will be instrumental," comments Lee. He highlights the value of working with both other Samsung AI Centers and academic institutions, saying that "solving fundamental problems in AI to positively impact society requires drawing upon the ability and skills of numerous experts globally."

He added, "The Samsung AI Centers invite collaborations with researchers who can help address these difficult challenges. We currently have a number of faculty from leading academic institutions who are collaborating with us in New York."

Lee highlights just how beneficial being located in New York has been for his team: "Certainly, New York City is one of the greatest and most diverse cities in the world. It is a magnet for world-class research and engineering talent."

Attracting the very best in talent is extremely important to remain on the bleeding edge of future AI advancements, and Lee reports that the center has been fortunate in this area, saying, "We have benefited from being able to attract and recruit some outstanding researchers since we started the Center."

"Our team is composed of expert scientists and engineers who are creating innovative theories and algorithms and state-of-the-art technological developments," Lee adds. "It's been great working with them to publish in leading academic conferences and journals as well."

Speaking about how he envisions robots will fit into society in the future, Lee points out that, in their infancy, some robots drew attention because they were cute and fun, but that people tended to use them less as the novelty wore off. In order for people to see robots as valuable and relevant, new systems need to have enough intelligence that they become indispensable in our daily lives.

"Intelligent robotic systems have the potential to completely revolutionize how people go about their activities in the future," Lee extrapolates. "In the near term, we will see modest improvements on simple tasks in constrained environments. But more complete systems that can handle a variety of chores and complex tasks will require further research breakthroughs. The Samsung AI Centers are helping to generate those new advances."

Asked to outline what he sees as the ultimate vision for AI and robotic intelligence, Lee says, "I grew up reading and watching science fiction stories that envisaged amazing robots helping humans. It would be incredible to see some of those positive visions actually come to life."

¹ The ability for robots to interact with and move physical objects in a range of environments.

Original post:

[Hearing from an AI Expert 5] At the Intersection of Robotics and Innovation - Samsung Newsroom

News Desk Wrapup: Quick Hits on Robot News for the Week – Robotics Business Review

Kamuthi Solar Project

It's amazing how quickly the week goes by when you're monitoring the world of robotics news. It's almost like we need a robot or AI around here to start producing more news copy (no, don't think about that just yet).

With most people thinking about their mid-October weekend plans, take one quick moment to see what else has been going on that I found interesting this week:

SenseHawk, which develops AI-powered software for the solar industry, said this week it benchmarked the condition of 2.5 million solar modules in record time at one of the world's largest solar sites, the Kamuthi Solar Power Project in India. Using drones with thermography imaging technology, as well as its cloud-based SenseHawk Therm software, the company was able to assess the solar site in less than three weeks, a job that would have taken several months if tackled manually. The site spans an area of 2,500 acres, the equivalent of 950 football fields, or about four square miles.

The software is able to detect hot spots, evaluate energy loss, schedule maintenance and track defects over time. The company said it could detect more than 99.9% of all hotspots on solar modules.
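
Conceptually, hotspot detection reduces to flagging regions of a thermal frame that run far hotter than their surroundings. A minimal sketch of that idea (illustrative only; not SenseHawk's actual algorithm):

```python
# Illustrative hotspot detection on a thermal frame (not SenseHawk's
# actual algorithm): flag pixels far hotter than the frame median.
import numpy as np

def find_hotspots(thermal_c, delta_c=15.0):
    """Return (row, col) coordinates more than delta_c degrees C
    above the median temperature of the frame."""
    return np.argwhere(thermal_c > np.median(thermal_c) + delta_c)

frame = np.full((4, 4), 35.0)    # healthy modules run around 35 C
frame[2, 3] = 60.0               # one overheating cell
print(find_hotspots(frame))      # -> [[2 3]]
```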

If you're in the New England area and want to hear some smart people talk about robots next week, head to the Robotics Engineering Research Symposium at Worcester Polytechnic Institute (WPI) on Tuesday, Oct. 22. Titled "Launching to a Robotic Future," the event will include robotics researchers from around the world, along with a reception highlighting the history of WPI's robotics program.

Speakers include Dr. Robert Howe (Harvard), Dr. Wolfram Burgard (Toyota Research Institute), Dr. Shigeki Sugano (Waseda University) and Dr. Al Rizzi (Boston Dynamics). Head here for more details.

Amazon announced this week it would return to Las Vegas in 2020 for the second edition of its re:MARS conference, which covers machine learning, automation, robotics and space topics. The event will be held June 16-19, 2020, with more details on the agenda and speakers announced in early 2020. Click here to read about my experiences at this year's re:MARS event, and why the robotics industry needs Amazon.

Yaskawa Motoman Americas employees at the Dayton, Ohio, headquarters.

The Motoman Robotics Division of Yaskawa America recently celebrated its 30th anniversary. Previously known as Motoman, the company was formed as a 50/50 joint venture between Hobart Brothers Company and Yaskawa Electric America, and officially began operations on Aug. 1, 1989. In 1994, Motoman Inc. became a wholly owned subsidiary of Yaskawa Electric Corp.

The company started with just 59 employees, and now has nearly 700 employees in 11 facilities throughout the U.S. (Dayton, Ohio; Detroit; Los Angeles; Austin, Texas; Birmingham, Ala.); Canada (Mississauga, Ontario; and Pointe-Claire, Quebec); Mexico (Aguascalientes; Apodaca, N.L.; and Querétaro); and Brazil (Diadema, São Paulo).

Vectis Automation has teamed up with Universal Robots to create the Vectis Cobot Welding Tool, aimed at helping manufacturers boost productivity by reducing the learning curve, deployment time, risk, and cost of robotic welding. The tool is powered by a UR10e cobot, and is also available as a low-risk, rent-to-own option. The two companies will show off the tool at the upcoming FABTECH show in Chicago, Nov. 11-14.

SkyOp, which develops drone training courseware for educational institutions, announced recently it was awarded a cooperative purchasing contract to make its SkyOp Drone Training Curriculum available to local school districts in New York through the Boards of Cooperative Educational Services (BOCES) program. Under the agreement, SkyOp will deliver its workforce-development STEM curriculum directly to local districts while the BOCES will provide support and training for teachers and district staff.

The curriculum, which includes more than 300 hours of instruction and coursework, covers topics such as an introduction to drones, Part 107 test preparation, hands-on drone flight training, drone photo and video production, and an intro to Pix4D, among others.

See more here:

News Desk Wrapup: Quick Hits on Robot News for the Week - Robotics Business Review

An Army of Tiny Robots Could Assemble Huge Structures in Space – Universe Today

We live in a world where multiple technological revolutions are taking place at the same time. While the leaps that are taking place in the fields of computing, robotics, and biotechnology are gaining a great deal of attention, less attention is being given to a field that is just as promising. This would be the field of manufacturing, where technologies like 3D printing and autonomous robots are proving to be a huge game-changer.

For example, there is the work being pursued by MIT's Center for Bits and Atoms (CBA). It is here that graduate student Benjamin Jenett and Professor Neil Gershenfeld (as part of Jenett's doctoral thesis work) are working on tiny robots that are capable of assembling entire structures. This work could have implications for everything from aircraft and buildings to settlements in space.

Their work is described in a study that recently appeared in the October issue of IEEE Robotics and Automation Letters. The study was authored by Jenett and Gershenfeld, who were joined by fellow graduate student Amira Abdel-Rahman and Kenneth Cheung, a graduate of MIT and the CBA who now works at NASA's Ames Research Center.

As Gershenfeld explained in a recent MIT News release, there have historically been two broad categories of robotics. On the one hand, you've got expensive robots made of custom components that are optimized for particular applications. On the other hand, there are those made from inexpensive mass-produced modules with lower performance.

The robots that the CBA team is working on, which Jenett has dubbed the Bipedal Isotropic Lattice Locomoting Explorer (BILL-E, like WALL-E), represent an entirely new branch of robotics. On the one hand, they are much simpler than the expensive, custom and optimized variety of robots. On the other, they are far more capable than mass-produced robots and can build a wider variety of structures.

At the heart of the concept is the idea that larger structures can be assembled by integrating smaller 3D pieces which the CBA team calls voxels. These components are made up of simple struts and nodes and can be easily fastened together using simple latching systems. Since they are mostly empty space, they are lightweight but can still be arranged to distribute loads efficiently.

The robots, meanwhile, resemble a small arm with two long segments that are hinged in the middle with a clamping device at each end that they use to grip onto the voxel structures. These appendages allow the robots to move around like inchworms, opening and closing their bodies in order to move from one spot to the next.

However, the main difference between these assemblers and traditional robots is the relationship between the robotic worker and the materials it is working with. According to Gershenfeld, it is impossible to distinguish this new type of robot from the structures they build, since they work together as a system. This is especially apparent when it comes to the robots' navigation system.

Today, most mobile robots require a highly precise navigational system to keep track of their position, such as GPS. The new assembler robots, however, need only know where they are in relation to the voxels (small subunits they are currently working on). When an assembler moves onto the next one, it readjusts its sense of position, using whatever it is working on to orient itself.

Each of the BILL-E robots is capable of counting its steps, which in addition to navigation allows it to correct any errors it makes along the way. Along with control software developed by Abdel-Rahman, this simplified process will enable swarms of BILL-Es to coordinate their efforts and work together, which will speed up the assembly process. As Jenett said:

"We're not putting the precision in the robot; the precision comes from the structure [as it gradually takes shape]. That's different from all other robots. It just needs to know where its next step is."
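
A minimal sketch of what such lattice-relative navigation could look like, where the robot tracks only an integer voxel index and periodically re-anchors against the structure itself (hypothetical code, not the CBA team's):

```python
# Hypothetical lattice-relative navigation (not the CBA team's code):
# the robot tracks only an integer voxel index on the structure it is
# building, so no GPS-grade global localization is needed.
class LatticeWalker:
    def __init__(self):
        self.cell = (0, 0, 0)          # position in voxel units

    def step(self, direction):
        """Take one inchworm step; direction is a unit offset like (1, 0, 0)."""
        self.cell = tuple(c + d for c, d in zip(self.cell, direction))

    def reanchor(self, observed_cell):
        """Correct accumulated step-count error against a recognized voxel."""
        self.cell = observed_cell

bot = LatticeWalker()
bot.step((1, 0, 0))
bot.step((0, 1, 0))
bot.reanchor((1, 1, 0))               # the structure itself confirms position
print(bot.cell)                        # -> (1, 1, 0)
```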

Jenett and his associates have built several proof-of-concept versions of the assemblers, along with corresponding voxel designs. Their work has now progressed to the point where prototype versions are able to demonstrate the assembly of the voxel blocks into linear, two-dimensional, and three-dimensional structures.

This kind of assembly process has already attracted the interest of NASA (which is collaborating with MIT on this research) and Netherlands-based aerospace company Airbus SE, which also sponsored the study. In NASA's case, this technology would be a boon for its Automated Reconfigurable Mission Adaptive Digital Assembly Systems (ARMADAS) project, which co-author Cheung leads.

The aim of that project is to develop the automation and robotic assembly technologies needed for deep-space infrastructure, including a lunar base and space habitats. In these environments, robotic assemblers offer the advantage of being able to assemble structures quickly and more cost-effectively. Similarly, they will be able to conduct repairs, maintenance, and modification with ease.

"For a space station or a lunar habitat, these robots would live on the structure, continuously maintaining and repairing it," says Jenett. Having these robots around will eliminate the need to launch large preassembled structures from Earth. When paired with additive manufacturing (3D printing), they would also be able to use local resources as building materials (a process known as In-Situ Resource Utilization, or ISRU).

Sandor Fekete is the director of the Institute of Operating Systems and Computer Networks at the Technical University of Braunschweig, Germany. In the future, he hopes to join the team in order to further develop the control systems. While developing these robots to the point that they will be able to build structures in space is a significant challenge, the applications they could have are enormous. As Fekete said:

"Robots don't get tired or bored, and using many miniature robots seems like the only way to get this critical job done. This extremely original and clever work by Ben Jenett and collaborators makes a giant leap towards the construction of dynamically adjustable airplane wings, enormous solar sails or even reconfigurable space habitats."

There is little doubt that if humanity wants to live sustainably on Earth or venture out into space, it is going to need to rely on some pretty advanced technology. Right now, the most promising of these are the ones that offer cost-effective ways of seeing to our needs and extending our presence across the Solar System.

In this respect, robot assemblers like BILL-E would not only be useful in orbit, on the Moon, or beyond, but also here on Earth. When similarly paired with 3D printing technology, large groups of robotic assemblers programmed to work together could provide cheap, modular housing that could help bring an end to the housing crisis.

As always, technological innovations that help advance space exploration can be tapped to make life on Earth easier as well!

Further Reading: MIT, IEEE


View post:

An Army of Tiny Robots Could Assemble Huge Structures in Space - Universe Today

If a Robotic Hand Solves a Rubik's Cube, Does It Prove Something? – The New York Times

This article is part of our continuing Fast Forward series, which examines technological, economic, social and cultural shifts that happen as businesses evolve.

SAN FRANCISCO - Last week, on the third floor of a small building in San Francisco's Mission District, a woman scrambled the tiles of a Rubik's Cube and placed it in the palm of a robotic hand.

The hand began to move, gingerly spinning the tiles with its thumb and four long fingers. Each movement was small, slow and unsteady. But soon, the colors started to align. Four minutes later, with one more twist, it unscrambled the last few tiles, and a cheer went up from a long line of researchers watching nearby.

The researchers worked for a prominent artificial intelligence lab, OpenAI, and they had spent several months training their robotic hand for this task.

Though it could be dismissed as an attention-grabbing stunt, the feat was another step forward for robotics research. Many researchers believe it was an indication that they could train machines to perform far more complex tasks. That could lead to robots that can reliably sort through packages in a warehouse or to cars that can make decisions on their own.

"Solving a Rubik's Cube is not very useful, but it shows how far we can push these techniques," said Peter Welinder, one of the researchers who worked on the project. "We see this as a path to robots that can handle a wide variety of tasks."

The project was also a way for OpenAI to promote itself as it seeks to attract the money and the talent needed to push this sort of research forward. The techniques under development at labs like OpenAI are enormously expensive both in equipment and personnel and for that reason, eye-catching demonstrations have become a staple of serious A.I. research.

The trick is separating the flash of the demo from the technological progress and understanding the limitations of that technology. Though OpenAI's hand can solve the puzzle in as little as four minutes, it drops the cube eight times out of 10, the researchers said.

"This is an interesting and positive step forward, but it is really important not to exaggerate it," said Ken Goldberg, a professor at the University of California, Berkeley, who explores similar techniques.

A robot that can solve a Rubik's Cube is not new. Researchers previously designed machines specifically for the task, devices that look nothing like a hand, and they can solve the puzzle in less than a second. But building devices that work like a human hand is a painstaking process in which engineers spend months laying down rules that define each tiny movement.

The OpenAI project was an achievement of sorts because its researchers did not program each movement into their robotic hand. That might take decades, if not centuries, considering the complexity of a mechanical device with a thumb and four fingers. The lab's researchers built a computer system that learned to solve the Rubik's Cube largely on its own.

"What is exciting about this work is that the system learns," said Jeff Clune, a robotics professor at the University of Wyoming. "It doesn't memorize one way to solve the problem. It learns."

Development began with a simulation of both the hand and the cube, a digital recreation of the hardware on the third floor of OpenAI's San Francisco headquarters. Inside the simulation, the hand learned to solve the puzzle through extreme trial and error. It spent the equivalent of 10,000 years spinning the tiles up, down, left and right, completing the task over and over again.

The researchers randomly changed the simulation in small but distinct ways. They changed the size of the hand and the color of the tiles and the amount of friction between the tiles. After the training, the hand learned to deal with the unexpected.
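
This technique is known as domain randomization, and it is simple to express in code: every training episode draws fresh physical parameters, so the policy never sees the same world twice. A minimal sketch with assumed parameter names and ranges (OpenAI's actual ranges differ):

```python
# Domain randomization in miniature (parameter names and ranges are
# assumptions, not OpenAI's): each episode draws a new physical world
# so the policy cannot overfit to any single simulation.
import random

def randomized_params():
    return {
        "hand_scale":    random.uniform(0.95, 1.05),  # size of the hand
        "tile_hue":      random.uniform(0.0, 1.0),    # color of the tiles
        "tile_friction": random.uniform(0.5, 1.5),    # friction between tiles
    }

for episode in range(3):
    params = randomized_params()
    # run_training_episode(policy, params)  # hypothetical training hook
    print(episode, params)
```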

When the researchers transferred this computer learning to the physical hand, it could solve the puzzle on its own. Thanks to the randomness introduced in simulation, it could even solve the puzzle when wearing a rubber glove or with two fingers tied together.

At OpenAI and similar labs at Google, the University of Washington and Berkeley, many researchers believe this kind of machine learning will help robots master tasks they cannot master today and deal with the randomness of the physical world. Right now, robots cannot reliably sort through a bin of random items moving through a warehouse.

The hope is that will soon be possible. But getting there is expensive.

That is why OpenAI, led by the Silicon Valley start-up guru Sam Altman, recently signed a billion-dollar deal with Microsoft. And it's why the lab wanted the world to see a demo of its robotic hand solving a Rubik's Cube. On Tuesday, the lab released a 50-page research paper describing the science of the project. It also distributed a news release to news outlets across the globe.

"In order to keep their operation going, this is what they have to do," said Zachary Lipton, a professor in the machine learning group at Carnegie Mellon University in Pittsburgh. "It is their lifeblood."

When The New York Times was shown an early version of the news release, we asked to see the hand in action. On the first attempt, the hand dropped the cube after a few minutes of twisting and turning. A researcher placed the cube back into its palm. On the next attempt, it completed the puzzle without a hitch.

Many academics, including Dr. Lipton, bemoaned the way that artificial intelligence is hyped through news releases and showy demonstrations. But that is not something that will change anytime soon.

"These are serious technologies that people need to think about," Dr. Lipton said. "But it is difficult for the public to understand what is happening and what they should be concerned about and what will actually affect them."

See the original post here:

If a Robotic Hand Solves a Rubik's Cube, Does It Prove Something? - The New York Times

CMR Surgical installs its first surgical robotics system – DOTmed HealthCare Business News

The first of CMR Surgical's Versius platforms has found its home at a center specializing in laparoscopy.

The surgical robotics system will now be used at Galaxy Care Hospital in Pune, India, in a wide range of surgical procedures that include transthoracic operations, hysterectomies and myomectomies.

"One of the key features that makes Versius a good fit for Galaxy Care Hospital is its modularity," Martin Frost, chief executive officer and co-founder of CMR Surgical, told HCB News. "Because the system is modular, the Versius system can be moved between operating theatres quickly and easily, increasing the opportunity for use and the cost-effectiveness of Versius. With our partnership with Galaxy Care, we're pleased to be able to make minimal access surgery available to more people globally."


The installation also marks the release of the first clinical registry for a surgical robotic system, according to CMR Surgical, which manages it in partnership with providers to record and monitor patient outcomes of all procedures involving Versius to ensure patient safety. Outcomes measured include operative time, length of stay, 30-day readmissions, and returns to the operating room within 24 hours. The creation of the registry stems from CMR Surgical's aim to provide post-market surveillance as part of the IDEAL (Idea, Development, Exploration, Assessment, Long-term study) framework, which calls for manufacturers to describe the stages of innovation in surgery and other interventional procedures.

"Our mission is to address the higher unmet need for flexible, surgical care," said Frost. "Versius is indicated for upper GI, general, gynaecological and colorectal procedures, and since its introduction into Galaxy Care Hospital, Versius has been used to complete transthoracic, hysterectomies and myomectomies, under the leadership of Dr. Shailesh Puntambekar."

CMR Surgical expects the system to gain traction in European and Asia-Pacific markets, and for it to become a competitor of the leading robotics system, the da Vinci system by Intuitive Surgical, in the near future.

It plans to have additional Versius Surgical Robotic Systems in use at hospitals across India and Europe by the end of 2019, including in ones that are a part of the National Health Service in the U.K.

Back to HCB News

See the original post:

CMR Surgical installs its first surgical robotics system - DOTmed HealthCare Business News

Open-Source Arm Puts Robotics Within Reach – Hackaday

In November 2017, we showed you [Chris Annin]'s open-source 6-DOF robot arm. Since then he's been improving the arm and making it more accessible for anyone who doesn't get to play with industrial robots all day at work. The biggest improvement is that AR2 had an open-loop control system, and AR3 is closed-loop. If something bumps the arm or it crashes, the bot will recover its previous position automatically. It also auto-calibrates itself using limit switches.

AR3 is designed to be milled from aluminium or entirely 3D printed. The motors and encoders are controlled with a Teensy 3.5, while an Arduino Mega handles I/O, the grippers, and the servos. In the demo video after the break, [Chris] shows off AR3's impressive control with a brief robotic ballet in which two AR3s move in hypnotizing unison.
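
The practical payoff of closed-loop control is that an encoder lets the controller detect and repair lost steps after a bump. A generic illustration of that feedback idea (a sketch, not [Chris]'s actual Teensy firmware):

```python
# Generic closed-loop recovery (illustrative; not [Chris Annin]'s
# actual firmware). With an encoder, the controller can detect a bump
# and command the missing steps; an open-loop drive cannot.
class Joint:
    def __init__(self, encoder_steps):
        self.encoder_steps = encoder_steps   # what the encoder reports

    def move_steps(self, n):
        self.encoder_steps += n              # stand-in for pulsing the stepper

def correct(joint, commanded_steps):
    error = commanded_steps - joint.encoder_steps
    if error:
        joint.move_steps(error)              # recover the lost position
    return error

j = Joint(encoder_steps=95)                  # bumped 5 steps short of 100
print(correct(j, 100), j.encoder_steps)      # -> 5 100
```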

[Chris] set up a site with the code, his control software, and all the STL files. He also has tutorial videos for programming and calibrating, and wrote an extremely detailed assembly manual. Between the site and the community already in place from AR2, anyone with enough time, money and determination could probably build one. Check out [Chris]'s playlist of AR2 builds; people are using them for photography, welding, and serving ice cream. Did you build an AR2? The good news is that AR3 is completely backward-compatible.

The AR3's grippers work well, as you'll see in the video. If you need a softer touch, try emulating an octopus tentacle.

Thanks for the tip, [Andrew]!

See the original post here:

Open-Source Arm Puts Robotics Within Reach - Hackaday

Rethink Robotics launches Sawyer Black Edition – Robotics and Automation News

Rethink Robotics, which is now part of the Hahn Group, has launched a new version of its collaborative robot, Sawyer.

The Sawyer Black Edition is now available for pre-order and will be presented for the first time at K 2019 in Düsseldorf.

The new hardware update comes one year after the takeover of Rethink Robotics' assets by the Hahn Group.

The Sawyer Black Edition is the result of the combination of German engineering and longstanding application experience.

Rethink says Sawyer is now quieter and has more reliable components with higher quality.

The Sawyer Black Edition can be pre-ordered now; first deliveries will take place in 2019.

Rethink says the Sawyer Black Edition contributes to a quieter working environment and is therefore even more popular among employees.

The company adds that the improved component quality of the Sawyer Black Edition significantly raises the collaborative robot's reliability.

At K 2019, the Sawyer Black Edition will be demonstrated at the Hahn Group at booth E61 in hall 10 as part of a palletizing application for the packaging of boxes and plastics parts.

With the Black Edition, Rethink Robotics continues to stand for easy application, flexibility in use and high acceptance among employees.

Possible applications include tasks that are dangerous for humans, among others: CNC machine assembly, circuit board assembly, metal processing, injection molding, packaging, loading and unloading, as well as tests and inspections.

Rethink says the Sawyer collaborative robot solution is ready for use immediately after delivery and equipped with Intera software and two camera systems.


View post:

Rethink Robotics launches Sawyer Black Edition - Robotics and Automation News

Robotics company offers $190,000 for the rights to your face – NEWS.com.au

Here's your chance to be the literal face of a robotics company.

A tech firm is looking for the right person to lend their likeness to a new line of robot assistants for the elderly. And while it might sound like the plot to a bad sci-fi flick, the company will pay the chosen candidate £100,000 (about A$190,000) for the privilege.

The privately funded firm has opted to remain anonymous due to the project's secretive nature, but it has hired robotics recruiter Geomiq to find the right face for the job, reports the Mirror.

RELATED: Sex robots are here, are they therapeutic or gross?

Ideal applicants will possess a "kind and friendly" face for the prototype, per the head, er, face hunter's recruitment ad. "It's a once-in-a-lifetime opportunity for the right person; let's hope we can find them," said a Geomiq spokesperson.

The lucky winner of the face-off will have their likeness reproduced on thousands of "virtual friends", à la Will Smith's disturbing 2004 movie I, Robot, as well as rake in the aforementioned big bucks. The project has been five years in the making.

RELATED: Robots already taking jobs

Designers haven't disclosed much beyond that, only that the robotic doppelgangers will hit the assembly line next year and will be readily available to the public upon completion.

On the application page, Geomiq acknowledges that licensing one's visage to an unnamed robotics company for eternity is "potentially an extremely big decision".

The face-cloning campaign has drawn flak from social media sceptics, with many of them analogising it to bad dystopian movie tropes. "Janelle Monáe warned us about this," cautioned one.

Others wondered why a supposedly tech-savvy robotics company needed a human face at all and couldn't just save money by using an online random-face generator. "Have these people ever heard of GANs?" asked one Twitter techie. "There are datasets with 100k realistic (but not real) faces available already."

This article originally appeared on the New York Post and was reproduced with permission

Is $190,000 enough for you to sell the rights to your appearance forever? Let us know what you think in the comments below.

See original here:

Robotics company offers $190,000 for the rights to your face - NEWS.com.au

Geek+ launches smart factory with ‘robots making robots’ – Robotics and Automation News

Geek+, a supplier of robotics and AI technologies for warehouses, has launched what it claims is the world's first smart factory to use robot arms to make mobile robots.

Based in Nanjing, the factory uses Geek+ robots, AI algorithms and other automated solutions to manufacture new Geek+ robots. All of the companys robots are produced at this factory.

With an increasing demand for customization and limited release products, product cycles are getting shorter and shorter, making flexible production an essential aspect of the manufacturing industry.

Autonomous mobile robots in factories are the best way to achieve flexible production and can also help companies realize a smart and agile supply chain.

With this objective, the company has launched its Geek+ Smart Factory Solution, using its Nanjing facility as a blueprint for flexible production and intelligent manufacturing.

Geek+ says it can adapt and implement the smart factory template to manufacturing facilities worldwide, and customize the solution to meet various production and industrial scenarios.

With over 200 projects across the world, Geek+ has gained considerable experience and data knowledge developing smart logistics solutions for warehousing and manufacturing environments.

It has developed new AI algorithms for scenarios spanning numerous industries, from retail and apparel, to manufacturing and pharmaceutical companies.

Through this, the company has built an ecosystem with international technology partners to develop a total solution for smart warehouses and smart factories, including AI vision, robot arms, the internet of things, production management systems, logistics management systems, big data analysis and advanced robotics.

With its Smart Factory Solution, Geek+ says it is helping customers upgrade their operations to an intelligent and agile supply chain.

Robots making robots

The Nanjing factory's output is almost double traditional manual production capacity, and single-shift annual production is designed to exceed 10,000 robots. The robots operate together under a production logistics management system, powered by Geek+'s AI technologies.

Once assembled, the new robots direct themselves to the calibration area to receive basic parameter settings.

They automatically complete final testing and finished-product inspection, after which they proceed directly to the finished-product area to be packaged, ready to ship.

Geek+ smart factory management system, powered by AI

To operate smart factories, Geek+ has developed a new integrated system, the Geek+ Production Logistics Management System. It powers all aspects of the facility, from inventory to the production line, integrating logistics and production into a flexible and efficient system. It connects the stock area with the production area and unifies the management of all the different robotic solutions.

This system replaces the traditional conveyor belt system with a new island production mode of autonomous mobile robots.

These production islands can be easily duplicated, making the solution completely flexible and scalable.

This new intelligent and flexible production model offers a real alternative to costly and rigid conveyor belts.

A game changer for intelligent manufacturing

Production capacity has almost doubled, compared to traditional manual production, with annual output expected to exceed 10,000 robots.

The new solution also guarantees more precise process control and higher accuracy, with a straight-through rate for final assembly exceeding 98 percent, and higher traceability of the whole process, which reduces overall management costs.

Yong Zheng, founder and CEO of Geek+, says: "Smart factories will be a turning point for the entire industry as they provide a truly proven alternative to traditional, fixed production and achieve flexible production."

"What better way to show to the world the value of our solutions than to apply it to our own production? Our Nanjing factory is a window into the future of intelligent logistics and manufacturing."

The Geek+ smart factory solution is applicable to a wide range of industries, including automobile manufacturers, auto part factories and 3C electronics factories.

It is particularly well suited for industries that require more flexible manufacturing processes, to keep up with the demand for new product lines and allow for capacity expansion.

"With smart factories, trial production of new products and product line transformation can be easily implemented," says Geek+.

Zheng says: "In the past four years, we have already developed and implemented game-changing technologies for warehousing operations. With smart factories, we continue to pave the way for a truly intelligent supply chain."


Originally posted here:

Geek+ launches smart factory with 'robots making robots' - Robotics and Automation News

Robotics startup wants to pay £100k to use a real human face on its robots – DIGIT.FYI

A robotics startup is offering £100,000 to use a real person's face on its robots. The unnamed company has contacted manufacturer Geomiq for help finding the ideal "kind and friendly" human face for its robots, described as "virtual friends" for elderly people.

The robotics startup said the need for anonymity is due to the secretive nature of the project. But production of the robots is expected to begin in 2020, and they will be readily available to the public, it added.

Geomiq said that the company is privately-funded and that the project has been in development for five years. It has since, apparently, taken on investment from a number of independent VCs, as well as a top fund based in Shanghai.

A spokesperson for Geomiq said: "At this point, we're not allowed to share any more details about the project, but we're hoping that someone with the right face will get in touch as a result of this public appeal."

"We know that this is an extremely unique request, and signing over the licenses to your face is potentially an extremely big decision. But it's a once-in-a-lifetime opportunity for the right person; let's hope we can find them."

Dr Kate Devlin, an author on the topics of AI and robotics, said: "I'm cool with the whole friendly robot thing. But I can't work out why a) it needs a realistically human face and, b) why that face needs to be of a real individual."

If you are interested in selling your face, you can apply here. Candidates who make it through the next phase will be given full details on the project, while unsuccessful candidates will not be contacted.



View original post here:

Robotics startup wants to pay £100k to use a real human face on its robots - DIGIT.FYI

Conference on Collaborative Robots, Advanced Vision and Artificial Intelligence Comes to San Jose November 12-13 – Business Wire

ANN ARBOR, Mich.--(BUSINESS WIRE)--Automation experts, and those who want to explore how to grow their business with the latest trends and innovations, will descend on San Jose November 12-13 for the Collaborative Robots, Advanced Vision & AI (CRAV.ai) Conference. Sponsored by the Association for Advancing Automation (A3), this conference is ideal for engineers and manufacturers seeking effective ways to reduce cost, improve quality and advance productivity, while increasing flexibility. CRAV.ai also holds appeal for experienced users seeking new applications or prospective users trying to determine if robotics, vision and artificial intelligence make sense for their companies. Registration for the conference is open at https://crav.ai.

"The automation industry continues to change and disrupt, with new innovations and new examples of automation solutions helping businesses around the world," said Jeff Burnstein, president, A3. "This conference in particular brings in some of the most influential minds in the space to share the technologies, trends and actionable insights that will help companies become more competitive. Come learn how to not get left behind in this increasingly automated world."

In addition to three in-depth tracks featuring dozens of sessions highlighting practical solutions and emerging technologies, the conference will feature the following keynotes:

Last year, CRAV.ai drew more than 500 attendees, including engineers and decision makers from companies like Google, Apple, Intel, Lockheed Martin, Toyota and many more.

The full agenda can be found here: https://crav.ai/agenda. Register at https://crav.ai.

About Association for Advancing Automation (A3)

The Association for Advancing Automation (A3) is the global advocate for the benefits of automating. A3 promotes automation technologies and ideas that transform the way business is done. A3 is the umbrella group for Robotic Industries Association (RIA), AIA - Advancing Vision + Imaging, Motion Control & Motor Association (MCMA) and A3 Mexico. RIA, AIA, MCMA and A3 Mexico combined represent over 1,250 automation manufacturers, component suppliers, system integrators, end users, research groups and consulting firms from throughout the world that drive automation forward. For more information, visit: A3, RIA, AIA, MCMA, A3 Mexico.

Upcoming A3 Events:

Collaborative Robots, Advanced Vision & AI Conference (CRAV.ai) - Nov. 12-13, 2019, San Jose, California
A3 Business Forum - Jan. 13-15, 2020, Orlando, Florida
Robotic Grinding & Finishing Conference - April 27-28, 2020, St. Paul, Minnesota
The Vision Show - June 9-11, 2020, Boston, Massachusetts
Automate - May 17-20, 2021, Detroit, Michigan

Visit link:

Conference on Collaborative Robots, Advanced Vision and Artificial Intelligence Comes to San Jose November 12-13 - Business Wire

How will AI and robotics transform jobs of the future? – Big Think

[The original article is an image gallery of walls and borders: East and West Berliners atop the recently opened Berlin Wall, early November 1989; the rich world, developed world, first world or Western world by another name, the walled world; the Demilitarized Zone (DMZ) between North and South Korea; the 'Valla' in Melilla, where Europe touches Africa; one of the 99 "Peace Walls" in Belfast, Northern Ireland; and the expansion of Morocco's Berm in six phases from 1982 to 1987.]

Read the original here:

How will AI and robotics transform jobs of the future? - Big Think

Realtime Robotics Scoops Up $11.7M in Series A Funding – Robotics Business Review

BOSTON Realtime Robotics, which is developing responsive motion planning for industrial robots and autonomous vehicles, today announced it raised $11.7 million in Series A Funding. Led by SPARX Asset Management, the round included participation from Mitsubishi Electric Corp., Hyundai Motor Company, and OMRON Ventures.

Existing investors Toyota AI Ventures, Scrum Ventures, and the Duke Angel Network also participated in the round. The company said the new funding will be used to accelerate the development of more commercial product releases and expand the team to support key customers and partners across the globe.

The company said its solutions can help eliminate obstacles to widespread adoption of advanced automation in the industrial, agriculture, food service, construction, healthcare, and consumer markets. "Despite the growing demand for automation, today's robots are not safe or smart enough to navigate in dynamic, unstructured environments without costly safeguards and oversight," the company said in a statement. "Realtime Robotics' solutions eliminate these challenges and enable robots to work at a productive pace."

The company's specialized computer processor and software enable machines, including industrial robots, collaborative robots, and autonomous vehicles, to evaluate millions of alternative motion paths, avoid collisions, and choose the optimal route before making a move, all in milliseconds. Realtime released its first commercial systems, RapidPlan and RapidSense, earlier this year.
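
The pattern behind this is sample, evaluate, select: generate many candidate paths, discard any that collide, and keep the cheapest survivor. A toy sketch of that loop (Realtime evaluates vastly more candidates in parallel on custom hardware; this is only illustrative):

```python
# Toy version of sample-evaluate-select motion planning (Realtime does
# this in milliseconds on custom hardware; this is only illustrative).
import numpy as np

def pick_path(start, goal, in_collision, n_candidates=1000, waypoints=5):
    rng = np.random.default_rng(0)
    best, best_len = None, np.inf
    t = np.linspace(0.0, 1.0, waypoints)[:, None]
    for _ in range(n_candidates):
        # a straight line perturbed by random via-points
        path = start + t * (goal - start)
        path[1:-1] += rng.normal(0.0, 0.2, (waypoints - 2, start.size))
        if any(in_collision(q) for q in path):
            continue                               # discard colliding candidates
        length = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
        if length < best_len:
            best, best_len = path, length          # keep the shortest survivor
    return best

start, goal = np.zeros(2), np.ones(2)
blocked = lambda q: np.linalg.norm(q - 0.5) < 0.2  # disc obstacle mid-route
print(pick_path(start, goal, blocked))
```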

Peter Howard, CEO, Realtime Robotics

"The commitment garnered from strategic investors reflects both the need and the demand for smarter robots," said Peter Howard, CEO of Realtime Robotics. "Our technology transforms the way machines interact with both people and other machines. Robots will now be able to take on a wide range of new tasks, and manufacturers will finally benefit from the productivity and efficiency gains that increased automation has promised, but failed to deliver."

Realtime Robotics was founded in 2016 by Duke University professors Dan Sorin and George Konidaris and researchers Sean Murray and Will Floyd-Jones. The company was built on their groundbreaking DARPA-funded research in motion planning.

Read more from the original source:

Realtime Robotics Scoops Up $11.7M in Series A Funding - Robotics Business Review

How this school designed a robotics program from the ground up – eSchool News

As a former computer engineer with a background in applied math, I'm a firm proponent of STEM education. As a math teacher with 14 years of experience facilitating robotics clubs for students, I'm also an ardent supporter of programming and robotics as a vehicle for STEM ed, so when I had the chance to build a K-5 robotics program from the lab up, I leapt at the opportunity.

Our school is a brand-new Title 1 campus. We're in our first year and just opened in August, so we're still tweaking and learning as we go, but we've developed a solid foundation for introducing students, even those who are very young, to a range of STEM and other concepts in an environment that feels more like fun than work. Here's how we did it.

When I was designing the robotics program, I wanted to make sure we were building a bridge from kindergarten all the way to 5th grade and beyond, so our program is designed to be progressive throughout the six years students are with us and to set them up for more advanced robotics in middle and high school, should they choose to pursue it.

Related: 11 educators share how they bring coding into the classroom

For kindergartners and first graders, we use two products: LEGO's STEAM Park and KinderLab's KIBO.

STEAM Park uses Duplo LEGO bricks and gears, pulleys, and other simple machines to help very young children begin to understand concepts like leverage, chain reactions, motion, measurement, and even buoyancy, which isn't usually introduced until 2nd grade.

Read more from the original source:

How this school designed a robotics program from the ground up - eSchool News

Tech company will pay $130K to put your face on a line of robots – New York Post

Here's your chance to be the literal face of a robotics company.

A tech firm is looking for the right person to lend their likeness to a new line of robot assistants for the elderly. And while it might sound like the plot of a bad sci-fi flick, the company will pay the chosen candidate about $130,000 for the privilege.

The privately funded firm has opted to remain anonymous due to the project's secretive nature, but it has hired robotics recruiter Geomiq to find the right face for the job, reports the Mirror. Ideal applicants will possess a "kind and friendly face" for the prototype, per the head, er, face hunter's recruitment ad. "It's a once-in-a-lifetime opportunity for the right person; let's hope we can find them," said a Geomiq spokesperson.

The lucky winner of the face-off will have their likeness reproduced on thousands of "virtual friends," à la Will Smith's disturbing 2004 movie I, Robot, as well as rake in the aforementioned big bucks. The project has been five years in the making.

Designers haven't disclosed much beyond that, only that the robotic doppelgängers will hit the assembly line next year and will be readily available to the public upon completion.

On the application page, Geomiq acknowledges that licensing one's visage to an unnamed robotics company for eternity is "potentially an extremely big decision."

The face-cloning campaign has drawn flak from social media skeptics, with many of them comparing it to bad dystopian movie tropes. "Janelle Monáe warned us about this," cautioned one.

Others wondered why a supposedly tech-savvy robotics company needed a human face at all and couldn't just save money by using an online random-face generator. "Have these people ever heard of GANs?" asked one Twitter techie. "There are datasets with 100k realistic (but not real) faces available already."

Visit link:

Tech company will pay $130K to put your face on a line of robots - New York Post

If a Robotic Hand Solves a Rubik's Cube, Does It Prove Something? – The New York Times

This article is part of our continuing Fast Forward series, which examines technological, economic, social and cultural shifts that happen as businesses evolve.

SAN FRANCISCO – Last week, on the third floor of a small building in San Francisco's Mission District, a woman scrambled the tiles of a Rubik's Cube and placed it in the palm of a robotic hand.

The hand began to move, gingerly spinning the tiles with its thumb and four long fingers. Each movement was small, slow and unsteady. But soon, the colors started to align. Four minutes later, with one more twist, it unscrambled the last few tiles, and a cheer went up from a long line of researchers watching nearby.

The researchers worked for a prominent artificial intelligence lab, OpenAI, and they had spent several months training their robotic hand for this task.

Though it could be dismissed as an attention-grabbing stunt, the feat was another step forward for robotics research. Many researchers believe it was an indication that they could train machines to perform far more complex tasks. That could lead to robots that can reliably sort through packages in a warehouse or to cars that can make decisions on their own.

"Solving a Rubik's Cube is not very useful, but it shows how far we can push these techniques," said Peter Welinder, one of the researchers who worked on the project. "We see this as a path to robots that can handle a wide variety of tasks."

The project was also a way for OpenAI to promote itself as it seeks to attract the money and the talent needed to push this sort of research forward. The techniques under development at labs like OpenAI are enormously expensive, both in equipment and personnel, and for that reason, eye-catching demonstrations have become a staple of serious A.I. research.

The trick is separating the flash of the demo from the technological progress and understanding the limitations of that technology. Though OpenAI's hand can solve the puzzle in as little as four minutes, it drops the cube eight times out of 10, the researchers said.

"This is an interesting and positive step forward, but it is really important not to exaggerate it," said Ken Goldberg, a professor at the University of California, Berkeley, who explores similar techniques.

A robot that can solve a Rubik's Cube is not new. Researchers previously designed machines specifically for the task, devices that look nothing like a hand, and they can solve the puzzle in less than a second. But building devices that work like a human hand is a painstaking process in which engineers spend months laying down rules that define each tiny movement.

The OpenAI project was an achievement of sorts because its researchers did not program each movement into their robotic hand. That might take decades, if not centuries, considering the complexity of a mechanical device with a thumb and four fingers. The lab's researchers built a computer system that learned to solve the Rubik's Cube largely on its own.

"What is exciting about this work is that the system learns," said Jeff Clune, a robotics professor at the University of Wyoming. "It doesn't memorize one way to solve the problem. It learns."

Development began with a simulation of both the hand and the cube, a digital recreation of the hardware on the third floor of OpenAI's San Francisco headquarters. Inside the simulation, the hand learned to solve the puzzle through extreme trial and error. It spent the equivalent of 10,000 years spinning the tiles up, down, left and right, completing the task over and over again.

The researchers randomly changed the simulation in small but distinct ways. They changed the size of the hand, the color of the tiles, and the amount of friction between the tiles. After this training, the hand learned to deal with the unexpected.

When the researchers transferred this computer learning to the physical hand, it could solve the puzzle on its own. Thanks to the randomness introduced in simulation, it could even solve the puzzle when wearing a rubber glove or with two fingers tied together.
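The technique described here is commonly known as domain randomization, and the sketch below shows its basic shape under stated assumptions: the parameter ranges, the simulator factory, and the policy interface (act, update) are all hypothetical stand-ins, while OpenAI's actual system used a full physics engine and large-scale reinforcement learning.

import random

# Domain randomization, sketched: train one policy across many slightly
# different simulated worlds so its behavior stops depending on any single
# setting. The simulator and policy interfaces below are hypothetical.

def randomized_params():
    """Sample a new variant of the simulated world for each training episode."""
    return {
        "hand_scale": random.uniform(0.95, 1.05),     # slightly larger or smaller hand
        "tile_friction": random.uniform(0.5, 1.5),    # slipperier or grippier tiles
        "tile_hue_shift": random.uniform(-0.1, 0.1),  # perturbed tile colors
    }

def train(policy, make_simulator, n_episodes=1_000_000):
    """Run many episodes, each in a freshly randomized world."""
    for _ in range(n_episodes):
        sim = make_simulator(**randomized_params())
        observation = sim.reset()
        done = False
        while not done:
            action = policy.act(observation)
            observation, reward, done = sim.step(action)
            policy.update(observation, reward)

Because no two training worlds are identical, the policy cannot latch onto one exact hand size or friction level, which is why the physical hand could cope with a rubber glove or tied fingers that it had never seen.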

At OpenAI and similar labs at Google, the University of Washington and Berkeley, many researchers believe this kind of machine learning will help robots master tasks they cannot handle today and deal with the randomness of the physical world. Right now, robots cannot reliably sort through a bin of random items moving through a warehouse.

The hope is that will soon be possible. But getting there is expensive.

That is why OpenAI, led by the Silicon Valley start-up guru Sam Altman, recently signed a billion-dollar deal with Microsoft. And it's why the lab wanted the world to see a demo of its robotic hand solving a Rubik's Cube. On Tuesday, the lab released a 50-page research paper describing the science of the project. It also distributed a news release to news outlets across the globe.

"In order to keep their operation going, this is what they have to do," said Zachary Lipton, a professor in the machine learning group at Carnegie Mellon University in Pittsburgh. "It is their lifeblood."

When The New York Times was shown an early version of the news release, we asked to see the hand in action. On the first attempt, the hand dropped the cube after a few minutes of twisting and turning. A researcher placed the cube back into its palm. On the next attempt, it completed the puzzle without a hitch.

Many academics, including Dr. Lipton, bemoaned the way that artificial intelligence is hyped through news releases and showy demonstrations. But that is not something that will change anytime soon.

"These are serious technologies that people need to think about," Dr. Lipton said. "But it is difficult for the public to understand what is happening and what they should be concerned about and what will actually affect them."

Read more from the original source:

If a Robotic Hand Solves a Rubik's Cube, Does It Prove Something? - The New York Times

