The evolution of robotics: research and application progress of dental implant robotic systems | International Journal of … – Nature.com

Implantology is widely considered the preferred treatment for patients with partial or complete edentulous arches.34,35 The success of the surgery in achieving good esthetic and functional outcomes is directly related to correct and prosthetically-driven implant placement.36 Accurate implant placement is crucial to avoid potential complications such as excessive lateral forces, prosthetic misalignment, food impaction, secondary bone resorption, and peri-implantitis.37 Any deviation during the implant placement can result in damage to the surrounding blood vessels, nerves, and adjacent tooth roots and even cause sinus perforation.38 Therefore, preoperative planning must be implemented intraoperatively with utmost precision to ensure quality and minimize intraoperative and postoperative side effects.39

Currently, implant treatment approaches are as follows: free-handed implant placement, static computer-aided implant placement, and dynamic computer-aided implant placement. The widely used free-handed approach provides less predictable accuracy and depends on the surgeon's experience and expertise.40 Deviation in implant placement varies considerably among surgeons with different levels of experience, and when novice surgeons face complex cases, achieving satisfactory results can be challenging. A systematic review41 based on six clinical studies indicated that the deviations of the platform, apex, and angle from the planned position with free-handed implant placement ranged from (1.25 ± 0.62) mm to (2.77 ± 1.54) mm, (2.10 ± 1.00) mm to (2.91 ± 1.52) mm, and 6.90° ± 4.40° to 9.92° ± 6.01°, respectively. Static guides can only provide accurate guidance for the initial implantation position; it remains difficult to precisely control the depth and angle of osteotomies.42 The lack of real-time feedback on drill positioning during surgery can limit the clinician's ability to obtain necessary information.42,43,44 Surgical guides may also inhibit cooling of the drills used for implant bed preparation, which can result in necrosis of overheated bone. Moreover, static guides are of limited use in patients with limited accessibility, especially when implants are placed in the posterior area, and they do not allow the implant plan to be adjusted flexibly intraoperatively. With dynamic computer-aided implant placement, the positions of the patient and the drills can be tracked in real time and displayed on a computer screen along with the surgical plan, allowing the surgeon to adjust the drilling path if necessary. However, without physical constraints, surgeons may deviate from the plan or prepare beyond it. During surgery, the surgeon may also focus more on the screen for visual information than on the surgical site, which can reduce tactile feedback.45 A meta-analysis showed that the platform, apex, and angular deviations were 0.91 mm (95% CI 0.79–1.03 mm), 1.26 mm (95% CI 1.14–1.38 mm), and 3.25° (95% CI 2.84°–3.66°), respectively, with static computer-aided implant placement, and 1.28 mm (95% CI 0.87–1.69 mm), 1.68 mm (95% CI 1.45–1.90 mm), and 3.79° (95% CI 1.87°–5.70°), respectively, with dynamic computer-aided implant placement. Both methods improved accuracy compared to free-handed implant placement, but neither achieved ideal accuracy.46 Gwangho et al.47 argue that the key steps of the operation are still completed manually by the surgeon, regardless of whether a static guide or dynamic navigation is used, and that human factors (such as hand tremor, fatigue, and unskilled operation techniques) also affect the accuracy of implant placement.

Robotic-assisted implant surgery can provide accurate implant placement and help the surgeon control handpieces to avoid dangerous tool excursions during surgery.48 Furthermore, compared to manual calibration, registration, and surgical execution, automatic calibration, registration, and drilling with a dental implant robotic system reduces human error. This, in turn, helps avoid deviations caused by surgeon-related factors, thereby enhancing surgical accuracy, safety, success rates, and efficiency while also reducing patient trauma.7 With continuous improvement of the technology and reduction of costs, implant robots are gradually becoming available for commercial use. Yomi (Neocis Inc., USA) has been approved by the Food and Drug Administration, while Yakebot (Yakebot Technology Co., Ltd., Beijing, China), Remebot (Baihui Weikang Technology Co., Ltd, Beijing, China), Cobot (Langyue dental surgery robot, Shecheng Co. Ltd., Shanghai, China), Theta (Hangzhou Jianjia robot Co., Ltd., Hangzhou, China), and Dcarer (Dcarer Medical Technology Co., Ltd, Suzhou, China) have been approved by the NMPA. Dencore (Lancet Robotics Co., Ltd., Hangzhou, China) is in the clinical trial stage in China.

Unlike surgeries performed under general anesthesia, dental implant surgery can be completed under local anesthesia, with the patient awake but unable to remain completely still throughout the entire procedure. Therefore, research on dental implant robotic systems, as one of the cutting-edge technologies, mainly focuses on acquiring intraoperative feedback information (including tactile and visual information), different surgical methods (automatic drilling and manual drilling), patient position following, and the simulation of the surgeon's tactile sensation.

The architecture of dental implant robotics primarily comprises the hardware used for surgical data acquisition and surgical execution (Fig. 4). Data acquisition involves perceiving, identifying, and understanding the surroundings and the information required for task execution through encoders, tactile sensors, force sensors, and vision systems. The real-time information obtained also includes the robot's surrounding environment, object positions, shapes, sizes, surface features, and other relevant information. The perception system helps the robot comprehend its working environment and facilitates corresponding decision-making and actions.

The architecture of dental implant robotics

During the initial stage of research on implant robotics, owing to the lack of sensory systems, fiducial markers and corresponding algorithms were used to calculate the transformation between the robot's and the model's coordinate systems, allowing the robot to determine the actual position through coordinate conversions. Dutreuil et al.49 proposed a new method for creating static guides on casts using robots based on the determined implant position. Subsequently, Boesecke et al.50 developed a surgical planning method using linear interpolation between start and end points, as well as intermediate points. The surgeon performed the osteotomies by holding the handpiece, with robot guidance based on the preoperatively determined implant position. Sun et al.51 and McKenzie et al.52 registered cone-beam computed tomography (CBCT) images, the robot's coordinate system, and the patient's position using a coordinate measuring machine, which facilitated the transformation of preoperative implant planning into intraoperative actions.

Neocis has developed a dental implant robot system called Yomi (Neocis Inc.)53 based on haptic perception, which connects a mechanical joint measurement arm to the patient's teeth to track their position. The joint encoders provide information on the drill position, while haptic feedback on the handpiece maneuvered by the surgeon constrains the direction and depth of implant placement.

Optical positioning is a commonly used localization method that offers high precision, a wide field of view, and resistance to interference,54 making it capable of providing accurate surgical guidance for robotics. Yu et al.55 combined image-guided technology with robotic systems. They used a binocular camera to capture two images of the same target, extracted pixel positions, and employed triangulation to obtain three-dimensional coordinates, enabling perception of the relative positional relationship between the end-effector and the surrounding environment. Yeotikar et al.56 suggested mounting a camera on the end-effector of the robotic arm, positioned as close to the drill as possible. By aligning the camera's center with the drill's line of sight at a specific height on the lower jaw surface, the camera's center accurately aligns with the drill's position in a two-dimensional plane at a fixed height from the lower jaw. This alignment guides the robotic arm in drilling through specific anatomical landmarks in the oral cavity. Yan et al.57 noted that the use of eye-in-hand optical navigation systems during surgery may introduce errors when changing the handpiece at the end of the robotic arm. Additionally, owing to the narrow oral environment, customized markers may fall outside the camera's field of view when the robotic arm moves to certain positions.42 To tackle this problem, a dental implant robot system based on optical-marker spatial registration and probe positioning strategies was designed. Zhao et al. constructed a modular implant robotic system based on binocular visual navigation devices operating on the principle of visible light in eye-to-hand mode, allowing complete observation of markers and handpieces within the camera's field of view and thereby ensuring greater flexibility and stability.38,58
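To illustrate the binocular triangulation principle that Yu et al. describe, the sketch below recovers a 3D point from matched pixel coordinates in two calibrated cameras using the standard linear (DLT) method. The projection matrices, pixel values, and millimeter scale are hypothetical stand-ins for illustration, not parameters of any cited system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (intrinsics @ [R | t]).
    x1, x2 : (u, v) pixel coordinates of the same target in each image.
    Returns the 3D point in the common (e.g. robot base) frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Hypothetical stereo rig: identical intrinsics, cameras 100 mm apart along X.
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
print(triangulate(P1, P2, (700, 400), (540, 400)))  # ~[37.5, 25, 500] mm
```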

The dental implant robotics execution system comprises hardware such as motors, force sensors, actuators, and controllers, as well as software components, to perform tasks and actions during implant surgery. The system receives commands, controls the robot's movements and behaviors, and executes the necessary tasks and actions. Presently, research on dental implant robotic systems primarily focuses on the mechanical arm structure and drilling methods.

The majority of dental implant robotic systems directly adopt serial-linked industrial robotic arms, building on the successful application of industrial robots with the same arm configuration.59,60,61,62 These studies not only establish implant robot platforms to validate implant accuracy and assess the influence of implant angles, depths, and diameters on initial stability, but also simulate chewing processes and prepare natural root-shaped osteotomies based on volume decomposition. Presently, most dental implant robots in research employ a single robotic arm for surgery. Lai et al.62 indicated that the stability of the handpiece during surgery and real-time feedback on patient movement are crucial factors affecting the accuracy of robot-assisted implant surgery. The former requires physical feedback, while the latter necessitates visual feedback. Hence, they employed a dual-arm robotic system in which the main robotic arm was equipped with multi-axis force and torque sensors for performing osteotomies and implant placement, while the auxiliary arm carried an infrared monocular probe for visual positioning, addressing visual occlusion issues arising from changes in arm angles during surgery.

The robots mentioned above use handpieces to execute osteotomies and implant placement. However, owing to limitations in patient mouth opening, performing osteotomies and placing implants in the posterior region can be challenging. To overcome these spatial constraints, Yuan et al.63 proposed a robot system based on their earlier research on laser-assisted tooth preparation, which uses a non-contact ultra-short-pulse laser to prepare osteotomies. The preliminary findings confirmed the feasibility of robotically controlled ultra-short-pulse lasers for osteotomies, introducing a novel approach for a non-contact dental implant robotic system.

It can be challenging for patients under local anesthesia to remain completely still during robot-assisted dental implant surgery.52,64,65,66,67 Any significant micromovement of the patient can severely affect clinical outcomes, including surgical efficiency, the accuracy of implant placement relative to the planned position, and patient safety. Intraoperative movement may necessitate re-registration for certain dental implant robotic systems. To guarantee safety and accuracy during surgery, the robot must detect any movement of the patient and promptly adjust the position of the robotic arm in real time. Yakebot uses binocular vision to monitor visual markers placed outside the patient's mouth and at the end of the robotic arm, capturing motion information and calculating relative position errors. The robot control system uses the preoperatively planned positions, visual and force feedback, and robot kinematic models to calculate optimal control commands for guiding the robotic arm's micromovements and tracking the patient's micromovements during drilling. As the osteotomy is performed to the planned depth, the robotic arm compensates for the patient's displacement through the position-following function. Yakebot's visual system continuously monitors the patient's head movement in real time and issues control commands every 0.008 s, and the robotic arm can follow the patient's movements with a motion servo within 0.2 s, ensuring precise and timely positioning.
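As a rough sketch of the position-following idea described above (not Yakebot's actual control software), the loop below re-expresses the planned drill pose in the tracked jaw frame on every cycle and commands the arm whenever the deviation exceeds a small threshold. The pose-reading and arm-command interfaces, the 0.05 mm threshold, and the cycle handling are all assumptions made for illustration.

```python
import time
import numpy as np

CYCLE = 0.008  # s, visual-command interval reported for Yakebot in the text

def follow_patient(get_jaw_pose, get_arm_pose, send_arm_target, planned_offset, stop_flag):
    """Minimal position-following loop: keep the drill at a fixed offset from the
    jaw marker so that patient micromovements are compensated.

    get_jaw_pose / get_arm_pose : callables returning 4x4 homogeneous poses
    send_arm_target             : callable accepting a 4x4 target pose
    planned_offset              : 4x4 transform from jaw frame to planned drill pose
    These interfaces are hypothetical placeholders, not a real robot API.
    """
    while not stop_flag():
        t0 = time.monotonic()
        jaw = get_jaw_pose()              # patient position from binocular tracking
        target = jaw @ planned_offset     # planned drill pose expressed in the base frame
        error = np.linalg.norm(target[:3, 3] - get_arm_pose()[:3, 3])
        if error > 0.05:                  # only re-command if deviation exceeds 0.05 mm
            send_arm_target(target)
        time.sleep(max(0.0, CYCLE - (time.monotonic() - t0)))
```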

Robot-assisted dental implant surgery requires the expertise and tactile sense of a surgeon to ensure accurate implantation. Experienced surgeons can perceive bone density through the resistance they feel in their hands and adjust the force magnitude or direction accordingly, ensuring that drilling proceeds along the planned path. Robotic systems that lack such perception and adaptive control may drift toward the side of the osteotomy with lower bone density, leading to inaccurate positioning compared with the planned implant position.61,62 Addressing this challenge, Li et al.68 established force-deformation compensation curves in the X, Y, and Z directions for the robot's end-effector based on the visual and force servo systems of the autonomous dental robotic system Yakebot. A corresponding force-deformation compensation strategy was then formulated for this robot, and the effectiveness and accuracy of force and visual servo control were demonstrated through in vitro experiments. This mixed control mode, which integrates visual and force servo systems, has improved the robot's accuracy in implantation and its ability to handle complex bone structures. Based on the force and visual servo control systems, Chen et al.69 also explored the relationship between force sensing and the primary stability of implants placed using the Yakebot autonomous dental robotic system through an in vitro study. A significant correlation was found between Yakebot's force sensing and the insertion torque of the implants, and this correlation conforms to an interpretable mathematical model, which facilitates predictable initial stability of the implants after placement.
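A minimal sketch of how per-axis force-deformation compensation of the kind Li et al. describe could be applied is shown below; the linear compliance coefficients and the sign convention are invented for illustration, and the published curves may well be nonlinear.

```python
import numpy as np

# Hypothetical per-axis compliance (mm of deflection per N), standing in for the
# calibrated force-deformation curves; the real curves may be nonlinear.
COMPLIANCE = np.array([0.002, 0.002, 0.004])   # x, y, z

def compensated_target(planned_xyz, measured_force_xyz):
    """Offset the commanded position against the predicted tool deflection
    so that the drill ends up on the planned point."""
    deflection = COMPLIANCE * np.asarray(measured_force_xyz, dtype=float)
    return np.asarray(planned_xyz, dtype=float) - deflection

# Hypothetical example: planned point (mm) and measured reaction force (N).
print(compensated_target([10.0, 0.0, -5.0], [3.0, -1.5, 8.0]))
```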

Heat production during osteotomies is considered one of the leading causes of bone tissue injury, and experienced surgeons can sense possible thermal exposure through hand feel. However, in free-handed implant placement it is challenging to perceive temperature changes during the procedure, and an effective temperature prediction model cannot be built on a surgeon's tactile sense alone. Zhao et al.,70 using the Yakebot robotic system, investigated the correlation between drilling-related mechanical data and heat production and established a clinically relevant surrogate for intraosseous temperature measurement based on signals captured by force/torque sensors. They also established a real-time temperature prediction model based on real-time force sensor readings. This model aims to prevent the adverse effects of high temperatures on osseointegration, laying the foundation for the dental implant robotic system to autonomously control heat production and prevent bone damage during autonomous robotic implant surgery.
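The sketch below shows the general shape of such a model: a least-squares fit that maps force/torque readings to a predicted intraosseous temperature and flags drilling conditions above a commonly cited bone-injury threshold. The training samples and the linear form are hypothetical placeholders, not the model published by Zhao et al.

```python
import numpy as np

# Hypothetical training samples: [thrust force (N), torque (N·cm)] -> peak temperature (°C).
X = np.array([[10, 2], [20, 4], [35, 7], [50, 10], [65, 14]], dtype=float)
y = np.array([31.0, 33.5, 37.0, 41.0, 46.5])

# Ordinary least squares with an intercept term, standing in for the real model.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_temperature(force_n, torque_ncm):
    return coef[0] * force_n + coef[1] * torque_ncm + coef[2]

temp = predict_temperature(55, 11)
print(f"predicted peak temperature: {temp:.1f} °C",
      "-> reduce feed rate / increase irrigation" if temp > 47.0 else "-> within tolerance")
# 47 °C is a commonly cited threshold for thermal bone injury.
```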

The innovative technologies mentioned above allow dental implant robotic systems to simulate the tactile sensation of a surgeon and even surpass the limitations of human experience. This advancement promises to address issues that free-handed implant placement struggles to resolve and indicates substantial progress and great potential for implant surgery.

Robot-assisted dental implant surgery consists of three phases: preoperative planning, the intraoperative phase, and the postoperative phase (Fig. 5). For preoperative planning, digital intraoral casts and CBCT data are obtained from the patient and imported into preoperative planning software for 3D reconstruction and implant placement planning. For single or multiple tooth gaps treated with implant robotic systems (except Yakebot),61,62,71,72 a universal registration device (such as a U-shaped tube) must be worn on the patient's missing tooth site using a silicone impression material preoperatively to acquire CBCT data for registration. The software performs virtual placement of implant positions based on prosthetic and biological principles of implant surgery, taking into account the bone quality of the edentulous site to determine the drilling sequence, the insertion depth of each drill, the speed, and the feed rate. For single or multiple tooth implants performed with Yakebot, preoperative CBCT imaging with markers is not needed. However, surgical accessories with registration holes, brackets for attaching visual markers, and devices for assisting mouth opening and suction must be designed within the software (Yakebot Technology Co., Ltd., Beijing, China). These accessories are manufactured using 3D printing.

Clinical workflow of robotic-assisted dental implant placement

In the intraoperative phase, the first step is registration and calibration. For Yakebot, the end-effector marker is mounted on the robotic arm, and its spatial positions are recorded by the optical tracker. The calibration plate with positioning points is then assembled onto the implant handpiece for drill-tip calibration. Next, the registration probe is inserted into the registration holes of the jaw positioning plate in turn for spatial registration of the jaw marker and the jaw. Robot-assisted dental implant surgery usually does not require flap elevation,73,74 although bone grafting due to insufficient bone volume in a single edentulous space, or complete edentulism requiring alveolar ridge preparation, may require raising a flap. For full-arch robot-assisted implant surgery, a personalized template with a positioning marker is required and should be fixed with metallic pins before an intraoperative CBCT examination, thus facilitating registration of the robot and the jaw in visual space and allowing the surgical robot to track the patient's motion. The ability to safely withdraw the robot from the surgical site is an essential principle of robot-assisted implant surgery. With most robots, such as Yomi, the surgeon holds the handpiece to control and supervise the robot's movement in real time and can stop the robotic arm in case of any accident. With Yakebot, the entire surgery is performed under the surgeon's supervision, and immediate instructions can be sent in response to possible emergencies via a foot pedal. Additionally, recording the paths for entering and exiting the patient's mouth ensures that the instruments do not damage the surrounding tissues. The postoperative phase comprises postoperative CBCT acquisition and accuracy measurement.
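At its core, the probe-based registration of the jaw marker to the jaw is a paired-point rigid registration problem. The sketch below implements the standard SVD (Kabsch) solution with hypothetical registration-hole coordinates; it is not any vendor's algorithm, and the fiducial registration error printed at the end is only a sanity check on the fit.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points
    (standard SVD/Kabsch solution for paired-point registration)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Hypothetical probed registration-hole coordinates (mm) in CBCT space and tracker space.
cbct_pts    = [[0, 0, 0], [10, 0, 0], [0, 12, 0], [0, 0, 8]]
tracker_pts = [[5, 5, 5], [15, 5, 5], [5, 17, 5], [5, 5, 13]]
R, t = rigid_register(cbct_pts, tracker_pts)
fre = np.linalg.norm((np.asarray(cbct_pts) @ R.T + t) - tracker_pts, axis=1).mean()
print(R, t, f"fiducial registration error ~ {fre:.3f} mm")
```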

In clinical surgical practice, robots with varying levels of autonomy perform implant surgeries differently. According to the autonomy levels classified by Yang et al.6,8,33 for medical robots, commercial dental implant robotic systems (Table 2) currently operate at the level of robot assistance or task autonomy.

Dental implant robotic systems at the robot-assistance level provide haptic,75 visual, or combined visual and tactile guidance during dental implant surgery.46,76,77 Throughout the procedure, surgeons must maneuver the handpiece attached to the robotic guidance arm and apply light force to prepare the osteotomies.62 The robotic arm constrains the drill within the 3D space defined by the virtual plan, while allowing surgeons to move the end of the mechanical arm horizontally or adjust its movement speed. However, during immediate implant placement or full-arch implant surgery, both surgeons and robots may struggle to accurately perceive poor bone quality, which should prompt adjustments at the time of implant placement; this can lead to final implant positions that deviate from the planned locations.

Task-autonomous dental implant robotic systems can autonomously perform parts of the surgical procedure, such as moving the handpiece to the planned position and preparing the implant bed at a predetermined speed according to the preoperative plan, while surgeons send instructions, monitor the robot's operation, and intervene as needed. For example, the Remebot77,78 requires surgeons to drag the robotic arm into and out of the mouth during surgery, and the robot automatically performs osteotomies or places implants in the planned positions under the surgeon's surveillance. The autonomous dental implant robot system Yakebot73,79,80 can accurately reach the implant site and complete operations such as implant bed preparation and placement during surgery. It is controlled by the surgeon using foot pedals, automatically stops drilling after reaching the termination position, and then returns to its initial position. Throughout the entire process, surgeons only need to send commands to the robot via the foot pedals.

Figure 6 shows the accuracy results of in vitro, in vivo, and clinical studies on robot-assisted implant surgery.20,46,48,55,62,64,67,68,69,70,71,72,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89 The results suggest that platform and apex deviation values are consistent across studies. However, there are significant variations in angular deviations, which may be attributed to differences in how robotic systems perceive and respond to variations in bone quality. Therefore, future development should focus on enhancing the autonomy of implant robots and improving their ability to recognize and respond to complex bone structures.
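For reference, the platform, apex, and angular deviations compared throughout this section are typically computed from the planned and placed implant platform/apex coordinates, as in the sketch below; the coordinates in the example are hypothetical.

```python
import numpy as np

def implant_deviation(planned_platform, planned_apex, actual_platform, actual_apex):
    """Standard accuracy metrics: 3D platform deviation, 3D apex deviation (mm),
    and angular deviation (degrees) between the planned and placed implant axes."""
    pp, pa = np.asarray(planned_platform, float), np.asarray(planned_apex, float)
    ap, aa = np.asarray(actual_platform, float), np.asarray(actual_apex, float)
    v_plan, v_act = pa - pp, aa - ap
    cosang = np.dot(v_plan, v_act) / (np.linalg.norm(v_plan) * np.linalg.norm(v_act))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return np.linalg.norm(ap - pp), np.linalg.norm(aa - pa), angle

# Hypothetical planned vs. placed coordinates in mm.
print(implant_deviation([0, 0, 0], [0, 0, -11], [0.4, 0.2, 0.1], [0.9, 0.3, -10.8]))
```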

Accuracy reported in studies on robotic-assisted implant placement

Xu et al.77 conducted a phantom study comparing implant placement accuracy across three levels of dental implant robots: a passive robot (Dcarer, level 1), a semi-active robot (Remebot, level 2), and an active robot (Yakebot, level 2) (Fig. 7). The active robot had the lowest deviations between the planned and actual implant positions at the platform and apex, while the semi-active robot had the lowest angular deviations. Chen et al.46 and Jia et al.79 conducted clinical trials of robotic implant surgery in partially edentulous patients using a semi-active dental implant robotic system (level 1) and an autonomous dental implant robot (level 2). The deviations of the implant platform, apex, and angle were (0.53 ± 0.23) mm/(0.43 ± 0.18) mm, (0.53 ± 0.24) mm/(0.56 ± 0.18) mm, and 2.81° ± 1.13°/1.48° ± 0.59°, respectively. These results consistently confirmed that robotic systems can achieve higher implant accuracy than static guidance and that there is no significant correlation between accuracy and implant site (such as anterior or posterior sites). The platform and angular deviations of the autonomous dental implant robot were smaller than those of the semi-active dental implant robotic system. Li et al.73 reported the use of an autonomous dental implant robot (level 2) to place two adjacent implants with immediate postoperative restoration. The interim prosthesis, fabricated prior to implant placement, was seated without any adjustment, and no adverse reactions occurred during the operation.

Comparison of accuracy of dental implant robotics with different levels of autonomy (phantom experiments) (*P<0.05, **P<0.01, ***P<0.001)

Bolding et al.,53 Li et al.,20 Jia et al.,79 and Xie et al.90 used dental implant robots in clinical trials of full-arch implant surgery with five or six implants placed in each jaw. The deviations of the implant platform, apex, and angle are shown in Fig. 8. The haptic dental implant robot (level 1) used by Bolding et al.53 showed larger deviations than the studies that used semi-active (level 1) or active (level 2) robots. Because its handpiece must be maneuvered by the surgeon, human errors such as surgeon fatigue cannot be ruled out. Owing to the parallel implant placement paths shared by the various implant abutments, prefabricated temporary dentures could be seated smoothly, and some patients wore temporary complete dentures immediately after surgery. These results indicate that robotic systems can accurately locate and place implants during surgery.

Comparison of accuracy in robotic-assisted full-arch implant placement

As there are relatively few studies of implant robots in clinical applications, Takács et al.91 conducted a meta-analysis of in vitro studies of free-handed, static-guided, dynamic-navigated, and robotic-assisted implant placement, as shown in Fig. 9. It found that robotic-assisted implant placement offers accuracy advantages over free-handed, static-guided, and dynamic-navigated placement. However, in vitro studies cannot fully simulate the patient's oral condition and bone quality. Recent clinical studies89,92,93 have shown lower deviations with robotic-assisted implant placement than with static-guided and dynamic-navigated placement. Common reasons for deviations in static-guided and dynamic-navigated placement include deflection caused by hand tremor in dense bone during surgery, the surgeon's experience, and other human factors. Larger clinical studies will be needed to evaluate the differences between robotic and conventional surgical approaches and to guide the further development and refinement of robotic techniques.

Comparison of accuracy of free-handed, static, dynamic, and robotic-assisted implant placement. (FHIP free-hand implant placement, SCAIP static computer-aided implant placement, DCAIP dynamic computer-aided implant placement, RAIP robot-assisted implant placement)

Regarding the long-term performance of robotic systems used in dental implant procedures, none of the comparative studies lasted longer than a year. A 1-year prospective clinical study by Xie et al.90 showed that peri-implant tissues remained stable at the 1-year visit after robot-assisted full-arch surgery. There is little evidence on clinical outcomes, especially patient-reported outcomes, and more detailed clinical assessment should be included in further research.

Although robotic-assisted dental implant surgery can improve accuracy and treatment quality,94 it involves complex registration, calibration, and verification procedures that prolong the duration of surgery. These tedious processes may introduce new errors61 and lower work efficiency, especially in single-tooth implant placement,62 which could extend visit times and affect patient satisfaction.62 Besides, surgeons are required to undergo additional training to familiarize themselves with the robotic system.87

During implantation, the drill tips at the end of the robotic arms cannot be tilted, and this can increase the difficulty of using robots in posterior sections with limited occlusal space.61,62 In addition, currently available marker systems require patients to wear additional devices to hold the marker in place. If these markers are contaminated or obstructed by blood, the visual system may not be able to detect them, limiting surgical maneuverability to some extent. During immediate implant placement or in cases of poor bone quality in the implant site, the drill tips may deviate towards the tooth sockets or areas of lower bone density, seriously affecting surgical precision.

Currently, only one study has developed a corresponding force-deformation compensation strategy for robots,68 but clinical validation is still lacking. Additionally, the dental implant robotic system, along with other dental implant robots developed for prosthetics, endodontics, and orthodontics, is currently single-functional. Multi-functional robots are required for performing various dental treatments.

Despite the enormous potential of robotic systems in the medical field, similar to the development of computer-aided design/computer-aided manufacturing technology, introducing and applying this technology faces multiple challenges in the initial stages. The high cost of robotic equipment may limit its promotion and application in certain regions or medical institutions. Surgeons require specialized technical training before operating robotic systems, which translates to additional training costs and time investment.95


Nvidia Announces Robotics-Oriented AI Foundational Model – InfoQ.com

At its recent GTC 2024 event, Nvidia announced a new foundational model to build intelligent humanoid robots. Dubbed GR00T, short for Generalist Robot 00 Technology, the model will understand natural language and be able to observe human actions and emulate human movements.

According to Nvidia CEO Jensen Huang, creating intelligent humanoid robots is the most exciting AI problem today. GR00T robots will learn coordination and other skills by observing humans to be able to navigate, adapt and interact with the real world. At the conference keynote, Huang showed several demos of what GR00T is capable of at the moment, including some robots performing a number of tasks.

The GR00T model takes multimodal instructions and past interactions as input and produces the actions for the robot to execute.

To power GR00T, Nvidia has created a new family of systems-on-modules, called Jetson Thor, using the latest Blackwell graphics architecture from the company and able to provide 800 teraflops (TFLOPS) of eight-bit floating-point compute.

At the foundation of GR00T lies Nvidia Isaac Sim, an extensible, Omniverse-based platform for robotics simulation aimed to improve the way AI-based robots are designed and tested, according to the company.

To train GR00T at scale, Nvidia has also built a new compute orchestration platform, Nvidia Osmo, aimed at coordinating training and inference across several Nvidia systems, including DGX systems for training, OVX systems for simulation, and IGX and AGX systems for hardware-in-the-loop validation.

Embodied AI models require massive amounts of real and synthetic data. The new Isaac Lab is a GPU-accelerated, lightweight, performance-optimized application built on Isaac Sim specifically for running thousands of parallel simulations for robot learning.

While GR00T is still very much a work in progress, Nvidia has announced two of the building blocks that will compose it, as part of the Isaac platform: a foundational model for robotic-arm manipulators, called Isaac Manipulator, and a collection of hardware-accelerated packages for visual AI and perception, the Isaac Perceptor.

According to Nvidia, Isaac Manipulator

provides up to an 80x speedup in path planning and zero-shot perception increases efficiency and throughput, enabling developers to automate a greater number of new robotic tasks.

On the other hand, Isaac Perceptor aims to improve efficiency and safety in environments where autonomous mobile robots are used, such as in manufacturing and fulfillment operations.

Both the Manipulator and the Perceptor should become available in the next quarter, says Huang.

On a related note, Nvidia has joined the Open Source Robotics Alliance, which aims to provide financial and industry support to the Robot Operating System (ROS). The company has not detailed if they plan to use ROS for GR00T robots, though.


Pioneering Emotional Intelligence in Robotics: The Rise of Emo – yTech

In a breakthrough for robotics and artificial intelligence (AI), a robot named Emo stands as a testament to technological ingenuity, possessing the capability to learn and replicate human emotional expressions. This development marks a significant stride in narrowing the emotional divide between humans and machines, potentially reshaping the way we interact with robots in a multitude of sectors.

Core Innovation Behind Emo's Emotional Acuity

Emo's core innovation lies in its dual neural network architecture, which empowers the robot with unprecedented emotional intelligence. By utilizing advanced cameras and motor systems, Emo can observe and assimilate human expressions. Over time, its capacity to respond in contextually relevant ways improves, making human-robot interactions increasingly natural and seamless.

Professor Hod Lipson and his team are the visionaries behind Emo's conceptualization and realization. Their work paves the way for a future where robots can forge emotional bonds with humans, setting a new benchmark in social robotics.

Potential for Transformative Impact Across Industries

The ripple effect of Emo's introduction is vast, with implications for customer service, therapy, elder care, and education. It foretells significant growth within the social robotics market, with affordable manufacturing techniques on the horizon and analysts predicting robust market development bolstered by the integration of empathetic robots into everyday life.

Navigating the Ethical Considerations of Advanced Robotics

Notwithstanding the advancements and promise of Emo's technology, ethical questions loom. Issues surrounding emotional authenticity, privacy, and employment disruption accentuate the need for conscientious deployment of such robots. This underscores the importance of engaging with ethics-focused organizations like IEEE and ACM, which strive to establish standards that balance technological progress with societal well-being.

In summary, Emo represents a fusion of AI and emotional perception, potentially revolutionizing human-robot interaction and industry practices. Its advent warrants thoughtful consideration of the ethical landscape as we embrace the age of emotionally intelligent machines. The robotic companion's evolution and the industry's path forward will be characterized by ethical vigilance, research brilliance, and insightful analysis, jointly shaping the role of robotics in our future.

Expanding the Market Forecast for Emotionally Intelligent Robots

The global market for social and emotional robotics is expected to experience substantial growth over the coming years. According to a report by MarketsandMarkets, the social robot market in particular is expected to rise from USD 918 million today to over USD 3,900 million within the next decade, expanding at a CAGR of 14.5% during the forecast period. This growth is fueled by increasing adoption in sectors such as personal assistance, education, and healthcare, where such robots can perform tasks ranging from companionship to assisting with cognitive therapy and rehabilitation.

The emergence of robots like Emo will spur further research and development, reducing costs and enhancing functionalities. This will likely attract investment and increase the accessibility of these robots, thus making them more commonplace in both consumer and commercial environments.

Challenges and Controversies Within the Robotics Industry

Despite these promising market forecasts, the robotics industry faces challenges and controversies that could affect the emotional intelligence sector. One of the primary concerns is job displacement, as robots become capable of performing tasks typically reserved for human workers. This could lead to significant shifts in the labor market and necessitate retraining for those whose jobs are affected.

Another key consideration is data privacy and security, especially with robots that can collect and analyze personal emotional data. Ensuring that this information is used responsibly and securely is paramount to maintaining public trust.

For research, development, and the establishment of standards in robotics, resources can be found through organizations such as IEEE and ACM.

Summary and Industry Outlook

In conclusion, Emo exemplifies the potential for emotion recognition in robotics to drive innovation across various sectors. The social and emotional robot industry is anticipated to flourish, bringing advances in how these machines are integrated into our daily lives. As the industry progresses, it will be essential to monitor market dynamics, foster ethical practices, and encourage responsible innovation, thereby ensuring that the evolution of robots like Emo contributes positively to society.

The success of products like Emo and the industry's trajectory will rely heavily on striking a balance between innovation and the humane, ethical application of technology. Thought leaders, developers, and policymakers will need to collaborate to navigate these challenges successfully. Trends in the robotics industry point toward a future where emotionally intelligent machines become an integral part of the fabric of society, enhancing human life while addressing the ethical implications of such profound technological integration.

Leokadia Gogulska is an emerging figure in the field of environmental technology, known for her groundbreaking work in developing sustainable urban infrastructure solutions. Her research focuses on integrating green technologies in urban planning, aiming to reduce environmental impact while enhancing livability in cities. Gogulska's innovative approaches to renewable energy usage, waste management, and eco-friendly transportation systems have garnered attention for their practicality and effectiveness. Her contributions are increasingly influential in shaping policies and practices towards more sustainable and resilient urban environments.


I tried using ChatGPT to help me move across the country – Mashable

When you hit your 20-somethings, nobody tells you how to adult. So now, four months away from the end of my lease, I need to figure out how to move across the country for the first time by myself. I could ask my parents, but where's the fun in that? I am a big boy, after all. This means, obviously, as a reporter for an esteemed tech outlet, the solution is artificial intelligence.

I mean why not, right?

Big Tech has spent billions of dollars trying to find meaningful ways for us to incorporate generative AI into our lives. So, why not use generative AI as my personal assistant and financial planner during my cross-country move from Austin to Chicago?

In theory, moving to a new city is an ideal test of the tools OpenAI claims ChatGPT to be good at, especially now that it can access the internet, accept uploaded attachments and photos, and be custom-built for specific needs. If ChatGPT can't ease some of my burdens when it comes to budgeting, searching for, financing, and driving to a new apartment that's more than 1,100 miles away, then perhaps it's not worth the GPUs it's built with.

Even before we look at apartments, I need ChatGPT to help me save money. On top of paying rent and utilities between now and June 1, I also started paying back my student loans in January, which runs me a cool $200 a month until the 2040s.

My goal is to paint a broad picture of what I need to do financially to have the money to make my move as stress-free as possible. ChatGPT and, thus, this experiment is inherently limited because the AI can't do all the financing for me. As much as I would love, in theory, for this AI to take care of my budgeting fully, it can't; nor do I feel comfortable allowing OpenAI to have access to my sensitive financial data.

Truly, this might be way above ChatGPT's pay grade considering it's a conversational AI with a hallucination problem (and not an arithmetic machine), but I gave both AIs the prompt specifying what I'm trying to achieve and asked it to calculate "how much I need to save from each paycheck to reasonably move to Chicago."

The AI was game to help.

You gotta be as specific as possible. The AIs don't appreciate having to do financial guesswork. Credit: OpenAI via screenshot

The big challenge was getting numbers that were realistic to my current situation especially when it came to dates. Between the time this was written and when my July 1 move-in date arrives, I'll receive 7 paychecks. During testing, if my prompt didn't specify July 1, 2024, ChatGPT would assume I meant July 1, 2025, and calculate for a year's worth of money. And even when I am specific about the dates, the AIs may still just hallucinate random numbers, like when ChatGPT randomly calculated for 10 pay periods instead of the 7 I'd specified.

The math was a little spotty, but with some tweaking to the prompts, ChatGPT gave me the ideal number that I should save based on my income and recurring payments. This matched up with the math I did by hand, which admittedly doesn't mean much; that's why my degree is in journalism and not STEM.

Now that I know how much I need to save, I need to get a shortlist of places that fit within my budget. My range for rent for my next apartment is $1,000-$1,500. I'm not looking for anything fancy in the Windy City, but a studio/1 bed with an in-unit washer/dryer would be perfect.

Unfortunately, OpenAI has slowly wound down plugins so we're going to have to rely on CustomGPTs, an evolved version of plugins that allows users to create their own chatbots, to specify our real estate needs for this experiment. I used three different CustomGPTs: Apartment/House Rental Finder, Apartment Scout, and Apartment Finder.

So far, the running trend with these GPTs and this experiment, in general, is I need to be extremely specific which ruins the fun for me, because I'm trying to offload all my thinking to the robot. It's clearly a worse experience than just Googling. For instance, Apartment Scout gives a bulleted list of neighborhoods, and when I tell it which one I like, it goes:

It looks like I'm encountering some technical difficulties accessing the specific rental listings directly from the sources I usually rely on. However, you can find suitable studio or 1-bedroom apartments within your price range and criteria by checking out popular rental websites such as Zillow, Apartments.com, and Realtor.com.

What do you mean go somewhere else? Credit: OpenAI via screenshot

On another attempt, it provides a few links to actual apartment listings.

Now it wants to act right. Credit: OpenAI via screenshot

At least this GPT provides a link for me, unlike Apartment/House Rental Finder, which uses Craigslist to find apartments that specify my needs and then will tell me to go there myself if I ask for links to the apartments it listed.

Very helpful. Credit: OpenAI via screenshot

Ultimately, Apartment Finder was also not of much help and also told me to go somewhere else to find what I needed.

Credit: OpenAI via screenshot

The results (or lack thereof) are not surprising. ChatGPT was designed to tell the user whatever will make the user feel happy or smart, rather than be any sort of leader or authority, so if you are unsure about the broad strokes of where you want to live, then you're not gonna have much luck.

I guess I'll stick to late-night scrolling on Zillow.

The final step of this move is the packing and planning of the drive. One of the perks of ChatGPT is that you can upload pictures now, so I decided to see if showing ChatGPT my room would help it give me advice on how to pack up my belongings for the move.

Short answer: It didn't. Long answer: It didn't, and instead gave me generic advice on how to rip apart my room:

First, declutter the room, separating items to keep, donate, or discard. Gather moving supplies: boxes, bubble wrap, packing paper, tape, and markers. Pack items from least used to most used, labeling boxes with contents and destination room. Dismantle furniture, wrapping pieces in moving blankets or bubble wrap. Protect the electronics with bubble wrap and place them in boxes. Roll up the rug and secure with tape. Pack the artwork with bubble wrap and place in picture boxes. Use wardrobe boxes for hanging clothes and suitcases for others. Pack a box of essentials separately. Clean the room once emptied.

This advice, especially that neat tidbit about rolling up the rug in my room, wasn't all that useful (hint: my floor is carpeted).

Don't mind my room, it's a little dirty. Credit: OpenAI via screenshot

Okay, well, that was a bust, so what about the drive? Unsurprisingly, this is where ChatGPT was the most helpful. I gave it two scenarios: one where I use a rented U-Haul to drive to Chicago, and another where I keep my car. According to the U-Haul website and the math done by ChatGPT, renting and driving a U-Haul costs around $700+.

The mover's dilemma: Sell all my stuff or spend $700 on a U-Haul? Credit: OpenAI via screenshot

If I drive my car, a 2012 Dodge Durango, the cost of just the gas is only around $240.

After calculating costs, ChatGPT gave me some routes to take. Of course, that information is superfluous since I can use Google Maps anyway, but it's nice to know now where my little adventure will take me. According to my friend from Illinois, the drive from Austin to Chicago is not great, so I'm glad ChatGPT told me to have some podcasts on deck to keep me entertained.

Here's the TL;DR: Don't use ChatGPT to plan a move across the country.

It's not that ChatGPT can't be helpful, cause it can. The chatbot helped me visualize a broad overview of my finances and gave me some useful tips and tricks for packing and route-planning. However, you need to be so hyper-specific with the prompts that all that time tinkering could be spent, you know, planning your move yourself.

Wanna use the CustomGPTs to help find apartments? Sorry, they'll just tell you to use Zillow. Wanna use ChatGPT to pin down how much to save out of your paychecks? Unless you're willing to get into the weeds about your financial security, good luck getting it to not just make shit up and even then it still might. Of course, these chatbots aren't designed to do life for you, but this exercise was somehow more frustrating than I thought it would be.

I guess I'll call my parents for help after all.


ANYmal robot excels in parkour feats thanks to neural network training – Interesting Engineering

The agility of ANYmal, a dog-like robot, has been enhanced with a new framework, enabling it to complete a basic parkour course at up to 6 feet (2 meters) per second. Parkour, the urban sport of navigating obstacles with athleticism, is gaining widespread popularity.

The modified learning approach, which emphasizes crawling, jumping, climbing, and crouching, may enable the robot to crawl under and vault over physical obstacles while conducting search and rescue operations.


AI productivity tools can help at work, but some make your job harder – The Washington Post

In a matter of seconds, artificial intelligence tools can now generate images, write your emails, create a presentation, analyze data and even offer meeting recaps.

For about $20 to $30 a month, you can now have the AI capabilities in many of Microsoft's and Google's work tools. But are AI tools such as Microsoft Copilot and Gemini for Google Workspace easy to use?

The tech companies contend they help workers with their biggest pain points. Microsoft and Google claim their latest AI tools can automate the mundane, help people who struggle to get started on writing, and even aid with organization, proofreading, preparation and creating.

Of all working U.S. adults, 34 percent think that AI will equally help and hurt them over the next 20 years, according to a survey released by Pew Research Center last year. But a close 31 percent aren't sure what to think, the survey shows.

So the Help Desk put these new AI tools to the test with common work tasks. Here's how it went.

Ideally, AI should speed up catching up on email, right? Not always.

It may help you skim faster, start an email or elaborate on quick points you want to hit. But it also might make assumptions, get things wrong or require several attempts before offering the desired result.

Microsoft's Copilot allows users to choose from several tones and lengths before drafting. Users create a prompt for what they want their email to say and then have the AI adjust based on changes they want to see.

While the AI often included the desired elements in the response, it also often added statements we didn't ask for in the prompt when we selected short and casual options. For example, when we asked it to disclose that the email was written by Copilot, it sometimes added marketing comments like calling the tech cool or assuming the email was interesting or fascinating.

When we asked it to make the email less positive, instead of dialing down the enthusiasm, it made the email negative. And if we made too many changes, it lost sight of the original request.

"They hallucinate," said Ethan Mollick, associate professor at the Wharton School of the University of Pennsylvania, who studies the effects of AI on work. "That's what AI does: make up details."

When we used a direct tone and short length, the AI produced fewer false assumptions and more desired results. But a few times, it returned an error message suggesting that the prompt had content Copilot couldn't work with.

Using Copilot for email isn't perfect. Some prompts were returned with an error message. (Video: The Washington Post)

If we entirely depended on the AI, versus making major manual edits to the suggestions, getting a fitting response often took multiple if not several tries. Even then, one colleague responded to an AI-generated email with a simple response to the awkwardness: LOL.

"We called it Copilot for a reason," said Colette Stallbaumer, general manager of Microsoft 365 and future of work marketing. "It's not autopilot."

Google's Gemini has fewer options for drafting emails, allowing users to elaborate, formalize, or shorten. However, it made fewer assumptions and often stuck solely to what was in the prompt. That said, it still sometimes sounded robotic.

Copilot can also summarize emails, which can quickly help you catch up on a long email thread or cut through your wordy co-worker's mini-novel, and it offers clickable citations. But it sometimes highlighted less relevant points, like reminding me of my own title listed in my signature.

The AI seemed to do better when it was fed documents or data. But it still sometimes made things up, returned error messages, or didn't understand context.

We asked Copilot to use a document full of reporter notes, which are admittedly filled with shorthand, fragments, and run-on sentences, and asked it to write a report. At first glance, the result was convincing, suggesting the AI had made sense of the messy notes. But on closer inspection, it was unclear whether anything actually came from the document, as the conclusions were broad, overreaching, and not cited.

If you give it a document to work off, it can use that as a basis, Mollick said. It may hallucinate less but in more subtle ways that are harder to identify.

When we asked it to continue a story we started writing, providing it a document filled with notes, it summarized what we had already written and produced some additional paragraphs. But, it became clear much of it was not from the provided document.

"Fundamentally, they are speculative algorithms," said Hatim Rahman, an assistant professor at Northwestern University's Kellogg School of Management, who studies AI's impact on work. "They don't understand like humans do. They provide the statistically likely answer."

Summarizations were less problematic, and the clickable citations made it easy to confirm each point. Copilot was also helpful in editing documents, often catching acronyms that should be spelled out, punctuation issues, or wordiness, much like a beefed-up spell check.

With spreadsheets, the AI can be a little tricky, and you need to convert data to a table format first. Copilot more accurately produced responses to questions about tables with simple formats. But for larger spreadsheets with categories and subcategories or other complex breakdowns, we couldn't get it to find relevant information or accurately identify the trends or takeaways.

Microsoft says one of users' top places to use Copilot is in Teams, the collaboration app that offers tools including chat and video meetings. Our test showed the tool can be helpful for quick meeting notes, questions about specific details, and even a few tips on making your meetings better. But typical of other meeting AI tools, the transcript isn't perfect.

First, users should know that their administrator has to enable transcriptions so Copilot can interact with the transcript during and after the meeting, something we initially missed. Then, in the meeting or afterward, users can use Copilot to ask questions about the meeting. We asked for unanswered questions, action items, a meeting recap, specific details, and how we could've made the meeting more efficient. It can also pull up video clips that correspond to specific answers if you record the meeting.

The AI was able to recall several details, accurately list action items and unanswered questions, and give a recap with citations to the transcript. Some of its answers were a little muddled, like when it confused the name of a place with the location and ended up with something that looked a little like word salad. It was able to identify the tone of the meeting (friendly and casual with jokes and banter) and censored curse words with asterisks. And it provided advice for more efficient meetings: For us that meant creating a meeting agenda and reducing the small talk and jokes that took the conversation off topic.

Copilot can be used during a Teams meeting and produce transcriptions, action items, and meeting recaps. (Video: The Washington Post)

Copilot can also help users make a PowerPoint presentation, complete with title pages and corresponding images, based on a document in a matter of seconds. But that doesn't mean you should use the presentation as is.

A document's organization and format seem to play a role in the result. In one instance, Copilot created an agenda with random words and dates from the document. Other times, it made a slide with just a person's name and responsibility. But it did better with documents that had clear formats (think an intro and subsections).

Google's Gemini can generate images like this robot. (Video: The Washington Post)

While Copilot's image generation for slides was usually related, sometimes its interpretation was too literal. Google's Gemini can also help create slides and generate images, though more often than not when trying to create images, we received a message that said, "for now we're showing limited results for people. Try something else."

AI can aid with idea generation, drafting from a blank page, or quickly finding a specific item. It also may be helpful for catching up on emails and meetings and summarizing long conversations or documents. Another nifty tip: Copilot can gather the latest chats, emails, and documents you've worked on with your boss before your next meeting together.

But all results and content need careful inspection for accuracy, some tweaking, or deep edits, and both tech companies advise users to verify everything generated by the AI. "I don't want people to abdicate responsibility," said Kristina Behr, vice president of product management for collaboration apps at Google Workspace. "This helps you do your job. It doesn't do your job."

And as is the case with AI, the more details and direction in the prompt, the better the output. So as you do each task, you may want to consider whether AI will save you time or actually create more work.

The work it takes to generate outcomes like text and videos has decreased, Rahman said. But the work to verify has significantly increased.


| The Code Block CES 2024, Cyborgs, and Fingerprints | by Joshua Lin | The Code Block | Jan, 2024 – Medium

Bartesian's Robot Bartender

The potential for full-life automation might be closer than you think. Robot bartenders, ice cream whenever you want within 2 minutes, and automated smart grills? Put on your hat, Alfredo Linguine: today is the day anyone can become a chef, and you don't even need a rat to do it!

Chef AI's one-touch air fryer: All you need to do is put in the food that you're trying to cook, and the air fryer will detect the components inside and do its thing! Presumably available in September for $250.

Bartesian's robot bartender: Capable of holding 4 types of spirits and automatically creating up to 60 different drinks. Said to retail for $369 later this year.

ColdSnap's no-clean ice cream: Automated ice cream within 2 minutes of the craving. Users can choose between multiple flavors before it dispenses cold, creamy goodness.

Brisk It's easy-grill: I think you get the point. It does its job and you don't have to do anything.

With AI and automation, everything in life is about to get much easier. (AP News)

Wearables and performance-enhancing technology is starting to enter the scene.

Leave your physical limitations behind with Hypershell's new Pro-X exoskeleton, which makes hiking and traversing nature even easier. The device uses AI to sense your movements and adjust accordingly, helping you carry more weight in your backpack and reducing strain on leg muscles, while weighing about 4.5 lbs (2 kilograms). Users are said to be able to reach speeds of 12 miles per hour.

Another CES-showcased product was the Evie ring. It is a discreet ring meant to track health, primarily for women, covering periods, ovulation, steps, sleep, mood, menstrual cycles, and many more metrics! A female analyst's dream, I'd assume. Evie says the ring has 4+ days of battery life, and orders are now open.

Looks like we're one step closer to full-tech augmentation.

Whodunit?

An undergraduate senior at Columbia wanted to test whether fingerprints are truly unique. He used a deep contrastive network to analyze 60,000 fingerprints and found that different fingers of the same person shared strong similarities. Guo believes this contradicts the notion that each fingerprint is completely unique.

In evaluation, the model had an accuracy rate of 77% when using one fingerprint from an individual to determine if another fingerprint belonged to the same person.

The practical use of this discovery in criminal investigations is debated, with some suggesting that it could help generate new leads in cold cases. This technology could potentially help solve very niche cases, where officials find one print at one scene, and another print at another.

Whether or not it's practical, there is always the possibility that these researchers are on the right track in finding hidden patterns and features that humans may have overlooked.



This Cartoonish New Robot Dog Somehow Looks Even Scarier


Dog Days

We've come across plenty of robot dogs over the years that can dance, speak using ChatGPT, or even assist doctors in hospitals.

But they all have one thing in common: they look like lifeless machines on four stilts.

In an apparent effort to put the "dog" back into "robodog," a Chinese robotics company called Weilan recently showed off an entirely new class of robotic quadruped called "BabyAlpha": essentially half cartoon dog and half robot.

The company may have overshot its goal a little bit, though, ending up with an even more terrifying-looking machine that looks like it belongs in a "M3GAN"-esque horror flick.

Robot's Best Friend

The small robot canine has a spotted head, a cute little nose, and two floppy-looking ears.

According to the company's website, which we crudely translated using Google, the robot is "especially designed for family companionship scenarios."

"BabyAlpha likes to be by your side," the website reads adding that the little robot has "endless technological superpowers" thanks to AI. Not creepy at all!

Weilan is also positioning its pet as a way to teach children either English or Chinese, or to keep track of younger family members through a video call tool.

But we can't shake the feeling that BabyAlpha is exactly the kind of thing that kickstarts a series of unfortunate events in a shlocky horror movie.

In case you do trust your children to be around a BabyAlpha, the companion will cost the equivalent of around $1,700 when it goes on sale.


