Virtual reality shows how fracking affects Argentinian river basin – Stockholm Environment Institute

A recent research conference at Argentina's National University of Comahue (UNCo) featured a new virtual reality (VR) experience that allows users to explore how local unconventional hydrocarbon production, known as fracking, affects the water supply in the Comahue river basin.

SEI scientists Laura Forni, Romina Díaz Gómez and Marina Mautner are working together with UNCo on the research, which was featured in a news article about the virtual reality project in Río Negro.

Argentina's Vaca Muerta region, located in the Patagonian south, is home to the world's second-largest shale gas reserves and the fourth-largest shale oil deposits, where gas production is outpacing the growth of infrastructure to accommodate it.

SEI and UNCo are investigating how this gas production might pose risks to the water supply, agricultural production, and the population that depends on them. The researchers use remote sensing, or scanning performed by satellites and high-flying aircraft, to map the fracking wells' proximity to rivers, farms, neighborhoods and cities. That data informed the interactive VR exhibit, accessible by VR goggles, at UNCo's conference.

"For the first time we are going to have all the information centralized and we are going to be able to visualize all the components at the level of the Comahue Basin," Díaz Gómez told Río Negro.

As climate change progresses, the region's water supply is expected to decline, while fracking increases the wastewater generated. The team's research on this topic indicates that shale operations increase pressure on the water supply and pose a risk to water quality.

The researchers hope the VR project will help educate the public about fracking's impact on local populations and ecosystems, as well as promote the use of remote sensing to produce such data.


Digital Twin and Metaverse: The Future of Virtual Reality – NASSCOM Community

The concept of virtual reality (VR) has come a long way since its inception. With the advancements in technology, it has become possible to create virtual environments that are almost indistinguishable from the real world. Two recent concepts that are gaining traction in the world of VR are digital twin and metaverse. In this article, we will explore what these terms mean, how they are related to each other, and what the future of VR might look like with their implementation.

Table of Contents

Introduction

What is Virtual Reality?

Digital Twin: The Concept and Its Applications

The Rise of Metaverse

The Relation between Digital Twin and Metaverse

How Digital Twin and Metaverse will Change the Future of VR

Advantages of Digital Twin and Metaverse

Challenges and Risks Associated with Digital Twin and Metaverse

The Future of Virtual Reality

Conclusion

Introduction

Virtual reality has come a long way since the first crude head-mounted display was developed in the 1960s. With the advancements in technology, it is now possible to create fully immersive virtual environments that are almost indistinguishable from the real world. Two recent concepts that are gaining popularity in the world of VR are digital twin and metaverse. In this article, we will explore what these concepts are, how they are related to each other, and how they will shape the future of VR.

What is Virtual Reality?

Virtual reality is a computer-generated simulation of a three-dimensional environment that can be interacted with in a seemingly real way by a person using special electronic equipment, such as a head-mounted display, gloves, or a bodysuit. VR can be used for various purposes, such as entertainment, education, and training.

Digital Twin: The Concept and Its Applications

A digital twin is a virtual representation of a physical object, process, or system. It is created by collecting data from sensors, cameras, and other sources in real-time and using it to create a 3D model of the object, process, or system. Digital twins can be used in various industries, such as manufacturing, healthcare, and transportation. For example, in manufacturing, a digital twin can be used to simulate the production process and optimise it for efficiency.
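
As a minimal illustration of the idea (the names and numbers below are illustrative, not taken from any particular platform), a digital twin can be sketched as an object that continuously ingests sensor readings, keeps a virtual state in sync with the physical asset, and can then be queried for what-if simulations:

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class DigitalTwin:
        """Virtual mirror of a physical asset, updated from live sensor data."""
        asset_id: str
        state: Dict[str, float] = field(default_factory=dict)

        def ingest(self, readings: Dict[str, float]) -> None:
            # Keep the virtual state in sync with the latest measurements.
            self.state.update(readings)

        def simulate_throughput(self, cycle_time_s: float, shift_hours: float = 8.0) -> float:
            # Toy what-if query: units produced per shift at a given cycle time.
            return (shift_hours * 3600.0) / cycle_time_s

    # Example: test a production change virtually before applying it on the real line.
    line = DigitalTwin(asset_id="press-line-3")
    line.ingest({"temperature_c": 74.2, "vibration_mm_s": 1.8})
    print(line.simulate_throughput(cycle_time_s=42.0))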

The Rise of Metaverse

Metaverse is a term that was first coined by science fiction author Neal Stephenson in his 1992 novel Snow Crash. It refers to a virtual universe that is created by the convergence of multiple virtual worlds. The concept of metaverse has gained popularity in recent years, especially after the success of virtual worlds such as Second Life and Minecraft. Companies such as Facebook and Epic Games are also working on creating their own versions of metaverse.

The Relation between Digital Twin and Metaverse

Digital twin and metaverse are related concepts in the sense that both involve the creation of virtual environments. However, while digital twin is focused on creating a virtual representation of a physical object or system, metaverse is focused on creating a virtual universe that is inhabited by virtual beings.

How Digital Twin and Metaverse will Change the Future of VR

The implementation of digital twin and metaverse will bring about significant changes in the future of VR. Digital twins will enable us to create virtual replicas of physical objects and systems, which can be used for various purposes, such as training and maintenance. Metaverse, on the other hand, will create a virtual universe that is inhabited by virtual beings, opening up new possibilities for entertainment, education, and social interaction.

Advantages of Digital Twin and Metaverse

The implementation of digital twin and metaverse will bring several advantages to the field of VR. Digital twin will allow us to test and optimise physical systems in a virtual environment before implementing them in the real world, reducing the cost and risk of errors. It can also help in remote monitoring and maintenance of physical systems, leading to increased efficiency and reduced downtime.

Metaverse will create new opportunities for entertainment, social interaction, and education. It will allow people from different parts of the world to come together in a virtual environment and experience things that are not possible in the real world. It can also be used for educational purposes, such as simulating historical events or scientific experiments.

The Future of Virtual Reality

The implementation of digital twin and metaverse is just the beginning of the future of VR. With the advancements in technology, it is possible that we may one day be able to fully immerse ourselves in a virtual environment that is indistinguishable from the real world. This could revolutionise the way we work, play, and interact with each other.

Conclusion

Digital twin and metaverse are two concepts that are gaining popularity in the world of virtual reality. Digital twin involves creating a virtual representation of a physical object or system, while metaverse involves creating a virtual universe that is inhabited by virtual beings. The implementation of these concepts will bring several advantages to the field of VR, such as increased efficiency, new opportunities for entertainment and education, and the ability to test and optimise physical systems in a virtual environment.


Trends and Opportunities in Augmented Reality – Manufacturing.net

Economic uncertainties are pushing manufacturers to embrace emerging technologies as a means of staying competitive in a constantly evolving market. One example is augmented reality (AR). Deloitte's 2023 manufacturing industry outlook reports that 12 percent of surveyed manufacturers plan to focus on AR in the coming year to improve operational efficiencies.

AR technologies enhance the physical world by superimposing digital information onto real-world objects and environments. Often AR displays are paired with gesture recognition to enable interaction with the AR environment. This creates an immersive experience that enhances the perception of and engagement with the real world.

AR can equip facility managers with real-time feeds indicating operating status and delivering production statistics. AR can assist industrial engineers by providing real-time notifications and alerts of potential maintenance issues, allowing for prompt and proactive resolution before they escalate into larger problems. Maintenance personnel can use AR to receive real-time guidance and step-by-step workflows during upkeep and repair operations, improving accuracy and minimizing downtime. Likewise, operators can use AR for training and ongoing guidance while operating equipment and machinery, reducing the risk of error and enhancing efficiency.

Over the last decade, patent applications that mention AR in either their title or abstract have grown from a few hundred applications per year to over a thousand. By examining patent filings, we can gain insight into how industrial manufacturers are utilizing AR technologies in their operations and achieving potential competitive advantages.

Obtaining a patent, however, can involve a significant investment and requires full disclosure of the invention. There is also no guarantee of securing meaningful patent protection, as the final outcome may be either no or limited protection. It is crucial to weigh the costs and benefits of revealing the details of an invention against the potential commercial value of any patent that might result, and the feasibility of enforcing patent rights against an infringer.

AR technologies are being used to enhance operations across entire manufacturing facilities. For instance, U.S. Patent No. 11,265,513, titled "Virtual Reality and Augmented Reality for Industrial Automation," provides examples of AR systems that generate and deliver AR presentations to a user via a wearable device. These presentations include 3D holographic views of a plant facility or a location within a plant facility, which can be rendered based on the user's current location or orientation on a facility floor.

The AR system can provide automation system data, notifications, and proactive guidance by modifying the user's view of the immediate surroundings. Additionally, the AR system can superimpose indicators near their corresponding machines, which can relate to critical operating parameters such as temperatures, pressures, speeds, and voltage levels, as well as statistics or key performance indicators such as overall equipment effectiveness, performance or production efficiency, percentage of machine availability over time, product quality statistics, cycle times, overall downtime or runtime durations, and more.

AR technologies are also being used to enhance maintenance operations. U.S. Patent No. 11,270,473, titled "Mechanical Fastening Unit Management Method Using Augmented Reality," uses AR to aid operators in the proper tightening of fasteners. The AR-based system superimposes a virtual space on a real space to create an augmented reality space. Within this AR space, virtual counterparts of a real-world fastening device, such as a torque wrench, and the real-world fasteners are presented. The AR system uses various visual cues such as colors, flashes, text, and graphics to indicate a proper tightening sequence, whether each fastener has received an appropriate amount of torque, and which fasteners still require tightening.

AR technologies have emerged as powerful tools for streamlining the handling of end products generated by this equipment. U.S. Patent No. 11,358,180, titled "Workpiece Collecting Point Units and Methods for Supporting the Processing of Workpieces," describes using AR to improve the time-consuming and error-prone process of collecting, sorting, and arranging workpieces. With this system, an AR-equipped operator receives information indicating the appropriate sorting bins for the workpieces during a sorting process. During a collection process, the AR system can augment the display of a collection table to indicate the location of workpiece stacks of the same type and alert the operator when a stack of workpieces is complete.

U.S. Patent Application Publication No. 2022/0221845, titled "System and Method for Augmented Reality (AR) Assisted Manufacture of Composite Structures and Bonded Assemblies," describes an AR-assisted system specifically designed for assembling the various layers of such structures and assemblies. These structures, which can include aircraft and vehicle panels, often possess complex geometries that necessitate precise placement during assembly. The AR system provides virtual representations of each layer of the structure and provides visual indicators that ensure accurate placement, position, and orientation of new layers relative to previously assembled layers.

Patents are not the only way to protect competitive advantages. Given the unique nature of manufacturing, innovators should carefully consider whether patents or alternative forms of intellectual property, such as trade secrets, are the most appropriate means of protecting their innovations. AR technologies often are employed in private within the confines of a manufacturing floor. The lack of visibility into whether a competitor is violating a patented invention can limit the effectiveness of the patent. The ability to detect infringement, therefore, is a key factor to consider when deciding whether to seek patent protection.

Potential commercial value is another important factor to consider. The examples above demonstrate that AR technologies may be tailored to suit the specific needs of a given facility, process, or end product. Understanding whether other manufacturers have similar needs can provide valuable insight into the potential market for the AR invention and help to assess the likelihood of competitors copying or licensing the invention. The expected commercial value of a patent, however, should be balanced against the potential costs to enforce it.

Defending one's own innovations, however, is not the only motivation for applying for a patent. Other motivations include potential revenue streams via licensing, access to other technologies through cross-licensing opportunities or joint ventures, obtaining leverage in litigation, attracting investment through increased valuation, and providing access to funding by using patent assets as collateral. Ultimately, a combination of factors and motivations typically guides the decision to seek patent protection.

Manufacturers contemplating patenting their AR innovations should consult with an intellectual property attorney who can provide guidance on available options to safeguard those innovations and assist in crafting a strategy that aligns with specific business objectives.

Brian Emfinger is an IP attorney in Banner Witcoff's Chicago office. His email is bemfinger@bannerwitcoff.com.


How augmented reality is being used to train the next generation of … – TheFabricator.com

In this augmented reality gas metal arc welding setup, a student lays down a virtual bead.

"It's a black art." I used to hear that a lot on fab shop tours: code for something that took years to learn and only the talented few truly mastered. Why, exactly? Sometimes it had to do with the nature of the skill and the worker's tactile and visual experience welding a workpiece. If something went awry, they'd try again. And again. And again. For years.

Thing is, considering the acute worker shortage, the industry just doesn't have that kind of time. It needs some way to shorten that training cycle while not skimping on process fundamentals, so that students know what works in what situation and why. They don't just show up to the shop, learn a narrow set of skills (push this button, weld this joint), and get to work. They follow procedures, but they also know why those procedures work so well.

Here, augmented reality (AR) might fill a need, especially for one of the most hands-on processes on the fab shop floor: welding.

"We've been in the augmented reality space for the past eight years, and it continues to get better and better. We're trying to get as close to reality as possible, including visual, audio, and tactile elements. Our ability to create an accurate weld puddle in software has come a long way in just the past few years."

That was Steve Hidden, national account manager, welding education and workforce development, at Miller Electric Mfg. LLC in Appleton, Wis. The company offers its AugmentedArc augmented reality welding system, a technology that merges the visual, aural, and tactile welding experience (the gun, the workpiece, the buzzing, the visual cues) with software that simulates how a bead flows given how the weld is performed.

Students can wield a gas metal arc welding (GMAW) gun, a stinger for shielded metal arc welding (SMAW), or a gas tungsten arc welding (GTAW) torch. The experience isn't a video game. Using a combination of sensors that read the position of strategically placed QR codes on the weld gun or torch and workpiece, the AR approach to weld training tracks students' movements throughout the process.
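
Marker-based tracking of this general kind boils down to recovering a tool's pose from the known 3D layout of a marker and its detected position in a camera image. The sketch below shows that principle with OpenCV's solvePnP; it is not Miller's implementation, and the marker size, corner pixels, and camera intrinsics are made-up values.

    import numpy as np
    import cv2

    # Known 3D corner positions of a 40 mm square fiducial on the weld gun (tool frame, metres).
    object_points = np.array([
        [-0.02, -0.02, 0.0],
        [ 0.02, -0.02, 0.0],
        [ 0.02,  0.02, 0.0],
        [-0.02,  0.02, 0.0],
    ], dtype=np.float64)

    # 2D pixel positions of those corners as detected in the helmet camera image (illustrative).
    image_points = np.array([[310.0, 240.0], [402.0, 236.0],
                             [405.0, 330.0], [312.0, 334.0]], dtype=np.float64)

    # Camera intrinsics from a prior calibration (illustrative values).
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(5)  # assume negligible lens distortion

    # Recover the marker's rotation and translation relative to the camera.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
    if ok:
        print("tool position in camera frame (m):", tvec.ravel())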

Imagine the first time a student grasps the welding gun and lays a bead on plate. He strikes the arc, hesitates, then moves too fast. He tries again and burns through. Spatter goes everywhere, and coupon after coupon after coupon goes into the recycling bin. The student continues to practice, using up shielding gas and welding wire, and putting the gun nozzle and other consumable components through all sorts of abuse. It's not a pretty sight, and, as anyone working at a technical school or fab shop with in-house training knows, it can get quite expensive.

Now imagine that same student donning a welding helmet, only this time he's manipulating an odd-looking GTAW torch and filler rod, each with QR codes. Instead, sensors in the student's helmet read those codes to determine how exactly the student manipulates the tungsten tip along a lap joint, creating a virtual fillet on a workpiece that, again, is covered with strategically placed codes, all of which become invisible when the student dons the welding helmet. With the helmet on, he sees a metallic workpiece ready to be joined. He depresses the foot pedal throughout the process, but there are no arcs, no spatter, no metal at all.

A student next to him wields another odd-looking device, this time a stinger for the SMAW process with a code-covered cube near the end (again, all invisible through the weld helmet). She manipulates the stinger carefully down a vertical groove joint. Next to her sits another student, this one manipulating the nozzle of a GMAW gun around a pipe to create a flange weld.

All three are welding in AR. They see the arc, filler metal, and weld being laid down, and they even hear the weld buzz, just as a welder would in a real-world application. When the student practicing GTAW dips too much of the filler rod at once, the weld pool reacts. When the student practicing GMAW travels too quickly, again, the weld pool reacts. After completing the practice joint, all three students keep their welding helmets on to view their completed, virtual weld, defects and all.

Through the welding helmet, the student can see a virtual representation of the welding arc, plus certain markers showing ideal work position, travel, and standoff for a particular lesson.

The software points out exactly where and how those errors occurred. The arc length was too long here. The filler rod angle was off here. Your work and travel angles were off. Not only this, the software shows what those angles should have been and exactly how to correct them.

On the next try, the students' teacher turns on visual aids that they see through the welding helmet, showing what elements should be where. "I call them training wheels," Hidden said. "Are they too close? Are they too far away?"

The visual aids are dynamic, changing as needed to direct the student. For instance, a visual cue might show the student where the welding gun should be at a certain point during the weld program; as the student catches up, the cue changes.

"We also give the instructor the ability to let students make their own mistakes," Hidden said, adding that the software needn't tell them that, say, their gas setting is wrong, that the voltage is set too high, or anything else. The software might not simulate exactly what happens when something goes awry, like an actual burn-through of the material. But it can graphically show something's amiss and leave students to figure out what the problem is on their own. Alternatively, teachers can set the system up to notify exactly what's wrong.

Teachers can create their own assignments, Hidden said, adding that they can establish what training wheel symbols to display and when. The key is knowing when to stop giving assistance and let students fly solo. It all depends on students' needs and where they are on their training journey.

"[AR] serves as an interim phase between theory and hands-on applications," said Patricia Carr, national manager of education and workforce development at Miller, adding that the technology has helped students uncomfortable with the arcs and sparks, including those with disabilities, gain confidence before practicing the real thing, effectively broadening the recruiting net.

The AR system has been designed around the needs of educators. Over the years, experienced welders and welding engineers within and outside Miller have collaborated with software engineers to make the experience ever more realistic. Students now can see all the puddle dynamics, with molten metal wetting against the joint sidewalls. They see weld pool disturbances that could indicate undercut or porosity, incomplete penetration, as well as over-welding practices that could lead to excessive and costly grinding.

All this begs the question, could AR be used to certify welders? Hidden chuckled a bit. "Not today," he said. "This tool is all about preparation." AR could allow welders to refine their technique and gain muscle memory before practicing in the lab and taking the certification test.

Gaining that muscle memory can be an extraordinary challenge, especially when improper technique can lead to a messy situation, like stick welding overhead, holding an exceedingly long arc, and being caught in a rainstorm of hot sparks.

Using AR, a student can place the workpiece wherever needed to start practicing those challenging weld positions. "We have seven coupons for all positions," Hidden said, "including flat and overhead. And the beauty with AR, I can take my part [coupon], put double sticky tape on it, and put it underneath the table. Students can crawl underneath the table and weld it."

Looking through the welding helmet, a student can inspect his weld and receive specific feedback.

Repetition builds muscle memory and confidence, preparing students for the real world of overhead welding. Once they strike an arc for real, theyre more likely to maintain the right welding technique and produce a clean bead without a plethora of sparks raining down.

Students and professionals using the AR system can practice any technique they wish, but as Hidden explained, software does need to be built around a specific technique to score it; that is, building the ability to track the position of the weld and consumables, compare it to an ideal, score it based on that comparison, and pinpoint areas for improvement. For instance, students practicing GTAW might want to walk the cup over a certain joint geometry. They can run through the motions to gain the muscle memory, but the system won't be able to give comprehensive feedback, at least not yet.

Though, of course, new software is being written and improved upon all the time. As Carr explained, Miller has been following the voice-of-the-customer methodology, developing software requested by the majority of current and potential users of the technology.

Even in its current state, AR helps demystify a misunderstood, opaque process by identifying exactly what makes a good weld and what doesn't. Skilled people take many paths to achieve a quality weld, so not following exactly what the AR system prescribes doesn't guarantee failure. Still, in the future, those learning and perfecting their skills (and perhaps even experienced welding professionals) might look more and more to AR as a kind of compass, something to reference to make sure their fundamentals are there and that they're headed in the right direction. And there's an added bonus: They needn't waste consumables and test coupons in the process.


CoPilot to Harness AI Speed, Scale, Talespin CEO Says – XR Today

Virtual reality (VR) training continues to lead as one of the top verticals for the extended reality (XR) umbrella of technologies. Along with augmented and mixed reality (AR/MR), VR's fully immersive training capabilities remain a vital tool in boosting learner engagement. Talespin, a major supplier of XR training solutions, recently debuted a web-based version of its CoPilot Designer platform to increase adoption rates for enterprises.

Numerous studies and end users have documented a significant increase in information and employee retention rates due to XRs appealing, interactive instructional designs. Additional emerging technologies like artificial intelligence (AI) are also expediting the democratisation of XR for enterprises tapping the tools needed to empower workforces.

XR Today interviewed Kyle Jackson, CEO and co-founder of Talespin, to discuss the latest updates on the company's CoPilot Designer solution. He discussed how his company's latest update provides employers with impressive platforms to upskill workers on the fly and with promising results.

Kyle Jackson: We are excited to announce the launch of a web-based version of our no-code, AI-enabled XR content creation tool, CoPilot Designer, which is now available for our customers and partners.

This update creates a version of our design tool that is easier to use and adopt than ever. In the near term, this will help our current customers save time and money, allow us to welcome more customers to our platform, and ultimately lead to more immersive learning content production across our ecosystem, empowering companies to scale content across teams, offices, and geographies faster and more efficiently than ever.

However, we think it's important to note the broader context. If we take a step back, there's an even bigger picture that CoPilot Designer plays its part in as our industry adapts to generative AI content creation tools and prepares for a new wave of XR headsets. We know this combination will usher in a new paradigm of immersive content creation and distribution.

We see our platform, and specifically CoPilot Designer, as a critical layer in this equation. CoPilot Designer harnesses AI's speed and scale and applies both to creating and publishing the next generation of highly engaging XR learning content.

Kyle Jackson: When we first released CoPilot Designer to the market in 2021, it was a key piece of our vision to help people become better humans. We've always believed that AI-powered virtual humans could help real humans get better at skills such as critical thinking, empathy, and navigating difficult workplace conversations.

Ironically, we realized that as AI emerged, our human skills (soft skills) would be more in demand than ever. Since then, we've spent more than four years using our platform to help dozens of enterprises train their workforce in human skills in VR.

VR is a perfect channel for workers to practice difficult conversations in a safe, non-judgmental environment. It allows learners to practice delicate scenarios requiring mindfulness and tact and reinforces key interactions no college or business school trains for.

Going forward, we believe that as AI continues to automate many tasks, a spotlight will be further shined on workers' human intelligence, soft skills, and people skills, which will become what truly differentiates them in the future workplace.

Kyle Jackson: Currently, customers can use CoPilot Designer for content creation workflows with complementary generative AI text and image tools, such as ChatGPT and Midjourney. XR content created with CoPilot Designer also uses text-to-speech and natural language processing to deliver realistic conversational simulations for learners.

We are also thoughtfully exploring more AI integrations on our roadmap. For example, the ability to create immersive learning simulations where the speech from virtual human characters can be driven by a large language model (LLM) that is guided by the constraints provided by the business.

This can also be mixed with more prescriptive immersive branched narratives for enterprise use cases ranging from onboarding to customer experience training simulations. With integrations like this, CoPilot Designer can be used to author open-ended AI-powered immersive learning experiences and learning modules with a very specific scenario or script for use cases that require that.

Kyle Jackson: Absolutely! Our customers have seen impressive results across different industries. For example, a PwC study proved the efficacy of immersive learning with results like a fourfold increase in learning speed and learners saying they were 275% more confident after immersive learning training. Learners ranging from Fortune 500 employees to high school-age students benefit from engaging learning experiences.

The industry applied these results to corporate training use cases ranging from practising customer conversations in the insurance industry to helping managers simulate giving performance feedback.

We're on a mission to help people develop the human skills that set us apart as AI and other technologies continue to permeate further into our work lives. We see great opportunities for these very tools to advance that mission.


Teletrix licenses methods for ionizing radiation training using augmented reality – Newswise

Newswise - A method using augmented reality to create accurate visual representations of ionizing radiation, developed at the Department of Energy's Oak Ridge National Laboratory, has been licensed by Teletrix, a firm that creates advanced simulation tools to train the nation's radiation control workforce.

Ionizing radiation, which is linked to cancer and other health problems, has enough energy to knock electrons off of atoms or molecules, creating ions. Occupational exposure is a common occurrence for many radiological workers, so any method of decreasing exposure helps to limit overall negative effects and increase worker safety.

"In the 1940s, ORNL made pioneering contributions across numerous scientific fields, including radiation protection," said Susan Hubbard, ORNL deputy for science and technology. "In our 80th year as an institution, we continue to provide leadership in this area. This technology will allow radiological workers to better understand the environments they work in, enabling a safer and more informed workforce."

At ORNL, the licensed methods were originally used to create the virtual interaction with physics-enhanced reality, or VIPER, application. Using simulated radiation data implemented in a gaming platform, the technology divides a physical space into cubes, each representing a volumetric value of ionizing radiation by dose. A 3D interpolation of these values is then used to create an image of gradient contours that are overlaid on a real-world view through an augmented reality, or AR, headset. As a trainee moves through the space, navigating around the contours, the device calculates real-time, yet simulated, exposure based on the user's behavior.
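
The underlying scheme can be sketched in a few lines: store dose values on a coarse 3D grid, interpolate between grid points, and accumulate simulated exposure along the trainee's tracked path. The snippet below is only an illustration of that idea, not ORNL's VIPER code, and the room size, source model, and dose values are made up.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Simulated dose-rate values (mSv/h) on a grid of 0.5 m cubes spanning a 4 x 4 x 3 m room.
    x = np.arange(0.0, 4.5, 0.5)
    y = np.arange(0.0, 4.5, 0.5)
    z = np.arange(0.0, 3.5, 0.5)
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

    # Toy point source at (2, 2, 1) with 1/r^2 falloff stands in for physics-based transport data.
    r2 = (X - 2.0) ** 2 + (Y - 2.0) ** 2 + (Z - 1.0) ** 2
    dose_grid = 5.0 / np.maximum(r2, 0.25)

    # 3D interpolation of the per-cube values, as used for the gradient-contour overlay.
    dose_at = RegularGridInterpolator((x, y, z), dose_grid)

    # Accumulate simulated exposure along a trainee's tracked path, sampled every 0.1 s.
    path = np.array([[0.5, 0.5, 1.5], [1.0, 1.0, 1.5], [1.5, 1.5, 1.5], [2.5, 3.0, 1.5]])
    dt_h = 0.1 / 3600.0  # time step in hours
    exposure = float(np.sum(dose_at(path)) * dt_h)
    print(f"simulated accumulated dose: {exposure:.6f} mSv")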

"We combined physics-based data with a gaming interface that provides a visual platform to make something invisible look and feel real; we took science and cinematography and brought them together," said ORNL's Michael Smith.

In addition to Smith, the development team includes ORNL's Noel Nelson and Douglas Peplow, all of the Nuclear Energy and Fuel Cycle Division; and former ORNL researchers M. Scott Greenwood and Nicholas Thompson. Significant support came from the Nuclear and Radiological Protection Division. The technology began as an exploratory, one-year seed project funded under ORNL's Lab Directed Research and Development program.

"When it comes to training with ionizing radiation, augmented reality is a superior and safer solution," Smith said. "Our team was at the right place at the right time to develop this technology. There was a synergy of hardware and software maturity coupled with an idea that's been around a long time: the need to see ionizing radiation."

Teletrix's simulators for radiological and gas detection training are widely used by utilities, emergency response organizations and government agencies. ORNL has been a longtime customer of the Pittsburgh, Pennsylvania-based small business, which also manufactures its own products.

"Our company is solely dedicated to improving radiation training (our tagline is 'Prepare Through Simulation') and making that training more realistic," said Jason O'Connell, sales and business development manager for Teletrix. "We're always looking to innovate training, so we make a lot of new products."

One of Teletrix's products is VIZRAD, a virtual reality software system that simulates contamination on individuals and workspaces. VIZRAD trains a user to properly scan someone with a detector and provides objective feedback on technique.

"When I put the AR glasses on, it was obvious that ORNL's technology and Teletrix's tools were a great fit," O'Connell said. "Through the headset and the AR technology, we have the ability to track a person's exact location within a room and inject source information into the room. It really raises the bar on the precision of the training we can deliver."

"Having much more realistic readings on your instruments leads to better-prepared employees, better-prepared trainees, fewer incidents; this technology will help make people in this industry safer."

Additionally, lowering exposure to ionizing radiation also provides cost benefits to companies, he said.

Smith said the development team envisioned three applications for the ORNL technology.

"Just by having a general impression of the spatial relationship of your body in a given radiation environment, you can decrease your overall dose based on really fundamental behavioral changes," Smith said. "We can't see ionizing radiation, so you just walk right through it. But once you have seen what the radiation in your working environment looks like, you can't unsee it. AR provides a means to train people to have a better visceral understanding of how ionizing radiation behaves."

Performance data collected from about 40 participants supports this hypothesis by showing statistically significant behavioral changes after minimal training with AR representations of radiation fields.

Additionally, the method of coupling AR technologies with accurate radiation measurements has been demonstrated and experimentally validated in a study using cesium-137 in ORNL's Nuclear Radiation Protection Division demonstration facility.

ORNL senior commercialization manager Eugene Cochran negotiated the terms of the license. For more information about ORNL's intellectual property in analytical instrumentation, email ORNL Partnerships, call 865.574.1051 or subscribe to ORNL invention alerts. To connect with the Teletrix team, email [emailprotected] or call 412.798.3636.

UT-Battelle manages ORNL for the Department of Energy's Office of Science, the single largest supporter of basic research in the physical sciences in the United States. The Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.


PSVR 2 VR Cover accessory kit aims to ease comfort issues – MIXED Reality News

Image: VR Cover

From pressure points to excessive sweating: A set of accessories from VR Cover aims to alleviate many of the PlayStation VR 2's comfort issues.

The PSVR 2 allows for amazing VR graphics, but suffers from comfort issues depending on the shape of your head. Some users have already hacked their way out of the halo strap's poor fit and pressure points. Sweat under the plastic padding can even compromise the VR headset's technology.

A three-piece accessory set from VR Cover is designed to alleviate all these problems without affecting the warranty. Instead of modifying the hardware itself, buyers simply wrap their PSVR 2 with two fabric covers. The two wraparound covers for the front and back padding are said to reduce and absorb sweat.

The two washable covers with Velcro fasteners are each made of two layers of tightly woven cotton to keep perspiration from soaking into the foam padding.

In the style of other VR headsets, there is also a length-adjustable headband. It attaches to the sides of the halo strap with Velcro and takes some weight and pressure off the front and back of the head. It also relieves pressure on the neck and shoulders, according to the accessory maker. The principle is similar to many other VR headsets with similar top head straps.

The PSVR 2, on the other hand, practically clamps the skull between the front and back air cushions. With the right head shape, such a halo strap can be very comfortable. After all, the VR headset hangs loosely in front of your eyes with no pressure on your face and enough room for your glasses.

But as is often the case, comfort in virtual reality is highly subjective. With Sony's new headband design in particular, some customers complained about an uncomfortable fit and sweat problems.

The three-piece Head Strap Cover Set for PlayStation VR2 has been available in the European VR Cover store for 29 euros since May 4 and sold out within a few hours of going on sale. Replenishment is expected to follow early next week. VR Cover recommends that interested buyers try their luck on Monday, May 8th, when a second batch is expected.

For more tips and support, see our PSVR 2 Getting Started Guide.


Dundalk Institute student presents at virtual reality conference – Louth Live

Dundalk Institute of Technology (DkIT) said they are delighted to report that Michael Galbraith, an immersive technology specialist with Arup and a current student of the MSc in Computer Gaming and XR in DkIT, recently delivered a successful presentation at the Meta European HQ office in Dublin in conjunction with Eirmersive.

Michael showcased various virtual reality projects he contributed to as part of the company's Immersive Technology team.

These projects exemplified the potential of immersive technology to transform designs and engage the public with proposed solutions, reflecting the practical applications of the skills he is currently honing through the MSc program at DkIT.

DkIT's MSc in Computer Gaming and XR focuses on developing software engineering skills within 3D game engines and the skillset to model 3D characters and environments with a focus on Virtual Reality (VR) and Augmented Reality (AR) technologies.

The course equips students with the knowledge and experience needed to excel in the rapidly evolving world of immersive technology.

Michael's presentation at the Meta European HQ office in Dublin serves as a testament to the quality of education provided by DkIT.

As students like Michael continue to grow and achieve success in the immersive technology field, DkIT remains committed to offering innovative educational programs that prepare students for the dynamic developments ahead within this fast-moving pioneering industry.


Memes, virtual reality used to train Home Team officers – The Straits Times

SINGAPORE - A photo of American actor Sylvester Stallone as Rambo sticking up both his thumbs is being used to train the next generation of Home Team officers.

The meme, also known as "Thumbs Up Rambo", is a reminder to officers that travellers use both their thumbs to clear biometric scans at immigration.

It is one of several memes being used at the Home Team Academy (HTA) to keep training relevant for younger officers, as well as help them better remember and develop the skills they need.

The memes used to train officers in immigration clearance can be scanned using an app to provide officers more information on how clearance should be done, and also how to spot suspicious characters at checkpoints.

These were unveiled on Tuesday at HTA's workplan seminar, where Second Minister for Home Affairs Josephine Teo also launched the second iteration of the Home Team Learning Management System.

The system, which was first used in 2016, has been enhanced to bring together training, assessment and social collaboration onto one platform.

Artificial intelligence-assisted assessment will also be used.

The plan is for the system to eventually become the primary training platform for more than 68,000 officers across the Home Team.

Mrs Teo said the HTA, as the corporate university in homefront safety and security, plays a crucial role in ensuring Home Team officers are future-ready.

She said: "Competency-building through training and learning will enable our officers to tackle emerging and future challenges effectively, and achieve our mission of keeping Singapore and Singaporeans safe and secure."


The Global Augmented Reality In Agriculture Market to register … – Digital Journal

PRESS RELEASE

Published May 5, 2023

Factual Market Research has released a report on the global Augmented Reality In Agriculture market, including historical and current growth prospects and trends from 2022-2030. The Report utilizes unique research techniques that combine primary and secondary research to comprehensively analyze the global Augmented Reality In Agriculture market and draw conclusions about its future growth potential. This method helps analysts determine the quality and reliability of the data. The Report offers valuable insights on key market factors, including market trends, growth prospects, and expansion opportunities for the industry.

Augmented Reality (AR) technology is becoming increasingly prevalent in agriculture. AR in agriculture involves using digital images, video, or sound to enhance the real-world environment and provide farmers with valuable insights and information.

Market Dynamics:

Drivers and Restraints:

The Augmented Reality In Agriculture Market is being driven by several factors, including the increasing demand for precision agriculture, the need to optimize farming processes and reduce waste, and the growing adoption of smart farming technologies. AR technology can help farmers to make more informed decisions about planting, watering, and harvesting crops, as well as detect and treat plant diseases and pests more effectively.

Moreover, the use of AR in agriculture can improve worker safety by providing real-time data and alerts about potential hazards and risks. Additionally, the increasing availability of affordable AR devices such as smartphones and tablets is making this technology more accessible to farmers and agricultural workers.

However, some factors are restraining the growth of the AR in agriculture market. These include the limited adoption of advanced technologies by small-scale farmers, the lack of standardized practices and regulations for using AR in agriculture, and the high costs associated with implementing AR systems.

Any query regarding the Report:

https://www.factualmarketresearch.com/Reports/Augmented-Reality-In-Agriculture-Market

Key players:

Market Segmentation:

Augmented Reality in Agriculture Market, By Type

Augmented Reality In Agriculture Market, By End-User

Augmented Reality In Agriculture Market, By Application

Augmented Reality In Agriculture Market, By Region

Get a Free Sample Report:

https://www.factualmarketresearch.com/Reports/Augmented-Reality-In-Agriculture-Market

Market Trends:

Some of the key trends in the augmented reality in agriculture market include the development of new and innovative AR applications for farming, integrating AR with other smart farming technologies such as drones and sensors, and using AR in training and education programs for farmers and agricultural workers.

Another emerging trend is the use of AR to create virtual simulations of farming environments, which can help farmers test different strategies and scenarios safely and in a controlled manner. In addition, the increasing use of AR to improve the traceability and transparency of the agricultural supply chain is also driving the growth of augmented reality in the agriculture market.

For any customization:

https://www.factualmarketresearch.com/Inquiry/12856

The Report covers the following key elements:

Table of Contents: Augmented Reality In Agriculture Market

Chapter 1: Introduction to Augmented Reality In Agriculture Market

Chapter 2: Analysis of Market Drivers

Chapter 3: Global Market Status and Regional Forecast

Chapter 4: Global Market Status and Forecast by Types

Chapter 5: Competition Status among Major Manufacturers

Chapter 6: Introduction and Market Data of Major Manufacturers

Chapter 7: Upstream and Downstream Analysis

Chapter 8: PESTEL, SWOT, and PORTER 5 Forces Analysis

Chapter 9: Cost Analysis and Gross Margin

Chapter 10: Sales Channels, Distributors, Traders, and Dealers

Chapter 11: Analysis of Marketing Status

Chapter 12: Conclusion of Market Report

Chapter 13: Methodology and References for Augmented Reality In Agriculture Market Research

Chapter 14: Appendix

About Us:

Factual Market Research is a leading provider of comprehensive industry research that provides clients with actionable intelligence to answer their research questions. Our expertise covers over 20 industries, and we provide customized syndicated and consulting research services to cater to our clients' specific requirements. Our focus is on delivering high-quality Market Research Reports and Business Intelligence Solutions that enable clients to make informed decisions and achieve long-term success in their respective market niches. Additionally, FMR offers business insights and consulting services to further support our clients.

Visit our website to learn more about our services and how we can assist you.

Contact Us:

If you have any questions regarding our Augmented Reality In Agriculture report or require further information, please don't hesitate to contact us.

E-mail: [emailprotected]

Contact Person: Jaipreet Makked

US Toll-Free: +18007743961

UK (Tollfree): +448081897087

Web: https://www.factualmarketresearch.com/

Follow us on LinkedIn


Using augmented reality to guide bone conduction device … – Nature.com

Specimen preparation

Whole cadaveric heads were prepared with bilateral curvilinear post-auricular incisions with elevation of a soft tissue flap for exposure of the zygomatic root, posterior external auditory canal, and the mastoid tip. Eight 2mm bone wells were drilled outside of the surgical field to act as fiducial references for eventual image guidance calibration within the experimental arm. Areas of placement included the zygomatic root, bony external auditory canal, and the mastoid tip.

CT scans of the cadaver heads were obtained using a prototype intraoperative cone-beam computed tomography scanner (Powermobil, Siemens, Germany), with an isotropic voxel size of 0.78 mm (ref. 12). Scans were evaluated for abnormal anatomy or evidence of previous surgery. Both the O-OSI and BB-FMT devices were imaged for surgical modelling by creating a virtual rendering of each hearing device for projecting the overlay during the procedure. Materialise Mimics Medical 19.0 (Materialise NV, Belgium) was used to identify optimal placement of the devices, with virtual heads rendered from CT imaging using pre-set bony segmentation sequencing.

Implants were imported into Materialise Mimics as optimized triangulated surface meshes that moved independently from the bone. The experimental design is outlined in Fig. 1. Each surgeon's pre-operative planning included placement of four O-OSI devices and four BB-FMT devices in two separate sessions. Bone depth and avoidance of critical structures, such as the sigmoid sinus, were major factors. O-OSIs were placed within the mastoid, and clearance around the implant was ensured to avoid inadvertent contact with underlying bone. The three possible placements of the BB-FMTs included the mastoid, retrosigmoid, and middle fossa areas. Each surgeon underwent a brief 10-min session with surgical manuals to review optimal surgical technique for both implants. Each planning session lasted five minutes to allow surgeons to guide exact placement.

Study protocol (CBCT cone-beam computed tomography, O-OSI Osia osseointegrated steady-state implant, BB-FMT BoneBridge floating mass transducer).

Implantation followed a standardized protocol beginning with the control arm followed by the experimental AR arm (Fig. 1). Within the control arm, surgeons utilized Materialise Mimics' built-in measurement tool for eventual intraoperative reference during implant placement, whereas in the experimental arm, device placement was projected onto the surgical field using GTx-Eyes (Guided Therapeutics, TECHNA Institute, Canada) via a PicoPro projector (Cellon Inc., South Korea)7,11. The AR setup is demonstrated in Fig. 2 and seen in the supplementary video.

Integrated augmented reality surgical navigation system. (A) The projector and surgical instruments were tracked with the optical tracker in reference to the registered fiducials on the cadaveric head. Optical tracking markers attached to the projector allow for real-time adjustments to image projection. The surgical navigation platform displaying a pre-operatively placed implant. Experimental AR projection arm setup. (B) Surgeons were encouraged to align the projector to their perspective to reduce parallax.

Following implant placement, CT scans were obtained of the cadaveric heads to capture the location of implantation for eventual 3D coordinates measurement analysis. Each surgeon performed four O-OSI placements followed by four BB-FMTs.

The integrated AR surgical navigation system consists of a PicoPro projector (Cellon Inc., South Korea), a Polaris Spectra stereoscopic infrared optical tracker (NDI, Canada), a USB 2.0-megapixel camera (ICAN, China), and a standard computer. A 3D-printed PicoPro projector enclosure enabled the attachment of four tracking markers, which provide real-time three-dimensional tracking information (Fig. 2). GTx-Eyes (Guided Therapeutics, TECHNA Institute, Canada) is a surgical navigation platform that utilizes open-source, cross-platform libraries including IGSTK, ITK, and VTK11,13,14,15,16. The developed AR system has demonstrated a projection accuracy of 0.55 ± 0.33 mm and has been widely adapted to the domains of otolaryngologic and orthopedic oncologic operations17,18,19,20. Recently, the software has evolved to include AR integration7,9.

The AR system requires two calibrations: (1) camera and instrument tracker, and (2) camera and projector, both of which are outlined by Chan et al.9,11. The result allows the tracked tool to be linked with the projector's spatial parameters, allowing for both translational and rotational movements.

The camera and tracking tool calibration defines the relationship between the camera's center and the tracking tool coordinates by creating a homogeneous transformation matrix, ${}^{Tracker}T_{Cam}$, consisting of a 3×3 rotational matrix $R$ and a 3×1 translational vector $t$. The rotational parameter was represented with Euler angles $(R_x, R_y, R_z)$. This calibration process requires photographing a known checkerboard pattern from various perspectives using the camera that is affixed to the projector's case. The instrument tracker's position and orientation are recorded to compute the spatial transformation. The grid dimensions from each photograph are compared with the actual dimensions (30 mm × 30 mm squares in a 9×7 array) using an open-source Matlab camera calibration tool21. This calibration serves as the extrinsic parameter of the camera.

The intrinsic parameters $A$ of the camera include the principal point $(u_0, v_0)$, scale factors $(\alpha, \beta)$, and the skew of the two image axes, $c$ (refs. 22,23,24). This is denoted as:

$$\mathbf{A}=\left[\begin{array}{ccc}\alpha & c & u_{0}\\ 0 & \beta & v_{0}\\ 0 & 0 & 1\end{array}\right]$$

When combining the extrinsic parameters $[\mathbf{R}\ \mathbf{t}]$ with the intrinsic parameters $\mathbf{A}$, a point in three-dimensional space, $\mathbf{M}=[X,Y,Z,1]^{T}$, can be mapped to a point in the two-dimensional camera image, $\mathbf{m}=[u,v,1]^{T}$, where $s$ is defined as the scale factor. This is represented by $s\,\mathbf{m}=\mathbf{A}\left[\mathbf{R}\ \mathbf{t}\right]\mathbf{M}$.
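
As a small numerical illustration of this pinhole projection (the matrices below are made-up values, not the study's calibration results), the mapping can be computed directly:

    import numpy as np

    # Illustrative intrinsic matrix A: focal scale factors, zero skew, principal point at image centre.
    A = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    # Illustrative extrinsic parameters [R t]: identity rotation, camera offset 0.5 m along z.
    R = np.eye(3)
    t = np.array([[0.0], [0.0], [0.5]])
    Rt = np.hstack([R, t])            # 3 x 4

    # Homogeneous 3D point M = [X, Y, Z, 1]^T
    M = np.array([[0.1], [0.05], [1.0], [1.0]])

    # s * m = A [R t] M
    sm = A @ Rt @ M
    m = sm / sm[2]                    # divide out the scale factor s to get pixel coordinates
    print("pixel coordinates (u, v):", m[0, 0], m[1, 0])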

This link defines the spatial relationship between the camera's centre and the projector to create a homogeneous transformation matrix, ${}^{Cam}T_{Proj}$. A two-dimensional checkerboard image is projected onto a planar checkerboard surface, which was used in the previous calibration step. The camera captures both images from various perspectives. Using the projector-camera calibration toolbox, the transformation between the camera and projector, ${}^{Cam}T_{Proj}$, is now established25. The calibration requires linking the camera and the projector tracking markers, both of which are mounted on the projector enclosure (Fig. 2). By combining both calibration processes, the resulting transformation matrix from the AR projector to the tracking marker is denoted by ${}^{Tracker}T_{Proj}={}^{Tracker}T_{Cam}\cdot{}^{Cam}T_{Proj}$.
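
Composing the two calibrations amounts to multiplying homogeneous transforms; a minimal sketch of that final step, using placeholder matrices rather than the study's calibration results:

    import numpy as np

    def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Placeholder calibration results: tracker-to-camera and camera-to-projector.
    T_tracker_cam = make_transform(np.eye(3), np.array([0.02, -0.01, 0.10]))
    T_cam_proj = make_transform(np.eye(3), np.array([0.00, 0.03, -0.05]))

    # Tracker-to-projector transform is the product of the two calibrations.
    T_tracker_proj = T_tracker_cam @ T_cam_proj
    print(T_tracker_proj)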

AR projection setup required confirmation of projection adequacy using an image guidance probe and a Polaris Spectra NDI (Fig.2). Using the image guidance probe, coordinates from the bony fiducials (drilled bone well) and the projected fiducials (green dots) were captured. The difference between coordinates served as the measurement of projection accuracy (Fig.3).

(A) Fiducials projection onto the surgical field was matched to the drilled wells and (B) subsequent accuracy measurements were obtained with a tracking pointer tool placed within the drilled wells where x-, y-, and z- coordinates were captured.

Post-operative and pre-operative scans were superimposed in Materialise Mimics, and centre-to-centre distances as well as angular differences in the axial plane were measured (Figs. 4, 5). For O-OSI placements, the centre of the O-OSI was used, whereas the centre of the FMT was used for BB-FMT placements.
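
A minimal sketch of these two accuracy measures, assuming the planned and achieved implant centres and axial orientation vectors have already been exported as 3D coordinates (the numbers below are illustrative, not study data):

    import numpy as np

    def centre_to_centre(planned: np.ndarray, achieved: np.ndarray) -> float:
        """Euclidean distance (mm) between planned and achieved implant centres."""
        return float(np.linalg.norm(achieved - planned))

    def axial_angle_deg(planned_dir: np.ndarray, achieved_dir: np.ndarray) -> float:
        """Angle (degrees) between orientation vectors projected onto the axial (x-y) plane."""
        p = planned_dir[:2] / np.linalg.norm(planned_dir[:2])
        a = achieved_dir[:2] / np.linalg.norm(achieved_dir[:2])
        cos_theta = np.clip(np.dot(p, a), -1.0, 1.0)
        return float(np.degrees(np.arccos(cos_theta)))

    planned_centre = np.array([12.4, -35.2, 18.9])
    achieved_centre = np.array([14.1, -33.8, 19.6])
    print(centre_to_centre(planned_centre, achieved_centre))
    print(axial_angle_deg(np.array([1.0, 0.2, 0.0]), np.array([0.9, 0.4, 0.1])))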

Accuracy measurements for center-to-center distances and angular accuracy.

Post-operative CT scans of (A) BB-FMT and (B) O-OSI following AR projector-guided surgery, with paired pre-operative planning renderings seen in (C) and (D). In images (A) and (B), the pre-operative planning outline is superimposed. The blue arrow denotes post-operative placement, whereas the red arrow denotes pre-operative planning.

All participants completed a NASA Task Load Index (TLX) questionnaire assessing the use of AR, in addition to providing feedback in an open-ended questionnaire26. TLX results were used to generate raw TLX (RTLX) scores for the six domains, and subsequently weighted workload scores were generated27.
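
The standard TLX scoring described here can be sketched in a few lines; the domain ratings and pairwise-comparison weights below are illustrative, not participant data:

    # NASA-TLX domains rated 0-100; weights come from 15 pairwise comparisons (they sum to 15).
    ratings = {"mental": 55, "physical": 20, "temporal": 40,
               "performance": 30, "effort": 50, "frustration": 25}
    weights = {"mental": 4, "physical": 1, "temporal": 2,
               "performance": 3, "effort": 4, "frustration": 1}

    # Raw TLX (RTLX): simple mean of the six domain ratings.
    rtlx = sum(ratings.values()) / len(ratings)

    # Weighted workload: weighted mean using the pairwise-comparison tallies.
    weighted = sum(ratings[d] * weights[d] for d in ratings) / sum(weights.values())

    print(f"RTLX: {rtlx:.1f}, weighted workload: {weighted:.1f}")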

Continuous data was examined for normality by reviewing histograms, quantile-quantile plots, and the Shapiro-Wilk test for normality. Given the lack of normality and repeated measurements, Wilcoxon signed-rank testing was used for centre-to-centre (C-C) and angular accuracy comparisons between the control and experimental arms. All analyses were performed using SPSS 26 (IBM Corp., Armonk, NY).
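
Equivalently, the paired comparison could be run with scipy's Wilcoxon signed-rank test; the arrays below are placeholders for the per-placement accuracy values, not the study's data:

    from scipy.stats import wilcoxon

    # Placeholder centre-to-centre errors (mm) for paired control vs. AR-guided placements.
    control_cc = [4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 3.9, 4.4]
    ar_cc = [3.2, 4.0, 3.5, 4.8, 3.9, 4.2, 3.1, 3.6]

    stat, p_value = wilcoxon(control_cc, ar_cc)
    print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")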

All methods were carried out in accordance with relevant guidelines and regulations. This study was approved by the Sunnybrook Health Sciences Centre Research Ethics Board (Project Identification Number: 3541). Informed consent was obtained from all subjects and/or their legal guardian(s) by way of the University of Toronto's Division of Anatomy Body Donation Program. All subjects consented to the publication of identifying images in an online open-access publication.

Go here to see the original:

Using augmented reality to guide bone conduction device ... - Nature.com

Mixed Reality Music Prototype Turns Spotify Into Vinyl – UploadVR

Freelance Creative Director Bob Bjarke, formerly of Meta, shared an amusing new mixed reality concept on Twitter centered around discovering new music and creating playlists with virtual records.

Bjarke shared footage of Wreckommendation Engine, a prototype experience he created during the Meta Quest Presence Platform Hackathon last week with Unity developers @RJdoesVR and Jeremy Kesten, 3D artist and prototyper Joe Kane, and immersive sound designer David Urrutia.

Wreckommendation Engine presents users with a virtual record player and a crate of records, positioned on a real-life surface using mixed reality passthrough on Quest Pro. The user can grab records out of the crate and listen to them by placing them on the turntable. If you like the music, you can throw the record against a designated nearby wall to save it. If you hate it, you can throw it against a different wall to smash it into pieces.

If you smash too many tracks, they will eventually come back to life as a killer robot made up of vintage electronics and hi-fi equipment. You can destroy it by throwing more records at it.

The experience integrates with Spotify and uses its API to present you with new tracks, take note of your preferences and compile your saved tracks into a playlist for later.
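UploadVR doesn't describe how the prototype talks to Spotify, but a rough, hypothetical sketch of such an integration against Spotify's public Web API might look like the following; the OAuth token, user ID, seed genre and playlist name are placeholders, not details from the project.

# Hypothetical sketch only - not the Wreckommendation Engine's actual code.
import requests

TOKEN = "user-oauth-token"                    # placeholder; requires user authorization
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
API = "https://api.spotify.com/v1"

# Fetch candidate tracks to fill the virtual crate
recs = requests.get(f"{API}/recommendations",
                    params={"seed_genres": "indie-pop", "limit": 10},
                    headers=HEADERS).json()
crate = [track["uri"] for track in recs["tracks"]]

# Compile the tracks the user "saved" into a playlist for later
playlist = requests.post(f"{API}/users/placeholder-user-id/playlists",
                         json={"name": "Wrecked & Saved"}, headers=HEADERS).json()
requests.post(f"{API}/playlists/{playlist['id']}/tracks",
              json={"uris": crate[:3]}, headers=HEADERS)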

This is just a proof-of-concept prototype and a bit of fun, so it's unlikely to ever see the light of day for Quest users. Nonetheless, it's an amusing concept and a cool way to bring more physicality into music discovery in the age of streaming. In a follow-up tweet, Bjarke said that they wanted to use the immersive tools of mixed reality to make a more fun and social music experience, given that formerly social activities like making mixtapes and burning CDs are now algorithmic utilities, done alone on a 2D screen.

Originally posted here:

Mixed Reality Music Prototype Turns Spotify Into Vinyl - UploadVR

Cracking the Code of Sound Recognition: Machine Learning Model Reveals How Our Brains Understand … – Neuroscience News

Summary: Researchers developed a machine learning model that mimics how the brains of social animals distinguish between sound categories, like mating, food or danger, and react accordingly.

The algorithm helps explain how our brains recognize the meaning of communication sounds, such as spoken words or animal calls, providing crucial insight into the intricacies of neuronal processing.

Insights from the research pave the way for treating disorders that affect speech recognition and improving hearing aids.


Source: University of Pittsburgh

In a paper published today in Communications Biology, auditory neuroscientists at the University of Pittsburgh describe a machine-learning model that helps explain how the brain recognizes the meaning of communication sounds, such as animal calls or spoken words.

The algorithm described in the study models how social animals, including marmoset monkeys and guinea pigs, use sound-processing networks in their brain to distinguish between sound categories such as calls for mating, food or danger and act on them.

The study is an important step toward understanding the intricacies and complexities of neuronal processing that underlies sound recognition. The insights from this work pave the way for understanding, and eventually treating, disorders that affect speech recognition, and improving hearing aids.

"More or less everyone we know will lose some of their hearing at some point in their lives, either as a result of aging or exposure to noise. Understanding the biology of sound recognition and finding ways to improve it is important," said senior author and Pitt assistant professor of neurobiology Srivatsun Sadagopan, Ph.D.

"But the process of vocal communication is fascinating in and of itself. The way our brains interact with one another and can take ideas and convey them through sound is nothing short of magical."

Humans and animals encounter an astounding diversity of sounds every day, from the cacophony of the jungle to the hum inside a busy restaurant.

No matter how much noise surrounds us, humans and other animals are able to communicate and understand one another, regardless of factors such as the pitch of a speaker's voice or their accent.

When we hear the word "hello," for example, we recognize its meaning regardless of whether it was said with an American or British accent, whether the speaker is a woman or a man, or whether we're in a quiet room or at a busy intersection.

The team started with the intuition that the way the human brain recognizes and captures the meaning of communication sounds may be similar to how it recognizes faces compared with other objects. Faces are highly diverse but have some common characteristics.

Instead of matching every face that we encounter to some perfect template face, our brain picks up on useful features, such as the eyes, nose and mouth, and their relative positions, and creates a mental map of these small characteristics that define a face.

In a series of studies, the team showed that communication sounds may also be made up of such small characteristics.

The researchers first built a machine learning model of sound processing to recognize the different sounds made by social animals. To test whether brain responses corresponded with the model, they recorded brain activity from guinea pigs listening to their kin's communication sounds.

Neurons in regions of the brain that are responsible for processing sounds lit up with a flurry of electrical activity when they heard a noise that had features present in specific types of these sounds, similar to the machine learning model.

They then wanted to check the performance of the model against the real-life behavior of the animals.

Guinea pigs were put in an enclosure and exposed to different categories of sounds, such as squeaks and grunts, which are categorized as distinct sound signals. Researchers then trained the guinea pigs to walk over to different corners of the enclosure and receive fruit rewards depending on which category of sound was played.

Then, they made the tasks harder: To mimic the way humans recognize the meaning of words spoken by people with different accents, the researchers ran guinea pig calls through sound-altering software, speeding them up or slowing them down, raising or lowering their pitch, or adding noise and echoes.
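As a loose illustration of that kind of sound alteration (not the lab's actual software; the file name and every parameter value are hypothetical), a short librosa-based sketch might look like this:

# Hypothetical augmentation sketch: speed, pitch, noise and echo changes.
import numpy as np
import librosa

call, sr = librosa.load("guinea_pig_call.wav", sr=None)        # placeholder file

faster = librosa.effects.time_stretch(call, rate=1.25)         # speed the call up
slower = librosa.effects.time_stretch(call, rate=0.8)          # slow the call down
higher = librosa.effects.pitch_shift(call, sr=sr, n_steps=3)   # raise the pitch
noisy  = call + 0.02 * np.random.randn(len(call))              # add broadband noise

delay = int(0.15 * sr)                                         # crude echo: delayed, attenuated copy
echoed = np.copy(call)
echoed[delay:] += 0.5 * call[:-delay]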

Not only were the animals able to perform the task as consistently as if the calls they heard were unaltered, they continued to perform well despite artificial echoes or noise. Better yet, the machine learning model described their behavior (and the underlying activation of sound-processing neurons in the brain) perfectly.

As a next step, the researchers are translating the models accuracy from animals into human speech.

"From an engineering viewpoint, there are much better speech recognition models out there. What's unique about our model is that we have a close correspondence with behavior and brain activity, giving us more insight into the biology.

"In the future, these insights can be used to help people with neurodevelopmental conditions or to help engineer better hearing aids," said lead author Satyabrata Parida, Ph.D., postdoctoral fellow at Pitt's department of neurobiology.

"A lot of people struggle with conditions that make it hard for them to recognize speech," said Manaswini Kar, a student in the Sadagopan lab.

"Understanding how a neurotypical brain recognizes words and makes sense of the auditory world around it will make it possible to understand and help those who struggle."

Author: Anastasia Gorelova
Source: University of Pittsburgh
Contact: Anastasia Gorelova, University of Pittsburgh
Image: The image is credited to Neuroscience News

Original Research: Open access. "Adaptive mechanisms facilitate robust performance in noise and in reverberation in an auditory categorization model" by Srivatsun Sadagopan et al., Communications Biology.

Abstract

Adaptive mechanisms facilitate robust performance in noise and in reverberation in an auditory categorization model

For robust vocalization perception, the auditory system must generalize over variability in vocalization production as well as variability arising from the listening environment (e.g., noise and reverberation).

We previously demonstrated using guinea pig and marmoset vocalizations that a hierarchical model generalized over production variability by detecting sparse intermediate-complexity features that are maximally informative about vocalization category from a dense spectrotemporal input representation.

Here, we explore three biologically feasible model extensions to generalize over environmental variability: (1) training in degraded conditions, (2) adaptation to sound statistics in the spectrotemporal stage and (3) sensitivity adjustment at the feature detection stage. All mechanisms improved vocalization categorization performance, but improvement trends varied across degradation type and vocalization type.

One or more adaptive mechanisms were required for model performance to approach the behavioral performance of guinea pigs on a vocalization categorization task.

These results highlight the contributions of adaptive mechanisms at multiple auditory processing stages to achieve robust auditory categorization.

Visit link:
Cracking the Code of Sound Recognition: Machine Learning Model Reveals How Our Brains Understand ... - Neuroscience News

The 7 Best Websites to Help Kids Learn About AI and Machine … – MUO – MakeUseOf

If you have kids or teach kids, you likely want them to learn the latest technologies to help them succeed in school and their future jobs. With rapid tech advancements, artificial intelligence and machine learning are essential skills you can teach young learners today.

Thankfully, you can easily access free and paid online resources to support your kids' and teens' learning journey. Here, we explore some of the best e-learning websites for students to gain experience in AI and ML technology.

Do you want to empower your child's creativity and AI skills? You might want to schedule a demo session with Kubrio. The alternative education website offers remote learning experiences on the latest technologies like ChatGPT.

Students eight to 18 years old learn about diverse subjects at their own pace. At the same time, they get to team up with learners who share their interests.

Kubrio's AI Prompt Engineering Lab teaches your kids to use the best online AI tools for content creation. They'll learn to develop captivating stories, interactive games, professional-quality movies, engaging podcasts, catchy songs, aesthetic designs, and software.

Kubrio also gamifies AI learning in the form of "Quests." Students select their Quest, complete their creative challenge, build a portfolio, and earn points and badges. This program is currently in beta, but you can sign them up for the private beta for the following Quests:

Explore the Create&Learn website if you want to introduce your kids to the latest technological advancements at an early age. The e-learning site is packed with classes that help kids discover the fascinating world of robots, artificial intelligence, and machine learning.

Depending on their grade level, your child can join AI classes such as Hello Tech!, AI Explorers, Python for AI, and AI Creators. The classes are live online, interactive, and hands-on. Students from grades two up to 12 learn how AI works and can be applied to the latest technology, such as self-driving cars, face recognition, and games.

Create&Learn's award-winning curriculum was designed by experts from well-known institutions like MIT and Stanford. If you aren't sure your kids will enjoy the sessions, you can take advantage of a free introductory class (this option is available for select classes only).

One of the best ways for students to learn ML and AI is through hands-on machine learning project ideas for beginners. Machine Learning for Kids gives students hands-on training with machine learning, a subfield of AI that enables computers to learn from data and experience.

Your kids will train a computer to recognize text, pictures, numbers, or sounds. For instance, you can train the model to distinguish between images of a happy person and a sad person using free photos from the internet. We tried this, and then tested the model with a new photo, and it was able to successfully recognize the uploaded image as a happy person.

Afterward, your child will try their hand at the Scratch, Python, or App Inventor coding platform to create projects and build games with their trained machine learning model.

The online platform is free, simple, and user-friendly. You'll get access to worksheets, lesson plans, and tutorials, so you can learn with your kids. Your child will also be guided through the main steps of completing a simple machine learning project.

If you and your kids are curious about how artificial intelligence and machine learning work, go through Experiments with Google. The free website explains machine learning and AI through simple, interactive projects for learners of different ages.

Experiments with Google is a highly engaging platform that will give students hours of fun and learning. Your child will learn to build a DIY sorter using machine learning, create and chat with a fictional character, conduct their own orchestra, use a camera to bring their doodles to life, and more.

Many of the experiments don't require coding. Choose the projects appropriate for your child's level. If you're working with younger kids, try Scroobly; Quick, Draw!; and LipSync with YouTube. Meanwhile, teens can learn how experts build a neural network or explore other, more complex projects that use AI.

Do you want to teach your child how to create amazing things with AI? If yes, then AI World School is an ideal edtech platform for you. The e-learning website offers online and self-learning AI and coding courses for kids and teens seven years old and above.

AI World School courses are designed by a team of educators and technologists. The courses cover AI Novus (an introduction to AI for ages seven to ten), Virtual Driverless Car, Playful AI Explorations Using Scratch, and more.

The website also provides affordable resources for parents and educators who want to empower their students to be future-ready. Just visit the Project Hub to order AI projects priced at $1-3; you can filter by age group, skill level, and software.

Kids and teens can also try the free games when they click Play AI for Free. Converse with an AI model named Zhorai, teach it about animals, and let it guess where these animals live. Students can also ask an AI bot about the weather in any city, or challenge it to a competitive game of tic-tac-toe.

AIClub is a team of AI and software experts with real-world experience. It was founded by Dr. Nisha Tagala, a computer science Ph.D. graduate from UC Berkeley. After failing to find a fun and easy program to help her 11-year-old daughter learn AI, she went ahead and built her own.

AI Club's progressive curriculum is designed for elementary, middle school, and high school students. Your child will learn to create unique projects using AI and coding. Start them young, and they can flex their own AI portfolio to the world.

You can also opt to enroll your child in the one-on-one class with expert mentors. This personalized online class enables students to research topics they care about on a flexible schedule. They'll also receive feedback and advice from their mentor to improve their research.

What's more, students enrolled in one-on-one classes can enter their research in competitions or present their findings at a conference. According to the AIClub Competition Winners page, several students in the program have already been awarded in national and international competitions.

Have you ever wondered how machines can learn from data and perform tasks that humans can do? Check out Teachable Machine, a website by Google Developers that lets you create your own machine learning models in minutes.

Teachable Machine is a fun way for kids and teens to start learning the concepts and applications of machine learning. You don't need any coding skills or prior knowledge, just your webcam, microphone, or images.

Students can play with images, sounds, poses, text, and more. They'll understand how tweaking the settings and data changes the performance and accuracy of the models.

Teachable Machine is a learning tool and a creative platform that unleashes the imagination. Your child can use their models to create games, art, music, or anything else they can dream of. If they need inspiration, point them to the gallery of projects created by other users.

Artificial intelligence and machine learning are rapidly transforming the world. If you want your kids and teens to learn about these fascinating fields and develop their critical thinking skills and creativity, these websites can help them.

Whether you want to explore Experiments with Google, AI World School, or other sites in this article, you'll find plenty of resources and fun challenges to spark your child's curiosity and imagination. There are also ways to use existing AI tools in school so that they can become more familiar with them.

Read more here:
The 7 Best Websites to Help Kids Learn About AI and Machine ... - MUO - MakeUseOf

How to get going with machine learning – Robotics and Automation News

Everyone around us seems to be talking about machine learning and artificial intelligence. But is the hype around machine learning justified? Let's dive into the details of machine learning and how to get started with it from scratch.

Machine learning is a technological method through which we teach our computers and electronic gadgets how to provide accurate answers. Whenever data is fed into the system, it acts in a defined way to find precise answers to the questions asked.

For example, questions such as: "What does an avocado taste like?", "What should I consider when buying an old car?", "How do I drive safely on the road?", and so on.

But with machine learning, the computer is trained to give precise answers even without explicit input from developers. In other words, machine learning is a sophisticated approach in which computers are trained to provide correct answers to complicated questions.

Furthermore, they are trained to learn more, distinguish confusing questions, and provide satisfactory answers.

Machine learning and AI are the future, so people who learn these skills and become proficient will be first in line to reap the rewards. There are also companies that offer machine learning services to augment your business.

In other words, engaging these services can deliver real advantages and help drive exponential growth for a business.

Initially, developers carry out a great deal of training and modeling, along with other crucial work required for machine learning development. Vast amounts of data are used to provide precise results and effectively reduce decision-making time.

Here are the simple steps that can get you started with machine learning.

Make up your mind and choose a tool in which you want to master machine learning development.

Always look for the best language in terms of practicality and its acceptability on multiple platforms.

As we know, machine learning involves a rigorous process of modeling and training, so consistent practice is essential.

To take full advantage, create a clean and lucid portfolio to demonstrate your learned skills to the world.

When we apply an algorithm to a data set, the output we get is called a model; it is also known as a hypothesis.

In technical terms, a feature is a quantifiable property that describes a characteristic of the data in a machine learning problem. Features are what algorithms use to recognize and classify examples, and they serve as the input to a model.

For example, to recognize a fruit, a model uses features such as smell, taste, size, color, and so on. These characteristics are what allow the model to distinguish one target from another.

The value or variable that the machine learning model is built to predict is called the target.

For example, in the earlier fruit example, each label is a specific fruit, such as orange, banana, apple or pineapple.

In machine learning, training is the process of fitting the model's weights and biases to labeled examples. In supervised learning, many iterations are run so that the algorithm reaches the minimum loss against the correct outputs.

Once a model is trained, we can feed it a variety of inputs, and it will produce the expected results as output. Always check that the system performs accurately on unseen data; only then can we call the operation successful.

After preparing our model, we can input a set of data for which it will generate a predicted output or label. However, verifying its performance on new, untested data is essential before concluding that the machine is performing well.
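Tying these terms together, a minimal scikit-learn sketch (with invented measurements, not anything from the article) shows features, a target, training, and prediction on unseen data:

# Features, target, training and prediction in a few lines (hypothetical data).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Features: [sweetness, size_cm, weight_g]
X = [[8, 7, 150], [7, 6, 120], [2, 20, 1200], [3, 18, 1000], [9, 8, 160], [2, 25, 1500]]
# Target: the fruit label the model should learn to predict
y = ["apple", "apple", "pineapple", "pineapple", "apple", "pineapple"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)   # training
print(model.predict(X_test))         # predicted labels for unseen examples
print(model.score(X_test, y_test))   # always verify performance on unseen data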

As machine learning continues to grow in significance for enterprise operations and AI becomes more practical in corporate settings, the machine learning platform wars will only intensify.

Continued research into deep learning and AI is increasingly focused on developing more general applications. Today's cutting-edge AI models require sizeable training to produce an algorithm that is optimized to perform a single task.

But some researchers are exploring approaches to make models more flexible, searching for techniques that allow a machine to apply context learned from one task to future, different tasks.


Read the original here:
How to get going with machine learning - Robotics and Automation News

ASCRS 2023: Predicting vision outcomes in cataract surgery with … – Ophthalmology Times

Mark Packer, MD, sat down with Sheryl Stevenson, Group Editorial Director, Ophthalmology Times, to discuss his presentation on machine learning and predicting vision outcomes after cataract surgery at the ASCRS annual meeting in San Diego.

Editor's note: This transcript has been edited for clarity.

We're joined by Dr. Mark Packer, who will be presenting at this year's ASCRS. Hello to Dr. Packer. Great to see you again.

Good to see you, Sheryl.

Sure, tell us a little bit about your talk on machine learning and predicting vision outcomes after cataract surgery.

Sure, well, as we know, humans tend to be fallible, and even though surgeons don't like to admit it, they have been prone to make errors from time to time. And you know, one of the errors that we make is that we always extrapolate from our most recent experience. So if I just had a patient who was very unhappy with a multifocal IOL, all of a sudden, I'm going to be a lot more cautious with my next patient, and maybe the one after that, too.

And the reverse can happen as well. If I just had a patient who was absolutely thrilled with their toric multifocal, and they never have to wear glasses again, and they're leaving for Hawaii in the morning, you know, getting a full makeover, I'm going to think, wow, that was the best thing I ever did. And now all of a sudden, everyone looks like a candidate. And even for someone like me, who has been doing multifocal IOLs for longer than I care to admit, you know, this can still pose a problem. That's just human nature.

And so what we're attempting to do with the oculotics program is to bring a little objectivity into the mix. Now, of course, we already do that when we talk about IOL power calculations; we leave those up to algorithms and let them do the work. One of the things that we've been able to do with oculotics is actually improve upon the way that power calculations are done. So rather than just looking at the dioptric power of a lens, for example, we're actually looking at the real optical properties of the lens, the modulation transfer function, in order to help correlate that with what a patient desires in terms of spectacle independence.

But the real brainchild here is the idea of incorporating patient feedback after surgery into the decision-making process. So part of this is actually to give our patients an app that they can use to provide feedback on their level of satisfaction, essentially by filling out the VFQ-25, which is simply a 25-item questionnaire developed in the 1990s by the RAND Corporation to look at visual function and how satisfied people are with their vision: whether they have to worry about it, how they feel about their vision, whether they can drive at night comfortably, and all that.

So if we can incorporate that feedback into our decision making, then instead of going into the next room with just what happened today fresh in my mind, I'll actually be incorporating the knowledge of every patient I've operated on since I started using this system, and how they fared with these different IOLs.

So the machine learning algorithm can take this patient feedback and put it together with the preoperative characteristics, such as personal items like hobbies, what they do for recreation, what their employment is, and what kind of visual demands they have, along with anatomic factors such as axial length, anterior chamber depth, and corneal curvature. Put that all together, and then we can begin to match intraocular lens selection to patients based not only on their biometry, but also on their personal characteristics and how similar patients actually felt about the results of their surgery.
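To make the idea concrete, here is a purely hypothetical sketch, not the program Dr. Packer describes: the feature names, data, model choice, and candidate lenses are all invented, and it only illustrates combining pre-operative characteristics with post-operative VFQ-25 feedback to rank lens options.

# Hypothetical sketch: predict post-op satisfaction from pre-op features + IOL choice,
# then rank candidate IOLs for a new patient. Not the system described in the talk.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

history = pd.DataFrame({                    # invented past cases
    "axial_length": [23.5, 24.1, 22.9, 25.0, 23.8, 24.4],
    "acd": [3.1, 3.4, 2.9, 3.6, 3.2, 3.5],  # anterior chamber depth (mm)
    "night_driving": [1, 0, 1, 0, 1, 0],    # lifestyle flag from a pre-op questionnaire
    "iol": ["multifocal", "monofocal", "edof", "multifocal", "edof", "monofocal"],
    "vfq25": [88, 75, 91, 70, 86, 78],      # post-op satisfaction (VFQ-25 composite)
})
X = pd.get_dummies(history.drop(columns="vfq25"))
model = GradientBoostingRegressor(random_state=0).fit(X, history["vfq25"])

new_patient = {"axial_length": 23.7, "acd": 3.3, "night_driving": 1}
candidates = ["multifocal", "monofocal", "edof"]
options = pd.get_dummies(pd.DataFrame([{**new_patient, "iol": c} for c in candidates]))
options = options.reindex(columns=X.columns, fill_value=0)
print(dict(zip(candidates, model.predict(options))))   # rank lenses by predicted score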

So that's how I think machine learning can help us, and hopefully bring surgeons up to speed with premium IOLs more quickly, because it's taken some of us years and years to gain the experience to really become confident in selecting which patients are right for premium lenses, particularly multifocal and extended depth of focus lenses and that sort of thing, where there are visual side effects and limitations, but also great advantages. So hopefully machine learning can bring young surgeons up to speed more quickly, increase their confidence, and allow them to increase the rate of adoption of these premium lenses among their patients.

The rest is here:
ASCRS 2023: Predicting vision outcomes in cataract surgery with ... - Ophthalmology Times

How AI and Machine Learning is Transforming the Online Gaming … – Play3r

Are you an avid online gamer? Do you find yourself craving a more immersive experience every time you jump into your favorite slot games, or any game for that matter? If so, you may be interested to learn how advances in AI and machine learning are transforming the gaming experience.

In this blog post, we will explore the ways that artificial intelligence and machine learning technologies are making online gaming smoother and more thrilling than ever before. We'll look at how these technologies have been used to enhance graphics, user interfaces, and in-game dynamics, all of which can drastically improve your gameplay.

Whether your favorite pastime is first-person shooters or real-time strategy games, let's delve into everything AI has to offer gamers!

As the online gaming industry continues to grow and evolve, AI and machine learning have become increasingly important tools for developers. These technologies can change the way we experience our favorite games, from providing more realistic and unpredictable opponents to personalized gameplay.

Through the use of AI and machine learning, game developers can analyze vast amounts of data, allowing them to create better-balanced and more engaging gaming experiences.

Additionally, these tools can help identify and prevent cheating, making online gaming fairer and more enjoyable for all. As the gaming industry moves forward, it's clear that AI and machine learning will play an important role in shaping its future.

The world of online gaming is constantly evolving and with the introduction of AI and machine learning, it just keeps getting better. These technologies have revolutionized the gaming industry and brought about countless benefits for both players and developers.

AI algorithms help create more realistic gameplay and sophisticated opponents, while machine learning helps predict player behavior and preferences, leading to a more personalized gaming experience.

Additionally, AI can help game developers optimize their games for performance and eliminate bugs faster than ever before. In short, the benefits of using AI and machine learning in online gaming are diverse and far-reaching, making it an exciting area to watch for future developments.

Developing AI and machine learning technologies can be incredibly challenging for software developers. One of the biggest obstacles faced by developers is finding the right data to train their algorithms effectively.

In addition to this, there is also a lot of complexity involved in designing AI systems that can learn from data with minimal human intervention. Moreover, creating machine learning models that can accurately predict and analyze data in real time requires a sophisticated understanding of various statistical techniques and programming languages.

With these challenges in mind, it's no wonder that many developers in this field feel overwhelmed. However, with the right tools and resources, developers can overcome these obstacles and continue advancing the exciting field of AI and machine learning.

The world of gaming has evolved significantly in recent years, and one major factor in this transformation is the integration of AI and machine learning into popular online games. From first-person shooters to strategy and adventure games, players have been enjoying a more immersive experience thanks to the inclusion of smarter, more complex non-player characters (NPCs) and advanced game optimization.

For example, in the game AI Dungeon, players can enter any storyline, and the AI generates a unique adventure based on their input. Similarly, the popular game League of Legends uses machine learning to optimize matchmaking, ensuring players are pitted against opponents of similar skill levels.

With AI and machine learning continually improving, the future of online gaming promises to be even more exciting and engrossing.

Artificial intelligence and machine learning have drastically transformed the gaming industry in recent years. These technologies can analyze vast amounts of data, predict outcomes, and make recommendations for players to improve their overall gameplay experience. AI can also assist developers in creating more immersive worlds, where virtual characters have reactive behaviors that mimic real-life behaviors.

Machine learning algorithms, on the other hand, can help determine a player's skill level and preferences, adapting gameplay accordingly. Many gamers have already seen the benefits of these technologies, with smarter NPCs, more adaptive environments, and improved matchmaking systems.

As AI and machine learning continue to evolve, the gaming experience will only become more enhanced and personalized, creating an even more immersive world for players to explore.

AI and machine learning-based games have become increasingly popular in recent years, offering players a unique and immersive gaming experience. But how can you make the most of these cutting-edge titles?

Firstly, take the time to understand the game mechanics and the AI's decision-making process. This can help you anticipate actions and develop strategies to stay ahead of the curve. Additionally, be sure to give feedback to the developers, as this can help them improve the game's machine-learning algorithms and provide a better experience for everyone.

Lastly, don't be afraid to experiment and try out different approaches to see what works best. With these tips, you'll be well on your way to dominating the world of AI and machine learning-based gaming.

Online gaming experiences have been revolutionized by AI and machine learning technology, which make it possible to offer players intelligent, personalized experiences that feel unique and engaging. Not only is this creating games that boost user retention, but it is also opening up exciting possibilities for multiplayer gaming.

Additionally, developers are increasingly leaning on AI and ML to create more immersive worlds for gamers to explore. Despite challenges in implementation, advances in AI and machine learning are offering a wide range of captivating new experiences for online gamers, from improved graphics to real-time learning obstacles, making them an important component in crafting better gameplay experiences than ever before.

As players continue to enjoy the ever-evolving exciting world of online gaming, they must keep up with the latest trends related to AI and Machine Learning technology to make sure they are getting the most out of their experience.


Continued here:
How AI and Machine Learning is Transforming the Online Gaming ... - Play3r

Artificial Intelligence and Machine Learning in Cancer Detection – Targeted Oncology

Toufic Kachaamy, MD

City of Hope Phoenix

Since the first artificial intelligence (AI) enabled medical device received FDA approval in 1995 for cervical slide interpretation, there have been 521 FDA approvals provided for AI-powered devices as of May 2023.1 Many of these devices are for early cancer detection, an area of significant need since most cancers are diagnosed at a later stage. For most patients, an earlier diagnosis means a higher chance of positive outcomes such as cure, less need for systemic therapy and a higher chance of maintaining a good quality of life after cancer treatment.

While an extensive review of these is beyond the scope of one article, this article will summarize the major areas where AI and machine learning (ML) are currently being used and studied for early cancer detection.

The first area is large database analyses for identifying patients at risk for cancer or with early signs of cancer. These models analyze the electronic medical record, a structured digital database, and use pattern recognition and natural language processing to identify patients with specific characteristics. These include individuals with signs and symptoms suggestive of cancer, those at risk of cancer based on known risk factors, or those with specific health measures associated with cancer. For example, pancreatic cancer has a relatively low incidence but is still the fourth leading cause of cancer death. Because of the low incidence, screening the general population is neither practical nor cost-effective. ML can be used to analyze specific health outcomes such as new-onset hyperglycemia2 and certain health data from questionnaires3 to classify members of the population as high risk for pancreatic cancer. This allows the screened population to be "enriched" for pancreatic cancer, making screening higher yield and more cost-effective at an earlier stage.
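As a purely illustrative sketch of that enrichment step (not any of the cited models; the features, records, and threshold below are invented), a simple classifier over EHR-derived features might look like this:

# Hypothetical risk-enrichment sketch - invented data, not a validated model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per patient: [age, new_onset_hyperglycemia, weight_loss_kg, smoker]
X = np.array([[66, 1, 6, 1], [52, 0, 0, 0], [71, 1, 4, 0],
              [45, 0, 1, 1], [63, 1, 8, 1], [58, 0, 0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])            # 1 = later diagnosed (invented labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)
risk = clf.predict_proba([[68, 1, 5, 0]])[0, 1]   # estimated risk for a new patient
refer_to_screening = risk > 0.5                   # screen only the enriched, high-risk group
print(round(risk, 2), refer_to_screening)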

Another area leveraging AI and ML is image analysis. Human vision is sharpest centrally, covering less than 3 degrees of the visual field. Peripheral vision has significantly less spatial resolution and is better suited to detecting rapid movement and "big picture" analysis. In addition, "inattentional blindness," or missing significant findings when focused on a specific task, is one of the vulnerabilities of humans, as demonstrated in the study showing that even experts missed a gorilla embedded in a CT scan while searching for lung nodules.3 Machines are not susceptible to fatigue, distraction, blind spots or inattentional blindness. In a study that compared a deep learning algorithm with radiologists from the National Lung Screening Trial, the algorithm performed better than the radiologists in detecting lung cancer on chest X-rays.4

AI analysis of histologic specimens can serve as an initial screening tool and as a real-time interactive assistant during histological analysis.5 AI is capable of diagnosing cancer with high accuracy.6 It can accurately determine grades, such as the Gleason score for prostate cancer, and identify lymph node metastasis.7 AI is also being explored for predicting gene mutations from histologic analysis. This has the potential to decrease cost and improve time to analysis, both of which are limitations in today's practice that restrict universal gene analysis in cancer patients,8 even as such analyses gain a role in precision cancer treatment.9

An exciting, up-and-coming area for AI and deep learning is the combination of the above, such as pairing large database analysis with pathology assessment and/or image analysis. For example, using medical record analysis and chest X-ray findings, deep learning was used to identify patients at high risk for lung cancer who would benefit the most from lung cancer screening. This has great potential, especially since only 5% of patients eligible for lung cancer screening are currently being screened.10

Finally, there is the holy grail of cancer detection: blood-based multicancer detection tests. Many of these, already available or in development, use AI algorithms to develop, analyze and validate the test.11

It is hard to imagine an area of medicine that AI and ML will not impact. AI is unlikely, at least for the foreseeable future, to replace physicians; it will be used to enhance physician performance and improve accuracy and efficiency. However, it is essential to note that machine-human interaction is very complicated, and we are only scratching the surface of this era. It is premature to assume that real-world outcomes will match outcomes seen in trials. Any outcome that involves human analysis and final decision-making is affected by human performance. Training and the study of human behavior are needed for human-machine interaction to produce optimal outcomes. For example, randomized controlled studies have shown increased polyp detection during colonoscopy using computer-aided detection, or AI-based image analysis.12 However, real-life data did not show similar findings,13 likely due to differences in how AI affects different endoscopists.

Artificial intelligence and machine learning are dramatically altering how medicine is practiced, and cancer detection is no exception. Even in the medical world, where change is typically slower than in other disciplines, AI's pace of innovation is arriving quickly and, in certain instances, faster than many can grasp and adapt to.

Read more from the original source:
Artificial Intelligence and Machine Learning in Cancer Detection - Targeted Oncology

Use of machine learning to assess the prognostic utility of radiomic … – Nature.com

Original post:
Use of machine learning to assess the prognostic utility of radiomic ... - Nature.com

Using Machine Learning to Predict the 2023 Kentucky Derby … – DataDrivenInvestor

Can the forecasted weather be used to predict the winning race time?

My hypothesis is that the weather has a major impact on the Kentucky Derby's winning race time. In this analysis I will use the Kentucky Derby's forecasted weather to predict the winning race time using machine learning (ML). In previous articles I discussed the importance of using explainable ML in a business setting to provide business insights and help with buy-in and change management. In this analysis, because I'm striving purely for accuracy, I will disregard this advice and go directly to the more complex, but accurate, black box Gradient Boosted Machine (GBM), because we want to win some money!

The data I will use comes from the National Weather Service:

# Read in Data #
data <- read.csv("...KD Data.csv")

# Declare Year Variables #
year <- data[,1]

# Declare numeric x variables #
numeric <- data[,c(2,3,4)]

# Scale numeric x variables #
scaled_x <- scale(numeric)
# check that we get mean of 0 and sd of 1
colMeans(scaled_x)
apply(scaled_x, 2, sd)

# One-Hot Encoding #
data$Weather <- as.factor(data$Weather)
xfactors <- model.matrix(data$Year ~ data$Weather)[, -1]

# Bring prepped data all back together #
# (y, the winning race time, is assumed to have been declared from the data in a step lost from this excerpt)
scaled_df <- as.data.frame(cbind(year, y, scaled_x, xfactors))

# Isolate pre-2023 data #
old_data <- scaled_df[-1,]
new_data <- scaled_df[1,]

# Gradient Boosted Machine #
# Find Max Interaction Depth #
floor(sqrt(NCOL(old_data)))

# Fit the GBM (assumed step: the original model-fitting call did not survive this excerpt)
library(gbm)
tree_mod <- gbm(y ~ . - year, data = old_data, distribution = "gaussian",
                n.trees = 500, interaction.depth = 2)

# find index for n trees with minimum CV error
best.iter <- gbm.perf(tree_mod, method = "OOB", plot.it = TRUE, oobag.curve = TRUE, overlay = TRUE)
print(best.iter)

In this article, I chose a more accurate, but complex, black box model to predict the Kentucky Derby's winning race time. This is because I don't care about generating insights or winning buy-in or change management; rather, I want to use the most accurate model so I can make a data-driven gamble. In most business cases you will give up accuracy for explainability; however, there are some instances (like this one) in which accuracy is the primary requirement of a model.

This prediction is based on the forecasted weather for Saturday, May 6th, taken on Thursday, May 4th, so obviously it should be taken with a grain of salt. As everyone knows, even with huge amounts of technology, predicting the weather is very difficult. Using forecasted weather to predict the winning race time adds even more uncertainty. That being said, I will take either the over or the under that matches my predicted winning time of 122.12 seconds.

Read the original post:
Using Machine Learning to Predict the 2023 Kentucky Derby ... - DataDrivenInvestor