At the Louvre, the Olympics Are More French Than You Might Think – The New York Times

"The flame is coming home," the director of the Paris Olympics, Tony Estanguet, told a crowd of reporters and critics gathered in the Louvre's interior sculpture garden on Tuesday. The sun streamed through the vaulted glass roof, lighting up a bronze sculpture of a discus thrower installed beneath a lapis blue arch emblazoned with "L'Olympisme" (Olympism).

Estanguet, a former Olympic champion, might have been describing the Games' centennial return to France. After the Olympic flame makes its way from Athens to Paris, via a handful of French overseas territories, it will be installed in the Tuileries Garden just beyond the Louvre, whose grounds will also be part of the marathon route this summer. But the museum itself holds a special connection to the birth of the modern Olympics, a relationship that is explored in the exhibition "Olympism: Modern Invention, Ancient Legacy," running through Sept. 16.

The show brings together 120 artworks and artifacts that trace how the quadrennial sporting events of 8th-century B.C. Greece, devoted to the worship of Zeus, influenced the late-19th-century development of the modern Games. The first iteration of these new competitions took place in Athens in 1896, but Frenchmen and a French fascination with antiquity played a large role, and in 1900, the Games moved to Paris.

A wall of photographic portraits at the Louvre identifies six men, four of them French, who envisioned the revival. For the aristocratic Frenchman Pierre de Coubertin, it was about sporting education; for his Greek counterpart, Demetrius Vikelas, it was a mix of business and history. This slightly dry introductory display gives way to a series of rooms that focus on the art of the Olympics: a mix of antique veneration and turn-of-the-century innovation.

Greek vases, plates, and cups from the 5th and 6th centuries B.C. illustrate the classical imagery, deeply rooted in mythology, that was associated with the ancient Games. On the Lambros Cup (540-520 B.C.), nude runners, black figures on red clay, race around the ample vessel, their muscular legs frozen mid-stride. A cup from around 490 B.C. shows a discus thrower encircled by a decorative motif.

Many of these objects are from the Louvre's collection, and it was one of its own curators, Edmond Pottier, who pioneered the study of ancient Greek pottery around the time that de Coubertin and his peers were seized with Olympic fervor. Pottier's profile features on a giant 1934 bronze medallion that hangs above a copy of his Corpus Vasorum Antiquorum, a definitive catalog of Greek vases in collections around the world that began as an index of Louvre artifacts.

Herakles, the divine warrior credited with founding the ancient Olympics, also looms large in the exhibition as an embodiment of preternatural strength. A calyx krater (a tall bowl for mixing water and wine) from 515-510 B.C. shows Herakles, a son of Zeus, fighting the giant Antaios. On the black vessel, Herakles is a taut nude figure in red clay against black, wrestling his burly opponent into submission. Elsewhere, he is a portly infant struggling against a snake that coils above him, in a statue admired by Émile Gilliéron, the official artist of the inaugural modern Games.

Gilliéron's drawings for Olympic brochures, commemorative albums and posters hang alongside his sketches and studies for medallions, plaques and trophies. The artist also produced images of wrestlers, discus throwers, torch bearers and weight lifters for special-edition stamps, whose colored sheets are on display in vitrines as well as blown up on the gallery walls behind the statues that inspired them. Unlike the ancient ceramics, however, these are 20th-century replicas made to aid study: What is new can seem old, and vice versa.

Amid these elegant but somewhat staid arrangements are hints at the more idiosyncratic aspects of the Olympic Games as reimagined by the French. A contact sheet produced by the photographer (and rival of Eadweard Muybridge) Étienne-Jules Marey shows how the technology of chronophotography, which captures frames of movement in quick succession, was used to reconstruct the movements of ancient Greek athletes, based on the still postures seen in relics. In Marey's stills, a nude man spins around and around, disc in hand, gathering speed, until he flings it into the distance.

Nearby, Jean Rovéra's 1924 film "The Olympic Games as They Were Practiced in Ancient Greece" stages the act of discus throwing as a slow-motion pantomime in which an artfully dressed modern-day Adonis theatrically lobs his disc with the elegance of a dancer. Another shot shows a still-life tableau of six spear throwers paused mid-movement, elapsing time from left to right, their arms shaking with effort as they hold their unmoving posture.

An attempt at including women in the history of the Games doesn't really work, mostly because they were hardly permitted to compete in the 1896 Athens Olympics, or those that followed in Paris in 1900 and 1924, London in 1908, Stockholm in 1912 and onward. While other international sporting competitions evolved, the Olympics continued refusing full participation to women until 1928. (London 2012 was the first time every participating country sent women to the Games, and this summer in Paris there will be quotas to ensure an equal number of female and male participants.)

There was one video of women competing in the 1896 Games on display, but it was broken, so I don't know what it showed: perhaps croquet or sailing, two of the sports available to female athletes. Elsewhere, in something of a curatorial stretch, were some films of Isadora Duncan, the late-19th-century choreographer who admired neoclassical traditions, dancing in her garden. A few drawings and plates of Greek heroines hung in the same display (Nike, the winged goddess, flying or sowing seeds over a stadium), but female allegories are not women.

An 1869 painting, "The Soldier of Marathon," depicts the famous messenger who ran home, shedding all extraneous objects, including clothes and shoes, along the way, to announce the triumph of his compatriots over the invading Persians. As soon as he delivered the news, he dropped dead.

This legend inspired the French linguist and educator Michel Bréal to conceive of the marathon race, now standardized at 26.2 miles, as the ultimate physical test and a cornerstone of the 1896 Games. In a darkened Louvre walkway filled with relics and replicas of gleaming trophies, Bréal's Silver Cup, which he designed himself, is spotlit on a small plinth. It is a sparkling object, pure silver, but modest and slender. Reeds and flowers swirl around its base, just like the Marathon marshlands that foiled the Persian attack.

"Olympism" tells us much about the ancient history admired by the modern Frenchmen whose Games return to Paris in July. During the ancient Games, it was decreed that all hostilities must cease for their duration. It is this sentiment, however utopian, that we still see in the Olympic emblem, with its five interlocking rings, designed by de Coubertin over a century ago. "These five rings represent the five parts of the world now won over to Olympism," he wrote in 1913 in the Olympic Review. At the Louvre, you may be won over, too.

"Olympism: Modern Invention, Ancient Legacy" runs through Sept. 16 at the Louvre in Paris; louvre.fr.


Is artificial intelligence combat ready? – Washington Technology

Human soldiers will increasingly share the battlespace with a range of robotic, autonomous, and artificial intelligence-enabled agents. Machine intelligence has the potential to be a decisive factor in future conflicts that the U.S. may face.

The pace of change will be faster than anything seen in many decades, driven by advances in commercial AI technology and the pressure of a near-peer adversary with formidable technological capabilities.

But are AI and machine learning combat-ready? Or, more precisely, is our military prepared to incorporate machine intelligence into combat effectively?

Creating an AI-Ready Force

The stakes of effective collaboration between AI and combatants are profound.

Human-machine teaming has the potential to reduce casualties dramatically by substituting robots and autonomous drones for human beings in the highest-risk front-line deployments.

It can dramatically enhance situational awareness by rapidly synthesizing data streams across multiple domains to generate a unified view of the battlespace. And it can overwhelm enemy defenses with the swarming of autonomous drones.

In our work with several of the Defense Department research labs working at the cutting edge of incorporating AI and machine learning into combat environments, we have seen that this technology has the potential to be a force multiplier on par with air power.

However, several technological and institutional obstacles must be overcome before AI agents can be widely deployed into combat environments.

Safety and Reliability

The most frequent concern about AI agents and uncrewed systems is whether they can be trusted to take actions with potentially lethal consequences. AI agents have an undeniable speed advantage in processing massive amounts of data to recognize targets of interest. However, there is an inherent tension between conducting war at machine speed and retaining accountability for the use of lethal force.

It only takes one incident of AI weapons systems subjecting their human counterparts to friendly fire to undermine the confidence of warfighters in this technology. Effective human-machine teaming is only possible when machines have earned the trust of their human allies.

Adapting Military Doctrine to AI Combatants

Uncrewed systems are being rapidly developed that will augment existing forces across multiple domains. Many of these systems incorporate AI at the edge to control navigation, surveillance, targeting, and weapons systems.

However, existing military doctrine and tactics have been optimized for a primarily human force. There is a temptation to view AI-enabled weapons as a new tool to be incorporated into existing combat approaches. But doctrine will be transformed by innovations such as the swarming of hundreds or thousands of disposable, intelligent drones capable of overwhelming strategic platforms.

Force structures may need to be reconfigured on the fly to deliver drones where there is the greatest potential impact. Human-centric command and control concepts will need to be modified to accommodate machines and build warfighter trust.

As autonomous agents proliferate and become more powerful, the battlespace will become more expansive and more transparent, and it will move exponentially faster. The decision of how, and whether, to incorporate AI into the operational kill chain has profound ethical consequences.

An even more significant challenge will be balancing the pace of action on the AI-enabled battlefield against the limits of human cognition. What are the tradeoffs between ceding a first-strike advantage measured in milliseconds and giving up human oversight? The outcome of future conflicts may hinge on such questions.

Insatiable Hunger for Data

AI systems are notoriously data-hungry. There is not, and fortunately never will be, enough operational data from live military conflicts to adequately train AI models to the point where they could be deployed on the battlefield. For this reason, simulations are essential for developing and testing AI agents, which require thousands or even millions of training iterations under modern machine learning techniques.

The DoD has existing high-fidelity simulations, such as Joint Semi-Automated Forces (JSAF), but they run essentially in real time. Unlocking the full potential of AI-enabled warfare requires simulations with sufficient fidelity to model potential outcomes accurately, yet fast enough to match the speed requirements of digital agents.
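
To make the data problem concrete, the sketch below shows, under stated assumptions, why batched, faster-than-real-time simulation matters: a vectorized toy environment steps thousands of episodes per wall-clock tick, yielding millions of training transitions. The environment, dynamics, and "policy" here are all invented for illustration; this is not JSAF or any DoD system.

```python
import numpy as np

class VectorizedSkirmishSim:
    """Toy stand-in for a combat simulation: n_envs independent episodes
    stepped in lockstep, so one wall-clock step yields n_envs transitions."""

    def __init__(self, n_envs: int, seed: int = 0):
        self.n_envs = n_envs
        self.rng = np.random.default_rng(seed)
        self.state = self.rng.normal(size=(n_envs, 8))  # 8 invented state features

    def step(self, actions: np.ndarray):
        # Invented dynamics: state drifts randomly; reward favors actions
        # aligned with the current state.
        self.state += 0.1 * self.rng.normal(size=self.state.shape)
        reward = np.einsum("ij,ij->i", self.state[:, : actions.shape[1]], actions)
        return self.state.copy(), reward

sim = VectorizedSkirmishSim(n_envs=4096)
for _ in range(1000):                    # 1,000 steps x 4,096 envs, roughly 4M transitions
    actions = np.tanh(sim.state[:, :4])  # placeholder "policy"
    obs, rew = sim.step(actions)
```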

Integration and Training

AI-enabled mission planning has the potential to vastly expand the situational awareness of combatants and generate novel multi-domain operation alternatives to overwhelm the enemy. Just as importantly, AI can anticipate and evaluate thousands of courses of action that the enemy might employ and suggest countermeasures in real time.

One reason Americas military is so effective is a relentless focus on training. But warfighters are unlikely to embrace tactical directives emanating from an unfamiliar black box when their lives hang in the balance.

As autonomous platforms move from research labs to the field, intensive warfighter training will be essential to create a cohesive, unified human-machine team. To be effective, AI course-of-action agents must be designed to align with existing mission planning practices.

By integrating such AI agents with the training for mission planning, we can build confidence among users while refining the algorithms using the principles of warfighter-centric design.

Making Human-Machine Teaming a Reality

While underlying AI technology has grown exponentially more powerful in the past few years, addressing the challenges posed by human-machine teaming will determine how rapidly these technologies can translate into practical military advantage.

From the level of the squad all the way to the joint command, it is essential that we test the limits of this technology and establish the confidence of decision-makers in its capabilities.

There are several vital initiatives the DoD should consider to accelerate this process.

Embrace the Chaos of War

Building trust in AI agents is the most essential step to effective human-machine teaming. Warfighters will rightly have a low level of confidence in systems that have only been tested under controlled laboratory conditions. The best experiments and training exercises replicate the chaos of war, including unpredictable events, jamming of communications and positioning systems, and mid-course changes to the course of action.

Human warfighters should be encouraged to push autonomous systems and AI agents to the breaking point to see how they perform under adverse conditions. This will result in iterative design improvements and build the confidence that these agents can contribute to mission success.

A tremendous strength of the U.S. military is the flexible command structure that empowers warfighters down to the squad level to rapidly adapt to changing conditions on the ground. AI systems have the potential to provide these units with a far more comprehensive view of the battlespace and generate tactical alternatives. But to be effective in wartime conditions, AI agents must be resilient enough to function under conditions of degraded communications and understand the overall intent of the mission.

Apply AI to Defense Acquisition Process

The rapid evolution of underlying AI and autonomous technologies means that traditional procurement processes developed for large Cold War-era platforms are doomed to fail. As an example, swarming tactics are only effective when using hundreds or thousands of individual systems capable of intelligent, coordinated action in a dynamic battlespace.

Acquiring such devices at scale will require leveraging a broad supplier base, moving rapidly down the cost curve, and enabling frequent open standards updates. Too often, we have seen weapons vendors using incompatible, proprietary communications standards that render systems unable to share data, much less engage in coordinated, intelligent maneuvers. One solution is to apply AI to revolutionize the acquisition process.

By creating a virtual environment to test system designs, DoD customers can verify operational concepts and interoperability before a single device is acquired. This will help reduce waste, promote shared knowledge across the services, and create a more level playing field for the supplier base.

Build Bridges from Labs to Deployment

While a tremendous amount of important work has been done by organizations such as the Naval Research Laboratory, the Army Research Laboratory, the Air Force Research Laboratory, and DARPA, the success of AI-enabled warfare will ultimately be determined by moving this technology out of the laboratories and into the commands. Human-machine teaming will be critical to the success of these efforts.

Just as important, the teaching of military doctrine at the service academies needs to be continuously updated as the technology frontier advances. Incorporating intelligent agents into practical military missions requires both profound changes in doctrine and reallocation of resources.

Military commanders are unlikely to be dazzled by bright and shiny objects unless they see tangible benefits to deploying them. By starting with some easy wins, such as the enhancement of ISR capabilities and automation of logistics and maintenance, we can build early bridges that will instill confidence in the value of AI agents and autonomous systems.

Educating commands about the potential of human-machine teaming to enhance mission performance and then developing roadmaps to the highest potential applications will be essential. Commanders need to be comfortable with the parameters of human-in-the-loop and human-on-the-loop systems as they navigate how much autonomy to grant to AI-at-the-edge weapons systems. Retaining auditability as decision cycles accelerate will be critical to ensuring effective oversight of system development and evolving doctrine.
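
The human-in-the-loop versus human-on-the-loop distinction can be made concrete in a few lines of code. This is a minimal sketch with hypothetical function names, not a real weapon-system API: in-the-loop blocks until a person approves each action, on-the-loop proceeds once a veto window expires, and both keep an audit trail so decisions remain reviewable afterward.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AuditLog:
    """Timestamped record of every proposal and decision, for later review."""
    entries: list = field(default_factory=list)

    def record(self, event: str):
        self.entries.append((time.time(), event))

def engage_in_the_loop(target: str, approve, log: AuditLog) -> bool:
    """Human-in-the-loop: nothing happens until a person explicitly approves."""
    log.record(f"proposed: {target}")
    if approve(target):  # blocks until a human decides
        log.record(f"approved: {target}")
        return True
    log.record(f"denied: {target}")
    return False

def engage_on_the_loop(target: str, veto_window_s: float, vetoed, log: AuditLog) -> bool:
    """Human-on-the-loop: action proceeds unless a person vetoes in time."""
    log.record(f"announced: {target}")
    deadline = time.time() + veto_window_s
    while time.time() < deadline:  # human may veto during this window
        if vetoed(target):
            log.record(f"vetoed: {target}")
            return False
        time.sleep(0.01)
    log.record(f"executed: {target}")
    return True
```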

Summary

Rapid developments in AI and autonomous weapons systems have simultaneously accelerated and destabilized the ongoing quest for military superiority and effective deterrence. The United States has responded to this threat with a range of policies restricting the transfer of underlying technologies. However, the outcome of this competition will depend on the ability to convincingly transfer AI-enabled warfare from research labs to potential theaters of conflict.

Effective human-machine teaming will be critical to make the transition to a joint force that leverages the best capabilities of human warfighters and AI to ensure domination of the battlespace and deter adventurism by foreign actors.

Mike Colony leads Serco's Machine Learning Group, which has supported several Department of Defense clients in the area of AI and machine learning, including the Office of Naval Research, the Air Force Research Laboratory, the U.S. Marine Corps, the Electronic Warfare and Countermeasures Office, and others.


As Massachusetts leans in on artificial intelligence, AG waves a yellow flag – Rhode Island Current

BOSTON – While the executive branch of state government touts the competitive advantage of investing energy and money into artificial intelligence across Massachusetts' tech, government, health, and educational sectors, the state's top prosecutor is sounding warnings about its risks.

Attorney General Andrea Campbell issued an advisory to AI developers, suppliers, and users on Tuesday, reminding them of their obligations under the state's consumer protection laws.

"AI has tremendous potential benefits to society," Campbell's advisory said. "It presents exciting opportunities to boost efficiencies and cost-savings in the marketplace, foster innovation and imagination, and spur economic growth."

However, she cautioned, AI systems have already been shown to pose serious risks to consumers, including bias, lack of transparency or explainability, implications for data privacy, and more. Despite these risks, businesses and consumers are rapidly adopting and using AI systems, which now impact virtually all aspects of life.

Developers promise that their complex and opaque systems are accurate, fair, effective, and appropriate for certain uses, but Campbell notes that the systems are being deployed in ways that can deceive consumers and the public. She cites chatbots used to perpetrate scams, and false computer-generated images and videos, called "deepfakes," that mislead consumers and viewers about a participant's identity. Misleading and potentially discriminatory results from these systems can run afoul of consumer protection laws, according to the advisory.

The advisory echoes a dynamic seen in the state's enthusiastic embrace of gambling at the executive level, with Campbell cautioning against potential harmful impacts while stopping shy of a full-throated objection to expansions like an online Lottery.

Gov. Maura Healey has touted applied artificial intelligence as a potential boon for the state, creating an artificial intelligence strategic task force through executive order in February. Healey is also seeking $100 million in her economic development bond bill, the Mass Leads Act, to create an Applied AI Hub in Massachusetts.

"Massachusetts has the opportunity to be a global leader in Applied AI, but it's going to take us bringing together the brightest minds in tech, business, education, health care, and government. That's exactly what this task force will do," Healey said in a statement accompanying the task force announcement. "Members of the task force will collaborate on strategies that keep us ahead of the curve by leveraging AI and GenAI technology, which will bring significant benefit to our economy and communities across the state."

The executive order itself makes only glancing references to risks associated with AI, focusing mostly on the task force's role in identifying strategies for collaboration around AI and for adoption across life sciences, finance, and higher education. The task force members will recommend strategies to facilitate public investment in AI and promote AI-related job creation across the state, as well as structures to promote responsible AI development and use for the state.

In conversation with Healey last month, tech journalist Kara Swisher offered a sharp critique of the enthusiastic embrace of AI hype, describing it as "just marketing right now" and comparing it to the crypto bubble; signs of a similar AI bubble are troubling other tech reporters as well. "Tech companies are seeing the value in pushing whatever we're pushing at the moment, and it's exhausting, actually," Swisher said, adding that certain types of tasked algorithms, like search tools, are already commonplace, but the trend now is "slapping an AI onto it and saying it's AI. It's not."

Eventually, Swisher acknowledged, tech becomes cheaper and more capable at certain types of labor than people, as in the case of mechanized farming, and it's up to officials like Healey to figure out how to balance new technology while protecting the people it impacts.

Mohamad Ali, chief operating officer of IBM Consulting, opined in CommonWealth Beacon that there need to be significant investments in an AI-capable workforce that prioritizes trust and transparency.

Artificial intelligence policy in Massachusetts, as in many states, is a hodgepodge crossing all branches of government. The executive branch is betting big that the technology can boost the states innovation economy, while the Legislature is weighing the risks of deepfakes in nonconsensual pornography and election communications.

Reliance on large language model styles of artificial intelligence, which meld the feel of a search algorithm with the promise of a competent researcher and writer, has caused headaches for courts. Because several widely used AI tools rely on predictive text algorithms trained on existing work but not always limited to it, large language model AI can "hallucinate," fabricating facts and citations that don't exist.

In a February order in a troubling wrongful death and sexual abuse case filed against the Stoughton Police Department, Associate Justice Brian Davis sanctioned attorneys for their reliance on AI systems to prepare legal research and for blindly filing inaccurate information generated by those systems with the court. The "AI hallucinations and the unchecked use of AI in legal filings are disturbing developments that are adversely affecting the practice of law in the Commonwealth and beyond," Davis wrote.

This article first appeared on CommonWealth Beacon and is republished here under a Creative Commons license.


SpaceX to launch Maxar WorldView Legion 1 & 2 mission for leading resolution and accuracy – SatNews

A SpaceX Falcon 9 rocket will launch the WorldView Legion 1 & 2 mission on Wednesday, April 17, 2024 at 6:30 PM (UTC). WorldView Legion is a constellation of Earth observation satellites built and operated by Maxar. The constellation is planned to consist of six satellites in both polar and mid-inclination orbits, providing 30 cm-class resolution.

These are the first two of six planned WorldView Legion satellites, which will enhance Maxar Intelligence's constellation by delivering industry-leading resolution and accuracy. When all six WorldView Legion satellites are launched, they will triple Maxar Intelligence's capacity to collect 30 cm-class and multispectral imagery. The full Maxar constellation of 10 electro-optical satellites will image the most rapidly changing areas on Earth as frequently as every 20 to 30 minutes, from sunup to sundown.

"WorldView Legion will extend the quality and capability of our industry-leading constellation, redefining Earth observation constellation performance and providing customers with unprecedented access to timely, actionable insights that help drive mission success," said Dan Smoot, Maxar Intelligence CEO.

These Maxar Space Systems-built satellites are the first Maxar 500 series buses to complete production at the company's satellite manufacturing locations in Palo Alto and San Jose, California. The Maxar 500 series bus is a mid-size platform that can be tailored for multiple missions and orbits. As part of the WorldView Legion program, Maxar invested in creating a bus with better stability, agility and pointing accuracy; future Maxar 500 customers can benefit from this technology for their missions.

"WorldView Legion and the Maxar 500 series platform are the culmination of decades of experience in building satellites for customer missions," said Chris Johnson, Maxar Space Systems CEO. "We are excited to reach this important program milestone and look forward to continued partnership on the program."

The launch of the first two WorldView Legion satellites will be broadcast on spacex.com and on x.com/spacex.

Space Launch Complex 4E has hosted 141 launches, all of them orbital attempts, while Vandenberg SFB, California, has been the site of 752 rocket launches. The launch cost is $52 million.


Rookie Robotics Team from Small UWS High School Joining the Giants in Robotics Competition – westsiderag.com

Sonia Benowitz is second from left. Credit: Annabelle Malschlin.

By Lisa Kava

Students from the newly formed robotics team at West End Secondary School (WESS), on West 61st Street, are competing in the New York City regionals of the FIRST Robotics Competition (FRC) from April 5-7. The event will take place at the Armory Track and Field Center in Washington Heights.

Founded in 2015, WESS has 500 students in its public high school. How did its novice robotics team secure a spot at FRC, alongside larger, well-established schools known for their STEM (Science, Technology, Engineering, and Math) programs, such as The Bronx High School of Science and Stuyvesant High School?

The story starts in September 2023, when Upper West Sider Sonia Benowitz, 14, entered 9th grade at WESS. She had loved building LEGO robots in WESS's middle school robotics club, "the community of the club and working with friends towards a common goal," she told West Side Rag in a phone interview. But a club did not exist for high school students. So she created one.

First, she approached her school principal, who was supportive, she said. Benowitz then asked her middle school robotics coach, Noah Tom-Wong, to help run the club. Together with math teacher Evan Wheeler, who signed on as faculty leader, they began to spread the word. Soon the club had 25 members from 9th through 12th grade.

With Tom-Wong's guidance, the club members gathered wood, metal, and other supplies, ordering from vendors and robotics companies. They began to build a fully functional robot that could perform various tasks through remote wireless control. "For example, one task is that the robot will use its arms that we built to pick up disks shaped like frisbees," Benowitz said, "then throw the disks into a goal area."

Tom-Wong suggested the club enter the FIRST Robotics Competition, in which he had competed as a student at Stuyvesant High School. He volunteers frequently at FRC competitions. "Robotics provides students [with] an incredibly unique environment where they can exert energy safely and with great impact," he told the Rag. "The nature of the competition not only makes students good at STEM, but also [at] STEM communication."

But the $6,000 registration fee for the competition was not in the school budget. That's when Samantha Alvarez Benowitz, Sonia's mom, got involved. Researching, she learned about a rookie grant from NASA through its Robotics Alliance Project. The WESS team applied and got it. According to Alvarez Benowitz, they were the only school in New York City selected to receive the NASA grant, and one of five schools in New York state.

"On the application we had to describe who was on our team, so I did a demographic survey and found that close to 70% of our team members are from historically underrepresented groups in STEM, including women, people of color, LGBTQ+, and students with disabilities," Sonia Benowitz said. "They also wanted to know how we would get and pay for the supplies we needed to build the robot." The team has been fundraising through bake sales and other school functions. They also applied for grants, receiving $2,500 from the Gene Haas Foundation, which sponsors STEM education.

At the competition, the WESS team will be paired with two other teams to form a three-team alliance. Each team has its own robot, which will be programmed to perform different tasks. The robots are judged and awarded points. "We have to prepare our robot to complete as many tasks as possible, but also to complete tasks as well as possible," Benowitz explained. The WESS robot has been programmed to drive up a ramp onto a platform, "like a car on a road," Alvarez Benowitz added. The ramp and platform are part of an existing set that all the teams use.

Working collaboratively is crucial, according to Tom-Wong. "The work that comes out of these robotics teams can be very complex," he said. "It's not unusual at competitions to see students from multiple teams working together to fix one team's problem." The top five teams will compete in the championships in Houston at the end of April.

Benowitz is excited about the competition. "Our team has been working towards this moment for months, and we have all put in a lot of time and effort to get here." She is also a little nervous. "I hope that our robot won't have any problems or break in the middle of a match."

Tom-Wong credits the rookie team for its perseverance. "The group had to work with less stock and fewer tools [than most teams]. We also do not have the experience that the veteran teams have," he told the Rag. He is hopeful that WESS students will remain active in robotics in future years. "Ultimately this group is unique in that they are pioneering the robotics program at WESS. They are laying the groundwork for a place where students can push themselves to learn and develop."


Nvidia Announces Robotics-Oriented AI Foundational Model – InfoQ.com

At its recent GTC 2024 event, Nvidia announced a new foundational model to build intelligent humanoid robots. Dubbed GR00T, short for Generalist Robot 00 Technology, the model will understand natural language and be able to observe human actions and emulate human movements.

According to Nvidia CEO Jensen Huang, creating intelligent humanoid robots is the most exciting AI problem today. GR00T robots will learn coordination and other skills by observing humans to be able to navigate, adapt and interact with the real world. At the conference keynote, Huang showed several demos of what GR00T is capable of at the moment, including some robots performing a number of tasks.

The GR00T model takes multimodal instructions and past interactions as input and produces the actions for the robot to execute.
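
Nvidia has not published a public GR00T interface, but the input/output contract described above can be sketched as follows; every name, type, and shape below is a hypothetical stand-in for illustration, not Nvidia's API.

```python
from dataclasses import dataclass
from typing import Sequence
import numpy as np

@dataclass
class Observation:
    rgb: np.ndarray      # camera frame, e.g. shape (H, W, 3)
    proprio: np.ndarray  # joint positions and velocities

@dataclass
class Step:
    observation: Observation
    action: np.ndarray   # joint targets that were executed

def policy(instruction: str, history: Sequence[Step], obs: Observation) -> np.ndarray:
    """Stand-in for the foundation model: maps a language instruction, the
    interaction history, and the current observation to the next action.
    Here it returns a zero joint-delta placeholder."""
    return np.zeros_like(obs.proprio)
```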

To power GR00T, Nvidia has created a new family of systems-on-modules, called Jetson Thor, using the latest Blackwell graphics architecture from the company and able to provide 800 teraflops (TFLOPS) of eight-bit floating-point compute.

At the foundation of GR00T lies Nvidia Isaac Sim, an extensible, Omniverse-based platform for robotics simulation aimed to improve the way AI-based robots are designed and tested, according to the company.

To train GR00T at scale, Nvidia has also built a new compute orchestration platform, Nvidia Osmo, aimed at coordinating training and inference across several Nvidia systems, including DGX systems for training, OVX systems for simulation, and IGX and AGX systems for hardware-in-the-loop validation.

Embodied AI models require massive amounts of real and synthetic data. The new Isaac Lab is a GPU-accelerated, lightweight, performance-optimized application built on Isaac Sim specifically for running thousands of parallel simulations for robot learning.

While GR00T is still very much a work in progress, Nvidia has announced two of the building blocks that will compose it, as part of the Isaac platform: a foundational model for robotic-arm manipulators, called Isaac Manipulator, and a collection of hardware-accelerated packages for visual AI and perception, the Isaac Perceptor.

According to Nvidia, Isaac Manipulator

provides up to an 80x speedup in path planning, and zero-shot perception increases efficiency and throughput, enabling developers to automate a greater number of new robotic tasks.

On the other hand, Isaac Perceptor aims to improve efficiency and safety in environments where autonomous mobile robots are used, such as in manufacturing and fulfillment operations.

Both the Manipulator and the Perceptor should become available in the next quarter, says Huang.

On a related note, Nvidia has joined the Open Source Robotics Alliance, which aims to provide financial and industry support to the Robot Operating System (ROS). The company has not detailed whether it plans to use ROS for GR00T robots, though.


Google giving $500K to expand robotics and AI education programs in Washington state – GeekWire

U.S. Congresswoman Suzan DelBene joins Google's Paco Galanes, Kirkland site lead and engineering director, right, with students working on robotics projects at Finn Hill Middle School in Kirkland, Wash., on Friday. (Google Photo)

Google's philanthropic arm is giving a $500,000 grant to expand access to robotics and artificial intelligence education programs across Washington state middle schools, the company announced Friday.

In partnership with the non-profits Robotics Education & Competition Foundation (RECF) and For Inspiration and Recognition of Science and Technology (FIRST), Google.org said the grant would support 1,234 new or existing robotics clubs in Washington and reach more than 8,900 students over the course of three years.

The announcement came during an event Friday morning at Finn Hill Middle School in Kirkland, Wash., where students put together robots and were introduced to hands-on STEM tools by Google employee volunteers. The Alphabet-owned tech giant has a sizable workforce in Kirkland and the greater Seattle area.

U.S. Congresswoman Suzan DelBene (D-WA) attended the event and said the investment was key to educating future leaders in robotics and AI.

"Programs like these give young people the opportunity to innovate, build new skills, and open bright new pathways for their future," DelBene said.

The funding is part of a $10 million initiative launched by Google.org to fund FIRST and RECF in communities where the company has a presence.


Rainbow Robotics unveils RB-Y1 wheeled, two armed robot – Robot Report

RB-Y1 mounts a humanoid-type double-arm robot on a wheeled, high-speed mobile base. | Credit: Rainbow Robotics

Rainbow Robotics announced the release of detailed specifications for the new RB-Y1 mobile robot. The company recently signed a memorandum of understanding with Schaeffler Group and the Korea Electronics Technology Institute, or KETI, to co-develop the RB-Y1 and other mobile manipulators in Korea.

The past year has seen an explosion in the growth of humanoids, most of which are bipedal robots that walk on two legs. Likewise, there have been many recent releases of mobile manipulators: autonomous mobile robots (AMRs) with a single-arm manipulator on board the vehicle.

The RB-Y1 is a wheeled robot base with a humanoid-type double-arm robot on top. Rainbow Robotics' robot uses that base to maneuver through its environment and position the arms for manipulation tasks. The company calls this configuration a "bimanual manipulator."

To perform varied and complex tasks, each of the RB-Y1's two arms has seven degrees of freedom, and both are mounted on a single six-axis torso that can move the body. With this kinematic configuration, the body can travel more than 50 cm (19.7 in.) vertically, making it possible to perform tasks at various heights.

The maximum driving speed of the RB-Y1 is 2,500 mm/s (5.6 mph), and the company claims that the robot can accelerate quickly and turn at higher speeds by leaning the body into the turn. To avoid toppling while in motion, the center of gravity can be safely controlled by dynamically changing the height of the body.
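
A back-of-envelope check shows why leaning into a turn helps. The sketch below is basic physics, not Rainbow Robotics' controller, and the turn radius is an assumed number: the lean angle that aligns the combined gravitational and centripetal acceleration with the body axis grows with speed and shrinks with turn radius.

```python
import math

def lean_angle_deg(speed_mps: float, turn_radius_m: float, g: float = 9.81) -> float:
    """Lean angle that aligns combined gravity and centripetal acceleration
    with the body axis, the same reason a cyclist leans into a corner."""
    return math.degrees(math.atan2(speed_mps ** 2 / turn_radius_m, g))

# At the RB-Y1's 2.5 m/s top speed on an assumed 2 m turn radius: ~17.7 degrees.
print(lean_angle_deg(2.5, 2.0))
```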

The dimensions of the robots are 600 x 690 x 1,400 mm (23.6 x 27.2 x 55.1 in.), and the unit weighs 131 kg (288.8 lb.). The manipulators can each lift 3 kg (6.61 lb.).

At press time, there were not many details about the robot's ability to function using artificial intelligence, and one early video showed it working via teleoperation. It's likely that the demonstrations in the video below are with remote operators.

However, Rainbow Robotics clearly has the goal of making its robot fully autonomous in the future, as more research, development, training, and simulation are completed.

"These days, when generative AI such as ChatGPT and Figure is a hot topic in the robot industry, we have developed a bimanual mobile manipulator in line with the AI era," stated a company spokesperson. "We hope that the platform will overcome the limitations of existing industrial robots and be used in many industrial sites."


Comau and Leonardo Want to Elevate Aeronautical Structure Inspection with Cognitive Robotics – DirectIndustry e-Magazine

Robotics company Comau and aerospace company Leonardo are currently testing a self-adaptive robotic solution to enable autonomous inspection of helicopter blades. This could enhance quality inspections and offer greater flexibility without sacrificing precision or repeatability. At a time when the aerospace industry demands faster processes, better control, and higher quality, it requires a new generation of advanced automation. We contacted Simone Panicucci, Head of Cognitive Robotics at Comau, to learn more about this solution and how it could benefit the aerospace industry.

The increasing demand for faster processes in the aerospace industry requires automating complex processes that, until recently, could only be performed manually. When it comes to testing essential structures such as helicopter blades, the potential benefits of automation increase exponentially. Robotic inspection ensures precision and efficiency. It also ensures standardization and full compliance with the testing process by objectively executing each assigned task.

To meet the industry's needs, Comau and Leonardo have been testing an intelligent inspection solution based on Comau's cognitive robotics on-site in Anagni, Italy, to inspect helicopter blades measuring up to 7 meters.

The solution relies on a combination of self-adaptive robotics, advanced vision systems, and artificial intelligence. Comau's intelligent robot can autonomously perform hammer tests and multispectral surface inspections on the entire nonlinear blade to measure and verify structural integrity, with a granularity exceeding thousands of points.

The robot perceives and comprehends its environment, makes calculated decisions, and intuitively optimizes the entire inspection process.

They will then test the system on another site to enhance MRO (maintenance, repair, and overhaul) service capabilities.

We contacted Simone Panicucci, Head of Cognitive Robotics at Comau, who gave us more details about this collaboration.

Simone Panicucci: The collaboration grew out of Leonardo's need to ensure advanced autonomous inspection of highly critical aviation infrastructure using cognitive robotics. The two companies are collaborating to develop and test a powerful, self-adaptive robotic solution to autonomously inspect helicopter blades up to 7 meters in length. Aerospace is not a sector that is used to automation yet. The high variability and low volumes act as constraints on deep automation adoption. Cognitive robotics solutions are thus a key enabler, providing the benefits of automation (such as process engineering, repeatability, and traceability) even with heterogeneous products and unstructured environments, and Comau is leading the creation of AI-based, custom robotic solutions.

Simone Panicucci: The solution developed is a self-adaptive and efficient machine to inspect really large helicopter blades. It includes a visual inspection as well as a tapping test, which consists of physically stimulating the blade surface with a small, purpose-built hammer and recognizing, from the resulting sound, whether there is any issue in the blade's internal structure. Jointly, the two inspections require testing tens of thousands of points across the blade.
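
One plausible way to score such a tap test, sketched below purely for illustration (this is not Comau's algorithm), is to compare the frequency signature of each hammer-tap recording against a known-good baseline, since internal defects tend to shift and damp the resonant peaks.

```python
import numpy as np

def spectral_signature(tap: np.ndarray, n_bands: int = 32) -> np.ndarray:
    """Band-averaged, normalized magnitude spectrum of one tap recording."""
    mag = np.abs(np.fft.rfft(tap * np.hanning(len(tap))))
    sig = np.array([band.mean() for band in np.array_split(mag, n_bands)])
    return sig / (np.linalg.norm(sig) + 1e-12)

def anomaly_score(tap: np.ndarray, healthy_reference: np.ndarray) -> float:
    """0 means identical to the healthy reference; larger is more suspicious."""
    return float(np.linalg.norm(
        spectral_signature(tap) - spectral_signature(healthy_reference)))
```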

The robot can sense the environment and locate the blade in space with an accuracy below 10 mm. It can also detect potential objects in the scene that the robot may collide with. And it can calculate, at run time, an optimal and collision-free path plan to complete the task.

Simone Panicucci: The solution is equipped with a 3D camera whose input is processed by a vision system to merge multiple acquisitions, post-process the acquired scene, and then localize both the helicopter blade and potential obstacles.
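
A common building block for this kind of localization is iterative closest point (ICP) registration of the scanned point cloud against a reference model of the blade. The sketch below uses Open3D's stock ICP as a stand-in; Comau's actual pipeline, which merges multiple acquisitions, is certainly more elaborate, and the 2 cm search radius is an assumption.

```python
import numpy as np
import open3d as o3d

def localize_blade(scan: o3d.geometry.PointCloud,
                   blade_model: o3d.geometry.PointCloud) -> np.ndarray:
    """Estimate the rigid transform placing the blade model in the scan,
    i.e. where the blade actually sits in the robot's workspace."""
    result = o3d.pipelines.registration.registration_icp(
        blade_model, scan,
        max_correspondence_distance=0.02,  # 2 cm search radius (assumed)
        init=np.eye(4),  # in practice, a coarse global guess would go here
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return result.transformation  # 4x4 homogeneous pose of the blade
```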

Simone Panicucci: All the movements performed by the robot are calculated once the scene has been sensed, which means that no robot movement has been calculated offline. Additional sensors have been added to the robot flange as an external and independent system to avoid damaging the blade.

Simone Panicucci: Today, helicopter blade inspection is done manually. The solution offers greater accuracy and efficiency, ensuring standardization and full compliance with the testing process by objectively completing each assigned task. Operators now program the machine, codifying their experience through a simplified user interface. The machine can work for hours without intervention, providing an accurate report summarizing critical points at the end.

Simone Panicucci: The flexibility comes from the fact that the solution is able to deal with different helicopter blade models and, potentially, even different helicopter components. In addition, accuracy and repeatability are typical automation takeaways, now further improved thanks to the adoption of the vision system. Quality increases because the operator can now focus on the activity where he or she brings the most value, defect detection and confirmation, instead of mechanically performing the inspection.

Simone Panicucci: Operator knowledge is always at the center. Leonardo personnel keep the final word on certifying the helicopter blade's status, as well as on any point inspected. The automation solution aims to relieve operators of the repetitive task of manually inspecting tens of thousands of points on the helicopter surface. After hours of signal recording, the solution generates a comprehensive report summarizing the results of AI-based anomaly detection. The industrialized solution ensures repeatability, reliability, and traceability, covering and accurately performing the task.

Simone Panicucci: The solution is CE-certified and incorporates both physical and virtual safety measures. Physical barriers and safety lasers create a secure perimeter, halting operations instantly in the event of unexpected human intrusion. Furthermore, the solution ensures safe loading and unloading of helicopter blades and verifies proper positioning by requiring operators to activate safety keys from a distance of approximately 10 meters.

Simone Panicucci: This solution demonstrates that product heterogeneity and low volumes, typical of the aerospace sector, no longer constrain automation adoption. Comau's cognitive robotics approach enables the delivery of effectiveness, quality, and repeatability even in unstructured environments and with low volumes. It easily adapts to different helicopter models and blades. Executing a process like the tapping test necessitated defining requirements and engineering the process, which involved specifying the material of the tapping tool as well as the angle and force to apply. Additionally, all labeled data, whether automatic or manual, are now tracked and recorded, facilitating the creation of an extensive knowledge base to train deep learning models.

Simone Panicucci: Leonardo has been conducting tests on this solution as part of a technology demonstration. This technology holds potential benefits for both Leonardo and its customers. It could standardize inspection processes globally and may be offered or deployed to customers with numerous helicopters requiring inspection.

Simone Panicucci: The specific solution could obviously be extended to other inspections in the helicopter sector as well as in avionics. But it is worth mentioning that, from the technology point of view, the software pipeline, as well as the localization and optimal path planning, may be easily applicable to other inspection activities, to manufacturing, or even to continuous processes, like welding.

Simone Panicucci: The next steps involve thorough testing of the automation solution at another Leonardo Helicopters plant. This process will contribute to ongoing improvements in the knowledge base and, consequently, the deep learning algorithm for anomaly recognition.


This Week’s Awesome Tech Stories From Around the Web (Through April 6) – Singularity Hub

"To Build a Better AI Supercomputer, Let There Be Light" Will Knight | Wired "Lightmatter wants to directly connect hundreds of thousands or even millions of GPUs, those silicon chips that are crucial to AI training, using optical links. Reducing the conversion bottleneck should allow data to move between chips at much higher speeds than is possible today, potentially enabling distributed AI supercomputers of extraordinary scale."

"Apple Has Been Secretly Building Home Robots That Could End up as a New Product Line, Report Says" Aaron Mok | Business Insider "Apple is in the early stages of looking into making home robots, a move that appears to be an effort to create its next big thing after it killed its self-driving car project earlier this year, sources familiar with the matter told Bloomberg. Engineers are looking into developing a robot that could follow users around their houses, Bloomberg reported. They're also exploring a tabletop at-home device that uses robotics to rotate the display, a more advanced project than the mobile robot."

"A Tantalizing Hint That Astronomers Got Dark Energy All Wrong" Dennis Overbye | The New York Times "On Thursday, astronomers who are conducting what they describe as the biggest and most precise survey yet of the history of the universe announced that they might have discovered a major flaw in their understanding of dark energy, the mysterious force that is speeding up the expansion of the cosmos. Dark energy was assumed to be a constant force in the universe, both currently and throughout cosmic history. But the new data suggest that it may be more changeable, growing stronger or weaker over time, reversing or even fading away."

"How ASML Took Over the Chipmaking Chessboard" Mat Honan and James O'Donnell | MIT Technology Review "When asked what he thought might eventually cause Moore's Law to finally stall out, van den Brink rejected the premise entirely. 'There's no reason to believe this will stop. You won't get the answer from me where it will end,' he said. 'It will end when we're running out of ideas where the value we create with all this will not balance with the cost it will take. Then it will end. And not by the lack of ideas.'"

"The Very First Jet Suit Grand Prix Takes Off in Dubai" Mike Hanlon | New Atlas "A new sport kicked off this month when the first-ever jet-suit race was held in Dubai. Each racer wore an array of seven 130-hp jet engines (two on each arm and three in the backpack, for a total 1,050 hp) that are controlled by hand-throttles. After that, the pilots use the three thrust vectors to gain lift, move forward and try to stay above ground level while negotiating the course, faster than anyone else."

"Toyota's Bubble-ized Humanoid Grasps With Its Whole Body" Evan Ackerman | IEEE Spectrum "Many of those motions look very human-like, because this is how humans manipulate things. Not to throw too much shade at all those humanoid warehouse robots, but as is pointed out in the video above, using just our hands outstretched in front of us to lift things is not how humans do it, because using other parts of our bodies to provide extra support makes lifting easier."

"A Brief History of the Future Offers a Hopeful Antidote to Cynical Tech Takes" Devin Coldewey | TechCrunch "The future, he said, isn't just what a Silicon Valley publicist tells you, or what Big Dystopia warns you of, or even what a TechCrunch writer predicts. In the six-episode series, he talks with dozens of individuals, companies and communities about how they're working to improve and secure a future they may never see. From mushroom leather to ocean cleanup to death doulas, Wallach finds people who see the same scary future we do but are choosing to do something about it, even if that thing seems hopelessly small or naïve."

"This AI Startup Wants You to Talk to Houses, Cars, and Factories" Steven Levy | Wired "We've all been astonished at how chatbots seem to understand the world. But what if they were truly connected to the real world? What if the dataset behind the chat interface was physical reality itself, captured in real time by interpreting the input of billions of sensors sprinkled around the globe? That's the idea behind Archetype AI, an ambitious startup launching today. As cofounder and CEO Ivan Poupyrev puts it, 'Think of ChatGPT, but for physical reality.'"

"How One Tech Skeptic Decided AI Might Benefit the Middle Class" Steve Lohr | The New York Times "David Autor seems an unlikely AI optimist. The labor economist at the Massachusetts Institute of Technology is best known for his in-depth studies showing how much technology and trade have eroded the incomes of millions of American workers over the years. But Mr. Autor is now making the case that the new wave of technology, generative artificial intelligence, which can produce hyper-realistic images and video and convincingly imitate humans' voices and writing, could reverse that trend."

Image Credit: Harole Ethan / Unsplash


Universities build their own ChatGPT-like AI tools – Inside Higher Ed

When ChatGPT debuted in November 2022, Ravi Pendse knew fast action was needed. While the University of Michigan formed an advisory group to explore ChatGPT's impact on teaching and learning, Pendse, UMich's chief information officer, took it further.

Months later, before the fall 2023 semester, the university launched U-M GPT, a homebuilt generative AI tool that now boasts between 14,000 and 16,000 daily users.

"A report is great, but if we could provide tools, that would be even better," Pendse said, noting that Michigan is "very concerned about equity. U-M GPT is all free; we wanted to even the playing field."

The University of Michigan is one of a small number of institutions that have created their own versions of ChatGPT for student and faculty use over the last year. Those include Harvard University, Washington University, the University of California, Irvine, and UC San Diego. The effort goes beyond jumping on the artificial intelligence (AI) bandwagon; for the universities, it's a way to overcome concerns about equity, privacy and intellectual property rights.

Students can use OpenAI's ChatGPT and similar tools for everything from writing assistance to answering homework questions. The newest version of ChatGPT costs $20 per month, while older versions remain free. The newer models have more up-to-date information, which could give students who can afford them a leg up.

That fee, no matter how small, creates a gap that is unfair to students, said Tom Andriola, UC Irvine's chief digital officer.

"Do we think it's right, in who we are as an organization, for some students to pay $20 a month to get access to the best [AI] models while others have access to lesser capabilities?" Andriola said. "Principally, it pushes us on an equity scale where AI has to be for all. We need to talk about AI for good of course, but let's talk about not creating the next version of the digital divide."

UC Irvine publicly announced its own AI chatbot, dubbed ZotGPT, on Monday. Deployed in various capacities since October 2023, it remains in testing and is only available to staff and faculty. The tool can help them with everything from creating class syllabi to writing code.

Offering their own version of ChatGPT allows faculty and staff to use the technology without the concerns that come with OpenAI's version, Andriola said.

"When we saw generative AI, we said, 'We need to get people learning this as fast as possible, with as many people playing with this as we could,'" he said. "[ZotGPT] lets people overcome privacy concerns, intellectual property concerns, and gives them an opportunity of, 'How can I use this to be a better version of myself tomorrow?'"

That issue of intellectual property has been a major concern and a driver behind universities creating their own AI tools. OpenAI has not been transparent in how it trains ChatGPT, leaving many worried about research and potential privacy violations.

Albert Lai, deputy faculty lead for digital transformation at Washington University, spearheaded the launch of WashU GPT last year.

WashU, along with UC Irvine and the University of Michigan, built their tools using Microsoft's Azure platform, which allows users to integrate the work into their institutions' applications. The platform uses open-source software available for free; in contrast, proprietary platforms like OpenAI's ChatGPT carry an upfront fee.
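
The general pattern behind these homebuilt tools is a thin, institution-hosted wrapper around the school's own Azure OpenAI deployment, so prompts and responses stay within the institution's tenant. Below is a minimal sketch of that pattern using the standard openai Python SDK; the endpoint and deployment names are made up, and this is not any university's actual code.

```python
import os
from openai import AzureOpenAI  # openai>=1.0 Python SDK

client = AzureOpenAI(
    azure_endpoint="https://example-university.openai.azure.com",  # hypothetical
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def campus_gpt(prompt: str) -> str:
    """Answer a prompt through the institution's own model deployment."""
    resp = client.chat.completions.create(
        model="campus-gpt",  # the institution's deployment name (made up)
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```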

A look at WashU GPT, a version of Washington University's own generative AI platform that promises more privacy and IP security than ChatGPT.

Provided/Washington University

There are some downsides when universities train their own models. Because a university's GPT is based on the research, tests and lectures put in by the institution, it may not be as up-to-date as the commercial ChatGPT.

"But that's a price we agreed to pay; we thought about privacy, versus what we're willing to give up," Lai said. "And we felt the value in maintaining privacy was higher in our community."

To keep data private within a university's GPT, Lai encouraged other institutions to make sure any institutional agreement with Microsoft includes data protection for IP. UC Irvine and the University of Michigan also have agreements with Microsoft that any information put into their GPT models will stay within the university and not become publicly available.

"We've developed a platform on top of [Microsoft's] foundational models to provide faculty comfort that their IP is protected," Pendse said. "Any faculty member, including myself, would be very uncomfortable putting a lecture and exams in an OpenAI model (such as ChatGPT), because then it's out there for the world."


It remains to be seen whether more universities will build their own generative AI chatbots.

Consulting firm Ithaka S+R formed a 19-university task force in September, dubbed "Making AI Generative for Higher Education," to further study the use and rise of generative AI. The task force members include Princeton University, Carnegie Mellon University and the University of Chicago.

Lai and others encourage university IT officials to keep experimenting with what is publicly available; that experimentation can eventually morph into a university's own version of ChatGPT.

"I think more places do want to do it, and most places haven't figured out how to do it yet," he said. "But frankly, in my opinion, once you figure out the magic sauce, it's pretty straightforward."


ChatGPT use linked to sinking academic performance and memory loss – Yahoo News UK


Using AI software such as ChatGPT is linked to poorer academic performance, memory loss and increased procrastination, a study has shown.

The AI chatbot ChatGPT can generate convincing answers to simple text prompts, and is already used weekly by up to 32% of university students, according to research last year.

The new study found that university students who use ChatGPT to complete assignments find themselves in a vicious circle: they don't give themselves enough time to do their work, are forced to rely on ChatGPT, and, over time, lose some of their ability to remember facts.

The research was published in the International Journal of Educational Technology in Higher Education. Scientists conducted interviews with 494 students about their use of ChatGPT, with some admitting to being "addicted" to using the technology to complete assignments.

The researchers wrote: "Since ChatGPT can quickly respond to any questions asked by a user, students who excessively use ChatGPT may reduce their cognitive efforts to complete their academic tasks, resulting in poor memory. Over time, over-reliance on generative AI tools for academic tasks, instead of critical thinking and mental exertion, may damage memory retention, cognitive functioning, and critical thinking abilities."

In the interviews, the researchers were able to pinpoint problems experienced by students who habitually used ChatGPT to complete their assignments.

The researchers surveyed students three times to work out what sort of student is most likely to use ChatGPT, and what effects heavy users experienced.

The researchers then asked questions about the effects of using ChatGPT.

Study author Mohammed Abbas, from the National University of Computer and Emerging Sciences in Pakistan, told PsyPost: "My interest in this topic stemmed from the growing prevalence of generative artificial intelligence in academia and its potential impact on students.

Story continues

"For the last year, I observed an increasing, uncritical, reliance on generative AI tools among my students for various assignments and projects I assigned. This prompted me to delve deeper into understanding the underlying causes and consequences of its usage among them."

The study found that students who were results-focused were less likely to rely on AI tools to do tasks for them.

The research also found that students who relied on ChatGPT were not getting the full benefit of their education - and actually lost the ability to remember facts.

"Our findings suggested that excessive use of ChatGPT can have harmful effects on students personal and academic outcomes. Specifically, those students who frequently used ChatGPT were more likely to engage in procrastination than those who rarely used ChatGPT," Abbas said.

"Similarly, students who frequently used ChatGPT also reported memory loss. In the same vein, students who frequently used ChatGPT for their academic tasks had a poor grade average."

The researchers found that students who felt under pressure were more likely to turn to ChatGPT - but that this then led to worsening academic performance and further procrastination and memory loss.

The researchers suggest that academic institutions should be mindful that heavy workloads can drive students to use ChatGPT.

The researchers also said academics should warn students of the negative impact of using the software.

"Higher education institutions should emphasise the importance of efficient time management and workload distribution while assigning academic tasks and deadlines," they said.

"While ChatGPT may aid in managing heavy academic workloads under time constraints, students must be kept aware of the negative consequences of excessive ChatGPT usage."


Why is Elon Musk suing Open AI and Sam Altman? In a word: Microsoft. – Morningstar

By Jurica Dujmovic

Potential ramifications extend far beyond the courtroom

In a striking turn of events, Elon Musk, Tesla's (TSLA) CEO, has initiated legal action against OpenAI and its leadership, alleging that the organization he helped found has moved from its original altruistic mission toward a profit-driven approach, particularly after partnering with Microsoft (MSFT).

The lawsuit accentuates Musk's deep-seated concerns that OpenAI has deviated from its foundational manifesto of developing artificial general intelligence (AGI) for the betterment of humanity, choosing instead to prioritize financial gains. But is that really so, or is there something else at play?

Musk had been deeply involved with OpenAI since its inception in 2015, as his concerns about AI's potential risks and his vision of advancing AI in a way that benefits humanity aligned with OpenAI's original ethos as a nonprofit organization.

In 2018, however, Musk became disillusioned with OpenAI because, in his view, it no longer operated as a nonprofit and was building technology that took sides in political and social debates. The recent OpenAI drama that culminated in a series of significant changes to OpenAI's structure and ethos, as well as what can only be seen as Microsoft's power grab, seems to have sparked Musk's discontent.

To understand his reasoning, it helps to remember that Microsoft is a company with a long history of litigation. Over the years, Microsoft has faced numerous high-profile legal battles related to its market practices.

Here are some prominent cases to illustrate the issue:

-- In the United States v. Microsoft Corp. case, which began in 1998, the U.S. Department of Justice accused Microsoft of holding a monopolistic position in the PC operating-systems market and taking actions to crush threats to that monopoly. In April 2000, the case resulted in a verdict that Microsoft had engaged in monopolization and attempted monopolization in violation of the Sherman Antitrust Act.

-- In Europe, Microsoft has faced significant fines for abusing its dominant market position. In 2004, the European Commission fined Microsoft 497.2 million euros, the largest sum it had ever imposed on a single company at the time. In 2008, Microsoft was fined an additional 899 million euros for failing to comply with the 2004 antitrust order.

-- In 2013, the European Commission levied a 561 million euro fine against Microsoft for failing to comply with a 2009 settlement agreement to offer Windows users a choice of internet browsers instead of defaulting to Internet Explorer.

In light of these past litigations, it's much easier to understand why OpenAI's CEO Sam Altman's brief departure from the company and subsequent return late last year - which culminated in a significant shift in the organization's governance and its relationship with Microsoft - was the straw that likely broke Musk's back.

After Altman was reinstated, Microsoft solidified its influence over OpenAI by securing a permanent position on its board. Furthermore, the restructuring of OpenAI's board to include business-oriented members, rather than AI experts or ethicists, signaled a permanent shift in the organization's priorities and marked a pivotal turn toward a profit-driven model underpinned by corporate governance.

The consequences of this power grab are plain to see: Microsoft is already implementing the AI models designed by OpenAI across its own products, while none of the code is released to the public. These models also carry a specific political and ideological bias that makes them problematic from an ethical point of view. This, too, is an issue that cannot be addressed, due to the closed-source nature of AI models generated and shaped under Microsoft's watchful eye.

Musk's own ventures, like xAI and Neuralink, suggest he's still deeply invested in the AI space, albeit in a way he has more control over, presumably to ensure that the technology develops according to his vision for the future of humanity.

On the other hand, proponents of Microsoft's partnership with OpenAI emphasize its strategic and mutually beneficial aspects. Microsoft's $1 billion investment in OpenAI is viewed as a significant step in advancing artificial-intelligence technology, as it allows OpenAI to utilize Microsoft's Azure cloud services to train and run its AI software. Additionally, the collaboration is positioned as a way for Microsoft to stay competitive against other tech giants by integrating AI into its cloud services and developing more sophisticated AI models.

Proponents say Microsoft's involvement with OpenAI is a strategic business decision aimed at promoting Azure's AI capabilities and securing a leading position in the industry. The partnership is framed as a move to democratize AI technology while ensuring AI safety, which aligns with broader industry goals of responsible and ethical AI development. It is also seen as a way for OpenAI to access necessary resources and expertise to further its research, emphasizing the collaborative nature of the partnership rather than a mere financial transaction.

Hard truths and consequences

While many point out that Musk winning the case is extremely unlikely, it's still worth looking into the potential consequences. Such a verdict could mandate that OpenAI return to nonprofit status or open-source its technology, significantly impacting its business model, revenue generation and future collaborations. It could also affect Microsoft's investment in OpenAI, particularly if the court determines that the latter has strayed from its founding mission, influencing the tech giant's ability to protect its investment and realize expected returns.

The lawsuit's outcome might influence public and market perceptions of OpenAI and Microsoft, possibly affecting customer trust and market share, with Musk potentially seen as an advocate for ethical AI development. Additionally, the case could drive the direction of AI development, balancing between open-source and proprietary models, and possibly accelerating innovation while raising concerns about controlling and misusing advanced AI technologies.

The scrutiny from this lawsuit might lead to more cautious approaches in contractual relationships within the tech sector, focusing on partnerships and intellectual property. Furthermore, the case could draw regulatory attention, possibly leading to increased oversight or regulation of AI companies, particularly concerning transparency, data privacy and ethical considerations in AI development. While Musk's quest might seem like a longshot to some legal experts, the potential ramifications of this lawsuit extend far beyond the courtroom.




An automated solution for the aerospace industry – Engineer Live

Evaluating a complete automated solution for composite handling, assembly and inspection.

Composites have long been leveraged for aerospace applications to help with lightweighting, reinforcement and new part designs. Naturally, the demand for wholly automated solutions for composite handling, assembly and inspection has become a key focus for the industry, as it promises to lower the cost and speed up the process of aircraft manufacturing.

One company operating at the forefront of this area is Loop Technology, whose innovative composite automation and layup technologies, inspection and kitting systems are used by aerospace manufacturers across the globe. Using a combination of precision gantry, robotics, vision and automation, Loop's products are supporting several large-scale projects demanding tight tolerances and fast assembly times.

Ian Redman, Project Director at Loop Technology, discussed recent advances in high-rate composite deposition at Advanced Engineering last November. He explains: "We all know the benefits of composites; the challenge is actually getting these composite parts at the volume we require. So, yes, we can make composite parts, but without advanced automation we are never going to achieve the quality, repeatability and rate that is required. Loop Technology has developed a range of technologies to meet that challenge, and we have been working for a decade in this area on various R&D projects with industrial partners. We're now at a really exciting point where the maturity of these technological solutions is ready to deliver on the demands of today's projects."

Loop's composite products are modular, allowing the company to deliver a system tuned to the individual needs of a particular project or manufacturer. The company can design bespoke systems for preforming structures both large and small, such as wing skins, fan blades or small box structures. The gantry or robotic configuration of a system can be specified depending on factory size and layup preference, from full gantry systems and dual-robot FibreROLL layouts to small deposition cell and track gantry configurations.

The risks involved in composite handling are significant, as damage to or deformation of plies in any handling process cannot be tolerated in flight- and safety-critical aerospace engineering applications. To protect against this, Loop offers bespoke composite gripper designs that improve manufacturing cell throughput while maintaining industry quality standards. On the inspection side, Loop has developed systems capable of in-process monitoring and positional correction during composite layup, built to meet stringent quality standards.

When optimal ply utilisation is a priority, Loop can design, manufacture and install fully integrated composite kitting systems. These systems offer a comprehensive automated composite ply handling and management solution starting from automated carbon fibre ply feeding to a cutting table, through to the fully kitted stage where composite plies can be presented in prescribed order for immediate assembly.

Another part of Loop's automated composites handling solution is trimming: the company can deliver high-precision ultrasonic cutting of composite materials, from stacks of dry fibre to 3D preforms. By combining the power of CAD and CAM software with the flexibility of six-axis robots, Loop can offer bespoke part trimming while also integrating various auxiliary processes that may be required, such as torque monitoring and particulate extraction.


Generative AI, Free Speech, & Public Discourse: Why the Academy Must Step Forward | TechPolicy.Press – Tech Policy Press

On Tuesday, Columbia Engineering and the Knight First Amendment Institute at Columbia University co-hosted a well-attended symposium, "Generative AI, Free Speech, & Public Discourse." The event combined presentations about technical research relevant to the subject with addresses and panels discussing the implications of AI for democracy and civil society.

While a range of topics was covered across three keynotes, a series of seed funding presentations, and two panels (one on empirical and technological questions, a second on legal and philosophical questions), a number of notable recurring themes emerged, some by design and others more organically.

This event was part of one partnership amongst others in an effort that Columbia University president Minouche Shafik and engineering school dean Shih-Fu Chang referred to as "AI+x," where the school is seeking to engage with various other parts of the university outside of computer engineering to better explore the potential impacts of current developments in artificial intelligence. (This event was also a part of Columbia's Dialogue Across Difference initiative, which was established as part of a response to campus conflict around the Israel-Gaza war.) From its founding, the Knight Institute has focused on how new technologies affect democracy, requiring collaboration with experts in those technologies.

Speakers on the first panel highlighted sectors where they have already seen potential for positive societal impact of AI, outside of the speech issues that the symposium was focused on. These included climate science, drug discovery, social work, and creative writing. Columbia engineering professor Carl Vondrick suggested that current large language models are optimized for social media and search, a legacy of their creation by corporations that focus on these domains, and the panelists noted that only by working directly with diverse groups can their needs for more customized models be understood. Princeton researcher Arvind Narayanan proposed that domain experts play a role in evaluating models as, in his opinion, the current approach of benchmarking using standardized tests is seriously flawed.

During the conversation between Jameel Jaffer, Director of the Knight Institute, and Harvard Kennedy School security technologist Bruce Schneier, general principles for successful interdisciplinary work were discussed, like humility, curiosity and listening to each other; gathering early in the process; making sure everyone is taken seriously; and developing a shared vocabulary to communicate across technical, legal, and other domains. Jaffer recalled that some proposals have a lot more credibility in the eyes of policymakers when they are interdisciplinary. Cornell Tech law professor James Grimmelmann, who specializes in helping lawyers and technologists understand each other, remarked that these two groups are particularly well equipped to work together, once they can figure out what the other needs to know.

President Shafik declared that if a responsible approach to AI's impact on society requires a "+x," Columbia (surely along with other large research universities) has lots of x's. This positions universities as ideal voices for the public good, balancing the influence of the tech industry that is developing and controlling the new generation of large language models.

Stanford's Tatsunori Hashimoto, who presented his work on watermarking generative AI text outputs, emphasized that the vendors of these models are secretive, and so the only way to develop a public technical understanding of them is to build them within the academy, and take on the same tasks as the commercial engineers, like working on alignment fine-tuning and performing independent evaluations. One relevant and striking finding by his group was that the reinforcement learning from human feedback (RLHF) process tends to push models towards the more liberal opinions common amongst highly educated Americans.
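To make the watermarking idea concrete, here is a simplified sketch of "green list" detection in the spirit of recent academic watermarking work: a hash of the previous token marks a fraction of the vocabulary as green at each step, and text whose generator quietly favored green tokens scores far above chance. The hashing scheme, parameters, and statistics are simplifying assumptions for illustration, not Hashimoto's actual method.

```python
# Sketch of statistical watermark detection for LLM text.
# Assumes token IDs are available; all parameters are illustrative.
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of vocabulary marked "green" at each step

def is_green(prev_token: int, token: int) -> bool:
    # The previous token seeds a hash, so the green list changes every step.
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def detection_z_score(token_ids: list[int]) -> float:
    # Count green tokens; a watermarking generator boosted green logits,
    # so its green count sits many standard deviations above chance.
    n = len(token_ids) - 1
    assert n > 0, "need at least two tokens"
    hits = sum(is_green(p, t) for p, t in zip(token_ids, token_ids[1:]))
    mean = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - mean) / std

# A z-score above roughly 4 is strong evidence of the watermark;
# ordinary human-written text should hover near 0.
```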

The engineering panel developed a wish list of infrastructure resources that universities (and others outside the tech industry) need to be able to study how AI can be used to benefit and not harm society, such as compute resources, common datasets, separate syntax models so that vetted content datasets can be added for specific purposes, and student access to models. In the second panel, Camille François, a lecturer at the Columbia School of International and Public Affairs and presently a senior director of trust & safety at Niantic Labs, highlighted the importance of having spaces, presumably including university events such as the one at Columbia, to discuss how AI developments are impacting civil discourse. On a critical note, Knight Institute executive director Katy Glenn Bass also pointed out that universities often do not value cross-disciplinary work to the same degree as typical research, which is an obstacle to progress in this area, given how essential collaboration across disciplines is.

Proposals for regulation were made throughout the symposium, but the keynote by Bruce Schneier was itself an argument for government intervention. Schneier's thesis was, in brief, that corporation-controlled development of generative AI has the potential to undermine the trust that society needs to thrive, as chatbot assistants and other AI systems may present as interpersonally trustworthy, but in reality are essentially designed to drive profits for corporations. To restore trust, it is incumbent on governments to impose safety regulations, much as they do for airlines. He proposed a regulatory agency for the AI and robotics industry, and the development of public AI models, created under political accountability and available for academic and new for-profit uses, enabling a freer market for AI innovation.

A couple of cautions were also voiced: Narayanan warned that the "Liar's Dividend" could be weaponized by authoritarian governments to crack down on free expression, and François noted a focus on watermarking and deepfakes at the expense of unintended harms, such as chatbots giving citizens incorrect voting information.

There was surprisingly little discussion during the symposium of how generative AI specifically influences public discourse, which Jaffer defined in his introductory statement as "acts of speaking and listening that are part of the process of democracy and self-governance." Rather, much of the conversation was about online speech generally, and how it can be influenced by this technology. As such, an earlier focus of online speech debates, social media, came up a number of times, with clear parallels in terms of concern over corporate control and a need for transparency.

Hashimoto referenced the notion that social media causes feedback loops that greatly amplify certain opinions. LLMs can develop data feedback loops which may cause a similar phenomenon that is very difficult to identify and unpick without substantial research. As chatbots become more personalized, suggested Vondrick, they may also create feedback on an individual user level, directing them to more and more of the type of content that they have already expressed an affinity for, akin to the social media filter bubble hypothesis.

Another link to social media was drawn in the last panel, during which both Grimmelmann and François drew on their expertise in content moderation. They agreed that the most present danger to discourse from generative AI is inauthentic content and behavior overwhelming the platforms that we rely on, and worried that we may not yet have the tools and infrastructure to counter it. (François described a key tension between the "Musk effect," pushing disinvestment in content moderation, and the "Brussels effect," encouraging a ramping up of on-platform enforcement via the DSA.) At the same time, trust and safety approaches like red-teaming and content policy development are proving key to developing LLMs responsibly. The correct lesson to draw from the failures to regulate social media, proposed Grimmelmann, was the danger of giving up on antitrust enforcement, which could be of great value when current AI foundation models are developed and controlled by a few (and in several cases the same) corporations.

One final theme was a framing of the current moment as one of transition. Even though we are grappling with how to adapt to realistic, readily available synthetic content at scale, there will be a point in the future, perhaps even for today's young children, when this will be intuitively understood and accounted for, or at least when media literacy education or tools (like watermarking) will have caught up.

Several speakers referenced prior media revolutions. Narayanan was one of several who discussed the printing press, pointing out that even this was seen as a crisis of authority: no longer could the written word be assumed to be trusted. Wikipedia was cited by Columbia Engineering professor Kathy McKeown as an example of media that was initially seen as untrustworthy, but whose benefits, shortcomings, and suitable usage are now commonly understood. François noted that use of generative AI is far from binary and that we have not yet developed good frameworks to evaluate the range of applications. Grimmelmann mentioned both Wikipedia and the printing press as examples of technologies where no one could have accurately predicted how things would shake out in the end.

As the Knight Institute's Glenn Bass stated explicitly, we should not assume that generative AI is harder to work through than previous media crises, or that we are worse equipped to deal with it. However, two speakers flagged that the tech industry should not be given free rein: USC Annenberg's Mike Ananny warned that those with vested interests may attempt to prematurely push for stabilization and closure, and we should treat this with suspicion; and Princeton's Narayanan noted that this technology is producing a temporary societal upheaval and that its costs should be distributed fairly. Returning to perhaps the dominant takeaways from the event, these comments again implied a role for the academy and for the government in guiding the development of, adoption of, and adaptation to the emerging generation of generative AI.


Supreme Court to hear landmark case on social media, free speech – University of Southern California

Today, the U.S. Supreme Court will hear oral arguments in a pair of cases that could fundamentally change how social media platforms moderate content online. The justices will consider the constitutionality of laws introduced by Texas and Florida targeting what they see as the censorship of conservative viewpoints on social media platforms.

The central issue is whether platforms like Facebook and X should have sole discretion over what content is permitted on their platforms. A decision is expected by June. USC experts are available to discuss.

"Depending on the ruling, companies may face stricter regulations or be allowed more autonomy in controlling their online presence. Tighter restrictions would require marketers to exercise greater caution in content creation and distribution, prioritizing transparency and adherence to guidelines to avoid legal repercussions. Alternatively, a ruling in favor of greater moderation powers could potentially raise consumer concerns about censorship and brand authenticity," said Kristen Schiele, an associate professor of clinical marketing at the USC Marshall School of Business.

"Regardless of the verdict, companies will need to adapt their strategies to align with evolving legal standards and consumer expectations in the digital landscape. Stricter regulations will require a more thorough screening of content to ensure compliance. Marketers may need to invest more resources to understand and adhere to the evolving legislation, which would lead to shifts in budget allocation and strategy development. In response, the industry will most likely see new content moderation technologies and platforms emerge to help companies navigate legal challenges and still create effective marketing campaigns," she said.

Erin Miller is an expert on theories of speech and free speech rights, and especially their application to mass media. She also writes on issues of moral and criminal responsibility. Her teaching areas include First Amendment theory and criminal procedure. Miller is an assistant professor of law at the USC Gould School of Law.

Contact: emiller@law.usc.edu

###

Jef Pearlman is a clinical associate professor of law and director of the Intellectual Property & Technology Law Clinic at the USC Gould School of Law.

Contact: jef@law.usc.edu

###

Karen North is a recognized expert in the field of digital and social media, with interests spanning personal and corporate brand building, digital election meddling, reputation management, product development, and safety and privacy online. North is a clinical professor of communication at the USC Annenberg School for Communication and Journalism.

Contact: knorth@usc.edu

###

Wendy Wood is an expert in the nature of habits. Wood co-authored a study exploring how fake news spreads on social media, which found that platforms, more than individual users, have a larger role to play in stopping the spread of misinformation online.

Contact: wendy.wood@usc.edu

###

Emilio Ferrara is an expert in computational social sciences who studies socio-technical systems and information networks to unveil the communication dynamics that govern our world. Ferrara is a professor of computer science and communication at the USC Viterbi School of Engineering and USC Annenberg School for Communication and Journalism.

Contact: emiliofe@usc.edu

###



Free Speech or Hate Speech? | GW Today | The George Washington University – GW Today

What are the free speech rights of university students? That was the first question posed by moderator Jeffrey Rosen, GW Law professor and president of the National Constitution Center, to a panel of George Washington University faculty experts on the First Amendment.

The webinar, "Free Speech v. Hate Speech: First Amendment Scholars Discuss Where to Draw the Line in the Context of Higher Education," was held as part of the university's plan for strengthening the GW community in challenging times, with the goal of fostering civil conversations about complex issues and emphasizing university policies.

The incoming inaugural Burchfield Professor of First Amendment and Free Speech Law, Mary-Rose Papandrea, began by noting that the First Amendment applies to public and not private universities, but private universities often look to the First Amendment principles for guidance. Under the First Amendment, she explained, some categories of speech receive no First Amendment protection, such as incitement of unlawful conduct, threats of violence, or giving material support to terrorists. But offensive speech and bad words are not carved out from the First Amendment. In a public university setting, however, there is some leeway for penalizing speech that would be otherwise protected. She suggested classrooms provide the best example of this.

"When I ask a student to tell me the holding of a case, I actually want the holding of the case, and there is a wrong answer," Papandrea said. "And if the student doesn't give me the correct answer, that will result in a lower grade in the class. Outside in the town square you can engage in false speech, incorrect speech, or misrepresentations and cannot be, as a general matter, punished by the government."

Most of the tensions surrounding free speech on campuses today, she added, arise when universities attempt to regulate the speech of faculty and students outside of the classroom.

"Universities are the quintessential marketplace of ideas," Papandrea said, "and we should be really concerned when the university starts making viewpoint-based speech restrictions outside of the classroom."

First Amendment: Does everything go?

In the view of Mary Anne Franks, Eugene L. and Barbara A. Bernard Professor in Intellectual Property, Technology and Civil Rights Law, free speech issues are clouded by unequal power relations, often resulting in protection of reckless speech for the majority but not for minorities. Franks proposes an alternative paradigm encouraging what she describes as "fearless speech."

"If we really want to talk about free speech, we actually need to get away from the First Amendment. I mean the kind of popularized version of the First Amendment which says everything goes, and you can never have any kind of intervention," Franks said.

People operating under this misconception, she added, argue that any kind of devaluation or nonplatforming constitutes censorship. That idea, she said, is pernicious.

"When we think about what the First Amendment actually does, it's not really telling us anything about free speech," Franks said. "It's telling us about what the government can't do in certain contexts. And that's really useful to know, because the government has a lot of power that no individual has and because the kinds of measures it can take against you include the loss of your liberty. But I don't know that it's such a good model for us as a private university. How much are we like a government? What we could be doing instead, and what I think successful universities do when they want to be marketplaces of ideas or spaces for intellectual, robust debate, is set standards. What are the good ideas? Whether an idea is controversial or noncontroversial is not the point."

Instead, Franks said, ideas should be well informed and argued eloquently. She argues in favor of a conscious curation of the best ideas that reflect the universitys values, expressed as persuasively as possible without threats of force or ad hominem attacks.

"What is the kind of speech that a university could uniquely try to foster?" she asked. "What kind of space could it foster to become a forum where really difficult ideas get aired out in a way that is physically safe but also sophisticated? I'm suggesting that we move toward fearless speech and critiques of current power structures, that we take notice of the fact that reality is a certain way. There are certain sensitivities to race and gender and class that we really need to have on our radar, if we want to make sure that people within the university space can speak equally."

Free speech at a private university

Dawn Nunziato, Pedas Family Professor of IP and Technology Law, agreed that the First Amendment framework is not necessarily the right one for every context.

"At a private university like GW, we have the autonomy and the freedom and the duty to decide what kind of community we want to be," Nunziato said, "and within certain bounds, what types of speech we want to protect and to not protect. Our speech policies are not governed by the First Amendment. So we don't need to protect hate speech in the same way that the First Amendment protects hate speech. We could draw the line very differently. And there are reasons why we should, and we should be very thoughtful about how we draw the line. We may choose to value inclusivity and belonging over the unfettered marketplace of ideas."

Under the Civil Rights Act of 1964, Nunziato noted, GW has a responsibility to provide an educational environment free of discrimination.

Robust discussion and respectful listening

The panels discussion touched on the recent congressional hearings at which the presidents of three elite universities were criticized for saying that whether speech could be considered hate speech depends on context.

After pointing out that she didn't view it as incorrect to say that the answer to questions of free speech v. hate speech can depend on context, Papandrea noted examples of speech that should be protected, such as an antisemitic line spoken by a character in a play meant to condemn antisemitism. The same line spoken by a student marching across campus could be viewed as creating a hostile environment.

Franks, too, was sympathetic to the trio of university presidents, who may have been reacting to the charge that universities are a woke paradise for snowflakes who require trigger warnings.

"The most upsetting thing about the spectacle is not any of those presidents' answers," Franks said. "It was the fact that the spectacle was happening at all, a real invocation and revitalization of a McCarthyesque kind of moment, with legislators who have made it clear that antisemitism and white supremacy are things that they either don't have a problem with or actively support." It was "a really grotesque spectacle," she added, "a bad faith attempt to attack diversity."

If we object to the First Amendment's protection of vile speech in the public square, Nunziato said, we take that up with the Supreme Court, which defines the First Amendment's protections. But whether vile speech should be restricted in the university environment is a different question, she added.

"Balancing robust, sometimes caustic and heated discussion on issues of public importance against the legal obligations that we have to protect our community members from discriminatory harassment," Nunziato said, "is an important part of what we do as a university."

Being part of a university community, Nunziato said, presents a unique opportunity to interact more thoughtfully than people do on social media.

"Our University Yard and the quad are spaces where there may be protesters and counter-protesters, but we can be there together," Nunziato said, "and engage in speech and counterspeech, unlike in some of the online environments where we have egregious problems of information silos and people going down rabbit holes. In the university environment, we're all on our phones and on social media, but we're also in spaces where we can engage with one another. Maybe we're raising our voices, but we can listen to one another. One of the principles in our code of conduct is that members of the university community are urged to hear all sides of controversial issues."

In closing remarks, Rosen quoted Supreme Court Justice Louis Brandeis, who argued that the correct remedy for harmful speech is "more speech, not enforced silence," and that "only an emergency can justify repression."

The concluding webinar, Rosen said, was a model of the kind of robust discussion and respectful listening that Brandeis advocated.


Why the Odysseus Moon Landing Is So Important – TIME

Early this week, Facebook provided me with a sweet piece of serendipity when it served up a picture of the late Gene Cernan. I had taken and posted the picture in 2014, when Cernan, the last man on the moon, was being feted at the premiere of the documentary about his life, titled, straightforwardly, The Last Man On the Moon. I had gotten to know Gene well over the course of many years of reporting on the space program, and was keenly saddened when we lost him to cancer three years later.

But this week, on Feb. 22, Cernan made news in a bank-shot sort of way, when the Odysseus spacecraft touched down near the south lunar pole, marking the first time the U.S. had soft-landed metal on the moon since Cernan feathered his lunar module Challenger down to the surface of the Taurus-Littrow Valley on Dec. 11, 1972. The networks made much of that 52-year gulf in cosmic history, but Odysseus was significant for two other, more substantive reasons: it marked the first time a spacecraft built by a private company, not by a governmental space program, had managed a lunar landing, and it was the first time any ship had visited a spot so far in the moon's south, down in a region where ice is preserved in permanently shadowed craters. Those deposits could be harvested to serve as drinking water, breathable oxygen, and even rocket fuel by future lunar astronauts.

"Today, for the first time in more than a half century, the U.S. has returned to the moon," said NASA Administrator Bill Nelson in a livestream that accompanied the landing. "Today, for the first time in the history of humanity, a commercial company and an American company launched and led the voyage up there."

Nelson's enthusiasm was not misplaced. The six Apollo lunar landings might have been epochal events, but they were also abbreviated ones. The longest stay any of the crews logged on the surface was just three days, by Cernan and his lunar module pilot Harrison Schmitt. The shortest stay was less than 21 hours, by Neil Armstrong and Buzz Aldrin during the Apollo 11 mission, the first lunar landing, in 1969. That so-called "flags and footprints" model was fine for the days when the U.S. lunar program was mostly about doing some basic spelunking and, not for nothing, beating the much-feared Soviet Union at planting a flag in the lunar regolith.

But the 21st-century moon program is different. Ever since NASA established its Artemis program in 2017, the space agency has made it clear that the new era of exploration will be much more ambitious. The goal is in part for American astronauts to establish at least a semi-permanent presence on the moon, with a mini-space station known as Gateway positioned in lunar orbit, allowing crews to shuttle to and from the surface. NASA also plans to create a south pole habitat that the crews could call home. And all of this will be done by a much more diverse corps of astronauts, with women and persons of color joining the all-white, all-male list of astronauts who traveled to the moon the first time around.

There is, however, a catch: money. In the glory days of Apollo, NASA funding represented 4% of the total federal budget; now it's just 0.4%. That means taking the job of designing and building spacecraft off of the space agency's plate and outsourcing it to private industry, the way SpaceX now ferries crews to the International Space Station, charging NASA for the rides the way it charges satellite manufacturers and other private customers. The Commercial Crew Program, of which SpaceX is a part, was established in 2011, and has been a rousing success, so much so that, in 2018, NASA took things a step further, announcing the Commercial Lunar Payload Services (CLPS) program, similarly outsourcing the delivery of equipment that astronaut-settlers will need.

CLPS, however, stumbled out of the gate. On Jan. 8 of this year, the Peregrine lander, built by Pittsburgh-based Astrobotic Technology, was launched to a lunar region similar to the one Odysseus targeted, carrying 20 payloads, including mini-rovers, a spectrometer designed to scour the soil for traces of water, and another to study the moon's exceedingly tenuous atmosphere. Peregrine was not destined to make it out of Earth's orbit, however, after an engine failure stranded it, leaving the ship to plunge back into the atmosphere 10 days after launch.

"There will be some failures," Astrobotic CEO John Thornton told TIME before the Peregrine mission launched. "But if even half of these missions succeed, it is still a wild, runaway success."

Odysseus landed in that second, happier column. Built by Houston-based Intuitive Machines, the spacecraft carries six science instruments, including stereoscopic cameras, an autonomous navigation system, and a radio wave detector to help measure charged particles above the surface, critical to determining the necessary sheathing in an eventual habitat. NASA has at least eight other CLPS missions planned, including two more by Intuitive Machines and another by Astrobotic, through 2026. After that, the program is expected to go on indefinitely, supplying lunar bases for as long as Artemis has astronauts on the moon.

Just when those explorers will arrive is unclear. The Artemis II mission, which was expected to take astronauts on a circumlunar journey in November of this year, has been postponed until September of 2025, due to R&D issues in both the Space Launch System moon rocket and the Orion spacecraft. Artemis III, set to be the first landing since the Apollo 17 astronauts trod the regolith, will likely not come until 2026 at the earliest.

That 52-year wait would not have sat well with that long-ago crew. In the same year in which they flew, the National Football League's Miami Dolphins made a less consequential history of their own, when they became the first and so far only team to go through an entire season undefeated. The surviving members of that legendary squad have waited out the seasons that have followed, pulling for their record to stand, and conceding relief when the final undefeated team at last records a loss. Cernan, for his part, wanted nothing to do with his own "last man" record. "We leave here as we came and, God willing, we shall return, with peace and hope for all mankind," he said before he climbed back up the ladder of his lunar module and left the moon behind. The success of Odysseus does not make the fulfillment of Cernan's wish imminent, but it does nudge it closer.


Ingenuity Mars helicopter snapped rotor blade during hard landing last month (video, photo) – Space.com

There's no way Ingenuity could fly through this.

Ingenuity, the 4-pound (1.8 kilograms) helicopter that journeyed to Mars with NASA's Perseverance rover, was grounded for good after suffering a hard landing during a Jan. 18 flight.

New observations by Perseverance show just how rough that touchdown was and make it easy to understand why Ingenuity is now a frozen feature of the Martian landscape.


We already knew that the Jan. 18 landing broke off the tip of at least one of Ingenuity's four rotors; a selfie snapped by the little chopper shortly thereafter made that plain.

That damage by itself was enough to end Ingenuity's flying days on Mars, mission team members said at the time. Helicopters must be perfectly balanced to maintain controlled flight, and losing bits of a rotor robbed Ingenuity of that balance.
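A back-of-envelope calculation shows why even a few grams of missing blade tip end the mission. The rotor speed and blade radius below are rounded public figures, and the lost mass is an outright guess, not mission data.

```python
# Rough estimate of the unbalanced force from a lost rotor-blade tip.
import math

rpm = 2500.0          # approximate rotor speed (rounded public figure)
radius_m = 0.6        # distance of the lost tip from the hub (assumed)
lost_mass_kg = 0.005  # ~5 g of missing blade tip (pure assumption)

omega = rpm * 2 * math.pi / 60                 # angular speed, rad/s
force_n = lost_mass_kg * radius_m * omega**2   # rotating unbalanced force

weight_on_mars_n = 1.8 * 3.71                  # 1.8 kg craft, Mars gravity
print(f"Unbalanced force: {force_n:.0f} N "
      f"(~{force_n / weight_on_mars_n:.0f}x the craft's Martian weight)")
```

With these assumed numbers, the imbalance comes to roughly 200 N, on the order of 30 times the helicopter's weight on Mars, tugging on the airframe once per revolution; no flight control system on a craft that small could fly through that.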

But the drone lost more than just a rotor tip. The new Perseverance photos, which the rover took with its SuperCam remote imager on Sunday (Feb. 25), show that at least one of Ingenuity's four rotor blades snapped clean off on Jan. 18.

Ingenuity and Perseverance landed together on the floor of Mars' Jezero Crater in February 2021. Two months later, the rotorcraft deployed from the rover's belly and began its prime mission, a five-flight campaign designed to show that powered flight is possible on Mars despite the planet's thin atmosphere.

Ingenuity aced that campaign, then shifted to an extended mission during which it served as a scout for the life-hunting, sample-collecting Perseverance. The helicopter racked up a whopping 67 sorties during this phase of its Mars operations, which were led (like those of Perseverance) by NASA's Jet Propulsion Laboratory (JPL) in Southern California.

Its final flight occurred over a sandy patch of terrain that lacked prominent rocks and other features that Ingenuity relied on for navigation, mission team members said. Ingenuity could not stick the landing, and its fast-spinning blades hit the ground.

The helicopter's legacy is assured. Ingenuity was the first vehicle ever to achieve powered flight in the skies of a world beyond Earth, and its success will pave the way for other aerial explorers.

"The NASA JPL team didn't just demonstrate the technology," Tiffany Morgan, deputy director of NASA's Mars Exploration Program, said during a Jan. 31 webcast tribute to Ingenuity. "They demonstrated an approach that if we use in the future will really help us to explore other planets and be as awe-inspiring, as amazing, as Ingenuity has been."


NASA will retire the ISS soon. Here’s what comes next. – NPR

The International Space Station is pictured from the SpaceX Crew Dragon Endeavour during a fly around of the orbiting lab on Nov. 8, 2021. (NASA)

Since its first modules launched at the end of 1998, the International Space Station has been orbiting 250 miles above Earth. But at the end of 2030, NASA plans to crash the ISS into the ocean after it is replaced with a new space station, a reminder that nothing within Earth's orbit can stay in space forever.

NASA is collaborating on developing a space station owned, built, and operated by a private company: either Axiom Space, Voyager Space, or Blue Origin. NASA is giving each company hundreds of millions of dollars in funding and sharing its expertise with them.

Eventually, NASA will select one company to officially partner with and have it replace the ISS. NASA says this will help the agency focus on deep space exploration, which it considers a much more difficult task.

Progress photos showing the Axiom Space station being built. (Enrico Sacchetti/Axiom Space)

But any company that is able to develop its own space station, get approval from the federal government and launch it into space will be able to pursue its own deep space missions, even without the approval of NASA.

Phil McCalister, director of the Commercial Space Division of NASA, told NPR's Morning Edition that NASA does not want to own in perpetuity everything in low-Earth orbit, which extends up to 1,200 miles above Earth's surface.

"We want to turn those things over to other organizations that could potentially do it more cost-effectively, and then focus our research and activities on deep space exploration," said McCalister.

McCalister says the ISS could stay in space longer, but it's much more cost-effective for NASA to acquire a brand new station with new technology. NASA would then transition to purchasing services from commercial entities as opposed to the government building a next-generation commercial space station.

The ISS was designed in the '80s, so the technology when it was first built was very different from what is available today.

"I kind of see this as like an automobile. When we bought that automobile in 1999, it was state of the art. And it has been great. And it serves us well and continues to be safe. But it's getting older. It's getting harder to find spare parts. The maintenance for that is becoming a larger issue," McCalister said.

A new, private space station will have a lot of similarities and some differences from the current ISS.

Robyn Gatens, director of the International Space Station, says that despite it aging, not all the technology on the ISS is out of date.

"We've been evolving the technology on the International Space Station since it was first built. So some of these technologies will carry over to these private space stations," said Gatens. "We've upgraded the batteries, we've upgraded and added solar arrays that roll out and are flexible, we've been upgrading our life support systems."

The view from NASA spacewalker Thomas Marshburn's camera points downward toward the ISS on December 2, 2021. (Thomas Marshburn/NASA)

Paulo Lozano is the director of the Space Propulsion Laboratory at MIT and an aerospace engineer. He said, "NASA has already changed the solar panels at least once and switched them from these very large arrays that produce relatively little power, to these smaller arrays that produce much more power. All the computer power at the beginning is nothing compared to what can be done today."

Gatens says the structure of the space station, which is the size of a football field, is what can't be upgraded and replaced. And something of that size is costly for NASA to maintain.

"The big structure, even though it's doing very well, has a finite lifetime. It won't last forever. It is affected by the environment that it's in. And every time we dock a vehicle and undock a vehicle, the thermal environment puts stresses and loads on that primary structure that will eventually make it wear out," said Gatens.

Gatens says we can expect a new space station to be designed a little more efficiently and right-sized for the amount of research that NASA and its partners are going to want to do in low-Earth orbit.

NASA astronaut Megan McArthur doing an experiment on the ISS on May 26, 2021. (NASA)

The structure of the ship is also extremely important to the people who work there.

The ISS carries scientists who perform research that can only be done in the weak gravity of space, like medical research. In space, cells age more quickly and conditions progress more rapidly, helping researchers understand the progression of diseases like heart disease or cancer in far less time.

Researchers on the ISS also work to understand what happens to the human body when it's exposed to microgravity. This research is aimed at helping develop ways to counteract the negative effects of being in space and let astronauts stay there longer, something essential to getting a human on Mars.

Gatens says a new space station will have updated research facilities.

"I'm looking forward to seeing very modern laboratory equipment on these space stations. We say the International Space Station has a lot of capability, but it's more like a test kitchen. I'm looking forward to seeing the future commercial space stations take these laboratory capabilities and really develop them into state-of-the-art space laboratories," said Gatens.

Expedition 60 crewmembers Luca Parmitano, Christina Koch, Andrew Morgan, and Nick Hague in the ISS cupola photographing Hurricane Dorian on August 30, 2019. (NASA)

On top of having modern research facilities, new space stations will likely be designed to provide a cleaner environment for researchers.

"If you see pictures of the station, you'll think 'how can they work there?' It looks cluttered, it looks messy," Astronaut Peggy Whitson told NPR. She's spent more time in space than any other woman and is the first woman to command the ISS. Whitson is now Director of Human Spaceflight and an astronaut at Axiom Space, one of the companies funded by NASA to develop a space station.

Whitson said the reason there are cables all over the place is because the structure of the station wasn't designed for some of the systems it has now. She thinks having a method for making a station even more adaptable to new technology will be important in terms of user experience.

Whitson doesn't know what technology will be available five years from now. But she said Axiom Space will want to take advantage of whatever they can get their hands on, ideally without wires everywhere.

Peggy Whitson in the ISS's cupola. (Axiom Space)

"I would like all that cabling and networking to be behind the panels so that it's easier for folks to move around in space," Whitson said. "Having and building in that adaptability is one of the most critical parts, I think, of building a station for low-Earth orbit."

Paulo Lozano says many of the electronic components on the ISS are bulky. But now that electronics are smaller, he expects the interior of future stations might be a bit different.

On the current ISS, there is one small inflatable module. That structure flies up collapsed and then expands as it is filled with air once it's attached to the station's primary structure, literally blowing up like a balloon. Gatens says they are looking at making multiple elements of a new space station inflatable.

Whitson told NPR that on the space station Axiom Space is developing, they will have windows in the crew quarters and a huge cupola, what she describes as an astronaut's window to the world. On the ISS, they have a cupola you can pop your head and shoulders into and see 360-degree views of space and look down at the Earth.

On the proposed Axiom space station, Whitson said the cupola is so large that astronauts will be able to float their whole body in there and have it be an experience of basically almost flying in space.

NASA hopes that by handing responsibility for an ISS replacement over to private companies, it will allow the agency to develop technology more quickly and focus on its next goal of putting a station beyond low-Earth orbit for the first time. Proposed stations beyond low-Earth orbit include the Lunar Gateway, NASA's planned space station in orbit around the moon.

"What the space stations of today are doing is just paving the way for humans to actually explore deeper into space, which is going to be a significantly harder challenge to accomplish. The space stations of today are essential stepping stones towards that goal," said Lozano.

Gatens says one piece of technology that is being developed at Blue Origin is a big rotating space station that, when finished, would have artificial gravity.

For long trips in space, the lack of gravity is a main issue for the human body, causing bone loss and other health issues. "If you could recreate that in space, that will be very beneficial," Gatens said.
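The idea rests on simple physics: spinning the station at angular velocity omega produces a centripetal acceleration of omega squared times the radius, which the crew feels as gravity. A quick sketch, with an assumed station radius rather than any Blue Origin specification:

```python
# Spin rate needed to simulate 1 g on a rotating station.
import math

g = 9.81          # target acceleration, m/s^2
radius_m = 250.0  # station radius (assumed for illustration)

omega = math.sqrt(g / radius_m)   # required angular speed, rad/s
rpm = omega * 60 / (2 * math.pi)

print(f"Spin rate for 1 g at r = {radius_m:.0f} m: {rpm:.2f} rpm")
# About 1.9 rpm at this radius; larger stations can spin more slowly,
# which is easier on the inner ear.
```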

Lozano says that a space station beyond low-Earth orbit would need new technology that is radically different from what's been used on the ISS. And neither NASA nor Lozano thinks it is possible to venture deeper into space, and eventually get a human on Mars, with U.S. government funding alone.

"I don't think we're very far away in terms of technology development. I think we're a little bit far away in terms of investment, because space technology is quite expensive and sometimes a single nation cannot really make it work by itself. So you need international cooperation." Lozano said.

Treye Green edited the digital version of this story.
