AI in the Translation Industry: The 5-10 Year Outlook – AiThority

Artificial intelligence (AI) has already had a major and positive impact on a range of industries, with the potential to deliver much more in the future. We sat down with Ofer Tirosh, CEO of Tomedes, to find out how the translation industry has changed as a result of advances in technology over the past 10 years and what the future might hold in store for it.

Translation services have felt the impact of technology in various positive ways during recent years. For individual translators, the range and quality of computer-assisted translation (CAT) tools have increased massively. A CAT tool is a piece of software that supports the translation process. It helps the translator to edit and manage their translations.

CAT tools usually include translation memories, which are particularly valuable to translators. They store sentences and their translations for future use and can save a vast amount of time during the translation process. This means that translators can work more efficiently, without compromising on quality.
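
At its core, a translation memory is a lookup table from source segments to approved translations, with fuzzy matching so that near-identical sentences are found too. Here is a minimal sketch in Python; the sentences, threshold, and function names are illustrative, not any particular CAT tool's API.

```python
# Toy translation memory with fuzzy matching, the core idea behind CAT tools.
from difflib import SequenceMatcher

memory = {
    "The invoice is due within 30 days.": "La factura vence en 30 dias.",
    "Please sign and return the attached contract.": "Por favor firme y devuelva el contrato adjunto.",
}

def lookup(sentence, threshold=0.75):
    """Return the closest stored (source, translation) pair if it is similar enough."""
    best_score, best_pair = 0.0, None
    for source, target in memory.items():
        score = SequenceMatcher(None, sentence.lower(), source.lower()).ratio()
        if score > best_score:
            best_score, best_pair = score, (source, target)
    return (best_pair, best_score) if best_score >= threshold else (None, best_score)

# A "fuzzy match": the stored translation is offered for editing instead of retranslating.
match, score = lookup("The invoice is due within 45 days.")
print(match, round(score, 2))
```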

There are myriad other ways that technology has helped the industry. Everything from transcription to localization services has become faster and better as a result of tech advances. Even things like contract automation make a difference, as they speed up the overall time taken to set up and deliver on each client contract.

Machine translation is an issue that affects not just our translation agency but the industry as a whole. Human translation still outdoes machine translation in terms of quality, but the wide availability of free translation websites has tempted many companies to try machine translation. The resulting translations are not good quality, and this acceptance of below-par translations isn't great for the industry as a whole, as it drives down standards.

There were some fears around machine translation taking over from professional translation services when machine learning was first used to move away from statistical-based machine translation. However, those fears haven't really materialized. Indeed, the Bureau of Labor Statistics is projecting 19% growth in the employment of interpreters and translators between 2018 and 2028, which is well above the average growth rate.

Instead, the industry has adapted to work alongside the new machine translation technology, with translators providing post-editing machine translation services, which essentially tidy up computerized attempts at translation and turn them into high-quality documents that accurately reflect the original content.

It was the introduction of neural networks that really took machine language learning to the next level. Previously, computers relied on the analysis of phrases (and before that, words) from existing human translations in order to produce a translation. The results were far from ideal.

Neural networks have provided a different way forward. A machine learning algorithm is used so that the machine can explore the data in its own way, learning and progressing in ways that were not previously possible. What is particularly exciting about this approach is the adaptability of the model that the machine creates. It's not a static process but one that can flex and change over time as new data arrives.
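
For a concrete feel of what neural machine translation looks like in practice, here is a hedged sketch using a publicly available pretrained Marian model through the Hugging Face transformers library; it illustrates the general approach, not the stack any particular agency uses.

```python
# Neural machine translation with a public pretrained model (illustrative only).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fr"  # English -> French
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["The model adapts as new data arrives."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.decode(translated[0], skip_special_tokens=True))
```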

I think the fears of machines taking over from human translation professionals have been put to bed for now. Yes, machines can translate better than they used to, but they still can't translate as well as humans can.

I think that we'll see a continuation of the trend towards more audio and video translation. Video, in particular, has become such an important marketing and social connection tool that demand for video translation is likely to boom in the years ahead, just as it has for the past few years.

I've not had access yet to any Predictive Intelligence data for the translation industry, unfortunately, but we're definitely likely to experience an increase in demand for more blended human and machine translation models over the coming years. There's an increasing need to translate faster without a drop in quality, for example in relation to the spread of coronavirus. We need to ensure a smooth, rapid flow of accurate information from country to country in order to tackle the situation as a global issue and not a series of local ones. That's where both machines and humans can support the delivery of high-quality, fast translation services, by working together to achieve maximum efficiency.

AI has had a major impact on the translation industry over the past ten years and I expect the pace of change over the next ten to be even greater, as the technology continues to advance.

Google phone cameras will read heart, breathing rates with AI help – Reuters

(Reuters) - Cameras on Google Pixel smartphones will be able to measure heart and breathing rates starting next month, in one of the first applications of Alphabet Inc's artificial intelligence technology to its wellness services.

Health programs available on Google Play's store and Apple Inc's App Store have provided the same functionality for years. But a study in 2017 found their accuracy varied, and adoption of the apps remains low.

Google Health leaders told reporters earlier this week they had advanced the AI powering the measurements and plan to detail its method and clinical trial in an academic paper in the coming weeks. The company expects to roll out the feature to other Android smartphones at an unspecified time, it said in a blog post on Thursday, but plans for iPhones are unclear.

Apple's Watch, Google's Fitbit and other wearables have greatly expanded the reach of continuous heart rate sensing technologies to a much larger population.

The smartphone camera approach is more ad hoc - users who want to take a pulse place their finger over the lens, which catches subtle color changes that correspond to blood flow. Respiration is calculated from video of upper torso movements.
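
The signal-processing idea is simple enough to sketch. The toy Python below averages the red channel of each video frame and picks the dominant frequency in a plausible pulse band; the file name and parameters are assumptions, and Google's production pipeline is certainly more sophisticated.

```python
# Estimate pulse from a finger-over-lens video: brightness varies with blood flow.
import cv2
import numpy as np

cap = cv2.VideoCapture("finger_on_lens.mp4")   # hypothetical clip
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
reds = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    reds.append(frame[:, :, 2].mean())         # OpenCV stores BGR; channel 2 is red
cap.release()

signal = np.array(reds) - np.mean(reds)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
band = (freqs > 0.7) & (freqs < 3.0)           # 42-180 bpm, a plausible pulse range
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {bpm:.0f} bpm")
```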

Google Health product manager Jack Po said that the company wanted to give an alternative to manual pulse checks for smartphone owners who only want to monitor their condition occasionally but cannot afford a wearable.

Po said the technology, which can misjudge heart rates by about 2%, requires further testing before it could be used in medical settings.

The new feature will be available as an update to the Google Fit app.

Google consolidated its health services about two years ago, aiming to better compete with Apple, Samsung Electronics Co and other mobile technology companies that have invested heavily in marketing wellness offerings.

Reporting by Paresh Dave; Editing by Sam Holmes

Passengers threaten to open cockpit door on AI flight; DGCA seeks action – Times of India

NEW DELHI: The Directorate General of Civil Aviation has asked Air India to act against unruly passengers who banged on the cockpit door and misbehaved with crew of a delayed Delhi-Mumbai flight on Thursday (Jan 2).

While threatening to break open the door, some passengers had asked the Boeing 747's pilots to come out of the cockpit and explain the situation, a few hours after the jumbo jet had returned to the bay at IGI Airport from the runway due to a technical snag. AI is yet to take a call on beginning proceedings under the strict no-fly list against the unruly passengers of this flight.

DGCA chief Arun Kumar said: "We have asked the airline to act against the unruly behaviour."

AI spokesman Dhananjay Kumar said: "A video of a few passengers of AI 865 of January 2 is being widely circulated in different forums. That flight was considerably delayed due to technical reasons. AI management has asked the operating crew for a detailed report on the reported misbehaviour by some passengers. Further action would be considered after getting the report."

The 24-year-old B747 (VT-EVA) was to operate as AI 865 at 10.10 am on Thursday. "Passengers had boarded the flight by 9.15 am. The aircraft taxied out at 10 am and returned from the taxiway in about 10 minutes. Attempts were made to rectify the snag. Finally passengers were asked to alight from this plane at about 2.20 pm and were sent to Mumbai by another aircraft at 6 pm Thursday," said an AI official. So there was a delay of about eight hours in the passengers taking off for their destination.

While airlines should do their best to minimise passenger woes during flight delays, unruly behaviour by flyers targeting crew is unacceptable globally. India also now has a no-fly list under which disruptive passengers can be barred from flying for up to a lifetime, depending on the gravity of their unruly behaviour.

Problems on board the B747 (VT-EVA) named Agra began when passengers got restive after waiting for more than a couple of hours for the snag to be rectified.

Videos have emerged showing some young passengers banging on the cockpit door, asking the pilots to come out. "Captain, please come out... Loser, come out... Come out or we will break the door," they yell at the cockpit crew. The cockpit is on the upper deck of B747s, where AI has its business class.

Facebook and NYU use artificial intelligence to make MRI scans four times faster – The Verge

If you've ever had an MRI scan before, you'll know how unsettling the experience can be. You're placed in a claustrophobia-inducing tube and asked to stay completely still for up to an hour while unseen hardware whirs, creaks, and thumps around you like a medical poltergeist. New research, though, suggests AI can help with this predicament by making MRI scans four times faster, getting patients in and out of the tube quicker.

The work is a collaborative project called fastMRI between Facebook's AI research team (FAIR) and radiologists at NYU Langone Health. Together, the scientists trained a machine learning model on pairs of low-resolution and high-resolution MRI scans, using this model to predict what final MRI scans look like from just a quarter of the usual input data. That means scans can be done faster, with less hassle for patients and quicker diagnoses.

"It's a major stepping stone to incorporating AI into medical imaging," Nafissa Yakubova, a visiting biomedical AI researcher at FAIR who worked on the project, tells The Verge.

The reason artificial intelligence can be used to produce the same scans from less data is that the neural network has essentially learned an abstract idea of what a medical scan looks like by examining the training data. It then uses this to make a prediction about the final output. Think of it like an architect who's designed lots of banks over the years. They have an abstract idea of what a bank looks like, and so they can create a final blueprint faster.
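
In training terms, the setup pairs an undersampled input with the fully sampled scan as the target and minimizes a reconstruction loss. The PyTorch sketch below shows that loop on random stand-in tensors; it is a schematic of the idea, not the fastMRI code itself.

```python
# Schematic paired training: map undersampled inputs to fully sampled targets.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

mask = torch.zeros(64)
mask[::4] = 1                                   # keep every 4th column: ~4x less data

for step in range(100):
    full = torch.randn(8, 1, 64, 64)            # stand-in for fully sampled scans
    undersampled = full * mask                  # crude stand-in for undersampling
    loss = nn.functional.l1_loss(net(undersampled), full)
    opt.zero_grad()
    loss.backward()
    opt.step()
```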

"The neural net knows about the overall structure of the medical image," Dan Sodickson, professor of radiology at NYU Langone Health, tells The Verge. "In some ways what we're doing is filling in what is unique about this particular patient's [scan] based on the data."

The fastMRI team has been working on this problem for years, but today, they are publishing a clinical study in the American Journal of Roentgenology, which they say proves the trustworthiness of their method. The study asked radiologists to make diagnoses based on both traditional MRI scans and AI-enhanced scans of patients' knees. The study reports that when faced with both traditional and AI scans, doctors made the exact same assessments.

"The key word here on which trust can be based is interchangeability," says Sodickson. "We're not looking at some quantitative metric based on image quality. We're saying that radiologists make the same diagnoses. They find the same problems. They miss nothing."

This concept is extremely important. Although machine learning models are frequently used to create high-resolution data from low-resolution input, this process can often introduce errors. For example, AI can be used to upscale low-resolution imagery from old video games, but humans have to check the output to make sure it matches the input. And the idea of AI imagining an incorrect MRI scan is obviously worrying.

The fastMRI team, though, says this isn't an issue with their method. For a start, the input data used to create the AI scans completely covers the target area of the body. The machine learning model isn't guessing what a final scan looks like from just a few puzzle pieces. It has all the pieces it needs, just at a lower resolution. Secondly, the scientists created a check system for the neural network based on the physics of MRI scans. That means at regular intervals during the creation of a scan, the AI system checks that its output data matches what is physically possible for an MRI machine to produce.

"We don't just allow the network to create any arbitrary image," says Sodickson. "We require that any image generated through the process must have been physically realizable as an MRI image. We're limiting the search space, in a way, making sure that everything is consistent with MRI physics."
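
One common way to implement such a physics check is a data-consistency step: transform the network's output back into the scanner's measurement domain (k-space) and overwrite it with the values that were actually measured. The function below is a minimal sketch of that idea with assumed tensor shapes; real MRI pipelines work with complex-valued, multi-coil data.

```python
# Data-consistency sketch: keep measured k-space values, let the network fill the rest.
import torch

def data_consistency(predicted_image, measured_kspace, mask):
    """mask is 1 where k-space was actually sampled, 0 where the network had to fill in."""
    pred_kspace = torch.fft.fft2(predicted_image)
    merged = mask * measured_kspace + (1 - mask) * pred_kspace
    return torch.fft.ifft2(merged).real
```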

Yakubova says it was this particular insight, which only came about after long discussions between the radiologists and the AI engineers, that enabled the project's success. "Complementary expertise is key to creating solutions like this," she says.

The next step, though, is getting the technology into hospitals where it can actually help patients. The fastMRI team is confident this can happen fairly quickly, perhaps in just a matter of years. The training data and model they've created are completely open access and can be incorporated into existing MRI scanners without new hardware. And Sodickson says the researchers are already in talks with the companies that produce these scanners.

Karin Shmueli, who heads the MRI research team at University College London and was not involved with this research, told The Verge this would be a key step to move forward.

"The bottleneck in taking something from research into the clinic is often adoption and implementation by manufacturers," says Shmueli. She added that work like fastMRI was part of a wider trend of incorporating artificial intelligence into medical imaging that was extremely promising. "AI is definitely going to be more in use in the future," she says.

Samsung AI Forum 2020: Humanity Takes Center Stage in Discussing the Future of AI – Samsung Global Newsroom

Each year, Samsung Electronics' AI Forum brings together experts from all over the world to discuss the latest advancements in artificial intelligence (AI) and share ideas on the next directions for the development of these technologies.

This November 2 and 3, experts, researchers and interested viewers alike convened virtually to share the latest developments in AI research and discuss some of the most pressing and relevant issues facing AI research today.

AI technologies have developed remarkably in recent years, thanks in no small part to the hard work and diverse research projects being done by academic and corporate researchers alike all around the world. But given the rapid and significant changes brought on by the recent global pandemic, attention has recently been turning to how AI can be used to help solve real-life problems, and what methods might be most effective in order to create such solutions.

The first day of the forum, organized by the Samsung Advanced Institute of Technology (SAIT), was opened with a keynote speech by Dr. Kinam Kim, Vice Chairman and CEO of Device Solutions at Samsung Electronics, who acknowledged the importance of the discussions set to take place at this year's AI Forum around the past, present and future of the role of AI. Dr. Kim also affirmed Samsung Electronics' dedication to working with global researchers in order to develop products and services with meaningful real-world impact.

The first day of the Forum then continued with a series of fascinating invited talks given by several global leading academics and professionals. Professor Yoshua Bengio of University of Montreal, Professor Yann LeCun of New York University and Professor Chelsea Finn of Stanford University were the first three to present, following which the Samsung AI Researcher of the Year awards were presented. After this ceremony, SAIT Fellow Professor Donhee Ham of Harvard University, Dr. Tara Sainath of Google Research and Dr. Jennifer Wortman Vaughan of Microsoft Research gave their talks.

The first day's invited talks were followed by a virtual live panel discussion, moderated by Young Sang Choi, Vice President of Samsung Electronics, and attended by Professor Bengio, Professor LeCun, Professor Finn, Dr. Sainath, Dr. Wortman Vaughan and Dr. Inyup Kang, President of Samsung Electronics' System LSI business. "It is my great pleasure to join this Forum," noted Dr. Kang. "I feel as if I am standing on the shoulders of giants."

Questions put to the panel invited the experts to discuss how computational bottlenecks can be overcome in order to take AI systems to the next level and develop them to possess the same intelligibility as the human brain. The panelists weighed the benefits of scaling neural nets as opposed to searching for new algorithms, with Dr. Kang noting that, "We have to try both. Given the scale of human synapses, I doubt that we can achieve the human level of intelligibility using just current technologies. Eventually we will get there, but we definitely need new algorithms, too."

Professor LeCun noted how AI research is not just constrained by current scaling methods. "We are missing some major pieces to being able to reach human-level intelligence, or even just animal-level intelligence," he said, adding that perhaps, in the near future, we might be able to develop machines that can at least reach the scale of an animal such as a cat. Professor Finn concurred with Professor LeCun. "We still don't even have the AI capabilities to make a bowl of cereal," she noted. "Such basic things are still beyond what our current algorithms are capable of."

Building on the topic of his invited talk, Professor Bengio added that, in order for future systems to have intelligence comparable to the way humans learn as children, a world model based on unsupervised learning will need to be developed. "Our models need to act like human babies in order to go after knowledge in an active way," he explained.

The panel discussion then moved on to the ways in which the community can bridge the gaps between current technologies and future human-intelligence-level technologies, with all the experts agreeing that there is still much work to be done in developing systems that mimic the way human synapses work. "A lot of current research directions are trying to address these gaps," reassured Professor Bengio.

Next, the panel shared their thoughts on how to make AI fairer given the inherent biases possessed by today's societies, with the experts debating the balance that needs to be struck between systems development reform, institutional regulation and corporate interest. Dr. Wortman Vaughan made the case for introducing a diversity of viewpoints across all parts of the system-building process. "I would like to see regulation around processes for people to follow when designing machine learning systems rather than trying to make everyone meet the same outcomes."

The final question given to the panel asked for their thoughts on which field will be the next successful application area for end-to-end models. "End-to-end models changed the field of speech recognition by reducing latency and removing the need for an internet connection," noted Dr. Sainath. "Thanks to this breakthrough, going forward, you're going to see applications of end-to-end models for such purposes as long meeting transcriptions. We always speak of having one model to rule them all, and this is a challenging and interesting research area that has been expanded by the possibilities of end-to-end models as we look to develop a model capable of recognizing all the languages in the world."

The second day of the AI Forum 2020 was hosted by Samsung Research, the advanced R&D hub of Samsung Electronics that leads the development of future technologies for the company's end-product business.

In his opening keynote speech, Dr. Sebastian Seung, President and Head of Samsung Research, outlined the areas in which Samsung has been accelerating its AI research to the end of providing real-world benefits to their users, including more traditional AI fields (vision and graphics, speech and language, robotics), on-device AI and the health and wellness field.

After showcasing a range of Samsung products bolstered with AI technologies, Dr. Seung affirmed that, in order to best extend the capabilities of AI to truly help people in meaningful ways, academic researchers and corporations need to come together to find best-practice solutions.

Following Dr. Seung's speech, the second day of the Forum proceeded with a series of invited talks around the theme of Human-Centric AI by Professor Christopher Manning of Stanford University, Professor Devi Parikh of the Georgia Institute of Technology, Professor Subbarao Kambhampati of Arizona State University and Executive Vice President of Samsung Research Daniel D. Lee, Head of Samsung's AI Center in New York and Professor at Cornell Tech.

The expert talks were followed by a live panel discussion, moderated by Dr. Seung and joined by Professor Manning, Professor Parikh, Professor Kambhampati and EVP Lee. Dr. Seung kicked off the discussion with a question about a topic raised in Professor Kambhampati's speech around the potential issues that could lead to the risk of data manipulation as AI develops. "As AI technology continues to develop, it is important that we stay vigilant about the potential for manipulation and work to solve the issues of any AI system's inadvertent data manipulations," explained Professor Kambhampati.

Dr. Seung then posed a much-requested viewer question to the panel. Given that one of the most practical concerns in AI research is obtaining data, the experts were asked whether they believe that companies or academic researchers need to develop new means of handling and managing data. Acknowledging that academics often struggle to secure data, while companies face fewer data shortages but greater restrictions on how their data can be used, Professor Parikh made a case for new research methods that can work with limited data and for cooperation between academia and industry, including open research methods. "In many areas, there are big public data sets available," she noted. "Researchers outside of companies are able to access and use these. But further to this, some of the most interesting fields in AI today are the ones where we don't have much data; these represent some of the most cutting-edge problems and approaches."

The final question took the panel back to the theme of the AI Forum's second day, Human-Centric AI, wherein the panelists were asked whether or not they believe that AI will be capable of equaling human intelligence in the next 70 years, since that is the period of time it has taken us to get to where we are today in the field of AI research. EVP Lee reasoned that AI still has a way to go, but that 70 years is a long time. "I am optimistic," noted EVP Lee, "but there are lots of hard problems in the way. We need to have academics and companies working on a goal like this together."

"We are currently reaching the limits of the range of problems we can solve using just lots of data," summarized Professor Manning. "Before we see AI developments like this on a large scale, an area that we should emphasize is the production of AI systems that work for regular people, not just huge corporations," he concluded.

The Samsung AI Forum 2020 ended with a warm thanks to all the esteemed experts who had taken part in the two-day Forum and a shared hope to hold next year's Forum offline. All the sessions and invited talks from the AI Forum 2020 are available to watch on the official Samsung YouTube channel.

AGAT Software Announces the Launch of the Recording AI Compliance Analysis Capabilities for All the Leading Vendors in the Unified Communications…

AGAT, a market leader in Security and Compliance solutions for Unified Communications (UC), announces the launch of Recording AI Compliance Analysis capabilities for Webex Meetings, Zoom and Microsoft Teams.

AGAT Software is pushing the limits and setting higher standards for regulation technology covering frameworks such as FINRA, GDPR, HIPAA and MiFID II.

In a world where video conferencing is becoming more common than regular face-to-face meetings, and audio calls are replacing chat and email, SphereShield by AGAT delivers advanced yet intuitive solutions for recording compliance and deep analysis.

In a nutshell, AGAT is bringing audio and video (including screen sharing) analysis for DLP and eDiscovery needs, meaning that companies can have a precise written record of who said what at a video conference and which content was shared on the screen.
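
As a rough illustration of the underlying capability (not AGAT's product), an open-source speech-to-text model can already turn a meeting recording into a time-stamped transcript that downstream DLP or eDiscovery rules could scan; the file name below is a placeholder.

```python
# Transcribe a meeting recording with the open-source Whisper model.
import whisper  # pip install openai-whisper

model = whisper.load_model("base")
result = model.transcribe("recorded_meeting.wav")   # placeholder file
for segment in result["segments"]:
    print(f'[{segment["start"]:7.1f}s] {segment["text"]}')
# A DLP rule could then scan each segment for regulated terms or card numbers.
```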

The possibilities include the following

AI Algorithm Remotely Monitors Sleep – Geek

Sleep is one of life's most precious gifts, along with scented candles, tacos, and Barry Manilow.

Yet millions of Americans lie awake each night, counting sheep and wishing for their insomniac hell to end.

Sure, you could attach a bunch of sleep-monitoring sensors to yourself, but those will probably do more harm than good. The best alternative, it seems, comes down to science.

Researchers at Massachusetts Institute of Technology (MIT) and Massachusetts General Hospital (MGH) developed a way to remotely observe sleep stages, without applying any bits and bobs.

The device, mounted on a nearby wall, uses an artificial intelligence algorithm to analyze and translate radio signals around the user into sleep stages: light, deep, and rapid eye movement (REM).

(Sounds like some kind of Voldemort-dark magic to me.)

"Imagine if your Wi-Fi router knows when you are dreaming, and can monitor whether you are having enough deep sleep, which is necessary for memory consolidation," study lead Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, said in a statement.

"Our vision is developing health sensors that will disappear into the background and capture physiological signals and important health metrics, without asking the user to change her behavior in any way," she added.

Katabi adopted previously developed radio-based sensors, which emit low-power radio frequency (RF) signals that reflect off the body, as a novel way to monitor sleep.

"The opportunity is very big because we don't understand sleep well, and a high fraction of the population has sleep problems," MIT graduate student and study co-author Mingmin Zhao said. "We have the technology that, if we can make it work, can move us from a world where we do sleep studies once every few months in the sleep lab to continuous sleep studies in the home."

To achieve that, the team incorporated a proprietary deep neural network-based AI algorithm, which automatically eliminates irrelevant information.

Just mount and sleep (via Shichao Yue @ MIT)

The novelty lies in preserving the sleep signal while removing the rest of the unwanted data, according to Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science.
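
In outline, the approach is a classifier over windows of the radio signal: spectrogram-like features in, a probability per sleep stage out. The PyTorch toy below shows that shape of solution; the dimensions, labels and architecture are invented for illustration, and the MIT system is far more elaborate.

```python
# Toy sleep-stage classifier over windows of RF-derived features.
import torch
import torch.nn as nn

STAGES = ["awake", "light", "deep", "REM"]

classifier = nn.Sequential(
    nn.Conv1d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, len(STAGES)),
)

window = torch.randn(1, 64, 300)   # 64 frequency bins x 300 time steps of reflections
probs = classifier(window).softmax(dim=-1)
print(dict(zip(STAGES, probs[0].tolist())))
```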

The resulting 80 percent accuracy, MIT boasted, is comparable to that of sleep specialists based on EEG measurements.

"Our device allows you not only to remove all of these sensors that you put on the person, and make it a much better experience that can be done at home," Katabi said.

"It not only makes the job of the doctor and sleep technologist easier, it also opens new doors for studying how certain diseases, like Parkinson's, affect sleep. They don't have to go through the data and manually label it."

Katabi and Jaakkola partnered with Matt Bianchi, chief of the MGH Division of Sleep Medicine, to present their findings in a paper co-written by Zhao and grad student Shichao Yue.

When bad actors have AI tools: Rethinking security tactics – The Enterprisers Project

Cloud-stored data is suddenly encrypted, followed by a note demanding ransom or threatening public embarrassment. Corporate email addresses become conduits for malware and malicious links. An organization's core business platform abruptly goes offline, disrupting vital communications and services for hours.

We've learned to recognize the familiar signs of a cyberattack, thanks to the growing array of well-publicized incidents when threat actors from nation-states or criminal enterprises breach our digital networks. Artificial intelligence is changing this picture.

With AI, organizations can program machines to perform tasks that would normally require human intelligence. Examples include self-driving trucks, computer programs that develop drug therapies, and software that writes news articles and composes music. Machine learning (ML) is an application of AI that uses algorithms to teach computers to learn and adapt to new data.

AI and ML represent a revolutionary new way of harnessing technology and an unprecedented opportunity for threat actors to sow even more disruption.

What do these emerging adversarial AI/ML threats look like? How can we take the appropriate measures to protect ourselves, our data, and society as a whole?

Step one in cybersecurity is to think like the enemy. What could you do as a threat actor with adversarial AI/ML? The possibilities are many, with the potential impact extending beyond cyberspace:

You could manipulate what a device is trained to see: for instance, corrupting training imagery so that a driving robot interprets a stop sign as 55 mph. Because intelligent machines lack the ability to understand context, the driving robot in this case would just keep driving over obstacles or into a brick wall if these things stood in its way. Closer to home, an adversarial AI/ML attack can fool your computer's anti-virus software into allowing malware to run.
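
The classic evasion trick, the fast gradient sign method (FGSM), makes this concrete: nudge each input pixel slightly in the direction that most increases the model's loss, and the prediction can flip while the image looks unchanged to a human. The PyTorch sketch below uses an untrained stand-in model just to show the mechanics.

```python
# FGSM in a few lines: perturb the input along the sign of the loss gradient.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)
label = torch.tensor([3])                                     # the true class

loss = loss_fn(model(image), label)
loss.backward()

epsilon = 0.05                                                # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
print(model(adversarial).argmax().item())                     # may no longer be 3
```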

You could manipulate what humans see, like a phone number that looks like it's from your area code. Deepfakes are a sophisticated and frightening example of this. Manufactured videos of politicians and celebrities, nearly indistinguishable from the real thing, have been shared over social media among millions of people before being identified as fake.

Furthermore, you can manipulate what an AI application does, like Twitter users did with Microsoft's AI chatbot Tay. In less than a day, they trained the chatbot to spew misogynistic and racist remarks.

Once a machine learning application is live, you can tamper with its algorithms: for instance, directing an application for automated email responses to instead spit out sensitive information like credit card numbers. If you're with a cybercriminal organization, this is valuable data ripe for exploitation.

You could even alter the course of geopolitical events. Retaliation for cyberattacks has already been moving into the physical world, as we saw with the 2016 hacking of Ukraine's power grid. Adversarial AI ups the ante.

Fortunately, as adversarial AI/ML tactics evolve, so do the cybersecurity measures against them. One tactic is training an algorithm to think more like a human. AI research and deployment company OpenAI suggests explicitly training algorithms against adversarial attacks, training multiple defense models, and training AI models to output probabilities rather than hard decisions, which makes it more difficult for an adversary to exploit the model.
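
Here is a minimal sketch of the first of those suggestions, adversarial training, reusing the FGSM step from the earlier example: generate a perturbed copy of each batch and train on clean and perturbed inputs together. The function names are ours, and epsilon would be tuned in practice.

```python
# Adversarial training sketch: learn from clean and FGSM-perturbed batches together.
import torch

def fgsm(model, loss_fn, x, y, epsilon=0.05):
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y):
    x_adv = fgsm(model, loss_fn, x, y)
    optimizer.zero_grad()
    # Train on the clean and perturbed inputs together.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```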

Training can also be used in threat detection: for example, training computers to detect deepfake videos by feeding them examples of deepfakes compared with real videos.

IT teams can also achieve an ounce of prevention by baking security into their AI/ML applications from the beginning. When building models, keep in mind how adversaries may try to cause damage. A variety of resources, like IBM's Adversarial Robustness Toolbox, have emerged to help IT teams evaluate ML models and create more robust and secure AI applications.

Where should organizations start their efforts? Identify the easiest attack vector and bake defenses against it directly into your AI/ML pipeline. By tackling concrete problems with bespoke solutions, you can mitigate threats in the short term while building the understanding and depth needed to tackle long-term solutions.

Attackers armed with AI pose a formidable threat. Bad actors are constantly looking at loopholes and ways to exploit them, and with the right AI system, they can manipulate systems in new, insidious ways and easily perform functions at a scale unachievable by humans. Fortunately, AI is part of the cybersecurity solution as well, powering complex models for detecting malicious behavior, sophisticated threats, and evolving trends and conducting this analysis far faster than any team of humans could.

NASA – Wikipedia

The National Aeronautics and Space Administration (NASA) is an independent agency of the U.S. federal government responsible for the civil space program, aeronautics research, and space research.

NASA was established in 1958, succeeding the National Advisory Committee for Aeronautics (NACA), to give the U.S. space development effort a distinctly civilian orientation, emphasizing peaceful applications in space science.[5][6][7] NASA has since led most American space exploration, including Project Mercury, Project Gemini, the 1968-1972 Apollo Moon landing missions, the Skylab space station, and the Space Shuttle. NASA supports the International Space Station and oversees the development of the Orion spacecraft and the Space Launch System for the crewed lunar Artemis program, Commercial Crew spacecraft, and the planned Lunar Gateway space station. The agency is also responsible for the Launch Services Program, which provides oversight of launch operations and countdown management for uncrewed NASA launches.

NASA's science is focused on better understanding Earth through the Earth Observing System;[8] advancing heliophysics through the efforts of the Science Mission Directorate's Heliophysics Research Program;[9] exploring bodies throughout the Solar System with advanced robotic spacecraft such as New Horizons and planetary rovers such as Perseverance;[10] and researching astrophysics topics, such as the Big Bang, through the James Webb Space Telescope, and the Great Observatories and associated programs.[11]

The agency's administration is located at NASA Headquarters in Washington, DC, and provides overall guidance and direction.[12] Except under exceptional circumstances, NASA civil service employees are required to be US citizens.[13] NASA's administrator is nominated by the President of the United States subject to the approval of the US Senate,[14] and serves at the President's pleasure as a senior space science advisor. The current administrator is Bill Nelson, appointed by President Joe Biden, since May 3, 2021.[15]

NASA operates with four FY2022 strategic goals.[16]

NASA budget requests are developed by NASA and approved by the administration prior to submission to the U.S. Congress. Authorized budgets are those that have been included in enacted appropriations bills that are approved by both houses of Congress and enacted into law by the U.S. president.[17]

NASA fiscal year budget requests and authorized budgets are provided below.

NASA funding and priorities are developed through its six Mission Directorates.

Center-wide activities such as the Chief Engineer and Safety and Mission Assurance organizations are aligned to the headquarters function. The MSD budget estimate includes funds for these HQ functions. The administration operates 10 major field centers with several managing additional subordinate facilities across the country. Each is led by a Center Director (data below valid as of September 1, 2022).

Short 2018 documentary about NASA produced for its 60th anniversary

Beginning in 1946, the National Advisory Committee for Aeronautics (NACA) began experimenting with rocket planes such as the supersonic Bell X-1.[43] In the early 1950s, there was a challenge to launch an artificial satellite for the International Geophysical Year (1957–1958). One effort toward this was the American Project Vanguard. After the Soviet space program's launch of the world's first artificial satellite (Sputnik 1) on October 4, 1957, the attention of the United States turned toward its own fledgling space efforts. The US Congress, alarmed by the perceived threat to national security and technological leadership (known as the "Sputnik crisis"), urged immediate and swift action; President Dwight D. Eisenhower counseled more deliberate measures. The result was a consensus that the White House forged among key interest groups, including scientists committed to basic research; the Pentagon, which had to match the Soviet military achievement; corporate America looking for new business; and a strong new trend in public opinion looking up to space exploration.[44]

On January 12, 1958, NACA organized a "Special Committee on Space Technology", headed by Guyford Stever.[7] On January 14, 1958, NACA Director Hugh Dryden published "A National Research Program for Space Technology", stating,[45]

It is of great urgency and importance to our country both from consideration of our prestige as a nation as well as military necessity that this challenge [Sputnik] be met by an energetic program of research and development for the conquest of space ... It is accordingly proposed that the scientific research be the responsibility of a national civilian agency ... NACA is capable, by rapid extension and expansion of its effort, of providing leadership in space technology.[45]

While this new federal agency would conduct all non-military space activity, the Advanced Research Projects Agency (ARPA) was created in February 1958 to develop space technology for military application.[46]

On July 29, 1958, Eisenhower signed the National Aeronautics and Space Act, establishing NASA.[47] When it began operations on October 1, 1958, NASA absorbed the 43-year-old NACA intact; its 8,000 employees, an annual budget of US$100 million, three major research laboratories (Langley Aeronautical Laboratory, Ames Aeronautical Laboratory, and Lewis Flight Propulsion Laboratory) and two small test facilities.[48] Elements of the Army Ballistic Missile Agency and the United States Naval Research Laboratory were incorporated into NASA. A significant contributor to NASA's entry into the Space Race with the Soviet Union was the technology from the German rocket program led by Wernher von Braun, who was now working for the Army Ballistic Missile Agency (ABMA), which in turn incorporated the technology of American scientist Robert Goddard's earlier works.[49] Earlier research efforts within the US Air Force[48] and many of ARPA's early space programs were also transferred to NASA.[50] In December 1958, NASA gained control of the Jet Propulsion Laboratory, a contractor facility operated by the California Institute of Technology.[48]

NASA's first administrator was Dr. T. Keith Glennan, who was appointed by President Dwight D. Eisenhower. During his term (1958–1961) he brought together the disparate projects in American space development research.[51] James Webb led the agency during the development of the Apollo program in the 1960s.[52] James C. Fletcher held the position twice; first during the Nixon administration in the 1970s and then at the request of Ronald Reagan following the Challenger disaster.[53] Daniel Goldin held the post for nearly 10 years and is the longest serving administrator to date. He is best known for pioneering the "faster, better, cheaper" approach to space programs.[54] Bill Nelson is currently serving as the 14th administrator of NASA.

The NASA seal was approved by Eisenhower in 1959, and slightly modified by President John F. Kennedy in 1961.[55][56] NASA's first logo was designed by the head of Lewis' Research Reports Division, James Modarelli, as a simplification of the 1959 seal.[57] In 1975, the original logo was first dubbed "the meatball" to distinguish it from the newly designed "worm" logo which replaced it. The "meatball" returned to official use in 1992.[57] The "worm" was brought out of retirement by administrator Jim Bridenstine in 2020.[58]

NASA Headquarters in Washington, DC provides overall guidance and political leadership to the agency's ten field centers, through which all other facilities are administered.[59]

Aerial views of the NASA Ames (left) and NASA Armstrong (right) centers

Ames Research Center (ARC) at Moffett Field is located in the Silicon Valley of central California and delivers wind-tunnel research on the aerodynamics of propeller-driven aircraft along with research and technology in aeronautics, spaceflight, and information technology.[60] It provides leadership in astrobiology, small satellites, robotic lunar exploration, intelligent/adaptive systems and thermal protection.

Armstrong Flight Research Center (AFRC) is located inside Edwards Air Force Base and is the home of the Shuttle Carrier Aircraft (SCA), a modified Boeing 747 designed to carry a Space Shuttle orbiter back to Kennedy Space Center after a landing at Edwards AFB. The center focuses on flight testing of advanced aerospace systems.

Glenn Research Center is based in Cleveland, Ohio and focuses on air-breathing and in-space propulsion and cryogenics, communications, power energy storage and conversion, microgravity sciences, and advanced materials.[61]

Goddard Space Flight Center (GSFC), located in Greenbelt, Maryland develops and operates uncrewed scientific spacecraft.[62] GSFC also operates two spaceflight tracking and data acquisition networks (the Space Network and the Near Earth Network), develops and maintains advanced space and Earth science data information systems, and develops satellite systems for the National Oceanic and Atmospheric Administration (NOAA).[62]

Johnson Space Center (JSC) is the NASA center for human spaceflight training, research and flight control.[63] It is home to the United States Astronaut Corps and is responsible for training astronauts from the US and its international partners, and includes the Christopher C. Kraft Jr. Mission Control Center.[64] JSC also operates the White Sands Test Facility in Las Cruces, New Mexico to support rocket testing.

Jet Propulsion Laboratory (JPL), located in the San Gabriel Valley area of Los Angeles County, California, builds and operates robotic planetary spacecraft, though it also conducts Earth-orbit and astronomy missions.[65] It is also responsible for operating NASA's Deep Space Network (DSN).

Langley Research Center (LaRC), located in Hampton, Virginia devotes two-thirds of its programs to aeronautics, and the rest to space. LaRC researchers use more than 40 wind tunnels to study improved aircraft and spacecraft safety, performance, and efficiency. The center was also home to early human spaceflight efforts including the team chronicled in the Hidden Figures story.[66]

Kennedy Space Center (KSC), located west of Cape Canaveral Space Force Station in Florida, has been the launch site for every United States human space flight since 1968. KSC also manages and operates uncrewed rocket launch facilities for America's civil space program from three pads at Cape Canaveral.[67]

Marshall Space Flight Center (MSFC), located on the Redstone Arsenal near Huntsville, Alabama, is one of NASA's largest centers and is leading the development of the Space Launch System in support of the Artemis program. Marshall is NASA's lead center for International Space Station (ISS) design and assembly; payloads and related crew training; and was the lead for Space Shuttle propulsion and its external tank.[68]

Stennis Space Center, originally the "Mississippi Test Facility", is located in Hancock County, Mississippi, on the banks of the Pearl River at the Mississippi–Louisiana border.[69] Commissioned in October 1961, it is currently used for rocket testing by over 30 local, state, national, international, private, and public companies and agencies.[70][71] It also contains the NASA Shared Services Center.[72]

NASA inherited NACA's X-15 experimental rocket-powered hypersonic research aircraft, developed in conjunction with the US Air Force and Navy. Three planes were built starting in 1955. The X-15 was drop-launched from the wing of one of two NASA Boeing B-52 Stratofortresses, NB52A tail number 52-003, and NB52B, tail number 52-008 (known as the Balls 8). Release took place at an altitude of about 45,000 feet (14 km) and a speed of about 500 miles per hour (805 km/h).[73]

Twelve pilots were selected for the program from the Air Force, Navy, and NACA. A total of 199 flights were made between June 1959 and December 1968, resulting in the official world record for the highest speed ever reached by a crewed powered aircraft (current as of 2014), and a maximum speed of Mach 6.72, 4,519 miles per hour (7,273 km/h).[74] The altitude record for the X-15 was 354,200 feet (107.96 km).[75] Eight of the pilots were awarded Air Force astronaut wings for flying above 260,000 feet (80 km), and two flights by Joseph A. Walker exceeded 100 kilometers (330,000 ft), qualifying as spaceflight according to the International Aeronautical Federation. The X-15 program employed mechanical techniques used in the later crewed spaceflight programs, including reaction control system jets for controlling the orientation of a spacecraft, space suits, and horizon definition for navigation.[75] The reentry and landing data collected were valuable to NASA for designing the Space Shuttle.[76]

In 1958, NASA formed an engineering group, the Space Task Group, to manage their human spaceflight programs under the direction of Robert Gilruth. Their earliest programs were conducted under the pressure of the Cold War competition between the US and the Soviet Union. NASA inherited the US Air Force's Man in Space Soonest program, which considered many crewed spacecraft designs ranging from rocket planes like the X-15, to small ballistic space capsules.[77] By 1958, the space plane concepts were eliminated in favor of the ballistic capsule,[78] and NASA renamed it Project Mercury. The first seven astronauts were selected among candidates from the Navy, Air Force and Marine test pilot programs. On May 5, 1961, astronaut Alan Shepard became the first American in space aboard a capsule he named Freedom 7, launched on a Redstone booster on a 15-minute ballistic (suborbital) flight.[79] John Glenn became the first American to be launched into orbit, on an Atlas launch vehicle on February 20, 1962, aboard Friendship 7.[80] Glenn completed three orbits, after which three more orbital flights were made, culminating in L. Gordon Cooper's 22-orbit flight Faith 7, May 15–16, 1963.[81] Katherine Johnson, Mary Jackson, and Dorothy Vaughan were three of the human computers doing calculations on trajectories during the Space Race.[82][83][84] Johnson was well known for doing trajectory calculations for John Glenn's mission in 1962, where she was running the same equations by hand that were being run on the computer.[82]

Mercury's competition from the Soviet Union (USSR) was the single-pilot Vostok spacecraft. They sent the first man in space, cosmonaut Yuri Gagarin, into a single Earth orbit aboard Vostok 1 in April 1961, one month before Shepard's flight.[85] In August 1962, they achieved an almost four-day record flight with Andriyan Nikolayev aboard Vostok 3, and also conducted a concurrent Vostok 4 mission carrying Pavel Popovich.[86]

Based on studies to grow the Mercury spacecraft capabilities to long-duration flights, developing space rendezvous techniques, and precision Earth landing, Project Gemini was started as a two-man program in 1961 to overcome the Soviets' lead and to support the planned Apollo crewed lunar landing program, adding extravehicular activity (EVA) and rendezvous and docking to its objectives. The first crewed Gemini flight, Gemini 3, was flown by Gus Grissom and John Young on March 23, 1965.[87] Nine missions followed in 1965 and 1966, demonstrating an endurance mission of nearly fourteen days, rendezvous, docking, and practical EVA, and gathering medical data on the effects of weightlessness on humans.[88][89]

Under the direction of Soviet Premier Nikita Khrushchev, the USSR competed with Gemini by converting their Vostok spacecraft into a two- or three-man Voskhod. They succeeded in launching two crewed flights before Gemini's first flight, achieving a three-cosmonaut flight in 1964 and the first EVA in 1965.[90] After this, the program was canceled, and Gemini caught up while spacecraft designer Sergei Korolev developed the Soyuz spacecraft, their answer to Apollo.

The U.S. public's perception of the Soviet lead in the Space Race (by putting the first man into space) motivated President John F. Kennedy[91] to ask the Congress on May 25, 1961, to commit the federal government to a program to land a man on the Moon by the end of the 1960s, which effectively launched the Apollo program.[92]

Apollo was one of the most expensive American scientific programs ever. It cost more than $20 billion in 1960s dollars[93] or an estimated $236 billion in present-day US dollars.[94] (In comparison, the Manhattan Project cost roughly $30.1 billion, accounting for inflation.)[94][95] The Apollo program used the newly developed Saturn I and Saturn V rockets, which were far larger than the repurposed ICBMs of the previous Mercury and Gemini programs.[96] They were used to launch the Apollo spacecraft, consisting of the Command and Service Module (CSM) and the Lunar Module (LM). The CSM ferried astronauts from Earth to Moon orbit and back, while the Lunar Module would land them on the Moon itself.[note 1]

The planned first crew of three astronauts was killed in a fire during a 1967 preflight test for the Apollo 204 mission (later renamed Apollo 1).[97] The second crewed mission, Apollo 8, brought astronauts for the first time on a flight around the Moon in December 1968.[98] Shortly before, the Soviets had sent an uncrewed spacecraft around the Moon.[99] The next two missions (Apollo 9 and Apollo 10) practiced the rendezvous and docking maneuvers required to conduct the Moon landing.[100][101]

The Apollo 11 mission, launched in July 1969, landed the first humans on the Moon. Astronauts Neil Armstrong and Buzz Aldrin walked on the lunar surface, conducting experiments and sample collection, while Michael Collins orbited above in the CSM.[102] Six subsequent Apollo missions (12 through 17) were launched; five of them were successful, while one (Apollo 13) was aborted after an in-flight emergency nearly killed the astronauts. Throughout these seven Apollo spaceflights, twelve men walked on the Moon. These missions returned a wealth of scientific data and 381.7 kilograms (842 lb) of lunar samples. Topics covered by experiments performed included soil mechanics, meteoroids, seismology, heat flow, lunar ranging, magnetic fields, and solar wind.[103] The Moon landing marked the end of the space race; and as a gesture, Armstrong mentioned mankind when he stepped onto the Moon.[104]

On July 3, 1969, the Soviets suffered a major setback in their Moon program when the N-1 rocket exploded in a fireball at its launch site at Baikonur in Kazakhstan, destroying one of its two launch pads. Each of the first four launches of the N-1 resulted in failure before the end of first-stage flight, effectively denying the Soviet Union the capacity to deliver the systems required for a crewed lunar landing.[105]

Apollo set major milestones in human spaceflight. It stands alone in sending crewed missions beyond low Earth orbit, and landing humans on another celestial body.[106] Apollo 8 was the first crewed spacecraft to orbit another celestial body, while Apollo 17 marked the last moonwalk and the last crewed mission beyond low Earth orbit. The program spurred advances in many areas of technology peripheral to rocketry and crewed spaceflight, including avionics, telecommunications, and computers. Apollo sparked interest in many fields of engineering and left many physical facilities and machines developed for the program as landmarks. Many objects and artifacts from the program are on display at various locations throughout the world, notably at the Smithsonian's Air and Space Museums.

Skylab was the United States' first and only independently built space station.[107] Conceived in 1965 as a workshop to be constructed in space from a spent Saturn IB upper stage, the 169,950 lb (77,088 kg) station was constructed on Earth and launched on May 14, 1973, atop the first two stages of a Saturn V, into a 235-nautical-mile (435 km) orbit inclined at 50° to the equator. Damaged during launch by the loss of its thermal protection and one electricity-generating solar panel, it was repaired to functionality by its first crew. It was occupied for a total of 171 days by 3 successive crews in 1973 and 1974.[107] It included a laboratory for studying the effects of microgravity, and a solar observatory.[107] NASA planned to have the in-development Space Shuttle dock with it, and elevate Skylab to a higher safe altitude, but the Shuttle was not ready for flight before Skylab's re-entry and demise on July 11, 1979.[108]

To reduce cost, NASA modified one of the Saturn V rockets originally earmarked for a canceled Apollo mission to launch Skylab, which itself was a modified Saturn V fuel tank. Apollo spacecraft, launched on smaller Saturn IB rockets, were used for transporting astronauts to and from the station. Three crews, consisting of three men each, stayed aboard the station for periods of 28, 59, and 84 days. Skylab's habitable volume was 11,290 cubic feet (320 m³), which was 30.7 times bigger than that of the Apollo Command Module.[108]

In February 1969, President Richard Nixon appointed a space task group headed by Vice President Spiro Agnew to recommend human spaceflight projects beyond Apollo. The group responded in September with the Integrated Program Plan (IPP), intended to support space stations in Earth and lunar orbit, a lunar surface base, and a human Mars landing. These would be supported by replacing NASA's existing expendable launch systems with a reusable infrastructure including Earth orbit shuttles, space tugs, and a nuclear-powered trans-lunar and interplanetary shuttle. Despite the enthusiastic support of Agnew and NASA Administrator Thomas O. Paine, Nixon realized public enthusiasm, which translated into Congressional support, for the space program was waning as Apollo neared its climax, and vetoed most of these plans, except for the Earth orbital shuttle, and a deferred Earth space station.[109]

On May 24, 1972, US President Richard M. Nixon and Soviet Premier Alexei Kosygin signed an agreement calling for a joint crewed space mission, and declaring intent for all future international crewed spacecraft to be capable of docking with each other.[110] This authorized the Apollo-Soyuz Test Project (ASTP), involving the rendezvous and docking in Earth orbit of a surplus Apollo command and service module with a Soyuz spacecraft. The mission took place in July 1975. This was the last US human spaceflight until the first orbital flight of the Space Shuttle in April 1981.[111]

The mission included both joint and separate scientific experiments and provided useful engineering experience for future joint US-Russian space flights, such as the Shuttle-Mir program[112] and the International Space Station.

The Space Shuttle was the only vehicle in the Space Transportation System to be developed, and became the major focus of NASA in the late 1970s and the 1980s. Originally planned as a frequently launchable, fully reusable vehicle, the design was changed to use an expendable external propellant tank to reduce development cost, and four Space Shuttle orbiters were built by 1985. The first to launch, Columbia, did so on April 12, 1981, the 20th anniversary of the first human spaceflight.[113]

The Shuttle flew 135 missions and carried 355 astronauts from 16 countries, many on multiple trips. Its major components were a spaceplane orbiter with an external fuel tank and two solid-fuel launch rockets at its side. The external tank, which was bigger than the spacecraft itself, was the only major component that was not reused. The shuttle could orbit at altitudes of 185 to 643 km (115 to 400 miles)[114] and carry a maximum payload (to low orbit) of 24,400 kg (54,000 lb).[115] Missions could last from 5 to 17 days, and crews could range from 2 to 8 astronauts.[114]

On 20 missions (1983-1998) the Space Shuttle carried Spacelab, designed in cooperation with the European Space Agency (ESA). Spacelab was not designed for independent orbital flight, but remained in the Shuttle's cargo bay as the astronauts entered and left it through an airlock.[116] On June 18, 1983, Sally Ride became the first American woman in space, on board the Space Shuttle Challenger STS-7 mission.[117] Other famous missions include the launch of the Hubble Space Telescope in 1990 and its successful repair in 1993.[118]

In 1995, Russian-American interaction resumed with the Shuttle-Mir missions (1995-1998). Once more an American vehicle docked with a Russian craft, this time a full-fledged space station. This cooperation has continued with Russia and the United States as two of the biggest partners in the largest space station ever built: the International Space Station (ISS).[119] The strength of their cooperation on this project was even more evident when NASA began relying on Russian launch vehicles to service the ISS during the two-year grounding of the shuttle fleet following the 2003 Space Shuttle Columbia disaster.

The Shuttle fleet lost two orbiters and 14 astronauts in two disasters: Challenger in 1986, and Columbia in 2003.[120] While the 1986 loss was mitigated by building the Space Shuttle Endeavour from replacement parts, NASA did not build another orbiter to replace the second loss.[120] NASA's Space Shuttle program had 135 missions when the program ended with the successful landing of the Space Shuttle Atlantis at the Kennedy Space Center on July 21, 2011. The program spanned 30 years with 355 separate astronauts sent into space, many on multiple missions.[121]

While the Space Shuttle program was still suspended after the loss of Columbia, President George W. Bush announced the Vision for Space Exploration, including the retirement of the Space Shuttle after completing the International Space Station. The plan was enacted into law by the NASA Authorization Act of 2005, which directed NASA to develop and launch the Crew Exploration Vehicle (later called Orion) by 2010, return Americans to the Moon by 2020, land on Mars as feasible, repair the Hubble Space Telescope, and continue scientific investigation through robotic solar system exploration, human presence on the ISS, Earth observation, and astrophysics research. The crewed exploration goals prompted NASA's Constellation program.[122]

On December 4, 2006, NASA announced it was planning a permanent Moon base.[123] The goal was to start building the Moon base by 2020, and by 2024, have a fully functional base that would allow for crew rotations and in-situ resource utilization. However, in 2009, the Augustine Committee found the program to be on an "unsustainable trajectory."[124] In February 2010, President Barack Obama's administration proposed eliminating public funds for it.[125]

President Obama's plan was to develop American private spaceflight capabilities to get astronauts to the International Space Station, replace Russian Soyuz capsules, and use Orion capsules for ISS emergency escape purposes. During a speech at the Kennedy Space Center on April 15, 2010, Obama proposed a new heavy-lift vehicle (HLV) to replace the formerly planned Ares V.[126] In his speech, Obama called for a crewed mission to an asteroid as soon as 2025, and a crewed mission to Mars orbit by the mid-2030s.[126] The NASA Authorization Act of 2010 was passed by Congress and signed into law on October 11, 2010.[127] The act officially canceled the Constellation program.[127]

The NASA Authorization Act of 2010 required a newly designed HLV be chosen within 90 days of its passing; the launch vehicle was given the name Space Launch System. The new law also required the construction of a beyond-low-Earth-orbit spacecraft.[128] The Orion spacecraft, which was being developed as part of the Constellation program, was chosen to fulfill this role.[129] The Space Launch System is planned to launch both Orion and other necessary hardware for missions beyond low Earth orbit.[130] The SLS is to be upgraded over time with more powerful versions. The initial capability of SLS is required to be able to lift 70 t (150,000 lb) (later 95 t or 209,000 lb) into LEO. It is then planned to be upgraded to 105 t (231,000 lb) and then eventually to 130 t (290,000 lb).[129][131] The Orion capsule first flew on Exploration Flight Test 1 (EFT-1), an uncrewed test flight that was launched on December 5, 2014, atop a Delta IV Heavy rocket.[131]

NASA undertook a feasibility study in 2012 and developed the Asteroid Redirect Mission as an uncrewed mission to move a boulder-sized near-Earth asteroid (or boulder-sized chunk of a larger asteroid) into lunar orbit. The mission would demonstrate ion thruster technology and develop techniques that could be used for planetary defense against an asteroid collision, as well as a cargo transport to Mars in support of a future human mission. The Moon-orbiting boulder might then later be visited by astronauts. The Asteroid Redirect Mission was cancelled in 2017 as part of the FY2018 NASA budget, the first one under President Donald Trump.[132]

NASA has conducted many uncrewed and robotic spaceflight programs throughout its history. Uncrewed robotic programs launched the first American artificial satellites into Earth orbit for scientific and communications purposes and sent scientific probes to explore the planets of the Solar System, starting with Venus and Mars, and including "grand tours" of the outer planets. More than 1,000 uncrewed missions have been designed to explore the Earth and the Solar System.[133]

The first US uncrewed satellite was Explorer 1, which started as an ABMA/JPL project during the early part of the Space Race. It was launched in January 1958, two months after Sputnik. At the creation of NASA, the Explorer project was transferred to the agency and still continues. Its missions have focused on the Earth and the Sun, measuring magnetic fields and the solar wind, among other aspects.[134]

The Ranger missions developed technology to build and deliver robotic probes into orbit and to the vicinity of the Moon. Ranger 7 successfully returned images of the Moon in July 1964, followed by two more successful missions.[135]

NASA also played a role in the development and delivery of early communications satellite technology to orbit. Syncom 3 was the first geostationary satellite. It was an experimental geosynchronous communications satellite placed over the equator at 180 degrees longitude in the Pacific Ocean. The satellite provided live television coverage of the 1964 Olympic games in Tokyo, Japan and conducted various communications tests. Operations were turned over to the Department of Defense on January 1, 1965; Syncom 3 was to prove useful in the DoD's Vietnam communications.[136] Programs like Syncom, Telstar, and Applications Technology Satellites (ATS) demonstrated the utility of communications satellites and delivered early telephonic and video satellite transmission.[137]

Study of Mercury, Venus, and Mars has been the goal of more than ten uncrewed NASA programs. The first was Mariner in the 1960s and 1970s, which made multiple visits to Venus and Mars and one to Mercury. Probes launched under the Mariner program were also the first to make a planetary flyby (Mariner 2), the first to take pictures from another planet (Mariner 4), the first planetary orbiter (Mariner 9), and the first to make a gravity assist maneuver (Mariner 10), a technique where the spacecraft takes advantage of the gravity and velocity of planets to reach its destination.[138]
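The speed gain from a gravity assist falls out of a simple change of reference frame. As a minimal sketch (the numbers are illustrative, not actual Mariner mission data): in the planet's own frame the flyby merely redirects the spacecraft's velocity, but transforming back to the Sun's frame shows a net speed gain of up to twice the planet's orbital speed.

    # Idealized 1-D gravity assist. In the planet's frame the encounter
    # simply reverses the spacecraft's velocity; back in the Sun's frame
    # the craft comes out faster.
    v_planet = 13.0   # planet's heliocentric speed, km/s (roughly Jupiter)
    v_craft = -10.0   # spacecraft heliocentric speed, km/s (head-on approach)

    v_rel_in = v_craft - v_planet        # seen from the planet: -23 km/s
    v_rel_out = -v_rel_in                # idealized 180-degree turn: +23 km/s
    v_craft_out = v_planet + v_rel_out   # back in the Sun's frame: +36 km/s

    print(f"speed gain: {abs(v_craft_out) - abs(v_craft):.1f} km/s")  # 26.0 = 2 * v_planet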

Magellan orbited Venus for four years in the early 1990s, capturing radar images of the planet's surface.[139] MESSENGER orbited Mercury between 2011 and 2015 after a 6.5-year journey involving a complicated series of flybys of Venus and Mercury to reduce velocity sufficiently to enter Mercury orbit. MESSENGER became the first spacecraft to orbit Mercury; it used its science payload to study Mercury's surface composition, geological history, and internal magnetic field, and verified that its polar deposits are dominantly water ice.[140]

From 1966 to 1968, the Lunar Orbiter and Surveyor missions provided higher quality photographs and other measurements to pave the way for the crewed Apollo missions to the Moon.[141] Clementine spent a couple of months mapping the Moon in 1994 before moving on to other mission objectives.[142] Lunar Prospector spent 19 months from 1998 mapping the Moon's surface composition and looking for polar ice.[143]

The first successful landing on Mars was made by Viking 1 in 1976. Viking 2 followed two months later. Twenty years later the Sojourner rover was landed on Mars by Mars Pathfinder.[144]

After Mars, Jupiter was first visited by Pioneer 10 in 1973. More than 20 years later Galileo sent a probe into the planet's atmosphere and became the first spacecraft to orbit the planet.[145] Pioneer 11 became the first spacecraft to visit Saturn in 1979, with Voyager 2 making the first (and so far, only) visits to Uranus and Neptune in 1986 and 1989, respectively. The first spacecraft to leave the Solar System was Pioneer 10 in 1983. For a time, it was the most distant spacecraft, but it has since been surpassed by both Voyager 1 and Voyager 2.[146]

Pioneers 10 and 11 and both Voyager probes carry messages from the Earth to extraterrestrial life.[147][148] Communication can be difficult with deep space travel. For instance, it took about three hours for a radio signal to reach the New Horizons spacecraft when it was more than halfway to Pluto.[149] Contact with Pioneer 10 was lost in 2003. Both Voyager probes continue to operate as they explore the outer boundary between the Solar System and interstellar space.[150]

NASA continued to support in situ exploration beyond the asteroid belt, including Pioneer and Voyager traverses into the unexplored trans-Pluto region, and gas giant orbiters Galileo (1989-2003) and Cassini (1997-2017) exploring the Jovian and Saturnian systems respectively.

The missions below represent the robotic spacecraft that have been delivered and operated by NASA to study the heliosphere. The Helios A and Helios B missions were launched in the 1970s to study the Sun and were the first spacecraft to orbit inside of Mercury's orbit.[151] The Fast Auroral Snapshot Explorer (FAST) mission was launched in August 1996 becoming the second SMEX mission placed in orbit. It studied the auroral zones near each pole during its transits in a highly elliptical orbit.[152]

The International Earth-Sun Explorer-3 (ISEE-3) mission was launched in 1978 and is the first spacecraft designed to operate at the Earth-Sun L1 libration point. It studied solar-terrestrial relationships at the outermost boundaries of the Earth's magnetosphere and the structure of the solar wind. The spacecraft was subsequently maneuvered out of the halo orbit and conducted a flyby of the Giacobini-Zinner comet in 1985 as the rechristened International Cometary Explorer (ICE).[153]

Ulysses was launched in 1990 and slingshotted around Jupiter to put it in an orbit traveling over the poles of the Sun. It was designed to study the space environment above and below the poles and delivered scientific data for about 19 years.[154]

Additional spacecraft launched for studies of the heliosphere include: Cluster II, IMAGE, POLAR, Reuven Ramaty High Energy Solar Spectroscopic Imager, and the Van Allen Probes.

The Earth Sciences Division of the NASA Science Mission Directorate leads efforts to study the planet Earth. Spacecraft have been used to study Earth since the mid-1960s. Efforts included the Television Infrared Observation Satellite (TIROS) and Nimbus satellite systems, many of which carried weather research and forecasting instruments from space from 1960 into the 2020s.

The Combined Release and Radiation Effects Satellite (CRRES) was launched in 1990 on a three-year mission to investigate fields, plasmas, and energetic particles inside the Earth's magnetosphere.[155] The Upper Atmosphere Research Satellite (UARS) was launched in 1991 by STS-48 to study the Earth's atmosphere, especially the protective ozone layer.[156] TOPEX/Poseidon was launched in 1992 and was the first significant oceanographic research satellite.[157]

The Ice, Cloud, and land Elevation Satellite (ICESat) was launched in 2003, operated for seven years, and measured ice sheet mass balance, cloud and aerosol heights, as well as topography and vegetation characteristics.[158]

Over a dozen past robotic missions have focused on the study of the Earth and its environment. Some of these additional missions include Aquarius, Earth Observing-1 (EO-1), Jason-1, Ocean Surface Topography Mission/Jason-2, and Radarsat-1 missions.

The International Space Station (ISS) combines NASA's Space Station Freedom project with the Soviet/Russian Mir-2 station, the European Columbus station, and the Japanese Kibō laboratory module.[159] NASA originally planned in the 1980s to develop Freedom alone, but US budget constraints led to the merger of these projects into a single multi-national program in 1993, managed by NASA, the Russian Federal Space Agency (RKA), the Japan Aerospace Exploration Agency (JAXA), the European Space Agency (ESA), and the Canadian Space Agency (CSA).[160][161] The station consists of pressurized modules, external trusses, solar arrays and other components, which were manufactured in various factories around the world and have been launched by Russian Proton and Soyuz rockets and the US Space Shuttles.[159] The on-orbit assembly began in 1998, the completion of the US Orbital Segment occurred in 2009, and the completion of the Russian Orbital Segment occurred in 2010, though there is some debate over whether new modules should be added to the segment. The ownership and use of the space station is established in intergovernmental treaties and agreements[162] which divide the station into two areas and allow Russia to retain full ownership of the Russian Orbital Segment (with the exception of Zarya),[163][164] with the US Orbital Segment allocated between the other international partners.[162]

Long-duration missions to the ISS are referred to as ISS Expeditions. Expedition crew members typically spend approximately six months on the ISS.[165] The initial expedition crew size was three, temporarily decreased to two following the Columbia disaster. Since May 2009, expedition crew size has been six crew members.[166] Crew size is expected to be increased to seven, the number the ISS was designed for, once the Commercial Crew Program becomes operational.[167] The ISS has been continuously occupied for the past 22 years and 73 days, having exceeded the previous record held by Mir; and has been visited by astronauts and cosmonauts from 15 different nations.[168][169]

The station can be seen from the Earth with the naked eye and, as of 2023, is the largest artificial satellite in Earth orbit, with a mass and volume greater than that of any previous space station.[170] The Russian Soyuz and American Dragon spacecraft are used to send astronauts to and from the ISS. Several uncrewed cargo spacecraft provide service to the ISS: the Russian Progress spacecraft, which has done so since 2000; the European Automated Transfer Vehicle (ATV) since 2008; the Japanese H-II Transfer Vehicle (HTV) since 2009; the (uncrewed) Dragon since 2012; and the American Cygnus spacecraft since 2013.[171][172] The Space Shuttle, before its retirement, was also used for cargo transfer and would often switch out expedition crew members, although it did not have the capability to remain docked for the duration of their stay. Between the retirement of the Shuttle in 2011 and the commencement of crewed Dragon flights in 2020, American astronauts exclusively used the Soyuz for crew transport to and from the ISS.[173] The highest number of people occupying the ISS has been thirteen; this occurred three times, during the late Shuttle ISS assembly missions.[174]

The ISS program is expected to continue to 2030,[175] after which the space station will be retired and destroyed in a controlled de-orbit.[176]

Commercial Resupply Services missions approaching International Space Station

Commercial Resupply Services (CRS) are a contract solution to deliver cargo and supplies to the International Space Station (ISS) on a commercial basis.[177] NASA signed its first CRS contracts in 2008, awarding $1.6 billion to SpaceX for twelve cargo Dragon flights and $1.9 billion to Orbital Sciences[note 2] for eight Cygnus flights, covering deliveries to 2016. Both companies evolved or created their launch vehicle products to support the solution (SpaceX with the Falcon 9 and Orbital with the Antares).

SpaceX flew its first operational resupply mission (SpaceX CRS-1) in 2012.[178] Orbital Sciences followed in 2014 (Cygnus CRS Orb-1).[179] In 2015, NASA extended CRS-1 to twenty flights for SpaceX and twelve flights for Orbital ATK.[note 2][180][181]

A second phase of contracts (known as CRS-2) was solicited in 2014; contracts were awarded in January 2016 to Orbital ATK[note 2] Cygnus, Sierra Nevada Corporation Dream Chaser, and SpaceX Dragon 2, for cargo transport flights beginning in 2019 and expected to last through 2024. In March 2022, NASA awarded an additional six CRS-2 missions each to both SpaceX and Northrop Grumman (formerly Orbital).[182]

Northrop Grumman successfully delivered Cygnus NG-17 to the ISS in February 2022.[183] In July 2022, SpaceX launched its 25th CRS flight (SpaceX CRS-25) and successfully delivered its cargo to the ISS.[184] In late 2022, Sierra Nevada continued to assemble their Dream Chaser CRS solution; current estimates put its first launch in early 2023.[185]

The Commercial Crew Program (CCP) provides commercially operated crew transportation service to and from the International Space Station (ISS) under contract to NASA, conducting crew rotations between the expeditions of the International Space Station program. American space manufacturer SpaceX began providing service in 2020, using the Crew Dragon spacecraft, and NASA plans to add Boeing when its Boeing Starliner spacecraft becomes operational sometime after 2022.[186] NASA has contracted for six operational missions from Boeing and fourteen from SpaceX, ensuring sufficient support for the ISS through 2030.[187]

The spacecraft are owned and operated by the vendor, and crew transportation is provided to NASA as a commercial service. Each mission sends up to four astronauts to the ISS, with an option for a fifth passenger available. Operational flights occur approximately once every six months for missions that last for approximately six months. A spacecraft remains docked to the ISS during its mission, and missions usually overlap by at least a few days. Between the retirement of the Space Shuttle in 2011 and the first operational CCP mission in 2020, NASA relied on the Soyuz program to transport its astronauts to the ISS.

A Crew Dragon spacecraft is launched to space atop a Falcon 9 Block 5 launch vehicle, and the capsule returns to Earth via splashdown in the ocean near Florida. Boeing Starliner operational flights will commence after its final test flight, which was launched atop an Atlas V N22 launch vehicle. The program's first operational mission, SpaceX Crew-1, launched on 16 November 2020.[188] Instead of a splashdown, a Starliner capsule returns on land with airbags at one of four designated sites in the western United States.[189]

Since 2017, NASA's crewed spaceflight program has been the Artemis program, which involves the help of US commercial spaceflight companies and international partners such as ESA, JAXA, and CSA.[190] The goal of this program is to land "the first woman and the next man" on the lunar south pole region by 2024. Artemis would be the first step towards the long-term goal of establishing a sustainable presence on the Moon, laying the foundation for companies to build a lunar economy, and eventually sending humans to Mars.

The Orion Crew Exploration Vehicle was held over from the canceled Constellation program for Artemis. Artemis 1 was the uncrewed initial launch of Space Launch System (SLS) that would also send an Orion spacecraft on a Distant Retrograde Orbit.[191]

NASA's next major space initiative is to be the construction of the Lunar Gateway, a small space station in lunar orbit.[192] This space station will be designed primarily for non-continuous human habitation. The first tentative step of returning to crewed lunar missions will be Artemis 2, which is to include the Orion crew module, propelled by the SLS, and is to launch in 2024.[190] This 10-day mission is planned to briefly place a crew of four on a lunar flyby.[131] The construction of the Gateway would begin with the proposed Artemis 3, which is planned to deliver a crew of four to lunar orbit along with the first modules of the Gateway. This mission would last for up to 30 days. NASA plans to build full-scale deep space habitats such as the Lunar Gateway and the Nautilus-X as part of its Next Space Technologies for Exploration Partnerships (NextSTEP) program.[193] In 2017, the congressional NASA Transition Authorization Act of 2017 directed NASA to get humans to Mars orbit (or to the Martian surface) by the 2030s.[194][195]

In support of the Artemis missions, NASA has been funding private companies to land robotic probes on the lunar surface in a program known as the Commercial Lunar Payload Services. As of March 2022, NASA has awarded contracts for robotic lunar probes to companies such as Intuitive Machines, Firefly Space Systems, and Astrobotic.[196]

On April 16, 2021, NASA announced it had selected the SpaceX Lunar Starship as its Human Landing System. The agency's Space Launch System rocket will launch four astronauts aboard the Orion spacecraft for their multi-day journey to lunar orbit, where they will transfer to SpaceX's Starship for the final leg of their journey to the surface of the Moon.[197]

In November 2021, it was announced that the goal of landing astronauts on the Moon by 2024 had slipped to no earlier than 2025 due to numerous factors. Artemis 1 launched on November 16, 2022 and returned to Earth safely on December 11, 2022. As of June 2022, NASA plans to launch Artemis 2 in May 2024 and Artemis 3 sometime in 2025.[198][199] Additional Artemis missions, Artemis 4 and Artemis 5, are planned to launch after 2025.[200]

The Commercial Low Earth Orbit Destinations program is an initiative by NASA to support work on commercial space stations that the agency hopes to have in place by the end of the current decade to replace the International Space Station. The three selected companies are: Blue Origin (et al.) with their Orbital Reef station concept, Nanoracks (et al.) with their Starlab Space Station concept, and Northrop Grumman with a station concept based on the HALO module for the Gateway station.[201]


NASA executes a mission development framework to plan, select, develop, and operate robotic missions. This framework defines cost, schedule and technical risk parameters to enable competitive selection of missions involving mission candidates that have been developed by principal investigators and their teams from across NASA, the broader U.S. Government research and development stakeholders, and industry. The mission development construct is defined by four umbrella programs.

The Explorer program derives its origin from the earliest days of the U.S. space program. In its current form, the program consists of three classes of missions: Small Explorers (SMEX), Medium Explorers (MIDEX), and University-Class Explorers (UNEX). The NASA Explorer program office provides frequent flight opportunities for moderate-cost, innovative solutions from the heliophysics and astrophysics science areas. Small Explorer missions are required to limit cost to NASA to below $150M (2022 dollars); Medium-class Explorer missions have typically involved NASA cost caps of $350M. The Explorer program office is based at NASA Goddard Space Flight Center.[202]

The NASA Discovery program develops and delivers robotic spacecraft solutions in the planetary science domain. Discovery enables scientists and engineers to assemble a team to deliver a solution against a defined set of objectives and competitively bid that solution against other candidate programs. Cost caps vary but recent mission selection processes were accomplished using a $500M cost cap to NASA. The Planetary Mission Program Office is based at the NASA Marshall Space Flight Center and manages both the Discovery and New Frontiers missions. The office is part of the Science Mission Directorate.[203]

NASA Administrator Bill Nelson announced on June 2, 2021, that the DAVINCI+ and VERITAS missions were selected to launch to Venus in the late 2020s, having beaten out competing proposals for missions to Jupiter's volcanic moon Io and Neptune's large moon Triton that were also selected as Discovery program finalists in early 2020. Each mission has an estimated cost of $500 million, with launches expected between 2028 and 2030. Launch contracts will be awarded later in each mission's development.[204]

The New Frontiers program focuses on specific Solar System exploration goals identified as top priorities by the planetary science community. Primary objectives include Solar System exploration employing medium-class spacecraft missions to conduct high-science-return investigations. New Frontiers builds on the development approach employed by the Discovery program but provides for higher cost caps and longer schedule durations than are available with Discovery. Cost caps vary by opportunity; recent missions have been awarded based on a defined cap of $1 billion. The higher cost cap and projected longer mission durations result in a lower frequency of new opportunities for the program, typically one every few years. OSIRIS-REx and New Horizons are examples of New Frontiers missions.[205]

Read more:

NASA - Wikipedia

CORRECTION – OMNIQ’s Artificial Intelligence-Based Quest Shield Solution Selected by the Talmudical Academy of Baltimore – GlobeNewswire

SALT LAKE CITY, July 10, 2020 (GLOBE NEWSWIRE) -- In a release issued under the same headline on June 1, 2020 by OMNIQ, Inc. (OTCQB:OMQS), please be advised that the second paragraph as originally issued contained certain inaccuracies, not related to financial results or projections, which have been corrected below.

OMNIQ, Inc. (OTCQB:OMQS) ("OMNIQ" or "the Company") announces that it has been selected to deploy its Quest Shield campus safety solution at the Talmudical Academy of Baltimore in Maryland.

The Quest Shield security package uses the Company's AI-based SeeCube technology platform, a ground-breaking cloud-based/on-premise security solution for Safe Campus/School applications. The platform provides unique AI-based computer vision technology and software to gather real-time vehicle data, enabling the Quest Shield to identify and record images of approaching vehicles, including color, make and license plate information. The license plate is then compared against the school's internal watch list to provide immediate notifications of unauthorized vehicles to security and administrative personnel. In addition to providing a vehicle identification and recognition solution to the Talmudical Academy, the Quest Shield comprehensive security platform addresses other security concerns, including controlling access to the buildings and visitor management, as well as the ability to pre-register guests for school activities.

Additionally, as part of COVID-19 mitigation, parents in Maryland will be asked to take and record their child's temperature each day before they leave for school. Quest Shield will automate this process by providing parents an online form where they may record the temperature. All Talmudical Academy students will be equipped with an ID tag bearing a QR code that can be read with a barcode scanner. As students enter campus, faculty equipped with Quest handheld scanners will read the barcode to confirm that the student's temperature has been taken that day; if the form has not been filled in, faculty will check temperatures before allowing students inside.

Shai Lustgarten, CEO of OMNIQ, commented: "It is our privilege to work with the Talmudical Academy to provide our solution to enhance safety at their Baltimore campus. Quest Shield is an extension of the homeland security solution we designed for the Israeli authorities to fight terrorism and save lives."

Rabbi Yaacov Cohen, Executive Director, Talmudical Academy of Baltimore, commented: "Concern about campus safety and the safety of our students and faculty drove the Talmudical Academy to seek ways to implement new strategies aimed at preventing crimes and violence that may be committed on the school grounds. The unfortunate reality today is that situations we could never imagine just a few years ago are happening now with increasing regularity. Most security systems that are currently being deployed on other campuses are good at recording events subsequent to crimes being committed. With Quest Shield, we have an opportunity to alert personnel and law enforcement ahead of any sign of violence."

Mr. Lustgarten added: "The Quest Shield has been tailored to provide a proactive solution to improve security and safety in schools and on campuses, as well as community centers and places of worship in the U.S. that have unfortunately become a target for ruthless attacks. We're pleased to work with a forward-thinking organization like the Talmudical Academy; it is gratifying that the Academy selected the Quest Shield platform to strengthen its security precautions."

Additionally, many schools and communities are expressing concern around children returning to school in the fall due to COVID-19. With that in mind, Talmudical Academy will also employ the Quest Shield to provide an automated screening process to confirm that students have had their temperatures checked, per Maryland regulation, upon their arrival on campus and prior to them entering the school facilities.

Mr. Lustgarten concluded: "We are proud to be able to improve student safety in the U.S., as well as in other vulnerable communities. Quest Shield has previously been implemented by a pre-K through Grade 12 school in Florida and at a Jewish Community Center in Salt Lake City. We look forward to working closely with the Academy and other institutions to promote the health and safety of students, faculty and support personnel."

About OMNIQ Corp.

OMNIQ Corp. (OMQS) provides computerized and machine vision image processing solutions that use patented and proprietary AI technology to deliver data collection, real-time surveillance and monitoring for supply chain management, homeland security, public safety, traffic & parking management and access control applications. The technology and services provided by the Company help clients move people, assets and data safely and securely through airports, warehouses, schools, national borders, and many other applications and environments.

OMNIQ's customers include government agencies and leading Fortune 500 companies from several sectors, including manufacturing, retail, distribution, food and beverage, transportation and logistics, healthcare, and oil, gas, and chemicals. Since 2014, annual revenues have grown to more than $50 million from clients in the USA and abroad.

The Company currently addresses several billion-dollar markets, including the Global Safe City market, forecast to grow to $29 billion by 2022, and the Ticketless Safe Parking market, forecast to grow to $5.2 billion by 2023.

Information about Forward-Looking Statements

Safe Harbor Statement under the Private Securities Litigation Reform Act of 1995. Statements in this press release relating to plans, strategies, economic performance and trends, projections of results of specific activities or investments, and other statements that are not descriptions of historical facts may be "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995, Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934.

This release contains forward-looking statements that include information relating to future events and future financial and operating performance. The words "anticipate," "may," "would," "will," "expect," "estimate," "can," "believe," "potential" and similar expressions and variations thereof are intended to identify forward-looking statements. Forward-looking statements should not be read as a guarantee of future performance or results, and will not necessarily be accurate indications of the times at, or by, which that performance or those results will be achieved. Forward-looking statements are based on information available at the time they are made and/or management's good faith belief as of that time with respect to future events, and are subject to risks and uncertainties that could cause actual performance or results to differ materially from those expressed in or suggested by the forward-looking statements. Important factors that could cause these differences include, but are not limited to: fluctuations in demand for the Company's products particularly during the current health crisis, the introduction of new products, the Company's ability to maintain customer and strategic business relationships, the impact of competitive products and pricing, growth in targeted markets, the adequacy of the Company's liquidity and financial strength to support its growth, the Company's ability to manage credit and debt structures from vendors, debt holders and secured lenders, the Company's ability to successfully integrate its acquisitions, and other information that may be detailed from time to time in OMNIQ Corp.'s filings with the United States Securities and Exchange Commission. Examples of such forward-looking statements in this release include, among others, statements regarding revenue growth, driving sales, operational and financial initiatives, cost reduction and profitability, and simplification of operations. For a more detailed description of the risk factors and uncertainties affecting OMNIQ Corp., please refer to the Company's recent Securities and Exchange Commission filings, which are available at http://www.sec.gov. OMNIQ Corp. undertakes no obligation to publicly update or revise any forward-looking statements, whether as a result of new information, future events or otherwise, unless otherwise required by law.

Investor Contact: John Nesbett / Jen Belodeau, IMS Investor Relations, 203.972.9200, jnesbett@institutionalms.com

See more here:

CORRECTION - OMNIQ's Artificial Intelligence-Based Quest Shield Solution Selected by the Talmudical Academy of Baltimore - GlobeNewswire

AI Healthcare Stocks Will Mint The World's First Trillionaire – Yahoo Finance

You won't read about these stocks in the mainstream media. The average investor doesn't even know they exist. But don't let that fool you: a small, hidden group of companies is figuring out how to merge artificial intelligence with medical technology.

Some folks are even scared of it. They think AI could eventually try to wipe out the human race. They imagine human-like robots like Ava from the film Ex Machina, or Skynet, the computer program that tried to wipe out the human race in the Terminator movies.

AI in real life isn't what you see in the movies. It's not about robots or computers that outthink and enslave humans. It's about computers with mind-boggling processing power that solve problems faster than teams of PhD scientists ever could.


Other investors ignore AI because they assume its impact won't be felt for decades. But AI is already improving your life in ways you probably don't even realize. AI is why Netflix (NFLX) is so good at recommending movies, and why Spotify (SPOT) is so good at recommending music that suits your tastes. It's also how Amazon's (AMZN) Alexa can tell you everything from the weather to who America's fourth president was in just seconds. AI is also how Tesla's (TSLA) Model X can navigate traffic on the highway on its own, without you laying a finger on the steering wheel!

These breakthroughs are all thanks to AI. But they're just a taste of what's to come.

According to tech entrepreneur and Dallas Mavericks owner Mark Cuban, the world's first trillionaire won't be a hedge fund manager, oil baron, or social media tycoon. It will be someone who masters AI.

A trillion dollars is an almost unfathomable amount of money. To put this in perspective, Amazon founder Jeff Bezos, the world's richest man, is worth around $117 billion. That's more than the annual economic output of Ecuador. And yet, a trillionaire would be worth more than eight times as much as Bezos!

But here's the thing. You don't have to be a genius inventor or entrepreneur to strike gold in AI. Just like in prior megatrends, everyday investors stand to make millions off of AI.

According to ARK Invest, AI could add $30 trillion to the global equity markets over the next two decades. That's almost as much as the entire US stock market is worth today!

And the best way to take advantage isn't with traditional AI companies. As I mentioned above, it's with AI healthcare stocks. The fusion of AI and healthcare is one of the most lucrative opportunities I've come across in my entire career.

Researchers at MIT have already used AI to identify a powerful new antibiotic compound. Scientists in China were able to recreate and copy the coronavirus genome sequence in just one month! Chinese tech giant Alibaba (BABA) recently created a new AI algorithm that can diagnose the coronavirus in as little as 20 seconds. That's 45 times faster than humans can, and it's reportedly 96% accurate.

Insilico Medicine used AI to successfully identify thousands of molecules for potential medications in just four days. Additionally, the Food and Drug Administration recently approved the use of an AI-driven diagnostic for COVID-19 developed by AI radiology company Behold.ai. The tool analyzes lung x-rays and provides radiologists with a tentative diagnosis as soon as the image is captured, reducing time and expense.

In short, AI will likely be the reason we never experience an outbreak like the coronavirus again. But that's certainly not the only way AI is revolutionizing healthcare.


AI is also being used to identify drug candidates that could be repurposed for different uses. It can also help medical professionals parse through data faster than ever before. I cannot overstate the importance of this: every year, 1.2 billion unstructured clinical documents are created, containing a staggering amount of data. And that's only going to increase. The amount of medical data is poised to double every couple of months! It's nearly impossible to search and make sense of this data without the help of AI.

Genomics companies will play a major role in this revolution, and AI will play a massive role in it, too.

You see, analyzing genomic sequences takes time and a ton of computing power. AI rapidly accelerates this process, greatly reducing the time it takes to develop valuable drugs. Not only that, it drives down drug development costs and increases the success rate of trials.

Money is pouring into AI companies at a breathtaking rate. According to CB Insights, $4 billion was invested in private healthcare AI startups last year, across 367 deals. That was the most money of any sector!

It's also a huge spike from 2018, when $2.7 billion was invested across 264 deals. It's easy to see why venture capitalists (VCs) are betting so big on healthcare AI. According to Grand View Research, the market is growing at nearly 42% per year! By 2025, it's projected to be a $31 billion industry. When an industry grows this fast, fortunes stand to be made.
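Those two figures are consistent with simple compound growth. A quick sanity check (the base-year market size is an assumption for illustration; the article cites only the roughly 42% growth rate and the 2025 projection):

    # Compound growth: value_n = base * (1 + rate) ** years
    base_2019 = 4.0e9   # assumed 2019 market size, dollars (not from the article)
    cagr = 0.42         # cited annual growth rate
    years = 6           # 2019 -> 2025
    projected = base_2019 * (1 + cagr) ** years
    print(f"${projected / 1e9:.1f}B")   # ~32.8B, in the ballpark of the cited $31 billion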

You can get in on the ground floor of this trend by buying the right AI healthcare stocks. I'm not talking about Microsoft (MSFT), Amazon (AMZN), or any other blue-chip tech company using AI for its healthcare initiatives. These companies are already behemoths. They don't offer explosive upside.

So, I wouldn't focus on the usual suspects. Instead, pay attention to the smaller AI healthcare stocks, many of which have gone public recently. These are still unknown to most of the investing world. And they offer the best chance to multiply your money in the coming months.

The Great Disruptors: 3 Breakthrough Stocks Set to Double Your Money

Get our latest report, where we reveal our three favorite stocks that will hand you 100% gains as they disrupt whole industries. Get your free copy here.

Video: Top 5 Stocks Among Hedge Funds

At Insider Monkey we leave no stone unturned when looking for the next great investment idea. For example, 2020's unprecedented market conditions provide us with the highest number of trading opportunities in a decade. So we are checking out stocks recommended/scorned by legendary Bill Miller. We interview hedge fund managers and ask them about their best ideas. If you want to find out the best healthcare stock to buy right now, you can watch our latest hedge fund manager interview here. We read hedge fund investor letters and listen to stock pitches at hedge fund conferences. Our best call in 2020 was shorting the market when the S&P 500 was trading at 3150, after realizing the coronavirus pandemic's significance before most investors. You can subscribe to our free e-newsletter to receive our stories in your inbox.


Article by Justin Spittler, Mauldin Economics


More:

AI Healthcare Stocks Will Mint The World's First Trillionaire - Yahoo Finance

Even the Best AI Models Are No Match for the Coronavirus – WIRED

The stock market appears strangely indifferent to Covid-19 these days, but that wasn't true in March, as the scale and breadth of the crisis hit home. By one measure, it was the most volatile month in stock market history; on March 16, the Dow Jones average fell almost 13 percent, its biggest one-day decline since 1987.

To some, the vertigo-inducing episode also exposed a weakness of quantitative (or quant) trading firms, which rely on mathematical models, including artificial intelligence, to make trading decisions.

Some prominent quant firms fared particularly badly in March. By mid-month, some Bridgewater Associates funds had fallen 21 percent for the year to that point, according to a statement posted by the company's co-chairman, Ray Dalio. Vallance, a quant fund run by DE Shaw, reportedly lost 9 percent through March 24. Renaissance Technologies, another prominent quant firm, told investors that its algorithms misfired in response to the month's market volatility, according to press accounts. Renaissance did not respond to a request for comment. A spokesman for DE Shaw could not confirm the reported figure.

The turbulence may reflect a limit of modern-day AI, which is built around finding and exploiting subtle patterns in large amounts of data. Just as the algorithms that grocers use to stock shelves were flummoxed by consumers' sudden obsession with hand sanitizer and toilet paper, those that help hedge funds wring profit from the market were confused by the sudden volatility of panicked investors.

In finance, as in all things, the best AI algorithm is only as good as the data it's fed.

Andrew Lo, a professor at MIT and the founder and chairman emeritus of AlphaSimplex, a quantitative hedge fund based in Cambridge, Massachusetts, says quantitative trading strategies have a simple weakness. "By definition, a quantitative trading strategy identifies patterns in the data," he says.

Lo notes that March bears similarities to a meltdown among quantitative firms in 2007, in the early days of the financial crisis. In a paper published shortly after that mini-crash, Lo concluded that the synchronized losses among hedge funds betrayed a systemic weakness in the market. "What we saw in March of 2020 is not unlike what happened in 2007, except it was faster, it was deeper, and it was much more widespread," Lo says.


Zura Kakushadze, president of Quantigic Solutions, describes the March episode as a "quant bust" in an analysis of the events posted online in April.

Kakushadze's paper looks at one form of statistical arbitrage, a common method of mining market data for patterns that are exploited by quant funds through many frequent trades. He points out that even quant funds that employed a dollar-neutral strategy, meaning they bet equally on stocks rising and falling, did poorly in the rout.
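For readers unfamiliar with the term, here is a minimal sketch of what dollar-neutral weighting looks like (the signal values are invented for illustration; real funds use far richer models than this):

    import numpy as np

    # Hypothetical mispricing signal per stock (e.g., negative z-score of
    # recent return: positive = looks cheap, negative = looks rich).
    signal = np.array([0.8, 0.3, -0.2, -0.9])

    weights = signal - signal.mean()      # demean so long and short dollars cancel
    weights /= np.abs(weights).sum()      # normalize gross exposure to 1.0

    print(weights)                                # long the first two, short the last two
    print(f"net exposure: {weights.sum():+.2f}")  # ~0.00: dollar-neutral

The point Kakushadze makes is that even this balanced construction did not protect funds in March: when correlations spike, the long and short legs stop offsetting each other the way the historical data suggested they would.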

In an interview, Kakushadze says the bust shows AI is no panacea during extreme market volatility. "I don't care whether you're using AI, ML, or anything else," he says. "You're gonna break down no matter what."

In fact, Kakushadze suggests that quant funds that use overly complex and opaque AI models may have suffered worse than others. Deep learning, a form of AI that has taken the tech world by storm in recent years, for instance, involves feeding data into neural networks that are difficult to audit. "Machine learning, and especially deep learning, can have a large number of often obscure (uninterpretable) parameters," he writes.
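To get a sense of the scale he is describing, a back-of-the-envelope count for a deliberately tiny fully connected network (the layer sizes are chosen purely for illustration; production models are vastly larger):

    # Each dense layer has (inputs * outputs) weights plus one bias per output.
    layer_sizes = [64, 128, 64, 1]
    params = sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))
    print(params)  # 16,641 parameters, none of them individually interpretable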

Ernie Chan, managing member of QTS Capital Management, and the author of several books on machine trading, agrees that AI is no match for a rare event like the coronavirus.

"It's easy to train a system to recognize cats in YouTube videos because there are millions of them," Chan says. In contrast, only a few such large swings in the market have occurred before. "You can count [these huge drops] on one hand. So it's not possible to use machine learning to learn from those signals."

Still, some quant funds did a lot better than others during March's volatility. The Medallion Fund operated by Renaissance Technologies, which is restricted to employees' money, has reportedly seen 24 percent gains for the year to date, including a 9 percent lift in March.

View original post here:

Even the Best AI Models Are No Match for the Coronavirus - WIRED

Facebook is using AI to identify suicidal thoughts — but it’s not … – Fox News

For many of its nearly 2 billion users, Facebook is the primary channel of communication, a place where they can share their thoughts, post pictures and discuss every imaginable topic of interest.

Including suicide.

Six years ago, Facebook posted a page offering advice on how to help people who post suicidal thoughts on the social network. But in the year since it made its live-streaming feature, Facebook Live, available to all users, Facebook has seen some people use its technology to let the world watch them kill themselves.

TOO MUCH SOCIAL MEDIA USE LINKED TO FEELINGS OF ISOLATION

After at least three users committed suicide on Facebook Live late last year, the company's chairman and CEO, Mark Zuckerberg, addressed the issue in the official company manifesto he posted in February:

"To prevent harm, we can build social infrastructure to help our community identify problems before they happen. When someone is thinking of suicide or hurting themselves, we've built infrastructure to give their friends and community tools that could save their life.

There are billions of posts, comments and messages across our services each day, and since it's impossible to review all of them, we review content once it is reported to us. There have been terribly tragic events like suicides, some live streamed that perhaps could have been prevented if someone had realized what was happening and reported them sooner. These stories show we must find a way to do more."

Now, in its effort to do more, the company is using artificial intelligence and pattern recognition to identify suicidal thoughts in posts and live streams and to flag those posts for a team that can follow up, typically via Facebook Messenger.

FACEBOOK REPORTS JOURNALISTS TO THE COPS FOR REPORTING CHILD PORN TO FACEBOOK

"We're testing pattern recognition to identify posts as very likely to include thoughts of suicide," product manager Vanessa Callison-Burch, researcher Jennifer Guadagno and head of global safety Antigone Davis wrote in a blog post.

"Our Community Operations team will review these posts and, if appropriate, provide resources to the person who posted the content, even if someone on Facebook has not reported it yet."

Using artificial intelligence and pattern recognition, Facebook will monitor millions of posts to identify common behaviors among potential suicides, something a human intervention expert could never do.
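Facebook has not published the details of its classifier, so purely as an illustrative sketch, a text-classification pipeline of this general kind might look like the following (the training phrases, labels, and scoring are all invented for the example):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny hypothetical training set: 1 = route to a review team, 0 = ignore.
    posts = ["I can't go on anymore", "great game last night",
             "nobody would miss me", "excited for the weekend"]
    labels = [1, 0, 1, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(posts, labels)

    # Posts scoring above a chosen threshold would go to human reviewers.
    score = model.predict_proba(["I just want it all to end"])[0, 1]
    print(f"review priority: {score:.2f}")

The key point, as the article describes, is that any such model only flags and prioritizes posts; trained people on the Community Operations team decide what happens next.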

FACEBOOK ADDS SUICIDE-PREVENTION TOOLS FOR LIVE VIDEO

But it still doesn't go far enough, some experts say.

Cheryl Karp Eskin, program director at Teen Line, said using artificial intelligence (AI) to identify patterns holds great promise in detecting expressions of suicidal thoughts, but it won't necessarily decrease the number of suicides.

There has been very little progress in preventing suicides in the last 50 years. Suicide is the second leading cause of death among 15- to 29-year-olds, and the rate in that age group continues to rise.

Eskin expressed concerns that the technology might wrongly flag posts, or that users might hide their feelings if they knew a machine learning algorithm was watching them.

A TECHNICAL GLITCH LEFT SOME FACEBOOK USERS LOCKED OUT OF THEIR ACCOUNTS

"AI is not a substitute for human interaction, as there are many nuances of speech and expression that a machine may not understand," she said. "There are people who are dark and deep, but not suicidal. I also worry that people will shut down if they are identified incorrectly and not share some of their feelings in the future."

Joel Selanikio, MD, an assistant professor at Georgetown University who started the AI-powered company Magpi, said Facebook has a large data set of users, which helps AI parse language constantly and enables it to work more effectively.

But even if AI helps Facebook identify suicidal thoughts, that doesn't mean it can help determine the best approach for prevention.

FIFTH-GRADER HITS POLICE FACEBOOK SITE FOR EMERGENCY HOMEWORK HELP

"Right now," Selanikio said, "my understanding is that it just tells the suicidal person to seek help. I can imagine other situations, for example in the case of a minor, where the system notifies the parents. Or in the case of someone under psychiatric care, this might alert the clinician."

Added Wendy Whitsett, a licensed counselor: "I would like to learn more about the plan for follow-up support, after the crisis had ended, and helping the user obtain services and various levels of support utilizing professional and peer support, as well as support from friends, neighbors, pastors, and others.

"I am also interested to know if the algorithms are able to detect significant life events that would indicate increased risk factors and offer assistance with early intervention."

Technology has moved from offering assistance to people who view others' suicidal posts to using artificial intelligence and pattern recognition to track and flag the posts automatically. But that, the experts say, is just the beginning. Facebook still has a long way to go.

Next, they hope, Facebook will be able to use AI to predict behavior and intervene in real-time to help those in need.

Read the original post:

Facebook is using AI to identify suicidal thoughts -- but it's not ... - Fox News

Instagram Turns to AI to Stop Cyberbullying on Its Platform – Government Technology

The artificial intelligence tool works by identifying words and phrases that have been reported as offensive in the past. It then allows the author to rework their comment before posting it.

(TNS) Photo-sharing social media app Instagram has a new feature that uses artificial intelligence against cyberbullying and offensive comments.

The new feature began rolling out to the app's billion-plus users on Monday.

The AI operates from a list of words and phrases that have been reported as offensive in the past.

If the AI detects offensive language on a comment, it will send a prompt to the writer and give them a chance to edit or delete their comment before it is posted.

The app hopes to encourage users to pause and reconsider their words before posting.
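In spirit, the flow resembles the following sketch (the phrase list and prompt wording are invented here, and Instagram's real system uses machine-learned similarity to previously reported comments rather than literal matching):

    # Hypothetical list seeded from comments users have reported as offensive.
    FLAGGED_PHRASES = {"you are worthless", "nobody likes you"}

    def check_comment(text: str) -> str:
        lowered = text.lower()
        if any(phrase in lowered for phrase in FLAGGED_PHRASES):
            # Prompt the author before the comment goes live.
            return "This comment may be offensive. Edit or delete it before posting?"
        return "posted"

    print(check_comment("Nobody likes you and your photos"))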

In order to create the new system, Instagram partnered with suicide prevention programs.

Best-selling author and Smith College educator Rachel Simmons told Good Morning America that it was a great first step.

"We want to see social media platforms like Instagram stopping bullies before they start," she said.

The new tool is the latest arrow in Instagram's anti-bullying quiver. In October, the app launched Restrict, a feature that allows users to easily shadow-ban other users who may post bullying or offensive comments.

© 2019 New York Daily News. Distributed by Tribune Content Agency, LLC.

Go here to see the original:

Instagram Turns to AI to Stop Cyberbullying on Its Platform - Government Technology

Where it Counts, U.S. Leads in Artificial Intelligence – Department of Defense

When it comes to advancements in artificial intelligence technology, China does have a lead in some places, like spying on its own people and using facial recognition technology to identify political dissenters. But those are areas where the U.S. simply isn't pointing its investments in artificial intelligence, said the director of the Joint Artificial Intelligence Center. Where it counts, the U.S. leads, he said.

"While it is true that the United States faces formidable technological competitors and challenging strategic environments, the reality is that the United States continues to lead in AI and its most important military applications," said Nand Mulchandani, during a briefing at the Pentagon.

The Joint Artificial Intelligence Center, which stood up in 2018, serves as the official focal point of the department's AI strategy.

China leads in some places, Mulchandani said. "China's military and police authorities undeniably have the world's most advanced capabilities, such as unregulated facial recognition for universal surveillance and control of their domestic population, trained on Chinese video gathered from their systems, and Chinese language text analysis for internet and media censorship."

The U.S. is capable of doing similar things, he said, but doesn't. It's against the law, and it's not in line with American values.

"Our constitution and privacy laws protect the rights of U.S. citizens, and how their data is collected and used," he said. "Therefore, we simply don't invest in building such universal surveillance and censorship systems."

The department does invest in systems that enhance warfighter capability and help the military protect and serve the United States, including during the COVID-19 pandemic.

The Project Salus effort, for instance, which began in March of this year, puts artificial intelligence to work helping to predict shortages for things like water, medicine and supplies used in the COVID fight, said Mulchandani.

"This product was developed in direct work with [U.S. Northern Command] and the National Guard," he said. "They have obviously a very unique role to play in ensuring that resource shortages ... are harmonized across an area that's dealing with the disaster."

Mulchandani said what the Guard didn't have was predictive analytics on where such shortages might occur, or real-time analytics for supply and demand. Project Salus, named for the Roman goddess of safety and well-being, fills that role.

"We [now have] roughly about 40 to 50 different data streams coming into project Salus at the data platform layer," he said. "We have another 40 to 45 different AI models that are all running on top of the platform that allow for ... the Northcom operations team ... to actually get predictive analytics on where shortages and things will occur."

As an AI-enabled tool, he said, Project Salus can be used to predict traffic bottlenecks, hotel vacancies and the best military bases to stockpile food during the fallout from a damaging weather event.

As the department pursues joint all-domain command and control, or JADC2, the JAIC is working to build in the needed AI capabilities, Mulchandani said.

"JADC2 is ... a collection of platforms that get stitched together and woven together[ effectively into] a platform," Mulchandani said. "The JAIC is spending a lot of time and resources focused on building the AI components on top of JADC2. So if you can imagine a command and control system that is current and the way it's configured today, our job and role is to actually build out the AI components both from a data, AI modeling and then training perspective and then deploying those."

When it comes to AI and weapons, Mulchandani said the department and JAIC are involved there too.

"We do have projects going on under joint warfighting, which are actually going into testing," he said. "They're very tactical-edge AI, is the way I describe it. And that work is going to be tested. It's very promising work. We're very excited about it."

While Mulchandani didn't mention specific projects, he did say that while much of the JAIC's AI work will go into weapons systems, none of those right now are going to be autonomous weapons systems. The concepts of a human-in-the-loop and full human control of weapons, he said, "are still absolutely valid."

See the original post here:

Where it Counts, U.S. Leads in Artificial Intelligence - Department of Defense

The Station: Bird's improving scooter-nomics, breaking down Tesla AI day and the Nuro EC-1 – TechCrunch

The Station is a weekly newsletter dedicated to all things transportation. Sign up here: just click The Station to receive it every weekend in your inbox.

Hello readers: Welcome to The Station, your central hub for all past, present and future means of moving people and packages from Point A to Point B. I'm handing the wheel over to reporters Aria Alamalhodaei and Rebecca Bellan.

Before I completely leave though, I have to share the Nuro EC-1, a series of articles on the autonomous vehicle technology startup reported by investigative science and tech reporter Mark Harris with assistance from myself and our copy editorial team. This deep dive into Nuro is part of Extra Crunch's flagship editorial offerings.

As always, you can email me at kirsten.korosec@techcrunch.com to share thoughts, criticisms, opinions or tips. You can also send me a direct message on Twitter @kirstenkorosec.

New York City finally launched its long-awaited scooter pilot in the Bronx this past week. Over 90 parking corrals specifically for e-scooters have been installed across the borough, but residents can also park in sidewalk locations that do not obstruct pedestrians. Bird, Lime and Veo were the operators chosen for the pilot, each bringing its own set of strengths.

Bird says it intends to focus on the mobility gap in the Bronx and will use its AI drop engine to ensure equitable deployment across all neighborhoods in the pilot zone. Veo is focused on safety and accessibility, bringing its Astro VS4, the first e-scooter with turn signals, to the mix, as well as its Cosmo, a seated e-scooter. Lime is also focusing on accessibility, with its Lime Able program, which offers an on-demand suite of adaptive vehicles. Lime also highlighted a safety quiz it will require new riders to take before hopping on a vehicle.

All three companies have promised to partner with community organizations to hire locally as well as to offer discounted pricing for vulnerable groups.

Not only has Bird officially launched in NYC, but it was also awarded a 12-month permit to operate 1,500 scooters in San Francisco. Well, technically it's Scoot that got the permit, but Scoot is owned by Bird, and was kind of Bird's backdoor way into the city. Last month, the SFMTA asked Scoot to halt its operations just as the fresh round of scooter permits was kicking off, because the company was implementing its fleet manager program with unauthorized subcontractors.

On Friday, after careful evaluation of Scoot's application, SFMTA determined Scoot has qualified for a permit to operate. Scoot intends to have its vehicles back on the roads in the coming weeks.

Bird also officially launched its consumer e-bike, dubbed the Bird Bike (which I think is also the name of their shared e-bike). Bird hasn't had the easiest time with profitability, and really, not many scooter companies have, so this is a chance for Bird to diversify, get a piece of the $68 billion e-bike sales pie and create more brand awareness across marketplaces. The bike costs $2,229, and consumer sales will likely make up about 10% of Bird's revenue going forward, per the company's S-4 filing.

Bird (and Scoot) are now integrated with Google Maps. So is Spin, as of this week. More integrations like these, as we saw a couple weeks ago with Lime joining Moovit, demonstrate how shared micromobility is becoming more integrated with the way we think about moving around cities and planning our journeys. I heartily welcome such integrations.

Finally, Alex Wilhelm dug into new financial data released by Bird. The tl;dr: the quarterly data shows an improving economic model and a multiyear path to profitability. However, that path is fraught unless a number of scenarios all work out in concert and without a glitch, Wilhelm reports.

Rebecca Bellan

Imagine a future in which drivers don't charge their electric vehicles but instead swap out the batteries at small, roadside pods. That's the future Ample is imagining, and this week it announced a fresh $160 million funding round to scale its operations.

The internationally funded Series C was led by Moore Strategic Ventures with participation from PTT, a Thai state-owned oil and gas company, and Disruptive Innovation Fund. Existing investors Eneos, a Japanese petroleum and energy company, and Singapore's public transit operator SMRT also participated. Ample's total funding is now $230 million.

It's an interesting idea, but one that will require considerable buy-in from automakers to make it a reality: for example, by selling vehicles with either a standard battery or Ample's battery system pre-built in. But according to Ample co-founders John de Souza and Khaled Hassounah, it wouldn't be all that complicated for OEMs to separate the battery from the car.

"The marketing departments at the OEMs want to tell you that 'This is a super-duper battery that is very well integrated with the car; there's no way you can separate it,'" Hassounah said. "The truth of the matter is they're built completely separately, and so true for almost, not almost, for every battery in the car, including a Tesla."

"Since we've built our system to be easy to interface with different vehicles, we've abstracted the battery component from the vehicle," he added.

Other deals that got our attention this week

AEye, the lidar startup, completed its reverse merger with special purpose acquisition company CF Finance Acquisition Corp. III. AEye is now publicly traded on the Nasdaq exchange.

Canada Drives, an online car shopping and delivery platform, announced $79.4 million ($100 million CAD) in Series B funding that it will use to expand its service across Canada, keep enhancing the product, grow its inventory in existing and new markets, and hire around 200 people over the next year, particularly in product development.

DigiSure, a digital insurance company that caters to modern mobility form factors like peer-to-peer marketplaces, is officially coming out of stealth to announce a $13.1 million pre-Series A funding round. The startup will use the funds to hire more than 50 engineers, data scientists, and business development, insurance and compliance specialists, as well as to scale into new industry verticals and expand into Europe.

High Definition Vehicle Insurance Group, a commercial auto insurance company that is initially focused on trucking, raised $32.5 million in a Series B funding round led by Weatherford Capital, with new investors Daimler Trucks North America and McVestCo, and continued participation from Munich Re Ventures, 8VC, Autotech Ventures and Qualcomm Ventures LLC.

RepairSmith, a mobile auto repair service that sends a mechanic right to the driver's home, raised $42 million in fresh funding with the aim of expanding to all major metros by the end of 2022. The company is looking to disrupt auto servicing and repair, a massive industry that hasn't seen much change in the past 40 years.

REE Automotive was awarded $17 million from the UK government as part of a $57 million investment, coordinated through the Advanced Propulsion Centre. The investment, the company said, is in line with the UK government's ambition to accelerate the shift to zero-emission vehicles.

Swvl, a Dubai-based transit and mobility company, will be expanding into Europe and Latin America after it acquired a controlling interest in Shotl. Shotl, which operates in 22 cities across 10 countries, matches passengers with shuttles and vans heading in the same direction. The company partners with governments and municipalities to provide mobility solutions for populations that are underserved by traditional mass transit options. While Swvl declined to share the financials of the transaction, a spokesperson told TechCrunch that the company's footprint is being doubled by this acquisition.

Xos Inc., a manufacturer of electric Class 5 to Class 8 commercial vehicles, completed its business combination with NextGen Acquisition Corporation. As a result, Xos made its public debut on the Nasdaq exchange.

Regarding Tesla investigations, when it rains it pours. First, the National Highway Traffic Safety Administration opened a preliminary investigation into Tesla's Autopilot advanced driver assistance system, citing 11 incidents in which vehicles crashed into parked first responder vehicles while the system was engaged.

The Tesla vehicles involved in the collisions were confirmed to have had either Autopilot or a feature called Traffic Aware Cruise Control engaged, according to investigation documents posted on the agency's website. Most of the incidents took place after dark and occurred despite scene control measures, such as emergency vehicle lights, road cones and an illuminated arrow board signaling drivers to change lanes.

A few days later, Senators Edward Markey (D-Mass.) and Richard Blumenthal (D-Conn.) asked the new chair of the Federal Trade Commission to investigate Tesla's statements about the autonomous capabilities of its Autopilot and Full Self-Driving systems. The senators expressed particular concern over Tesla misleading customers into thinking their vehicles are capable of fully autonomous driving.

"Tesla's marketing has repeatedly overstated the capabilities of its vehicles, and these statements increasingly pose a threat to motorists and other users of the road," they said. "Accordingly, we urge you to open an investigation into potentially deceptive and unfair practices in Tesla's advertising and marketing of its driving automation systems and take appropriate enforcement action to ensure the safety of all drivers on the road."

Waymo, Alphabet's self-driving arm, is seriously scaling up its autonomous trucking operations across Texas, Arizona and California. The company said it was building a dedicated trucking hub in Dallas and partnering with Ryder for fleet management services.

The Dallas hub will be a central launch point for testing not only the Waymo Driver, but also its transfer hub model, which is a mix of automated and manual trucking that optimizes transfer hubs near highways to ensure the Waymo Driver is sticking to main thoroughfares and human drivers are handling first and last mile deliveries.

Canoo is expecting 25,000 units out of its manufacturing partner VDL Nedcar's facility by 2023, CEO Tony Aquila said during the company's quarterly earnings call.

Year over year, Canoo upped its workforce from 230 to 656 total employees, 70% of whom are hardware and software engineers. The startup's operating expenses have increased from $19.8 million to $104.3 million YOY, with the majority of that increase coming from R&D.

Ford, Stellantis, Toyota and Volkswagen are among the carmakers that announced production cuts this week in response to the ongoing global shortage of semiconductors. It's been a grim week.

A brief run-down: Toyota said it anticipated a production drop of anywhere from 60,000-90,000 vehicles across North America in August. Then Ford joined the chorus, saying it would temporarily close its F-150 factory in Kansas City. Volkswagen told Reuters it couldn't rule out further changes to production in light of the chip shortage. And finally, Stellantis is halting production at one of its factories in France.

Tesla unveiled what it's calling the D1 computer chip to power its advanced AI training supercomputer, Dojo, at its AI Day on Thursday. According to Tesla director Ganesh Venkataramanan, the D1 has GPU-level compute with CPU connectivity and twice the I/O bandwidth of "the state of the art networking switch chips that are out there today and are supposed to be the gold standards."

Venkataramanan also revealed a training tile that integrates multiple chips to get higher bandwidth and an incredible computing power of 9 petaflops per tile and 36 terabytes per second of bandwidth. Together, the training tiles compose the Dojo supercomputer.

But there was more, of course. CEO Elon Musk also revealed that the company is developing a humanoid robot, with a prototype expected in 2022. The bot is being proposed as a non-automotive robotic use case for the company's work on neural networks and its Dojo advanced supercomputer.

Reality check: Tesla is not the first automaker, or company, to dip its toe into humanoid robot development. Honda's Asimo robot has been around for decades, Toyota and GM have their own robots, and Hyundai recently acquired robotics company Boston Dynamics.

The full rundown of Tesla's AI Day can be found here.

General Motors and AT&T will be rolling out 5G connectivity in select Chevy, Cadillac and GMC vehicles from model year 2024, in a boost that the two companies say will bring more reliable software updates, faster navigation and downloads and better coverage on roadways.

5G technology has generated a lot of hype for its promises to boost speed and reduce latency across a range of industries, a next-gen tech that everyone thought would change the world far sooner than now. That hasn't happened (yet), in part because network rollout was much slower than people anticipated. So this announcement can be taken as a clear signal that, at the very least, AT&T thinks its 5G network will be mature enough to handle millions of connected vehicles by 2024.

RubiRides, a new ride-hailing company focused on transporting kids, launched in the Washington, D.C. metro area. The ride-hailing service is designed for children ages 7 and older, but it also offers rides for seniors and people with special needs. The company was founded by Noreen Butler, who was inspired to start the company after searching for transportation to support her children's busy schedules.

Continue reading here:

The Station: Bird's improving scooter-nomics, breaking down Tesla AI day and the Nuro EC-1 - TechCrunch

Meet STACI: your interactive guide to advances of AI in health care – STAT

Artificial intelligence has become its own sub-industry in health care, driving the development of products designed to detect diseases earlier, improve diagnostic accuracy, and discover more effective treatments. One recent report projected spending on health care AI in the United States will rise to $6.6 billion in 2021, an 11-fold increase from 2014.

The Covid-19 pandemic underscores the importance of the technology in medicine: In the last few months, hospitals have used AI to create coronavirus chatbots, predict the decline of Covid-19 patients, and diagnose the disease from lung scans.

Its rapid advancement is already changing practices in image-based specialties such as radiology and pathology, and the Food and Drug Administration has approved dozens of AI products to help diagnose eye diseases, bone fractures, heart problems, and other conditions. So much is happening that it can be hard for health professionals, patients, and even regulators to keep up, especially since the concepts and language of AI are new for many people.

The use of AI in health care also poses new risks. Biased algorithms could perpetuate discrimination along racial and economic lines, and lead to the adoption of inadequately vetted products that drive up costs without benefiting patients. Understanding these risks and weighing them against the potential benefits requires a deeper understanding of AI itself.

It's for these reasons that we created STACI: the STAT Terminal for Artificial Computer Intelligence. She will walk you through the key concepts and history of AI, explain the terminology, and break down its various uses in health care. (This interactive is best experienced on screens larger than a smartphone's.)

Remember, AI is only as good as the data fed into it. So if STACI gets something wrong, blame the humans behind it, not the AI!

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

Originally posted here:

Meet STACI: your interactive guide to advances of AI in health care - STAT

A concept in psychology is helping AI to better navigate our world – MIT Technology Review

The concept: When we look at a chair, regardless of its shape and color, we know that we can sit on it. When a fish is in water, regardless of its location, it knows that it can swim. This is known as the theory of affordance, a term coined by psychologist James J. Gibson. It states that when intelligent beings look at the world they perceive not simply objects and their relationships but also their possibilities. In other words, the chair affords the possibility of sitting. The water affords the possibility of swimming. The theory could explain in part why animal intelligence is so generalizablewe often immediately know how to engage with new objects because we recognize their affordances.

The idea: Researchers at DeepMind are now using this concept to develop a new approach to reinforcement learning. In typical reinforcement learning, an agent learns through trial and error, beginning with the assumption that any action is possible. A robot learning to move from point A to point B, for example, will assume that it can move through walls or furniture until repeated failures tell it otherwise. The idea is that if the robot were instead first taught its environment's affordances, it would immediately eliminate a significant fraction of the failed trials it would have to perform. This would make its learning process more efficient and help it generalize across different environments.

The experiments: The researchers set up a simple virtual scenario. They placed a virtual agent in a 2D environment with a wall down the middle and had the agent explore its range of motion until it had learned what the environment would allow it to do: its affordances. The researchers then gave the agent a set of simple objectives to achieve through reinforcement learning, such as moving a certain amount to the right or to the left. They found that, compared with an agent that hadn't learned the affordances, it avoided any moves that would cause it to get blocked by the wall partway through its motion, setting it up to achieve its goal more efficiently.

Why it matters: The work is still in its early stages, so the researchers used only a simple environment and primitive objectives. But their hope is that their initial experiments will help lay a theoretical foundation for scaling the idea up to much more complex actions. In the future, they see this approach allowing a robot to quickly assess whether it can, say, pour liquid into a cup. Having developed a general understanding of which objects afford the possibility of holding liquid and which do not, it won't have to repeatedly miss the cup and pour liquid all over the table to learn how to achieve its objective.

View post:

A concept in psychology is helping AI to better navigate our world - MIT Technology Review

AI’s inflation paradox – FT Alphaville (registration)


Many of the jobs that AI will destroy, like credit scoring, language translation, or managing a stock portfolio, are regarded as skilled, have limited human competition and are well-paid. Conversely, many of the jobs that AI cannot (yet) destroy ...

Originally posted here:

AI's inflation paradox - FT Alphaville (registration)

Why Should We Bother Building Human-Level AI? Five Experts Weigh In

Working on artificial intelligence can be a vicious cycle. It’s easy to lose track of the bigger picture when you spend an entire career developing a niche, hyper-specific AI application. An engineer might finally step away and realize that the public never actually needed such a robust system; each of the marginal improvements they’ve spent so much time on didn’t mean much in the real world.

Still, we need these engineers with lofty, yet-unattainable goals. And one specific goal still lingers on the horizon for the more starry-eyed computer scientists out there: building a human-level artificial intelligence system that could change the world.

Coming up with a definition of human-level AI (HLAI) is tough because so many people use it interchangeably with artificial general intelligence (AGI) – which is the thoughtful, emotional, creative sort of AI that exists only in movie characters like C-3PO and “Ex Machina’s” Ava.

Human-level AI is similar, but not quite as powerful as AGI, for the simple reason that many in the know expect AGI to surpass anything we mortals can accomplish. Though some see this as an argument against building HLAI, some experts believe that only an HLAI could ever be clever enough to design a true AGI – human engineers would only be necessary up to a certain point once we get the ball rolling. (Again, neither type of AI system exists nor will they anytime soon.)

At a conference on HLAI held by Prague-based AI startup GoodAI in August, a number of AI experts and thought leaders were asked a simple question: “Why should we bother trying to create human-level AI?”

For those AI researchers that have detached from the outside world and gotten stuck in their own little loops (yes, of course we care about your AI-driven digital marketplace for farm supplies), the responses may remind them why they got into this line of work in the first place. For the rest of us, they provide a glimpse of the great things to come.

For what it’s worth, this particular panel was more of a lightning round — largely for fun, the experts were instructed to come up with a quick answer rather than taking time to deliberate and carefully choose their words.

“Why should we bother trying to create human-level AI?”

Ben Goertzel, CEO at SingularityNET and Chief Scientist at Hanson Robotics

AI is a great intellectual challenge and it also has more potential to do good than any other invention. Except superhuman AI which has even more.

Tomas Mikolov, Research Scientist At Facebook AI

[Human-level AI will give us] ways to make life more efficient and basically guide [humanity].

Kenneth Stanley, Professor At University Of Central Florida, Senior Engineering Manager And Staff Scientist At Uber AI Labs

I think we’d like to understand ourselves better and how to make our lives better.

Pavel Kordik, Associate Professor at Czech Technical University and Co-founder at Recombee

To create a singularity, perhaps.

Ryota Kanai, CEO at ARAYA

To understand ourselves.

More on the future of AI: Five Experts Share What Scares Them the Most About AI

Excerpt from:

Why Should We Bother Building Human-Level AI? Five Experts Weigh In