Two US Army projects seek to improve comms between soldiers and AI – C4ISRNet

WASHINGTON A pair of artificial intelligence projects from U.S. Army researchers are easing communication barriers that limit the relationship between AI systems and soldiers.

The artificial intelligence projects are designed to support ongoing efforts for the Armys next-generation combat vehicle modernization priority, which includes a focus on autonomous vehicles and AI-enabled platforms.

The first project, named the Joint Understanding and Dialogue Interface, or JUDI, is an AI system that can understand the intent of a soldier when that individual gives a robot verbal instructions. The second project, Transparent Multi-Modal Crew Interface Designs, is meant to give soldiers a better understanding of why AI systems make decisions.

Were attacking a similar problem from opposite ends, said Brandon Perelman, a research psychologist at the Army Research Laboratory who worked on Transparent Multi-Modal Crew Interface Designs.

The JUDI system will improve soldiers situational awareness when working with robots because it will transform that relationship from a heads down, hands full to a heads up, hands free interaction, according to Matthew Marge, a computer scientist at the lab. Simply put, this means that soldiers will be more aware of their surroundings, he said.

Natural language AI systems are available on the commercial market, but the JUDI system requires a level of awareness in a physical environment that isnt matched by commercial products. Commercial systems can understand what a person is saying and take instructions, but they dont know what is going on in the surrounding area. For the Army, the autonomous system needs to know that.

You want a robot to be able to process what youre saying and ground it to the immediate physical context, Marge said. So that means the robot has to not only interpret the speech, but also have a good idea of where it is [in] the world, the mapping of its surroundings, and how it represents those surroundings in a way that can relate to what the soldier is saying.

Researchers looked into how soldiers speak to robots, and how robots talk back. In prior research, Marge found that humans speak to technology in much simpler, direct language; but when talking to other people, they usually talk about a course of action and the steps involved. However, those studies were done in a safe environment, and not a stressful one similar to combat, during which a soldiers language could be different. Thats an area where Marge knows the Army must perform more research.

Sign up for the C4ISRNET newsletter about future battlefield technologies.

(please select a country) United States United Kingdom Afghanistan Albania Algeria American Samoa Andorra Angola Anguilla Antarctica Antigua and Barbuda Argentina Armenia Aruba Australia Austria Azerbaijan Bahamas Bahrain Bangladesh Barbados Belarus Belgium Belize Benin Bermuda Bhutan Bolivia Bosnia and Herzegovina Botswana Bouvet Island Brazil British Indian Ocean Territory Brunei Darussalam Bulgaria Burkina Faso Burundi Cambodia Cameroon Canada Cape Verde Cayman Islands Central African Republic Chad Chile China Christmas Island Cocos (Keeling) Islands Colombia Comoros Congo Congo, The Democratic Republic of The Cook Islands Costa Rica Cote D'ivoire Croatia Cuba Cyprus Czech Republic Denmark Djibouti Dominica Dominican Republic Ecuador Egypt El Salvador Equatorial Guinea Eritrea Estonia Ethiopia Falkland Islands (Malvinas) Faroe Islands Fiji Finland France French Guiana French Polynesia French Southern Territories Gabon Gambia Georgia Germany Ghana Gibraltar Greece Greenland Grenada Guadeloupe Guam Guatemala Guinea Guinea-bissau Guyana Haiti Heard Island and Mcdonald Islands Holy See (Vatican City State) Honduras Hong Kong Hungary Iceland India Indonesia Iran, Islamic Republic of Iraq Ireland Israel Italy Jamaica Japan Jordan Kazakhstan Kenya Kiribati Korea, Democratic People's Republic of Korea, Republic of Kuwait Kyrgyzstan Lao People's Democratic Republic Latvia Lebanon Lesotho Liberia Libyan Arab Jamahiriya Liechtenstein Lithuania Luxembourg Macao Macedonia, The Former Yugoslav Republic of Madagascar Malawi Malaysia Maldives Mali Malta Marshall Islands Martinique Mauritania Mauritius Mayotte Mexico Micronesia, Federated States of Moldova, Republic of Monaco Mongolia Montserrat Morocco Mozambique Myanmar Namibia Nauru Nepal Netherlands Netherlands Antilles New Caledonia New Zealand Nicaragua Niger Nigeria Niue Norfolk Island Northern Mariana Islands Norway Oman Pakistan Palau Palestinian Territory, Occupied Panama Papua New Guinea Paraguay Peru Philippines 
Pitcairn Poland Portugal Puerto Rico Qatar Reunion Romania Russian Federation Rwanda Saint Helena Saint Kitts and Nevis Saint Lucia Saint Pierre and Miquelon Saint Vincent and The Grenadines Samoa San Marino Sao Tome and Principe Saudi Arabia Senegal Serbia and Montenegro Seychelles Sierra Leone Singapore Slovakia Slovenia Solomon Islands Somalia South Africa South Georgia and The South Sandwich Islands Spain Sri Lanka Sudan Suriname Svalbard and Jan Mayen Swaziland Sweden Switzerland Syrian Arab Republic Taiwan, Province of China Tajikistan Tanzania, United Republic of Thailand Timor-leste Togo Tokelau Tonga Trinidad and Tobago Tunisia Turkey Turkmenistan Turks and Caicos Islands Tuvalu Uganda Ukraine United Arab Emirates United Kingdom United States United States Minor Outlying Islands Uruguay Uzbekistan Vanuatu Venezuela Viet Nam Virgin Islands, British Virgin Islands, U.S. Wallis and Futuna Western Sahara Yemen Zambia Zimbabwe

Subscribe

By giving us your email, you are opting in to the C4ISRNET Daily Brief.

When a soldier is under pressure, we dont want to have any limit on the range of words or phrases they have to memorize to speak to the robot, Marge said. So from the beginning, we are taking an approach of so-called natural language. We dont want to impose any restrictions on what a soldier might say to a robot.

JUDIs ability to determine a soldiers intent or what Army researchers define as whatever the soldier wants JUDI to do is based on an algorithm that tries to match the verbal instructions with existing data. The algorithm finds an instruction from its training data with the highest overlap and sends it to the robot as a command.

The JUDI system, Marge said, is scheduled for field testing in September. JUDI was developed with help from researchers at the University of Southern Californias Institute for Creative Technologies.

The Transparent Multi-Modal Crew Interface Designs is tackling the AI-human interaction from the other side.

Were looking at ways of improving the ability of AI to communicate information to the soldier to show the soldier what its thinking and what its doing so its more predictable and trustworthy, Perelman said. Because we know that if soldiers dont understand why the AI is doing something and it fails, theyre not going to trust it. And if they dont trust it, theyre not going to use it.

Mission planning is the one area where the Transparent Multi-Modal Crew Interface Designs may prove useful. Perelman compared the program to driving down the highway while a navigation app responds to changes along a route. A driver may want to stay on the highway for the sake of convenience not having to steer through extra turns even if it takes a few minutes longer.

You can imagine a situation during mission planning, for example, where an AI proposes a number of courses of action that you could take, and if its not able to accurately communicate how its coming up with those decisions, then the soldier is really not going to be able to understand and accurately calculate the trade-offs that its taking into account, Perelman said.

He added that through lab testing, the team improved soldiers ability to predict the AIs future mobility actions by 60 percent and allowed the soldiers to decide between multiple courses of actions 40 percent quicker.

The program has transitioned over to the Army Combat Capabilities Development Commands Ground Vehicle System Centers Crew Optimization and Augmentation Technologies program. Thats where it will take part in Mission Enabler Technologies-Demonstrators phase 2 field testing.

Excerpt from:

Two US Army projects seek to improve comms between soldiers and AI - C4ISRNet

AI bias detection (aka: the fate of our data-driven world) – ZDNet

Here's an astounding statistic: Between 2015 and 2019, global use of artificial intelligencegrew by 270%. It's estimated that85% of Americansare already using AI products daily, whether they now it or not.

It's easy to conflate artificial intelligence with superior intelligence, as though machine learning based on massive data sets leads to inherently better decision-making. The problem, of course, is that human choices undergird every aspect of AI, from the curation of data sets to the weighting of variables. Usually there's little or no transparency for the end user, meaning resulting biases are next to impossible to account for. Given that AI is now involved in everything from jurisprudence to lending, it's massively important for the future of our increasingly data-driven society that the issue of bias in AI be taken seriously.

This cuts both ways -- development in the technology class itself, which represents massive new possibilities for our species, will only suffer from diminished trust if bias persists without transparency and accountability. In one recent conversation, Booz Allen'sKathleen Featheringham, Director of AI Strategy & Training, told me that adoption of the technology is being slowed by what she identifies as historical fears:

Because AI is still evolving from its nascency, different end users may have wildly different understandings about its current abilities, best uses and even how it works. This contributes to a blackbox around AI decision-making. To gain transparency into how an AI model reaches end results, it is necessary to build measures that document the AI's decision-making process. In AI's early stage, transparency is crucial to establishing trust and adoption.

While AI's promise is exciting, its adoption is slowed by historical fear of new technologies. As a result, organizations become overwhelmed and don't know where to start. When pressured by senior leadership, and driven by guesswork rather than priorities, organizations rush to enterprise AI implementation that creates more problems.

One solution that's becoming more visible in the market is validation software.Samasource, a prominent supplier of solutions to a quarter of the Fortune 50, is launching AI Bias Detection, a solution that helps to detect and combat systemic bias in artificial intelligence across a number of industries. The system, which leaves a human in the loop, offers advanced analytics and reporting capabilities that help AI teams spot and correct bias before it's implemented across a variety of use-cases, from identification technology to self-driving vehicles.

"Our AI Bias Detection solution proves the need for a symbiotic relationship between technology and a human-in-the-loop team when it comes to AI projects," says Wendy Gonzalez, President and Interim CEO of Samasource. "Companies have a responsibility to actively and continuously improve their products to avoid the dangers of bias and humans are at the center of the solution."

That responsibility is reinforced by alarmingly high error rates in current AI deployments. One MITstudyfound that "gender classification systems sold by IBM, Microsoft, and Face++" were found to have "an error rate as much as 34.4 percentage points higher for darker-skinned females than lighter-skinned males." Samasource also references a Broward County, Florida, law enforcement program used to predict the likelihood of crime, which was found to "falsely flag black defendants as future criminals (...) at almost twice the rate as white defendants."

The company's AI Bias Detection looks specifically at labeled data by class and discriminates between ethically sourced, properly diverse data and sets that may lack diversity. It pairs that detection capability with a reporting architecture that provides details on dataset distribution and diversity so AI teams can pinpoint problem areas in datasets, training, or algorithms in order to root out biases

Pairing powerful detection tools with a broader understanding of how insidious AI bias can be will be an important step in the early days of AI/ML adoption. Part of the onus, certainly, will have to be on consumers of AI applications, particularly in spheres like governance and law enforcement, where the stakes couldn't possibly be higher.

View post:

AI bias detection (aka: the fate of our data-driven world) - ZDNet

What’s wrong with this picture? Teaching AI to spot adversarial attacks – GCN.com

Whats wrong with this picture? Teaching AI to spot adversarial attacks

Even mature computer-vision algorithms that can recognize variations in an object or image can be tricked into making a bad decision or recommendation. This vulnerability to image manipulation makes visual artificial intelligence an attractive target for malicious actors interested in disrupting applications that rely on computer vision, such as autonomous vehicles, medical diagnostics and surveillance systems.

Now, researchers at the University of California, Riverside, are attempting to harden computer-vision algorithms against attacks by teaching them what objects usually coexist near each other so if a small detail in the scene or context is altered or absent the system will still make the right decision.

When people see a horse or a boat, for example, they expect to also see a barn or a lake. If the horse is standing in a hospital or the boat is floating in clouds, a human knows something is wrong.

If there is something out of place, it will trigger a defense mechanism, Amit Roy-Chowdhury, a professor of electrical and computer engineering leading the team studying the vulnerability of computer vision systems to adversarial attacks, told UC Riverside News. We can do this for perturbations of even just one part of an image, like a sticker pasted on a stop sign.

The stop sign example refers to a 2017 study that demonstrated that images of stickers on a stop sign that were deliberately misclassified as a speed limit sign in training data were able to trick a deep neural network (DNN)-based system into thinking it saw a speed limit sign 100% of the time. An autonomous driving system trained on that manipulated data that sees a stops sign with a sticker on it would interpret that image as a speed limit sign and drive right through the stop sign. These adversarial perturbations attacks can also be achieved by adding digital noise to an image, causing the neural network to misclassify it.

However, a DNN augmented with a system trained on context consistency rules can check for violations.

In the traffic sign example, the scene around the stop sign the crosswalk lines, street name signs and other characteristics of a road intersection can be used as context for the algorithm to understand the relationship among the elements in the scene and help it deduce if some element has been misclassified.

The researchers propose to use context inconsistency to detect adversarial perturbation attacks and build a DNN-based adversarial detection system that automatically extracts context for each scene, and checks whether the object fits within the scene and in association with other entities in the scene, the researchers said in their paper.

The research was funded by a $1 million grant from the Defense Advanced Research Projects Agencys Machine Vision Disruption program, which aims to understand the vulnerability of computer vision systems to adversarial attacks. The results could have broad applications in autonomous vehicles, surveillance and national defense.

About the Author

Susan Miller is executive editor at GCN.

Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDGs ComputerWorld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginias Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.

Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.

Connect with Susan at [emailprotected] or @sjaymiller.

See more here:

What's wrong with this picture? Teaching AI to spot adversarial attacks - GCN.com

Navigating the AI and Analytics Job Market During COVID-19 – Datanami

(metamorworks/Shutterstock)

The market for AI and analytics jobs has not been spared from the wrath of COVID-19, which has directly led to the loss of more than 30 million American jobs over the past four months. It may not appear to be an ideal time to look for a new job, but those hunting for employment in AI and analytics still have options.

The overall job market is abysmal in the United States at the moment, thanks to government orders to temporarily shudder unnecessary businesses in an attempt to slow the novel coronaviruss spread. Unemployment rolls keep growing as politicians debate how much money to borrow to keep everybody afloat a little bit longer.

Tech startups with questionable business plans are especially vulnerable to the carnage. Nearly 77,000 people have been laid off from tech startup jobs since March, according to the Layoffs.fyi tracker. Silicon Valley, in particular, has been hit hard by layoffs, with companies like Uber, Airbnb, Yelp, and Groupon parting ways with large groups of workers.

It wont likely end soon, especially if COVID-19 cases continue to surge into the fall. As businesses are forced to shut their doors, workers across all industries are laid off, which hurts consumer spending goes down. Businesses slow investment in reaction to lowered spending, which in turn leads to fewer jobs, which leads to lower consumer spending, etc. Rinse and repeat. Its a nasty feedback loop, to be sure, and until something breaks the cycle, consumer confidence and business investment will continue to be dulled.

COVID-19 has cost more than 30 million people their jobs in the U.S.(Kira_Yan/Shutterstock)

Early in the pandemic, companies resisted plans to scale back their hiring efforts in big data, analytics, and AI. They have mostly maintained those employment plans, according to the most recent Burtch Works survey, which it conducted with the International Institute for Analytics.

The July Burtch Works survey concluded that 50% of analytics and data science organizations have either suffered no impacts (42.1%) or have actually grown in size (7.6%). Only 14.5% of respondents expected additional staffing and hiring actions, it says.

Jobs in AI are also not immune to the slowdown. According to a June report from LinkedIn, COVID-19 caused about a 10% drop in demand for AI jobs compared to 2019, when folks with AI skills were in very high demand and commanded top dollar in the job market. In fact, the job title Artificial Intelligence Specialist was the number one job in LinkedIns 2020 Emerging Jobs Report.

Specifically, job listings for such roles grew at 14.0% YOY [year over year] before COVID-19 outbreak and slowed down to only 4.6% YOY, the author of the June report, Zhichun Jenny Ying, writes. Job applications grew at 50.8% YOY before COVID-19 outbreak and dipped to 30.2% YOY post-COVID.

While the AI job market has slowed down, its still chugging along at a 5% annual growth rate (when measured by AI job postings). Clearly, the overall job market is doing much worse than the AI sector. When Ying normalized AI job postings against overall job postings, AI jobs actually posted an 8.3% increase during the 10 weeks after the COVID-19 outbreak began.

But theres a twist: fewer people are applying for those AI jobs. According to Lings data, AI job applications dropped 14.1% during the 10 weeks after the COVID-19 outbreak in the U.S., compared to the 10 weeks prior, when normalized against overall job postings, she writes. This suggests that candidates may be playing it safe during a period of uncertainty.

AI and data jobs are resilient to COVID-19 cutbacks, but theyre not immune (Jozsef Bagota/Shutterstock)

You cant blame people for holding onto their current positions during the pandemic. With schools mostly shut down and non-essential workers asked to work from home, the day-to-day existence of the American workforce has experienced unprecedented shock. Many companies wont survive the pandemic, so why take a risk with job hopping?

Everybodys situation is different. Depending on the conditions, there could be very good reasons to take a new job in data, according to Tamara Iakiri, the vice president of talent experience at Open Systems Technologies (OST), a technology consultancy based in Grand Rapids, Michigan.

While we have certainly seen strong talent come into the market due to job cuts, it is important to remember that companies are still hiring and great companies are using the current situation to gain outstanding talent especially with these desired skills sets that have been hard to recruit for, Iakiri tells Datanami.

Iakiri recommends that folks in data-related industries update their resume every six months. Keeping an eye on the job market can let folks know what skills are currently hot, as well as who is hiring.

Now is not the time to bury our heads and hope, Iakiri says. If the company is not doing well, keep doing great work, but also be prepared have your resume ready, know who is hiring and build your network so you are ready to respond if the unfortunate happens and you are laid off. The best recruiters are willing to have conversations even if they dont have current openings, having these connections can be invaluable now and in the future.

Related Items:

AWS re:Invent Goes Virtual (and Free) as COVID-19 Conference Cancellations Continue

How COVID-19 Is Impacting the Market for Data Jobs

Is the Tech Boom Over?

Read more:

Navigating the AI and Analytics Job Market During COVID-19 - Datanami

3 Ethical Considerations When Investing in AI – Manufacturing Business Technology

While Artificial Intelligence (AI) has been prevalent in industries such as the financial sector, where algorithms and decision trees have long been used in approving or denying loan requests and insurance claims, the manufacturing industry is at the beginning of its AI journey. Manufacturers have started to recognize the benefits of embedding AI into business operationsmarrying the latest techniques with existing, widely used automation systems to enhance productivity.

A recent international IFS study polling 600 respondents, working with technology including Enterprise Resource Planning (ERP), Enterprise Asset Management (EAM), and Field Service Management (FSM), found more than 90 percent of manufacturers are planning AI investments. Combined with other technologies such as 5G and the Internet of Things (IoT), AI will allow manufacturers to create new production rhythms and methodologies. Real-time communication between enterprise systems and automated equipment will enable companies to automate more challenging business models than ever before, including engineer-to-order or even custom manufacturing.

Despite the productivity, cost-savings and revenue gains, the industry is now seeing the first raft of ethical questions come to the fore. Here are the three main ethical considerations companies must weigh-up when making AI investments.

At first, AI in manufacturing may conjure up visions of fully automated smart factories and warehouses, but the recent pandemic highlighted how AI can play a strategic role in the back-office, mapping different operational scenarios and aiding recovery planning from a finance standpoint. Scenario planning will become increasingly important. This is relevant as governments around the world start lifting lockdown restrictions and businesses plan back to work strategies. Those simulations require a lot of data but will be driven by optimization, data analysis and AI.

And of course, it is still relevant to use AI/Machine Learning to forecast cash. Cash is king in business right now. So, there will be an emphasis on working out cashflows, bringing in predictive techniques and scenario planning. Businesses will start to prepare ways to know cashflow with more certainty should the next pandemic or crisis occur.

For example, earlier in the year the conversation centered on the just-in-time scenarios, but now the focus is firmly on what-if planning at the macro supply chain level:

Another example is how you can use a Machine Learning service and internal knowledge base to facilitate Intelligent Process Automation allowing recommendations and predictions to be incorporated into business workflows, as well as AI-driven feedback on how business processes themselves can be improved or automated.

The closure of manufacturing organizations and reduction in operations due to depleting workforces highlight AI technology in the front-office isnt perhaps as readily available as desired, and that progress needs to be made before it can truly provide a level of operational support similar to humans.

Optimists suggest AI may replace some types of labor, with efficiency gains outweighing transition costs. They believe the technology will come to market at first as a guide-on-the-side for human workers, helping them make better decisions and enhancing their productivity, while having the potential to upskill existing employees and increase employment in business functions or industries that are not in direct competition with AI.

Indeed, recent IFS research points to an encouraging future for a harmonized AI and human workforce in manufacturing. The IFS AI study revealed that respondents saw AI as a route to create, rather than cull, jobs. Around 45 percent of respondents stated they expect AI to increase headcount, while 24 percent believe it wont impact workforce figures.

The pandemic has demonstrated AI hasnt developed enough to help manufacturers maintain digital-only operations during unforeseen circumstances, and decision makers will be hoping it can play a greater role to mitigate extreme situations in the future.

It is easy for organizations to say they are digitally transforming. They have bought into the buzzwords, read the research, consulted the analysts, and seen the figures about the potential cost savings and revenue growth.

But digital transformation is no small change. It is a complete shift in how you select, implement and leverage technology, and it occurs company-wide. A critical first step to successful digital transformation is to ensure that you have the appropriate stakeholders involved from the very beginning. This means manufacturing executives must be transparent when assessing and communicating the productivity and profitability gains of AI against the cost of transformative business changes to significantly increase margin.

When businesses first invested in IT, they had to invent new metrics that were tied to benefits like faster process completion or inventory turns and higher order completion rates. But manufacturing is a complex territory. A combination of entrenched processes, stretched supply chains, depreciating assets and growing global pressures makes planning for improved outcomes alongside day-to-day requirements a challenging prospect. Executives and their software vendors must go through a rigorous and careful process to identify earned value opportunities.

Implementing new business strategies will require capital spending and investments in process change, which will need to be sold to stakeholders. As such, executives must avoid the temptation of overpromising. They must distinguish between the incremental results they can expect from implementing AI in a narrow or defined process as opposed to a systemic approach across their organization.

There can be intended or unintended consequences of AI-based outcomes, but organizations and decision makers must understand they will be held responsible for both. We have to look no further than tragedies from self-driving car accidents and the subsequent struggles that followed as liability is assigned not on the basis of the algorithm or the inputs to AI, but ultimately the underlying motivations and decisions made by humans.

Executives therefore cannot afford to underestimate the liability risks AI presents. This applies in terms of whether the algorithm aligns with or accounts for the true outcomes of the organization, and the impact on its employees, vendors, customers and society as a whole. This is all while preventing manipulation of the algorithm or data feeding into AI that would impact decisions in ways that are unethical, either intentionally or unintentionally.

Margot Kaminski, associate professor at the University of Colorado Law School, raised the issue of automation biasthe notion that humans trust decisions made by machines more than decisions made by other humans. She argues the problem with this mindset is that when people use AI to facilitate decisions or make decisions, they are relying on a tool constructed by other humans, but often they do not have the technical capacity, or practical capacity, to determine if they should be relying on those tools in the first place.

This is where explainable AI will be criticalAI which creates an audit path so both before and after the fact, there is a clear representation of the outcomes the algorithm is designed to achieve and the nature of the data sources it is working form. Kaminski asserts explainable AI decisions must be rigorously documented to satisfy different stakeholdersfrom attorneys to data scientists through to middle managers.

Manufacturers will soon move past the point of trying to duplicate human intelligence using machines, and towards a world where machines behave in ways that the human mind is just not capable. While this will reduce production costs and increase the value organizations are able to return, this shift will also change the way people contribute to the industry, the role of labor, and civil liability law.

There will be ethical challenges to overcome, but those organizations who strike the right balance between embracing AI and being realistic about its potential benefits alongside keeping workers happy will usurp and take over. Will you be one of them?

Read more here:

3 Ethical Considerations When Investing in AI - Manufacturing Business Technology

How new tech raises the risk of nuclear war – Axios

75 years after Hiroshima and Nagasaki, some experts believe the risk of the use of a nuclear weapon is as high now as it has been since the Cuban missile crisis.

The big picture: Nuclear war remains the single greatest present threat to humanity and one that is poised to grow as emerging technologies, like much faster missiles, cyber warfare and artificial intelligence, upset an already precarious nuclear balance.

What's happening: A mix of shifting geopolitical tensions and technological change is upsetting a decades-long state of strategic stability around nuclear weapons.

Cyber warfare can directly increase the risk of nuclear conflict if it is used to disrupt command and control systems.

AI is only in its infancy, but depending on how it develops, it could utterly disrupt the nuclear balance.

Be smart: As analysts from RAND wrote in a 2018 report, "AI may be strategically destabilizing not because it works too well but because it works just well enough to feed uncertainty." Whether or not an AI system could provide a decisive advantage in a nuclear standoff, if either the system's user or that country's opponent believes it can do so, the result could be catastrophic.

The bottom line: The riskiest period of the Cold War was its earliest stages, when military and political leaders didn't yet fully understand the nature of what Hiroshima had demonstrated. Emerging technologies like AI threaten to plunge us back into that uncertainty.

View original post here:

How new tech raises the risk of nuclear war - Axios

Could a new academy solve the AI talent problem? – Defense Systems

AI & Analytics

Defense technology experts think adding a military academy could be the solution to the U.S. government's tech talent gap.

"The canonical view is that the government cannot hire these people because they will get paid more in private industry," said Eric Schmidt, former Google chief and current chair of the Defense Department's Innovation Advisory Board, during a July 29 Brookings Institution virtual event.

"My experience is that people are patriotic and that you have a large number of people -- and this I think is missed in the dialogue -- a very large number of people who want to serve the country that they love. And the reason that they're not doing it is there's no program that makes sense to them."

Schmidt's comments come as the National Security Commission on Artificial Intelligence, which he chairs, issued its second quarterly report with recommendations to Congress on how the U.S. government can invest in and implement AI technology.

One key recommendation: a national digital service academy, to act as the civilian equivalent of a military service academy and train technical talent. That institution would be paired with an effort to establish a national reserve digital corps serving on a rotational basis.

Robert Work, the former deputy secretary of defense who is now NSCAI's vice chair, said the academy would bring in people who want to serve in government and would graduate students to serve as full-time federal employees at the GS-7 to GS-11 pay grades. Members of the digital corps would serve five years, at 38 days a year, helping government agencies figure out how best to implement AI.

For the military, the commission wants to focus on creating a clear way to test existing service members' skills and better gauge the abilities of incoming recruits and personnel.

"We think we have a lot of talent inside the military that we just aren't aware of," Work said.

To remedy that, Work said the commission recommends grading, via a programming proficiency test, to identify government and military workers who have software development experience. The recommendations also include adding a computational thinking component to the armed services' vocational aptitude battery to better identify incoming talent.

"I suspect that if we can convince the Congress to make this real and the president signs off, hopefully, then not only will we be successful but we'll discover that we need 10 times more. The people are there and the talent is available," Schmidt said.

Photo credit: Eric Schmidt at a March 2020 meeting of the Defense Innovation Board in Austin, Texas; DOD photo by EJ Hersom.

This article first appeared on FCW, a Defense Systems partner site.

About the Author

Lauren C. Williams is a staff writer at FCW covering defense and cybersecurity.

Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.

Williams graduated with a master's in journalism from the University of Maryland, College Park and a bachelor's in dietetics from the University of Delaware. She can be contacted at [emailprotected], or follow her on Twitter @lalaurenista.


View original post here:

Could a new academy solve the AI talent problem? - Defense Systems

New AI diagnostic tool knows when to defer to a human, MIT researchers say – Healthcare IT News

Machine learning researchers at MIT's Computer Science and Artificial Intelligence Lab, or CSAIL, have developed a new AI diagnostic system they say can do two things: make a decision or diagnosis based on its digital findings or, crucially, recognize its own limitations and turn to a carbon-based lifeform who might make a more informed decision.

WHY IT MATTERS
The technology, as it learns, can also adapt how often it might defer to human clinicians, according to CSAIL, based on their availability and levels of experience.

"Machine learning systems are now being deployed in settings to [complement] human decision makers," write CSAIL researchers Hussein Mozannar and David Sontag in a new paper recently presented at the International Conference on Machine Learning. The paper touches not just on clinical applications of AI, but also on areas such as content moderation on social media sites like Facebook and YouTube.


"These models are either used as a tool to help the downstream human decision maker with judges relying on algorithmic risk assessment tools and risk scores being used in the ICU, or instead these learning models are solely used to make the final prediction on a selected subset of examples."

In healthcare, they point out, "deep neural networks can outperform radiologists in detecting pneumonia from chest X-rays, however, many obstacles are limiting complete automation, an intermediate step to automating this task will be the use of models as triage tools to complement radiologist expertise.

"Our focus in this work is to give theoretically sound approaches for machine learning models that can either predict or defer the decision to a downstream expert to complement and augment their capabilities."

THE LARGER TREND
Among the tasks the machine learning system was trained on was the ability to assess chest X-rays to potentially diagnose conditions such as lung collapse (atelectasis) and enlarged heart (cardiomegaly).

Importantly, the system was developed in two parts, according to MIT researchers: a so-called "classifier," designed to predict a certain subset of tasks, and a "rejector," which decides whether a specific task should be handled by the classifier itself or by a human expert.
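As a rough illustration of this two-part design, the sketch below wraps a toy classifier with a confidence-threshold rejector. The threshold rule and the toy model are assumptions for demonstration only; in the paper the rejector is itself learned, not a fixed cutoff.

```python
import math

def softmax(scores):
    """Convert raw scores to probabilities (numerically stable)."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

class PredictOrDefer:
    """Toy predict-or-defer wrapper: the 'classifier' makes the call when
    it is confident; otherwise the 'rejector' (here a simple confidence
    threshold) routes the case to a human expert."""

    def __init__(self, classifier, defer_threshold=0.75):
        self.classifier = classifier          # maps input -> raw class scores
        self.defer_threshold = defer_threshold

    def decide(self, x):
        probs = softmax(self.classifier(x))
        confidence = max(probs)
        label = probs.index(confidence)
        if confidence >= self.defer_threshold:
            return ("predict", label)         # machine handles the case
        return ("defer", None)                # hand off to the human expert

# Toy two-class "classifier": larger inputs look more like class 1.
model = PredictOrDefer(lambda x: [1.0 - x, x], defer_threshold=0.6)
print(model.decide(5.0))   # confident input: the machine predicts
print(model.decide(0.5))   # ambiguous input: defer to the human
```

Raising `defer_threshold` trades machine coverage for safety, which is the same lever the researchers describe tuning against expert availability and experience.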

In experiments focused on medical diagnosis and text/image classification, the team showed that its approach not only achieves better accuracy than baselines, but does so at a lower computational cost and with far fewer training data samples.

While the researchers say they haven't yet tested the system with human experts, they did develop "synthetic experts" that let them tweak parameters such as experience and availability.

They note that for the machine learning program to work with a new human expert, the algorithm would "need some minimal onboarding to get trained on the person's particular strengths and weaknesses."

Interestingly, in the case of cardiomegaly, the researchers found that a human-AI hybrid model performed 8 percent better than either could on its own.

Going forward, Mozannar and Sontag plan to study how the tool works with human experts such as radiologists. They also hope to learn more about how it will process biased expert data, and work with several experts at once.

ON THE RECORD
"In medical environments where doctors don't have many extra cycles, it's not the best use of their time to have them look at every single data point from a given patient's file," said Mozannar in a statement. "In that sort of scenario, it's important for the system to be especially sensitive to their time and only ask for their help when absolutely necessary."

"Our algorithms allow you to optimize for whatever choice you want, whether that's the specific prediction accuracy or the cost of the expert's time and effort," added Sontag. "Moreover, by interpreting the learned rejector, the system provides insights into how experts make decisions, and in which settings AI may be more appropriate, or vice-versa."

"There are many obstacles that understandably prohibit full automation in clinical settings, including issues of trust and accountability," says Sontag. "We hope that our method will inspire machine learning practitioners to get more creative in integrating real-time human expertise into their algorithms."

Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com

Healthcare IT News is a publication of HIMSS Media.

Here is the original post:

New AI diagnostic tool knows when to defer to a human, MIT researchers say - Healthcare IT News

Researchers Rank Deepfakes as the Biggest Crime Threat Posed by AI – Adweek

While science fiction is often preoccupied with the threat of artificial intelligence successfully imitating human intelligence, researchers say a bigger danger right now is people using the technology to imitate one another.

A recent survey from University College London ranked deepfakes as the most worrying application of machine learning in terms of potential for crime and terrorism. According to 31 AI experts, the video fabrication technique could fuel a variety of crimes, from discrediting a public figure with fake footage to extorting money through video-call scams that impersonate a victim's loved one, with the cumulative effect leading to a dangerous societal mistrust of audio and visual evidence.

The experts were asked to rank a list of 20 identified threats associated with AI, ranging from driverless car attacks to AI-authored phishing messages and fake news. The criteria for the ranking included overall risk, ease of use, profit potential, and how difficult each threat is to detect and stop.

Deepfakes are worrying on all of these counts. They are easily made and increasingly hard to distinguish from real video. Listings are easily found in corners of the dark web, and the prominence of potential targets and the variety of possible crimes mean there could be a lot of money at stake.

While the threat of deepfakes was once confined to celebrities, politicians, and other prominent figures with enough visual data to train an AI, more recent systems have proven effective when trained on as little data as a couple of photos.

"People now conduct large parts of their lives online and their online activity can make and break reputations," said the report's lead author, UCL researcher Matthew Caldwell, in a statement. "Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity."

Despite the abundance of possible criminal applications of deepfakes, a report last fall found that they are so far primarily used by bad actors to create fake pornography without the subject's consent.

Not all uses for deepfakes are nefarious, however. Agencies like Goodby, Silverstein & Partners and R/GA have used them in experimental ad campaigns, and the underlying generative technology is helping fuel different types of AI creativity and art.

Link:

Researchers Rank Deepfakes as the Biggest Crime Threat Posed by AI - Adweek

Army Tests New All Domain Kill Chain: From Space To AI – Breaking Defense

M270A1 Multiple Launch Rocket System (MLRS) firing at the Grafenwoehr training range in Germany.

UPDATED with Lt. Gen. Karbler's remarks

WASHINGTON: A successful field test in Germany shows how satellite surveillance, artificial intelligence, and long-range artillery could combine to devastating effect in future war, a senior Army official said this morning. The data from that test will feed into both computer models and future field experiments later this year, in the US as part of the ambitious Project Convergence exercise and in the Pacific.

"We have valuable data now on actually how the real operational system works," said Willie Nelson, de facto head of space efforts at Army Futures Command. "To be able to provide that data back to the warfighter near real time, I know the word game-changer is overused, but frankly, that is a game-changer."

Willie Nelson addresses a Defense News webcast

"This is not just science experiments," he told a Defense News webcast following yesterday's virtual Space & Missile Defense conference. "We actually used the fielded equipment," M777 towed howitzers and M270 MLRS rocket/missile launchers, "with live fires on the range in Germany." The upcoming tests will add Extended Range Cannon Artillery (ERCA) and Grey Eagle drones.

"Eventually we want this stuff on every ground platform and even down to the soldier," he said, but those are a couple of years away. (While Nelson didn't mention it, the new IVAS targeting goggles about to enter service will eventually link their wearers to AI target recognition systems.)

Connecting an ever-wider network of different sensors to shooters, with AI accelerating the data flow over land, sea, air, space, and cyberspace, is central to the Pentagon's evolving concept of Joint All-Domain Operations.

In this year's tests, "we're doing that entire fires kill chain from initial detection to final destruction," Nelson said. "We're able to receive that [satellite] data in theater, process that data, be able to develop targeting coordinates from that, put it directly into the [artillery] firing system, AFATDS, and be able to launch weapons on target," he said. "And we're doing that now very successfully in a very short time."

Nelson isn't given to overstatement. In fact, he's among the most reserved of Futures Command's eight Cross Functional Team directors and, not coincidentally, the only civilian among them. Nominally, his CFT handles Assured Precision, Navigation, & Timing (APNT), in layman's terms, alternatives to GPS if it's jammed, but its mandate has expanded to include the Army's use of satellites. Nelson's team has been working closely with the Army's Network CFT, ISR Task Force, and AI Task Force on this technology.

Lockheed's prototype Precision Strike Missile (PrSM) fires from an Army HIMARS launcher truck in its first flight test, December 2019.

Satellites, AI & Humans

Today, artillery units rely on scout helicopters, drones, and forward observers on the ground to spot targets. But to counter Russian and Chinese long-range missiles, the Army is developing a family of long-range weapons, including hypersonics with a thousand-plus-mile range, that can hit targets much farther away than any earthbound asset can see, while recon aircraft are prime targets for ever-more-sophisticated surface-to-air weapons.

"You can't assume air superiority, at least very early on, and so quite frankly the only avenue we have is to sense from space," Nelson said. "The good news is, our access to space has greatly improved over the years, through a lot of innovation through commercial capabilities, primarily."

A growing variety of government and commercial satellites in Low Earth Orbit, Nelson said, can provide data of multiple types. The Army has said it doesnt need to build its own satellites for intelligence, surveillance, and reconnaissance.

UPDATE "With the standup of the Space Force, the Army should not be in the business of its own launch services [for] satellites," Lt. Gen. Daniel Karbler, who heads Army Space & Missile Defense Command, told reporters this afternoon. "Army space starts and ends on the ground."

While the Army has experimented with small satellites in the past, Karbler and his staff today made clear the service isn't interested in building its own constellations. (That doesn't rule out Army payloads hosted on satellites built and launched by others, however.) "We identify the requirements that we need, and we hand those requirements to the space enterprise," he said. "That could be DoD, that could be commercial, that could be other agencies." UPDATE ENDS

Those rapidly expanding constellations of non-Army satellites provide a wide array of data, from straightforward photographic imagery, to triangulated sources of radio emissions, to Synthetic Aperture Radar. It's SAR that Nelson is most excited about, not only because it can penetrate cloud cover and the dark of night, but because commercial R&D has improved the tech to where it can fit aboard a small LEO satellite.

Historically, each of these different data sources would go to a different human analyst. It would take multiple specialists in different fields to piece together, for example, how a blurry photo, an indecipherable radio transmission, and a ghostly radar image, each inconclusive on its own, could together pinpoint a potential target. Then a different set of specialists would take the list of targets and match it against the commander's priorities and the available weapons.
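The value of fusing weak cues can be seen in a back-of-the-envelope calculation. The sketch below combines independent likelihood ratios from three notional sensors via a naive Bayes odds update; the sensor names, numbers, and independence assumption are invented for illustration and say nothing about the Army's actual fusion software.

```python
def fuse(prior, likelihood_ratios):
    """Bayes update: multiply prior odds by each independent sensor's
    likelihood ratio, then convert back to a probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Start with a 10% prior that a target is present at some location.
# Each cue alone (imagery, signals, radar) is inconclusive...
print(fuse(0.1, [2.5]))            # one weak cue: still well under 50%
# ...but the three cues together push the estimate much higher.
print(fuse(0.1, [2.5, 3.0, 4.0]))  # joint evidence: a strong detection
```

This is the "grunt work" an automated pipeline can do in seconds across thousands of candidate locations, leaving the predict-versus-strike decision to humans.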

"In future conflicts, we don't have time for that," Nelson said. That's where the AI comes in. It can radically reduce the timeline by automating the grunt work, with software replacing human analysts and planners, but not, Nelson emphasized, human decision-makers.

"There need to be humans in the loop when we're talking about lethal power," he said. "What we want to do is let machines do what machines do well[:] dull work, fast."

Here's Nelson's outline of how this kill chain works, translated into layman's terms:

Who are the humans who must ensure this AI-accelerated cycle doesn't spin out of control? The Army is figuring that out at the same time it field-tests the technology.

"It's a strategic weapon," said Lt. Gen. Neil Thurgood, who is developing the Army's Long-Range Hypersonic Weapon. "It's not long-range artillery."

Lt. Gen. Neil Thurgood

The last time the Army had something comparable, it was nuclear-tipped: all the US military hypersonics now in development will be precision weapons with conventional warheads. So the Army can't reprint old doctrine from the Cold War era, any more than it can use existing field artillery manuals for hypersonics.

The Army is exploring new kinds of organization, such as a Theater Fires Command. Lt. Gen. Charles Flynn, the Army's deputy chief of staff for operations, and Maj. Gen. Kenneth Kamper, head of the artillery school at Fort Sill, Okla., are leading an effort to develop new doctrine, Thurgood said. They're collaborating closely with the all-service Strategic Command, which controls Air Force strategic bombers, ICBMs, and Navy nuclear missile submarines, and which will likely have a role in Army hypersonics too.

"Think of hypersonics as a strategic weapon," Thurgood told yesterday's SMD conference. "It literally is bringing the Army back into the days where we had weapons systems like Pershing, where we had strategic weapons that were part of the combatant commands' war plans, and those were held at the STRATCOM level."

"The mission planning for the hypersonic weapon is at that level," he said. "It is not happening at the battery level."

"It is not a traditional field artillery mission," Thurgood said at this morning's follow-up webinar. "This is a detailed, preplanned set of events that will happen at the STRATCOM level all the way down to the battery level."

See the original post here:

Army Tests New All Domain Kill Chain: From Space To AI - Breaking Defense

World's first AI-generated arts festival program opens this Friday – The Next Web

The Edinburgh Fringe is the world's largest performing arts festival, but this year's event has sadly been canceled due to COVID-19. Fortunately, art junkies can still get their fix of the Fringe at a virtual alternative curated by an AI called ImprovBot.

The system analyzed the 100-word text descriptions of every show staged at the festival from 2011 to 2019, a total of more than two million words. ImprovBot uses this data to generate ideas for new comedies, plays, musicals, and cabaret.
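The article doesn't say how ImprovBot generates text, but a word-level Markov chain gives the flavor of learning from a corpus of show blurbs: record which words follow which, then sample new sequences. The tiny corpus below is invented for illustration and is not the Fringe data.

```python
import random

def build_chain(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    chain = {}
    for blurb in corpus:
        words = blurb.split()
        for a, b in zip(words, words[1:]):
            chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain from a start word, sampling successors at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:       # dead end: no observed follower
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = [
    "a terrifying tale of isolation and hope",
    "a hilarious comedy of isolation and errors",
]
chain = build_chain(corpus)
print(generate(chain, "a", 6))
```

Even this crude model produces the locally plausible, globally nonsensical text the article describes, which hints at why the Improverts will have their work cut out for them.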

The blurbs will then be handed to the Improverts, the Fringe's longest-running improvised comedy troupe, who will stage their own takes on the shows over Twitter.


"The aim of ImprovBot is to explore the junction of human creativity and comedy, and to see how this is affected when an Artificial Intelligence enters into the mix," said Melissa Terras, Professor of Digital Cultural Heritage at the University of Edinburgh. "It is [a] reminder of the playfulness of the Fringe and we invite online audiences to rise to the provocation, and interact, remix, mashup, and play with the content."

In total, ImprovBot aims to create 350 show descriptions, which will be posted every hour on Twitter from August 7 to 31. It has already provided a sample of its oeuvre, which ranges from a terrifying tale of isolation titled "Collection to Politics" to a hilarious comedy called "The Man Behind the Real Song Lovers."

Truth be told, most of the blurbs are pretty nonsensical, so the Improverts will have a tough job adapting the AI's words for the stage. You can judge their efforts for yourself from Friday on Twitter.

Published August 5, 2020 17:21 UTC

Read the original here:

Worlds first AI-generated arts festival program opens this Friday - The Next Web

AI Is All the Rage. So Why Aren't More Businesses Using It? – WIRED

The Census report found AI to be less widespread than some earlier estimates. The consulting firm McKinsey, for instance, reported in November 2018 that 30 percent of surveyed executives said their firms were piloting some form of AI. Another study, by PwC at the end of 2018, found that 20 percent of executives surveyed planned to roll out AI in 2019.

One reason for the difference is that those surveys focused on big companies, which are more likely to adopt new technology. Fortune 500 firms have the money to invest in expertise and resources, and often have more data to feed AI algorithms.

For a lot of smaller companies, AI isn't part of the picture, not yet at least. "Big companies are adopting," says Brynjolfsson, "but most companies in America, Joe's pizzeria, the dry cleaner, the little manufacturing company, they are just not there yet."

Another reason for the discrepancy is that those who responded to the Census survey might not realize that their company is using some form of AI. Companies could use software that relies on some form of machine learning for tasks such as managing employees or customers without advertising the fact.

Even if AI isnt yet widespread, the fact that it is more common at larger companies is important, because those companies tend to drive an even greater proportion of economic activity than their size suggests, notes Pascual Restrepo, an assistant professor at Boston University who researches technology and the economy. He adds that job ads for AI experts increased significantly in 2019.

LinkedIn says that postings for AI-related roles grew 14 percent year over year for the 10 weeks before the Covid outbreak slowed hiring in early March. "There has been a very rapid uptake in terms of hiring of people with skills related to AI," Restrepo says.

Another data point suggesting rapid growth in the use of AI comes from Google. Kemal El Moujahid, director of product management for TensorFlow, Google's software framework for creating AI programs, says interest in the product has skyrocketed recently. The framework has been downloaded 100 million times since it was released five years ago, including 10 million times in May 2020 alone.

The economic crisis triggered by the pandemic may do little to dim companies' interest in automating decisions and processes with AI. "What can be accomplished is expanding really rapidly, and we're still very much in the discovery phase," says David Autor, an economist at MIT. "I can't see any reason why, in the midst of this, people would say, 'Oh no, we need less AI.'"

But the benefits may not flow equally to all companies. "One worrying aspect that this survey reveals," the report concludes, "is that the latest technology adoption is mostly being done by the largest and older firms, potentially leading to increased separation between the typical firm and superstar firms."

"As a general principle," says Restrepo of Boston University, "when technology adoption concentrates amongst a handful of firms, the gains will not be fully passed to consumers."

Nicholas Bloom, a professor of economics at Stanford, isn't so sure. "While the average small firm lags the average large firm, there are some elite adopters in small firms," Bloom says. "These are the rapid innovators, who are creative and ambitious, often becoming the larger firms of the future."


Follow this link:

AI Is All the Rage. So Why Arent More Businesses Using It? - WIRED

Follow the Money: Cash for AI Models, Oncology, Hematology Therapies – Bio-IT World

August 5, 2020 | Sema4 gets $121M to build dynamic models of human health and define optimal, individualized health trajectories. Glioblastoma, hematology, and acute pancreatitis all see new funding for therapy development. And AI-powered models net cash.

$257M: Series B for Liquid Biopsy for Multiple Cancers

Thrive Earlier Detection, Cambridge, Mass., closed $257 million in Series B financing. Funds will help advance CancerSEEK, a liquid biopsy test designed to detect multiple cancers at earlier stages of disease, into a registrational trial. The round was led by Casdin Capital and Section 32, with participation from new investors Bain Capital Life Sciences, Brown Advisory, Driehaus Capital Management, Intermountain Ventures, Janus Henderson Investors, Lux Capital, and more.

$121M: Series C for Data-Driven Health Intelligence

Sema4, Stamford, Conn., closed a Series C round led by BlackRock with additional new investors including Deerfield Management Company and Moore Strategic Ventures. Sema4 is dedicated to transforming healthcare by building dynamic models of human health and defining optimal, individualized health trajectories. The company began with an emphasis on reproductive health and recently launched Sema4 Signal, a family of products and services providing data-driven precision oncology solutions. Over the last several months, Sema4 has also joined the fight against COVID-19, integrating its clinical and scientific expertise with its digital capabilities to deliver a holistic testing program that enables organizations to make fast, informed decisions as they navigate the pandemic. The company has also launched Centrellis, a health intelligence platform designed to provide a more complete understanding of disease and wellness and to offer physicians deeper insight into the patient populations they serve.

$112M: Series C for Phase 2 Glioblastoma Treatment

Imvax, Philadelphia, raised $112 million in Series C financing from existing investors HP WILD Holding AG, Ziff Capital Partners, Magnetar Capital, and TLP Investment Partners, and new institutional investor Invus. The funds will support Phase 2 clinical development of IGV-001 for treatment of glioblastoma multiforme and Phase 1 research into additional solid tumor indications, and will help build out corporate and manufacturing capabilities.

$97M: Series C for Hematology, Oncology Therapies

Antengene Corporation, Shanghai, has closed $97 million in Series C financing led by Fidelity Management & Research Company with additional support from new investors including GL Ventures (an affiliate of Hillhouse Capital) and GIC. Existing investors including Qiming Venture Partners and Boyu Capital also participated. Proceeds from the Series C financing will primarily fund the continuing clinical development of Antengene's pipeline of hematology and oncology therapies, expand in-house research and development capabilities, and strengthen commercial infrastructure in APAC markets.

$71M: Accelerate Finger-Prick Blood Analyzer

Sight Diagnostics, Tel Aviv, has raised $71 million from Koch Disruptive Technologies, Longliv Ventures (a member of the CK Hutchison Holdings group), and OurCrowd. The investment is meant to fuel Sight's R&D into the automated identification and detection of diseases through its FDA-cleared direct-from-fingerstick Complete Blood Count (CBC) analyzer. The new investment will enable Sight to substantially expand its U.S. footprint.

$50M: Series B Extension for Enzymatic DNA Synthesis

DNA Script, Paris, announced a $50 million extension to its Series B financing, bringing the total investment in this round to $89 million. This oversubscribed round is led by Casdin Capital and joined by Danaher Life Sciences, Agilent Technologies, and Merck KGaA, Darmstadt, Germany (through its corporate venture arm, M Ventures), three of the world's leaders in oligo synthesis, as well as LSP, the Bpifrance Large Venture Fund, and Illumina Ventures. Funding from this investment round will enable DNA Script to accelerate the development of its suite of enzymatic DNA synthesis (EDS) technologies, in particular to support the commercial launch of the company's SYNTAX DNA benchtop printer.

$25M: Series A for Targeted Exosome Vehicles

Mantra Bio, San Francisco, has raised $25 million in a Series A financing to advance development of next-generation precision therapeutics based on its proprietary platform for engineering Targeted Exosome Vehicles (TEVs). 8VC and Viking Global Investors led the round, which also included Box Group and Allen & Company LLC. Mantra Bio's REVEAL is an automated, high-throughput platform that rapidly designs, tests, and optimizes TEVs for specific therapeutic applications. The platform integrates computational approaches, wet biology, and robotics to leverage the diversity of exosomes and enable the rational design of therapeutics directed at a wide range of tissue and cellular targets.

$23.7M: Shared Grant for Biologically Based Polymers

The National Science Foundation has named the University of California, Los Angeles and the University of California, Santa Barbara partners in a collaboration called BioPACIFIC MIP, for BioPolymers, Automated Cellular Infrastructure, Flow, and Integrated Chemistry: Materials Innovation Platform, and has funded the effort with a five-year, $23.7 million grant. The initiative is part of the NSF Materials Innovation Platforms program, and its scientific methodology reflects the broad goals of the federal government's Materials Genome Initiative, which aims to develop new materials twice as fast at a fraction of the cost. The collaboration aims to advance the use of microbes for sustainable production of new plastics.

$17M: Series A to Scale At-Home Blood Collection

Tasso, Seattle, secured a $17 million Series A financing round led by Hambrecht Ducera Growth Ventures that included Foresite Capital, Merck Global Health Innovation Fund, Vertical Venture Partners, Techstars, and Cedars-Sinai. The company will use the proceeds to scale manufacturing and operations to meet increased demand for its line of Tasso OnDemand devices, which enable people to collect their own blood using a virtually painless process from anywhere at any time. These fast, easy-to-use products are being adopted by leading academic medical institutions, government agencies, comprehensive cancer centers, and pharmaceutical organizations around the world.

$12M: Molecular Data, AI Build Therapeutic Models

Endpoint Health, Palo Alto, Calif., emerged from stealth mode in mid-July with $12 million in debt and equity financing led by Mayfield to make targeted therapies for patients with critical illnesses including sepsis and acute respiratory distress syndrome (ARDS). Endpoint Health is led by an experienced executive team including the co-founders of GeneWEAVE, an infection detection and therapy guidance company that was acquired by Roche in 2015. Endpoint Health's approach combines molecular and digital patient data with AI to create comprehensive therapeutic models: tools that identify distinct patient subgroups and treatment patterns in order to highlight unmet therapeutic needs. These models are used to identify late-stage and on-market therapies, often created for other indications, that Endpoint can develop into targeted therapies, which will include the required tests and software to guide their use.

$12M: Start of an NIH Contract For COVID-19 Microfluidics

Fluidigm Corporation, South San Francisco, Calif., announced execution of a letter contract with the National Institutes of Health's National Institute of Biomedical Imaging and Bioengineering for a proposed project under the agency's Rapid Acceleration of Diagnostics (RADx) program. The project, with a total proposed budget of up to $37 million, contemplates expanding production capacity and throughput capabilities for COVID-19 testing with Fluidigm microfluidics technology. The letter contract provides Fluidigm with access to up to $12 million of initial funding based on completion and delivery of certain validation milestones prior to execution of the definitive contract. A goal of the RADx initiative is to enable approximately 6 million daily tests in the United States by December 2020.

$6.5M: Series A for AI-Powered Precision Oncology

Nucleai, Tel Aviv, a computational biology company providing an AI-powered precision oncology platform for research and treatment decisions, secured the initial closing of a $6.5 million Series A. Debiopharm's strategic corporate venture capital fund led the round, joined by existing investors Vertex Ventures and Grove Ventures. Nucleai's core technology analyzes large and unique datasets of tissue images using computer vision and machine learning methods to model the spatial characteristics of both the tumor and the patient's immune system, creating unique signatures that are predictive of patient response.

$5M: Pharma Grant for Rural Lung Cancer

Stand Up To Cancer, New York, received a new $5 million grant from Bristol Myers Squibb to fund research and education efforts aimed at achieving health equity for underserved lung cancer patients, including Black people and people living in rural communities. The research efforts funded by the three-year grant will consist of supplemental grants to current Stand Up To Cancer research teams. The supplemental grants will focus on identifying new and innovative diagnostic and treatment methods for lung cancer patients in need, and will be designed to jumpstart pilot projects at the intersection of lung cancer, health disparities and rural healthcare, for instance by increasing clinical trial enrollment among historically underrepresented groups. Since 2014, Bristol Myers Squibb has provided funding for important Stand Up To Cancer research initiatives.

$2.5M: Cloud-Based XR Platform

Grid Raster, Mountain View, Calif., secured $2.5 million in a round led by Blackhorn Ventures with participation from existing investors MaC Venture Capital and Exfinity Venture Partners. This infusion of additional capital enables Grid Raster to continue developing its XR solutions, powered by cloud-based remote rendering and 3D vision-based AI, in key customer markets that include aerospace, defense, automotive and telecommunications.

$1.5M: SBIR for Acute Pancreatitis

Lamassu Pharma has received $1.5 million in Small Business Innovation Research (SBIR) grant funding from the National Institutes of Health (NIH). The funding will be used for further development of its lead therapeutic compound, RABI-767, a novel small-molecule lipase inhibitor licensed from the Mayo Foundation for Medical Education and Research. Lamassu is developing RABI-767 to fill a critical, unmet clinical need for a treatment for acute pancreatitis (AP). The proposed treatment is designed to mitigate the systemic toxicity and organ failure associated with acute pancreatitis, which cause lengthy hospitalization and death, thus saving both lives and healthcare system resources. Funding from the NIH will enable Lamassu to further its translational research, bring RABI-767 to human trials, and partner with clinical and commercial development organizations.

$800K: Protein Interaction Platform

A-Alpha Bio, Seattle, has been awarded an $800,000 grant to optimize therapeutics for infectious diseases. Awarded by the Bill & Melinda Gates Foundation, the grant work will be carried out by A-Alpha Bio in partnership with Lumen Bioscience, using machine learning models built from data generated by A-Alpha Bio's proprietary AlphaSeq platform. A-Alpha Bio has already completed a pilot study in partnership with Lumen Bioscience and supported by the Gates Foundation; the study successfully demonstrated the AlphaSeq platform's ability to characterize binding of therapeutic antibodies against multiple pathogen strains simultaneously. With the latest grant, the companies will use AlphaSeq data to train machine learning models for the development of potent and cross-reactive therapeutics against intestinal and respiratory pathogens.

$620K: Grant for Gas-Sensing Ingestible

Atmo Biosciences, Melbourne and Sydney, Australia, has been awarded a $620,000 Australian Government grant through the BioMedTech Horizons (BMTH) program. Atmo addresses the unmet clinical need to interrogate and monitor the function of the gut microbiota, allowing better diagnosis and development of personalized therapies for gastrointestinal disorders, resulting in earlier and more successful relief of symptoms and reduced healthcare costs. Atmo's platform is underpinned by the Atmo Gas Capsule, a world-first ingestible gas-sensing capsule that senses clinically important gaseous biomarkers produced by the microbiome in the gastrointestinal system. This data is wirelessly transmitted to the cloud for aggregation and analysis.

Excerpt from:

Follow the Money: Cash for AI Models, Oncology, Hematology Therapies - Bio-IT World

How this philosophy graduate became interested in the world of AI – Siliconrepublic.com

Iva Simon Bubalo talks about her career path to date, from studying philosophy to wanting to build tech solutions.

Iva Simon Bubalo is about to start studying AI, but took a very winding path to get there. While she is currently working as an analyst in the technology sector in Dublin, she started out studying philosophy eight years ago.

At that point, Simon Bubalo was curious about the human mind, she tells me, and how certain ideas such as justice, fairness and freedom influenced the course of history.

"I vividly remember one of the first Introduction to Philosophy classes where the professor asked: 'So, you want to become professional thinkers? Why?'"

"The room was suddenly filled with differences between the status quo and what things ought to be, different value systems, ethical considerations of right and wrong, talented beginner impact assessments and the efforts to identify what is essential," she says. "In order to be able to do those things effectively, our minds needed to train rigorously in one specific tool, and that was logic."

Simon Bubalo now has a master's degree in philosophy and language studies, but her next step will see her take part in the master's course in computer science and AI at NUI Galway.

So, how does philosophy compare to the world of tech and AI? According to Simon Bubalo, there are a lot of touch points between the two, particularly around how the human mind and reasoning work. The key, she says, is that philosophical thinking brings disruption.

Philosophy is "a unique primer for analytics, project and stakeholder management," she explains, as it involves an understanding of human rationale, development cycles, and identifying needs and wants.

"Being able to ask the right questions, form the right hypotheses, consider the problem from multiple angles, identify the root cause, lead or follow the argument and instil a sense of purpose in the group are all soft skills valuable in any business setting," she says.

"When we do these workshops and design-thinking sessions, I realise that this is a skill that I am familiar with from my humanities background" – IVA SIMON BUBALO

She also saw parallels between the two fields in terms of exploring basic elements of reasoning, decision making and problem solving.

"How to formalise a thought, structural relationships between concepts, assumptions and implicatures in human language, and creating a general world view for machines seemed to be a common and very fertile ground for collaboration between the two disciplines," she says.

"In AI, we still don't have a solid, unified definition of intelligence, while we're working with this notion every day of its development. That's what drew me to the field initially, but I also saw that there are common areas."

When she first began working in tech, Simon Bubalo was surprised by how much her background in logic helped.

"[It] helped me to understand, for example, database design," she says. "I was properly surprised at how much the two areas are linked, where you have to kind of understand relational models and things like that."

She was also surprised by the relevance of critical thinking and design thinking, the kind of soft skills that you really develop studying philosophy.

"I managed to find a lot of use cases also for stakeholder management, when we were taking requirements from a stakeholder, trying to understand how to build a product or report best according to their needs," she explains.

"And when we do these workshops and design-thinking sessions, I realise that this is a skill that I am familiar with from my humanities background. To actually understand human needs and wants is basically what philosophy has been studying."

But why did Simon Bubalo move from philosophy to the field of technology in the first place?

"The building aspect drew me to it," she says. "I wanted to build something in philosophy. You know, it's a theoretical discipline and it's very rigorous, but it is building: it's building culture, it's building societies, it's building what I think the product of humanity is, our mentality."

"So what actually drew me to technology was that element of problem solving and its application in the world. Building some products that solve societal issues."

She says that making the leap from philosophy to technology was scary at the beginning, as she took on a course in data analytics while also working in the field. "But I'm just kind of a nerdy person. So I was really excited, too," she adds.

"I suppose you have a lot of pressure if you're running projects at work and you have deadlines, but you also have deadlines for college. It can get difficult. It's really important to have support in this case, such as taking days for study leave and getting exposure to people who are already in the analytics field.

"I can imagine if I didn't have that kind of support, it would have been very difficult."

As a woman in technology, especially one who started out in a non-STEM field, Simon Bubalo says that her support network at Women in AI Ireland has been critical.

"Joining Women in AI Ireland was pivotal in the sense that I was suddenly exposed to so many learning opportunities and empowered to follow my path to specialise in AI technology," she says. "That feeling is truly empowering."

The group launched last year with the goal of increasing the number of women in AI. It's led by Alessandra Sala, who is the head of analytics research at Nokia Bell Labs. After attending her first Women in AI event, Simon Bubalo joined as a committee member and contributed to organising the next event.

"In the future, I wish to see more women from all different backgrounds, especially humanities and social sciences, and underrepresented minorities find their place in Ireland's data science and AI ecosystem."

Simon Bubalo finishes up our conversation with some advice for people thinking about making a big career pivot.

"In almost every area, I would say it's a matter of being explicit about where you want to go and what you want to do. After that, I find that people are very supportive. I think at the beginning of your career, that's extremely important, to be able to show your curiosity and have people respond and support you."

And for anyone hoping to move into the field of AI, especially women, she adds: "I would advise people to anchor themselves to something that they know, because AI is a highly interdisciplinary field and I feel like people from all different sorts of backgrounds can find something that's related, whether it's psychology, agriculture or marketing. I feel like it's a technology that's starting to be applied everywhere.

"Also, get to know the ecosystem. It's important to have exposure, because you don't know what you don't know. The people around you who are far ahead in the industry can give you guidance.

"Having that support of the community of women while being in this tech industry is really, really empowering. And, you know, I have confidence that if I can't solve something, I know a place where I can go and ask."


Global Geospatial Solutions & Services Market Artificial Intelligence (AI), Cloud, Automation, Internet of Things (IoT), and Miniaturization of…

The global geospatial solutions & services market accounted for US$ 238.5 billion in 2019, is estimated to reach US$ 1,013.7 billion by 2029, and is anticipated to register a CAGR of 15.7%.

Covina, CA, Aug. 04, 2020 (GLOBE NEWSWIRE) -- The report "Global Geospatial Solutions & Services Market, By Solution Type (Hardware, Software, and Service), By Technology (Geospatial Analytics, GNSS & Positioning, Scanning, and Earth Observation), By End-user (Utility, Business, Transportation, Defence & Intelligence, Infrastructural Development, Natural Resource, and Others), By Application (Surveying & Mapping, Geovisualization, Asset Management, Planning & Analysis, and Others), and By Region (North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa) - Trends, Analysis and Forecast till 2029."

Key Highlights:

Request Free Sample of this Business Intelligence Report @ https://www.prophecymarketinsights.com/market_insight/Insight/request-sample/4412

Analyst View:

Geospatial technology comprises GIS (geographical information systems), GPS (global positioning systems), and RS (remote sensing), technologies that provide a radically different way of producing and using the maps required to manage communities and industries. Developed economies are expected to provide lucrative opportunities to the geospatial solutions industry. The application of geospatial techniques across the globe has witnessed steady growth over the past decades, owing to the easy accessibility of geospatial technology in advanced nations such as the U.S. and Canada, further driving growth of the target market. Moreover, rising smart city initiatives in emerging countries have resulted in a growing need for geospatial technologies for use in 3D urban mapping and in monitoring and mapping natural resources. Increasing adoption of IoT, big data analysis, and artificial intelligence (AI) across the globe is projected to create profitable opportunities for the global geospatial solutions & services market throughout the forecast period.

Browse 60 market data tables* and 35 figures* through 140 slides and in-depth TOC on "Global Geospatial Solutions & Services Market, By Solution Type (Hardware, Software, and Service), By Technology (Geospatial Analytics, GNSS & Positioning, Scanning, and Earth Observation), By End-user (Utility, Business, Transportation, Defence & Intelligence, Infrastructural Development, Natural Resource, and Others), By Application (Surveying & Mapping, Geovisualization, Asset Management, Planning & Analysis, and Others), and By Region (North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa) - Trends, Analysis and Forecast till 2029"

Ask for a Discount on this Report @ https://www.prophecymarketinsights.com/market_insight/Insight/request-discount/4412

Key Market Insights from the report:

The global geospatial solutions & services market accounted for US$ 238.5 billion in 2019, is estimated to reach US$ 1,013.7 billion by 2029, and is anticipated to register a CAGR of 15.7%. The market report has been segmented on the basis of solution type, technology, end-user, application, and region.
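As a quick sanity check, the growth rate implied by the report's two market-size figures can be recomputed directly (a minimal arithmetic sketch; the variable names are illustrative, not from the report):

```python
# Recompute the CAGR implied by the quoted figures:
# US$ 238.5 billion in 2019 growing to US$ 1,013.7 billion by 2029.
start_value = 238.5   # US$ billions, 2019
end_value = 1013.7    # US$ billions, 2029 (forecast)
years = 10

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 15.6%
```

The result lands within a rounding step of the 15.7% the report states, so the headline figures are internally consistent.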

To know the upcoming trends and insights prevalent in this market, click the link below:

https://www.prophecymarketinsights.com/market_insight/Global-Geospatial-Solutions-&-Services-Market-4412

Competitive Landscape:

Prominent players operating in the global geospatial solutions & services market include HERE Technologies, Esri (US), Hexagon (Sweden), Atkins PLC, Pitney Bowes, Topcon Corporation, DigitalGlobe, Inc. (Maxar Group), General Electric, Harris Corporation (US), and Google.

The report provides detailed information regarding the industrial base, productivity, strengths, manufacturers, and recent trends, which will help companies enlarge their businesses and promote financial growth. Furthermore, the report exhibits dynamic factors including segments, sub-segments, regional marketplaces, competition, dominant key players, and market forecasts. In addition, it covers recent collaborations, mergers, acquisitions, and partnerships, along with the regulatory frameworks across different regions impacting the market trajectory. Recent technological advances and innovations influencing the global market are also included.


About Prophecy Market Insights

Prophecy Market Insights is a specialized market research, analytics, marketing/business strategy, and solutions company that offers strategic and tactical support to clients for making well-informed business decisions and for identifying and achieving high-value opportunities in the target business area. We also help our clients address business challenges and provide the best possible solutions to overcome them and transform their business.



Zencity raises $13.5 million to help cities aggregate community feedback with AI and big data – VentureBeat

Zencity, a platform that meshes AI with big data to give municipalities insights and aggregated feedback from local communities, has raised $13.5 million from a slew of notable backers, including lead investor TLV Partners, Microsoft's VC arm M12, and Salesforce Ventures. Founded in 2015, Israel-based Zencity had previously raised around $8 million, including a $6 million tranche nearly two years ago. With its latest cash injection, the company will build out new strategic partnerships and expand its market presence.

Gathering data through traditional means, such as surveys or town hall meetings, can be slow and time-consuming and fails to factor in evolving sentiment. Zencity enables local governments and city planners to extract meaningful data from a range of unstructured sources, including social networks, news websites, and even telephone hotlines, to figure out what topics and concerns are on local residents' minds, all in real time.

Zencity uses AI to sort and classify data from across channels to identify key topics and trends, from opinions on proposed traffic measures to complaints about sidewalk maintenance or pretty much anything else that impacts a community.
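Zencity's actual models are proprietary, but the kind of topic sorting the article describes can be illustrated with a deliberately minimal keyword-based tagger (all topic names and keywords below are invented for the example; a production system would use trained classifiers over far richer features):

```python
# Hypothetical sketch: tag a resident's comment with civic topics
# via simple keyword matching. Illustrative only.
TOPIC_KEYWORDS = {
    "traffic": {"traffic", "congestion", "speeding", "intersection"},
    "sidewalks": {"sidewalk", "pavement", "crosswalk"},
    "parks": {"park", "playground", "drones"},
}

def tag_topics(comment: str) -> list[str]:
    """Return every topic whose keywords appear in the comment."""
    words = set(comment.lower().split())
    return sorted(topic for topic, kws in TOPIC_KEYWORDS.items() if words & kws)

print(tag_topics("The sidewalk by the park entrance is cracked"))
# ['parks', 'sidewalks']
```

Aggregating such tags over thousands of comments per channel is what turns raw chatter into the trend reports the article refers to.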

Above: Zencity platform in use

Zencity said it has seen an increase in demand during the pandemic, with 90% of its clients engaging on a weekly basis, and even on weekends.

"Since COVID-19, not only have we seen an increase in usage but in demand as well," cofounder and CEO Eyal Feder-Levy told VentureBeat. "Zencity has signed over 40 new local governments, reaffirming our role in supporting local governments' crisis management and response efforts."

Among these new partnerships are government agencies in Austin, Texas; Long Beach, California; and Oak Ridge, Tennessee. A number of municipalities also launched COVID-19 programs using the Zencity platform, including the city of Meriden in Connecticut, which used Zencity data to optimize communications around social distancing in local parks. Officials discovered negative sentiment around the use of drones to monitor crowds in parks and noticed that communications from the mayor's official channels got the most engagement from residents.

Elsewhere, government officials in Fontana, California, used Zencity to assess locals' opinions on lockdown restrictions and regulations.

"Before the COVID-19 pandemic hit, providing real-time resident feedback for local governments was core to Zencity's AI-based solution," Feder-Levy continued. "And now, as local governments continue to battle the pandemic and undertake the task of economic recovery, Zencity's platform has proven pivotal in their crisis response and management efforts."


What is AI’s role in remote work? | HRExecutive.com – Human Resource Executive

Artificial intelligence will be tapped for everything from employee engagement to talent acquisition.

This is the second in a series on AI transforming the workplace. Read the first piece here.

Despite the pandemic continuing to spread, it may not be too soon for HR decision-makers to look ahead toward its eventual end.

When that day comes, HR leaders and employers around the globe will be back to, among other issues, figuring out exactly how AI-based technology can continue to drive success in the newly reopened world of work.

Seth Earley, CEO of Earley Information Science and author of The AI-Powered Enterprise: Harness the Power of Ontologies to Make Your Business Smarter, Faster and More Profitable, says the massive workplace changes brought on by the pandemic are sure to continue. And AI will play a key role.

Seth Earley

He predicts many organizations will continue to allow remote work even post-pandemic and thus will have to be more aware of what this change means for recruitment, job satisfaction, performance and retention. In fact, tech giant Google said in late July it will continue its remote-work strategy until at least the middle of 2021. No doubt other employers will follow suit.

Earley says greater use of remote teams means that work culture will be more fluid and, by design, will need to be less dependent on physical cues that in-person communication provides. At the same time, he adds, measuring and maintaining employee engagement will require heightened integration of HR and day-to-day collaboration tools. This need will arise, Earley explains, because many HR applications are legacy-based, on-premise and siloed, making it harder to read signals of disengagement across multiple systems.

"AI-powered, integrated cloud solutions will enable aggregation of analytics and flag low-engagement employees and employees likely to leave," Earley says. He adds that the rapid deployment of remote work and new collaboration technologies means that data, architecture and user-experience functions were likely cut in the rush to adapt during the pandemic.

"These tools will need to be reconciled, rationalized, standardized and correctly re-architected to improve rather than detract from productivity," he adds.

According to Earley, fewer in-office interactions will increase dependency on knowledge bases and improved information access and usability. For example, the days of asking a colleague for routine information because it is too difficult to locate on the intranet will, by necessity, be behind us.

Plus, he says, employees who have to use multiple systems to accomplish their work (or who have to adapt to a different team's preferred technology) will be less satisfied due to the overhead and inefficiency such disconnected environments cause. Using AI tools to integrate and bridge the gaps, including through chatbots for answering routine or team- and project-specific questions, semantic search and better knowledge architecture, will improve job satisfaction and increase productivity.

"Many elements of the post-pandemic workplace will change dramatically, such as the role of serendipitous in-person interactions," Earley says. "Intelligent collaboration systems can help fill this gap."

According to Earley, fixing the foundation of the employee experience, in part through AI-based applications, needs to be the priority across all market segments.

"This will be a challenge in the post-pandemic era, but those who do not do it will lose talent, customers and market share to the ones that do," Earley says.

Ken Lazarus

Ken Lazarus, the former CEO of Scout Exchange, which was recently acquired by Aquent, expects to see an increase in AI use post-pandemic, as employers have become more sophisticated in their use of the technology, especially with regard to attracting and retaining talent, which will be even more competitive in the years ahead.

"AI started with bots for simple communication," Lazarus says. "What I'm beginning to see in many different use cases, from screening applicants and job candidates and scheduling them for interviews to internal talent mobility, is a greater use of conversational AI."

"You can even throw it some curveballs and have that be all conducted with software and artificial intelligence, rather than a human being," he says.

According to Lazarus, other future trends driving more AI-based solutions include:


Check back soon for part three.

Tom Starner is a freelance writer based in Philadelphia who has been covering the human resource space and all of its component processes for over two decades. He can be reached at hreletters@lrp.com.


All Aboard! Target Has Ghostly Pirate Ship Cat Scratchers and, Yep, There’s a Poop Deck – POPSUGAR

As if we needed an excuse to dress our cats up in pantaloons and little captain's hats this Halloween, Target has rolled out a new pirate-themed cat scratcher that might just give our fluffy friends the holiday chills. The store's new Hyde & EEK! Boutique Pirate Ship Cat Scratcher Toy, available exclusively at Target for $35, comes with tattered sails, paintings of skeleton crew members, and two scratch pads where your feline can sharpen their claws after sailing the high seas (aka taking a cat nap).

Made to fit a cat-size crew, the fur-raising ship stands at 44 inches long, 18 inches wide, and 31 inches tall, which means there's plenty of room for carrying treats, er, treasure. Check out the ghoulish cat scratcher for yourself ahead, and take a peek at a few feline friends who've already gotten their paws on a boat to call their own.


Windbound Preview – The Most Relaxing Survival Game You’ll Play This Year – Twinfinite

Ever since I laid eyes on Windbound in its announcement trailer, I was intrigued. Its open-world elements and the freedom it gives players to explore its high seas and the uncharted islands nestled in them evoked that same sense of mystifying wonder that some of my personal favorites of recent years, like Breath of the Wild and What Remains of Edith Finch, perfected in their open-world design and narrative, respectively. There's a lot to love about Windbound, even if it is a rather simple survival game.

I was recently given the opportunity to go hands-on with Windbound's opening section, plus a short segment of gameplay from later in the game, and came away feeling pretty darn positive about the whole thing.

Players take control of Kara, a young warrior who finds herself shipwrecked and completely on her own on the Forbidden Islands. With nothing to her name, she must survive by hunting and scavenging the various islands she stumbles upon as she explores the high seas.

The first segment we played took place at the very beginning of Kara's adventure. I explored typical uninhabited islands, with flora and fauna bringing bursts of color to the beige sands and blue waters that enveloped them. Windbound's art style is the first thing you'll notice, clearly taking cues from The Legend of Zelda: Breath of the Wild.

It doesn't just look pretty; it feels like a perfect match for the constant sense of discovery and adventure that Windbound drip-feeds you. There's detail and beauty in its environments without going down the photorealism route. Windbound's visuals are timelessly beautiful.

As I reached the first island, I found a large pillar. Interacting with it triggered some sort of beacon, and one of three icons on my screen was illuminated upon doing so. You're never told what you should or shouldn't be doing in Windbound, so, assuming I was on the right path, I hopped onto my small boat and took to the seas to try to find the remaining two.

Sailing actually feels pretty good, too, with players required to row or sail their ship from island to island. Wind direction is a significant factor you have to take into account when traveling, tightening or loosening your sails depending on whether you're sailing into, across, or with the wind.

From our time in stages one and four of Windbound, sailing certainly seems to provide an ample gameplay challenge as the game progresses, with choppier waters and harsher winds in the latter stage compared to the relatively tranquil seas of the first. It gets you thinking more about how you'll reach your destination and which ship upgrades may benefit you most on your quest.

You'll spend a lot of time sailing the turbulent seas of Windbound, and as became clear when we were skipped forward to the game's fourth stage, the means by which you island-hop can be customized to your own specifications. Do you want multiple sails, or no sails at all? Do you want to add bamboo decking and build a bag rack to help bolster your carry capacity, or would you rather place a campfire on board so you can quickly cook meat or make leather from animal hides? The choice is entirely yours, and ultimately feeds into how your Windbound experience unfolds.

That's the beauty of Windbound. Even in the two sections of the game this preview is focused on, I never felt like I had to do anything in any particular way, or that I was being pressured to push on to the next section. I was free to venture off the beaten path and have Kara explore the farthest corners of the stage's map (the fog of war only clearing as I sailed over it) for as long as I could survive.

Windbound is a survival game, so you will need to manage Kara's health and stamina gauges by foraging, hunting and preparing food. Berries can be found on some islands, giving you a small boost to health and stamina, or you can take down one of the native creatures and cook its meat to restore a larger amount. I've never been the biggest fan of hunger and thirst meters in games (they're never really fun, let's be honest), and I felt no different in Windbound.

Fortunately, it never felt overwhelming enough to detract from the standout sections of Windbound. Uncovering a new island seldom gets boring: thanks to procedurally generated islands, you never know quite what you're going to get.

Our time in Chapter 4 only reinforced this, with swampy islands popping up to offer some variation on our adventure. With these came new creatures to battle and strip for resources, flora and fauna to create new tools and ship upgrades, and more story tidbits to uncover.

It was only during my stint in Chapter 4 that one of Windbound's most significant downsides became clear. The three beacons we were seeking out in the game's first chapter appear to be the recurring main objective. This would have been fine had they got trickier to scale and interact with, but they don't.

While we don't know exactly how the rest of the game will pan out, the similarities we could gather between the main objectives of Chapters 1 and 4 were a little disappointing. Sure, you may have to beat a few different enemy types along the way, but it still all feels a little too simple. Climb some conveniently placed pillars, interact with the beacon, move on.

Speaking of enemy types, Windbound's combat feels a little barebones. You'll have an arsenal of spears, slings and even bows and arrows to use, but you'll only be able to stab, fire your ranged weapon, and perform a basic dodge. That's it. While there are a few different types of projectile, my battles with enemies often came down to repeated stabbing attacks with my spear and the occasional slinging of a rock.

From what I've played of Windbound, it may be one of the most tranquil, relaxing and mysterious games I've played this year. I can imagine myself kicking back after a long day and getting lost on its towering waves and tucked-away islands, desperately trying to piece together the secrets these forbidden waters hide. Its main objectives may get a little samey, and the combat a little too shallow, but I still want to keep exploring all the same.


Mahlon Reye Boats on Deadliest Catch: What Boat Was Mahlon Reye On? – The Cinemaholic

Deadliest Catch has been one of Discovery Channel's mainstay shows since it burst onto the scene in 2005 and remains a fan favorite for various reasons. Fans, however, will be saddened to learn that Mahlon Reyes, a longstanding deckhand on Deadliest Catch, has passed away.

The 38-year-old deckhand and Deadliest Catch veteran suffered a cardiac arrest and died an untimely yet peaceful death on July 27, 2020, in his hometown of Whitefish, Montana. Following his passing, Reyes' family put up a tribute post on his Facebook page, saying, "On Sunday night, our family together made the hardest choice we've ever made, and that was to remove him from life support. Mahlon's body was tired and had put up an amazing fight. He was the strongest guy we knew. He was surrounded by so much love."

Reyes' time with Deadliest Catch was certainly an eventful one, and he featured in several heart-pounding and adrenaline-filled episodes. Throughout his stint with Deadliest Catch, Reyes served as a deckhand on two vessels, and he was loved by both his superiors and his colleagues. In this article, we take a look at the two boats that had the honor of having Reyes on board.

One of the two boats on which Reyes served, the Seabrooke, is a veteran of the waters. Built in 1979, the Seabrooke is a steel crabber that can reach a speed of 10 knots on the open sea and carries a 29,000-gallon fuel capacity. Being a crabbing vessel, it sports all the required deck gear for crabbing, including a crab block, a tendering set-up, and a deck crane. Over its four decades of service, the Seabrooke has been upgraded several times and remains a beast on the sea.

In Deadliest Catch, the Seabrooke was under the command of Captain Scott Campbell for four seasons, and Captain Campbell remains one of the most loved captains to feature on the show. Campbell is a veteran crabber and had spent many years at sea before Discovery Channel discovered him and offered him a spot on Deadliest Catch. In 2018, however, Campbell was forced to step back from his rough life on the high seas due to a back injury; he now owns his own company, Cordova Coolers, which sells a line of foam-injected roto-molded coolers designed by him. Campbell's retirement from crabbing has also had an effect on the fate of the Seabrooke, which, as of 2019, was up for sale.

The second vessel on which Reyes served is known as the Cape Caution. Since being built in 1983, the Cape Caution has spent almost four decades roaming the high seas in search of the deadliest (and yummiest) catch. The fishing vessel remains in active service to this day.

In Deadliest Catch, the Cape Caution features from Seasons 9 through 12 under the command of seafaring veteran Captain "Wild Bill" Wichrowski. After serving four years in the US Navy, Bill was lured to the Bering Sea by the monetary allure of king-crab fishing. A couple of seasons into his stint with Deadliest Catch, Bill brought his son Zack in to serve as greenhorn aboard the Cape Caution.

[Cover Picture Courtesy: Mahlon Reyes/Facebook]

