Seattle Researchers Claim to Have Built Artificial Intelligence That Has Morality – The Great Courses Daily News

By Jonny Lupsha, Current Events Writer

Due to computational programming, artificial intelligence may seem like it understands issues and has a sense of morality, but philosophically and scientifically, is that possible? Photo by PopTika / Shutterstock

Many questions have arisen since the advent of artificial intelligence (AI), even in its most primitive incarnations. One philosophical point is whether AI can actually reason and make ethical decisions in an abstract sense, rather than one deduced by coding and computation.

For example, if you program into an AI that intentionally harming a living thing without provocation is bad and not to be done, will the AI understand the idea of bad, or why doing so is bad? Or will it abstain from the action without knowing why?

Researchers from a Seattle lab claim to have developed an AI machine with its own sense of morality, though the answers it gives only lead to more questions. Are its morals only a reflection of those of its creators, or did it create its own sense of right and wrong? If so, how?

Before his unfortunate passing, Dr. Daniel N. Robinson, a member of the philosophy faculty at Oxford University, explained in his video series Great Ideas of Psychology that the strong AI thesis may be asking relevant questions to solve the mystery.

"Imagine," Dr. Robinson said, "if someone built a general program to function that way, so the program could provide expert judgments on cardiovascular disease, constitutional law, trade agreements, and so on." If the programmer could then have the program perform these tasks in a way indistinguishable from human experts, the position of the strong AI thesis is that its programmers have conferred on it an expert intelligence.

The strong AI thesis suggests that there can be computational processes whose very operation would be sufficient to constitute intentionality. Intentionality means making a deliberate, conscious decision, which in turn implies reasoning and a sense of values. However, is that really possible?

"The incompleteness theorem, Gödel's theorem, says that any formal system is incomplete in that it will be based on, it will require, it will depend on a theorem or axiom, the validity of which must be established outside the system itself," Dr. Robinson said. "Gödel's argument is a formal argument, and it is true."

"What do we say about any kind of computational device that would qualify as intelligent in the sense in which the artificial intelligence community talks about artificial intelligence devices?"

Kurt Gödel developed this theorem with an apparent exception for human intelligence, liberating it from the limitations of his own theorem. In other words, Gödel believed there must be something about human rationality and intelligence that can't be captured by a formal system with the power to generate, say, an arithmetic.

"If you accept that as a general proposition, then what you would have to say is that human intelligence cannot be mimicked or modeled on purely computational grounds," Dr. Robinson said. "So, one argument against the strong AI thesis is that it's not a matter of time before it succeeds and redeems its promises. It will never succeed and redeem its promises for the simple reason that the intelligence it seeks to simulate, or model, or duplicate, is, in fact, not a computationally based [...] intelligence."

Should the mystery ever be solved, we may finally be able to answer Philip K. Dick's question: Do androids dream of electric sheep?

Edited by Angela Shoemaker, The Great Courses Daily


6 positive AI visions for the future of work – World Economic Forum

Current trends in AI are nothing if not remarkable. Day after day, we hear stories about systems and machines taking on tasks that, until very recently, we saw as the exclusive and permanent preserve of humankind: making medical diagnoses, drafting legal documents, designing buildings, and even composing music.

Our concern here, though, is with something even more striking: the prospect of high-level machine intelligence systems that outperform human beings at essentially every task. This is not science fiction. In a recent survey of leading computer scientists, the median estimate put a 50% chance on this technology arriving within 45 years.

Importantly, that survey also revealed considerable disagreement. Some see high-level machine intelligence arriving much more quickly, others far more slowly, if at all. Such differences of opinion abound in the recent literature on the future of AI, from popular commentary to more expert analysis.

Yet despite these conflicting views, one thing is clear: if we think this kind of outcome might be possible, then it ought to demand our attention. Continued progress in these technologies could have extraordinarily disruptive effects: it would exacerbate recent trends in inequality, undermine work as a force for social integration, and weaken a source of purpose and fulfilment for many people.

In April 2020, an ambitious initiative called Positive AI Economic Futures was launched by Stuart Russell and Charles-Édouard Bouée, both members of the World Economic Forum's Global AI Council (GAIC). In a series of workshops and interviews, over 150 experts from a wide variety of backgrounds gathered virtually to discuss these challenges, as well as possible positive artificial intelligence visions and their implications for policymakers.

Those included Madeline Ashby (science fiction author and expert in strategic foresight), Ken Liu (Hugo Award-winning science fiction and fantasy author), and economists Daron Acemoglu (MIT) and Anna Salomons (Utrecht), among many others. What follows is a summary of these conversations, developed in the Forum's report Positive AI Economic Futures.

Participants were divided on whether the end of traditional work would be a good thing or a bad thing. One camp thought that, freed from the shackles of traditional work, humans could use their new freedom to engage in exploration, self-improvement, volunteering, or whatever else they find satisfying. Proponents of this view usually supported some form of universal basic income (UBI), while acknowledging that our current system of education hardly prepares people to fashion their own lives free of any economic constraints.

The second camp in our workshops and interviews believed the opposite: traditional work might still be essential. To them, UBI is an admission of failure: it assumes that most people will have nothing of economic value to contribute to society, and that they can be fed, housed, and entertained mostly by machines but otherwise left to their own devices.

People will be engaged in supplying interpersonal services that can be provided, or which we prefer to be provided, only by humans. These include therapy, tutoring, life coaching, and community-building. That is, if we can no longer supply routine physical and mental labour, we can still supply our humanity. For these kinds of jobs to generate real value, we will need to be much better at being human, an area where our education system and scientific research base are notoriously weak.

So, whether we think that the end of traditional work would be a good thing or a bad thing, it seems that we need a radical redirection of education and science to equip individuals to live fulfilling lives or to support an economy based largely on high-value-added interpersonal services. We also need to ensure that the economic gains born of AI-enabled automation will be fairly distributed in society.

One of the greatest obstacles to action is that, at present, there is no consensus on what future we should target, perhaps because there is hardly any conversation about what might be desirable. This lack of vision is a problem because, if high-level machine intelligence does arrive, we could quickly find ourselves overwhelmed by unprecedented technological change and implacable economic forces. This would be a vast opportunity squandered.

For this reason, the workshop attendees and interview participants, from science-fiction writers to economists and AI experts, attempted to articulate positive visions of a future where Artificial Intelligence can do most of what we currently call work.

These scenarios represent possible trajectories for humanity. None of them, though, is unambiguously achievable or desirable. And while there are elements of important agreement and consensus among the visions, there are often revealing clashes, too.

The economic benefits of technological progress are widely shared around the world. The global economy is 10 times larger because AI has massively boosted productivity. Humans can do more and achieve more by sharing this prosperity. This vision could be pursued by adopting various interventions, from introducing a global tax regime to improving insurance against unemployment.

Large companies focus on developing AI that benefits humanity, and they do so without holding excessive economic or political power. This could be pursued by changing corporate ownership structures and updating antitrust policies.

Human creativity and hands-on support give people time to find new roles. People adapt to technological change and find work in newly created professions. Policies would focus on improving educational and retraining opportunities, as well as strengthening social safety nets for those who would otherwise be worse off due to automation.


Society decides against excessive automation. Business leaders, computer scientists, and policymakers choose to develop technologies that increase rather than decrease the demand for workers. Incentives to develop human-centric AI would be strengthened and automation taxed where necessary.

New jobs are more fulfilling than those that came before. Machines handle unsafe and boring tasks, while humans move into more productive, fulfilling, and flexible jobs with greater human interaction. Policies to achieve this include strengthening labour unions and increasing worker involvement on corporate boards.

In a world with less need to work and basic needs met by UBI, well-being increasingly comes from meaningful unpaid activities. People can engage in exploration, self-improvement, volunteering or whatever else they find satisfying. Greater social engagement would be supported.

The intention is for this report to start a broader discussion about what sort of future we want and the challenges that will have to be confronted to achieve it. If technological progress continues its relentless advance, the world will look very different for our children and grandchildren. Far more debate, research, and policy engagement are needed on these questions; they are now too important for us to ignore.

Written by

Stuart Russell, Professor of Computer Science and Director of the Center for Human-Compatible AI, University of California, Berkeley

Daniel Susskind, Fellow in Economics, Oxford University, and Visiting Professor, King's College London

The views expressed in this article are those of the author alone and not the World Economic Forum.


Global AI (Artificial Intelligence) Market Report 2021: Ethical AI Practices and Advisory will be Incorporated in AI Technology Growth Strategy to…

DUBLIN, Nov. 25, 2021 /PRNewswire/ -- The "Future Growth Potential of the Global AI Market" report has been added to ResearchAndMarkets.com's offering.

Artificial intelligence (AI) is transforming organizations, industries, and the technology landscape. The world is moving to the increased adoption of AI-powered smart applications/systems, and this trend will increase exponentially over the next few years. AI technologies are maturing, and the need to leverage their capabilities is becoming a CXO priority.

As businesses make AI part of their core strategy, the transformation of business functions, measures, and controls to ensure ethical best practices will gain importance. The implementation and the governance of ethical AI practices will become a priority and a board-level concern.

The deployment of AI solutions that are ethical (from a regulatory and a legal standpoint), transparent, and without bias will become essential. As governments and industry bodies across the world articulate AI regulations, AI companies must establish their ethical frameworks until roadmaps are clearly defined.

The operationalization of ethical AI principles is challenging for enterprises, given the large volumes of user-centric data that need to be processed, the breadth of use-cases, the regulatory variations in operating markets, and the diverse stakeholder priorities.

This also opens up opportunities for technology vendors and service providers. To effectively partner with enterprises and monetize these opportunities, ICT providers need to assess potential areas impacting AI ethics and evaluate opportunities across the people-process-technology spectrum.

Forward-thinking technology and service companies, including large ICT providers and start-ups, are working with enterprises and industry stakeholders to leverage potential opportunities. Ethical challenges will continue to be discovered and remediated to create sustained growth in potential advisory services.

As enterprises define goals, values, strategic outcomes, and key performance metrics, the time is right for technology companies to strategically partner with enterprises in the detection and the mitigation of ethical AI concerns.

Key Topics Covered:

1. Strategic Imperatives

2. Growth Environment

3. Growth Opportunity Analysis

4. Growth Opportunity Universe

For more information about this report visit https://www.researchandmarkets.com/r/l7isqw



Defining what’s ethical in artificial intelligence needs input from Africans – The Conversation CA

Artificial intelligence (AI) was once the stuff of science fiction. But it's becoming widespread. It is used in mobile phone technology and motor vehicles. It powers tools for agriculture and healthcare.

But concerns have emerged about the accountability of AI and related technologies like machine learning. In December 2020 a computer scientist, Timnit Gebru, was fired from Google's Ethical AI team. She had previously raised the alarm about the social effects of bias in AI technologies. For instance, in a 2018 paper, Gebru and another researcher, Joy Buolamwini, had shown that facial recognition software was less accurate at identifying women and people of colour than white men. Biases in training data can have far-reaching and unintended effects.

There is already a substantial body of research about ethics in AI. This highlights the importance of principles to ensure technologies do not simply worsen biases or even introduce new social harms. As the UNESCO draft recommendation on the ethics of AI states:

We need international and national policies and regulatory frameworks to ensure that these emerging technologies benefit humanity as a whole.

In recent years, many frameworks and guidelines have been created that identify objectives and priorities for ethical AI.

This is certainly a step in the right direction. But it's also critical to look beyond technical solutions when addressing issues of bias or inclusivity. Biases can enter at the level of who frames the objectives and balances the priorities.

In a recent paper, we argue that inclusivity and diversity also need to be at the level of identifying values and defining frameworks of what counts as ethical AI in the first place. This is especially pertinent when considering the growth of AI research and machine learning across the African continent.

Research and development of AI and machine learning technologies is growing in African countries. Programmes such as Data Science Africa, Data Science Nigeria, and the Deep Learning Indaba with its satellite IndabaX events, which have so far been held in 27 different African countries, illustrate the interest and human investment in the fields.

The potential of AI and related technologies to promote opportunities for growth, development and democratisation in Africa is a key driver of this research.

Yet very few African voices have so far been involved in the international ethical frameworks that aim to guide the research. This might not be a problem if the principles and values in those frameworks have universal application. But it's not clear that they do.

For instance, the European AI4People framework offers a synthesis of six other ethical frameworks. It identifies respect for autonomy as one of its key principles. This principle has been criticised within the applied ethical field of bioethics. It is seen as failing to do justice to the communitarian values common across Africa. These focus less on the individual and more on community, even requiring that exceptions are made to upholding such a principle to allow for effective interventions.

Challenges like these, or even acknowledgement that there could be such challenges, are largely absent from the discussions and frameworks for ethical AI.

Just like training data can entrench existing inequalities and injustices, so can failing to recognise the possibility of diverse sets of values that can vary across social, cultural and political contexts.

In addition, failing to take into account social, cultural and political contexts can mean that even a seemingly perfect ethical technical solution can be ineffective or misguided once implemented.

For machine learning to be effective at making useful predictions, any learning system needs access to training data. This involves samples of the data of interest: inputs in the form of multiple features or measurements, and outputs which are the labels scientists want to predict. In most cases, both these features and labels require human knowledge of the problem. But a failure to correctly account for the local context could result in underperforming systems.
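The feature-and-label setup described above can be sketched in a few lines of Python. This is a hypothetical toy example (a one-nearest-neighbour predictor over made-up building measurements), not code from any system discussed in the article:

```python
# Toy supervised-learning setup: inputs are feature vectors, outputs are the
# labels we want to predict. Hypothetical data and names for illustration only.

def predict_1nn(train_features, train_labels, query):
    """Predict a label for `query` from its single nearest training sample."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_features)),
               key=lambda i: sq_dist(train_features[i], query))
    return train_labels[best]

# Features: e.g. [roof_area_m2, surface_reflectance]; labels: structure type.
features = [[12.0, 0.9], [11.5, 0.8], [45.0, 0.2], [50.0, 0.3]]
labels = ["metal-roof dwelling", "metal-roof dwelling",
          "thatched structure", "thatched structure"]

print(predict_1nn(features, labels, [48.0, 0.25]))  # → thatched structure
```

The point of the sketch is the article's: the predictor is only as good as the human choices baked into the features and labels, so data that misrepresents the local context yields misleading predictions.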

For example, mobile phone call records have been used to estimate population sizes before and after disasters. However, vulnerable populations are less likely to have access to mobile devices. So, this kind of approach could yield results that arent useful.

Similarly, computer vision technologies for identifying different kinds of structures in an area will likely underperform where different construction materials are used. In both of these cases, as we and other colleagues discuss in another recent paper, not accounting for regional differences may have profound effects on anything from the delivery of disaster aid, to the performance of autonomous systems.

AI technologies must not simply worsen or incorporate the problematic aspects of current human societies.

Being sensitive to and inclusive of different contexts is vital for designing effective technical solutions. It is equally important not to assume that values are universal. Those developing AI need to start including people of different backgrounds: not just in the technical aspects of designing data sets and the like but also in defining the values that can be called upon to frame and set objectives and priorities.


Artificial intelligence reveals the secrets of the spider web – Digital Journal

Nephila clavata, a golden orb weaver. Image by Kinori via Wikimedia / Public Domain

Despite years of in-depth study, there is still much to learn about the spider's web, from its intricate patterns to its tensile strength and optoelectronic architectures. Webs are highly complex structures; some spider webs actively spring towards prey as a result of electrically conductive glue spread across their surface.

Webs also contain multiple silk types, with viscid silk (stretchy, wet and sticky) and dragline silk (stiff and dry) being responsible for the strength of the web.

In newly reported research, scientists from Johns Hopkins University have discovered how spiders build webs, a finding made possible through a combination of night vision and artificial intelligence.

Night-vision recording enabled the researchers to track and record each movement of spiders working in the dark. This helped them develop an algorithm that has led to an understanding of how spiders create webs: structures of elegance, complexity, and geometric precision.

The study reveals that choreographed web building is based on the spider's sense of touch (vision is not a major feature of the nocturnal process). This requires innate behaviors and finely tuned motor skills.

The first wave of the study involved assessing six spiders. Millions of individual leg actions were captured and then assessed using machine vision software designed specifically to detect each individual limb movement.

From subsequent examination of different spiders, it became apparent that web-making behaviors are remarkably similar across spiders. Assessing the patterns with AI enabled the researchers to predict the part of a web a spider was working on just from seeing the position of a leg. This led the scientists to propose that a rule-based system for web building was at play, even though the individual webs of different spider species differed.

This led to the research conclusion that the rules of web-building are encoded in the brains of all species of spider.

The research has led to a web-building playbook that brings new understanding of how web building, across the period of many hours, occurs. The researchers have produced a video that explains more aspects of the research.

Moving beyond the first phase of the research, the scientists intend to conduct experiments using mind-altering drugs on different spiders. The aim here is to determine the specific circuits in the spiders brain that are responsible for the various stages of web-building.

The research is presented in the journal Current Biology, in a paper titled "Distinct movement patterns generate stages of spider web building."


Artificial intelligence used to count tens of thousands of puffins – The Scotsman

For years, the answer was by hard graft, with rangers checking burrows and nests for birds and eggs, and observers forced to sit for hours at a time armed with clipboards and no little patience.

But in a marriage of nature and cutting edge technology, the arduous task of establishing the puffin population on the Isle of May is being carried out using artificial intelligence, machine learning, and image recognition software.

Those behind the project believe it could help minimise disruption to the birds' breeding and feeding habits, particularly when faced with developments such as offshore windfarms.

The initiative uses four cameras placed in stainless steel boxes at various points of the island in the Firth of Forth in order to capture live footage of the puffins. Each box has a condensation heater as well as a backup power supply.

The footage is then stored and processed using an artificial intelligence program which is capable of spotting the puffins and tracking them frame by frame. Each bird is assigned a unique identifier, allowing the software to follow its movements and establish the overall number of puffins.

The scheme is the brainchild of SSE Renewables, which wants to find out if its Beatrice windfarm, situated eight miles off the coast of Wick, is impacting on the flight paths of the birds as they travel to gather food to take back to their burrows.

It teamed up with the tech giant, Microsoft, and Avanade, a US-based artificial intelligence specialist, to roll out the tracking program, which monitored the birds as they landed to breed in late March and early April, before returning to sea in August.

With around 80,000 puffins recorded on the island in March last year, the data is currently being analysed, but there are hopes the approach could be replicated in order to monitor the habitats of other species.

Simon Turner, chief technology officer of data and AI at Avanade, said the use of technology made the counting process more efficient and less invasive.

"Using cameras and AI, we are now able to count the number of birds and monitor their burrows all day, every day, without going near them," he said.

"The AI will draw a box around each puffin it spots and give them unique tags like 001, 002, or 003. When the camera moves to the next frame, it understands that the puffin closest to a particular box is the same one."
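The frame-to-frame matching Turner describes amounts to nearest-neighbour tracking: each detection in a new frame inherits the tag of the closest box from the previous frame, and anything too far from every known box gets a fresh tag. A minimal hypothetical sketch of that idea (not Avanade's actual code; all names and the distance threshold are assumptions):

```python
# Minimal nearest-neighbour tracker: a detection in the current frame inherits
# the tag of the closest box centre from the previous frame; unmatched
# detections are given fresh tags like 001, 002, ... (hypothetical sketch).

def assign_tags(prev, detections, next_id, max_dist=50.0):
    """prev: {tag: (x, y)} box centres from the last frame.
    detections: list of (x, y) centres detected in the current frame.
    Returns (new {tag: centre} mapping, updated next_id counter)."""
    current = {}
    unused = dict(prev)  # tags not yet matched this frame
    for (x, y) in detections:
        if unused:
            tag = min(unused, key=lambda t: (unused[t][0] - x) ** 2 +
                                            (unused[t][1] - y) ** 2)
            d2 = (unused[tag][0] - x) ** 2 + (unused[tag][1] - y) ** 2
            if d2 <= max_dist ** 2:   # close enough: same bird, keep its tag
                current[tag] = (x, y)
                del unused[tag]
                continue
        current[f"{next_id:03d}"] = (x, y)  # new bird: tag 001, 002, ...
        next_id += 1
    return current, next_id

frame1, n = assign_tags({}, [(10, 10), (200, 40)], next_id=1)
frame2, n = assign_tags(frame1, [(14, 12), (205, 38)], n)
print(frame2)  # the same two tags follow the puffins between frames
```

Real systems layer more on top (motion prediction, handling birds that leave or re-enter the frame), but the core matching step is this simple.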

James Scobie, SSE Renewables' digital delivery lead for the project, said: "We're still looking at the data but, through the trial, we have already made some interesting findings."

"The land was barren when we first visited the island in February. However, as spring progressed into summer, the array of flowers that had grown tricked the AI. We learned that it would be important to have seasonal training of the AI to maintain the level of accuracy we expect."

"We found that, on average, the highest level of puffin activity in a colony is observed at dawn and at dusk, but this is dependent on tidal times and fishing conditions. Adult puffins will not turn down a good opportunity to catch food for their young."


Artificial intelligence and mobility, who’s at the wheel? – Innovation Origins

Last week, the Dutch Scientific Council for Government Policy (WRR) found that the Netherlands is not well prepared for the consequences of artificial intelligence (AI). In Challenge AI, The New Systems Technology (in Dutch), the council calls for regulation of technology and data, its use, and social implications. And rightly so. Machines will have more computing power than humans in a few decades. If devices with artificial intelligence then start to think and decide for themselves, it is to be hoped that they will observe a number of commandments.

AI is also entering mobility, and the problems the WRR refers to are also at play there. The most imaginative AI appearance in mobility is the autonomous car. It is potentially much safer and more comfortable, but there are tricky liability issues if an accident occurs. Should you as a human always be able to override the system? And what would it take for a self-driving car to interpret the law flexibly when necessary? This is something we, as humans, do every minute in daily traffic, precisely in the service of safety.

One day, when I was driving along with traffic at 120 km/h on the E25 through the Ardennes, my adaptive cruise control suddenly slowed the car to 70 km/h because road workers had forgotten to remove a speed sign. Fortunately, I was able to override it and not adhere to that officially legal speed limit. Despite this example, however, in the future we should not start allowing extremely smart machines to be flexible with the rules, just like us, without any ethical or moral framework. That could lead to dystopian states where machines, perhaps unintentionally, start endangering humanity.


But the AI issues in mobility go far beyond the self-driving car. What if Google or TomTom takes over traffic management from the road authority? What if the big tech giants take over the entire planning of public transport once people plan their journeys solely through their services? What if those platforms, after a friendly free initial period, start abusing their achieved monopolies? Who will guarantee availability and safety? Cab services like Uber are more popular than the classic taxi, but who can oblige them, as with regulated cab transport, to also accept guide dogs and wheelchairs, for example, so that a significant part of society is not left aside?

Artificial intelligence will make mobility better, safer, and more comfortable. But these systems need ethical and moral frameworks within which they can achieve this. In the Netherlands, companies and knowledge institutions have already united in the Dutch AI Coalition. They received €276 million from the growth fund earlier this year to strengthen the Dutch position internationally. Wisely, the first part of that goes to so-called ELSA labs (Ethical, Legal & Societal Aspects of AI), in which consortia focus on these aspects. Just as in mobility, AI will help steer other areas as well, but we still want to be able to take the wheel ourselves.

Maarten Steinbuch and Carlo van de Weijer alternately write this weekly column, originally published (in Dutch) in FD. Did you like it? There's more to enjoy: a book with a selection of these columns has just been published by 24U and distributed by Lecturis.


[Webinar] Balancing Compliance with AI Solutions – How Artificial Intelligence Can Drive the Future of Work by Enabling Fair, Efficient, and Auditable…

December 7th, 2021

2:00 PM - 3:00 PM EDT

*Eligible for HRCI and SHRM recertification credits

With the expansion of Talent Acquisition responsibilities and the complex landscape created by the hiring recovery, talent redeployment, the great resignation, and DE&I initiatives, there has never been a greater need for intelligent augmentation and automation solutions for recruiters, managers, and sourcers. There is also growing awareness of problematic artificial intelligence solutions being used across the HR space, and of the perils of efficiency and effectiveness solutions that come at the cost of fairness and diversity goals. These concerns are compounded by increased inquiries from employees and candidates about the AI solutions used to determine or influence their careers, particularly what's inside the AI and how it is tested for bias. Join this one-hour webinar hosted by HiredScore CEO & Founder Athena Karp as she shares:

Speakers

Athena Karp

CEO & Founder @HiredScore

Athena Karp is the founder and CEO of HiredScore, an artificial intelligence HR technology company that powers the global Fortune 500. HiredScore leverages the power of data science and machine learning to help companies reach diversity and inclusion goals, adapt for the future of work, provide talent mobility and opportunity, and HR efficiencies. HiredScore has won best-in-class industry recognition and honors for delivering business value, accelerating HR transformations, and leading innovation around bias mitigation and ethical AI.


Can artificial intelligence be harnessed to protect the public from random assailants? – The Japan Times

On the evening of Oct. 31, 25-year-old Fukuoka native Kyota Hattori, wearing makeup and a purple and green ensemble to emulate the villainous Joker of Batman franchise fame, boarded a Keio Line train at Keio-Hachioji Station, heading for central Tokyo. After spending half an hour meandering around Shibuya, which was packed with costumed revelers feting Halloween, Hattori headed back toward Hachioji, but reversed direction again at Chofu, where he changed to a Shinjuku-bound limited express train.

Soon after the doors closed, according to eyewitness reports, he removed a survival knife and liquids from a backpack. When a 72-year-old male passenger tried to intervene, Hattori allegedly stabbed the man and proceeded to pursue fleeing passengers, splashing them with lighter fluid, which he then ignited. The stabbing victim was hospitalized in critical condition, and 16 other passengers suffered burns and smoke inhalation.

Videos captured on smartphones showed desperate passengers struggling to squeeze out the train's partially opened windows onto the platform of Kokuryo Station.

"I'd failed at work, my friendships didn't work out and I wanted to die," Shukan Jitsuwa (Nov. 25) reported Hattori as having told police. "Since I couldn't die on my own, I wanted to carry out a mass murder on Halloween and get the death penalty."

He had only a few thousand yen on his person at the time of his arrest, after reportedly telling his interrogators he had spent about ¥200,000 on his Joker costume.

Yukan Fuji (Nov. 12) categorized Hattori's act as essentially a copycat crime, inspired by a similar rampage that occurred in August on an Odakyu Line train.

"Watching the news reports of other incidents may have created a sense of sympathy for the criminal, wanting to create a commotion themselves, or perhaps a sense of frustration that someone had beaten them to it," explained Yasuyuki Deguchi, a professor of criminal psychology at Tokyo Future University. "Many people are usually not good at taking action on their own, and don't consider the risk and cost of crime."

One reason trains are being singled out for such acts is that they are moving enclosed spaces: if the driver and conductor aren't informed via the emergency intercom that something has occurred, they can't take action to halt the train and permit passengers to evacuate.

Then what can rail companies do, proactively, to protect their passengers?


Toyo University criminologist Masayuki Kiriu told Aera (Nov. 15) that if the rail companies were to broadcast announcements such as "Let's be cautious so as not to be confronted by crime" over the trains' public address systems, it might deter potential criminals.

Increased passenger alertness is also desirable. Kiriu was critical of people's habitual gazing at their smartphones, which distracts them from awareness of their surroundings.

"How about rail companies appealing to passengers directly, using visual images or messages?" he suggested. "I think it might prove beneficial as symptomatic treatment" (i.e., therapy that eases the symptoms without addressing the basic cause of a condition).

In the most extreme cases, however, passengers may be forced to defend themselves. Takeshi Nishio, chief instructor in the Israeli unarmed combat skill of Krav Maga at the MagaGYMs in Akasaka and Roppongi, pointed out that even the tip of a closed umbrella can be aimed at an assailant's throat or eye, or used to strike the wrist of a person brandishing a knife. Flinging keys or a smartphone into an attacker's face can also be effective.

Nikkan Gendai (Nov. 18) believes the day may be approaching when artificial intelligence can be harnessed to protect the public from random assailants.

A Tokyo-based company named Earth Eyes already markets systems aimed at shoplifters. NEC Corp.'s facial recognition technology, in addition to identifying wanted criminals from a database, can be tweaked to spot other suspicious behavior patterns, such as loitering (particularly for long periods), shuffling through a crowd or standing in place. Likewise, NTT Docomo has been collaborating with Fujitsu to develop a security system.

Other suspicious behavior might include carrying a bag or shouldering a backpack (which might be used to carry weapons), wearing a face mask (a signal obviously negated during the current pandemic) or headwear, and wearing running shoes (they're preferred over street shoes or sandals by somebody who might need to make a quick getaway).
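The loitering pattern mentioned above reduces to a simple rule over a tracked person's positions: if someone stays within a small radius for too long, raise a flag. Here is a minimal sketch in Python with hypothetical thresholds; real deployments tune these per camera and site, and the function name and parameters are illustrative, not any vendor's actual API:

```python
import math

# Hypothetical thresholds -- real systems tune these per location.
LOITER_RADIUS_M = 3.0     # person stays within this radius...
LOITER_SECONDS = 120.0    # ...for at least this many seconds

def is_loitering(track):
    """track: list of (timestamp_s, x_m, y_m) samples for one tracked person.
    Returns True if the person stays within LOITER_RADIUS_M of some
    starting point for LOITER_SECONDS or longer."""
    for i, (t0, x0, y0) in enumerate(track):
        for t, x, y in track[i:]:
            if math.hypot(x - x0, y - y0) > LOITER_RADIUS_M:
                break  # left the radius; try a later window start
            if t - t0 >= LOITER_SECONDS:
                return True
    return False

# Someone standing in place for ~3 minutes trips the rule;
# someone walking steadily through the frame does not.
standing = [(t * 10.0, 0.5, 0.5) for t in range(20)]
walking = [(t * 10.0, t * 10.0, 0.0) for t in range(20)]
print(is_loitering(standing), is_loitering(walking))  # True False
```

The subtlety in practice is not the rule itself but reliably tracking one person across frames and crowds, which is where the machine-learning systems described above do the heavy lifting.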

It's hypothetical of course, but had an AI system been in place, the Kyoto Animation arson attack of July 2019 (the largest mass murder in Japan's modern history, with 36 people killed and another 34 injured) might have been thwarted. Perpetrators are known to carefully determine the time and place of the crime beforehand, so observing passers-by might have prevented the crime. (The attacker had been seen prowling near the building several days before the incident, walking between parked cars for no apparent reason.)

While most news outlets expressed outrage, a few articles balanced their reports with a degree of empathy toward Hattori, describing him as a himote otoko (an unpopular man), a term that has parallels in the "incels" (involuntarily celibate males) who inhabit the web.

In a story titled "The criminal in the Keio Line attack was a monster created by a disparate society," Jitsuwa Bunka Taboo (January) identified such random attackers as single males, many coming from impoverished families, who work at irregular jobs in the service industry, typically at wages so low that even if they live with their families they are unable to save money. Others, once rejected by a girlfriend, never recover their mental equilibrium.

In such a situation, Jitsuwa Bunka Taboo's writer concludes, "we don't know when another Hattori-like character will attempt to kill again. We need to build a society in which even losers and unpopular people can live happily in their own way, as opposed to one where the winners get everything. That will be the only way to ensure we can live in safety."

Big in Japan is a weekly column that focuses on issues being discussed by domestic media organizations.


The Future of Artificial Intelligence Autonomous Killing Machines: What You Need to Know About Military AI – SOFREP

Artificial intelligence, or AI, has created a lot of buzz, and rightfully so. Anyone remember Skynet? If so, drop a comment. Ok, back to our regular programming. Military AI is no different. From self-driving vehicles to drone swarms, military AI will be used to increase the speed of operations and combat effectiveness. Let's look at the future of military AI, including some ethical implications.

Military AI is not new. Those who follow the field know it has been around for years, just not talked about, for reasons you can imagine. And it has been evolving.

These days, military AI is helping with complex tasks such as target analysis and surveillance in combat.

Another great use for military AI in the future is to have it work alongside combat warfighters. AI could be used for a tactical advantage because it would be able to predict an enemy's next move before it happens. However, a good question to ask ourselves is: can China's AI outperform ours? Based on recent hacks by China on U.S. infrastructure, this seems like a concern we should take very seriously.

Artificial intelligence (AI) is any machine or computer-generated intelligence that is intended to emulate the natural intelligence of humans. AI is generated by machines, but its avenues of application are limitless, and it's no surprise that the military has taken an interest in this technology.

AI can be used to identify targets on the battlefield. Instead of relying on human intelligence, drones will be able to scan the battlefield and identify targets on their own.

This will help to reduce the number of warfighters on the battlefield, which will in turn save thousands of lives.

The future of military AI is bright until it isn't. Let's be real, we've all seen the Terminator movies.

Military AI has the potential to increase combat effectiveness and reduce the workforce. It will be used to autonomously pilot vehicles, respond to threats in the air, and conduct reconnaissance and guide smart weapon systems. It will help with strategic planning and even provide assistance during ground combat.

Imagine for a second, the AI version of the disgruntled E-4!

AI is not only beneficial to combat operations; it will also help with those boring jobs in logistics and supply chains. It can be used to predict demand for supplies and to find the most efficient routes for transport.

While there are many benefits of military AI, there are also potential risks. Some of these risks include military AI being hacked, weaponized, or misused in ways not intended by its creators. China or Putin's Russia, anyone?

The future of military AI is kind of fuzzy, which could be good or bad.

In the near future, AI will be a part of military operations. It will be used in the field for combat and reconnaissance. In fact, AI-powered drones have been used in both battlefields and disaster zones, from Afghanistan to the Fukushima Nuclear Plant.

AI will be used in a variety of ways. It will be a part of combat operations, reconnaissance, and training. For example, AI can be used to build a 3D map of a combat zone. This would allow military personnel to plan their operations based on this map.

Another example of how AI can be used is in the training of new recruits. The military could use AI to simulate possible combat scenarios and determine which recruits are most likely to succeed in these scenarios. In this way, the military could train recruits using AI before deploying them to a combat zone, and we think thats pretty cool.

Perhaps surprisingly, AI may also be used in negotiation scenarios. Military negotiators could use AI to predict and prepare for negotiation outcomes and then use that data to plan their next steps in negotiations, such as predicting what response an opponent might have.

These are just some of the examples of how AI will be used in military operations in the future.

It is no secret that current warfare needs to be reconsidered. What we've done in the past isn't working. Afghanistan, anyone? Bueller? With the emergence of new technologies, what we know about warfare needs to be reconsidered as well.

The U.S. Department of Defense has announced a major initiative to invest in artificial intelligence for a range of military operations, from predicting the weather to detecting and tracking enemies.

It will have a huge impact on both the speed and combat effectiveness of operations, as well as the ethical implications of what we leave behind for future generations. Military AI is not an issue that will go away anytime soon. And as it becomes more prevalent, it will create a future that is quite different from what we know now.

AI will open up many possibilities for military operations in the future. For example, AI has the potential to take on tasks that are not human-safe. AI will be able to analyze data at a faster rate than humans, which will provide a tactical advantage.

If autonomous tanks are also developed, they could easily take over for soldiers on the ground in the same way that drones have taken over for pilots in the air.

AI can also be used to better coordinate drone swarms. The use of drones in the military has become more popular, and these robots can be used to take on many different tasks. For example, swarms of drones could be used to both attack and defend.

However, these advancements come with ethical implications. For example, autonomous weapons could potentially kill without human input. They could be used indiscriminately and quite possibly create more civilian casualties than conventional weapons.

So, what does this all mean? The future of military AI is unclear and may be full of ethical dilemmas. However, it seems like AI is here to stay and will continue to provide both benefits and hindrances.

The future of military AI is now. We are already seeing the effects of military AI in operations today. For example, Lockheed Martin's Aegis system can control multiple air defense systems simultaneously. This means that the Aegis system can monitor more than 100 targets at one time.

However, AI will have a much more significant impact on the military in the near future. AI will have a profound effect on combat operations, logistics, and training. Combat operations will be faster and more precise because AI can handle complex tasks more quickly than humans. Logistics will be more efficient because AI systems will be able to better coordinate the transport of supplies. And training will be more effective because AI can provide personalized instruction to soldiers.

But it may not be too long before we see autonomous killing machines. Russian President Vladimir Putin has indicated an interest in developing robot fighting machines with artificial intelligence. Remember our previous Terminator comment? And other countries are developing autonomous lethal machines, too. Their names rhyme with Russia and China.

There are many ethical implications regarding the use of military AI. For example, there is the risk of AI taking control of military assets, like drones. If one AI-controlled drone gets hacked, it could cause mass destruction.

Another ethical issue is the use of autonomous weapons systems. Many people argue that these systems are immoral because they don't give soldiers the chance to defend themselves.

The use of AI in military operations will continue to grow in the coming years. It's important to keep in mind the ethical implications that come with this growth.

Technology always has a way of evolving and improving. That's one of its best features. But not all innovation is good.

This means that AI will be used to fight wars, which is a cause for concern.

In the past, humans have had to make difficult decisions in times of war. But with AI, that decision could be made without the input of a human moral compass.

That's why there's debate over whether or not there should be limits on what can be done with military AI. It usually comes down to two camps: Elon Musk's camp of "AI will destroy us," and the more optimistic Tony Robbins camp of "AI will save us from ourselves."

An increasing number of people believe that AI should be regulated (Elon is one, and I tend to agree with him) and that there should be a ban on autonomous weapons. These arguments center on the idea that without a human in the decision-making process, there is no accountability. In fact, rewind that: there's often no accountability within the current government. Afghanistan pullout, anyone?

In light of these controversies, what does the future hold?

The future of military AI is unclear, but it will be a major force in future wars. There are ethical implications that we need to think about and try to regulate now before Skynet takes over and makes slaves of us all.


Who Says AI Is Not For Women? Here Are 6 Women Leading AI Field In India – SheThePeople

"I don't see tech or AI as hostile to women. There are many successful women in AI, both at the academic and industry levels," says Ramya Joseph, the founder of the AI-based start-up Pefin, the world's first AI financial advisor. "Even on my team at Pefin, women hold senior technology positions. There tends to be a misconception that tech attracts a geeky or techy kind of personality, which is not the case at all."

Joseph has a bachelor's degree in computer science and master's degrees in artificial intelligence, machine learning and financial engineering. As a wife, mother and daughter, Joseph could closely relate to the need for financial advice to plan for the future. She came up with the idea of founding Pefin after her father lost his job and, for lack of sound financial advice, jeopardised his retirement plans. Navigating and solving his problems, Joseph realised that many people faced the same issue. Hence she came up with the idea of an AI-driven financial adviser.

No doubt artificial intelligence is one of the fastest-growing professional fields. As new inventions and developments knock at our doors, the relationship between humans and computers is being reassessed. With the expansion of AI, new skills and exceptional human labour are in high demand. But the problem is that despite the evolution in society, the gender gap is not shrinking. According to the World Economic Forum, only 22 per cent of AI professionals are women, a gender gap of around 72 per cent.

Despite this, many women are breaking the glass ceiling and reforming the field of artificial intelligence. Through their skills and leadership, these women are carving the path for other women to participate as AI professionals. So in this article, I am going to list some women AI professionals in India who are changing the gender dynamics through their excellence.

Amarjeet Kaur is a research scientist at TechMahindra. She has a PhD in Computer Science and Technology. Kaur specialises in research techniques and technologies like graph-based text analysis, latent semantic analysis and concept maps among others. She also has expertise in experimentation and field research, data collection and analysis and project management. She is known for her organisational skills and willingness to take charge.

Kaur has also worked with the Department of Science and Technology under its Women Scientist Scheme. As a part of the scheme, she helped develop a technique to automatically evaluate long descriptive answers. With more than ten years of research and teaching experience, Kaur has excellent academic credentials, which have earned her a gold medal and a topper's position at Mumbai University. Her course material has also been incorporated into Mumbai University's artificial intelligence and machine learning courses.

Sanghamitra Bandyopadhyay works at the Machine Intelligence Unit of the Indian Statistical Institute. She completed her PhD at the institute and served as its director from 2015 to 2020. Bandyopadhyay is also a member of the Science, Technology and Innovation Advisory Council of the Prime Minister of India (PM-STIAC). She specialises in fields like machine learning, bioinformatics, data mining, and soft and evolutionary computation.

She has been felicitated with several awards for her work, including the Bhatnagar Prize, the Infosys Prize, the TWAS Prize and the DBT National Women Bioscientist Award (Young). She has written around 300 research papers and has edited three books.

Ashwini Ashokan is the founder of MadStreetDen, an artificial intelligence company whose image recognition platform powers retail, education, health, media and more. Started in 2014, the venture is headquartered in California with offices across Chennai, Bangalore, Tokyo, London and more. She co-founded the platform along with her husband. Speaking to SheThePeople, Ashokan said, "It's only natural that the AI we build mimics what we've fed it, until it develops an agency of its own, which could be good or bad. As an industry, we need to think about what we're teaching our AI." She also added, "Every line of code we write, every feature we put in products, we need to ask ourselves: what effect does this have on the way the world will be interacting with it?"

Apurva Madiraju is a vice president at Swiss Re Global Business Solutions India in Bangalore, where she leads the data analytics and data science team of the audit function. As the leader, she is responsible for building machine learning and text analytics solutions to deal with audit compliance risk.

Madiraju flaunts 11 years of experience across diverse fields like artificial intelligence, data science, machine learning and data engineering. She has developed multiple AI and ML-driven solutions like ticket volume forecasting models, turn-around-time prediction solutions and more. She has worked across companies globally to lead the conceptualisation, development and deployment of many AI and ML-based solutions for enterprises.

With more than 20 years of experience as a data scientist, Bindu Narayan serves as a senior manager with Advanced Analytics and AI at the EY GDS Data and Analytics Practice, where she is the AI competency leader for EY's Global Delivery Services. She and her team offer virtual assistant solutions to clients across industries. Narayan has developed many innovative AI solutions and leads in the fields of machine learning, customer and marketing analytics, and predictive modelling. She completed her PhD at IIT Madras on modelling customer satisfaction and loyalty.


Top Artificial Intelligence Jobs in MNCs to Apply this Nov Weekend – Analytics Insight

Artificial intelligence has had an eventful decade so far, and with 2021 bringing more into play, the technology has cemented its place in every ecosystem. Business organizations in particular are enhancing their AI capabilities to streamline routine processes. From answering customer queries to powering autonomous vehicles, the influence of artificial intelligence is no joke. Critical fields like healthcare, education, and space are also adopting AI to refine their existing capabilities and drive innovation. Owing to this increasing usage, the demand for AI skills and AI professionals is growing as well. According to one report, hiring for artificial intelligence jobs has been rising at a rate of 74% annually. People who take up top artificial intelligence jobs get to explore more of the technology while also earning a high salary. However, improving AI skills and keeping up with emerging trends, tools, and technologies is important while working in the digital sphere. MNCs always look for artificial intelligence professionals who have both technical knowledge and business skills. Analytics Insight has listed the top artificial intelligence jobs that aspirants should apply for in MNCs today.

Location: Bengaluru, Karnataka

Roles and Responsibilities: As a team lead/consultant, artificial intelligence, at Accenture, the candidate will be aligned with the company's insights and intelligence verticals to help generate insights by leveraging the latest artificial intelligence and analytics techniques to deliver value to clients. He/she should also help Accenture apply its expertise in building world-class solutions, conquering business problems, and addressing technical challenges using AI platforms and technologies. In the company, the artificial intelligence team is responsible for the creation, deployment, and management of operations. The candidate will therefore be responsible for building AI solutions alongside experts, researchers, and platform engineers to come up with creative solutions. In the role, the candidate needs to analyze and solve moderately complex problems and create new solutions, leveraging and adapting existing methods and procedures.

Qualification:

Apply here for the job.

Location: Bengaluru, Karnataka

Roles and Responsibilities: The data scientist, artificial intelligence, at IBM will help transform the company's clients' data into tangible business value by analyzing information, communicating outcomes, and collaborating on product development. He/she will develop, maintain, evaluate, and test big data solutions, and will be involved in the design of data solutions using artificial intelligence-based technologies like H2O and TensorFlow. To deliver top-notch service, the candidate is expected to have skills in designing algorithms, implementing pipelines, validating model performance, and developing interfaces such as APIs. They are also responsible for algorithm design and implementation, including loading from disparate datasets and pre-processing using Hive and Pig.

Qualification:

Apply here for the job.

Location: Pune

Roles and Responsibilities: Through this role, Philips gives the selected candidate an opportunity to contribute to the AI solutions roadmap, with responsibility for the architecture of a range of AI solutions and platforms for DXR. He/she will contribute to the innovation of AI solutions, explore different internal and external providers, work on associated processes and procedures, participate in and lead the innovation studios, and develop and maintain the technology inventory while keeping the focus on execution. They should prepare proposals and alternatives, guiding toward the optimum balance of short- and long-term requirements.

Qualification:

Apply here for the job.

Location: Chennai

Roles and Responsibilities: As a data scientist- manufacturing intelligence at Pfizer Limited, the candidate is expected to provide expert advanced modeling and data analytics support to all teams in the manufacturing intelligence organization and support project execution at manufacturing sites. He/she should translate business requirements into tangible solution specifications and high-quality on-time deliverables. They should leverage data manipulation/transformation, model selection, model training, cross-validation, and deployment support at scale.

Qualification:

Apply here for the job.

Location: Bangalore (WFH during Covid)

Roles and Responsibilities: General Motors expects its senior/lead engineer- ME automation (artificial intelligence) to propose and implement high-impact data and analytic solutions that address business challenges across a variety of manufacturing business units. The candidate should plan and execute projects including direction and oversight of technical work of associate data scientists. They should work with diverse technical teams and provide data and analytic oversight to ensure project deliverables that fulfill business needs and timing.

Qualification:

Apply here for the job.


Job hunting nightmare: 1,000 plus job applications and still no offers – ABC Action News

ST. PETERSBURG, Fla. There have been plenty of news reports about labor shortages and businesses unable to fill positions throughout the pandemic. But there is another side of this story that hasn't gotten enough attention: millions of people looking for jobs who can't get hired because of online algorithms, artificial intelligence, and more.

ABC Action News reporter Michael Paluska sat down with St. Petersburg resident Elizabeth Longden. She showed us all of the jobs she's applied for on LinkedIn and Indeed. More than a thousand applications were filed on LinkedIn and more than 140 on Indeed.

"So, business data strategy, talent and culture recruiter, diversity, equity and inclusion specialist, human resources," Longden said as she named off a few of the jobs she's applied for. "There are 128 pages with eight applications per page."

"That's a lot of jobs," Paluska said.

"Yeah, a lot," Longden replied with a half-smile that was more of an acknowledgment of her job woes.

"How do you process 1,000 plus rejections?" Paluska asked.

"It's discouraging, and fortunately, there haven't been 1,000 rejections. Most of the places don't even get back to you one way or the other," Longden said. "So yeah, we're looking at less than that. But it's still a big, you know, it's a big confidence blow, especially when you hear, oh, there's a labor crisis. And nobody wants to work. And like, hi, I would like to work."

According to the Bureau of Labor Statistics, a record 4.4 million people quit their jobs in September. That's a new all-time high. So, you would think millions of openings would help Longden. But, that's not the case.

Longden has a college degree, an insurance license, and a decade of work experience in human resources. In May, like many Americans throughout this pandemic, she was laid off from her company. So she took about a month off to reset and started the search in her field as an operations specialist, people ops, HR, and business operations.

"Have you ever been in a hole where you lost a job, and you couldn't get another one in the past?" Paluska asked.

"Not where I had lost one and couldn't get another one. I'd had times where I'd moved, you know, and had had trouble finding a job for maybe a month or two. But I was always able to find something," Longden said.

In September, the Harvard Business School released a study called Hidden Workers: Untapped Talent. The study explains this lack-of-hiring phenomenon, and its lead author, Joseph Fuller, estimates that millions of Americans are in the same position as Longden.

"So, you have this system that systematically excludes people who may not check every box in the employer's description of what they're looking for, but can be highly qualified on multiple parameters, even those most important for job success, but they still get excluded," said Fuller, a professor of management practice at Harvard Business School. "What happens is, the employer, in setting up these filters and ranking systems, emphasizes some skills over others, intending to rely on two factors to make a decision."

The job search algorithms and artificial intelligence filter out candidates based on keywords before someone like Longden ever talks to a human being.

"And, the algorithms are unforgiving," Fuller said. "If you don't, if you don't have the right keywords, if you're just missing one of those attributes, you can get excluded from consideration even though you check every box on every other attribute they're looking for."
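The "missing one attribute" failure Fuller describes can be shown in a few lines. This toy keyword screen (hypothetical keywords, not how any real applicant-tracking system is actually built) rejects a resume that matches everything an employer asked for except one term:

```python
# Hypothetical required-keyword screen: every term must appear,
# so one missing keyword excludes an otherwise qualified candidate.
REQUIRED_KEYWORDS = {"human resources", "onboarding", "hris"}

def passes_screen(resume_text):
    """Pass only if every required keyword appears, case-insensitively."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

strong_resume = ("A decade in human resources: built onboarding programs, "
                 "benefits administration, and people operations.")

# Qualified on everything described -- but the resume never says "HRIS",
# so the screen discards it before a recruiter ever reads it.
print(passes_screen(strong_resume))                   # False
print(passes_screen(strong_resume + " HRIS admin."))  # True
```

Fuller's advice later in this article, mirroring the posting's own language, works precisely because screens like this match literal strings rather than judging competence.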

"Whose fault is that the company or LinkedIn or Indeed?" Paluska asked.

"You know, no company sets out to have a failed hiring process," Fuller said. "They provide the tools that their customers regularly ask for. So I think this is a tragedy, without a villain. It's the way companies have gone about it is optimized around minimizing the time it takes to find candidates in minimizing the cost of finding someone to hire. There's some kind of killer variable that is causing the system to say not qualified or not attractive relative to other applicants. The vast majority of those candidates never hear back anything just ghosted."

Longden has been ghosted a lot. One recruiter called her three times in a week asking her to apply, and when she thought she had gotten the job: radio silence. Longden half-seriously wondered whether he had died.

"I even was like, 'Are you alive?' You know, like, I just want to know, you're okay, you've just totally gone dark," Longden said.

Longden's job search hell has her skeptical of the entire process.

"I've also discovered that there's been a huge uptick in companies wanting pre-work from people. So all in all, I've probably done about 25 hours worth of pre-work for various companies, none of which has been compensated, and none of which I've even gotten a roll-out of," Longden said.

"Do you think they are using your work for their benefit?" Paluska asked.

"Oh, I'm sure," Longden said. "One of the things I was asked to create was an onboarding process for new employees. So that's what the role at the company would have been doing was onboarding their new employees as they came in. And so, one of the pre-work examples was to create an onboarding process from the offer to the 90-day mark of employment. And I did that. And I'm certain that they're having multiple people do that and pulling what they like best from everyone."

We reached out to LinkedIn and Indeed for comments but did not get a response back.

"Two or three quick suggestions for Elizabeth. The first is: be very, very aware of language terms, and make your submission match what's being asked for, to the greatest degree you can with integrity," Fuller said. "The second thing I would say is, go on something like LinkedIn and look at the profiles of people who got the job you want. And what are they saying they do? What keywords are they using? Is there a regularly referenced tool that they claim expertise in that she doesn't have?"

View original post here:

Job hunting nightmare: 1,000 plus job applications and still no offers - ABC Action News

Staten Island Family Advocating For New Artificial Intelligence Program That Aims To Prevent Drug Overdoses – CBS New York

NEW YORK (CBSNewYork) So many families have felt the pain of losing a loved one to a drug overdose, and now, new artificial intelligence technology is being used to help prevent such tragedies.

"When you have a family member who lives this lifestyle, it's a call you always know could come," Megan Wohltjen said.

Wohltjen's brother, Samuel Grunlund, died of an overdose in March 2020, just two days after leaving a treatment facility. He was 27.

"Very happy person. He was extremely athletic. Really intelligent, like, straight-A student ... He started, you know, smoking marijuana and then experimenting with other drugs," Wohltjen told CBS2's Natalie Duddridge.

"He wanted to get clean and addiction just destroyed his life," said Maura Grunlund, Sam's mother.

Since Sam's death, his mother and sister have been advocating for a new program they believe could have saved him. It's called "Hotspotting the Opioid Crisis."

Researchers at MIT developed artificial intelligence that aims to stop an overdose before it happens.

"This project has never been tried before, and it's an effort to combine highly innovative predictive analytics and an AI-based algorithm to identify those who are most at risk of an overdose," said former congressman Max Rose, with the Secure Future Project.

The technology screens thousands of medical records through data sharing with doctors, pharmacies and law enforcement.

For example, over time, it might flag if a known drug user missed a treatment session, didn't show up to court or, in Sam's case, just completed a rehab program. It then alerts health care professionals.
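The triggers described above read like simple rules. As a hedged sketch only, since the actual system's features and model have not been made public in this report, they might look like this:

```python
# Hedged sketch of the rule-style triggers the article describes.
# The field names and rules here are invented for illustration; the
# real system's data model is not described in the report.

from dataclasses import dataclass

@dataclass
class PatientRecord:
    missed_treatment_session: bool
    missed_court_date: bool
    just_completed_rehab: bool  # a known high-risk window for overdose

def risk_flags(rec: PatientRecord) -> list[str]:
    """Collect the warning signs that would trigger an outreach alert."""
    flags = []
    if rec.missed_treatment_session:
        flags.append("missed treatment session")
    if rec.missed_court_date:
        flags.append("missed court date")
    if rec.just_completed_rehab:
        flags.append("recently completed rehab")
    return flags

# Sam's case in the article: an overdose two days after leaving treatment.
print(risk_flags(PatientRecord(False, False, True)))  # ['recently completed rehab']
```

Any flag would prompt a peer advocate to reach out, rather than triggering automatic intervention.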

"I'm just calling to check in to see how things are going," said Dr. Joseph Conte, executive director of the Staten Island Performing Provider System.

Conte says the program trains dozens of peer advocates who themselves are recovering addicts. They reach out to at-risk individuals and find out what they need from jobs to housing to therapy.

There's no pressure on the patient to enter rehab. The goal is to keep them alive.

"We can't help them if they're dead ... If you're not ready for treatment, you should be ready for harm reduction. You should have Narcan available if you or a friend overdoses," Conte said.

Health officials say a record number of people, 100,000, died of overdoses in 2020.

This year alone on Staten Island, more than 70 people have fatally overdosed.

The number of opioid deaths per 100,000 people on Staten Island is about 170% higher than the national rate. Officials say fentanyl is largely to blame, and the lethal drug was found in 80% of Staten Island toxicology reports.

"I believe that my son would be alive today if he hadn't used fentanyl ... I really feel that if this was any other disease, people would be up in arms," Maura Grunlund said.

Wohltjen says her brother always encouraged her to run the New York City Marathon, so this year, she did it, wearing his Little League baseball hat and raising thousands of dollars for the Partnership to End Addiction.

"If we could save one life it would make a difference," Wohltjen said.

View post:

Staten Island Family Advocating For New Artificial Intelligence Program That Aims To Prevent Drug Overdoses - CBS New York

ITV Will Use Artificial Intelligence To Tailor Adverts For Its Viewers – Todayuknews – Todayuknews

By Alex Lawson, Financial Mail On Sunday

Published: 21:50, 27 November 2021 | Updated: 21:50, 27 November 2021

ITV's online viewers could soon be targeted with adverts based on the programmes they are watching.

The broadcaster is planning to use artificial intelligence to select advertising tailored to joyous or tragic moments in drama and news programmes on its ITV Hub streaming service.

The media giant is set to launch a pilot of the new technology early next year, internally dubbed "moments, objects, moods," or MOM.


Under the plan, a relationship break-up scene in Coronation Street could soon be followed by an advert for a dating app or a holiday firm.

The technology is also able to scan scenes for products such as food and cars, inserting adverts for pizzas and vehicles in the next advert break as soon as 30 seconds afterwards.

ITV director of advanced advertising, Rhys McLachlan, told The Mail on Sunday: "We've always been aware what is in the shows: you don't advertise a car just after a car crash in a drama. But this takes that to the next level, scanning what's being screened in detail and in real time."

"If there are moments of elation, joy, sadness, crisis, we can tell what it is and input it."
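Once scenes are tagged with moods and objects, the ad-selection step itself is a lookup. The sketch below is purely illustrative; the category mappings are invented for the example and are not ITV's real taxonomy:

```python
# Illustrative only: the article describes tagging scenes by moments,
# objects and moods ("MOM"), then picking matching adverts for the next
# break. These mappings are invented for the sketch, not ITV's taxonomy.

AD_CATEGORIES = {
    "break-up": ["dating app", "holiday firm"],  # example from the article
    "food": ["pizza delivery"],
    "car": ["vehicle brand"],
}

def pick_ads(scene_tags: list[str]) -> list[str]:
    """Select ad categories matching what was just on screen."""
    ads = []
    for tag in scene_tags:
        ads.extend(AD_CATEGORIES.get(tag, []))
    return ads

print(pick_ads(["break-up", "car"]))  # ['dating app', 'holiday firm', 'vehicle brand']
```

The hard part, of course, is the tagging itself, which is where the real-time AI scanning comes in.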

ITV this month forecast a sharp rise in its advertising revenue, its best performance in its 66-year history, aided by the post-pandemic bounce-back and the delayed Euro 2020 football tournament.


Read more:

ITV Will Use Artificial Intelligence To Tailor Adverts For Its Viewers - Todayuknews - Todayuknews

Yes, it has been offside: FIFA tests an artificial intelligence system with a view to using it in the Qatar World Cup – InTallaght

For decades, offside has been one of the most controversial calls in any soccer game: making them accurately is genuinely difficult even with the help of the linesmen, and referees have not always gotten right one of the plays that can most influence the outcome of a match.

VAR has helped to minimize the problems, although it has not prevented other controversies, but FIFA wants to go further and use an artificial intelligence system to call offside. The idea will be put to the test at the Arab Cup, and if successful, it could be implemented at the 2022 World Cup in Qatar.

The tests will consist of an artificial intelligence system installed in the six stadiums where the Arab Cup will be held. Its operation is closely linked to the VAR (Video Assistant Referee), which has for some time been an important aid to referees in professional football.

The artificial intelligence system will send the VAR a message instantly when a player is offside, but it will be the referee who decides whether or not that player was interfering in the play.

The technology had already been trialled in a preliminary phase by several companies working on similar systems. Teams like Manchester City, Bayern Munich and Sevilla have evaluated it in their stadiums.

The system to be installed for the Arab Cup consists of 12 cameras located around the pitch. Those cameras monitor 29 points on each player's body, creating a tracking system that records the exact position of each body part of every player.

The ball, its movement and the exact moment at which passes are made are also monitored. This makes it possible for the algorithms to detect an offside as little as 0.5 seconds after the situation occurs.
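The core check the tracking data enables can be sketched very simply. Real systems compare 29 tracked points per player from 12 cameras; in this minimal sketch, each player is reduced to a single coordinate (distance toward the defending goal), and the function names are invented for illustration:

```python
# A minimal sketch of the basic offside check: at the moment of the pass,
# an attacker is offside if they are beyond both the second-to-last
# defender and the ball. Real systems work from 29 body points per player;
# here each player is one x-coordinate (metres toward the defending goal).

def is_offside(attacker_x: float, defender_xs: list[float], ball_x: float) -> bool:
    """True if the attacker is past the second-last defender and the ball."""
    second_last_defender = sorted(defender_xs, reverse=True)[1]
    return attacker_x > second_last_defender and attacker_x > ball_x

# Defenders (including the goalkeeper) at 88, 80 and 75 metres;
# attacker at 82, ball at 70 when the pass is played.
print(is_offside(82.0, [88.0, 80.0, 75.0], 70.0))  # True: past the second-last defender
```

The automated part is this geometric check at the instant of the pass; whether the flagged player was actually interfering with play remains the referee's call.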

The 2021 FIFA Arab Cup, which takes place from November 30 to December 18, will therefore be the scene of tests that could lead to this system being implemented at the next World Cup, to be held in Qatar in November 2022.

More:

Yes, it has been offside: FIFA tests an artificial intelligence system with a view to using it in the Qatar World Cup - InTallaght

The Key to Mars Colonization May (Literally) Lie in Human …

Colonization of the Red Planet may seem like the plot of a classic sci-fi page-turner, but, as of late, NASA has made significant headway toward creating a livable, sustainable, human-ready environment on Mars.

In April, NASA's Perseverance rover was able to convert some of the planet's atmosphere into oxygen, no easy feat, seeing as the atmosphere is ultra-thin and mainly made of carbon dioxide. Scientists believe that this significant accomplishment could pave the way for future successes in both isolating and storing oxygen there, marking a huge step for mankind and its ultimate goal of colonizing Mars.

One of the greatest obstacles to creating livable colonies on the Red Planet boils down to one word: resources. The cost of shipping quantities of necessary resources, including oxygen, would be astronomical (pun fully intended), so scientists have instead been working on ways to develop those resources on the planet itself, or at least find ways to create more sustainable technology that would minimize the amount of supplies needed from Earth.

This is where our precious bodily fluids come in.

A recent study published in Materials Today Bio discusses the possibility that the answer to sustainable resources for Mars colonization efforts could lie within the astronauts themselves, namely in their blood, sweat, tears, urine, and feces.

Though this may sound grim (and potentially like another sci-fi storyline), the study suggests that these organic materials could be utilized as a way to supplement raw materials already found on Mars, potentially saving time, money, and other valuable resources necessary for interplanetary supply runs from Earth.

According to Aled D. Roberts, a research scientist at the University of Manchester and leader of the new study, "there is one significant, but chronically overlooked, source of natural resources that will by definition also be available on any crewed mission to Mars: the crew themselves."

So how, exactly, would this technology work? As part of the new research, the study suggests that human blood could, in part, be used to form a material similar to concrete when combined with Martian dust. Furthermore, adding urea (which is found in human fluids like sweat, tears, and urine) would increase the strength and durability of this "astro-crete" by up to 300 percent. The potential to create and 3D print this concrete-like material could be an important step in astronauts' ongoing quest to build on Mars.

The study also suggests that other human by-products like dead skin, hair, nails, mucus, and feces could potentially be combined with already existing Martian materials and be exploited for their material properties on early extraterrestrial colonies.

The study is an important (if unglamorous) step toward solving one of the most critical obstacles to creating Martian colonies. Now that researchers have identified the potential of harvesting bodily fluids from humans, more studies will likely be on the horizon to develop further materials similar to "astro-crete" that could be used to build on Mars. And who knows? One day we could all be sitting in our oxygen-rich Martian apartments built from donations of astronaut sweat, blood, and feces, a literal Martian wonderland created from humanity's greatest resource: ourselves.

More:

The Key to Mars Colonization May (Literally) Lie in Human ...

Extending the human lifespan – Bangkok Post

Next week, the American Academy of Anti-Aging Medicine (A4M) will organise its 29th Annual World Congress at The Venetian and Palazzo Resort in Las Vegas. Since 1992, A4M has been on a mission to redefine healthcare through longevity medicine in order to optimise vitality and extend the human lifespan.

But the question is: do we really want to become super seniors or centenarians in a disruptive world?

Co-founder Dr Robert Goldman believes in the possibility of "practical immortality" with a lifespan of 120-plus years. Nine years ago, I met the ebullient anti-ageing physician at a conference organised by VitalLife Scientific Wellness Center and Bumrungrad International Hospital.

I asked him whether it's unnatural to stop the clock with anti-ageing medicine as the body isn't designed to last over 120 years.

"It's as unnatural as taking a plane,'' he said. "Because if man was meant to fly, he should have been born with wings.''

The fact is anti-ageing interventions are not something new and the search for the fountain of youth has been part of human culture and societies for millennia.

Dr Goldman asserts that there's nothing out of line with anti-ageing medicine and its utilisation to stretch the life span and enhance quality of life.

The demand for anti-ageing programmes is being driven by baby boomers who don't want to age the way their parents did.

A4M's comprehensive approach to wellness encompasses nutrition, dietary supplements, lifestyle modification, and controversial hormone replacement therapy.

One mechanism of ageing is a decline in hormone levels, which sends a chemical message to cells that the body is old, and they start to die off. Hormone replacement therapy attempts to trick the cells into thinking that they're still young.

However, it's not a quick fix or a magic pill as it takes effort and focus in adopting an anti-ageing lifestyle and treatments.

Through very early detection, prevention and reversal of age-related diseases, this field of medicine aims to prevent illnesses and disabilities. In addition, advances in biotechnology will drive dramatic changes in anti-ageing medicine to accomplish practical immortality.

Around a century ago, a 40-year-old was considered to be an elderly person, and today those in their 70s are in the winter of life. The practical immortality concept proposes that in the future people will not be considered old until they are centenarians.

On the other hand, longevity can be earned without taking supplements and hormones. For example, Japan's nonagenarians and centenarians are proof of natural and healthy ageing through diet, exercise, way of life and cultural factors.

Accordingly, the anti-ageing movement has faced controversy and been accused of being pseudoscience and a business that prescribes dietary supplements, hormone injections, as well as other products and services.

Nevertheless, over the past three decades, A4M has grown into a global community with alliances in countries including Thailand.

Next week, its Annual World Congress event is being held under the theme "The Next Chapter: Unmasking The Hidden Epidemic", and it will address many neglected health crises in a world stricken by Covid-19.

The pandemic has posed numerous challenges and changes as we focus on fighting infectious diseases and viral mutations. We aim to be survivors and not be afflicted by a deadly virus and its economic consequences.

Accordingly, the past two years have put many of us in a health-conscious mode and made us dependent on self-care due to lockdown.

It has probably changed many people's perspectives of the world and the meaning of life. Stuck in a crisis for two whole years, we may not even care about outliving turtles and just try to cope with current circumstances, which reinforces that uncertainty in life remains the only absolute certainty.

Read more from the original source:

Extending the human lifespan - Bangkok Post

3 Former Eagles are one step closer to Hall of Fame induction – Inside the Iggles

The Pro Football Hall of Fame announced the names of 26 semifinalists on the eve of Thanksgiving, and three former Philadelphia Eagles are one step closer to football's version of immortality. Congratulations are in order for longtime Eagles cornerback Eric Allen, one of the valedictorians of the Buddy Ryan era, and for one of the more controversial players in team history, Ricky Watters.

Fifteen finalists will be revealed once January rolls around. Kudos are also in order for senior finalist Cliff Branch and for Art McNally, a contributor finalist. We also can't leave out another former Eagle and coach finalist, Dick Vermeil.

If he's inducted, it will have been a long time coming for Eagles legend Eric Allen. NFL legend Deion Sanders even went to bat for him a little over a year ago, stating that the six-time Pro Bowler is long overdue to see his bust carved and placed in Canton, Ohio.

Allen has just under 800 tackles on his career resume along with 54 interceptions and nine defensive touchdowns. He's a member of the Eagles' 75th Anniversary Team and the franchise's Hall of Fame.

Watters has long been forgiven for his "For who? For what?" comment. He spent three seasons in Eagles green, racking up 3,794 rushing yards, 1,318 receiving yards, and 32 total touchdowns in 48 career games.

Vermeil should need no introduction. He led the Philadelphia Eagles to their first-ever Super Bowl appearance and won more than 100 games in the City of Brotherly Love over seven seasons before returning from a long hiatus and leading the St. Louis Rams to a Super Bowl victory. He too is in the Philadelphia Eagles Hall of Fame. His induction in Canton should be a no-brainer.

As it is every year, the Pro Football Hall of Fame Class of 2022 will be introduced during the NFL Honors special on the eve of the Super Bowl. Mark your calendars for that one and watch it live on fuboTV. It airs live on Thursday, February 10, at 9 p.m. EST on your local ABC affiliate.

Read more:

3 Former Eagles are one step closer to Hall of Fame induction - Inside the Iggles

The Old Guard: The Ages Of The Immortals (Including Andy) – News Nation USA

The Old Guard features a group of immortal warriors, most of whom have been alive for centuries, but how old are they exactly? The marketing material and the movie itself offer several clues to the immortals' ages. Charlize Theron's Andy is the oldest, but most of her companions have had very long lives as well.

Directed by Gina Prince-Bythewood and based on the comic book series The Old Guard, by Greg Rucka and Leandro Fernández, the Netflix movie follows a team of warriors who have lived in the shadows for centuries, taking part in conflicts on whichever side they feel is right. The Old Guard is set in the modern day, where a new immortal soldier, Nile Freeman (Kiki Layne), joins the group after miraculously healing from having her throat cut. She is quickly initiated into the small group of warriors and learns how they have influenced history. While she's still learning about her new family, they come under threat from a greedy pharmaceutical executive called Steven Merrick (Harry Melling), who hopes to discover the secret to their immortality, bottle it, and put a price tag on it.

Unfortunately for Merrick, he's not the first bad guy that the Old Guard has run into during their very long lifetimes. Here's a breakdown of when each member of the group, including Andy, was born, and how old they are in the modern-day setting of the movie.

While the movie form of The Old Guard keeps Andy's age ambiguous, it is known that she is the oldest member of the group. Her full name, Andromache of Scythia, refers to a Central Asian empire that ended in the second century CE, making Andy at least 1,800 years old but likely older. A potential The Old Guard sequel could expand the mythology by exploring this; however, the comics suggest that Andy is even older than her name implies.

Rucka and Fernández's comics depict Andy as being born circa 4500 BCE in the Western Steppe of Scythia. Andy developed immortality after being killed in battle and led her tribe for hundreds of years. However, after centuries she left her position to seek justice and find other immortals. Andy even gives a precise age in the comics, 6,732, meaning that she has been serving humanity for over six millennia.

Seen only in flashbacks and in The Old Guard's final scene, Quynh's (Veronica Ngo) age is perhaps the hardest to pin down of all the immortals. In the comics she's called Noriko, and Andy recalls that they first met at the end of Amr ibn al-As al-Sahmi's conquest of the Byzantine Empire in 642, at which point Noriko had already been an immortal for a century. That puts her date of birth some time in the early 500s AD, which would make her around 1,500 years old during the events of The Old Guard.

However, in the movie it's not specified exactly when or where Andy and Quynh met, except that Andy found Quynh when she was wandering through the desert, and that she was the first other immortal that Andy ever met. Many have speculated that Andy and Quynh will fight in The Old Guard 2. In the comics, Andy met Lykon (Micheal Ward) before she met Noriko, and they fought together for two thousand years. Lykon also appears briefly in a flashback in The Old Guard, with Andy and Quynh both being present at the time of his death. If Quynh has been aged up in order to have been born before Lykon, she could actually be several thousand years old during the events of the movie. Hopefully, audiences will learn more about her backstory, including her age, in The Old Guard 2.

According to his character poster, Joe (Marwan Kenzari) was born in 1066, making him 954 years old at the time The Old Guard takes place. Originally called Yusuf Al-Kaysani, Joe was a Muslim warrior during the First Crusade who met the love of his life on the battlefield and killed him. However, fate chose them as the next immortals to join the Old Guard, and after repeatedly slaying each other, they realized that neither could be killed, at which point their enmity turned to love.

Younger than Joe by only a few years (a gap that grew even less significant as the centuries passed), Nicky (Luca Marinelli) is 951 years old during the events of The Old Guard, based on a character poster that gives his year of birth as 1069. This means that he would have been in his late twenties, 30 at most, the first time he died. Like Joe and Andy, Nicky changed his name at some point, from Nicolò of Genoa to the more common Nick Smith, in order to aid his anonymity. This period of the Crusades has been a popular setting for movies as well as video games like Assassin's Creed and Crusader Kings. Hailing from the city of Genoa, in what would later become the unified country of Italy, Nicky fought in the First Crusade until he fell in love with one of the enemy, and instead began fighting new battles alongside him. After settling their differences, Joe and Nicky both met Andy and became part of the Old Guard alongside her and Quynh.

The baby of the group (at least until Nile comes along), Booker (Matthias Schoenaerts) is 250 years old during the events of The Old Guard, with his character poster marking his year of birth as 1770. His treacherous actions make it unclear whether he'll return for a potential The Old Guard 2. Born Sebastien le Livre (his nickname comes from his surname, which is French for "book"), Booker was a soldier under Napoleon who deserted during the campaign into Russia. He was caught and hanged, but came back to life still hanging from the noose, being 42 years old at the time of this first death in 1812. As he lived on without ageing, Booker experienced the trauma of watching his sons die and being helpless to stop it, even as they grew to hate him for not sharing his gift of immortality. Being a young immortal, the 100-year penance that Booker is sentenced to at the end of The Old Guard would still be a significant amount of time for him.

A pivotal part of The Old Guard's cast is Nile, who acts as a viewpoint character. A brand-new member of the Old Guard, Nile Freeman is 26 years old when she dies for the first time, having her throat cut while trying to save the life of a man she has just shot. After she wakes up in the infirmary without a mark to show for her injury, she's shunned by her fellow soldiers and is about to be sent away for some probably very unpleasant testing when she's abducted by Andy. Nile has a military legacy to uphold, with her father having been killed in action, but she also has a family that she's at first keen to return to. By the end of the movie, however, she has decided to stick with Andy and the other immortals, having seen the good that they've managed to do in the world.

There are plenty of unanswered questions from The Old Guard that could be addressed in The Old Guard 2, and the precise ages of Andy and Quynh are among them. The sequel could also fill in the centuries of backstory that each of the immortals has, making their ages very important for the franchise going forward.

Read the original:

The Old Guard: The Ages Of The Immortals (Including Andy) - News Nation USA