Top Reasons Why Spinach Is One Of The Healthiest Leafy Greens – NDTV Doctor

Spinach can offer you a wide range of health benefits. It can be used as an ingredient in a variety of recipes. Here are some reasons why you should consume spinach.

Spinach is beneficial for your skin and hair

Your mother might have stressed the importance of a healthy diet ever since your childhood. A well-balanced diet offers all the nutrients the human body needs for proper functioning and optimum growth. Green leafy vegetables are power-packed with nutrients, and it is advised to add them to your diet for optimum health. Spinach is one of the healthiest leafy greens, loaded with essential nutrients that offer you amazing health benefits. Yes, Popeye was doing it all right! Here are some notable benefits of adding spinach to your diet.

About 91% of spinach is water. It is also loaded with protein and iron, which supports red blood cells, and it is a good non-dairy source of calcium. This leafy green also contains vitamins A, C and K1, and consuming it can give you magnesium, potassium and folate as well.

Spinach is a non-dairy source of calcium (Photo Credit: iStock)

Spinach can also help in healthy weight loss. One cup of boiled spinach is loaded with fibre and water, which can keep you full for longer and make you consume fewer calories.

Potassium in spinach can help in controlling high blood pressure. A healthy diet rich in fibre can also help in regulating hypertension.

Diabetics can also safely add spinach to their diet. This leafy vegetable contains antioxidants that can prevent oxidative stress and stress-related changes in people with diabetes.

You would be surprised to know that spinach is beneficial for your skin and hair too. Vitamin A and iron in spinach can help in boosting skin and hair health.

Spinach can promote hair growth (Photo Credit: iStock)

Disclaimer: This content including advice provides generic information only. It is in no way a substitute for qualified medical opinion. Always consult a specialist or your own doctor for more information. NDTV does not claim responsibility for this information.


It’s Jamun Time! Here Are Reasons To Make The Most Of This Fruit This Summer – NDTV Doctor

From diabetics to high blood pressure patients, jamuns are beneficial for one and all!

Jamuns are rich in fibre and can help in getting relief from constipation

Come summer and one cannot wait for the gorgeous jamuns! Their deep purple hue and tangy taste make for a perfect accompaniment to fun, easy-going evening chats with family. Jamun, or black plum, is incredibly rich in antioxidants that can reduce inflammation and the damage caused by free radicals. They contain properties that can help in regulating blood sugar, blood pressure and cholesterol. From heart patients to diabetics, jamuns are meant for one and all! Keep reading to know more about the benefits of including jamuns in your diet.

In one of her recent posts on Instagram, nutritionist Nmami Agarwal talks about the health benefits of jamuns. She lists the following reasons to include jamuns in your diet:

1. Jamuns are loaded with Vitamin C and iron. If you are deficient in iron, then jamun serves the dual purpose of providing both Vitamin C and iron. Vitamin C-rich foods aid the absorption of iron in the body. Daily intake of jamuns can help in increasing haemoglobin levels.

2. Improved levels of haemoglobin can further act as a blood purifying agent, Agarwal mentions in the post. This can be helpful for your skin health.

3. Jamuns have a low glycemic index, which can help in keeping blood sugar levels normal. This means that a fruit like jamun is safe for consumption by people with diabetes. However, every person responds differently to certain foods. If you have diabetes, do check with your doctor before including jamuns or any other fruit in your diet.

Jamuns can help in regulating blood sugar levels (Photo Credit: iStock)

To include jamuns in your diet, you can eat them as is. You can eat the fruit as part of a fruit salad and can also create a jamun smoothie with other fruits like banana and berries. You can juice them with water as well.

(Nmami Agarwal is a nutritionist at Nmami Life)

Disclaimer: This content including advice provides generic information only. It is in no way a substitute for qualified medical opinion. Always consult a specialist or your own doctor for more information. NDTV does not claim responsibility for this information.


Retirement age is increasing but our new study reveals most only work ten years in good health after 50 – The Conversation UK

In 1800, the global average life expectancy was only 29 years. Today, life expectancy continues to rise: babies born in the UK in 2018 are expected to live, on average, to 87.6 years for men and 90.2 years for women. But as life expectancy rises, so does retirement age.

Since retirement is expensive, and state pensions are paid for by workers who pay tax, many governments are now concerned there aren't enough working adults to fund the growing number of people in retirement. As such, many countries have decided to increase the retirement age. In the UK, the state pension age is increasing from 65 to 66 this year and will reach 67 in 2028.

Though we're living longer, this doesn't necessarily mean our health will allow us to work for additional years. Healthy working life expectancy tells us the average number of years people in a population are likely to be healthy and in paid work from the age of 50. It focuses on working life after age 50 because this is when health problems (such as common age-related diseases, including pain or mobility issues) can make it difficult for people to continue working or to find a job that fits their needs.

Our study of healthy working life expectancy found that on average, people in England can expect to be healthy and in work for almost nine and a half years after age 50. However, these years are not necessarily lived consecutively as people may temporarily leave work or experience health problems. These findings came from data on 15,284 people aged over 50 in England who were interviewed several times from 2002 to 2013.
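To make the arithmetic behind such a figure concrete, here is a minimal sketch of one way an expected number of healthy working years can be computed. This is not the authors' actual model (the study used longitudinal interview data and more sophisticated methods), and every number in it is invented purely for illustration.

```python
# Illustrative only: a crude calculation of healthy working life expectancy
# (HWLE) from made-up age-specific rates. The real study estimated these
# quantities from repeated interviews, not from toy formulas like these.

ages = range(50, 76)

# Hypothetical probability of surviving from age 50 to each age.
survival = {a: max(0.0, 1.0 - 0.01 * (a - 50)) for a in ages}

# Hypothetical proportion of survivors at each age who are both healthy
# and in paid work.
healthy_working = {a: max(0.0, 0.80 - 0.035 * (a - 50)) for a in ages}

# Expected years spent healthy and in work after 50: sum over ages of
# P(alive at age a) * P(healthy and working | alive at age a).
hwle = sum(survival[a] * healthy_working[a] for a in ages)
print(f"Illustrative healthy working life expectancy: {hwle:.1f} years")
```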

Compared to the national average, healthy working life expectancy is higher for men (10.94 years) and lower for women (8.25 years). We also found that healthy working life expectancy is higher for people in non-manual or self-employed occupations (such as office workers) than those in manual occupations (such as electricians or care workers). It also increased alongside education level.

People also tend to have longer healthy working lives in the south of England compared to the north. This reflects the worse health and economic conditions typically seen in the north. The amount and type of jobs available regionally also influence differences in healthy working life expectancy, as people who cannot find a job that suits them won't be able to continue working.

We also split the population into five equal-sized groups based on deprivation. We found that the people living in the least deprived areas tended to stay healthy and in work for almost four years longer (10.53 years) than those living in the most deprived areas (6.80 years).

Many factors contribute to the different average lengths of healthy working life between groups. The higher healthy working life expectancy in men compared to women can at least partly be explained by women having been able to access their state pensions earlier before 2018.

A region's healthcare quality, prevalence of health problems, access to job opportunities, and whether a workplace can accommodate a person's needs are all factors that explain the differences in healthy working life expectancy. These factors may also be barriers that prevent groups with lower healthy working life expectancy from remaining in employment. For example, those in manual occupations (and their employers) may be less able to accommodate health problems later in life.

That people with higher education or those living in less deprived areas are estimated to have a longer healthy working life expectancy suggests a link with socioeconomic status. Lower socioeconomic status is associated with poorer physical and mental health, and with low-paid work or unemployment. Possible explanations for this link include lower-quality job opportunities, money worries, and insufficient income to afford a healthy lifestyle.

Increases in retirement age have been a response to higher life expectancy nationally. However, some regions have seen bigger improvements than others, while some have seen declines. And international research indicates that living longer does not necessarily mean more time spent in good health; our findings also suggest that many people will find it challenging to work until the new retirement age.

A key reason for increasing the retirement age is to ensure the financial sustainability of the state pension programme. But if a large proportion of the population isn't healthy enough to work for longer, there may be an increased need for government financial support due to unemployment or disability. Those with health problems who can't afford to leave work may find that, without adaptations, their health interferes with their productivity and daily tasks, or that their working hours leave them less able to look after their health effectively.

In the UK, health gaps are widening. Without interventions to improve health and access to good work opportunities, it's possible that some groups could see healthy working life expectancy stay the same or even decrease. For these sub-populations, waiting longer to receive state pension income could be particularly difficult.

Though the upward trend in life expectancy stalled in 2014-2015 in many high-income countries, retirement age is still set to increase in numerous countries, including the US, the UK and Australia. Monitoring healthy working life expectancy may be important in the future for knowing whether people will be able to stay in work alongside changes to the retirement age.


Flaxseed Oil Health Benefits: 5 Reasons Why You Should Try This Oil – NDTV Doctor

Flaxseed oil can offer you some amazing health benefits. It is beneficial for your skin, heart, body weight and more. Here are some notable health benefits of flaxseed oil.

Flaxseed oil can offer you some amazing health benefits

Flaxseeds are extremely healthy. Vegetarians are advised to add flaxseeds to their diet as these can offer them nutrients that are usually absent from a vegetarian diet. These seeds can also help in weight loss. Flaxseeds are loaded with fibre, protein and omega-3 fatty acids. Just like the seeds, flaxseed oil can offer you some amazing health benefits. It is jam-packed with nutrients and can be used in various ways. Here are some reasons why you should try flaxseed oil, along with different ways to use it.

Most foods loaded with omega-3 fatty acids are animal-based. Flaxseeds and flaxseed oil are good sources of omega-3 fatty acids that vegetarians can enjoy.

Flaxseed oil is one of the rare heart-friendly oils. It boosts artery health and helps control blood pressure, both of which contribute to a healthy heart. Its omega-3 fatty acids also support heart health.

Flaxseeds can help boost heart health (Photo Credit: iStock)

Many oils have gained popularity in the past year for their amazing skin benefits. Flaxseed oil can also help promote skin health. It can moisturise your skin well and help you achieve a smooth texture. Use of flaxseed oil can also help reduce skin irritation.

Flaxseed oil has anti-inflammatory properties. Inflammation, if left uncontrolled, can be harmful to your health. Several studies have highlighted that flaxseed oil offers properties that help reduce inflammation.

Flaxseed oil can help in weight loss too. It helps in detoxification, which can support weight loss. According to a study published in the journal Appetite, flaxseed oil can help reduce appetite, which can make you consume fewer calories, resulting in weight loss.

According to studies, flaxseed oil may also help in weight loss (Photo Credit: iStock)

Disclaimer: This content including advice provides generic information only. It is in no way a substitute for qualified medical opinion. Always consult a specialist or your own doctor for more information. NDTV does not claim responsibility for this information.


Do We Have Privacy Rights Anymore? – Lawyer Monthly Magazine

Back in the 14th century through to the 18th century, people went to court for eavesdropping and for opening and reading personal letters[1]; from the end of the 19th century, this shifted to controlling personal information in order to protect one's privacy.

The right to privacy has been mooted for decades and extends beyond what we may deem our privacy rights today. When we mention privacy, we may be taken back to early 2018, to the Facebook-Cambridge Analytica data scandal, or to the EU's GDPR, which was implemented, again, in 2018. But privacy extends further than that, to issues involving contraception, interracial marriage and abortion (think Roe v. Wade). And it is such cases that have shaped our society and law around privacy today[2].

A brief history of privacy

A major article advocating privacy rights, written by Samuel Warren and his law partner Louis Brandeis, was published in 1890 in the Harvard Law Review. 'The Right to Privacy' argued that privacy is inherent in common law and generates various privacy torts, such as the disclosure of private facts (such as the aforementioned examples). Where some counter-argued that such rights can offer protection for the privileged, Warren and Brandeis still managed to pave the way for future legal cases regarding privacy.

William O. Douglas, an American jurist and politician who served as an Associate Justice of the Supreme Court, quoted Brandeis in the Public Utilities Commission v. Pollak case in 1952, regarding whether radio broadcasts on public transport were a violation of freedom and privacy: the beginning of all freedom is the right to be let alone, and thus the right to privacy. The right to be let alone, Brandeis (who was an Associate Justice at the time) had written in the Olmstead v. United States case in 1928, where Roy Olmstead's conviction was based in part on evidence gathered through government wiretaps, is 'the most comprehensive of rights, and the right most valued by civilized men'. Even though the Court originally held that neither the Fourth Amendment nor the Fifth Amendment rights of the defendant were violated, the decision was later overturned by Katz v. United States in 1967[3]. This case somewhat altered privacy rights in America, as the decision expanded the Fourth Amendment's protections from the right against searches and seizures of an individual's 'persons, houses, papers, and effects', as defined in the Constitution, to include 'what [a person] seeks to preserve as private, even in an area accessible to the public' as a constitutionally protected area[4].

And while the US Constitution, to this day, does not specifically mention a right to privacy, the Supreme Court has noted that it believes this right exists in the 'penumbra' of several other, enumerated rights, such as the Third, Fourth, Fifth, and Fourteenth Amendments, and as such, citizens are entitled to it under the catch-all provision of the Ninth Amendment. This has shaped privacy in the US ever since.

What is privacy today?

So, the right to privacy has been a much-debated issue for a very long time, and it seems that as society develops, so does our concern for privacy. Once upon a time, postcards were seen as a threat to our privacy; now, we don't give them a passing thought, as we have bigger qualms at hand: should we accept cookies, allow our phones to track our movement, or download the latest craze, such as TikTok, and risk our precious data being shared amongst strangers? How much risk is posed here if we mindlessly click 'agree', or how much of our lives are now actually private?

If I take myself as an example: I don't post a vast amount on social media. I could be abroad and my Facebook friends would be none the wiser, as I like to exercise my right to privacy. But, simultaneously, my phone will sift through my emails, recognise that I booked a flight, and notify me when I ought to leave the house so I make my flight on time. It will recommend sights for me to see and hotels to stay at; it will keep track of where I visited and for how long, how many steps I did that day, what restaurants I went to and what photos I took at each location, so that when I land back home, it can collate all this information and email me a mini 21st-century scrapbook of my adventure. My tiny phone is more aware of what I did on my holiday than my own mother. Does it bother me? Not so much, because all of these features are convenient and I am actively deciding what I share and what I keep private, which seems to be the centre of many debates and legal cases (such as the aforementioned Katz v. United States). If my phone was hacked, however, and all my information was leaked, then even though I lead a very boring life, I would be concerned about how my privacy was violated and who now has all that information at hand. Yet I would still have to acknowledge that I allowed my phone to track my every move, and that the information was always at risk of becoming available to somebody else. It is not until external parties, such as the government, want to access that data that everything becomes a little too 1984 and we feel like our privacy rights are being breached.

As written more succinctly in The New Yorker, people tend to invoke their right to privacy when it serves their best interests: 'People are inconsistent about the kind of exposure they'll tolerate. We don't like to be fingerprinted by government agencies, a practice we associate with mug shots and state surveillance, but we happily hand our thumbprints over to Apple, which does God knows what with them.'

Freedom vs. privacy: What do we want more?

The global pandemic is the perfect example of this constant battle we have with privacy and our control over it. When governments across the world began to consider or release contact-tracing apps, many were very apprehensive, for obvious reasons: it screams a movement towards an Orwellian era. Such an app, which works by recognising when two phones are close together for longer than a set period of time (so that if one user is later diagnosed with the coronavirus, an alert can be sent to the other), would enable the government to potentially track where you were and who you were with. The idea that the government would have a mass amount of data in its hands didn't sit right with people, including many people close to me. But as soon as I questioned their reasoning and asked, 'But do you care what cookies you accept or what information apps can access?', they soon came to realise that they are not as concerned with their right to privacy as they thought, as they simply don't take any notice of what Instagram is tracking.
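For the curious, the matching logic described above can be sketched in a few lines. This is a deliberately simplified illustration under assumed names and thresholds, not the code of any real app (real designs, such as the Apple/Google framework, use rotating cryptographic identifiers and Bluetooth signal-strength estimates).

```python
# Simplified sketch of contact-tracing exposure logic: a phone keeps a log of
# anonymous IDs it has seen nearby and for how long, then checks that log
# against IDs reported by users later diagnosed with the virus.
from dataclasses import dataclass

EXPOSURE_MINUTES = 15  # the "set period of time" is an assumed threshold

@dataclass
class Encounter:
    other_id: str            # anonymous identifier broadcast by the other phone
    duration_minutes: float  # how long the two phones were close together

def should_alert(encounters: list[Encounter], diagnosed_ids: set[str]) -> bool:
    """True if any sufficiently long encounter was with a later-diagnosed user."""
    return any(
        e.duration_minutes >= EXPOSURE_MINUTES and e.other_id in diagnosed_ids
        for e in encounters
    )

# Example: one 22-minute encounter with a later-diagnosed user triggers an alert.
log = [Encounter("abc123", 22.0), Encounter("xyz789", 3.0)]
print(should_alert(log, diagnosed_ids={"abc123"}))  # True
```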

Nonetheless, it was understandable why they were apprehensive. Norway's health authority had to delete all data gathered via its COVID-19 contact-tracing app and suspend further use of the tool after the Smittestopp app was found to represent a disproportionate intrusion into users' privacy. The UK government was also forced to abandon a centralised coronavirus contact-tracing app after spending three months and millions of pounds on its development, switching to an alternative designed by the US tech companies Apple and Google that was promoted as more privacy-focussed, leaving epidemiologists with access to less data.

Speaking to Lawyer Monthly, Mike Ingrassia, President and General Counsel at Truata, explains that the COVID-19 pandemic seems likely to enhance this sense of unease among consumers regarding the use of their data. 'On the one hand, consumers' digital footprints are being expanded at a record-breaking pace as their lives move ever more from the physical to the digital realm. This is quickly increasing the amount of personal data that companies hold regarding their customers and incentivising those companies to monetise that data more aggressively in order to thrive during the pandemic-induced recession,' he shares.

On the other hand, Mike expands, 'The response from governments to the COVID-19 pandemic has already raised many concerns when it comes to contact tracing apps, mobile location data tracking and increased surveillance. However, as the world continues its fight against the spread of COVID-19, it has become vital for governments to assess how they can use data for social good.'

'There is clearly a societal need and purpose for utilising location-based data for the greater good, but only if it is used responsibly. Governments must ask themselves whether appropriate safeguards and technologies are being applied so that they are not, in using that data to benefit society, failing to protect the rights of the individuals behind that data. Questions that need to be considered include what type of personal data is being shared, for what purposes and for how long,' says Mike.

There is no doubt that consumers have a growing awareness of the value of their personal information, and they are increasingly concerned with how it's being used, both by public and private entities. It is not yet clear whether the introduction of GDPR and other more stringent global privacy laws has moved the dial on customer trust, as there still appears to be widespread confusion and distrust amongst consumers over how their data is being collected and who it is being shared with.

At the end of the day, the government is trying to do what it has always done: conduct surveillance of individuals and groups it suspects are presenting a danger to society. But why do we mindlessly allow Zuckerberg to store our data, yet panic when the government wants access? Is our data in better hands when Facebook is using it, or with the government?

And as Mike explains to us, even though most governments will in good faith want to use data responsibly, they will likely lack the tools and expertise to do so on their own. 'Private sector assistance, such as the provision of cutting-edge, privacy-enhancing data analytics technologies so governments can responsibly get powerful insights from their data, will be needed. One of the most effective ways for governments to obtain such powerful insights from unique, large data sets responsibly will be to fully anonymise those data sets first, better enabling them to extract value from their citizens' data without compromising the privacy of the individuals behind that data,' Mike tells us.

Taking an approach such as this, leveraging the best privacy-enhanced data analytics technologies available from the private sector, such as powerful anonymisation solutions and related analytics tools, will allow governments to unlock life-saving insights from data without sacrificing the privacy rights of their citizens.

In the aftermath of the COVID-19 pandemic, this might be one of the greatest opportunities for responsible coordination between the public and private sectors. If they can both embrace this opportunity, if governments have the courage to use their data innovatively and the self-restraint to do so responsibly, and if technology companies have the creativity to offer governments the tools to do so, we will all benefit.

It is a fickle scale, where our need for control sits on one side and our trust in the technology on the other. Perhaps we are more concerned with our right to freedom and liberty, as that is what shaped Roe v. Wade and Public Utilities Commission v. Pollak. And if we really think about privacy in this day and age (data, data and more data), we do somewhat lack full control of who has it and where it goes.

Rethinking what privacy actually means

Let's think about one of the most discussed laws of 2018: GDPR. Privacy was at the heart of this EU regulation, but in reality, the new measures were partially rolled out to help people better understand the way in which information is collected and used, and were designed to harmonise data privacy laws, providing greater protection and rights to individuals. It gave the average citizen more control and freedom over what they choose to share, and left organisations with more liability if they breached privacy rights. It wasn't to restrict companies' access to our data per se (although companies were given less mobility in this area); it was to allow us to decide what we wanted to remain private. It is the same point that was mooted when postcards were invented: if you felt threatened that your mail was going to be read, breaching your privacy rights, you had the option to use an alternative method; if you don't trust a website with your cookies, you now have the option to refuse access. We have some control over our data and what we keep private, but if we want to fully enjoy the world of Siri, we have to trust in the technology and be aware that our device is constantly listening, waiting for us to call its name.

The government is aware of our right to privacy. The Fourth Amendment in the US acknowledges that. The UK's Data Protection Act acknowledges that. The right to be let alone is 'the most comprehensive of rights', and authorities recognise that if we feel like our privacy is being violated, we will speak about it. But in this day and age, when privacy almost correlates with data and our online activity, we lack full control over how private everything is. In a survey conducted by EY, nearly half (46%) of respondents said their number one or two concern is not having a clear picture of where personal information is stored or processed outside of their main systems and servers[5]. Once data enters the internet, it will be accessed and logged and stored and analysed and compared with a billion other pieces of data; it is almost impossible to legislate data access away[6]. So, is any of our data truly private anymore? Do we care about privacy, or are we actually aiming for liberty and freedom? Is it time for us to rethink what privacy means to us now and what it truly is in the current age?

[1] https://link.springer.com/content/pdf/10.1007/978-3-642-03315-5_2.pdf

[2] https://www.newyorker.com/magazine/2018/06/18/why-do-we-care-so-much-about-privacy

[3] https://en.wikipedia.org/wiki/Olmstead_v._United_States

[4] https://en.wikipedia.org/wiki/Katz_v._United_States

[5] https://www.ey.com/Publication/vwLUAssets/ey-can-privacy-really-be-protected-anymore/$FILE/ey-can-privacy-really-be-protected-anymore.pdf

[6] https://www.computerworld.com/article/3135026/does-privacy-exist-anymore-just-barely.html


Philly Lawyers Prepping Massive Lawsuit Against City Over Tear Gas and Other Incidents – Philadelphia magazine

"Police and other officials need to be held accountable for these military-style assaults," says Center City lawyer Paul Hetznecker.

The Philadelphia Police Department using tear gas against protesters along I-676 on June 1st. Philly lawyers are now preparing to file a massive lawsuit or lawsuits against the city and police. (Photo by Mark Makela/Getty Images)

A roundup of Philly news. This post may be updated at any time as new information becomes available.

The decision to use tear gas against protesters in Philadelphia has become one helluva bad PR nightmare for city officials.

If you haven't seen the damning New York Times investigation into the June 1st tear gas incident on I-676, watch it now, and you'll see what I mean.

But that decision could also wind up becoming an incredibly costly one for the city. Philly Mag has learned that lawyers will soon file a lawsuit or multiple lawsuits in federal court against various city and police officials as well as individual police officers on behalf of numerous plaintiffs.

One of the lead lawyers, Paul Hetznecker, says he expects the defendants to include the city, individual city officials, the police department, Philly police commissioner Danielle Outlaw, and police officers themselves, once the team of lawyers can identify who those officers are in certain cases.

"There were so many incidents where we can't really identify the police officers right now, because many of them did not have their names or badges displayed," explains Hetznecker, who has represented countless protesters throughout his career, including from incidents surrounding the 2000 Republican National Convention in Philadelphia and the Occupy movement. "And others, it's hard to identify them with riot gear on. So we will probably file the lawsuit and name them later on once we discover their identities. They should be named personally. They should be held accountable along with those in charge."

Paul Hetznecker (photo provided) and Michael Coard (file photo), two of the attorneys organizing the tear gas lawsuit in Philadelphia.

Prominent activist and attorney Michael Coard, one of the lawyers working with Hetznecker, says that he expects controversial Philly cop Joseph Bologna to be at the front of the line in terms of cops named individually as defendants. Philly district attorney Larry Krasner has charged Bologna with assaulting a protester.

According to Hetznecker, the use of tear gas on I-676 and in West Philly will certainly be at the center of any legal actions filed. But he notes that there were plenty of other examples of police actions like Bologna's during the protests that constitute violations of protesters' First and Fourth Amendment rights.

"Police and other officials need to be held accountable for these military-style assaults," he insists.

"The First Amendment is sacrosanct," adds Coard. "It is not to be praised one day as this glorious document and then used like a piece of toilet paper the next day, even if the city and police think that peaceful protesting is a shitty way to petition your government. What happened here is blatant, obvious and egregious. There should not only be civil liabilities but also more criminal prosecutions."

A lot of people out there have been hoping that Philly would move into the Green Phase of reopening this Friday, July 3rd. After all, it's Fourth of July weekend. And officials had previously said that July 3rd was the target date. But it sounds like the Green Phase may be farther away than we expected.

Philadelphia health commissioner Thomas Farley took to Fox 29's Good Day Philadelphia on Monday morning to talk about Philly's reopening (or not reopening) plans.

"As of late last week, our numbers were rising rather than falling," Farley told anchors Alex Holley and Mike Jerrick. "And that's in the context of numbers rising a lot around the country. It looks like we're not going to meet the targets we had laid out to go to green. So we're reevaluating now."

Farley added that officials are particularly concerned about restaurants. He said to expect a final decision on Tuesday.

While Philadelphia considers pulling back on its reopening plans, a much different scene is playing out at the Jersey Shore. The state is allowing Atlantic City casinos to reopen at 25 percent capacity as of Thursday.

Hard Rock is reopening on Thursday. Bally's, Caesars and Harrah's reopen on Friday. The Borgata will begin allowing guests to enter by invitation only on Thursday, prior to opening to the public on Monday.

For a full list of Atlantic City casino reopening dates and the new guidelines they have in place to try to prevent the spread of the coronavirus, go here.


Tucker Carlson’s Fanciful Defense of What He Imagines Qualified Immunity To Be – Cato Institute

A good sign that a policy is indefensible is when its proponents cannot bring themselves to describe it accurately. Such is the case with the doctrine of qualified immunity, which is currently the subject of a furious disinformation campaign led by the law enforcement lobby (see here, here). The most recent mouthpiece for this campaign was Tucker Carlson, who two nights ago mounted a spirited defense of an imaginary legal rule that he called "qualified immunity," but which bears only the faintest resemblance to the actual doctrine. Reason's Billy Binion and IJ's Patrick Jaicomo have already done a great job explaining some of Carlson's biggest mistakes, but there is so much here that is either highly misleading or outright false that it's worth unpacking in full. Strap in!

By way of background, the inciting incident for Carlson's segment on qualified immunity was the Reforming Qualified Immunity Act introduced by Senator Mike Braun (R-IN) earlier this week. As I discussed here, what this bill would effectively do is eliminate qualified immunity in its current form and replace it with limited safe-harbor provisions. The main effect would be that people whose rights are violated would no longer need to find prior cases where someone else's rights were violated in the same way before being allowed to proceed with their claims. However, if defendants could show that either (1) their actions were specifically authorized by a state or federal law they reasonably believed to be constitutional, or (2) their actions were specifically authorized by judicial precedent that was applicable at the time, then they could avoid liability.

In other words, this bill doesn't go as far as the Amash-Pressley Ending Qualified Immunity Act, which would eliminate the doctrine entirely. But it is still a significant proposal that both meaningfully addresses and corrects the core absurdity of the current qualified immunity regime (the "clearly established law" standard), while preserving immunity in those relatively rare, but more sympathetic, cases in which defendants are specifically acting in accordance with applicable statutes or judicial precedent. And, unlike the Justice in Policing Act, Senator Braun's bill would reform qualified immunity across the board for all government agents, not just members of law enforcement.

So, what did Tucker Carlson have to say about this bill?

Braun has introduced legislation in the Congress that will make it easier for left-wing groups to sue police officers.

I won't dwell on this point, because Carlson is clearly just being snarky here. But suffice to say, Braun's proposal is not specific to "left-wing groups," and indeed, not specific to police at all. Rather, it just amends Section 1983 (our primary federal civil rights statute, which permits all citizens to sue government agents who violate their rights) to clarify that defendants cannot escape liability just because there is no prior case with similar facts.

Under current law, police officers in this country benefit from something that's called qualified immunity.

Again, qualified immunity is not limited to police officers. The defense can be raised by all state and local public officials who have civil rights claims brought against them, including corrections officers, public school officials, county clerks, and other municipal employees. Still, the reason qualified immunity is such a hot topic right now is because of its application to law enforcement, so I'll stop harping on this issue. Also, the suggestion that police officers actually benefit from qualified immunity is highly suspect, but we'll get to that later.

Qualified immunity means that cops can't be personally sued when they accidentally violate people's rights while conducting their duties. They can be sued personally when they do it intentionally, and they often are.

Here is where Carlson plunges headfirst into fantasy. This accidental/intentional distinction he's describing has no basis in qualified immunity case law. Indeed, under the "clearly established law" standard, a defendant's state of mind has no bearing whatsoever on whether they are entitled to qualified immunity; a defendant could be explicitly acting in bad faith, with the express intent to violate someone's rights, and still receive immunity, so long as there was no prior case involving the precise sort of misconduct they committed.

The best illustration of this point is the Ninth Circuit's recent decision in Jessop v. City of Fresno, where the court granted immunity to police officers alleged to have stolen over $225,000 in cash and rare coins while executing a search warrant. The court noted that while "the theft [of] personal property by police officers sworn to uphold the law" may be morally wrong, the officers could not be sued for the theft because the Ninth Circuit had never specifically decided "whether the theft of property covered by the terms of a search warrant, and seized pursuant to that warrant, violates the Fourth Amendment." In other words, it didn't matter that the officers were intending to break the law; not even the defendants here claimed that they accidentally stole from this suspect. All that mattered was that the court hadn't confronted this particular factual scenario before.

In other words, police officers are not above the law.

It is true that police officers are not literally immune from liability for their misconduct (unlike prosecutors, who actually do receive absolute immunity for violating people's rights). But police officers are held to a vastly lower standard of accountability than the citizens they police. For regular people, it's a well-known legal maxim that "ignorance of the law is no excuse." Even in cases with serious criminal penalties, courts routinely permit the prosecution and conviction of defendants who had no idea they were breaking the law. If anything, you would expect law enforcement (public officials specifically charged with knowing and enforcing the law) to be held to a higher standard of care than ordinary citizens. But in fact, they're held to a far lower standard. Ignorance of the law is no excuse, unless you wear a badge.

Cops who commit crimes can be punished. Cops who make lesser mistakes can be disciplined, suspended, or fired, and they often are. That's the system that we have now. It works pretty well.

If this assertion doesn't cause you to burst out laughing, then you haven't been paying attention to our criminal justice system for the last several decades. Suffice to say, no, our system is not working "pretty well." It is extraordinarily difficult to convince prosecutors to bring charges against police officers, much less to obtain convictions (see here for a list of especially notable non-convictions). And internal discipline measures are laughably feeble, due in large part to the power of police unions. The inadequacy of both criminal prosecution and internal discipline as meaningful accountability measures is exactly why we need a robust civil remedy, and therefore exactly why qualified immunity is such a serious problem (we've argued this point in much more detail in our cross-ideological amicus briefs before the Supreme Court).

Civil immunity, by the way, has precisely nothing to do with anything that happened in the George Floyd case, just in case you're wondering. That cop is in jail.

Qualified immunity applies in civil lawsuits, not criminal prosecutions, so it's true that qualified immunity will not limit the criminal prosecution of Derek Chauvin. But Carlson is wrong that the doctrine has "nothing to do with anything that happened in the George Floyd case," for two reasons.

First, if George Floyd's family does decide to bring a civil rights claim against Chauvin and the other officers on the scene, it is entirely possible that the officers would be able to invoke qualified immunity, depending on whether there's a prior case in the Eighth Circuit with similar facts (i.e., an officer kneeling on a non-resisting suspect's neck for a long period of time while the suspect says he can't breathe). Even if Chauvin is convicted of murder, that's no guarantee that he wouldn't be entitled to immunity in a civil suit. Whether a prosecutor can prove the elements of murder beyond a reasonable doubt is simply a different legal question than whether prior case law would make the violation of George Floyd's rights "clearly established," under modern qualified immunity doctrine.

Second, the senseless violence committed by Derek Chauvin, and the stunning indifference of the other officers standing nearby, are the product of our culture of near-zero accountability for law enforcement. While that culture has many complex causes, one of the most significant is qualified immunity. Section 1983 was supposed to be the primary means of holding accountable government agents who violate our constitutional rights. Qualified immunity has severely undermined the deterrent effect of that statute, and thereby contributed to an environment where police simply do not expect to be held to account when they commit misconduct.

Qualified immunity has worked so well because police officers, maybe more than anyone else in society, must make difficult split-second decisions on the job, and a lot. They do it constantly. Whether to arrest someone, whether to conduct a search, whether to use force against a suspect. Sometimes, actions they sincerely and reasonably believe are legal are found later by courts to be unconstitutional.

Here, Carlson regurgitates what is probably the most commonly invoked defense of qualified immunity: that it is necessary to protect the discretion of police officers to make split-second decisions. And, no surprise, it is profoundly mistaken. This was the very first issue I addressed in my previous post on "The Most Common Defenses of Qualified Immunity, and Why They're Wrong," but the short answer is that our substantive standards for determining what actions do and do not violate the Fourth Amendment already incorporate substantial deference to on-the-spot police decision-making. In other words, when police sincerely and reasonably make a decision about whether to arrest someone or use force, they almost certainly will not have broken the law in the first place. Qualified immunity is therefore unnecessary to protect this discretion, because the doctrine, by definition, only applies when a defendant has committed a constitutional violation.

Moreover, as I discussed above, qualified immunity has nothing to do with whether an officer "sincerely and reasonably" believed their actions to be lawful. It doesn't turn on their state of mind at all. All that matters is whether a court determines that the facts of prior cases were sufficiently similar to hold that the law was "clearly established."

The Reason article by Billy Binion aptly notes that Carlson's assertion here can only be explained by a lack of familiarity with qualified immunity case law, and provides numerous examples of the sort of egregious injustices this doctrine regularly permits:

Take the cop who received qualified immunity after shooting a 10-year-old while in pursuit of a suspect that had no relationship to the child. The officer, sheriff's deputy Matthew Vickers, was aiming at the boy's nonthreatening dog. There were also the cops who were granted qualified immunity after assaulting and arresting a man for standing outside of his own house. And the prison guards who locked a naked inmate in a cell filled with raw sewage and massive amounts of human feces. And the cop who, without warning, shot a 15-year-old who was on his way to school. And the cops who received qualified immunity after siccing a police dog on a person who'd surrendered. It doesn't take much thought to conclude that those courses of action were morally bankrupt.

Just so. Okay, back to Carlson's defense of what he calls "qualified immunity":

Sometimes the very laws [police officers] enforce are struck down. That's not their fault, obviously, but without qualified immunity, police could be sued for that personally.

Only a tiny fraction of lawsuits against police involve claims that the laws they're enforcing are themselves unconstitutional. But Carlson actually is correct that, without qualified immunity, police officers could be held liable for enforcing unconstitutional statutes. Indeed, that sort of application was probably the principal evil that Congress had in mind when it enacted Section 1983 in 1871, as part of the Ku Klux Klan Act. Congress was well aware that southern states would continue passing laws infringing on the constitutional rights of recently freed slaves, and it wanted to deter state and local officials from carrying out such laws. Executive officers, no less than legislators or judges, have an independent obligation to enforce and respect constitutional limitations.

Still, one can understand the seeming unfairness in holding defendants personally liable when the only conduct alleged to be unlawful was executing a statute they reasonably believed to be valid. But, for that very reason, this is one of the two explicit safe harbors included in Braun's bill! His proposal specifically states that a defendant will not be liable under Section 1983 when "the conduct alleged to be unlawful was specifically authorized or required by a Federal statute or regulation, or by a statute passed by the primary legislative body of the State" in which the conduct was committed. In other words, Carlson is either entirely unaware of, or willfully concealing, the fact that Braun agrees with his own argument here, and has already incorporated it into his bill.

[Police officers] could be bankrupted, they could lose their homes. That's unfair. It would also end law enforcement. No one would serve as a police officer.

This is another issue I already addressed in my "common defenses" post, but I'll repeat the main points here. First, it's crucial to understand that even today, police officers are nearly always indemnified for any settlements or judgments against them in civil rights claims. This means that their municipal employers, not the officers themselves, actually end up paying. Joanna Schwartz, a UCLA law professor and probably the foremost scholar of qualified immunity, demonstrated in a 2014 article called "Police Indemnification" that, in her study period, governments paid approximately 99.98% of the dollars that plaintiffs recovered in lawsuits alleging civil rights violations by law enforcement. In other words, even when plaintiffs do overcome qualified immunity, the individual police officers rarely pay a dime.

I have written elsewhere about how this practice of near-automatic indemnification is itself problematic, because it fails to provide for individualized accountability for officers who violate people's rights. A better practice, as my colleague Clark Neily has also discussed, would be to take some portion of the money that municipalities already spend on civil rights judgments, and instead put that toward an insurance allowance for individual officers. Nevertheless, as things currently stand, officers are almost never required to pay anything personally, and that won't change if we eliminate qualified immunity. The idea that police would be "bankrupted" or "lose their homes" is reckless fearmongering.

Also, with regard to the idea that eliminating qualified immunity would "end law enforcement," I wonder whether Carlson is aware that he's made a testable prediction? After all, as I discussed here, Colorado recently enacted a civil rights law that effectively removes the defense of qualified immunity for officers who violate people's rights under the state constitution. Will this "end law enforcement" in Colorado? If Tucker Carlson or anyone who agrees with him would like to make a bet on this question, I'll give generous odds.

And that's why the Supreme Court has upheld the principle of qualified immunity for decades now, often unanimously, both sides agreeing.

I will give Carlson this: he is absolutely right that the Supreme Court has shown remarkable tenacity in sticking to one of the most embarrassing, egregious mistakes in its history. Section 1983 clearly says that any state actor who violates someone's constitutional rights "shall be liable to the party injured," and the common-law history against which that statute was passed did not include any across-the-board defenses for all public officials. The Supreme Court's invention of qualified immunity was a brazen act of judicial policymaking that effectively rewrote this statute, and it's shameful that the Justices have repeatedly declined the opportunity to correct this error.

What is surprising, however, is why Tucker Carlson approves of such blatant judicial activism in this case. After all, Carlson himself recently bemoaned how courts increasingly have come to see themselves not as interpreters of the law, their constitutional role, but as the country's main policy makers. So, does he want the Supreme Court to faithfully interpret the text and history of Section 1983, or to continue imposing its own policy preferences?

But now, in order to placate the rioters, who he believes have more moral authority than the police, Senator Mike Braun of Indiana would like to gut qualified immunity, and make it easier for cops to be sued personally for mistakes.

I already discussed above how Senator Braun's bill does not wholly abolish qualified immunity, but rather replaces the "clearly established law" standard with two limited, principled safe harbors. I also discussed how Section 1983 doesn't make cops liable for "mistakes," it makes them liable for constitutional violations, and the Fourth Amendment itself is already incredibly deferential to police decision-making. An officer hasn't violated the Fourth Amendment because they made the wrong call with regard to an arrest or use of force; they only violate the Fourth Amendment when they act objectively unreasonable, under the circumstances known to them at the time.

But I do want to address this idea of "moral authority." Setting aside the nonsense about placating rioters, how does it affect the moral authority of the law enforcement community when we hold police officers to a lower standard of liability than any other profession? As I've discussed previously, the proponents of qualified immunity are profoundly mistaken if they think the doctrine is doing the law enforcement community any favors. If you want to restore the moral authority of the police, you can't let police officers escape liability for egregious and immoral misconduct. If you want people to respect officers as professionals, then the law has to hold them to professional standards.

Qualified immunity, more than any other single rule or decision, has eroded the moral authority of the police, not protected it. And that is exactly why the more thoughtful members of law enforcement, such as the Law Enforcement Action Partnership and the National Organization of Black Law Enforcement Executives, have explicitly called for the elimination of qualified immunity. As Major Neill Franklin (Ret.) has explained: "Accountability measures that show an agency is serious about respecting the rights of all of its residents help the police as much as they help the communities we serve. There's no better way to restore community trust. And we cannot do our jobs without trust."

* * *

Carlson finishes his segment with a rant about Charles Koch that would make Nancy MacLean blush, and then asks whether Senator Braun would be willing to defend the absolute immunity that members of Congress enjoy. This latter question is interesting enough on its own, but Carlson obviously just intends it as a gotcha, not as a serious point of discussion.

But the bottom line is that Tucker Carlson has done a profound disservice to his viewers and to the country by further propagating blatant misunderstandings of what qualified immunity actually is. It's honestly hard to say whether Carlson himself has been duped, or whether he is willfully joining the disinformation campaign of the law enforcement lobby. But either way, nobody should take what he's saying at face value. I remain interested to see whether any self-professed advocate of qualified immunity will defend the actual doctrine.


Why China’s Race For AI Dominance Depends On Math – The National Interest

THE WORLD first took notice of Beijing's prowess in artificial intelligence (AI) in late 2017, when BBC reporter John Sudworth, hiding in a remote southwestern city, was located by China's CCTV system in just seven minutes. At the time, it was a shocking demonstration of power. Today, companies like YITU Technology and Megvii, leaders in facial recognition technology, have compressed those seven minutes into mere seconds. What makes those companies so advanced, and what powers not only China's surveillance state but also its broader economic development, is not simply its AI capability, but rather the math power underlying it.

The race for AI supremacy has become perhaps the most visible aspect of the great power competition between America and China. The world's dominant AI power will have the ability to shape global finance, commerce, telecommunications, warfighting, and computing. President Donald Trump recognized this last February by signing an executive order, the American AI Initiative, designed to protect U.S. leadership in key AI technologies. In just a few years, American corporations, universities, think tanks, and the government have devoted hundreds of policy papers and projects to addressing this challenge.

Yet forget about AI itself. It's all about the math, and America is failing to train enough citizens in the right kinds of mathematics to remain dominant.

AI IS not simply a black box that will grow if unlimited funds are poured into it. Dozens of think tank projects and government reports won't mean anything if Americans can't maintain mastery over the fundamental mathematics that underpin AI. Calls for billions of dollars in related investments won't add up without the abstract math ability needed to transform the economy or military.

What we call AI is in fact a suite of various algorithms and distinctive developments that draw heavily from advanced mathematics and statistics. Take deep neural networks, which have understandably become a CIO/CTO buzzword, as an example. These are not artificial brains. They are stacks of information-transforming modules that learn by repeatedly computing a chain of what are known as gradients (something rarely taught in high school calculus), which are the backbone of a family of algorithms known as backpropagation.
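To make that "chain of gradients" concrete, here is a minimal sketch of backpropagation on a one-hidden-layer network, written in plain Python with NumPy. The data, layer sizes, and learning rate are illustrative assumptions, not anyone's production setup:

import numpy as np

# Tiny regression problem: 4 samples, 3 features, 5 hidden units.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))
W1 = rng.normal(size=(3, 5)) * 0.1   # input -> hidden weights
W2 = rng.normal(size=(5, 1)) * 0.1   # hidden -> output weights

for step in range(200):
    # Forward pass: each module transforms the information it receives.
    h = np.tanh(x @ W1)
    y_hat = h @ W2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: the chain rule, applied module by module.
    g_yhat = 2 * (y_hat - y) / len(y)      # dL/dy_hat
    g_W2 = h.T @ g_yhat                    # dL/dW2
    g_h = g_yhat @ W2.T                    # dL/dh
    g_W1 = x.T @ (g_h * (1 - h ** 2))      # dL/dW1 (tanh' = 1 - tanh^2)

    # Gradient-descent update.
    W1 -= 0.1 * g_W1
    W2 -= 0.1 * g_W2

print(f"final loss: {loss:.4f}")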

Similar dissections can be made for all of machine learning, which is the study of how to program computers to learn a task rather than execute a rigidly pre-coded one. The ability to rapidly classify massive amounts of data, identify patterns, predict outcomes, and self-learn all comes down to ever more sophisticated algorithms paired with increasingly powerful computing hardware and a commensurate amount of data.

From iPhones to Summit (the world's most powerful supercomputer, located at the Oak Ridge National Lab), and from Google to Facebook, these computing platforms and programs use incredibly complex mathematical calculations to do everything from modeling nuclear detonations to providing web search results.

And contrary to what some prominent AI advocates, like Kai-fu Lee, author of AI Superpowers, argue, it's not simply all about data. Lee is famous for saying that, today, data is "the oil of the early twentieth century," and that China, which has the most data, is "the new Saudi Arabia." Yet without the right type of math, and those who can creatively develop it, all the data in the world will only take you so far, and certainly not far enough into the future AI advocates boldly envision.

That is why cutting-edge mathematics focuses, among other things, on being able to work with partial information loss and sparse data, or to discard useless information that is collected along with the core data. No matter how you slice it, the world runs on ones and zeros, and on the whiteboards where the algorithms that manipulate them are thrashed out. Yet one can't simply jump into creating more powerful and elegant algorithms; it takes years of patient training in ever more complex math.

Unfortunately, American secondary school and university students are not mastering the fundamental math that prepares them to move into the type of advanced fields, such as statistical theory and differential geometry, that make AI possible. American fifteen-year-olds scored thirty-fifth in math on the OECD's 2018 Program for International Student Assessment (PISA) tests, well below the OECD average. Even at the college level, not having mastered the basics needed for rigorous training in abstract problem solving, American students are mostly taught to memorize algorithms and insert them when needed.

This failure to train students capable of advanced mathematics means fewer and fewer U.S. citizens are moving on to advanced degrees in math and science. In 2017, over 64 percent of PhD candidates and nearly 70 percent of master's students in U.S. computer science programs were foreign nationals, and fully half of doctoral degrees in mathematics that year were awarded to non-U.S. citizens, according to the National Science Foundation. Chinese and Indian students account for the bulk of these, in large part because the most advanced training in American universities still outstrips that in their home countries, though the gap is closing with respect to China. Yet that also means that the majority of those being prepared by U.S. universities to open new frontiers in computer science and abstract math are not Americans. Some of these non-citizens will stay here. But many will return home to help grow their countries' burgeoning tech industries.

There are good reasons to argue that U.S. visa restrictions on skilled workers should be eased, tempting more of those foreign nationals to stay in the United States after their studies have been completed. But the bottom line is that not enough American citizens are choosing to major in advanced math, which has corresponding implications for everything from foreign competition to Silicon Valley's startup culture, from national security concerns to whether or not U.S. corporations consider themselves American.

AMERICA'S SELF-INFLICTED math wounds matter because the Chinese Communist Party has made global AI dominance a national goal by 2030, and is leveraging its resources to make it so. Indeed, the world now sees the battle over AI as a battle between China and the United States. Under General Secretary Xi Jinping, China has invested heavily in AI-related technologies, making them a core focus for the modernization of Chinese industry. This effort underpins Beijing's "Made in China 2025" initiative, which seeks to make the country dominant in most high-tech processes.

China's AI market is now estimated to be worth around $3.5 billion, and Beijing has set a goal of a one-trillion-yuan ($142 billion) AI market by 2030. The government has pledged the equivalent of $2.1 billion to build an AI industrial park outside Beijing, among other major investments. Leading the effort is Huawei, which has established AI research laboratories in London and Singapore, unveiled a new generation of AI processor chips, and laid out an "all-scenario" AI strategy.

Much of China's spending is directed towards facial and voice recognition technologies like those of Megvii and SenseTime, along with natural language processing. The focus on these particular technologies is purpose-driven: Beijing is using the country's facility in applied mathematics and AI, whether honed in America or at home, to create a digital surveillance state that is unrivaled in history. For example, a new law requires all individuals registering new mobile phone numbers to have a facial scan. The world's most advanced algorithms are being used to aid in monitoring and controlling Chinese society and to bolster the country's security services.

Some of this is already plainly visible. Beijing is notoriously creating a social credit system based on facial recognition and other technologies that rewards or penalizes certain behavior (jaywalking, credit unworthiness, insufficient patriotism, and the like) so as to shape individuals' private and public behavior. The two far western provinces of Xinjiang and Tibet have become virtual police states within China, as their Uighur Muslim and Buddhist Tibetan populations are ceaselessly monitored and controlled through the application of facial recognition and forced DNA collection.

China's AI focus has global security implications as well, given Beijing's military-civil fusion policy, which mandates that all high-tech advancements be made available to the Chinese armed forces for incorporation in weapons systems. Just as insidiously, Beijing is reportedly recruiting the country's smartest high school students to train them as AI weapons scientists. A recent National Science Foundation report noted that Chinese government policies do not share U.S. values of science ethics, raising concerns over U.S.-trained Chinese scientists employing advanced research that benefits the CCP's surveillance state and military.

Even as China's AI industry works to catch up to its American counterpart in terms of talent, the country is investing in its mathematical ability. Chinese students ranked number one in the world in math (as well as science and reading) in the latest PISA tests. While there is good reason to doubt the veracity of at least some of the Chinese scores, there is no question that China is focusing heavily on STEM education, outstripping America and European nations. The recently announced Strong Base Plan will recruit the country's top students to study mathematics, as well as physics, chemistry, and biology, among other fields.

Read this article:

Why China's Race For AI Dominance Depends On Math - The National Interest

China and AI: What the World Can Learn and What It Should Be Wary of – Singularity Hub

China announced in 2017 its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.

The move has led, at least in the West, to warnings of a global AI arms race and concerns about the growing reach of China's authoritarian surveillance state. But treating China as a villain in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government's approach to AI that are highly concerning and rightly should be condemned, it's important that this does not cloud all analysis of China's AI innovation.

The world needs to engage seriously with China's AI development and take a closer look at what's really going on. The story is complex, and it's important to highlight where China is making promising advances in useful AI applications and to challenge common misconceptions, as well as to caution against problematic uses.

Nesta has explored the broad spectrum of AI activity in China: the good, the bad, and the unexpected.

China's approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications that can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its aging population.

Applications include AI doctor chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.

Since the outbreak of Covid-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to try to combat the virus by speeding up screening, diagnosis, and new drug development. AI tools used in Wuhan, China, to tackle Covid-19 by helping accelerate CT scan diagnosis are now being used in Italy and have also been offered to the NHS in the UK.

But there are also elements of China's use of AI that are seriously concerning. Positive advances in practical AI applications that are benefiting citizens and society don't detract from the fact that China's authoritarian government is also using AI and citizens' data in ways that violate privacy and civil liberties.

Most disturbingly, reports and leaked documents have revealed the government's use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China's Xinjiang province.

The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.

In Shanghai's smart court system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool's potential biases, the quality of the data, and the soundness of the algorithm, making it hard for them to challenge the decisions made.

China's experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens' digital rights.

Commentators have often interpreted the State Council's 2017 Artificial Intelligence Development Plan as an indication that China's AI mobilization is a top-down, centrally planned strategy.

But a closer look at the dynamics of China's AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.

Beyond the thriving major cities of Beijing, Shanghai, and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an "AI Town" clustering together the tech company Alibaba, Zhejiang University, and local businesses to work collaboratively on AI development. China's local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.

China's accelerating AI innovation deserves the world's full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand, and learn from, the nuances of what's really happening.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Dominik Vanyi on Unsplash

Follow this link:

China and AI: What the World Can Learn and What It Should Be Wary of - Singularity Hub

How AI Is Impacting Operations At LinkedIn – Forbes

LinkedIn has been at the cutting edge of AI for years and uses AI in many ways users may not be aware of. I recently had the opportunity to talk to Igor Perisic, Chief Data Officer (CDO) and VP of Engineering at LinkedIn, to learn more about the evolution of AI at LinkedIn, how it's being applied to daily activities, how worldwide data regulations impact the company, and some unique insights into the changing AI-related work landscape and job roles.

Igor Perisic, Chief Data Officer and VP of Engineering at LinkedIn

The Evolution of AI at LinkedIn

Very early on at LinkedIn, data was identified as one of the company's core differentiating factors. Another differentiating factor was a core company value of "members first" (clarity, consistency, and control of how member data is used) and their vision to create economic opportunity for every member of the global workforce.

As LinkedIn began finding more and more ways to weave AI into their products and services, they also recognized the importance of ensuring all employees were well-equipped to work with AI as needed in their jobs. To that end, they created an internal training program called the AI Academy. It's a program that teaches everyone from software engineers to sales teams about AI at the level most suited to them, in order for them to be prepared to work with these technologies.

One of the very first AI projects was the People You May Know (PYMK) recommendations. Essentially, this is an algorithm that recommends to members other people that they may know on the platform and helps them build their networks. It is a recommendation system that is still central to their products, although now it is much more sophisticated than it was in those early days. PYMK as a data product began around 2006. It was started by folks who would eventually be known as one of the first data science teams in the tech industry. Back in those early days, no one referred to PYMK as an AI project, as the term "AI" was not yet back in favor as a buzzword.

The other significant project which we started around the same time was of course search ranking, which was a classic AI problem at that time due to the emergence of Google and competition in the search engine space.

How AI is applied to daily activities

At LinkedIn, Igor says, "we compare AI to oxygen; it permeates everything we do." For example, for members, it helps recommend job opportunities, organizes their feed, ensures that the notifications they receive are timely and informative, and suggests LinkedIn Learning content to help them learn new skills. With respect to LinkedIn's enterprise products, he says AI helps salespeople reach members that have an interest in their products, marketers serve relevant sponsored content, and recruiters identify and reach out to new talent pools. The benefits of AI at LinkedIn also operate in the background, from helping protect members from fraudulent and harmful content to routing internet connections to ensure the best possible site speed for members.

Ensuring member safety on the platform is something LinkedIn takes very seriously. Being a social network with a very strong professional intent, it's important to act quickly in identifying and preventing abuse. Because abuse and threats are constantly changing, AI is certainly at the core of these efforts. LinkedIn has found machine learning very helpful in detecting inappropriate profiles.

Without AI, many of their products and services would simply not function. The economic graph they use to represent the global economy is simply too large and too nuanced to be understood without it.

AI is literally enhancing every experience, starting from the notifications members receive about relevant items. But probably one of the most prominent ways members experience AI is in the feed, which sorts and ranks a heterogeneous inventory of activities (posts, news, videos, articles, etc.). To ensure relevance in the feed, it's important that the algorithms consider the different nuances of content recommendations and members' preferences.

One interesting example Igor shares is that at the start of 2018, they discovered an uneven distribution of engagement in the feed: gains in viral actions were accrued by the top 1% of power users, while the majority of creators were increasingly receiving zero feedback. The feed model was simply doing as it was told: sharing broad-interest, viral content that would generate lots of engagement. However, he says they realized that this optimization wasn't necessarily the most beneficial for all members. To combat the negative ecosystem effect that the AI had created, they incorporated creator-side optimization in their feed relevance objective function to help creators with smaller audiences. With this update, the ranking algorithms began taking into consideration the value that would result for both viewer and creator in surfacing a specific item. For the viewer, they wanted to surface relevant content based on their preferences, and for the creator, they wanted to encourage high-quality content and help them reach their audiences. Igor says that by tweaking the models "to optimize for more than just viral sharing moments, our feed changed into a healthy mix of content from influencers as well as direct connections, which then improved engagement for both viewers and creators."
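The idea can be sketched in a few lines of Python. Everything here is hypothetical (the field names, weights, and scoring are stand-ins, not LinkedIn's actual models); the point is simply that adding a creator-side term to the objective lets items from smaller creators outrank pure virality:

from dataclasses import dataclass

@dataclass
class FeedItem:
    item_id: str
    p_viewer_engage: float  # predicted probability the viewer engages
    creator_value: float    # estimated value of one more interaction to the
                            # creator (higher for creators with little feedback)

def score(item: FeedItem, creator_weight: float = 0.5) -> float:
    # Viral-only ranking would use p_viewer_engage alone; the second term
    # redistributes some exposure toward under-served creators.
    return item.p_viewer_engage + creator_weight * item.creator_value

candidates = [
    FeedItem("viral-post", p_viewer_engage=0.90, creator_value=0.05),
    FeedItem("friend-post", p_viewer_engage=0.60, creator_value=0.80),
]
ranked = sorted(candidates, key=score, reverse=True)
print([i.item_id for i in ranked])  # ['friend-post', 'viral-post']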

How worldwide data regulations impact LinkedIn

In recent years, regions around the world have started to put in place laws around how companies are able to store and use user data. Laws such as the EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are intended to enhance privacy rights and consumer protection. For some companies, becoming compliant meant having to totally change how they approach data. Luckily for LinkedIn, data was always considered an asset to the company and approached with respect as one of the company's core differentiating factors.

Even before GDPR, Igor says LinkedIn had an internal framework they call the 3Cs: clarity, consistency, and control. He says, "We believed then and still do today that we owed it to our members to provide clarity about what we do with their data, to be consistent in only doing as we say, and to give our members control over their data." In that context, LinkedIn approached GDPR as an opportunity to reinforce their commitment to data privacy for all members globally. For example, LinkedIn extended GDPR Data Subject Rights to all members globally. They continue to be thoughtful in how they approach the use of members' data throughout LinkedIn and in AI, and in how they review and update processes, to ensure privacy by design. Acting in the best interest of members continues to be LinkedIn's north star, and they have always felt that it is their joint responsibility across the organization to protect members' data.

The changing AI work landscape

As a very large professional social network, LinkedIn has the unique opportunity to see insights about changing job roles, popular positions, and regional popularity that other companies might not have. At the end of last year, LinkedIn released their third annual Emerging Jobs Report to identify the most rapidly growing jobs. AI specialist emerged as the #1 job on that list, showing 74% annual growth over the past four years. It's especially exciting to see this growth beyond the tech industry. In 2017, they found that the education sector had the second-highest number of core AI skills added by members, showing that AI's growth is correlated with more research in the field.

More recently, amid the economic downturn caused by the pandemic, LinkedIn is still observing that the AI job market continues to grow. When normalized against overall job postings, AI jobs increased 8.3% in the ten weeks after the COVID-19 outbreak in the U.S. Even though AI job listings are growing more slowly than they did before the pandemic, and despite an overall slowdown in demand for talent, employers still appear to be open to hiring AI specialists.

What's interesting about the field of AI is that LinkedIn is seeing an entire ecosystem of technical roles that support different stages of the AI lifecycle. Going back to the Emerging Jobs Report at the end of last year, AI specialist roles (people who build and train models, etc.) are up, but so-called AI-adjacent jobs are also on the rise. This means more demand for data scientists, data engineers, and cloud engineers. You're also seeing this demand growing across multiple industries, not just the technology sector. It is across the entire spectrum.

Future Impact of AI

At the end of the day, AI is a tool, and its greatest potential lies in how it will augment human intelligence and enable people to achieve more. LinkedIn's current AI tools depend greatly on human input and can never be fully automated.

Igor strongly believes that the future of AI is in applications, and especially in how we leverage that tool to make us all smarter and enable us to do more. To do so, AI needs to be much more accessible to a wider set of individuals than just AI experts. AI needs to become more of a plug-and-play, almost point-and-click, interface. He's seeing the major cloud players get into this space, developing tools that help lower the barrier of entry into AI. Once AI is application-driven, it opens up human creativity to develop really cool and interesting use cases.

In that context, AI technologies are really fascinating across the entire spectrum, from algorithmic and mathematical developments to hardware and AI systems. Just think about the ingenuity researchers have shown in attempting to make their deep neural nets simply converge. In the AI landscape, it seems that there are treasures behind every bush or under every rock.

Follow this link:

How AI Is Impacting Operations At LinkedIn - Forbes

How IT teams can set the right foundation for AI projects – Medium

Goal #1: Support a Range of Applications

An AI platform doesn't just need to support TensorFlow, or even just the model development workloads. It needs to provide testing pipelines, versioning, sandbox environments, monitoring, and more.

For example, you might start by creating a Kubernetes cluster for AI workloads. That cluster will run a wide set of applications that need access to a variety of datasets and compute hardware, and likely even a variety of protocols.

As with any platform hosted by IT and DevOps teams, an AI platform should support application scalability and resiliency. And, optimally, data scientists should have self-serve access to new environments.

Without a cohesive plan to support the production pipeline as a unified project, individual application silos often become inefficient, unscalable, and fragile.

Step back and ask, "How can we make this set of disparate workloads as easy to manage and scale as possible?"

If you're an IT leader, you have an incredible opportunity. The success of your company's AI-fueled ambitions requires you to enable developers in a new way.

Get in front of the productionalization crisis by making architectural choices that centralize AI infrastructure, consolidating people, process, and technology.

On the storage side, use the same centralized storage underneath all of the applications in the platform. For example, Pure Storage's FlashBlade is great at handling many different IO patterns and has performant access for both file and object workloads, which means it's well suited for any of these components.

Likewise, NVIDIA's DGX A100 brings consolidation to the compute hardware. With DGX A100, NVIDIA consolidated what used to be three separate silos of legacy compute infrastructure, each sized and designed to support only one specific workload: training, inference, or analytics. DGX A100 supports all of these workflows using just one universal system type.

Now you have just two building blocks to manage: one for storage and one for compute. This infrastructure simplicity is what lowers the threshold for getting models into production; there's already a place where new workloads can run. With the AIRI reference architecture from Pure Storage and NVIDIA, you can now support the end-to-end AI lifecycle, from development to deployment, on one elastic infrastructure.

See the original post here:

How IT teams can set the right foundation for AI projects - Medium

AI likes to do bad things. Here’s how scientists are stopping it from scamming you – SYFY WIRE

The robots aren't taking over yet, but sometimes, they can get a little out of control.

AI apparently has a bias toward making unethical choices. This tends to spike in commercial situations, and nobody wants to get scammed by a bot. Some types of artificial intelligence even make disproportionately skewed choices when it comes to things like setting insurance prices for particular customers (yikes). Though there are many potential strategies a program can choose from, it needs to be prevented from going straight to the unethical ones. An international team of scientists has now come up with a formula that explains why this happens, and they are now working to combat crime by computer brain.

"In an environment in which decisions are increasingly made without human intervention, there is therefore a strong incentive to know under what circumstances AI systems might adopt unethical strategies," the scientists said in a study recently published in Royal Society Open Science.

Even if there aren't that many possible unethical strategies an AI program can pick up, that doesn't lessen the possibility of it picking something shady. Figuring out prices for car insurance can be tricky, since things like past accidents and points on your license have to be factored in. In a world where we sometimes communicate with robots more than with humans, bots can be convenient. The problem is, in situations where money is involved, they can do things like apply price-raising penalties you don't deserve to your insurance policy (of course, anyone would be thrilled if the unlikely opposite happened).

The chance of AI screwing up could mean huge consequences for a company: everything from fines to lawsuits. With thinking robots come robot ethics. You're probably wondering why unethical choices can't just be eliminated completely. That would happen in an ideal sci-fi world, but the scientists believe that the best that can be done is limiting the percentage of unethical choices to as few as possible. There is still the problem of the unethical optimization principle.

"If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk," as the team describes the principle. It isn't that robots are starting to turn evil.

The AI actually doesn't make unethical choices consciously. We're not at Westworld levels yet, but making a bot less likely to choose wrong will help make sure we don't go there.
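The principle is easy to see in a toy Monte Carlo experiment (invented numbers, not the paper's formal model): give a small share of strategies the same average return but a wider spread, and a naive "pick the maximum" optimizer selects them far more often than their share.

import numpy as np

rng = np.random.default_rng(42)
n_strategies, n_trials = 1000, 5000
unethical = rng.random(n_strategies) < 0.02  # ~2% of strategies are unethical

picked = 0
for _ in range(n_trials):
    returns = rng.normal(0.0, 1.0, n_strategies)
    returns[unethical] *= 2.0                # same mean (zero), double the spread
    if unethical[np.argmax(returns)]:        # optimizer: just take the max
        picked += 1

print(f"unethical share of strategies: {unethical.mean():.1%}")
print(f"trials where the optimizer picked one: {picked / n_trials:.1%}")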

Excerpt from:

AI likes to do bad things. Here's how scientists are stopping it from scamming you - SYFY WIRE

Security Think Tank: Artificial intelligence will be no silver bullet for security – ComputerWeekly.com


Published: 03 Jul 2020

Undoubtedly, artificial intelligence (AI) is able to support organisations in tackling their threat landscape and the widening of vulnerabilities as criminals have become more sophisticated. However, AI is no silver bullet when it comes to protecting assets and organisations should be thinking about cyber augmentation, rather than just the automation of cyber security alone.

Areas where AI can currently be deployed include training a system to identify even the smallest behaviours of ransomware and malware attacks before they enter the system, and then to isolate them from that system.

Other examples include automated phishing and data theft detection, which are extremely helpful as they involve a real-time response. Context-aware behavioural analytics are also interesting, offering the possibility to immediately spot a change in user behaviour which could signal an attack.

The above are all examples of where machine learning and AI can be useful. However, over-reliance and false assurance could present another problem: as AI improves at safeguarding assets, so too does it improve at attacking them. As cutting-edge technologies are applied to improve security, cyber criminals are using the same innovations to get an edge over these defences.

Typical attacks can involve the gathering of information about a system or sabotaging an AI system by flooding it with requests.

Elsewhere, so-called deepfakes are proving a relatively new area of fraud that poses unprecedented challenges. We already know that cyber criminals can litter the web with fakes that make it almost impossible to distinguish real news from fake.

The consequences are such that many legislators and regulators are contemplating the establishment of rules and laws to govern this phenomenon. For organisations, this means that deepfakes could lead to much more complex phishing in future, targeting employees by mimicking corporate writing styles or even individual writing styles.

In a nutshell, AI can augment cyber security so long as organisations know its limitations and have a clear strategy focusing on the present while constantly looking at the evolving threat landscape.

Ivana Bartoletti is a cyber risk technical director at Deloitte and a founder of Women Leading in AI.

Link:

Security Think Tank: Artificial intelligence will be no silver bullet for security - ComputerWeekly.com

Two-thirds want tighter regulation around AI, figures show – Yahoo Finance UK

The public remains sceptical over the use of artificial intelligence (AI) to make decisions, research suggests, with nearly two-thirds wanting tighter regulation around its use.

A survey by AI innovation firm Fountech.ai revealed that 64% want more regulation introduced to make AI safer.

Artificial intelligence is becoming more prominent in large-scale decision-making, with algorithms now being used in areas such as healthcare with the aim of improving speed and accuracy of decision-making.

However, the research shows that the public does not yet have complete trust in the technology: 69% say humans should monitor and check every decision made by AI software, while 61% said they thought AI should not be making any mistakes in the first place.

The idea of a machine making a decision also appears to have an impact on trust in AI, with 45% saying it would be harder to forgive errors made by technology compared with those made by a human.

As a result, many want AI to be held to a high standard of accountability, with nearly three-quarters of those asked (72%) saying they believe companies behind the development of AI should be held responsible if mistakes are made.

Nikolas Kairinos, founder of Fountech.ai, said it was not surprising that some people were uneasy about the rise of technology which can operate outside of human control.

"We are increasingly relying on AI solutions to power decision-making, whether that is improving the speed and accuracy of medical diagnoses, or improving road safety through autonomous vehicles," he said.

"As a non-living entity, people naturally expect AI to function faultlessly, and the results of this research speak for themselves: huge numbers of people want to see enhanced regulation and greater accountability from AI companies.

"It is reasonable for people to harbour concerns about systems that can operate entirely outside human control.

"AI, like any other modern technology, must be regulated to manage risks and ensure stringent safety standards.

"That said, the approach to regulation should be a delicate balancing act.

"AI must be allowed room to make mistakes and learn from them; it is the only way that this technology will reach new levels of perfection.

"While lawmakers may need to refine responsibility for AI's actions as the technology advances, over-regulating AI risks impeding the potential for innovation with AI systems that promise to transform our lives for the better."

In a report published earlier this year, the Committee on Standards in Public Life said greater transparency was needed around AI and its potential use in the public sector in order to gain the trust of the public and reassure them over its use.

It called for the Government and regulators to establish a set of ethical principles about the use of AI and make its guidance easier to use.

Excerpt from:

Two-thirds want tighter regulation around AI, figures show - Yahoo Finance UK

Facebook Initiative Aims To Demystify AI By Crowdsourcing Ideas – Women Love Tech

Facebook recently announced its award recipients of the Ethics in AI Research Initiative for the Asia Pacific region. Among them are proposals from two Australian universities, which will each receive funds to further their research in AI.

Their success follows a request for proposals submitted by Facebook's research division last year, which was made open to academic institutions, think tanks, and research groups across the Asia Pacific region.

This is part of a wider Facebook initiative, in partnership with the Centre for Civil Society and Governance of The University of Hong Kong and the Privacy Commissioner for Personal Data, Hong Kong.

Through this regional outreach, Facebook aims to simultaneously crowdsource the best local ideas and accountable practices.

As Raina Yeung, Facebook's Head of Privacy and Data Policy, Engagement, in the Asia Pacific region, said, "The latest advancements in AI bring transformational changes to society, and at the same time bring an array of complex ethical questions that must be closely examined."

Monash academic Professor Robert Sparrow's approved proposal, "The uses and abuses of black box AI in emergency medicine", highlights issues of concern surrounding AI. The issue with black box AI, for instance, is that it has internal rules and parameters which are opaque to its users. In the field of medicine, particularly emergency medicine, this lack of clarity is dangerous and must be correctly addressed. When decisions are made concerning human lives, it is paramount for all involved that transparency exists as to how those choices are being made. For those in intensive care, the prospect of receiving lesser attention due to the economic or genetic determinations made by a circuit board is understandably concerning, as is the risk of technical malfunctions affecting one's diagnosis.

However one perceives the intrusion of AI into intellectual disciplines requiring tact and discretion, such as law or medicine, the process is ongoing and exponential. While such technologies may not currently match human performance, the constant rate of advancement in AI makes it essentially inevitable that they will do so. With this in mind, the process of automation can be seen as something of a passing of the torch from humans to our AI counterparts, in both physical and intellectual fields.

The approved proposal of Dr Sarah Bankins, of Macquarie University, "AI decisions with dignity: promoting interactional justice perceptions", further highlights this shift. In this transitional stage, particular care is necessary to ensure AI tools are applied in ways that are equitable and socially conscientious, as the knock-on effects of poor implementation will compound over time.

AI that can think and act for itself, often referred to as a General Intelligence and the holy grail for AI developers, is still a distant prospect. In the meantime, AI researchers have vaulted smaller hurdles. Advances in machine learning, the ability of computer programs to improve autonomously without human input, have paved the way for bleeding-edge technologies such as artificial language processing and driverless vehicles. These new tools boast impressive gains to productivity and, as they improve, have the potential to save human lives.

However, despite these advancing capacities, such tools cannot yet think or act independently, and it remains the role of conscientious human participants to dictate how and where they're applied. By acting as custodians of our future selves and taking early steps to safeguard the infrastructure of AI against systematic inequity, we can work to ensure a brighter future for all, as is Facebook's stated aim in foregrounding diverse, regional voices in the conversations of ethical practice around AI.

"AI decisions with dignity: Promoting interactional justice perceptions" - Dr. Sarah Bankins, Prof. Deborah Richards, A/Prof. Paul Formosa (Macquarie University), Dr. Yannick Griep (Radboud University)

"The challenges of implementing AI ethics frameworks in the Asia Pacific" - Manju Lasantha Fernando, Ramathi Bandaranayake, Viren Dias, Helani Galpaya, Rohan Samarajiva (LIRNEasia)

"Culturally informed pro-social AI regulation and persuasion framework" - Dr. Junaid Qadir (Information Technology University of Lahore, Punjab, Pakistan), Dr. Amana Raquib (Institute of Business Administration Karachi, Pakistan)

"Ethical challenges on application of AI for the aged care" - Dr. Bo Yan, Dr. Priscilla Song, Dr. Chia-Chin Lin (University of Hong Kong)

"Ethical technology assessment on AI and internet of things" - Dr. Melvin Jabar, Dr. Ma. Elena Chiong Javier (De La Salle University), Mr. Jun Motomura (Meio University), Dr. Penchan Sherer (Mahidol University)

"Operationalizing information fiduciaries for AI governance" - Yap Jia Qing, Ong Yuan Zheng Lenon, Elizaveta Shesterneva, Riyanka Roy Choudhury, Rocco Hu (eTPL.Asia)

"Respect for rights in the era of automation, using AI and robotics" - Emilie Pradichit, Ananya Ramani, Evie van Uden (Manushya Foundation), Henning Glasser, Dr. Duc Quang Ly, Venus Phuangkom (German-Southeast Asian Center of Excellence for Public Policy and Good Governance)

"The uses and abuses of black box AI in emergency medicine" - Prof. Robert Sparrow, Joshua Hatherley, Mark Howard (Monash University)

Women Love Tech would like to thank Nick Ouzas for his story.

Read the original post:

Facebook Initiative Aims To Demystify AI By Crowdsourcing Ideas - Women Love Tech

Google brings its AI-powered SmartReply feature to YouTube – TechCrunch

Google's SmartReply, the four-year-old, A.I.-based technology that helps suggest responses to messages in Gmail, Android's Messages, the Play Developer Console, and elsewhere, is now being made available to YouTube creators. Google announced today the launch of an updated version of SmartReply built for YouTube, which will allow creators to more easily and quickly interact with their fans in the comments.

The feature is being rolled out to YouTube Studio, the online dashboard creators use to manage their YouTube presence, check their stats, grow their channels and engage fans. From YouTube Studio's comments section, creators can filter, view and respond to comments from across their channels.

For creators with a large YouTube following, responding to comments can be a time-consuming process. That's where SmartReply aims to help.

Image Credits: Google

Instead of manually typing out all their responses, creators will be able to click one of the suggested replies to respond to comments their viewers post. For example, if a fan says something about wanting to see what's coming next, the SmartReply feature may suggest a response like "Thank you!" or "More to come!"

Unlike the SmartReply feature built for email, where the technology has to process words and short phrases, the version of SmartReply designed for YouTube also has to be able to handle a more diverse set of content, like emoji, ASCII art, or language switching, the company notes. YouTube commenters also often post using abbreviated words, slang, and inconsistent punctuation. This made it more challenging to implement the system on YouTube.

Image Credits: Google

Google detailed how it overcame these and other technical challenges in a post on its Google AI Blog, published today.

In addition, Google said it wanted a system where SmartReply only made suggestions when it's highly likely the creator would want to reply to the comment, and when the feature is able to suggest a sensible response. This required training the system to identify which comments should trigger the feature.
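A highly simplified sketch of that two-stage behaviour (trigger, then suggest) is below. Google's production system uses learned cross-lingual, byte-level models; the keyword overlap here is only a hypothetical stand-in to make the control flow concrete:

import re

CANDIDATE_REPLIES = {
    "Thank you!": {"thanks", "thank", "love", "great", "awesome"},
    "More to come!": {"next", "more", "soon", "waiting", "when"},
}
TRIGGER_THRESHOLD = 0.25  # below this, stay silent rather than suggest badly

def suggest_replies(comment: str) -> list[str]:
    tokens = set(re.findall(r"[a-z']+", comment.lower()))
    scored = []
    for reply, cues in CANDIDATE_REPLIES.items():
        score = len(tokens & cues) / len(cues)  # fraction of cue words present
        if score >= TRIGGER_THRESHOLD:
            scored.append((score, reply))
    return [reply for _, reply in sorted(scored, reverse=True)]

print(suggest_replies("Can't wait to see what's next, more please!"))  # ['More to come!']
print(suggest_replies("17:03 was my favourite part"))                  # [] -> no trigger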

At launch, SmartReply is being made available for both English and Spanish comments, and it's the first cross-lingual and character byte-based version of the technology, Google says.

Because of the approach SmartReply is now using, the company believes it will be able to make the feature available to many more languages in the future.

See original here:

Google brings its AI-powered SmartReply feature to YouTube - TechCrunch

The Largest Cyber Attack of All Time Is Coming. And AI Could Help Stop It. – CPO Magazine

A number of articles have been published recently predicting that the largest cyberattack in history is destined to happen soon. One of the main underlying factors behind this assertion is the overnight explosion of the enterprise attack surface, and the large increase in noted hacks, that we've witnessed during the COVID-19 pandemic.

As an example, a recent Forbes article by Stephen McBride claims that "The Largest Cyberattack In History Could Happen Within Six Months." Although it is entirely possible, and even potentially probable, that the largest cybersecurity breach in history is right around the corner, it is also entirely avoidable. Solutions to protect networks in the changing enterprise cyber landscape we are witnessing due to events like COVID-19 do exist, but they are not the typical legacy tools, built on human labeling and supervised learning algorithms, that most companies are relying on for their cybersecurity now.

According to McBride, switching to remote work on such a massive scale, with employees now sharing computers with their loved ones, who are using them for everything from Zoom get-togethers to schoolwork, has caused the attack surface to grow by an astounding 500 percent, virtually overnight. Before the pandemic, remote employees would have specially secured laptops and other devices, but due to the quick transition, it has been impossible to effectively secure corporate devices now that the vast majority of employees are working from home.

As a result, hacking, phishing, and ransomware attempts have increased substantially since the start of COVID-19. This is because our remote work situation gives hackers more entry points than ever before.

With an overwhelming shortage of cybersecurity talent in the market before the pandemic struck, it's completely infeasible to believe that hiring out of this situation is an option. Luckily, the advent of solutions utilizing advanced AI may be the cure we need.


The word AI tends to scare people off due to overuse and under-delivery, but by finding and using valuable and effective artificial intelligence-based cybersecurity solutions that don't add to the workload of your already overworked SOC team, but instead automate and increase efficiency, enterprises can solve this problem. AI is the only viable solution to the potential D-Day-style attack we're facing in the near future.

Traditional cybersecurity systems are based on signatures. If you assume that any computer with endpoint software installed on it has some probability of failure, suddenly that probability is massively increased, because it is enough for just one computer to become a host for the whole company's network to potentially get compromised.

Endpoint protection is not enough and is futile in this current context because now the probability that the network is going to get infected is much higher.
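A quick back-of-the-envelope calculation shows why. The per-endpoint breach probability below is an illustrative assumption, but the compounding is the point: the chance that at least one machine in a fleet is compromised approaches certainty as the fleet grows.

p_endpoint = 0.01  # assumed chance a single endpoint is breached in a period
for n in (10, 100, 1000):
    p_network = 1 - (1 - p_endpoint) ** n  # P(at least one endpoint breached)
    print(f"{n:5d} endpoints -> {p_network:.3%}")
# 10 -> 9.562%, 100 -> 63.397%, 1000 -> 99.996%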

When we are not working remotely, the situation is very different because everyone is in a sort of envelope, but now everyone is out and their network habits are completely different than before, with much higher rates of online shopping and visits to malicious sites pertaining to COVID-19.

In some sense, the only solution to the coming attack, because it's just a game of probabilities, is to gain an understanding of all network traffic, that is, what is being sent and received over the wire, and to monitor for abnormalities.

For example, if an IP has behaved oddly on the inbound side, maybe moved laterally in a strange way and then exported something out, and if that's usually not what that user should be doing, you could take a look and find traces of these actions on the wire, but not on the endpoint.
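One common way to implement this kind of network-side monitoring is anomaly detection over per-host traffic features. The sketch below uses scikit-learn's off-the-shelf IsolationForest on invented features; it is a generic stand-in, not the specific systems the article alludes to:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Per-host features: [inbound MB/hr, outbound MB/hr, distinct internal peers]
normal_hosts = np.column_stack([
    rng.normal(50, 10, 500),
    rng.normal(20, 5, 500),
    rng.poisson(3, 500),
])
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_hosts)

# A host that moved laterally (many internal peers) and exfiltrated data.
suspicious = np.array([[55.0, 400.0, 40.0]])
print(detector.predict(suspicious))  # [-1] -> flagged as anomalous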

There is no reason to diminish the importance of endpoint solutions; you still want to protect and monitor endpoints as much as you can. But in this situation, because of the fast and overwhelming growth of the attack surface, users need an additional solution which monitors the interactions on the network. The probability that one of your computers will get infected and gain access to the rest of the network has increased exponentially along with the attack surface, leaving enterprises more vulnerable than ever before.

It's easy to come to the conclusion that we will be seeing an event of great magnitude very soon in the cybersecurity sphere. When is anybody's guess, but there are things we can do to prevent and mitigate it when it happens. It's almost like with the pandemic: if a local government does a great job of shutting things down, keeping people indoors, and implementing contact tracing, the pandemic is not going to spread as quickly and do as much damage. Like that same local government, your security team can do things now to prevent and contain a coming attack, for example implementing an AI-based solution to monitor and trace potential weak points in your network and identify attacks in real time.

I believe it is probable that we will see a serious, potentially catastrophic attack in the next six months, but if enterprises are proactive about implementing these types of technologies, and employing Unsupervised or Self-Supervised AI based cybersecurity systems, we would see a drastic decrease in the probability of this big attack.


The idea is really contact detection and prevention. If one computer gets breached or infected, we want to keep it from infecting the whole network. A team of cybersecurity professionals that has to sift through thousands of false-positive alerts might spend hours or even days trying to find a breach when alerted, and every second that passes means the network becomes more and more infected, whereas an advanced AI system can monitor the network, sift through alerts, and surface a potentially deadly attack in seconds. If we're going to stop the largest cyberattack of all time before it does catastrophic damage, we need to be armed with the most intelligent and advanced tools possible.

View post:

The Largest Cyber Attack of All Time Is Coming. And AI Could Help Stop It. - CPO Magazine

Menten AI's combination of buzzword bingo brings AI and quantum computing to drug discovery – TechCrunch

Menten AI has an impressive founding team and a pitch that combines some of the hottest trends in tech to pursue one of the biggest problems in healthcare: new drug discovery. The company is also $4 million richer with a seed investment from firms including Uncork Capital and Khosla Ventures to build out its business.

Menten AI's pitch to investors was the combination of quantum computing and machine learning to discover new drugs that sit between small molecules and large biologics, according to the company's co-founder Hans Melo.

A graduate of the Y Combinator accelerator, which also participated in the round alongside Social Impact Capital*, Menten AI looks to design proteins from scratch. It's a heavier lift than some might expect because, as Melo said in an interview, it takes a lot of work to make an actual drug.

Menten AI is working with peptides, short chains of amino acids similar to proteins, which have the potential to slow aging, reduce inflammation, and get rid of pathogens in the body.

"As a drug modality [peptides] are quite new," says Melo. "Until recently it was really hard to design them computationally, and people tried to focus on genetically modifying them."

Peptides have the benefit of getting through membranes and into cells where they can combine with targets that are too large for small molecules, according to Melo.

Most drug targets are not addressable with either small molecules or biologics, according to Melo, which means there's a huge untapped potential market for peptide therapies.

Menten AI is already working on a COVID-19 therapeutic, although the company's young chief executive declined to disclose too many details about it. Another area of interest is neurological disorders, where the founding team members have some expertise.

Image of peptide molecules. Image Courtesy: D-Wave

While Menten AI's targets are interesting, the approach the company is taking, using quantum computing to potentially drive down the cost and accelerate the time to market, is equally compelling for investors.

It's also unproven. Right now, there isn't a quantum advantage to using the novel computing technology versus traditional computing, something that Melo freely admits.

"We're not claiming a quantum advantage, but we're not claiming a quantum disadvantage," is the way the young entrepreneur puts it. "We have come up with a different way of solving the problem that may scale better. We haven't proven an advantage."

Still, the company is an early indicator of the kinds of services quantum computing could offer, and it's with that in mind that Menten AI partnered with some of the leading independent quantum computing companies, D-Wave and Rigetti Computing, to work on applications of their technology.

The emphasis on quantum computing also differentiates it from larger publicly traded competitors like Schrödinger and Codexis.

So does the pedigree of its founding team, according to Uncork Capital investor Jeff Clavier. "It's really the unique team that they formed," Clavier said of his decision to invest in the early-stage company. "There's Hans the CEO, who is more on the quantum side; there's Tamas [Gorbe] on the bio side; and there's Vikram [Mulligan], who developed the research. It's kind of a unique, fantastic team that came together to work on the opportunity."

Clavier has also acknowledged the possibility that it might not work.

"Can they really produce anything interesting at the end?" he asked. "It's still an early-stage company and we may fall flat on our face, or they may come up with really new ways to make new peptides."

It's probably not a bad idea to take a bet on Melo, who worked with Mulligan, a researcher from the Flatiron Institute focused on computational biology, to produce some of the early research into the creation of new peptides using D-Wave's quantum computing.
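That research cast protein design as an optimization over discrete residue choices, which maps naturally onto the QUBO (quadratic unconstrained binary optimization) format quantum annealers accept. Here is a toy sketch using D-Wave's open-source dimod library; the interaction energies are made up, standing in for terms a real molecular modelling suite would supply:

import dimod

# One binary variable per (position, amino-acid) choice.
choices = [("pos1", "ALA"), ("pos1", "SER"), ("pos2", "GLY"), ("pos2", "LYS")]
E = {(0, 2): -1.0, (0, 3): 0.4, (1, 2): 0.6, (1, 3): -0.2}  # invented energies
P = 5.0  # penalty weight enforcing exactly one choice per position

Q = dict(E)
for i, j in [(0, 1), (2, 3)]:
    # P*(x_i + x_j - 1)^2 expands to linear terms -P and a +2P coupling (x^2 = x).
    Q[(i, i)] = Q.get((i, i), 0.0) - P
    Q[(j, j)] = Q.get((j, j), 0.0) - P
    Q[(i, j)] = Q.get((i, j), 0.0) + 2 * P

best = dimod.ExactSolver().sample_qubo(Q).first  # brute force on a toy problem
picked = [choices[k] for k, v in best.sample.items() if v == 1]
print(picked, best.energy)  # ALA at pos1 with GLY at pos2 minimizes the energy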

Novel peptide structures created using D-Wave's quantum computers. Image Courtesy: D-Wave

While Melo and Mulligan were the initial researchers working on the technology that would become Menten AI, Gorbe was added to the founding team to give the company some exposure to the world of chemistry and enzymatic applications for its new virtual protein manufacturing technology.

The gamble paid off in the form of pilot projects (also undisclosed) that focus on the development of enzymes for agricultural applications and pharmaceuticals.

"At the end of the day, what they're doing is using advanced computing to figure out what is the optimal placement of those clinical compounds in a way that is less based on those sensitive tests and more bound on those theories," said Clavier.

*This post was updated to add that Social Impact Capital invested in the round. Khosla, Social Impact, and Uncork each invested $1 million into Menten AI.

More here:

Menten AI's combination of buzzword bingo brings AI and quantum computing to drug discovery - TechCrunch

The rise of AI in medicine – Varsity Online

During the coronavirus pandemic, it's unlikely that AI doctors would work at all: the depth of moral decisions that need to be made simply can't be accommodated by a program. Photo Credit: Vidal Balielo Jr.

By now, it's almost old news that artificial intelligence (AI) will have a transformative role in medicine. Algorithms have the potential to work tirelessly, at faster rates, and now with potentially greater accuracy than clinicians.

In 2016, it was predicted that machine learning would displace much of the work of radiologists and anatomical pathologists. In the same year, a University of Toronto professor controversially announced that "we should stop training radiologists now." But is it really the beginning of the end for some medical specialties?

AI excels in pattern identification, determining pathologies that look certain ways, according to Elliot Fishman, a radiology and oncology professor at Johns Hopkins University and a key proponent of AI integration into medicine. Ultimately, specialties that rely heavily on visual pattern recognition (notably radiology, pathology, and dermatology) are those believed to be at the greatest risk. With the advent of virtual primary care services, such as Babylon, General Practice may also have to adapt in the future.

Pattern recognition functions

In January of this year, an article in Nature reported that AI systems outperformed doctors in breast cancer detection. The study was carried out by an international team, including researchers from Google Health and Imperial College London, on mammograms obtained from almost 29,000 women. Screening mammography currently plays a critical role in early breast cancer detection, ensuring early initiation of treatment and yielding improved patient prognoses. False negatives are a significant problem in mammography. The study found AI use was associated with an absolute reduction in false negatives of 9.4% and 2.7% in the USA and UK, respectively, and a reduction in false positives of 5.7% and 1.2%, respectively. The study suggested that the AI outperformed the six radiologists individually, and was equivalent to the double-reading system of two doctors currently used in the UK. These developments have already had perceptible consequences in practice: algorithms can eliminate the need for a second radiologist when interpreting mammograms. However, critically, one radiologist remains responsible for the diagnosis.
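To see what those absolute reductions mean in metric terms, here is a small worked example with invented counts (not the study's data): a drop in the false-negative rate raises sensitivity point for point, and a drop in false positives raises specificity likewise.

def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)   # share of cancers correctly flagged

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)   # share of healthy scans correctly cleared

# Hypothetical screening set: 100 cancers, 900 healthy scans.
print(f"reader: sens={sensitivity(tp=63, fn=37):.1%}, spec={specificity(tn=810, fp=90):.1%}")
# The AI turns some misses into detections and some false alarms into clears.
print(f"AI:     sens={sensitivity(tp=72, fn=28):.1%}, spec={specificity(tn=861, fp=39):.1%}")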

AI can also be deployed to predict the cognitive decline that leads to Alzheimer's disease... allowing early intervention and treatment

Earlier studies have also yielded similar results: a 2017 study published in Nature examined the use of algorithms in dermatology. The study, from Stanford University, involved an algorithm developed by computer scientists using an initial database of 130,000 skin disease images. When compared to the success rates of 21 dermatologists, the algorithm was almost equally successful. Likewise, in a study conducted by the European Society for Medical Oncology, it was found that AI exceeded the performance of 58 international dermatologists. A system reliant on a form of machine learning known as Deep Learning Convolutional Neural Network (CNN) missed fewer melanomas (the most lethal form of skin cancer), and misdiagnosed benign moles (or nevi) as malignant less often than the group of dermatologists.
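For a sense of what such a system involves, below is a minimal sketch of a convolutional image classifier in PyTorch. The architecture, input size and binary benign/malignant labelling are illustrative assumptions only; the published systems are far deeper networks trained on large clinical image databases.

```python
# Minimal sketch of a CNN-style lesion classifier (illustrative only;
# not the Stanford or ESMO architecture).
import torch
import torch.nn as nn

class LesionCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB in, 16 filters out
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 224x224 -> 112x112
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 112x112 -> 56x56
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, num_classes),        # benign vs malignant logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = LesionCNN()
dummy_batch = torch.randn(4, 3, 224, 224)  # four fake 224x224 RGB images
print(model(dummy_batch).shape)            # torch.Size([4, 2])
```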

Further applications in medicine

However, the prospects of AI technology extend beyond the clear applications in cancer diagnosis and radiology: recent studies have also demonstrated that AI may be able to detect genetic diseases in infants through rapid whole-genome sequencing and interpretation. Considering that time is critical when treating gravely ill children, such automated techniques can be crucial in diagnosing infants suspected of having genetic diseases.

In addition, AI can be deployed to predict the cognitive decline that leads to Alzheimer's disease. Such computational models can be highly valuable at the individual level, allowing early intervention and treatment planning. FDA approval has also been granted to a number of companies for such technologies, including Imagen's OsteoDetect, an algorithm intended to aid wrist fracture detection. Algorithms may also have functions in other specialties, such as anaesthesiology, in monitoring and responding to physiological signs.

Limitations of AI

Despite the benefits that AI integration into clinical practice can provide, the technology is not without limitations. Machine learning algorithms are highly dependent on the quality and quantity of the input data, typically requiring millions of observations to function at suitable levels. Biases in data collection can heavily impact performance; for instance, skewed racial or gender representation in the original data set can produce differences in the system's diagnostic ability across groups, with consequent disparities in patient outcomes. Considering that certain pathologies, including melanoma, present differently between races and with different incidences, this can lead to both later diagnoses and poorer outcomes for racial minorities, as found in a number of studies. Volunteer bias in the data collected is also a pertinent consideration; for example, although lactate concentration is a good predictor of death, it is not routinely measured in healthy individuals.
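One straightforward way to surface this kind of bias is to stratify evaluation metrics by group rather than reporting a single aggregate figure. The sketch below uses fabricated labels and group tags purely for illustration:

```python
# Stratified evaluation sketch: compute sensitivity (recall on true
# positives) separately per group. All data here is fabricated.
import numpy as np

y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])   # 1 = disease present
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0, 0, 0])   # model's predictions
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    positives = (group == g) & (y_true == 1)          # true cases in group g
    sensitivity = y_pred[positives].mean()            # fraction correctly flagged
    print(f"group {g}: sensitivity = {sensitivity:.2f}")
# Prints ~0.67 for group A but ~0.33 for group B: a disparity that an
# aggregate accuracy figure would hide.
```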

Considering the magnitude of what is at stake raises the question of whether it is appropriate to rely solely on machines without any human input.

Other key problems include algorithms overfitting predictions to random errors in the data, resulting in unstable estimates that vary between data samples. In addition, clinicians may take a more cautious approach when making a diagnosis. A human may therefore appear to underperform compared with an algorithm, yielding lower accuracy in tumour identification, yet this cautious approach could mean fewer critical cases are missed.
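The cautious-clinician trade-off can be made concrete as a decision threshold. In the invented example below, lowering the threshold costs raw accuracy through extra false positives but misses fewer tumours:

```python
# Threshold trade-off sketch with made-up scores and labels.
import numpy as np

scores = np.array([0.10, 0.42, 0.44, 0.46, 0.48, 0.55, 0.90])  # model tumour scores
labels = np.array([0,    0,    1,    0,    0,    1,    1])     # 1 = tumour present

for threshold in (0.5, 0.4):
    preds = (scores >= threshold).astype(int)
    accuracy = (preds == labels).mean()
    missed = int(((preds == 0) & (labels == 1)).sum())          # false negatives
    print(f"threshold {threshold}: accuracy={accuracy:.2f}, missed tumours={missed}")
# threshold 0.5: accuracy=0.86, missed tumours=1
# threshold 0.4: accuracy=0.57, missed tumours=0
```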

Ultimately, the tendency for humans to favour propositions given by automated systems over non-automated ones, known as automation bias, may exacerbate these problems.

Attempts to replace GPs with AI have been unsuccessful

The success of AI integration into clinical practice crucially depends on the receptiveness of patients. Babylon, a start-up company based in the UK, was developed to give medical advice to patients using chat services. Although Babylon has been referred to in UK media as "the biggest disruption in medical practice in years" and a "game-changer", as quoted on Babylon's website, it is questionable how successful the service has been so far: Babylon has been slow in recruiting patients, and this month it came under fire for data breaches. The fact that patients lose access to their regular GP if they sign up to Babylon is perhaps a key contributing factor in its slow take-off. It appears, therefore, that human contact is highly valued by patients after all, at least for some medical specialties.

Potential effect of COVID-19

The COVID-19 pandemic, with its requirements for social distancing, could accelerate the use of AI. COVID-related restrictions could change patients' perception of remote medical consultations, paving the way for increased receptiveness to primary healthcare apps such as Babylon. The pandemic has also highlighted the inadequacies of fast internet access throughout the country. This may encourage increased government investment in broadband infrastructure, which may, in turn, facilitate broader penetration of AI technology. The increased pressure on the NHS may also encourage greater use of algorithms to delegate menial tasks, as already seen in specialties such as radiology.

The future

AI will likely become an indispensable tool in clinical medicine, facilitating the work of professionals by automating mundane, albeit essential, tasks. By reducing the medical workload, this could allow healthcare professionals to dedicate greater effort to other aspects of their work, including patient interaction. As emphasised by the President of the Royal College of Radiologists, radiologists could instead spend more of their time on interventional radiology and on managing more complex cases. Innovation may aid clinicians and augment their decision-making to improve efficiency and diagnostic accuracy, but it remains doubtful whether technology can fully replace these roles. After all, the magnitude of what is at stake, human life, raises the question of whether it is appropriate to rely solely on machines without any human input. It therefore remains likely that human involvement will continue across medical specialties, although perhaps in a reduced or adapted form.


Originally posted here:

The rise of AI in medicine - Varsity Online

The AI of digitalization – Bits&Chips

Jan Bosch is a research center director, professor, consultant and angel investor in start-ups. You can contact him at jan@janbosch.com.


This article is the last of four in which I explore different dimensions of digital transformation. Earlier, I discussed business models, product upgrades and data exploitation. The fourth dimension is concerned with artificial intelligence. As with the other dimensions, our research showed that there's a clear evolution path that companies go through as they transition from being traditional companies to becoming digital ones (see the figure).

In the first stage, the company is still focused on data analytics. All data is processed for the sole purpose of human consumption and interpretation. At this point, it's all about dashboards, visualizations and stakeholder views.

In the second stage, the first machine learning (ML) or deep learning (DL) models are starting to be developed and deployed. The training of the models is based on static data sets that have been assembled at one point in time and that don't evolve unless there's an explicit decision taken. When that happens, a new data set is assembled and used for training.

In the third stage, DevOps and MLOps are merged in the sense that there's continuous retraining of models based on the most recent data. This data is no longer a static data set, but rather a window over a data stream that's used for training and continuous retraining. Depending on the domain and the rate of change in the underlying data, the MLOps loop is either aligned with the DevOps loop or executed more or less frequently. For instance, when using ML/DL for house price prediction in a real-estate market, it's important to frequently retrain the model based on the most recent sales data, as house prices change continuously.
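As a sketch of how stage three differs from stage two's static data sets, the fragment below refits a toy price model on a sliding window over a simulated stream of house sales; the linear model, the window size and the synthetic upward drift are all illustrative assumptions:

```python
# Windowed retraining sketch: keep only the most recent sales and
# periodically refit the model on that window. Data is simulated.
from collections import deque
import numpy as np

WINDOW = 100
window = deque(maxlen=WINDOW)      # old sales fall out automatically

def retrain(window):
    """Refit a simple size -> price linear model on the current window."""
    sizes = np.array([s for s, _ in window])
    prices = np.array([p for _, p in window])
    slope, intercept = np.polyfit(sizes, prices, deg=1)
    return slope, intercept

rng = np.random.default_rng(0)
for t in range(500):               # simulated stream of house sales
    size = rng.uniform(50, 200)    # square metres
    price = (2000 + 5 * t) * size  # market price drifts upward over time
    window.append((size, price))
    if len(window) == WINDOW and t % 100 == 0:
        slope, _ = retrain(window) # periodic retraining on fresh data only
        print(f"t={t}: estimated price per m^2 = {slope:,.0f}")
```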

Especially in the software-intensive embedded systems industry, where ML/DL models are deployed in each product instance, the next step tends to be the adoption of federated approaches. Rather than conducting all training centrally, the company adopts federated learning, where all product instances are involved in training and model updates are shared between product instances. This allows for localization and customization, as specific regions and users may want the system to behave differently; depending on the approach to federated learning, it's feasible to allow for this. For example, different drivers want their adaptive cruise control to behave in different ways: some want the system to take a more careful approach, whereas others would like a more aggressive style of braking and accelerating. Each product instance can, over time, adjust itself in response to driver feedback.
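A minimal illustration of the federated idea, with the "model" reduced to a single weight vector: each product instance takes a few local gradient steps on its own data, and only the weights, never the raw data, are shared and averaged centrally. The data and model here are invented for the sketch.

```python
# Federated averaging sketch: local training per client, central
# averaging of weights. Clients' raw data never leaves the device.
import numpy as np

def local_update(weights, X, y, lr=0.01, steps=10):
    """A few steps of local gradient descent on a least-squares loss."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
global_w = np.zeros(3)
# Five "product instances", each with its own private data set.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]

for round_id in range(3):                          # federated rounds
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)           # server averages the updates
    print(f"round {round_id}: global weights = {np.round(global_w, 3)}")
```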

Finally, we reach the automated experimentation stage, where the system fully autonomously experiments with its own behavior with the intent of improving certain success metrics. Whereas in earlier stages humans conduct A/B experiments or similar, and humans are the ones coming up with the A and B alternatives, here it's the system itself that generates alternatives, deploys them, measures the effect and decides on next steps. Although the examples in this category are few and far between, we've been involved in, among others, cases where a system of this type explores configuration parameter settings (most systems have thousands) in order to optimize the system's performance automatically.
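A stripped-down version of such automated experimentation might look like the sketch below: the system itself generates an alternative configuration, "deploys" it, measures a success metric and keeps only improvements. The performance function is a stand-in for a real measured KPI such as throughput or latency:

```python
# Automated configuration experimentation sketch (random hill climbing).
import random

def measure_performance(config):
    """Stand-in for a real measurement; peaks at batch=64, threads=8."""
    return -((config["batch"] - 64) ** 2) - ((config["threads"] - 8) ** 2)

config = {"batch": 16, "threads": 2}
best = measure_performance(config)

for trial in range(200):
    candidate = dict(config)
    key = random.choice(list(candidate))
    candidate[key] += random.choice([-4, -1, 1, 4])  # system generates an alternative
    score = measure_performance(candidate)           # deploy and measure the effect
    if score > best:                                 # keep only improvements
        config, best = candidate, score

print(config)  # typically converges near {'batch': 64, 'threads': 8}
```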

Using AI is not a binary step, but a process that evolves over time

Concluding, digital transformation is a complex, multi-dimensional challenge. One of the dimensions is the adoption of AI/ML/DL. Using AI is not a binary step, but rather a process that evolves over time and proceeds through predefined steps. Deploying AI allows for automation of tasks that couldn't be automated earlier and for improving the outcomes of automated processes through smart, automated decisions. Once you have software, you can generate data. Once you have data, you can employ AI. Once you have AI, you can truly capitalize on the potential of digitalization.

In his course "Speed, data and ecosystems", Jan Bosch provides a holistic framework offering strategic guidance on how you can successfully identify and address the key challenges to excel in a software-driven world.

Read more here:

The AI of digitalization - Bits&Chips