A new book examines the progress of India's economy from its first Prime Minister to the current one – Scroll.in

Independent India's economy is completing a 75-year-long journey, and it has been a remarkable one. Agricultural production has grown considerably and stabilised; we have not faced food shortages for over half a century. From its import-dependent status, Indian industry has transformed itself into one with a highly diversified product mix. Services produced here no longer mean just an Ayurvedic massage or the rope trick but high-end software solutions delivered onsite to the leading corporations of the world by young Indian engineers. Clearly, the economy has modernised in some significant ways.

However, heart-warming as these achievements may be, after seventy-five years India's economic journey must be gauged against the goal that was set at its beginning. I begin this book by arguing that the goal of Indian independence, as visualised by its founders, is best reflected in Nehru's observation that India was embarking on a journey to end "poverty and ignorance and disease, and the inequality of opportunity".

As these outcomes may be expected to depend partly on the economic progress made, this is how I have narrated the story of India's economic journey over these nearly seventy-five years. I end this book with an evaluation of the extent to which political democracy, embraced wholeheartedly in 1947 and whose procedures have been retained, has succeeded in delivering the goal envisaged for it.

The ending of colonialism and the adoption of political democracy did usher in an important freedom to Indians. They were no longer constrained by a foreign power and, at least in principle, were free of arbitrary rule. But surely the founders of India had more in mind for their compatriots. Actually, it is possible to argue that when Nehru spoke about ending poverty and ignorance and disease and the inequality of opportunity, he had in mind the need to endow Indians with the capability that would enable them to lead a full life.

There is also far greater undernourishment in the country, and greater illiteracy, than in the world at large. The only metric on which the Indian population is not far from the rest of the world is life expectancy. These data together imply that while Indians live almost as long as everyone else on the planet, a sizeable section of them lead a life of deprivation.

In significant contrast, poverty, illiteracy and undernourishment have been almost eliminated in China. On every one of the indicators in the table, China does better than the world and India does worse than China. In terms of the most basic indicators of development, India has very far to go to reach the global standard.

The point of comparisons such as the one just made is to assess the gap that may exist between countries on the indicators of interest. This does involve the assumption that the benchmark used is attainable by all the countries included in the exercise.

This is a flawed understanding of the reasons for India's condition. Her relatively poor performance on standard human development indicators can be understood by reference to public policy. In my discussion of the mortality from Covid-19, I have pointed out that the death rate across India can be explained in terms of the varying investment in a public health system, measured by the share of GDP devoted to public expenditure on health.

As health is a state subject in India, the analysis is based on the expenditures of state governments. This shows some of them spending less on health than they do on the police. The state of Maharashtra stands out as one that spends less than 0.5 per cent of its GDP on a public health system. During the first wave of Covid-19 it was the site of the worst form of the health crisis in India, with overflowing hospitals, limited health personnel, shortages of ventilators and oxygen, and the highest death rate among the states of India at that stage of the pandemic. No more evidence is needed to confirm the close relation between health outcomes and public policy.

As evidence on the connection between health outcomes and public policy appears in this book, I will here confine myself to the case of education. We can see the level of public spending on education in India and its consequences. Public expenditure on education as a share of GDP is lower in India than in every other regional grouping of the world. Commensurately, the outcomes in terms of literacy and schooling are, mostly, worse.

As spending here is lower than even that in sub-Saharan Africa, a region of the world with lower per capita income, it cannot be said that low spending on education merely reflects a limited capacity to spend. For India, it appears to have been a matter of priorities in public policy. Its consequence has been persisting illiteracy.

Interestingly, we find that public expenditure on education is much higher in the United States, a country committed to free-market capitalism, while India's Constitution declares the country a socialist republic. As seen in the table, the former socialist republics of Europe and Central Asia spend substantially more on health and education than India does, and this is reflected in the superior human development indicators in these countries.

It is difficult not to conclude that there is a class bias in this pattern of expenditure, as public education is availed of only by the poorer classes. Whatever the underlying reason, it could not have been without wide-ranging consequences for the country. India's children may not be receiving the attention they need at the time when they need it most, that is, while at school. In any case, India's poor performance on health and education can be understood in terms of the meagre public outlays on these foundational inputs into the capability of a population. It is the nature of its public policy alone that accounts for India's disappointing human development record.

Excerpted with permission from India's Economy From Nehru To Modi: A Brief History, by Pulapre Balakrishnan, published by Permanent Black in collaboration with Ashoka University.


The Problems of Evolution as a March of Progress – SAPIENS

Herschel Walker, the former football star turned U.S. Senate candidate from Georgia, made headlines when he recently asked at a church-based campaign stop: if evolution is true, "Why are there still apes?"

This chestnut continues to be echoed by creationists, despite being definitively debunked. Anthropologists have repeatedly explained that modern humans did not evolve from apes; rather, both evolved from a shared ancestor that fossil and DNA evidence indicates lived 7 to 13 million years ago.

But Walker's question raises a larger, timely point that generally escapes recognition, even by some scientists and educators.

A more fruitful query might be: if evolution is true, why are there still humans? Why is our species almost universally seen as the logical endpoint of evolution, with all other species serving as inferior detours or temporary placeholders on an inevitable march toward humanity?

This default, hard-to-shake view of evolution has been debunked as definitively as Walker's ape question. Yet it continues to be echoed in education, policy, business, conservation efforts, and the behaviors of the vast majority of people in Western, industrialized nations.

It is not necessarily surprising that non-scientists might see Earth's history as a progression toward higher levels of complexity, with humans representing the most complex. What is startling is that traces of this view remain in scientific thought.

Biology teachers seldom realize it underlies lessons of four-chambered hearts succeeding three-chambered hearts, or of simple urinary flame cells in flatworms and nephridia in earthworms next giving rise to kidney tubules in "higher" animals. It is as if humans are the benchmark by which all characteristics should be measured, and developing more human-like organs is a prime indicator of evolutionary advancement.

Worse, the progressive complexity view continues to infect anthropology. It's exemplified by the iconic March of Progress: a linear sequence of slumped apes eventually supplanted by upright humans. And it persists in the idea that certain "lower" ancestral human populations gave rise to, and were succeeded by, more complex people, who are often depicted as having lighter skin tones.

People must unlearn this idea that biological diversity is an ascending ladder of complexity, with humans on top and nonhuman species as imperfect transitions and lesser beings. The chief result of this misguided worldview is our casual disregard for the natural environment, which, via climate change, habitat destruction, and biodiversity loss, continues to cause disastrous consequences for humans and nonhumans alike.


Doe River Gorge making progress on bringing Christmas Train to the gorge for Christmas 2023 – Johnson City Press (subscription)



Addressing the Social Determinants of Health with AI, Partnerships – HealthITAnalytics.com

June 11, 2020 - In the healthcare industry today, it is widely understood that optimal health outcomes require addressing patients' clinical and non-clinical needs, including their social determinants of health.

So much of individuals' health is determined by factors beyond the doctor's office. Where someone lives, works, and plays has a direct impact on her well-being, and it's critical for health systems to gather and understand their patients' social determinants data.

However, for many healthcare organizations, it can be challenging to know where to start with addressing patients' social determinants. A lack of industry standards makes it difficult to collect and share this essential information, and healthcare entities may not be equipped to address unmet social needs.

"These obstacles have been highlighted even more as the industry has increasingly understood the important role social determinants play in overall health," Amy Salerno, MD, MHS, Director of Community Health and Well-Being at the University of Virginia (UVA) Health System, told HealthITAnalytics.

"Industry-wide and society-wide, there's a lot more recognition of the social determinants of health. In Charlottesville specifically, we've been looking at inequities and disparities in our local community and among our patients, and noting the tie to health outcomes," she said.


"Because the health system is not necessarily the expert in addressing housing instability, food insecurity, or other socioeconomic factors, we need to partner with organizations in our community that are the experts and create a robust network to support individuals' social needs."

In 2018, UVA Health System started to partner with community organizations to help patients beyond clinical settings. The organization selected a technology-based referral network, called Pieces, to better connect patients with community resources that meet their specific needs.

Before UVA could achieve better community health outcomes, the entity needed to develop a comprehensive network to connect patients with community groups.

"We had to consider what a robust network and partnership should look like, where we respect and honor the expertise that our community partners bring to the table. We wanted to be able to address these complex issues using shared decision-making," Salerno said.

"That led us to look for solutions that align with the strategic objectives of UVA from an operational standpoint, but that also serve the broader community and not just our patients. We wanted it to be a win-win, both for our community and for UVA."


UVA implemented a real-time artificial intelligence platform that integrated directly with hospital EHRs. The platform leverages natural language processing tools to extract social determinants and clinical risk factors from unstructured notes for more timely interventions.
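The article doesn't detail how the platform's natural language processing works, so the following is only a hypothetical sketch of the underlying idea: scanning unstructured note text for mentions that map to social-determinant categories. The lexicon, category names, and function are invented for illustration; a real system would use trained clinical NLP models rather than keyword matching.

```python
# Hypothetical lexicon mapping social-determinant categories to trigger
# phrases. Invented for illustration; not the platform's actual taxonomy.
SDOH_LEXICON = {
    "housing_instability": ["homeless", "eviction", "shelter"],
    "food_insecurity": ["food insecurity", "skips meals", "food bank"],
    "transportation_barrier": ["no transportation", "cannot get a ride"],
}

def flag_sdoh(note: str) -> dict:
    """Return the categories (and matched phrases) found in a free-text note."""
    text = note.lower()
    found = {}
    for category, phrases in SDOH_LEXICON.items():
        hits = [p for p in phrases if p in text]
        if hits:
            found[category] = hits
    return found

note = "Patient reports recent eviction and relies on a food bank weekly."
print(flag_sdoh(note))
# {'housing_instability': ['eviction'], 'food_insecurity': ['food bank']}
```

Notes flagged this way could then feed a referral workflow of the kind the article goes on to describe.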

For UVA, the technology could help hospitals improve patient care on several key metrics, Salerno said.

"The areas where we have the chance to make the biggest impact are hospital length of stay and readmissions," she said.

"For us, it's been huge just to have a starting point of really robust data to tell us exactly where we are compared to other national hospitals. The tool can also tell us where we have our biggest gaps in spaces to be able to make improvements."

The platform has also helped UVA detect gaps or disparities in COVID-19 testing and outcomes.


"A lot of my work is around health equity and identifying potential disparities in testing access: individuals who may have met testing criteria based on their symptoms, but maybe didn't receive testing," Salerno said.

"We've been able to look for care access disparities and outcomes disparities in the ICU, admissions, and mortalities. And that allows us to think about changing our processes and look at the data to see if these changes improved our disparities."

The health system also adopted a fully-linked case management platform that can be used by nearly any community-based organization, hospital, or clinic. The platform enables closed-loop referrals and care plans that help health systems address social determinants.

"Multiple different types of agencies in our community have adopted the platform," Salerno said.

"We've been able to use this platform to stand up our own community calling in response to coronavirus. It acts almost like a case management and call log platform for all of the individuals looking for access to a physician and call line, especially if they have yet to be part of our UVA system. So, they don't yet have a medical record number, but we can connect them to social services if they need them."

An essential part of implementing the platform and ensuring its success was working together with community partners to find quality solutions, Salerno explained.

"We had community organizations help us assess tools and look at the options, and tell us what was important to them. We also have a community coalition of organizations, including housing providers, food providers, mental health providers, transportation groups, and education leaders, that are helping us implement the tool in a community-wide way," she said.

"You have to strategically align the goals of your organization with those of your community and have conversations with them about what would be helpful. What are their fears, and how can you support them?"

Also critical for an effort like this: good data.

"You have to have really robust, quality data before people are willing to invest the resources for change," she said.

"We're an academic medical center, so for us, data is king. Having a comprehensive data platform that uses natural language processing to identify our patients' needs, and enables us to connect them to resources, was a really great place for us to start."

Going forward, these cross-sector partnerships, coupled with innovative platforms and tools, will help healthcare organizations better address patients' social determinants of health, leading to improved outcomes.

"Health is so dependent on the non-clinical factors of your life, like food access, safe and affordable housing, employment, and other determinants. Healthcare providers are not the experts in any of those factors, although all of those factors contribute to overall well-being," Salerno concluded.

"Making sure that you're partnered in creating a comprehensive system to support people along their journey and in taking care of their health and well-being is critical. Knowing that your community partners are experts in what they do, and being able to lean into them and ask for their help: those mutually beneficial partnerships are essential."


MammoScreen AI Tool Improves Diagnostic Performance of Radiologists in Detecting Breast Cancer – Cancer Network

A clinical investigation published in Radiology: Artificial Intelligence demonstrated that the concurrent use of a new artificial intelligence (AI) tool improved the diagnostic performance of radiologists in the detection of breast cancer by mammography without prolonging their workflow.1

Researchers used MammoScreen, an AI tool designed to identify regions suspicious for breast cancer on 2D digital mammograms and determine their likelihood of malignancy. The system produces a set of image positions with scores for suspicion of malignancy that are extracted from the 4 views of a standard mammogram.
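The article does not specify the system's output format, so the following is a purely hypothetical sketch of what "a set of image positions with suspicion scores across the four standard views" might look like as a data structure. All field names and the case-level aggregation rule are invented assumptions, not the vendor's actual schema.

```python
from dataclasses import dataclass

# Hypothetical record for one suspicious region on one mammogram view.
# Field names are invented for illustration only.
@dataclass
class Finding:
    view: str          # e.g. "L-CC", "L-MLO", "R-CC", "R-MLO"
    x: int             # region position in the image, in pixels
    y: int
    suspicion: float   # likelihood-of-malignancy score

findings = [
    Finding("L-CC", 412, 518, 0.83),
    Finding("L-MLO", 390, 611, 0.79),
]

# One plausible (assumed) aggregation: the most suspicious region
# drives the case-level suspicion score.
case_score = max(f.suspicion for f in findings)
print(case_score)  # 0.83
```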

"The results show that MammoScreen may help to improve radiologists' performance in breast cancer detection," Serena Pacilè, PhD, clinical research manager at Therapixel, where the software was developed, said in a press release.2

In this multireader, multicase retrospective study, a dataset including 240 digital mammography images was analyzed by 14 radiologists using a counterbalanced design, in which each half of the dataset was read either with or without AI in the first session and vice versa in a second session, with the 2 sessions separated by a washout period. End points assessed by the investigators included area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time.

Overall, the average AUC across readers was 0.769 (95% CI, 0.724-0.814) without the use of AI and 0.797 (95% CI, 0.754-0.840) with AI. The average difference in AUC was 0.028 (95% CI, 0.002-0.055; P = .035). The investigators said these data indicate greater interreader reliability with the aid of AI, resulting in more standardized results.
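AUC here is the probability that a randomly chosen cancer case receives a higher suspicion score than a randomly chosen non-cancer case, and the reported 0.028 is the average across readers of each reader's with-AI AUC minus without-AI AUC. A minimal sketch of that computation, using invented toy scores rather than the study's data:

```python
def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a randomly chosen positive case outranks a negative one,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy suspicion scores for the same 6 cases read without and with AI
# support (illustrative numbers only -- not the study's data).
labels = [1, 1, 1, 0, 0, 0]
without_ai = [0.9, 0.6, 0.4, 0.5, 0.3, 0.2]
with_ai = [0.9, 0.7, 0.6, 0.5, 0.3, 0.2]

delta = auc(with_ai, labels) - auc(without_ai, labels)
print(round(delta, 3))  # 0.111
```

In the study this per-reader difference was averaged over all 14 readers to give the 0.028 figure.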

Further, average sensitivity increased by 0.033 when AI support was utilized (P = .021). Reading time varied with the AI tool's score.

For cases with a low likelihood of malignancy (<2.5%), reading time was about the same in the first reading session and slightly decreased in the second. For cases with a higher likelihood of malignancy, reading time generally increased with the use of AI.

"It should be noted that in real conditions, additional factors may have an impact on reading time (i.e., stress, tiredness, etc.), and that those factors were obviously not considered in the present analysis," explained the authors.

Importantly, the main limitation of this study was that the dataset used was not representative of normal screening practices. Specifically, a high rate of false-positive readings may have resulted from readers' awareness that the dataset was enriched with cancer cases, causing a "laboratory effect". Moreover, because readers had no access to prior mammograms of the examined patients, other images, or additional patient information, the assessment was more challenging than a typical screening mammography reading workflow.

"The overall conclusion of this clinical investigation was that the concurrent use of this AI tool improved the diagnostic performance of radiologists in the mammographic detection of breast cancer," wrote the authors. "In addition, the use of AI was shown to reduce false negatives without affecting the specificity."

In March, the FDA cleared MammoScreen for use in the clinic, where it could aid in reducing the workload of radiologists. Moving forward, the investigators plan to continue to explore the behavior of the AI tool on a large screening-based population and its ability to detect breast cancer earlier.

References:
1. Pacilè S, Lopez J, Chone P, Bertinotti T, Grouin JM, Fillard P. Improving breast cancer detection accuracy of mammography with the concurrent use of an artificial intelligence tool. Radiology: Artificial Intelligence. Published November 4, 2020. doi:10.1148/ryai.2020190208

2. AI tool improves breast cancer detection on mammography. News release. Radiological Society of North America. Published November 4, 2020. Accessed December 3, 2020. https://www.eurekalert.org/pub_releases/2020-11/rson-ati110220.php


New Baylor Study Will Train AI to Assist Breast Cancer Surgery – HITInfrastructure.com

August 07, 2020 - Researchers at Baylor College of Medicine will enroll patients in a study, ATLAS AI, which will use a high-resolution imaging system to collect images of breast tumors in order to develop artificial intelligence (AI) that can help with breast cancer surgery, according to a recent press release.

ATLAS AI will leverage Perimeter Medical Imaging's OTIS system, which delivers real-time, ultra-high resolution, sub-surface images of extracted tissues, Baylor researchers explained.

The majority of breast cancer patients will undergo lumpectomy surgery as part of their treatment, hoping to remove the tumor and conserve the breast.

Perimeter's AI technology, ImgAssist, is designed to utilize a machine learning model to help surgeons identify if cancer is still present when performing a lumpectomy.

This will allow surgeons to immediately remove additional tissue from the patient with the intent to reduce the likelihood that the patient will require additional surgeries, researchers explained.

"One of the big problems in breast cancer surgery is that in about one in four women on whom we do a lumpectomy to remove cancer, we fail to get clear margins," Alastair Thompson, MD, professor, section chief of breast surgery and Olga Keith Wiess chair of Surgery at Baylor College of Medicine, said in the press release.

"That in turn leads to a need for reoperation to avoid high recurrence rates. Hence the need for a good, effective and user-friendly tool to help us better identify if we have adequately removed the breast cancer from a woman's breast, to get it right the first time."

Thompson, also a surgical oncologist at the Dan L Duncan Comprehensive Cancer Center at Baylor Medical Center and co-director of the Lester and Sue Smith Breast Center at Baylor College of Medicine, explained that OTIS and ImgAssist are noninvasive for the patients and fit into the routine surgical process.

"Our AI technology has the potential to be a powerful tool for real-time margin visualization and assessment that we believe will help physicians improve surgical outcomes for breast cancer patients," said Andrew Berkeley, co-founder of Perimeter Medical Imaging.

"The patients who enroll in these clinical studies at Baylor are contributing to new technology that we hope will assist surgeons in the future so that they can reduce the likelihood of their patients needing additional surgeries."

ATLAS AI was made possible by a $7.4 million grant from the Cancer Prevention and Research Institute of Texas (CPRIT) to further develop the AI algorithm for OTIS.

The grant will allow the company to use data collected at pathology labs at Baylor College of Medicine, the University of Texas MD Anderson Cancer Center, and UT Health San Antonio as part of the study.

The study will begin enrolling nearly 400 patients at the beginning of next week.

Additionally, Perimeter will continue the ATLAS AI Project with a second randomized, multi-site study in nearly 600 patients to test the OTIS platform with ImgAssist AI against current standard of care.

Through the study, researchers intend to uncover whether the platform lowers the re-operation rate for breast conservation surgery, Baylor researchers said.

"This could be a huge improvement for patient care. It could help patients avoid a second surgery and the physical, emotional, and financial stress that accompany an additional procedure," Thompson concluded.


How to prevent AI from taking over the world – New Statesman

Right now AI diagnoses cancer, decides whether you'll get your next job, approves your mortgage, sentences felons, trades on the stock market, populates your news feed, protects you from fraud and keeps you company when you're feeling down.

Soon it will drive you to town, deciding along the way whether to swerve to avoid hitting a wayward fox. It will also tell you how to schedule your day, which career best suits your personality and even how many children to have.

In the further future, it could cure cancer, eliminate poverty and disease, wipe out crime, halt global warming and help colonise space. Fei-Fei Li, a leading technologist at Stanford, paints a rosy picture: "I imagine a world in which AI is going to make us work more productively, live longer and have cleaner energy." General optimism about AI is shared by Barack Obama, Mark Zuckerberg and Jack Ma, among others.

And yet from the beginning AI has been dogged by huge concerns.

What if AI develops an intelligence far beyond our own? Stephen Hawking warned that AI could develop a will of its own, a will that is in conflict with ours and which could destroy us. We are all familiar with the typical plotline of dystopian sci-fi movies: an alien comes to Earth, we try to control it, and it all ends very badly. AI may be the alien intelligence already in our midst.

A new algorithm-driven world could also entrench and propagate injustice while we are none the wiser. This is because the algorithms we trust are often black boxes whose operation we don't, and sometimes can't, understand. Amazon's now infamous facial recognition software, Rekognition, seemed like a mostly innocuous tool for landlords and employers to run low-cost background checks. But it was seriously biased against people of colour, matching 28 members of the US Congress, disproportionately minorities, with profiles stored in a database of criminals. AI could perpetuate our worst prejudices.

Finally, there is the problem of what happens if AI is too good at what it does. Its beguiling efficiency could seduce us into allowing it to make more and more of our decisions, until we forget how to make good decisions on our own, in much the way we rely on our smartphones to remember phone numbers and calculate tips. AI could lead us to abdicate what makes us human in the first place: our ability to take charge of and tell the stories of our own lives.


It's too early to say how our technological future will unfold. But technology heavyweights such as Elon Musk and Bill Gates agree that we need to do something to control the development and spread of AI, and that we need to do it now.

Obvious hacks won't do. You might think that we can control AI by pulling its plug. But experts warn that a super-intelligent AI could easily predict our feeble attempts to shackle it and undertake measures to protect itself by, say, storing up energy reserves and infiltrating power sources. Nor will encoding a master command, "Don't harm humans", save us, because it's unclear what "harm" means or what constitutes it. When your self-driving vehicle swerves to avoid hitting a fox, it exposes you to a slight risk of death: does it thereby harm you? What about when it swerves into a small group of people to avoid colliding with a larger crowd?

***

The best and most direct way to control AI is to ensure that its values are our values. By building human values into AI, we ensure that everything an AI does meets with our approval. But this is not simple. The so-called value alignment problem (how to get AI to respect and conform to human values) is arguably the most important, if vexing, problem faced by AI developers today.

So far, this problem has been seen as one of uncertainty: if only we understood our values better, we could program AI to promote these values. Stuart Russell, a leading AI scientist at Berkeley, offers an intriguing solution: let's design AI so that its goals are unclear, then allow it to fill in the gaps by observing human behaviour. By learning its values from humans, the AI's goals will be our goals.
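The core move in Russell's proposal can be caricatured in a few lines of code. The sketch below is my own toy construction, not Russell's actual system (the hypotheses, actions and numbers are all invented): the AI holds a probability distribution over candidate human reward functions and updates it by Bayes' rule as it watches a noisily rational human act.

```python
import math

# Three hypothetical reward functions the AI considers plausible.
# Each maps an action to how much the human values it.
hypotheses = {
    "values_safety": {"drive_slow": 1.0, "drive_fast": 0.0},
    "values_speed":  {"drive_slow": 0.0, "drive_fast": 1.0},
    "indifferent":   {"drive_slow": 0.5, "drive_fast": 0.5},
}
belief = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior

def likelihood(action, reward, beta=5.0):
    """Probability a noisily rational human picks `action` under `reward`."""
    scores = {a: math.exp(beta * r) for a, r in reward.items()}
    return scores[action] / sum(scores.values())

def observe(action):
    """Bayesian update of the belief after watching the human act."""
    for h in belief:
        belief[h] *= likelihood(action, hypotheses[h])
    total = sum(belief.values())
    for h in belief:
        belief[h] /= total

# Watching the human repeatedly drive slowly shifts belief toward safety.
for _ in range(3):
    observe("drive_slow")
print(max(belief, key=belief.get))  # -> values_safety
```

The AI never needed its goal spelled out; it inferred one from behaviour, which is precisely the move Chang's critique below takes aim at.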

This is an ingenious hack. But the problem of value alignment isn't an issue of technological design to be solved by computer scientists and engineers. It's a problem of human understanding to be solved by philosophers and axiologists.

The difficulty isn't that we don't know enough about our values (though, of course, we don't). It's that even if we had full knowledge of our values, these values might not be computationally amenable. If our values can't be captured by algorithmic architecture, even approximately, then even an omniscient God couldn't build AI that is faithful to our values. The basic problem of value alignment, then, is what looks to be a fundamental mismatch between human values and the tools currently used to design AI.

Paradigmatic AI treats values as if they were quantities like length or weight: things that can be represented by cardinal units such as inches, grams or dollars. But the pleasure you get from playing with your puppy can't be put on the same scale of cardinal units as the joy you get from holding your newborn. There is no meterstick of human values. Aristotle was among the first to notice that human values are incommensurable. You can't, he argued, measure the true (as opposed to market) value of beds and shoes on the same scale of value. AI supposes otherwise.

AI also assumes that in a decision there are only two possibilities: one option is better than the other, in which case you should choose it, or they're equally good, in which case you can just flip a coin. Hard choices suggest otherwise. When you are agonising between two careers, neither is better than the other, but they aren't equally good either; they are simply different. The values that govern hard choices allow for more possibilities: options might be on a par. Many of our choices between jobs, people to marry, and even government policies are on a par. AI architecture currently makes no room for such hard choices.
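The gap Chang describes can be made concrete with a toy comparison function. The sketch below is an illustrative assumption of mine, not Chang's formal theory: options scored on several incommensurable dimensions can come out better, worse, equal, or on a par, the fourth relation that a single cardinal scale cannot express.

```python
from enum import Enum

class Comparison(Enum):
    BETTER = "better"
    WORSE = "worse"
    EQUAL = "equal"
    ON_PAR = "on a par"   # the possibility cardinal scales leave out

def compare(a, b, tolerance=0.25):
    """Compare two options scored on several incommensurable dimensions.

    If each option wins decisively on a different dimension, no single
    number settles the matter: we report ON_PAR instead of forcing a tie.
    The tolerance threshold is an invented modelling choice.
    """
    a_wins = any(a[k] > b[k] + tolerance for k in a)
    b_wins = any(b[k] > a[k] + tolerance for k in a)
    if a_wins and b_wins:
        return Comparison.ON_PAR
    if a_wins:
        return Comparison.BETTER
    if b_wins:
        return Comparison.WORSE
    return Comparison.EQUAL

banking = {"income": 0.9, "fulfilment": 0.3}
art     = {"income": 0.2, "fulfilment": 0.9}
print(compare(banking, art).value)  # -> on a par
```

A reward-maximising agent, by contrast, must collapse these dimensions into one number before it can act, which is exactly the forced tie-break that hard choices resist.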

Finally, AI presumes that the values in a choice are out there to be found. But sometimes we create values through the very process of making a decision. In choosing between careers, how much does financial security matter as opposed to work satisfaction? You may be willing to forgo fancy meals in order to make full use of your artistic talents, while I want a big house with a big garden and am willing to spend my days in drudgery to get it.

Our value commitments are up to us, and we create them through the process of choice. Since our commitments are internal manifestations of our will, observing our behaviour won't uncover their specificity. AI, as it is currently built, supposes values can be programmed as part of a reward function that the AI is meant to maximise. Human values are more complex than this.

***

So where does that leave us? There are three possible paths forward.

Ideally, we would try to develop AI architecture that respects the incommensurable, parity-tolerant and self-created features of human values. This would require serious collaboration between computer scientists and philosophers. If we succeed, we could safely outsource many of our decisions to machines, knowing that AI will mimic human decision-making at its best. We could prevent AI from taking over the world while still allowing it to transform human life for the better.

If we can't get AI to respect human values, the next best thing is to accept that AI should be of limited use to us. It can still help us crunch numbers and discern patterns in data, operating as an enhanced calculator or smartphone, but it shouldn't be allowed to make any of our decisions. This is because when an AI makes a decision (say, to swerve your car to avoid hitting a fox, at some risk to your life), it's not a decision made on the basis of human values but of alien, AI values. We might reasonably decide that we don't want to live in a world where decisions are made on the basis of values that are not our own. AI would not take over the world, but nor would it fundamentally transform human life as we know it.

The most perilous path, and the one towards which we are heading, is to hope in a vague way that we can strike the right balance between the risks and benefits of AI. If the mismatch between AI architecture and human values is beyond repair, we might ask ourselves: how much risk of annihilation are we willing to tolerate in exchange for the benefits of allowing AI to make decisions for us, while at the same time recognising that those decisions will necessarily be made on the basis of values that are not our own?

That decision, at least, would be one made by us on the basis of our human values. The overwhelming likelihood, however, is that we get the trade-off wrong. We are, after all, only human. If we take this path, AI could take over the world. And it would be cold comfort that it was our human values that allowed it to do so.

Ruth Chang is the Chair and Professor of Jurisprudence at the University of Oxford and a Professorial Fellow at University College, Oxford. She is the author of Hard Choices and the presenter of a TED Talk on decision-making.

This article is part of the Agora series, a collaboration between the New Statesman and Aaron James Wendland, Senior Research Fellow in Philosophy at Massey College, Toronto. He tweets @aj_wendland.

Originally posted here:

How to prevent AI from taking over the world – New Statesman

Why organizations might want to design and train less-than-perfect AI – Fast Company

These days, artificial intelligence systems make our steering wheels vibrate when we drive unsafely, suggest how to invest our money, and recommend workplace hiring decisions. In these situations, the AI has been intentionally designed to alter our behavior in beneficial ways: We slow the car, take the investment advice, and hire people we might not have otherwise considered.

Each of these AI systems also keeps humans in the decision-making loop. That's because, while AIs are much better than humans at some tasks (e.g., seeing 360 degrees around a self-driving car), they are often less adept at handling unusual circumstances (e.g., erratic drivers).

In addition, giving too much authority to AI systems can unintentionally reduce human motivation. Drivers might become lazy about checking their rearview mirrors; investors might be less inclined to research alternatives; and human resource managers might put less effort into finding outstanding candidates. Essentially, relying on an AI system risks the possibility that people will, metaphorically speaking, fall asleep at the wheel.

How should businesses and AI designers think about these tradeoffs? In a recent paper, economics professor Susan Athey of Stanford Graduate School of Business and colleagues at the University of Toronto laid out a theoretical framework for organizations to consider when designing and delegating decision-making authority to AI systems. "This paper responds to the realization that organizations need to change the way they motivate people in environments where parts of their jobs are done by AI," says Athey, who is also an associate director of the Stanford Institute for Human-Centered Artificial Intelligence, or HAI.

Athey's model suggests that an organization's decision of whether to use AI at all, or how thoroughly to design or train an AI system, may depend not only on what's technically available, but also on how the AI affects its human coworkers.

The idea that decision-making authority incentivizes employees to work hard is not new. Previous research has shown that employees who have been given decision-making authority are more motivated to do a better job of gathering the information to make a good decision. Bringing that idea back to the AI-human tradeoff, Athey says, "there may be times when, even if the AI can make a better decision than the human, you might still want to let humans be in charge because that motivates them to pay attention." Indeed, the paper shows that, in some cases, improving the quality of an AI can be bad for a firm if it leads to less effort by humans.
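A back-of-the-envelope model illustrates how that can happen. The sketch below is my own toy parameterisation, not Athey's actual framework: a worker chooses how hard to double-check the AI's output, and checking only pays when the AI errs, so a more accurate AI induces less checking. In this particular parameterisation, raising accuracy from 50% to 70% lowers the firm's payoff.

```python
def human_effort(ai_accuracy, wage=1.0, effort_cost=0.25):
    """Effort the worker chooses: checking only pays off when the AI errs.

    The worker earns `wage` per error caught and effort e costs
    effort_cost * e**2, so the optimal e = wage * (1 - accuracy) / (2 * cost),
    capped at 1. A more reliable AI means less reason to pay attention.
    """
    e = wage * (1 - ai_accuracy) / (2 * effort_cost)
    return min(1.0, e)

def firm_payoff(ai_accuracy, value=10.0, wage=1.0, effort_cost=0.25):
    """The decision is right if the AI is right or the human catches the error."""
    e = human_effort(ai_accuracy, wage, effort_cost)
    p_right = ai_accuracy + (1 - ai_accuracy) * e
    return value * p_right - wage * (1 - ai_accuracy) * e

for acc in (0.50, 0.70, 0.90):
    # With these invented numbers, accuracy 0.70 yields a LOWER firm
    # payoff than 0.50, because the worker slacks off faster than the
    # AI improves; only at 0.90 does the better AI pay for itself.
    print(acc, round(firm_payoff(acc), 2))
```

The non-monotonic payoff is the whole point: whether an AI upgrade helps depends on how human effort responds, which is why the design question cannot be purely technical.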

Athey's theoretical framework aims to provide a logical structure to organize thinking about implementing AI within organizations. The paper classifies AI into four types, two with the AI in charge (replacement AI and unreliable AI), and two with humans in charge (augmentation AI and antagonistic AI). Athey hopes that by gaining an understanding of these classifications and their tradeoffs, organizations will be better able to design their AIs to obtain optimal outcomes.

Replacement AI is in some ways the easiest to understand: If an AI system works perfectly every time, it can replace the human. But there are downsides. In addition to taking a person's job, replacement AI has to be extremely well-trained, which may involve a prohibitively costly investment in training data. When AI is imperfect or unreliable, humans play a key role in catching and correcting AI errors, partially compensating for AI imperfections with greater effort. This scenario is most likely to produce optimal outcomes when the AI hits the sweet spot where it makes bad decisions often enough to keep human coworkers on their toes.

With augmentation AI, employees retain decision-making power while a high-quality AI augments their effort without decimating their motivation. Examples of augmentative AI might include systems that, in an unbiased way, review and rank loan applications or job applications but don't make lending or hiring decisions. However, human biases will have a bigger influence on decisions in this scenario.

Antagonistic AI is perhaps the least intuitive classification. It arises in situations where there's an imperfect yet valuable AI, human effort is essential but poorly incentivized, and the human retains decision rights when the human and AI conflict. In such cases, Athey's model proposes, the best AI design might be one that produces results that conflict with the preferences of the human agents, thereby antagonistically motivating them to put in effort so they can influence decisions. "People are going to be, at the margin, more motivated if they are not that happy with the outcome when they don't pay attention," Athey says.

To illuminate the value of her model, Athey describes the possible design issues, as well as the tradeoffs for worker effort, when companies use AI to address bias in hiring. The scenario runs like this: If hiring managers, consciously or not, prefer to hire people who look like them, an AI trained with hiring data from such managers will likely learn to mimic that bias (and keep those managers happy).

If the organization wants to reduce bias, it may have to make an effort to expand the AI training data or even run experiments (for example, adding candidates from historically black colleges and universities who might not have been considered before) to gather the data needed to train an unbiased AI system. Then, if biased managers are still in charge of decision-making, the new, unbiased AI could antagonistically motivate them to read all of the applications so they can still make a case for hiring the person who looks like them.

But since this doesn't help the owner achieve the goal of eliminating bias in hiring, another option is to design the organization so that the AI can overrule the manager, which will have another unintended consequence: an unmotivated manager.

"These are the tradeoffs that we're trying to illuminate," Athey says. "AI in principle can solve some of these biases, but if you want it to work well, you have to be careful about how you train the AI and how you maintain motivation for the human."

As AI is adopted in more and more contexts, it will change the way organizations function. "Firms and other organizations will need to think differently about organizational design, worker incentives, how well the decisions by workers and AI are aligned with the goals of the firm, and whether an investment in training data to improve AI quality will have desirable consequences," Athey says. "Theoretical models can help organizations think through the interactions among all of these choices."

This piece was originally published by the Stanford University Graduate School of Business.


How AI will teach us to have more empathy – VentureBeat

"John, did you remember it's your anniversary?"

This message did not appear in my inbox, and Alexa didn't say it aloud the other day. I do have reminders on Facebook, of course. Yet there isn't an AI powering my life decisions yet. Some day, AI will become more proactive, assistive, and much smarter. In the long run, it will teach us to have more empathy: the great irony of the upcoming machine learning age.

You can picture how this might work. In 2033, you walk into a meeting with an AI that connects to your synapses and scans the room, a la Google Glass without the hardware. Because science has advanced so much, the AI knows how you are feeling. You're tense. The AI uses facial recognition to determine who is there and your history with each person. The guy in accounting is a jerk, and you hardly know the marketing team.

You sit down at the table and glance at a HUD that shows you the bios for a couple of the marketing people. You see a note about the guy in accounting: he sent an email out about his sick Labrador the week before. "How is your dog doing?" you ask. Based on their bios, you realize the marketing folks are young bucks just starting their careers. You relax a little.

I like the idea of an AI becoming more aware of our lives: of the people around us and our circumstances. It's more than remembering an anniversary. We can use an AI to augment any activity: sales and marketing, product briefings, graphic design. An AI can help us understand more about the people on our team, including coworkers and clients. It could help us in our personal lives with family members and friends. It could help in formal situations.

Yes, it sounds a bit like an episode of Black Mirror. When the AI makes a mistake and tells you someone had a family member who died but gives you the wrong name, you will look foolish. And that will happen. I also see a major advantage in having an AI work a bit like a GPS. Today, there's a lot less stress involved in driving in an unfamiliar place. (There's also the problem of people not actually knowing how to read a map and relying too much on a GPS.) An AI could help us see another person's point of view: their background and experiences, their opinions. An AI could give us more empathy because it can provide more contextual information.

This also sounds like the movie Her, where there is a talking voice. I see the AI as knowing more about our lives and our surroundings, then interacting with the devices we use. The AI knows about our car and our driving habits, and knows when we normally wake up. It will let people know when we're late to a meeting, and send us information that is helpful for social situations. We'll use an AI through a text interface, in a car, and on our computers.

This AI won't provide a constant stream of information, but the right amount: the amount it knows we need to reduce stress or understand people on a deeper level. "John likes coffee; you should offer to buy him some" is one example. "Jane's daughter had a soccer game last night; ask how it went." This kind of AI will help in ways other than just providing information. It will be more like a subtext to help us communicate better and augment our daily activities.

Someday, maybe two decades from now, we'll remember when an AI was just used for parsing information. We'll wonder how we ever used AI without the human element.


EU’s new AI rules will focus on ethics and transparency – VentureBeat

The European Union is set to release new regulations for artificial intelligence that are expected to focus on transparency and oversight as the region seeks to differentiate its approach from those of the United States and China.

On Wednesday, EU technology chief Margrethe Vestager will unveil a wide-ranging plan designed to bolster the region's competitiveness. While transformative technologies such as AI have been labeled critical to economic survival, Europe is perceived as slipping behind the U.S., where development is being led by tech giants with deep pockets, and China, where the central government is leading the push.

Europe has in recent years sought to emphasize fairness and ethics when it comes to tech policy. Now it's taking that approach a step further by introducing rules about transparency around data-gathering for technologies like AI and facial recognition. These systems would require human oversight and audits, according to a widely leaked draft of the new rules.

In a press briefing in advance of Wednesday's announcement, Vestager noted that companies outside the EU that want to deploy their tech in Europe might need to take steps like retraining facial recognition features using European data sets. The rules will cover such use cases as autonomous vehicles and biometric IDs.

But the proposal features carrots as well as sticks. The EU will propose spending almost $22 billion annually to build new data ecosystems that can serve as the basis for AI development. The plan assumes Europe has a wealth of government and industrial data, and it wants to provide regulatory and financial incentives to pool that data, which would then be available to AI developers who agree to abide by EU regulations.

In an interview with Reuters over the weekend, Thierry Breton, the European commissioner for Internal Market and Services, said the EU wants to amass data gathered in such sectors as manufacturing, transportation, energy, and health care that can be leveraged to develop AI for the public good and to accelerate Europe's own startups.

"Europe is the world's top industrial continent," Breton told Reuters. "The United States [has] lost much of [its] industrial know-how in the last phase of globalisation. They have to gradually rebuild it. China has added-value handicaps it is correcting."

Of course, these rules are spooking Silicon Valley companies. Regulations such as GDPR, even if they officially target Europe, tend to have global implications.

To that end, Facebook CEO Mark Zuckerberg visited Brussels today to meet with Vestager and discuss the proposed regulations. In a weekend opinion piece published by the Financial Times, however, Zuckerberg again called for greater regulation of AI and other technologies as a way to help build public trust.

"We need more oversight and accountability," Zuckerberg wrote. "People need to feel that global technology platforms answer to someone, so regulation should hold companies accountable when they make mistakes."

Following the introduction of the proposals on Wednesday, the public will have 12 weeks to comment. The European Commission will then officially propose legislation sometime later this year.


Microsoft’s AI beats Ms. Pac-Man – TechCrunch

As with so many things in the world, the key to cracking Ms. Pac-Man is teamwork and a bit of positive reinforcement. That, and access to funding from Microsoft and 150-plus artificial intelligence agents, as Maluuba can now attest.

Last month, the Canadian deep learning company (a subsidiary of Microsoft as of January) became the first team of AI programmers to beat the 36-year-old classic.

It was a fairly anticlimactic defeat: the score hit 999,990 before the odometer flipped back over to zero. But it was an impressive victory nonetheless, marking the first time anyone, human or machine, has achieved the feat. It's been a white whale for the AI community for a while now.

Google's DeepMind was able to beat nearly 50 Atari games back in 2015, but the complexity of Ms. Pac-Man, with its many boards and moving parts, has made the classic title an especially difficult target. Maluuba describes its approach as "divide and conquer": taking on the Atari 2600 title by breaking it up into various smaller tasks and assigning each to individual AI agents.

"When we decomposed the game, there were over 150 agents working on different problems," Maluuba program manager Rahul Mehrotra told TechCrunch. For example, the Maluuba team created an agent for each fruit and pellet. For ghosts, the team created four agents; for edible ghosts, four more. All of these agents work in parallel, feeding their rewards to a high-level agent, which could then decide on the best move to make at any given point.

Mehrotra likens the process to running a company. Larger goals are achieved by breaking employees up into individual teams. Each has their own specific goals, but all are working toward the same aggregate achievement.

"This idea of breaking things down into smaller problems is the basis of how humans solve problems," explains CTO Kaheer Suleman. "A company doing product development is a good example. The goal of the whole organization is to develop a product, but individually, there are groups that have their own reward and goal for the process."

The system also uses reinforcement learning, where each action is associated with either a positive or negative response. The agents then learn through trial and error. In all, the process was trained using more than 800 million frames of the game, according to a paper published this week that highlights the findings.
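The divide-and-conquer scheme can be sketched in miniature. In the toy below, the agents and their scores are invented for illustration (the real system learned its roughly 150 agents from those 800 million frames of play): each narrow agent scores the four moves from its own reward, and a top-level agent sums the scores and acts.

```python
ACTIONS = ["up", "down", "left", "right"]

def pellet_agent(state):
    """Values moving toward a single pellet (hypothetical scores)."""
    return {"up": 0.0, "down": 0.2, "left": 0.9, "right": 0.1}

def ghost_agent(state):
    """Penalises moving toward a nearby ghost (hypothetical scores)."""
    return {"up": 0.1, "down": -0.8, "left": -0.1, "right": 0.0}

def aggregate(agents, state):
    """Top-level agent: sum each sub-agent's action values, pick the best."""
    totals = {a: 0.0 for a in ACTIONS}
    for agent in agents:
        for action, value in agent(state).items():
            totals[action] += value
    return max(totals, key=totals.get)

state = None  # placeholder for the real game state
print(aggregate([pellet_agent, ghost_agent], state))  # -> left
```

Each sub-agent only needs to learn its own narrow objective, which is what makes the decomposition tractable compared with training one monolithic agent on the full game.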

Mehrotra suggests the possibility of using a similar system in retail, with an AI helping human sales reps determine which customers to assist first in order to maximize revenue. Actually translating all of this into a useful real-world experience will prove another challenge in and of itself.


Nvidia and Baidu team on AI across cloud, self-driving, academia and the home – TechCrunch

Baidu and Nvidia announced a far-reaching agreement to work together on artificial intelligence today, spanning applications in cloud computing, autonomous driving, education and research, and domestic uses via consumer devices. It may be the most comprehensive partnership yet for Nvidia in its burgeoning artificial intelligence business, and it's likely to provide a big boost for Nvidia's GPU business for years to come.

The partnership includes an agreement to use Nvidia's Volta GPUs in Baidu Cloud, as well as adoption of Drive PX for Baidu's efforts to bring self-driving cars to market in partnership with multiple Chinese carmakers (you can read more about Baidu's Apollo program for autonomous cars and its ambitions, details of which were announced this morning). Further, Baidu and Nvidia will work on optimizations for Baidu's PaddlePaddle open source deep learning framework on Nvidia Volta, and will make it broadly accessible to researchers and academic institutions.

On the consumer front, Baidu's DuerOS will also come to the Nvidia Shield TV, the Android TV-based set-top streaming box that got a hardware upgrade earlier this year. DuerOS is a virtual assistant similar to Siri or Google Assistant, and was previously announced for smart home speakers and devices. Shield TV is set to get Google Assistant support via a forthcoming update, and Nvidia is also set to eventually launch expandable smart home mics to make it accessible throughout a home, a feature which could conceivably work with DuerOS, too.

This is a big win for Nvidia, and potentially the emergence of one of the most important partnerships in modern AI computing. These two have worked together before, but this represents a broadening of their cooperation that makes them partners in virtually all potential areas of AI's future growth.


How AI is transforming customer service – TNW

There will always be a need for a real human presence in customer service, but with the rise of AI comes the glaring reality that many things can be accomplished by an AI-powered customer service virtual assistant. As our technology and understanding of machine learning grow, so do the possibilities for services that could benefit from a knowledgeable chatbot. What does this mean for the consumer, and how will this affect the job market in the years to come?

How many times have you been placed on hold, on the phone or through a live chat option, when all you wanted to do was ask a simple question about your account? Now, how many times has that wait taken longer than the simple question you had? While chatbots may never be able to completely replace the human customer service agent, they are most certainly already helping answer simple questions and pointing users in the right direction when needed.


As virtual assistants become more knowledgeable and easier to implement, more businesses will begin to use them to assist with more advanced questions a customer or interested party may have, meaning (hopefully) quicker answers for the consumer. But just how much of customer service will be taken over by virtual assistants? According to one report from Gartner, it is believed that by the year 2020, 85% of customer relationships will be handled through AI-powered services.

That's a pretty staggering number, but I talked with Diego Ventura of NoHold, a company that provides virtual agents for enterprise-level businesses, and he believes those numbers need to be looked at a bit more closely.

"The statement could end up being true, but with two important provisos: for one, we must consider all aspects of AI, not just virtual assistants, and two, we apply the statement to specific sectors and verticals."

"AI is a vast field that includes multiple disciplines like predictive analytics, suggestion engines, etc. In this sense you have to just think about companies like Amazon to see how most customer interactions are already handled automatically through some form of AI. Having said this, there are certain sectors of the industry that will always require, at least for the foreseeable future, human intervention. Think of medical, for example, or any company that provides very high-end B2B products or services."

Basically, what Diego is saying is that many aspects of customer service are already being handled by AI without our even realizing it, so we can't read the 85% figure above as meaning that 85% of customer service jobs will be replaced by AI. But even if we're not talking about 85% of the jobs involved in customer service, surely some jobs will be completely eliminated by the use of chatbots, so where does that leave us?

It's unfair to look at virtual assistants as the enemy that is taking our precious jobs. Throughout history, technology has made certain jobs obsolete as smarter, more efficient methods are implemented. Look at our manufacturing sector and it will not take long to see that many of the jobs our grandparents and great-grandparents had have been completely eliminated through advancements in machinery and other technologies; the rise of AI is simply another example of us growing as humans.


While it may take some jobs away, it also opens up the possibility for completely new jobs that have not existed before, chatbot technicians and specialists being but two examples. Couple that with the fact that many of these virtual assistants actually work with customer service reps to make their jobs easier, and we start to see that virtual assistant implementation is not as scary as it might seem. Ventura seems to agree:

"I see Virtual Assistants, VAs, for one as a way to primarily improve the customer experience and, two, augmenting the capabilities of existing employees rather than simply taking their jobs. VAs help users find information more easily. Most of the VA users are people who were going to the Web to self-serve anyway; we are just making it easier for them to find what they are looking for and, yes, prevent escalations to the call center."

"VAs are also used at the call center to help agents be more successful in answering questions, therefore augmenting their capabilities. Having said all this, there are jobs that will be replaced by automation, but I think it is just part of progress and hopefully people will see it as an opportunity to find more rewarding opportunities."

I think back to my time at a startup that was located in an old Masonic temple. We were on the sixth floor, and every morning the lobby clerk, James, would put down the crumpled paper he was reading, hobble out from behind his small desk in the middle of the lobby, and take us up to our floor on one of those old elevators that required someone to manually push and pull a lever to get their guests to a certain floor. James was a professional at it; he reminded me of an airplane pilot the way he twisted certain knobs and manipulated the lever to get us to our destination, missing our floor only once in the entire two years I was there.

While James might have been an expert at his craft, technology has all but eliminated that position. When was the last time you had someone manually cart you to a floor in a hotel? When was the last time you thought about it? Were you mad at technology for taking away someone's job?

As humans, we advance; that's what we do. And the rise of AI in the customer service field is just another step in our advancement and should be looked at as such. There might be some growing pains during the process, but we shouldn't let that stop us from growing and extending our knowledge. When we look at the benefits these chatbots can provide to the consumer and the business, it becomes clear that we are moving in the right direction.


Is China in the driver’s seat when it comes to AI? – VentureBeat

In the battle of technological innovation between East and West, artificial intelligence (AI) is on the front line. And China's influence is growing.

AI is seen as a key to unlocking big data and the Internet of Things. It allows us to make better decisions faster and will soon enable smarter cities, self-driving cars, personalized medicines, and other new commercial applications that could potentially help solve various global problems.

The field of AI is going through a period of rapid progress, with improvements in processor design and advances in machine learning, deep learning, and natural language processing. China has invested massively in AI research since 2013, and these efforts are yielding incredible results. China's AI pioneers are already making great strides in core AI fields.

Here are just a few examples: The three Chinese tech giants Baidu, Didi, and Tencent have each set up their own AI research labs. Baidu, in particular, is taking steps to cement itself among the world's leading lights in deep learning. At Baidu's AI lab in Silicon Valley, 200 developers are pioneering driverless car technology, visual dictionaries, and facial- and speech-recognition software to rival the offerings of American competitors.

Similarly, Tencent is sponsoring scholarships in some of China's leading science and technology universities, giving students access to WeChat's enormous databases while at the same time ensuring the company has access to the best research and talent coming out of these institutions.

Even at a government level, spending on research is growing annually by double digits. China is said to be preparing a multi-billion-dollar initiative to further domestic AI advances with moonshot projects, startup funding, and academic research. From a $2 billion AI expenditure pledge in the little-known city of Xiangtan to matching AI subsidies worth up to $1 million in Suzhou and Shenzhen, billions are being spent to incentivise the development of AI.

These developments have not gone unnoticed in the U.S., the market responsible for much of the early AI research. In the final months of the Obama administration, the U.S. government published two separate reports noting that the U.S. is no longer the undisputed world leader in AI innovation and expressing concern about China's emergence as a major player in the field.

The reports recommended increased expenditure on machine learning research and enhanced collaboration between the U.S. government and tech industry leaders to unlock the potential of AI. But despite these efforts, 91 percent of the 1,268 tech founders, CEOs, investors, and developers surveyed at the international Collision tech conference in New Orleans in May 2017 believed that the U.S. government is fatally under-prepared for the impact of AI on the U.S. ecosystem.

Indeed, the Trump administration's proposed 2018 budget includes a 10 percent cut to the National Science Foundation's funding for AI development programs, despite the previous administration's commitment to increase spending.

In contrast, China has shown increasing interest in the American AI startup world. Research firm CB Insights found that Chinese participation in funding rounds for American startups came close to $10 billion in 2016, while recent figures indicate that Chinese companies have invested in 51 U.S. AI companies, to the tune of $700 million.

While outside investment might seem a vote of confidence, it is becoming clear that belief in U.S. dominance of the tech world is flagging.

Of the investors we surveyed ahead of RISE 2017 in Hong Kong this month, 28 percent cited China as the main threat to the U.S. tech industry. It's a significant figure, indicating that China's influence is continuing to grow. But more surprising still was the 50 percent of all respondents who believed the U.S. would lose its dominant position in the tech world to China within just five years.

In the medium term, it's prudent to be cautious about American innovation, at the very least. Historically, what has set the U.S. apart has been its capacity to course-correct, and I've no doubt the U.S. will find a new course. But as it stands, China is in the driver's seat.

Paddy Cosgrave is the founder of Web Summit, RISE, Collision, Surge, Moneyconf and f.ounders.

Original post:

Is China in the driver's seat when it comes to AI? - VentureBeat

Google admits its diabetic blindness AI fell short in real-life tests – Engadget

The nurses in Thailand often had to scan dozens of patients as quickly as they could in poor lighting conditions. As a result, the AI rejected over a fifth of the images, and the patients were then told to come back. That's a lot to ask of people who may not be able to take another day off work or don't have an easy way to get back to the clinic.

In addition, the research team struggled with a poor internet connection and outages. Under ideal conditions, the algorithm can come up with a result in seconds to minutes. But in Thailand, it took the team 60 to 90 seconds to upload each image, slowing down the process and limiting the number of patients that could be screened in a day.
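To put those upload times in perspective, here is a rough, back-of-the-envelope sketch of how per-image delays cap daily screening throughput. All numbers other than the 90-second upload time are assumed for illustration, not taken from the study:

```python
# Illustrative throughput estimate: how per-image upload time limits
# the number of patients a clinic can screen in a day.
# Assumed: an 8-hour clinic day, 2 retinal images per patient,
# and a fixed grading time per image once it is uploaded.

def patients_per_day(upload_seconds, grade_seconds,
                     clinic_hours=8, images_per_patient=2):
    # Total seconds spent per patient: each image must be uploaded and graded.
    per_patient = images_per_patient * (upload_seconds + grade_seconds)
    return int(clinic_hours * 3600 // per_patient)

fast = patients_per_day(upload_seconds=5, grade_seconds=10)   # good connection
slow = patients_per_day(upload_seconds=90, grade_seconds=10)  # conditions in the study
print(fast, slow)  # -> 960 144
```

Under these assumed numbers, the slow connection alone cuts theoretical capacity by more than six times, before accounting for rejected images or outages.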

Google admits in the study's announcement that it has a lot of work to do. It still has to study and incorporate real-life evaluations before the AI can be widely deployed. The company added in its paper:

Since this research, we have begun to hold participatory design workshops with nurses, potential camera operators, and retinal specialists (the doctor who would receive referred patients from the system) at future deployment sites. Clinicians are designing new workflows that involve the system and are proactively identifying potential barriers to implementation.

See the original post here:

Google admits its diabetic blindness AI fell short in real-life tests - Engadget

Snapchat quietly revealed how it can put AI on your phone – Quartz

Snapchat, the only social media platform left where millennials can escape their parents, has been notoriously secretive about how it packed advanced augmented reality features into its mobile app.

In a research paper published June 13 on the open publishing platform Arxiv, the company seems to detail one of its tricks for compressing crucial image recognition AI while still maintaining acceptable performance. This image recognition software, if indeed used by Snap, could be responsible for tasks like recognizing users' faces and other objects in the app's World Lenses.

Snap's method hinges on two techniques: simplifying the way that its convolutional neural networks (a flavor of machine learning common in image recognition) recognize shapes, and proposing a slightly different configuration of the network to offset that simplification.

With these tweaks, Snap claims to fit its algorithm into just 5.2 MB (about the size of a standard song in MP3) with accuracy that just edges out Google's latest research attempt to scale down its mobile AI. With both networks taking that same 5.2 MB of space, Snap scored 65.8% accuracy while Google scored 64.7% on a standard image recognition task, according to the paper. (For AI nerds, this is top-1 accuracy, or when the network is only given one shot at guessing.)
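One common way to simplify convolutions for mobile deployment (not necessarily the exact technique in Snap's paper) is replacing a standard convolution with a depthwise-separable one. A quick parameter count shows why this shrinks a network so dramatically:

```python
# Parameter counts for a standard 3x3 convolution layer versus a
# depthwise-separable one -- a common compression trick for mobile
# image-recognition networks (illustrative; not Snap's exact method).

def standard_conv_params(c_in, c_out, k=3):
    # Every output channel has its own k x k filter spanning all input channels.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    # One k x k filter per input channel (depthwise), then a 1x1
    # "pointwise" convolution to mix channels.
    return c_in * k * k + c_in * c_out

c_in, c_out = 128, 128
std = standard_conv_params(c_in, c_out)       # 147456 parameters
sep = depthwise_separable_params(c_in, c_out) # 17536 parameters
print(round(std / sep, 1))  # -> 8.4, i.e. the separable layer is ~8x smaller
```

Applied across a whole network, savings of this magnitude are what make single-digit-megabyte models like the one described above feasible on a phone.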

Snap isn't the first to attempt to downsize AI for mobile, but publication of the research reveals a few key points:

We've reached out to Snap for more information, and will update if we hear back.

Snapchat has raised its AI profile in recent months by hiring a new director of engineering, Hussein Mehanna, according to a CNBC report. Mehanna had previously worked as a director of engineering in Facebook's Applied Machine Learning division.

Facebook released code for Caffe2Go, an entire framework for running AI on mobile devices, in late 2016, and Google released a mobile version of the hugely popular TensorFlow last month at its I/O developer conference. Snap's work was built using Caffe, the open-source library developed by the University of California, Berkeley.

Read more here:

Snapchat quietly revealed how it can put AI on your phone - Quartz

Two Giants of AI Team Up to Head Off the Robot Apocalypse – WIRED

See more here:

Two Giants of AI Team Up to Head Off the Robot Apocalypse - WIRED

Top 5 Big Data and AI Sports Companies of 2019 – Analytics Insight

We've heard a great deal about big data in the past year, along with organizations' quests to harness it. Perhaps the greatest big data mine is the field of sports: in-game statistics pile up into stacks of information; fans clamor for in-depth, real-time analysis; and a player's physical movements offer another look into the mechanics of our bodies. What's more, with some organizations essentially gathering information for the sake of collection, companies and sports franchises have figured out how to repackage it artfully to create new experiences for both athletes and their fans.

Data lets teams and companies track performance, make predictions and be decisive on the field. Off the field, analysts, commentators and fans use data constantly, whether to give in-depth explanations, discuss predictions or power fantasy league decisions. Big data is demonstrating that sports are more than just physical games. Now, they're also a numbers game. Football, baseball, basketball, soccer and even fantasy sports all depend on big data to maximize player efficiency and predict future performances.

Let's look at the big data organizations that make in-depth sports analysis and real-time game information possible.

Synergy Sports Technology is one of a growing number of sports analytics companies that fall into the subcategory market research firm ReportsnReports calls sports training platform technology. Pegged at a modest $49 million in 2014, ReportsnReports predicts a Hail Mary pass of sorts, saying the market will reach $864 million by 2021. What Synergy does is make big data analytics products with plenty of highlight reels to go with them. The data is valuable to teams for scouting new players and creating game plans. Tom Brady and Bill Belichick were trailblazers of the idea, with a startup in stealth mode in 2007 called SpyGate. The NBA is a key partner, not so surprising given that Synergy CEO Garrick Barr was formerly a coach with the Phoenix Suns.

Krossover is a sports analytics company that provides technology products and solutions for coaches and athletes. After uploading game film to its platform, teams get insights on team and player performance. Krossover labels and pulls data from game film and creates customized reports for an assortment of sports, including football, lacrosse, volleyball and basketball. Besides saving coaches hours of cutting game film, the platform helps sports teams at every level, from high school to the pros, efficiently analyze their rivals.

The ball-tracking technology of this British subsidiary of Sony uses multiple high-frame-rate cameras placed at strategic positions around a tennis court, for instance, to determine precisely where a ball landed relative to the out-of-bounds line within fractions of a second. The technology has not only transformed instant replay in cricket, soccer, and tennis; it can also provide in-depth biomechanical analysis of individual player strokes. Using this pinpoint data, coaches can tailor strokes and racquets to an individual player's needs.

ChyronHego provides real-time data visualization and broadcast graphics for live TV, news and sports coverage. With an assortment of products and services, the company offers Player Tracking solutions that use optical, GPS and radio frequency methods to gather data. The company's optical tracking system, TRACAB, uses cameras to follow player and ball positions in more than 300 arenas and captures live data from 4,500 games every year. Deployed in all Major League Baseball parks and arenas, TRACAB can track at a rate of 25 data points per second, feeding detailed breakdowns, graphic visualizations and other analysis to coaches, analysts and commentators. The company's technology helps power MLB's famous, award-winning Statcast.

Another Fast Company favorite, the Los Angeles startup FocusMotion is, from what we found, modestly funded at $170,000. What the company actually proposes to do is far from modest. It says it can apply artificial intelligence, through machine learning, to any wearable device on any operating system. Its real market is developers, who download FocusMotion's software development kit to build their own applications. Pricing is modest too: FocusMotion charges nothing for the first 10,000 users, and after that it collects a shiny quarter for each new user of an application built with its SDK, which even includes a pose-analyzer module for yoga.

Original post:

Top 5 Big Data and AI Sports Companies of 2019 - Analytics Insight

Amazon Prime Wardrobe Could Be The Next Step In AI Becoming A Better Liar – Forbes


Today Amazon launched another new service to directly threaten retail store changing rooms. Amazon Prime Wardrobe is currently in beta and is a simple concept for Prime members: you order clothes, and if you don't like them you can send them back within ...

Go here to see the original:

Amazon Prime Wardrobe Could Be The Next Step In AI Becoming A Better Liar - Forbes

What’s wrong with this picture? Teaching AI to spot adversarial attacks – GCN.com

Even mature computer-vision algorithms that can recognize variations in an object or image can be tricked into making a bad decision or recommendation. This vulnerability to image manipulation makes visual artificial intelligence an attractive target for malicious actors interested in disrupting applications that rely on computer vision, such as autonomous vehicles, medical diagnostics and surveillance systems.

Now, researchers at the University of California, Riverside, are attempting to harden computer-vision algorithms against attacks by teaching them what objects usually coexist near each other, so that if a small detail in the scene or context is altered or absent, the system will still make the right decision.

When people see a horse or a boat, for example, they expect to also see a barn or a lake. If the horse is standing in a hospital or the boat is floating in clouds, a human knows something is wrong.

"If there is something out of place, it will trigger a defense mechanism," Amit Roy-Chowdhury, a professor of electrical and computer engineering leading the team studying the vulnerability of computer vision systems to adversarial attacks, told UC Riverside News. "We can do this for perturbations of even just one part of an image, like a sticker pasted on a stop sign."

The stop sign example refers to a 2017 study that demonstrated that stickers placed on a stop sign could cause a deep neural network (DNN)-based system to misclassify it as a speed limit sign 100% of the time. An autonomous driving system relying on such a model would interpret a stop sign with a sticker on it as a speed limit sign and drive right through the stop sign. These adversarial perturbation attacks can also be achieved by adding digital noise to an image, causing the neural network to misclassify it.
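As a toy illustration of the digital-noise variant (not the actual attack from the study, which targets deep networks), even a tiny structured perturbation can flip a classifier's decision when the noise is aimed along the model's own weights:

```python
# Toy adversarial perturbation against a hand-rolled linear classifier.
# Illustrative only: real attacks (e.g. FGSM) use the gradient of a deep
# network's loss, but the principle -- small noise in the worst direction
# flips the decision -- is the same.

def classify(w, x, b=0.0):
    # Returns 1 ("stop sign") if the score is positive, else 0 ("speed limit").
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def perturb(w, x, eps):
    # Nudge each "pixel" by eps in the direction that lowers the
    # correct class's score the fastest.
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.4, -0.2, 0.3]           # toy "model" weights (assumed values)
x = [1.0, 0.5, 1.0]            # toy "image", correctly classified as 1
adv = perturb(w, x, eps=0.9)   # small, structured noise
print(classify(w, x), classify(w, adv))  # -> 1 0: the decision flips
```

The perturbed input is still close to the original, which is exactly what makes such attacks hard to spot by inspecting the image alone.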

However, a DNN augmented with a system trained on context consistency rules can check for violations.

In the traffic sign example, the scene around the stop sign (the crosswalk lines, street name signs and other characteristics of a road intersection) can be used as context for the algorithm to understand the relationship among the elements in the scene and help it deduce if some element has been misclassified.

The researchers propose to "use context inconsistency to detect adversarial perturbation attacks" and to build a DNN-based adversarial detection system that "automatically extracts context for each scene, and checks whether the object fits within the scene and in association with other entities in the scene," they said in their paper.
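The idea can be sketched in miniature. In this much-simplified stand-in for the DNN-based detector, the co-occurrence table and labels are invented for illustration; the real system learns context relationships from data:

```python
# Simplified context-consistency check: flag a detection whose expected
# context objects are missing from the rest of the scene.
# (Hand-written co-occurrence table; the actual system learns this.)

LIKELY_CONTEXT = {
    "stop sign": {"crosswalk", "street sign", "traffic light"},
    "speed limit sign": {"highway lane", "mile marker"},
}

def is_consistent(label, scene_objects, min_matches=1):
    # A detection is consistent if enough of its expected context
    # objects also appear in the scene.
    expected = LIKELY_CONTEXT.get(label, set())
    return len(expected & set(scene_objects)) >= min_matches

scene = ["crosswalk", "street sign", "pedestrian"]
print(is_consistent("stop sign", scene))         # -> True
print(is_consistent("speed limit sign", scene))  # -> False: possible attack
```

A sticker attack that fools the sign classifier leaves the crosswalk and street signs untouched, so the mismatch between the predicted label and its surroundings is what gives the attack away.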

The research was funded by a $1 million grant from the Defense Advanced Research Projects Agency's Machine Vision Disruption program, which aims to understand the vulnerability of computer vision systems to adversarial attacks. The results could have broad applications in autonomous vehicles, surveillance and national defense.

About the Author

Susan Miller is executive editor at GCN.

Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG's Computerworld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia's Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.

Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.

Connect with Susan at [emailprotected] or @sjaymiller.

See more here:

What's wrong with this picture? Teaching AI to spot adversarial attacks - GCN.com