How is Artificial Intelligence (AI) Changing the Future of Architecture? – AiThority

Artificial Intelligence (AI) has always been a topic of debate: is it good enough for us? Will going deeper and deeper into this high-technology world give us a better future or not? According to recent research, almost everyone has a different requirement for automation, and much of the work once done by humans is now handled by the latest high-intelligence computers. You are probably familiar with how Artificial Intelligence is changing industries such as medicine, automobiles, and manufacturing. But what about architecture?

The main question is whether these high-tech machines will actually replace the creator. These computers are still not good enough at some ideas, and for those you have to rely on human intelligence. However, they can be used to save a lot of time by handling time-consuming tasks, and that time can be spent creating other designs.

Artificial Intelligence is a high-technology mechanical system that can perform many tasks but still needs some human effort, such as visual interpretation or design decisions. AI works and gives the best results possible by analyzing tons of data, and that is how it can excel in architecture.


While creating new designs, architects usually go through past designs and the data produced throughout the making of a building. Instead of investing a lot of time and energy to create something new, it is claimed that a computer will be able to analyze that data in a short period and give recommendations accordingly. With this, an architect will be able to test and research simultaneously, sometimes even without pen and paper. It seems likely that this will lead organizations and clients to turn to computers for master plans and construction.

However, the value of architects and the human effort of analyzing a problem and finding the right solution will always remain unchallenged.


Parametric architecture is a hidden weapon that allows an architect to change specific parameters to produce various output designs and to create structures that could not have been imagined earlier. It is like an architect's programming language.

It allows an architect to take a building and reframe it to fit other requirements. A process like this lets Artificial Intelligence reduce the architect's workload so that the architect can freely think about different ideas and create something new.

Constructing a building is not a one-day task; it needs a lot of pre-planning. Sometimes, however, this pre-planning is not enough, and more effort is needed to bring an architect's vision to life. Artificial Intelligence can make an architect's work significantly easier by analyzing all the data and creating models, saving the architect a great deal of time and energy.

All in all, AI can be called an estimation tool for various aspects of constructing a building. When it comes to the construction phase itself, AI can help to the point where human effort becomes negligible.

The countless hours of research at the start of any new project are where AI steps in and makes things easier for the architect, analyzing the aggregate data in milliseconds and recommending models so that the architect can think about the conceptual design without even using pen and paper.

For example, when building a home for a family, if you have complete information about the family's requirements, you can simply pull all the zoning data using AI and generate design variations in a short period of time.

This era of modernization demands that everything be smartly designed. Just like smart cities, today's high-technology society demands smart homes. Architects now no longer have to worry only about how to use AI to design homes; they should also worry about making the user's experience worth paying for.

Change is something that should never change. The way your city looks today will be very different in the years to come. The most challenging task for an architect is city planning, which requires a great deal of precision. The primary task is to analyze all the possible aspects and understand how a city will flow and how its population will grow in the coming years.

All these factors point to one thing: future architects will put less effort into the business of drawing and more into satisfying all the requirements of the user with the help of Artificial Intelligence.



For Telangana, 2020 will be year of artificial intelligence – BusinessLine

With a view to promoting enterprises working on artificial intelligence solutions and taking leadership in this emerging technology space, the Telangana government has decided to observe 2020 as the Year of AI.

Telangana IT Minister KT Rama Rao will formally make the announcement on January 2, declaring 2020 the Year of AI, and release a calendar of events for the next 12 months.

The event will see the signing of memoranda of agreement between the government and AI start-ups.

The Information Technology Ministry is in the process of preparing a document with a strategy framework to offer incentives exclusive to AI initiatives.

"We have come up with such documents for Blockchain and drones. With new technologies such as AI and Big Data Analytics expected to generate 8 lakh jobs in the country in the next two years, we will launch a dedicated programme for AI in 2020," Jayesh Ranjan, Principal Secretary, IT and Industries, Government of Telangana, has said.


Chanukah and the Battle of Artificial Intelligence – The Ultimate Victory of the Human Being – Chabad.org

Chanukah is generally presented as a commemoration of a landmark victory for religious freedom and human liberty in ancient times. Big mistake. Chanukah's greatest triumph is still to come: the victory of the human soul over artificial intelligence.

Jewish holidays are far more than memories of things that happened in the distant past; they are live events taking place right now, in the ever-present. As we recite on Chanukah's parallel celebration, Purim: "These days will be remembered and done in every generation." The Arizal explains: when they are remembered, they reenact themselves.

And indeed, the battle of the Maccabees is an ongoing battle, one embedded deep within the fabric of our society, one that requires constant vigilance lest it sweep away the foundations of human liberty. It is the struggle between the limitations of the mind and the infinite expanse that lies beyond the mind's restrictive boxes, between perception and truth, between the apparent and the transcendental, between reason and revelation, between the mundane and the divine.

Today, as AI development rapidly accelerates, we may be participants in a yet deeper formalization of society: the struggle between man and machine.

Let me explain what I mean by the formalization of society. Formalization is something the manager within us embraces, and something the incendiary, creative spark within that manager defies. It's why many bright kids don't do well in school, why our most brilliant, original minds are often passed over for promotions while the survivors who follow the book climb high, why ingenuity is lost in big corporations, and why so many of us are debilitated by migraines. It's also a force that bars anything transcendental or divine from public dialogue.

Formalization is the strangulation of life by reduction to standard formulas. Scientists reduce all change to calculus, sociologists reduce human behavior to statistics, AI technologists reduce intelligence to algorithms. That's all very useful, but it is no longer reality. Reality is not reducible, because the only true model of reality is reality itself. And what else is reality but the divine, mysterious and wondrous space in which humans live?

Formalization denies that truth. To reduce is useful, to formalize is to kill.

Formalization happens in a mechanized society because automation demands that we state explicitly the rules by which we work and then set them in silicon. It reduces thought to executable algorithms, behaviors to procedures, ideas to formulas. That's fantastic, because it potentially liberates us warm, living human beings from repetitive tasks that can be performed by cold, lifeless mechanisms, so we may spend more time on those activities that no algorithm or formula could perform.

Potentially. The default, however, without deliberate intervention, is the edifice complex.

The edifice complex is what takes place when we create a device, institution or any other formal structure, an edifice, to more efficiently execute some mandate. That edifice then develops a mandate of its own: the mandate to preserve itself by the most expedient means. And then, just as in the complex it sounds like, The Edifice Inc., with its new mandate, turns around and suffocates to death the original mandate for which it was created.

Think of public education. Think of many of our religious institutions and much of our government policy. But also think of the general direction in which industrialization and mechanization have led us since the Industrial Revolution took off 200 years ago.

It's an ironic formula. Ever since Adam named the animals and harnessed fire, humans have built tools and machines to empower themselves, to increase their dominion over their environment. And, yes, in many ways we have managed to increase the quality of our lives. But in many other ways, we have enslaved ourselves to our own servants: to the formalities of those machines, factories, assembly lines, cost projections, policies, and so on. We have coerced ourselves into ignoring the natural rhythms of human life, the natural bonds and covenants of human community, and the spectrum of variation across human character along with our natural tolerance of that wide deviance, all to conform to the tight formalities our own machinery demands in the name of efficacy.

In his personal notes in the summer of 1944, having barely escaped from occupied France, the Rebbe, Rabbi Menachem M. Schneerson of righteous memory, described a world torn by a war between two ideologies: between those for whom the individual was nothing more than a cog in the machinery of the state, and those who understood that there can be no benefit to the state in trampling the rights of any individual. The second ideology, that held by the western Allies, is, the Rebbe noted, a Torah one: "If the enemy says, give us one of you, or we will kill you all!" declared the sages of the Talmud, "not one soul shall be deliberately surrendered to its death."

Basically, the life of the individual is equal to the whole. Go make an algorithm from that. The math doesn't work. Try to generalize it. You can't. It will generate what logicians call a deductive explosion. Yet it summarizes a truth essential to the sustainability of human life on this planet, as that world war demonstrated with nightmarish poignance.

That war continued into the Cold War. It presses on today with the rising economic dominance of the Communist Party of China.

In the world of consumer technology, total dominance of The Big Machine was averted when a small group of individuals pressed forward against the tide by advancing the human-centered digital technology we now take for granted. But yet another round is coming, and it rides on the seductive belief that AI can do its best job by adding yet another layer of formalization to all societys tasks.

Don't believe that for a minute. The telos of technology is to enhance human life, not to restrict it; to provide human beings with tools and devices, not to render them as such.

Technology's ultimate purpose will come in a time of which Maimonides writes, when "the occupation of the entire world will be only to know the divine." AI can certainly assist us in attaining that era and living it, as long as we remain its masters and do not surrender our dignity as human beings. And that is the next great battle of humanity.

To win this battle, we need once again only a small army, but an army armed with more than vision. They must be people with faith. Faith in the divine spark within the human being. For that is what underpins the security of the modern world.

Pundits will tell you that our modern world is secular. Don't believe them. They will tell you that religion is not taught in American public schools. It's a lie. Western society is sustained on the basis of a foundational, religious belief: that all human beings are equal. That's a statement with no empirical or rational support, because it is neither. It is a statement of faith. Subliminally, it means: the value of a single human life cannot be measured.

In other words, every human life is divine.

No, we dont say those words; there is no class in school discussing our divine image. Yet it is a tacit, unspoken belief. Western society is a church without walls, a religion whose dogmas are never spoken, yet guarded jealously, mostly by those who understand them the least. Pull out that belief from between the bricks and the entire edifice collapses to the ground.

It is also a ubiquitous theme in Jewish practice. As I've written elsewhere, leading a Jewish way of life in the modern era is an outright rebellion against the materialist reductionism of a formalized society.

We liberate ourselves from interaction with our machines once a week, on Shabbat, and rise to an entirely human world of thought, prayer, meditation, learning, songs, and good company. We insist on making every instance of food consumption into a spiritual, even mystical event, by eating kosher and saying blessings before and after. We celebrate and empower the individual through our insistence that every Jew must study and enter the discussion of the hows and whys of Jewish practice. And on Chanukah, we insist that every Jew must create light and increase that light each day; that none of us can rely on any grand institution to do so as our proxy.

Because each of us is an entire world, as our sages state in the Mishnah: "Every person must say, on my account the world was created."

This is what the battle of Chanukah is telling us. The flame of the menorah, that is the human soul, "a candle of G-d." The war-machine of Antiochus upon elephants with heavy armor, that is the rule of formalization and expedience coming to suffocate the flame. The Maccabee rebels are a small group of visionaries, those who believe there is more to heaven and earth than all science and technology can contain, more to the human soul than any algorithm can grind out, more to life than efficacy.

How starkly poignant it is indeed that practicing, religious Jews were by far the most recalcitrant group in the Hellenist world of the Greeks and Romans.

Artificial intelligence can be a powerful tool for good, but only when wielded by those who embrace a reality beyond reason. And it is that transcendence that Torah preserves within us. Perhaps all of Torah and its mitzvahs were given for this, the final battle of humankind.


Who will really dominate artificial intelligence capabilities in the future? – Tech Wire Asia


In the digital age, countries all around the world are racing to excel in artificial intelligence (AI) technology.

The phenomenon is not a surprise considering that AI is undeniably a powerful solution with elaborate enterprise uses across industries, from medical algorithms to autonomous vehicles.

For a while now, the US has been dominating the global race in AI development and capabilities, but according to the Global AI Index, it seems like China will be dominating the field in the near future.

As the first runner-up, China is expected to overtake the US in about 5 to 10 years, based on the country's impressive growth record.

Based on seven key indicators (research, infrastructure, talent, development, operating environment, commercial ventures, and government strategy) measured over the course of 12 months, it looks like China is promoting growth unlike any other country.

Although the US is prominently in the lead by a great margin, China has already materialized efforts to establish a bigger influence through the country's Next Generation Artificial Intelligence Development Plan, which it launched in 2017.

Not only that, it is reported that China alone has promised to spend up to US$22 billion, a mammoth figure compared to global governmental AI spending, estimated at US$35 billion, over the next decade or so.

Nevertheless, China must recognize some areas that it needs to improve in order to successfully lead with AI.

Scoring 58.3 percent on the index, China seems to lag in terms of talent, commercial ventures, research quality, and private funding.

However, the country has still shown significant growth in various other areas, especially in the contribution of AI code. According to GitHub, the world's biggest open-source development platform, Chinese developers have made 13,000 AI code contributions to date.

This is a big jump compared to the initial count of 150 in 2015. The US, however, is still in the lead with a record of 42,000 contributions.

The need to dominate the AI market seems to be the motivation for countries around the world as the technology is a defining asset that can shift the dynamics of the global economy.

Other prominent countries to watch are the UK, Canada, and Germany, ranking 3rd, 4th, and 5th respectively.

Another Asian country making its mark, in 7th place, is Singapore, posting a high score in talent but with room for improvement in its operating environment.

Despite the quick progress, experts hope that all countries looking to excel in AI will do so with ethical considerations and strategic leadership in mind.


AI-based health app: Putting patients first – ETHealthworld.com

Doxtro's AI mission is to deliver personalised healthcare better, faster and more economically for every individual. It has been designed around a doctor's brain to understand and recognize the unique way that humans express their symptoms.

How has Doxtro brought a change in Artificial Intelligence (AI) in the field of medicine?
Our AI feature asks questions of the user so that doctors can understand patients' health concerns better. The feature provides valuable insights to the doctor through inputs gathered from patients before they go for a consultation. The primary insights are based on how patients express their symptoms, the patient's medical history and current symptoms, and machine learning on demography-based health issues; the feature does not prescribe medicines or give medical advice.

How will this app help a patient who is unable to read or write?
The app's user flow is designed so that patients can get connected to a doctor through a voice call, with basic chat ability, simply by typing their health concern in the free text box. Users can continue to chat or choose to connect through a voice call. The languages supported at the moment are Hindi and English. With basic knowledge of these two languages, users can operate the app through voice mode and consult a doctor.

Is there a feedback system in your app?
Yes, we give the highest priority to feedback from users and from doctors as well. Users can rate and write reviews about the doctor in the app itself once the consultation is completed. We also follow a proactive feedback process. Our customer engagement executives are assigned to collect regular user feedback, document it and route it to the respective functional teams internally. This is done because, in general, not all users will come forward to write a review, whether the experience was good or bad. We take this feedback seriously to improve our quality of care.

How frequently can a patient contact the doctor through your app?
There are no restrictions on access to doctors in the app. Users can also add their family members, facilitate consultations with doctors and store their respective health records in the app. Currently, we offer 12 specialisations: general physicians, dermatologists, cardiologists, gynaecologists, paediatricians, sexologists, diabetologists, psychologists, psychiatrists, nutritionists, dentists and gastroenterologists.

Users may have various health issues and varying needs to connect with different specialists at different times. Based on their needs, they can contact any available specialist any number of times. After the consultation, the window stays open for 48 hours for free follow-up questions with the same doctor, so users can clarify any doubts.

How is Doxtro different from other healthcare apps that use AI?
What distinguishes our technology is that it has been designed around a doctor's brain to understand and recognize the unique way that humans express their symptoms. Doxtro AI plays two major roles in the system: a data aspect, which drives the ability to do self-diagnosis, and a Machine Learning (ML) aspect to assist with triage. Doxtro puts patients at the centre of care; AI-assisted conversations help the patient describe symptoms, offer information to ensure the patient understands their condition, and connect them with the right specialist.

Doxtro AI asks smart questions about patients' symptoms while also considering their age, gender, and medical history. The AI in our app is used to help users understand their health issues and to choose the right doctor. All this is accomplished through the ML and natural language processing technologies that we use.

How do doctors benefit from this app?
Our AI engine provides strong insights that help physicians understand the patient's health issues better, saving their valuable time and ensuring doctors focus on doctoring. Doxtro AI puts together a patient's response history to ensure that the doctor has context. Along with this, augmented diagnostics help translate symptoms into potential conditions based on the patient's conversation with the AI, saving doctors time and supporting a better diagnosis of the patient's health condition.

This helps doctors reach a larger number of people in need, especially considering the shortage of qualified doctors in India. Our app enhances their practice with smart tools like AI, an excellent workflow and ease of use.

How long has the app been available and what exactly is your user base?
The Doxtro app has been in the market for more than 18 months, and we have a registered user base of more than 2 lakh as of now.

What kind of patterns have you noticed in patients?
We see a lot of people adapting to online consultation, especially those who need the right qualified and verified doctors. Many more people are turning to proactive wellness rather than illness. Doxtro's main focus is on wellness and on having the right qualified and verified doctors on board, so we see increasing numbers of people using the Doxtro mobile app.

As per our Security and Data Privacy policy, we do not have any access to patients' data. All voice and chat interactions are fully encrypted and the entire application is hosted in the cloud. Hence, we won't be able to arrive at any patterns.


China should step up regulation of artificial intelligence in finance, think tank says – Reuters

QINGDAO, China/BEIJING (Reuters) - China should introduce a regulatory framework for artificial intelligence in the finance industry, and enhance technology used by regulators to strengthen industry-wide supervision, policy advisers at a leading think tank said on Sunday.


"We should not deify artificial intelligence as it could go wrong just like any other technology," said the former chief of China's securities regulator, Xiao Gang, who is now a senior researcher at the China Finance 40 Forum.

"The point is how we make sure it is safe for use and include it with proper supervision," Xiao told a forum in Qingdao on China's east coast.

Technology to regulate intelligent finance - referring to banking, securities and other financial products that employ technology such as facial recognition and big-data analysis to improve sales and investment returns - has largely lagged behind development, a report from the China Finance 40 Forum showed.

Evaluation of emerging technologies and industry-wide contingency plans should be fully considered, while authorities should draft laws and regulations on privacy protection and data security, the report showed.

Lessons should be learned from the boom and bust of the online peer-to-peer (P2P) lending sector where regulations were not introduced quickly enough, said economics professor Huang Yiping at the National School of Development of Peking University.

China's P2P industry was once widely seen as an important source of credit, but has lately been undermined by pyramid-scheme scandals and absent bosses, sparking public anger as well as a broader government crackdown.

"Changes have to be made among policy makers," said Zhang Chenghui, chief of the finance research bureau at the Development Research Institute of the State Council.

"We suggest regulation on intelligent finance be written into the 14th five-year plan of the country's development, and each financial regulator - including the central bank, banking and insurance regulators and the securities watchdog - should appoint its own chief technology officer to enhance supervision of the sector," Zhang said.

Zhang also suggested the government bring together the data platforms of each financial regulatory body to better monitor potential risk and act quickly as problems arise.

Reporting by Cheng Leng in Qingdao, China, and Ryan Woo in Beijing; Editing by Christopher Cushing


Fels backs calls to use artificial intelligence as wage-theft detector – The Age

"The amount of underpayment occurring now is so large that there is an effect on wages generally and on making life difficult for law-abiding employers."

Senator Sheldon said artificial intelligence could be used to detect discrepancies in payment data held by the Australian Taxation Office on employers in industries such as retail, hospitality, agriculture and construction.

"You could do it for wages and superannuation, with an algorithm used as a first flag for human intervention," he said.

"The problems of underpayment are systemic and not readily resolvable just by strong law enforcement - even though that's vital."

Alistair Muir, chief executive of Sydney-based consultancy Vanteum, said it was possible to "train artificial intelligence algorithms across multiple data sets to detect wage theft as described by Senator Sheldon, without ever needing to move, un-encrypt or disclose the data itself".

Melbourne University associate professor of computing Vanessa Teague said a "simple computer program" could be designed to detect evidence of wage underpayment using the rules laid out in the award system, but that any such project should safeguard workers' privacy by requiring informed consent.

Industrial Relations Minister Christian Porter did not rule out introducing data matching as part of his wage theft crackdown and said workplace exploitation "will not be tolerated by this government".

Mr Porter said the government accepted "in principle" the recommendations of the migrant worker taskforce which included taking a "whole of government" approach and giving the Fair Work Ombudsman expanded information gathering powers.

The taskforce report said inter-governmental information sharing was "an important avenue" for identifying wage underpayment and could be used to "support successful prosecutions".

In the latest case of alleged wage underpayment in the hospitality industry, the company behind the Crown casino eatery fronted by celebrity chef Heston Blumenthal, Dinner by Heston, this week applied to be wound up after failing to comply with a statutory notice requiring it to back pay staff for unpaid overtime.

It follows revelations of underpayments totalling hundreds of millions of dollars by employers including restaurateur George Calombaris' Made Establishment, Qantas, Coles, Commonwealth Bank, Bunnings, Super Retail Group and the Australian Broadcasting Corporation.

Professional services firm PwC has estimated that employers are underpaying Australian workers by $1.4 billion a year, affecting 13 per cent of the nation's workforce.

AI Group chief executive Innes Willox said the employer peak body did not "see a need" for increased governmental data collection powers.

Australian Retail Association president Russell Zimmerman said retailers were not inherently opposed to data matching as employers who paid workers correctly had "nothing to fear" but was unsure how effective or accurate the approach would be.

"We don't support wage theft," Mr Zimmerman said.

He blamed the significant underpayments self-reported in recent months on difficulties navigating the "complex" retail award.

Senator Sheldon rejected this argument, saying the system was "only complicated if you don't want to pay".

"You get paid for eight hours, then after that you get overtime and you get weekend penalty rates," he said.

Australian Council of Trade Unions assistant secretary Liam O'Brien said the workplace law system was "failing workers who are suffering from systemic wage theft".

The minister, who is consulting unions and business leaders on the detail of his wage theft bill, including what penalty should apply if employers fail to prevent accidental underpayment, said the draft legislation should be released "early in the new year".

Dana is health and industrial relations reporter for The Sydney Morning Herald and The Age.


Law must be adapted for the Fourth Industrial Revolution | TheHill – The Hill

We are at the edge of a new revolution, characterized by a range of new technologies that are merging the physical and digital worlds and impacting all disciplines, economies and industries. It merges the capabilities of both the human and the machine, encompassing a wide swath of areas such as artificial intelligence, genome editing, biometrics, renewable energy, 3D printing, autonomous vehicles and the Internet of Things. Tech optimists posit that the wave of exponential growth in smart technology, artificial intelligence, machines and the interconnectedness of all aspects of modern life will bring profound changes to society, creating an unprecedented shift in how we behave, interact and think.

However, like the industrial revolutions preceding it, the shifts in power brought about by such human-technological systems also bring issues of inequality in terms of who benefits, as well as challenges to security, privacy and community. If navigated wisely, the Fourth Industrial Revolution can help societies establish communities that reduce poverty, allow for good standards of living, increase sustainable energy sources, and improve social cohesion and inclusion. Good governance, proper regulation, and adaptability of the law need to be the crux of the approach to handling Industry 4.0.

The legal challenges posed by the Fourth Industrial Revolution are both new and greater. Data has now become a valuable business asset that fosters innovation, and lawyers must begin to ask the right questions in order to understand how data assets are created, their monetary value, and how they drive business. There is an expectation that companies will use Big Data to monitor and protect their supply chains and to obtain greater insight into their customers. With Big Data tools becoming more powerful and mainstream, it may be incumbent on companies to foresee potential safety and security issues with new products and new technologies. In this regard, lawyers need to understand a company's data and what may be learned from it in order to address challenges and mitigate legal risks.

In certain respects, data can be compared to the new oil, with the datafication of every aspect of human social, political, and economic activity. Data also defines modern geopolitical realities. The initiative for binding international agreements such as the Trans-Pacific Partnership and the United States-Mexico-Canada Agreement reflects a struggle to restructure the global economy around the protection of digital assets. Countries and private firms that can leverage artificial intelligence to industrialize learning and innovation will have an unprecedented degree of political and economic influence.

However, the existing multilateral institutions do not have the capacity or infrastructure to bring about such leverage. They were not designed to regulate intangible assets. Accordingly, adjustments to the regulatory architecture of multilateral organizations like the International Monetary Fund might be critical in shaping the Fourth Industrial Revolution. With the World Trade Organization currently debating new rules for digital trade, countries such as China, Russia and Brazil have already begun to formulate their own.

Courts will play a critical role in the push for new rules for digital trade. But in many countries, they have been criticized for slow speeds and high costs. If the Fourth Industrial Revolution is to bring positive change to global communities, the law must make the adjustments required to remain effective and to utilize the technological advancements taking place. Lawyers need to envision the impact on the courts of artificial intelligence, blockchains, bio-engineering and autonomous machines. Already, a court in Cleveland in the United States is using an artificial intelligence tool for sentencing. As such, artificial intelligence can be used as a tool to help predict the outcome of cases.

Fourth Industrial Revolution technologies might therefore require Fourth Industrial Revolution laws. The unknown in the evolving environments in the digital age necessitates a new narrative. Such a narrative can no doubt emerge from the law. By establishing boundaries, the law can incentivize new industries to act in ways which are not detrimental to humans.

It is important to keep in mind that technology is about choices, and with the Fourth Industrial Revolution underway, it is necessary for humans to be clear about the choices they are making. Major technology companies have both the money and influence to implement technology at a greater scale than ever experienced. The positive externalities from this can be limitless, but so can the risk that industry is bent toward the profit and influence of companies, and not toward the members of communities.

For example, one of the main concerns linked to the Fourth Industrial Revolution is that of a jobless future, where machines, algorithms and computer programs take over the work done by humans and render humans not only unemployed, but also unemployable. However, people must be proactive in shaping this technology and disruption. In doing so, the fear of losing jobs to technology is significantly reduced.

All of this will require global cooperation and a common view of the role of technology in rearranging economic, social and cultural life. There is also the need to develop leaders with the skillset to manage organizations in the context of these changes. Professionals need to understand and embrace changes as well as realize that the jobs done today might be drastically different in the near future.

Accordingly, education and training systems need to show adaptability in preparing individuals for the skills that are required in the workplace of the future. It is also important that governments and the legal system are not left behind in regulating the new fields, as this would lead to a shift of power towards technology and its owners, with the possibility of creating situations of inequality and fragmented societies. As the onset of the Fourth Industrial Revolution continues, the institutions that affect these innovations must revolutionize as well.

Ali Abusedra is a Doctor of Law and Visiting Scholar in International Law at the University of Hull, United Kingdom.


Artificial Intelligence, Foresight, and the Offense-Defense Balance – War on the Rocks

There is a growing perception that AI will be a transformative technology for international security. The current U.S. National Security Strategy names artificial intelligence as one of a small number of technologies that will be critical to the country's future. Senior defense officials have commented that the United States is at an inflection point in the power of artificial intelligence and even that AI might be the first technology to change the fundamental nature of war.

However, there is still little clarity regarding just how artificial intelligence will transform the security landscape. One of the most important open questions is whether applications of AI, such as drone swarms and software vulnerability discovery tools, will tend to be more useful for conducting offensive or defensive military operations. If AI favors the offense, then a significant body of international relations theory suggests that this could have destabilizing effects. States could find themselves increasingly able to use force and increasingly frightened of having force used against them, making arms-racing and war more likely. If AI favors the defense, on the other hand, then it may act as a stabilizing force.

Anticipating the impact of AI on the so-called offense-defense balance across different military domains could be extremely valuable. It could help us to foresee new threats to stability before they arise and act to mitigate them, for instance by pursuing specific arms agreements or prioritizing the development of applications with potential stabilizing effects.

Unfortunately, the historical record suggests that attempts to forecast changes in the offense-defense balance are often unsuccessful. It can even be difficult to detect the changes that newly adopted technologies have already caused. In the lead-up to the First World War, for instance, most analysts failed to recognize that the introduction of machine guns and barbed wire had tilted the offense-defense balance far toward defense. The years of intractable trench warfare that followed came as a surprise to the states involved.

While there are clearly limits on the ability to anticipate shifts in the offense-defense balance, some forms of technological change have more predictable effects than others. In particular, as we argue in a recent paper, changes that essentially scale up existing capabilities are likely to be much easier to analyze than changes that introduce fundamentally new capabilities. Substantial insight into the impacts of AI can be achieved by focusing on this kind of quantitative change.

Two Kinds of Technological Change

In a classic analysis of arms races, Samuel Huntington draws a distinction between qualitative and quantitative changes in military capabilities. A qualitative change involves the introduction of what might be considered a new form of force. A quantitative change involves the expansion of an existing form of force.

Although this is a somewhat abstract distinction, it is easy to illustrate with concrete examples. The introduction of dreadnoughts in naval surface warfare in the early twentieth century is most naturally understood as a qualitative change in naval technology. In contrast, the subsequent naval arms race, which saw England and Germany competing to manufacture ever larger numbers of dreadnoughts, represented a quantitative change.

Attempts to understand changes in the offense-defense balance tend to focus almost exclusively on the effects of qualitative changes. Unfortunately, the effects of such qualitative changes are likely to be especially difficult to anticipate. One particular reason why foresight about such changes is difficult is that the introduction of a new form of force, from the tank to the torpedo to the phishing attack, will often warrant the introduction of substantially new tactics. Since these tactics emerge at least in part through a process of trial and error, as both attackers and defenders learn from the experience of conflict, there is a limit to how much can ultimately be foreseen.

Although quantitative technological changes are given less attention, they can also in principle have very large effects on the offense-defense balance. Furthermore, these effects may exhibit certain regularities that make them easier to anticipate than the effects of qualitative change. Focusing on quantitative change may then be a promising way forward to gain insight into the potential impact of artificial intelligence.

How Numbers Matter

To understand how quantitative changes can matter, and how they can be predictable, it is useful to consider the case of a ground invasion. If the sizes of two armies double in the lead-up to an invasion, for example, then it is not safe to assume that the effect will simply cancel out and leave the balance of forces the same as it was prior to the doubling. Rather, research on combat dynamics suggests that increasing the total number of soldiers will tend to benefit the attacker when force levels are sufficiently low and benefit the defender when force levels are sufficiently high. The reason is that the initial growth in numbers primarily improves the attacker's ability to send soldiers through poorly protected sections of the defender's border. Eventually, however, the border becomes increasingly saturated with ground forces, eliminating the attacker's ability to exploit poorly defended sections.

Figure 1: A simple model illustrating the importance of force levels. The ability of the attacker (in red) to send forces through poorly defended sections of the border rises and then falls as total force levels increase.

This phenomenon is also likely to arise in many other domains where there are multiple vulnerable points that a defender hopes to protect. For example, in the cyber domain, increasing the number of software vulnerabilities that an attacker and defender can each discover will benefit the attacker at first. The primary effect will initially be to increase the attacker's ability to discover vulnerabilities that the defender has failed to discover and patch. In the long run, however, the defender will eventually discover every vulnerability that can be discovered and leave behind nothing for the attacker to exploit.

In general, growth in numbers will often benefit the attacker when numbers are sufficiently low and benefit the defender when they are sufficiently high. We refer to this regularity as offensive-then-defensive scaling and suggest that it can be helpful for predicting shifts in the offense-defense balance in a wide range of domains.
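To make this regularity concrete, here is a minimal Python sketch of a toy version of the cyber example above. The model and its numbers (fifty total flaws, equal discovery capacity on both sides, uniform random discovery) are illustrative assumptions of ours, not something specified in the underlying research:

import random

def exploitable(total_vulns, capacity, trials=2000):
    """Average number of flaws the attacker finds that the defender has NOT
    found (and so cannot patch), when each side independently discovers
    `capacity` of the `total_vulns` vulnerabilities."""
    exposed = 0
    for _ in range(trials):
        vulns = range(total_vulns)
        attacker = set(random.sample(vulns, min(capacity, total_vulns)))
        defender = set(random.sample(vulns, min(capacity, total_vulns)))
        exposed += len(attacker - defender)
    return exposed / trials

# Offensive-then-defensive scaling: the attacker's exploitable surface grows
# with discovery capacity at first, peaks, then shrinks to zero once the
# defender can find (and patch) every flaw.
for capacity in [5, 10, 25, 40, 50]:
    print(capacity, exploitable(total_vulns=50, capacity=capacity))

In this toy model the expected number of exploitable flaws is roughly capacity x (1 - capacity/50): it rises, peaks at the halfway point, and falls back to zero, mirroring the offensive-then-defensive pattern described above.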

Artificial Intelligence and Quantitative Change

Applications of artificial intelligence will undoubtedly be responsible for an enormous range of qualitative changes to the character of war. It is easy to imagine states such as the United States and China competing to deploy ever more novel systems in a cat-and-mouse game that has little to do with quantity. An emphasis on qualitative advantage over quantitative advantage is a fairly explicit feature of the American military strategy and has been since at least the so-called Second Offset strategy that emerged in the middle of the Cold War.

However, some emerging applications of artificial intelligence do seem to lend themselves most naturally to competition on the basis of rapidly increasing quantity. Armed drone swarms are one example. Paul Scharre has argued that the military utility of these swarms may lie in the fact that they offer an opportunity to substitute quantity for quality. A large swarm of individually expendable drones may be able to overwhelm the defenses of individual weapon platforms, such as aircraft carriers, by attacking from more directions or in more waves than the platform's defenses are capable of managing. If this method of attack is in fact viable, one could see a race to build larger and larger swarms that ultimately results in swarms containing billions of drones. The phenomenon of offensive-then-defensive scaling suggests that growing swarm sizes could initially benefit attackers who can focus their attention increasingly intensely on less well-defended targets and parts of targets before potentially allowing defensive swarms to win out if sufficient growth in numbers occurs.

Automated vulnerability discovery tools stand out as another relevant example; they have the potential to vastly increase the number of software vulnerabilities that both attackers and defenders can discover. The DARPA Cyber Grand Challenge recently showcased machine systems autonomously discovering, patching, and exploiting software vulnerabilities. Recent work on novel techniques such as deep reinforcement fuzzing also suggests significant promise. The computer security expert Bruce Schneier has suggested that continued progress will ultimately make it feasible to discover and patch every single vulnerability in a given piece of software, shifting the cyber offense-defense balance significantly toward defense. Before this point, however, there is reason for concern that these new tools could initially benefit attackers most of all.

Forecasting the Impact of Technology

The impact of AI on the offense-defense balance remains highly uncertain. The greatest impact might come from an as-yet-unforeseen qualitative change. Our contribution here is to point out one particularly precise way in which AI could impact the offense-defense balance, through quantitative increases of capabilities in domains that exhibit offensive-then-defensive scaling. Even if this idea is mistaken, it is our hope that by understanding it, researchers are more likely to see other impacts. In foreseeing and understanding these potential impacts, policymakers could be better prepared to mitigate the most dangerous consequences, through prioritizing the development of applications that favor defense, investigating countermeasures, or constructing stabilizing norms and institutions.

Work to understand and forecast the impacts of technology is hard and should not be expected to produce confident answers. However, the importance of the challenge means that researchers should still try while doing so in a scientific, humble way.

This publication was made possible (in part) by a grant to the Center for a New American Security from Carnegie Corporation of New York. The statements made and views expressed are solely the responsibility of the author(s).

Ben Garfinkel is a DPhil scholar in International Relations, University of Oxford, and research fellow at the Centre for the Governance of AI, Future of Humanity Institute.

Allan Dafoe is associate professor in the International Politics of AI, University of Oxford, and director of the Centre for the Governance of AI, Future of Humanity Institute. For more information, see http://www.governance.ai and http://www.allandafoe.com.



Finland offers crash course in artificial intelligence to EU – The Associated Press

HELSINKI (AP) Finland is offering a techy Christmas gift to all European Union citizens: a free-of-charge online course in artificial intelligence in their own language, officials said Tuesday.

The tech-savvy Nordic nation, led by the 34-year-old Prime Minister Sanna Marin, is marking the end of its rotating presidency of the EU at the end of the year with a highly ambitious goal.

Instead of handing out the usual ties and scarves to EU officials and journalists, the Finnish government has opted to give practical understanding of AI to 1% of EU citizens, or about 5 million people, through a basic online course by the end of 2021.

It is teaming up with the University of Helsinki, Finland's largest and oldest academic institution, and the Finland-based tech consultancy Reaktor.

Teemu Roos, a University of Helsinki associate professor in the department of computer science, described the nearly $2 million project as a civics course in AI to help EU citizens cope with society's ever-increasing digitalization and the possibilities AI offers in the jobs market.

"The course covers elementary AI concepts in a practical way and doesn't go into deeper concepts like coding," he said.

"We have enormous potential in Europe but what we lack is investments into AI," Roos said, adding that the continent faces fierce AI competition from digital giants in China and the United States.

The initiative is paid for by the Finnish ministry for economic affairs and employment, and officials said the course is meant for all EU citizens whatever their age, education or profession.

Since its launch in Finland in 2018, The Elements of AI has been phenomenally successful, the most popular course ever offered by the University of Helsinki, which traces its roots back to 1640. More than 220,000 students from over 110 countries have taken it online so far, Roos said.

A quarter of those enrolled so far are aged 45 and over, and some 40% are women. The share of women is nearly 60% among Finnish participants - a remarkable figure in the male-dominated technology domain.

Consisting of several modules, the online course is meant to be completed in about six weeks full time - or up to six months on a lighter schedule - and is currently available in Finnish, English, Swedish and Estonian.

Together with Reaktor and local EU partners, the university is set to translate it into the remaining 20 of the EU's official languages over the next two years.

Megan Schaible, COO of Reaktor Education, said during the project's presentation in Brussels last week that the company decided to join forces with the Finnish university to prove that AI should not be left in the hands of a few elite coders.

An official University of Helsinki diploma will be provided to those passing and Roos said many EU universities would likely give credits for taking the course, allowing students to include it in their curriculum.

For technology aficionados, the University of Helsinki's computer science department is known as the alma mater of Linus Torvalds, the Finnish software engineer who developed the Linux operating system during his studies there in the early 1990s.

In September, Google set up its free-of-charge Digital Garage training hub in the Finnish capital with the intention of helping job-seekers, entrepreneurs and children to brush up their digital skills including AI.


7 tips to get your resume past the robots reading it – CNBC

There are about 7.3 million open jobs in the U.S., according to the most recent Job Openings and Labor Turnover Survey from the Bureau of Labor Statistics. And for many job seekers vying for these openings, the likelihood they'll submit their application to an artificial intelligence-powered hiring system is growing.

A 2017 Deloitte report found 33% of employers already use some form of AI in the hiring process to save time and reduce human bias. These algorithms scan applications for specific words and phrases around work history, responsibilities, skills and accomplishments to identify candidates who match well with the job description.

These assessments may also aim to predict a candidate's future success by matching their abilities and accomplishments to those held by a company's top performers.

But it remains unclear how effective these programs are.

As Sue Shellenbarger reports for The Wall Street Journal, many vendors of these systems don't tell employers how their algorithms work. And employers aren't required to inform job candidates when their resumes will be reviewed by these systems.

That said, "it's sometimes possible to tell whether an employer is using an AI-driven tool by looking for a vendor's logo on the employer's career site," Shellenbarger writes. "In other cases, hovering your cursor over the 'submit' button will reveal the URL where your application is being sent."

CNBC Make It spoke with career experts about how to make sure your next application makes it past the initial robot test.

AI-powered hiring platforms are designed to identify candidates whose resumes match open job descriptions the most. These machines are nuanced, but their use still means very specific wording, repetition and prioritization of certain phrases matter.

Job seekers can make sure they highlight the right skills to get past initial screens by using tools, such as an online word cloud generator, to understand what the AI system will prioritize most. Candidates can drop in the text of a job description and see which words appear most often, based on how large they appear within the word cloud.
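As a rough illustration of what such a tool does under the hood, here is a minimal Python sketch that counts the most frequent words in a job description; the stop-word list and the sample description are purely illustrative assumptions, not taken from any particular product:

import re
from collections import Counter

def top_keywords(job_description, n=8):
    """Return the most frequent non-filler words in a job description,
    approximating what a word cloud makes visually prominent."""
    stop_words = {"the", "and", "a", "an", "to", "of", "in", "for",
                  "with", "on", "is", "are", "we", "you", "our", "will"}
    words = re.findall(r"[a-z']+", job_description.lower())
    counts = Counter(w for w in words if w not in stop_words)
    return counts.most_common(n)

# Hypothetical job description used only for demonstration.
description = """We are looking for a project manager with strong communication
skills, experience in agile project delivery, stakeholder communication,
and budget management. The project manager will lead cross-functional teams."""

print(top_keywords(description))

Terms that surface repeatedly, such as "project", "manager" and "communication" in this made-up example, are the ones worth mirroring (truthfully) in a resume's wording.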

CareerBuilder also created an AI resume builder to help candidates include skills on an application they may not have identified on their own.

Including transferable skills mentioned in the job description can also increase your resume odds. After all, executives from a recent IBM report say soft skills such as flexibility, time management, teamwork and communication are some of the most important skills in the workforce today.

"Job seekers should be cognizant of how they are positioning their professional background to put their best foot forward," Michelle Armer, chief people officer at talent acquisition company CareerBuilder, tells CNBC Make It. "Since a candidate's skill set will help set them apart from other applicants, putting these front and center on a resume will help make sure you're giving skills the attention they deserve."

It's also worth noting that AI enables employers to source candidates from the entire application system more easily, rather than limiting consideration just to people who applied to a specific role. "As a result," says TopResume career expert Amanda Augustine, "you could be contacted for a role the company believes is a good fit even if you never specifically applied for that opportunity."

When it comes to actually writing your resume, here are seven ways to make sure it looks best for the robots who will be reading it.

Use a text-based application like Microsoft Word rather than a PDF, HTML, Open Office, or Apple Pages document so buzzwords can be accurately scanned by AI programs. Augustine suggests job seekers skip images, graphics and logos, which might not be readable. Test how well bots will comprehend your resume by copying it into a plain text file, then making sure nothing gets out of order and no strange symbols pop up.

Mirror the job description in your work history. Job titles should be listed in reverse-chronological order, Augustine says, because machines favor documents with a clear hierarchy to their information. For each role, prioritize the most relevant information that matches the critical responsibilities and requirements of the job you're applying for. "The bullets that directly match one of the job requirements should be listed first," Augustine adds, "and other notable contributions or accomplishments should be listed lower in a set of bullets."

Include keywords from the job description, such as the role's day-to-day responsibilities, desired previous experience and overall purpose within the organization. Consider having a separate skills section, Augustine says, where you list any certifications, technical skills and soft skills mentioned in the job description.

Quantify performance results, Shellenbarger writes. Highlight ones that involve meeting company goals, driving revenue, leading a certain number of people or projects, being efficient with costs and so on.

Tailor each application to the description of each role you're applying for. These AI systems are generally built to weed out disqualifying resumes that don't match enough of the job description. The more closely you mirror the job description in your application, the better, Augustine says.

Don't place information in the document header or footer, even though resumes traditionally list contact information here. According to Augustine, many application systems can't read the information in this section, so crucial details may be omitted.

Network within the company to build contacts and get your resume to the hiring manager's inbox directly. "While AI helps employers narrow down the number of applicants they will move forward with for interviews," Armer says, "networking is also important."

AI hiring programs show promise at filling roles with greater efficiency, but can also perpetuate bias when they reward candidates with similar backgrounds and experiences as existing employees. Armer stresses hiring algorithms need to be built by teams of diverse individuals across race, ethnicity, gender, experience and other background factors in order to minimize bias.

This is also where getting your resume in front of a human can pay off the most.

"When you have someone on the inside advocating for you, you are often able to bypass the algorithm and have your application delivered directly to the recruiter or hiring manager, rather than getting caught up in the screening process," Augustine says.

Augustine recommends job seekers take stock of their existing network and identify those who may know someone at the companies they're interested in working at. "Look for professional organizations and events that are tied to your industry; 10times.com is a great place to find events around the world for every imaginable field," she adds.

Finally, Armer recommends those starting their job hunt review and polish their social media profiles.



Read the original post:

7 tips to get your resume past the robots reading it - CNBC

The Machines Are Learning, and So Are the Students – The New York Times

Riiid claims students can increase their scores by 20 percent or more with just 20 hours of study. It has already incorporated machine-learning algorithms into its program to prepare students for English-language proficiency tests and has introduced test prep programs for the SAT. It expects to enter the United States in 2020.

Still more transformational applications are being developed that could revolutionize education altogether. Acuitus, a Silicon Valley start-up, has drawn on lessons learned over the past 50 years in education, cognitive psychology, social psychology, computer science, linguistics and artificial intelligence to create a digital tutor that it claims can train experts in months rather than years.

Acuitus's system was originally funded by the Defense Department's Defense Advanced Research Projects Agency for training Navy information technology specialists. John Newkirk, the company's co-founder and chief executive, said Acuitus focused on teaching concepts and understanding.

The company has taught nearly 1,000 students with its course on information technology and is in the prototype stage for a system that will teach algebra. Dr. Newkirk said the underlying A.I. technology was content-agnostic and could be used to teach the full range of STEM subjects.

Dr. Newkirk likens A.I.-powered education today to the Wright brothers' early exhibition flights: proof that it can be done, but far from what it will be a decade or two from now.

The world will still need schools, classrooms and teachers to motivate students and to teach social skills, teamwork and soft subjects like art, music and sports. The challenge for A.I.-aided learning, some people say, is not the technology, but bureaucratic barriers that protect the status quo.

"There are gatekeepers at every step," said Dr. Sejnowski, who together with Barbara Oakley, a computer-science engineer at Michigan's Oakland University, created a massive open online course, or MOOC, called Learning How to Learn.

He said that by using machine-learning systems and the internet, new education technology would bypass the gatekeepers and go directly to students in their homes. "Parents are figuring out that they can get much better educational lessons for their kids through the internet than they're getting at school," he said.

Craig S. Smith is a former correspondent for The Times and hosts the podcast Eye on A.I.

Go here to read the rest:

The Machines Are Learning, and So Are the Students - The New York Times

How Artificial Intelligence Is Humanizing the Healthcare Industry – HealthITAnalytics.com

December 17, 2019 - Seventy-nine percent of healthcare professionals indicate that artificial intelligence tools have helped mitigate clinician burnout, suggesting that the technology enables providers to deliver more engaging, patient-centered care, according to a survey conducted by MIT Technology Review and GE Healthcare.

As artificial intelligence tools have slowly made their way into the healthcare industry, many have voiced concerns that the technology will remove the human aspect of patient care, leaving individuals in the care of robots and machines.

"Healthcare institutions have been anticipating the impact that artificial intelligence (AI) will have on the performance and efficiency of their operations and their workforces, and the quality of patient care," the report stated.

Contrary to common, yet unproven, fears that machines will replace human workers, AI technologies in health care may actually be re-humanizing healthcare, just as the system itself shifts to value-based care models that may favor the outcome patients receive instead of the number of patients seen.

Through interviews with over 900 healthcare professionals, researchers found that providers are already using AI to improve data analysis, enable better treatment and diagnosis, and reduce administrative burdens, all of which free up clinicians' time to perform other tasks.


"Numerous technologies are in play today to allow healthcare professionals to deliver the best care, increasingly customized to patients, and at lower costs," the report said.

"Our survey has found medical professionals are already using AI tools to improve both patient care and back-end business processes, from increasing the accuracy of oncological diagnosis to increasing the efficiency of managing schedules and workflow."

The survey found that medical staff with pilot AI projects spend one-third less time writing reports, while those with extensive AI programs spend two-thirds less time writing reports. Additionally, 45 percent of participants said that AI has helped increase consultation time, as well as time to perform surgery and other procedures.

For those with the most extensive AI rollouts, 70 percent expect to spend more time performing procedures than doing administrative or other work.

"AI is being used to assume many of a physician's more mundane administrative responsibilities, such as taking notes or updating electronic health records," researchers said. "The more AI is deployed, the less time doctors spend at their computers."


Respondents also indicated that AI is helping them gain an edge in the healthcare market. Eighty percent of business and administrative healthcare professionals said that AI is helping them improve revenue opportunities, while 81 percent said they think AI will make them more competitive providers.

The report also showed that AI-related projects will continue to receive an increasing portion of healthcare spending now and in the future. Seventy-nine percent of respondents said they will be spending more to develop AI applications.

Respondents also indicated that AI has increased the operational efficiency of healthcare organizations. Seventy-eight percent of healthcare professionals said that their AI deployments have already created workflow improvements in areas including schedule management.

Using AI to optimize schedule management and other administrative tasks creates opportunities to leverage AI for more patient-facing applications, allowing clinicians to work with patients more closely.

"AI's core value proposition is in both improving diagnostic abilities and reducing regulatory and data complexities by automating and streamlining workflow. This allows healthcare professionals to harness the wealth of insight the industry is generating without drowning in it," the report said.


AI has also helped healthcare professionals reduce clinical errors. Medical staff who don't use AI cited fighting clinical error as a key challenge two-thirds of the time, more than double the rate of medical staff who have AI deployments.

Additionally, advanced tools are helping users identify and treat clinical issues. Seventy-five percent of respondents agree that AI has enabled better predictions in the treatment of disease.

AI-enabled decision-support algorithms allow medical teams to make more accurate diagnoses, researchers noted.

This means doing something big by doing something really small: noticing minute irregularities in patient information. That could be the difference between acting on a life-threatening issue, or missing it.

While AI has shown a lot of promise in the industry, the technology still comes with challenges. Fifty-seven percent of respondents said that integrating AI applications into existing systems is challenging, and more than half of professionals planning to deploy AI raise concerns about medical professional adoption, support from top management, and technical support.

To overcome these challenges, researchers recommended that clinical staff collaborate to implement and deploy AI tools.

"AI needs to work for healthcare professionals as part of a robust, integrated ecosystem. It needs to be more than deploying technology; in fact, the more humanized the application of AI is, the more it will be adopted and improve results and return on investment. After all, in healthcare, the priority is the patient," researchers concluded.

Excerpt from:

How Artificial Intelligence Is Humanizing the Healthcare Industry - HealthITAnalytics.com

Top Artificial Intelligence Books Released In 2019 That You Must Read – Analytics India Magazine

Artificial Intelligence has had many breakthroughs in 2019. In fact, we can go as far as to say that it has trickled down to every single facet of modern life. With its intervention in our daily lives, it is imperative that everyone understands how it is affecting us, the changes it is bringing about, the threats it poses and the possible solutions.

While there are some people who still think AI is only robots and chatbots, it is important that they know of the advancements in the field. There are many online courses and books on artificial intelligence that give the reader a comprehensive understanding, whether they are a professional or an AI enthusiast.

In this article, we have compiled a list of books on artificial intelligence published in 2019 that one can use to learn more about this fascinating technology:

Written by Dr Eric Topol, an American cardiologist, geneticist and digital medicine researcher, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again is an Amazon #1 bestseller this year.

This book boldly sets out the potential of AI in healthcare and deep medicine; Topol calls AI the next industrial revolution. It contains short examples to highlight AI's importance, along with a proper discussion of how AI is likely to transform the medical industry. Topol believes that AI can not only enhance diagnosis and treatment but also save clinicians time on tasks like taking notes and reading scans, which will eventually let them spend more time with patients. This is a resourceful book for someone interested in AI and its impact on healthcare.

Written by Dr Stuart Russell, Human Compatible: AI and the Problem of Control is possibly one of the most important books on AI this year. The book discusses the threats posed by artificial intelligence and the possible solutions. Russell uses dry humour so that the book never reads like a dull information manual.

The book is aimed at both the public and AI researchers. Russell does not hammer AI; he points out the threats and solutions as someone who feels a sense of responsibility for the changes and the revolution his own field is bringing.

This book, The Creativity Code, is written by Marcus du Sautoy, a professor of mathematics at the University of Oxford and a fellow of the Royal Society.

This book is a fact-packed, funny journey into the world of AI. It questions the present meaning of the word "creativity" and asks how machines might crack the code of human emotions.

The book explores the concept of using AI assistance in art-making, with the mathematics behind ML and AI as the centre point of its discussion.

Janelle Shane's AIweirdness.com is an AI humour blog that takes a different look at the field. In her book, You Look Like a Thing and I Love You, the author uses humorous cartoons and pop-culture illustrations to look inside the algorithms used in machine learning.

The authors of this book, Gary Marcus, a scientist and the founder and CEO of Robust.AI, and Ernest Davis, a professor of computer science at NYU, explain what AI is, what it is not, and what it could achieve if we worked towards it with more resilience and creativity. Many authors tend to hype AI, both its promise and its dangers; the authors here seem to have found the balance in between.

The book, Rebooting AI: Building Artificial Intelligence We Can Trust, highlights the weaknesses of the current technology, where it is going wrong and what we should be doing to find solutions. It isn't a book only researchers can read; it is also written for the general public, with many examples and good use of humour wherever needed.

The first edition in a series of books written by Alex Castrounis answers one of the most critical questions of today's age concerning business and AI: how can I build a successful business by using AI?

AI for People and Business: A Framework for Better Human Experiences and Business Success is written for anyone interested in making use of AI in their organisation.

The author examines the value of AI and gives solutions for developing an AI strategy that benefits both people and businesses.

This book by Andriy Burkov remains true to its name, The Hundred-Page Machine Learning Book, and manages the seemingly impossible task of bundling the essentials of machine learning into a hundred-page book.

This book provides an in-depth introduction to the field of machine learning with the smart choice of topics for both theory and practice.

If you are new to the field of machine learning, then this book gives you a comprehensive introduction to the vocabulary and terminology.


See the article here:

Top Artificial Intelligence Books Released In 2019 That You Must Read - Analytics India Magazine

Zebra Medical Vision Announces Agreement With DePuy Synthes to Deploy Cloud Based Artificial Intelligence Orthopaedic Surgical Planning Tools -…

KIBBUTZ SHEFAYIM, Israel--(BUSINESS WIRE)--Zebra Medical Vision, the deep learning medical imaging analytics company, announces today a global co-development and commercialization agreement with DePuy Synthes* to bring Artificial Intelligence (AI) opportunities to orthopaedics, based on imaging data.

Every year, millions of orthopaedic procedures worldwide use traditional two-dimensional (2D) CT scans or MRI imaging to assist with pre-operative planning. CT scans and MRI imaging can be expensive, and CT scans are associated with more radiation and are uncomfortable for some patients. Zebra-Med's technology uses algorithms to create three-dimensional (3D) models from X-ray images. This technology aims to bring affordable pre-operative surgical planning to surgeons worldwide without the need for traditional MRI or CT-based imaging.

"We are thrilled to start this collaboration and have the opportunity to impact and improve orthopaedic procedures and outcomes in areas including the knee, hip, shoulder, trauma, and spine care," says Eyal Gura, Co-Founder and CEO of Zebra Medical Vision. "We share a common vision surrounding the impact we can have on patients' lives through the use of AI, and we are happy to initiate such a meaningful strategic partnership, leveraging the tools and knowledge we have built around bone health AI in the last five years."

This technology is planned to be introduced as part of DePuy Synthes' VELYS Digital Surgery solutions for pre-operative, operative, and post-operative patient care.

Read more on Zebra-Meds blog: https://zebramedblog.wordpress.com/another-dimension-to-zebras-ai-how-we-impact-the-orthopedic-world

About Zebra Medical Vision: Zebra Medical Vision's imaging analytics platform allows healthcare institutions to identify patients at risk of disease and offer improved, preventative treatment pathways, to improve patient care. The company is funded by Khosla Ventures, Marc Benioff, Intermountain Investment Fund, OurCrowd Qure, Aurum, aMoon, Nvidia, Johnson & Johnson Innovation JJDC, Inc. (JJDC) and Dolby Ventures. Zebra Medical Vision has raised $52 million in funding to date, and was named a Fast Company Top-5 AI and Machine Learning company. Zebra-Med is a global leader in AI FDA-cleared products, and is installed in hospitals globally, from Australia to India, Europe to the U.S., and the LATAM region.

*Agreement is between DePuy Ireland Unlimited Company and Zebra Medical Vision.

More here:

Zebra Medical Vision Announces Agreement With DePuy Synthes to Deploy Cloud Based Artificial Intelligence Orthopaedic Surgical Planning Tools -...

Why video games and board games aren't a good measure of AI intelligence – The Verge

Measuring the intelligence of AI is one of the trickiest but most important questions in the field of computer science. If you can't understand whether the machine you've built is cleverer today than it was yesterday, how do you know you're making progress?

At first glance, this might seem like a non-issue. "Obviously AI is getting smarter" is one reply. Just look at all the money and talent pouring into the field. Look at the milestones, like beating humans at Go, and the applications that were impossible to solve a decade ago that are commonplace today, like image recognition. How is that not progress?

Another reply is that these achievements aren't really a good gauge of intelligence. Beating humans at chess and Go is impressive, yes, but what does it matter if the smartest computer can be out-strategized in general problem-solving by a toddler or a rat?

This is a criticism put forward by AI researcher François Chollet, a software engineer at Google and a well-known figure in the machine learning community. Chollet is the creator of Keras, a widely used program for developing neural networks, the backbone of contemporary AI. He's also written numerous textbooks on machine learning and maintains a popular Twitter feed where he shares his opinions on the field.

In a recent paper titled "On the Measure of Intelligence," Chollet also laid out an argument that the AI world needs to refocus on what intelligence is and isn't. If researchers want to make progress toward general artificial intelligence, says Chollet, they need to look past popular benchmarks like video games and board games, and start thinking about the skills that actually make humans clever, like our ability to generalize and adapt.

In an email interview with The Verge, Chollet explained his thoughts on this subject, talking through why he believes current achievements in AI have been misrepresented, how we might measure intelligence in the future, and why scary stories about superintelligent AI (as told by Elon Musk and others) have an unwarranted hold on the public's imagination.

This interview has been lightly edited for clarity.

In your paper, you describe two different conceptions of intelligence that have shaped the field of AI. One presents intelligence as the ability to excel in a wide range of tasks, while the other prioritizes adaptability and generalization, which is the ability for AI to respond to novel challenges. Which framework is a bigger influence right now, and what are the consequences of that?

In the first 30 years of the history of the field, the most influential view was the former: intelligence as a set of static programs and explicit knowledge bases. Right now, the pendulum has swung very far in the opposite direction: the dominant way of conceptualizing intelligence in the AI community is the "blank slate" or, to use a more relevant metaphor, the freshly initialized deep neural network. Unfortunately, it's a framework that's been going largely unchallenged and even largely unexamined. These questions have a long intellectual history, literally decades, and I don't see much awareness of this history in the field today, perhaps because most people doing deep learning today joined the field after 2016.

It's never a good thing to have such intellectual monopolies, especially as an answer to poorly understood scientific questions. It restricts the set of questions that get asked. It restricts the space of ideas that people pursue. I think researchers are now starting to wake up to that fact.

In your paper, you also make the case that AI needs a better definition of intelligence in order to improve. Right now, you argue, researchers focus on benchmarking performance in static tests like beating video games and board games. Why do you find this measure of intelligence lacking?

The thing is, once you pick a measure, you're going to take whatever shortcut is available to game it. For instance, if you set chess-playing as your measure of intelligence (which we started doing in the 1970s until the 1990s), you're going to end up with a system that plays chess, and that's it. There's no reason to assume it will be good for anything else at all. You end up with tree search and minimax, and that doesn't teach you anything about human intelligence. Today, pursuing skill at video games like Dota or StarCraft as a proxy for general intelligence falls into the exact same intellectual trap.
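For readers unfamiliar with the terms, the toy Python sketch below shows the bare bones of the minimax tree search Chollet refers to; the hand-written game tree and scores are invented, and a real chess engine would add an evaluation function, alpha-beta pruning and far more.

```python
# Bare-bones minimax over a tiny, hand-written game tree. Leaves are position
# scores; inner lists are choice points. Purely illustrative, not a real engine.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):          # leaf: a terminal score
        return node
    child_values = [minimax(child, not maximizing) for child in node]
    return max(child_values) if maximizing else min(child_values)

toy_game_tree = [[3, 5], [2, [9, 1]]]
print(minimax(toy_game_tree, maximizing=True))  # -> 3
```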

This is perhaps not obvious because, in humans, skill and intelligence are closely related. The human mind can use its general intelligence to acquire task-specific skills. A human that is really good at chess can be assumed to be pretty intelligent because, implicitly, we know they started from zero and had to use their general intelligence to learn to play chess. They weren't designed to play chess. So we know they could direct this general intelligence to many other tasks and learn to do these tasks similarly efficiently. That's what generality is about.

But a machine has no such constraints. A machine can absolutely be designed to play chess. So the inference we do for humans, "can play chess, therefore must be intelligent," breaks down. Our anthropomorphic assumptions no longer apply. General intelligence can generate task-specific skills, but there is no path in reverse, from task-specific skill to generality. At all. So in machines, skill is entirely orthogonal to intelligence. You can achieve arbitrary skills at arbitrary tasks as long as you can sample infinite data about the task (or spend an infinite amount of engineering resources). And that will still not get you one inch closer to general intelligence.

The key insight is that there is no task where achieving high skill is a sign of intelligence. Unless the task is actually a meta-task that involves acquiring new skills over a broad [range] of previously unknown problems. And that's exactly what I propose as a benchmark of intelligence.

If these current benchmarks don't help us develop AI with more generalized, flexible intelligence, why are they so popular?

There's no doubt that the effort to beat human champions at specific well-known video games is primarily driven by the press coverage these projects can generate. If the public wasn't interested in these flashy milestones that are so easy to misrepresent as steps toward superhuman general AI, researchers would be doing something else.

I think it's a bit sad because research should be about answering open scientific questions, not generating PR. If I set out to solve Warcraft III at a superhuman level using deep learning, you can be quite sure that I will get there as long as I have access to sufficient engineering talent and computing power (which is on the order of tens of millions of dollars for a task like this). But once I'd done it, what would I have learned about intelligence or generalization? Well, nothing. At best, I'd have developed engineering knowledge about scaling up deep learning. So I don't really see it as scientific research because it doesn't teach us anything we didn't already know. It doesn't answer any open question. If the question was, "Can we play X at a superhuman level?", the answer is definitely, "Yes, as long as you can generate a sufficiently dense sample of training situations and feed them into a sufficiently expressive deep learning model." We've known this for some time. (I actually said as much a while before the Dota 2 and StarCraft II AIs reached champion level.)

What do you think the actual achievements of these projects are? To what extent are their results misunderstood or misrepresented?

One stark misrepresentation I'm seeing is the argument that these high-skill game-playing systems represent real progress toward "AI systems, which can handle the complexity and uncertainty of the real world" [as OpenAI claimed in a press release about its Dota 2-playing bot OpenAI Five]. They do not. If they did, it would be an immensely valuable research area, but that is simply not true. Take OpenAI Five, for instance: it wasn't able to handle the complexity of Dota 2 in the first place because it was trained with 16 characters, and it could not generalize to the full game, which has over 100 characters. It was trained on 45,000 years of gameplay (then again, note how training data requirements grow combinatorially with task complexity), yet the resulting model proved very brittle: non-champion human players were able to find strategies to reliably beat it in a matter of days after the AI was made available for the public to play against.

If you want to one day become able to handle the complexity and uncertainty of the real world, you have to start asking questions like, what is generalization? How do we measure and maximize generalization in learning systems? And that's entirely orthogonal to throwing 10x more data and compute at a big neural network so that it improves its skill by some small percentage.

So what would be a better measure of intelligence for the field to focus on?

In short, we need to stop evaluating skill at tasks that are known beforehand, like chess or Dota or StarCraft, and instead start evaluating skill-acquisition ability. This means only using new tasks that are not known to the system beforehand, measuring the prior knowledge about the task that the system starts with, and measuring the sample-efficiency of the system (which is how much data is needed to learn to do the task). The less information (prior knowledge and experience) you require in order to reach a given level of skill, the more intelligent you are. And today's AI systems are really not very intelligent at all.

In addition, I think our measure of intelligence should make human-likeness more explicit, because there may be different types of intelligence, and human-like intelligence is what we're really talking about, implicitly, when we talk about general intelligence. And that involves trying to understand what prior knowledge humans are born with. Humans learn incredibly efficiently; they only require very little experience to acquire new skills, but they don't do it from scratch. They leverage innate prior knowledge, besides a lifetime of accumulated skills and knowledge.

[My recent paper] proposes a new benchmark dataset, ARC, which looks a lot like an IQ test. ARC is a set of reasoning tasks, where each task is explained via a small sequence of demonstrations, typically three, and you should learn to accomplish the task from these few demonstrations. ARC takes the position that every task your system is evaluated on should be brand-new and should only involve knowledge of a kind that fits within human innate knowledge. For instance, it should not feature language. Currently, ARC is totally solvable by humans, without any verbal explanations or prior training, but it is completely unapproachable by any AI technique we've tried so far. That's a big flashing sign that there's something going on there, that we're in need of new ideas.
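To make the shape of an ARC task concrete: in the public ARC repository each task is stored as JSON with a few "train" demonstration pairs and one or more "test" inputs, each a small grid of integers. The toy task and hand-written solver below are invented for illustration; real ARC tasks are considerably more varied.

```python
# Illustrative stand-in for an ARC task: a few demonstration pairs plus a test
# input, each a small integer grid. The grids and the "solver" are made up.
example_task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[0, 2], [2, 0]], "output": [[2, 0], [0, 2]]},
        {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]},
    ],
    "test": [{"input": [[0, 4], [4, 0]]}],    # expected answer: [[4, 0], [0, 4]]
}

def solve(grid):
    """Hand-written guess at this particular toy task: mirror each row."""
    return [list(reversed(row)) for row in grid]

# Check the guess against the demonstrations, then apply it to the test input.
for pair in example_task["train"]:
    assert solve(pair["input"]) == pair["output"]
print(solve(example_task["test"][0]["input"]))   # -> [[4, 0], [0, 4]]
```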

Do you think the AI world can continue to progress by just throwing more computing power at problems? Some have argued that, historically, this has been the most successful approach to improving performance, while others have suggested that we're soon going to see diminishing returns if we just follow this path.

This is absolutely true if you're working on a specific task. Throwing more training data and compute power at a vertical task will increase performance on that task. But it will gain you about zero incremental understanding of how to achieve generality in artificial intelligence.

If you have a sufficiently large deep learning model, and you train it on a dense sampling of the input-cross-output space for a task, then it will learn to solve the task, whatever that may be: Dota, StarCraft, you name it. It's tremendously valuable. It has almost infinite applications in machine perception problems. The only problem here is that the amount of data you need is a combinatorial function of task complexity, so even slightly complex tasks can become prohibitively expensive.

Take self-driving cars, for instance. Millions upon millions of training situations aren't sufficient for an end-to-end deep learning model to learn to safely drive a car. Which is why, first of all, L5 self-driving isn't quite there yet. And second, the most advanced self-driving systems are primarily symbolic models that use deep learning to interface these manually engineered models with sensor data. If deep learning could generalize, we'd have had L5 self-driving in 2016, and it would have taken the form of a big neural network.

Lastly, given you're talking about constraints for current AI systems, it seems worth asking about the idea of superintelligence: the fear that an extremely powerful AI could cause extreme harm to humanity in the near future. Do you think such fears are legitimate?

No, I don't believe the superintelligence narrative to be well-founded. We have never created an autonomous intelligent system. There is absolutely no sign that we will be able to create one in the foreseeable future. (This isn't where current AI progress is headed.) And we have absolutely no way to speculate what its characteristics may be if we do end up creating one in the far future. To use an analogy, it's a bit like asking in the year 1600: "Ballistics has been progressing pretty fast! So, what if we had a cannon that could wipe out an entire city? How do we make sure it would only kill the bad guys?" It's a rather ill-formed question, and debating it in the absence of any knowledge about the system we're talking about amounts, at best, to a philosophical argument.

One thing about these superintelligence fears is that they mask the fact that AI has the potential to be pretty dangerous today. We don't need superintelligence in order for certain AI applications to represent a danger. I've written about the use of AI to implement algorithmic propaganda systems. Others have written about algorithmic bias, the use of AI in weapons systems, or about AI as a tool of totalitarian control.

There's a story about the siege of Constantinople in 1453. While the city was fighting off the Ottoman army, its scholars and rulers were debating what the sex of angels might be. Well, the more energy and attention we spend discussing the sex of angels or the value alignment of hypothetical superintelligent AIs, the less we have for dealing with the real and pressing issues that AI technology poses today. There's a well-known tech leader that likes to depict superintelligent AI as an existential threat to humanity. Well, while these ideas are grabbing headlines, you're not discussing the ethical questions raised by the deployment of insufficiently accurate self-driving systems on our roads that cause crashes and loss of life.

If one accepts these criticisms, that there is not currently a technical grounding for these fears, why do you think the superintelligence narrative is popular?

Ultimately, I think it's a good story, and people are attracted to good stories. It's not a coincidence that it resembles eschatological religious stories, because religious stories have evolved and been selected over time to powerfully resonate with people and to spread effectively. For the very same reason, you also find this narrative in science fiction movies and novels. The reason why it's used in fiction, the reason why it resembles religious narratives, and the reason why it has been catching on as a way to understand where AI is headed are all the same: it's a good story. And people need stories to make sense of the world. There's far more demand for such stories than demand for understanding the nature of intelligence or understanding what drives technological progress.

More:

Why video games and board games aren't a good measure of AI intelligence - The Verge

New Findings Show Artificial Intelligence Software Improves Breast Cancer Detection and Physician Accuracy – P&T Community

CHICAGO, Dec. 19, 2019 /PRNewswire/ -- A New York City-based large-volume private practice radiology group conducted a quality assurance review that included an 18-month software evaluation in the breast center comprised of nine (9) specialist radiologists using an FDA-cleared artificial intelligence software by Koios Medical, Inc. as a second opinion for analyzing and assessing lesions found during breast ultrasound examinations.

Over the evaluation period, radiologists analyzed over 6,000 diagnostic breast ultrasound exams. Radiologists used Koios DS Breast decision support software (Koios Medical, Inc.) to assist in lesion classification and risk assessment. As part of the normal diagnostic workflow, radiologists would activate Koios DS and review the software findings with clinical details to formulate the best management.

Analysis was then performed comparing the physicians' diagnostic performance to the 18-month period prior to the introduction of the AI-enabled software. Comparing the two periods, physicians recommended biopsy for suspicious lesions at a similar rate (17%) and performed 14% more biopsies, increasing the cancer detection rate (from 8.5 to 11.8 per 1,000 diagnostic exams) while simultaneously experiencing a significant reduction in benign biopsies (i.e., false positives). Noteworthy is the aggregate nature of the findings, as adoption of the software gradually increased over time during the 18-month evaluation period. Trailing 6-month results indicate a benign biopsy reduction exceeding 20% across the group. Positive predictive value, the proportion of lesions flagged as suspicious that turn out to be malignant, improved over 20%.
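For readers unfamiliar with the two statistics quoted, the short sketch below shows how they are computed. The exam count and biopsy rate come from the figures above; the number of cancers found is a placeholder chosen only so the arithmetic lands near the quoted 11.8-per-1,000 detection rate, since the underlying counts are not published.

```python
# How the quoted metrics are computed. The 6,000 exams and 17% biopsy rate are
# from the text above; cancers_found is a placeholder picked to roughly match
# the quoted 11.8-per-1,000 detection rate, as raw counts are not published.
diagnostic_exams = 6000
biopsies_recommended = int(0.17 * diagnostic_exams)   # 1,020 (hypothetical)
cancers_found = 71                                    # hypothetical true positives

# Positive predictive value: share of biopsied (suspicious) lesions that were cancer.
ppv = cancers_found / biopsies_recommended

# Cancer detection rate, expressed per 1,000 diagnostic exams.
detection_rate = cancers_found / diagnostic_exams * 1000

print(f"PPV: {ppv:.1%}")                                      # ~7.0% here
print(f"Detection rate: {detection_rate:.1f} per 1,000 exams")  # ~11.8
```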

"Physicians were skeptical in the beginning that software could help them given their years of training and specialization focusing on breast radiology. With experience using Koios software, however, over time and seeing the preliminary analysis they came to realize that the Koios AI software was gradually impacting patient care in a very positive way.Initially, radiologists completed internal studies that verified Koios software's accuracy, and discovered the larger impact happens gradually over time. In looking at the statistics, physicians were pleasantly surprised to see the benefit was even greater than expected. The software has the potential to make a profound impact on overall quality," says Vice President of Activations Amy Fowler.

Koios DS Breast 2.0 is artificial intelligence software designed around a dataset of over 450,000 breast ultrasound images with known outcomes. It is intended to assist physicians analyzing breast ultrasound images by generating a machine learning-based probability of malignancy. This probability is then checked against and aligned to the lesion's assigned BI-RADS category, the scale physicians use to recommend care pathways.

"We are seeing the promise of machine learning as a physician's assistant coming to fruition. This will undoubtedly improve quality, outcomes, and patient experiencesand ultimately save lives. Koios DS Breast 2.0 is proving this within several physician groups across the US," says company CFO Graham Anderson.

Koios DS Breast 2.0 can be used in conjunction with and integrated directly into most major viewing workstation platforms and is directly available on the LOGIQ E10, GE Healthcare's next-generation digital ultrasound system that integrates artificial intelligence, cloud connectivity, and advanced algorithms. Artificial intelligence software-generated results can be exported directly into a patient's record. Koios Medical continues to experiment with thyroid ultrasound image data and expects to add to its offering in the next year.

"We could not be more encouraged by the results these physicians are seeing. All our prior testing on historical images have consistently demonstrated high levels of system accuracy. Now, and for the first time ever, physicians using AI software as a second opinion with patients in real-time, within their practice, are delivering on the promise to measurably elevate quality of care. Catching more cancers earlier while reducing avoidable procedures and improving patient experiences is fast becoming a reality," says Koios Medical CEO Chad McClennan.

Discussing future plans during the recent Radiological Society of North America (RSNA) annual meeting in Chicago, McClennan shared, "Several major academic medical centers and community hospitals are utilizing our software and conducting studies into the quality impact for publication. We expect those results to mimic these early clinical findings and further validate the experience of our physician customers both in New York City and across the country, and most importantly, the positive patient impact."

About Koios Medical:

Koios Medical develops medical software to assist physicians interpreting ultrasound images and applies deep machine learning methods to the process of reaching an accurate diagnosis. The FDA-cleared Koios DS platform uses advanced AI algorithms to assist in the early detection of disease while reducing recommendations for biopsy of benign tissue. Patented technology saves physicians time, helps improve patient outcomes, and reduces healthcare costs. Koios Medical is presently focused on the breast and thyroid cancer diagnosis assistance market. Women with dense breast tissue (over 40% in the US) often require an alternative to mammography for diagnosis. Ultrasound is a widely available and effective alternative to mammography with no radiation and is standard of care for breast cancer diagnosis. To learn more please contact us at info@koiosmedical.com or (732) 529-5755.

Learn more about Koios at: koiosmedical.com


SOURCE Koios Medical

See the rest here:

New Findings Show Artificial Intelligence Software Improves Breast Cancer Detection and Physician Accuracy - P&T Community

Tommie Experts: Ethically Educating on Artificial Intelligence at St. Thomas – University of St. Thomas Newsroom

Tommie Experts taps into the knowledge of St. Thomas faculty and staff to help us better understand topical events, trends and the world in general.

Last month, School of Engineering Dean Don Weinkauf appointed Manjeet Rege, PhD, as the director for the Center for Applied Artificial Intelligence.

Rege is a faculty member, author, mentor, AI expert, thought leader and a frequent public speaker on big data, machine learning and AI technologies. The Newsroom caught up with him to ask about the center's launch in response to a growing need to educate ethically around AI.

We're partnering with industry in a number of ways. One way is in our data science curriculum. There are electives; some students take a regular course, while others take a data science capstone project. It's optional. For students who opt for that, through partnership with industry, companies in the Twin Cities interested in embarking on an AI journey can bring several business use cases that they want to try AI out with. In an enterprise, you typically have to seek funding and convince a lot of people; in this case, we'll find a student, or a team, who will be working on that industry-sponsored project. It's a win-win for all. The project will be supervised by faculty. The company gets access to emerging AI talent and gets to try out its business use case, and the students end up getting an opportunity to work on a real-world project.

Secondly, a number of companies are looking to hire talent in machine learning and AI. This is a good way for companies to access good talent. We can build relationships by sending students for internships, and even the students who work on these capstone projects become important in terms of hiring.

There are also a number of professional development offerings we'll come out with. We offer a mini master's program in big data and AI. The local companies can come and attend an executive seminar for a week on different aspects of AI. We'll be offering two- or three-day workshops on hands-on AI, for someone within a company who would like to become an AI practitioner. If they are interested in getting in-depth knowledge, they can go through our curriculum.

We also have a speaker series in partnership with SAS.

In May we'll be hosting a data science day, with a keynote speaker and a panel of judges to review projects the data science students are working on (six of which are part of the SAS Global Student Symposium). They'll get to showcase the work they've done. That panel of judges will be from local companies.

Everybody is now becoming aware that AI is ubiquitous, around us and here. The ship has already left the dock, so to speak, in terms of AI being around us. The best way to succeed at the enterprise level is to embrace this and make it a business enabler. It's important for enterprises to transform themselves into an AI-first company. Think about Google. It first defined itself as a search company. Then a mobile company. Now, it's an AI-first company. That is what keeps you ahead, always.

Being aware of the problems that may arise is so important. For us to address AI biases, we have to understand how AI works. Through these multiple offerings we're hoping we can create knowledge about AI. Once we have that we can address the issue of AI bias.

For example, Microsoft did an experiment where it had AI go out on the web, read the literature and learn a lot of analogies. When you went in and asked that AI questions based on, say, "Man is to woman as father is to what?" Mother. Perfect. "Man is to computer programmer as woman is to what?" Homemaker. That's unfortunate. AI is learning the stereotypes that exist in the literature it was trained on.
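Analogy probes like the one Rege describes are usually run against pretrained word embeddings. Here is a hedged sketch using the gensim library; the vector file name and the vocabulary tokens are assumptions, and the output depends entirely on the text the embeddings were trained on, which is exactly where the stereotype comes from.

```python
# Sketch of an analogy query against pretrained word vectors using gensim.
# The file name and tokens below are assumptions; any biased answer reflects
# the corpus the vectors were trained on, not a rule someone wrote.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# "man is to computer programmer as woman is to ...?"
print(vectors.most_similar(positive=["woman", "computer_programmer"],
                           negative=["man"], topn=3))
```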

There have been hiring tools that have gender bias. Facial recognition tools that work better for lighter skin colors than darker skin colors. Bank loan programs with biases for certain demographics. There is a lot of effort in the AI community to minimize these. Humans have bias, but when a computer does it you expect perfection. An AI system learning is like a child learning; when that AI system learned about different things from the web and different relationships between man and woman, because these stereotypes existed already in the data, the computer just learned from it. Ultimately an AI system is for a human; whenever it gives you certain output, we need to be aware and go back and nudge it in the right direction.

Read more:

Tommie Experts: Ethically Educating on Artificial Intelligence at St. Thomas - University of St. Thomas Newsroom

Artificial intelligence predictions for 2020: 16 experts have their say – Verdict

2019 has seen artificial intelligence and machine learning take centre stage for many industries, with companies increasingly looking to harness the benefits of the technology for a wide range of use cases. With its advances, ethical implications and impact on humans likely to dominate conversations in the technology sector for years to come, how will AI continue to develop over the next 12 months?

We've asked experts from a range of organisations within the AI sphere to give their predictions for 2020.

In both the private and public sectors, organisations are recognising the need to develop strategies to mitigate bias in AI. With issues such as amplified prejudices in predictive crime mapping, organisations must build in checks in both AI technology itself and their people processes. One of the most effective ways to do this is to ensure data samples are robust enough to minimise subjectivity and yield trustworthy insights. Data collection cannot be too selective and should be reflective of reality, not historical biases.

In addition, teams responsible for identifying business cases and creating and deploying machine learning models should represent a rich blend of backgrounds, views, and characteristics. Organisations should also test machines for biases, train AI models to identify bias, and consider appointing an HR or ethics specialist to collaborate with data scientists, thereby ensuring cultural values are being reflected in AI projects.

Zachary Jarvinen, Head of Technology Strategy, AI and Analytics, OpenText
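One simple, hedged example of the kind of bias testing described above is demographic parity: comparing the rate of favourable outcomes a model produces across groups. The predictions and group labels below are invented, and a real audit would use several complementary metrics.

```python
# Minimal bias check: compare the model's favourable-outcome rate across groups
# (demographic parity). Predictions and group labels are made-up toy data.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favourable decision
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

totals, favourable = defaultdict(int), defaultdict(int)
for pred, group in zip(predictions, groups):
    totals[group] += 1
    favourable[group] += pred

rates = {g: favourable[g] / totals[g] for g in totals}
print(rates)                                        # {'a': 0.6, 'b': 0.4}
print("parity gap:", abs(rates["a"] - rates["b"]))  # 0.2 -> worth investigating
```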

A big trend for social media this year has been the rise of deepfakes, and we're only likely to see this increase in the year ahead. These are manipulated videos that are made to look real, but are actually inaccurate representations powered by sophisticated AI. This technology has implications for past political Facebook posts. I believe we will start to see threat actors use deepfakes as a tactic for corporate cyberattacks, in a similar way to how phishing attacks operate.

Cyber crooks will see this as a money-making opportunity, as they can cause serious harm to unsuspecting employees. This means it will be vital for organisations to keep validation technology up-to-date. The same tools that people use to create deepfakes will be the ones used to detect them, so we may see an arms race for who can use the technology first.

Jesper Frederiksen, VP and GM EMEA, Okta

When considering high-volume, fast turnaround hiring efforts, it's often impossible to keep every candidate in the loop. Enter highly sophisticated artificial intelligence tools, such as chatbots. More companies are now using AI programs to inform candidates quickly and efficiently on where they stand in the process, help them navigate career sites, schedule interviews and give advice. This is significantly transforming the candidate experience, enhancing engagement and elevating overall satisfaction.

Chatbots are also increasingly becoming a tool for employees who wish to apply for new roles within their organisation. Instead of trying to work up the nerve to ask HR or their boss about new opportunities, employees can interact with a chatbot that can offer details about open jobs, give skills assessments and offer career guidance.

What's more, some companies are offering "day in the life" virtual simulations that allow candidates to see what a role would entail, which can either enhance interest or help candidates self-select out of the process. It also helps employers understand if the candidate would be a good fit, based on their behavior during the simulation. In Korn Ferry's global survey of HR professionals, 78 percent say that in the coming year, it will be vital to provide candidates with these "day in the life" type experiences.

Byrne Mulrooney, Chief Executive Officer, Korn Ferry RPO, Professional Search and Korn Ferry Digital


Despite fears that it will replace human employees, in 2020 AI and machine learning will increasingly be used to aid and augment them. For instance, customer service workers need to be certain they are giving customers the right advice. AI can analyse complex customer queries with high numbers of variables, then present solutions to the employee, speeding up the process and increasing employee confidence.

Lufthansa for one is already using this method, and with a faster, more accurate and ultimately more satisfying customer experience acting as a significant differentiator, more will follow. Over the next three years this trend will keep accelerating, as businesses from banks to manufacturers use AI to support their employees' decisions and outperform the competition.

Felix Gerdes, Director of Digital Innovation Services at Insight UK

In 2020 we're going to see increased public demand for the demystification and democratisation of AI. There is a growing level of interest, and people are quite rightly not happy to sit back and accept that a robot or programme makes the decisions it does "because it does," or that it's simply too complicated. They want to understand how different AI systems work in principle, and they want to have more of a role in determining how AI should engage in their lives so that they don't feel powerless in the face of this new technology.

Companies need to be ready for this shift, and to welcome it. Increasing public understanding of AI, and actively seeking to hear people's hopes and concerns, is the only way forward to ensure that the role of AI is both seen as a force for good for everyone in our society and, as a result, able to realise the opportunity ahead. Historically this is not something the tech industry as a whole has been good at; we need to change.

Teg Dosanjh, Director of Connected Living for Samsung UK and Ireland

As the next decade of the transforming transportation industry unfolds, investment in autonomous vehicle development will continue to grow dramatically, especially in the datacenter and AI infrastructure for training and validation. We'll see a significant ramp in autonomous driving pilot programs as part of this continued investment. Some of these will include removal of the on-board safety driver. Autonomous driving technology will be applied to a wider array of industries, such as trucking and delivery, moving goods instead of people.

Production vehicles will start to incorporate the hardware necessary for self-driving, such as centralized onboard AI compute and advanced sensor suites. These new features will help power Level 2+ AI assisted driving and lay the foundation for higher levels of autonomy. Regulatory agencies will also begin to leverage new technologies to evaluate autonomous driving capability, in particular, hardware-in-the-loop simulation for accurate and scalable validation. The progress in AV development underway now and for the next few years will be instrumental to the coming era of safer, more efficient transportation.

Danny Shapiro, Senior Director of Automotive, NVIDIA

As AI tools become easier to use, AI use cases proliferate, and AI projects are deployed, cross-functional teams are being pulled into AI projects. Data literacy will be required from employees outside traditional data teams; in fact, Gartner expects that 80% of organisations will start to roll out internal data literacy initiatives to upskill their workforce by 2020.

But training is an ongoing endeavor, and to succeed in implementing AI and ML, companies need to take a more holistic approach toward retraining their entire workforces. This may be the most difficult, but most rewarding, process for many organisations to undertake. The opportunity for teams to plug into a broader community on a regular basis to see a wide cross-section of successful AI implementations and solutions is also critical.

Retraining also means rethinking diversity. Reinforcing and expanding on how important diversity is to detecting fairness and bias issues, diversity becomes even more critical for organisations looking to successfully implement truly useful AI models and related technologies. As we expect most AI projects to augment human tasks, incorporating the human element in a broad, inclusive manner becomes a key factor for widespread acceptance and success.

Roger Magoulas, VP of Radar at OReilly

The hottest trend in the industry right now is in Natural Language Processing (NLP). Over the past year, a new method called BERT (Bidirectional Encoder Representations from Transformers) has been developed for designing neural networks that work with text. Now, we suddenly have models that will understand the semantic meaning of what's in text, going beyond the basics. This creates a lot more opportunity for deep learning to be used more widely.

Almost every organisation has a need to read and understand text and the spoken word, whether it is dealing with customer enquiries in the contact centre, assessing social media sentiment in the marketing department or even deciphering legal contracts or invoices. Having a model that can learn from examples and build out its vocabulary to include local colloquialisms and turns of phrase is extremely useful to a much wider range of organisations than image processing alone.

Björn Brinne, Chief AI Officer at Peltarion
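As a hedged sketch of the masked-language-modelling ability described above, the snippet below uses the open-source Hugging Face transformers library (an assumption; no specific toolkit is named here) to let a pretrained BERT model fill in a missing word from its context.

```python
# Hedged sketch: ask a pretrained BERT model to fill in a masked word, using
# the Hugging Face "transformers" library (assumed; not named in the article).
# The model predicts the missing token from the surrounding context.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("The customer called to [MASK] about a late delivery."):
    print(candidate["token_str"], round(candidate["score"], 3))
```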

Voice assistants have established themselves as commonplace in our personal lives. But 2020 will see an increasing number of businesses turning to them to improve and personalise the customer experience.

This is because advances in AI-driven technology and natural language processing are enabling voice interactions to be translated into data. This data can be structured so that conversations can be analysed for insights.

Next year, organisations will likely begin to embrace conversational analytics to improve their chatbots and voice applications. This will ultimately result in better data-driven decisions and improved business performance.

Alberto Pan, Chief Technical Officer, Denodo

Organisations are already drowning in data, but the flood gates are about to open even wider. IDC predicts that the world's data will grow to 175 zettabytes over the next five years. With this explosive growth comes increased complexity, making data harder than ever to manage. For many organisations already struggling, the pressure is on.

Yet the market will adjust. Over the next few years, organisations will exploit machine learning and greater automation to tackle the data deluge.

Machine learning applications are constantly improving when it comes to making predictions and taking actions based on historical trends and patterns. With its number-crunching capabilities, machine learning is the perfect solution for data management. We'll soon see it accurately predicting outages and, with time, it will be able to automate the resolution of capacity challenges. It could do this, for example, by automatically purchasing cloud storage or re-allocating volumes when it detects a workload nearing capacity.
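As a much-simplified sketch of the capacity prediction described here, the snippet below fits a straight-line trend to a week of invented storage usage figures and estimates when the workload will hit its limit; a production system would use richer models and then trigger an action such as provisioning more storage.

```python
# Toy capacity forecast: fit a linear trend to recent storage usage and estimate
# when it will reach the limit. The usage figures and capacity are invented.
daily_usage_tb = [41.0, 42.2, 43.1, 44.5, 45.2, 46.8, 47.5]   # last 7 days
capacity_tb = 60.0

n = len(daily_usage_tb)
mean_x = sum(range(n)) / n
mean_y = sum(daily_usage_tb) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(daily_usage_tb))
         / sum((x - mean_x) ** 2 for x in range(n)))           # TB per day

days_left = (capacity_tb - daily_usage_tb[-1]) / slope
print(f"Growing ~{slope:.2f} TB/day; ~{days_left:.0f} days until capacity")
if days_left < 30:
    print("Action: provision more storage or re-allocate volumes")
```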

At the same time, with recent advances in technology we should also expect to see data becoming more intelligent, self-managing and self-protecting. We'll see a new kind of automation where data is hardwired with a type of digital DNA. This data DNA will not only identify the data but will also program it with instructions and policies.

Adding intelligence to data will allow it to understand where it can reside, who can access it, what actions are compliant and even when to delete itself. These processes can then be carried out independently, with data acting like living cells in a human body, carrying out their hardcoded instructions for the good of the business.
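One way to picture this "data DNA" is a record that carries its own access and retention policy with it. The sketch below is purely illustrative; the field names and rules are hypothetical rather than any existing product's schema.

```python
# Minimal sketch: a record that travels with its own policy ("digital DNA").
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class GovernedRecord:
    payload: dict
    owner: str
    allowed_roles: set = field(default_factory=set)
    created: date = field(default_factory=date.today)
    retention_days: int = 365

    def can_access(self, role: str) -> bool:
        # The policy is evaluated wherever the data goes.
        return role in self.allowed_roles

    def is_expired(self) -> bool:
        # The data "knows" when it should be deleted.
        return date.today() > self.created + timedelta(days=self.retention_days)

record = GovernedRecord(
    payload={"customer_id": 42, "email": "jane@example.com"},
    owner="crm",
    allowed_roles={"support", "billing"},
    retention_days=30,
)
print(record.can_access("marketing"))  # False - the policy travels with the data
print(record.is_expired())             # False until the retention window lapses
```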

However, with IT increasingly able to manage itself, and data management complexities resolved, what is left for the data leaders of the business? They'll be freed from the low-value, repetitive tasks of data management and will have more time for decision-making and innovation. In this respect, AI will become an invaluable tool, flagging issues experts may not have considered and giving them options, unmatched visibility and insight into their operations.

Jasmit Sagoo, Senior Director, Head of Technology UK&I at Veritas Technologies

2020 will be the year research and investment in ethics and bias in AI significantly increases. Today, business insights in enterprises are generated by AI and machine learning algorithms. However, because these algorithms are built from models and databases, bias can creep in from those who train the AI. This results in gender or racial bias, be it in mortgage applications or in forecasting health problems. With increased awareness of bias in data, business leaders will demand to know how AI reaches the recommendations it does, to avoid making biased decisions as a business in the future.
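As a simple illustration of what such scrutiny can look like, the sketch below compares approval rates across two groups and applies the common "four-fifths" rule of thumb. The data and the 0.8 threshold are illustrative only and do not amount to a full fairness audit.

```python
# Minimal sketch: checking model approvals for disparity across groups.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
disparity = rates.min() / rates.max()

print(rates)
print(f"Disparity ratio: {disparity:.2f}")
if disparity < 0.8:
    print("Approval rates differ enough to warrant a closer look at the model and its data")
```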

Ashvin Kamaraju, CTO for Cloud Protection and Licensing activity at Thales

2020 will be the year of health data. Everyone agrees that smarter use of health data is essential to providing better patient care, meaning treatment that is more targeted or more cost-effective. However, navigating the thicket of consents and rules, as well as the ethical considerations, has delayed advances in the use of patient data.

There are now several different directions of travel emerging, all of which present exciting opportunities for patients, for health providers including the NHS, for digital health companies and for pharmaceutical companies.

Marcus Vass, Partner, Osborne Clarke

Artificial intelligence isn't just something debated by techies or sci-fi writers anymore; it's increasingly creeping into our collective cultural consciousness. But there's a lot of emphasis on the negative. While those big-picture questions around ethics cannot and should not be ignored, in the near term we won't be dealing with the super-AI you see in the movies.

I'm excited by the possibilities we'll see AI open up in the next couple of years, and the societal challenges it will inevitably help us to overcome. And it's happening already. One of the main applications for AI right now is driving operational efficiencies. That may not sound very exciting, but it's actually where the technology can have the biggest impact. If we can use AI to synchronise traffic lights to improve traffic flow and reduce the amount of time cars spend idling, that doesn't just make inner-city travel less of a headache for drivers; it can have a tangible impact on emissions. That's just one example. In the next few years, we'll see AI applied in new, creative ways to solve the biggest problems we're facing as a species right now, from climate change to mass urbanisation.

Dr Anya Rumyantseva, Data Scientist at Hitachi Vantara

Businesses are investing more in AI each year, as they look to use the technology to personalize customer experiences, reduce human bias and automate tasks. Yet for most organizations AI hasn't yet reached its full potential, as data is locked up in siloed systems and applications.

In 2020, we'll see organizations unlock their data using APIs, enabling them to uncover greater insights and deliver more business value. If AI is the brain, APIs and integration are the nervous system that help AI really create value in a complex, real-time context.
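By way of illustration, here is a minimal sketch of "unlocking" a siloed dataset behind a small HTTP API so that other systems and AI pipelines can consume it. Flask is used purely as an example framework, and the endpoint and fields are hypothetical; in practice this would sit in front of a real data store.

```python
# Minimal sketch: exposing a siloed dataset through a small HTTP API.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for data currently locked in an internal system.
ORDERS = [
    {"id": 1, "customer": "acme", "value": 1200},
    {"id": 2, "customer": "globex", "value": 450},
]

@app.route("/orders")
def list_orders():
    # Any downstream application or model pipeline can now pull this data.
    return jsonify(ORDERS)

if __name__ == "__main__":
    app.run(port=8000)
```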

Ian Fairclough, VP of Services, MuleSoft

2020 is going to be a tipping point, when algorithmic decision-making AI becomes more mainstream. This brings both opportunities and challenges, particularly around the explainability of AI. We currently have many black-box models where we don't know how they come to their decisions. Bad actors can leverage this and manipulate those decisions.

Using machine identities, they will be able to infiltrate the data streams that feed into AI models and manipulate them. If companies are unable to explain and inspect the decision-making behind their AI, this could go unquestioned, changing the outcomes. This could have wide-reaching impacts on everything from predictive policing to financial forecasting and market decision-making.
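One defensive pattern this points to is verifying the provenance of data before it reaches a model. The sketch below is a simplified illustration using an HMAC shared secret; real deployments would rely on a secrets manager or certificate-based machine identities rather than a hard-coded key.

```python
# Minimal sketch: verify that a data feed came from a trusted sender before
# passing it to a model. Key handling is simplified for illustration only.
import hashlib
import hmac
import json

SECRET = b"rotate-me-regularly"  # illustrative; never hard-code real secrets

def sign(payload: dict) -> str:
    message = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(payload), signature)

reading = {"sensor": "traffic-cam-7", "count": 42}
sig = sign(reading)

# Only feed the model data whose origin can be verified.
if verify(reading, sig):
    print("Feed verified - safe to pass to the model")
else:
    print("Signature mismatch - possible tampering, drop the record")
```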

Kevin Bocek, Vice President, Security Strategy & Threat Intelligence at Venafi

Until now, robotic process automation (RPA) and artificial intelligence (AI) have been perceived as two separate things, with RPA being task-oriented and without intelligence built in. However, as we move into 2020, AI and machine learning (ML) will become an intrinsic part of RPA, infused throughout analytics, process mining and discovery. AI will offer functions like natural language processing (NLP) and language skills, and RPA platforms will need to be ready to accept those AI skill sets. More broadly, there will be greater adoption of RPA across industries to increase productivity and lower operating costs. Today we have over 1.7 million bots in operation with customers around the world, and this number is growing rapidly. Consequently, training in all business functions will need to evolve, so that employees know how to use automation processes and understand how to leverage RPA to focus on the more creative aspects of their job.

RPA is set to see adoption in all industries very quickly, across all job roles, from developers and business analysts to programme and project managers, and across all verticals, including IT, BPO, HR, education, insurance and banking. To facilitate continuous learning, companies must give employees the time and resources needed to upskill as job roles evolve, through methods such as micro-learning and just-in-time training. In the UK, companies report that highly skilled AI professionals are currently hard to find and expensive to hire, driving up the cost of adoption and slowing technological advancement. Organisations that make a conscious decision to use automation in a way that enhances employees' skills and complements their working style will significantly increase the performance benefit they see from augmentation.

James Dening, Vice President for Europe at Automation Anywhere

Read more: Artificial intelligence to create 133 million jobs globally: Report

View post: Artificial intelligence predictions for 2020: 16 experts have their say – Verdict

Beethoven's unfinished tenth symphony to be completed by artificial intelligence – Classic FM

16 December 2019, 16:31 | Updated: 17 December 2019, 14:25

Beethoven's unfinished symphony is set to be completed by artificial intelligence, in the run-up to celebrations around the 250th anniversary of the composer's birth.

A computer is set to complete Beethoven's unfinished tenth symphony, in the most ambitious project of its kind.

Artificial intelligence has recently been used to complete Schubert's 'Unfinished' Symphony No. 8, as well as to attempt to match the playing of the revered 20th-century pianist Glenn Gould.

Beethoven famously wrote nine symphonies (you can read more here about the Curse of the Ninth). But alongside his Symphony No. 9, which contains the Ode to Joy, there is evidence that he began writing a tenth.

Unfortunately, when the German composer died in 1827, he left only drafts and notes of the composition.

Read more: What is the Curse of the Ninth and does it really exist? >

A team of musicologists and programmers has been training the artificial intelligence by feeding it snippets of Beethoven's unfinished Symphony No. 10, as well as sections from other works such as his 'Eroica' Symphony. The AI is then left to improvise the rest.
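The project's actual models have not been described in detail, so the toy sketch below only illustrates the general idea of learning continuations from short snippets. A first-order Markov chain over note names stands in for something far more sophisticated, and the fragments are invented, not Beethoven's sketches.

```python
# Toy illustration: learn which note follows which from short snippets,
# then "improvise" a continuation by sampling those transitions.
import random
from collections import defaultdict

snippets = [
    ["E", "E", "F", "G", "G", "F", "E", "D"],  # invented fragments for illustration
    ["C", "C", "D", "E", "E", "D", "C", "B"],
]

# "Train": count which note tends to follow which.
transitions = defaultdict(list)
for snippet in snippets:
    for a, b in zip(snippet, snippet[1:]):
        transitions[a].append(b)

# "Improvise": continue from a seed note by sampling observed continuations.
note = "E"
generated = [note]
for _ in range(12):
    note = random.choice(transitions.get(note, ["C"]))
    generated.append(note)

print(" ".join(generated))
```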

Matthias Roeder, project leader and director of the Herbert von Karajan Institute, told Frankfurter Allgemeine Sonntagszeitung: "No machine has been able to do this for so long. This is unique."

"The quality of genius cannot be fully replicated, still less if you're dealing with Beethoven's late period," said Christine Siegert, head of the Beethoven Archive in Bonn and one of those managing the project.

"I think the project's goal should be to integrate Beethoven's existing musical fragments into a coherent musical flow," she told the German broadcaster Deutsche Welle. "That's difficult enough, and if this project can manage that, it will be an incredible accomplishment."

Read more: AI to compose classical music live in concert with over 100 musicians >

It remains to be seen, and heard, whether the newly completed composition will sound anything like Beethoven's own work. But Mr Roeder has said the algorithm is making good progress.

Read more: Google's piano gadget means ANYONE can improvise classical music >

"The algorithm is unpredictable; it surprises us every day. It is like a small child exploring the world of Beethoven."

"But it keeps going and, at some point, the system really surprises you. That happened for the first time a few weeks ago. We're pleased that it's making such big strides."

There will also, reliable sources have confirmed, be some human involvement in the project. Although the computer will write the music, a living composer will orchestrate it for performance.

The results of the experiment will be premiered by a full symphony orchestra in a public performance in Bonn, Beethoven's birthplace, in Germany on 28 April 2020.
