
Category Archives: Artificial Intelligence

How AI is being trained to recognize humans beyond facial recognition – Business Insider

Posted: October 31, 2019 at 5:48 am

For private companies and government agencies trying to track people's movements, technology is making the task increasingly easy.

Facial recognition and analysis are becoming increasingly popular surveillance tools: the technology was rolled out in airports across the world this summer to verify flyers' identities, and it is widely used by police departments to track suspected criminals.

Privacy-minded activists and lawmakers are now hitting back at facial recognition. The technology has been banned for law-enforcement purposes across California, and a similar bill is being weighed in Massachusetts. Meanwhile, artists and researchers have begun to develop clothes designed to thwart algorithms that detect human faces.

But emerging technology presents alternate means of identifying and tracking humans beyond facial recognition. These methods, also driven by artificial intelligence, detect the presence of humans using devices ranging from lasers to WiFi networks.

The vast range of biometric data that technology can register makes regulation difficult. Meanwhile, some of the emerging surveillance technology is already being embraced by military powers like the US and China.

Here's a rundown of emerging technology that can detect humans and track their location.

Read the original post:

How AI is being trained to recognize humans beyond facial recognition - Business Insider

Posted in Artificial Intelligence | Comments Off on How AI is being trained to recognize humans beyond facial recognition – Business Insider

Global Artificial Intelligence (AI) Hardware Markets, 2019-2024 – Start-Ups and Software Industry Giants Entering the AI Hardware Industry -…

Posted: at 5:48 am

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence (AI) Hardware: Global Markets" report has been added to ResearchAndMarkets.com's offering.

The scope includes analysis of the AI hardware market based on technology type, computation type, end-use industries and regional markets. For each of these market segments, revenue forecasts for 2018 through 2024 are provided at the global level.

The AI hardware market is segmented into the following categories:

This report covers analyses of global market trends, with data from 2018 to 2024 and projections of CAGR for 2019 through 2024. The estimated values used are based on manufacturers' total revenues. Projected and forecast revenue values are in constant U.S. dollars that have not been adjusted for inflation.

Report Includes:

Key Topics Covered:

Chapter 1 Introduction

Chapter 2 Summary and Highlights

Chapter 3 Market and Technology Background

Chapter 4 AI Hardware Market

Chapter 5 Competitive Landscape

Chapter 6 Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/2h9th9

Originally posted here:

Global Artificial Intelligence (AI) Hardware Markets, 2019-2024 - Start-Ups and Software Industry Giants Entering the AI Hardware Industry -...

Posted in Artificial Intelligence | Comments Off on Global Artificial Intelligence (AI) Hardware Markets, 2019-2024 – Start-Ups and Software Industry Giants Entering the AI Hardware Industry -…

Artificial Intelligence May Reduce Radiation Exposure in Fluoroscopy-Guided Endoscopy – Consultant360

Posted: at 5:48 am

Using a fluoroscopy system enabled with artificial intelligence (AI) during image-guided endoscopy can significantly reduce patients' exposure to radiation and diminish the scatter effect to endoscopy personnel, according to late-breaking research presented at the American College of Gastroenterology (ACG) 2019 Annual Scientific Meeting and Postgraduate Course.

To reach this conclusion, Ji Young Bang, MD, from AdventHealth Orlando in Orlando, Florida, and colleagues conducted a prospective study of 100 consecutive patients who underwent endoscopy with either a conventional fluoroscopy system (n=50) or an AI-enabled fluoroscopy system (n=50).

The study's outcome measures were radiation exposure to patients, compared via dose area product (DAP), and radiation scatter to endoscopy personnel, measured with a dosimeter.

There was no significant difference between the groups in demographics, body mass index, procedure type, or procedural/fluoroscopy time.

Radiation exposure to patients was lower with the AI-enabled fluoroscopy system than with the conventional system (median DAP, 2178 mGy·m² vs 5708 mGy·m², respectively).

The scatter effect to endoscopy personnel was also less with the AI-enabled fluoroscopy system than with the conventional system (total deep-dose equivalent, 0.28 mSv vs 0.69 mSv, respectively), a 59.4% reduction.
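
For reference, the 59.4% figure follows directly from the two dosimeter readings:

\[
\frac{0.69\ \text{mSv} - 0.28\ \text{mSv}}{0.69\ \text{mSv}} \approx 0.594 = 59.4\%
\]

By the same arithmetic, the reduction in median DAP was (5708 − 2178)/5708, roughly 61.8%.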

After adjusting for patient characteristics, procedural/fluoroscopy duration, and type of fluoroscopy system, only the AI-enabled fluoroscopy system and fluoroscopy duration were associated with radiation exposure.

Colleen Murphy

Reference:

Bang JY. Use of artificial intelligence to reduce radiation exposure at fluoroscopy-guided endoscopic procedures (late-breaking abstract) [abstract 73]. Presented at: ACG 2019 Annual Scientific Meeting and Postgraduate Course; October 25-30, 2019; San Antonio, TX.

See the original post here:

Artificial Intelligence May Reduce Radiation Exposure in Fluoroscopy-Guided Endoscopy - Consultant360

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence May Reduce Radiation Exposure in Fluoroscopy-Guided Endoscopy – Consultant360

This chapter on the future of Artificial Intelligence was written by Artificial Intelligence – Scroll.in

Posted: at 5:48 am

In this version of the future, people will still have a role working alongside smart systems: either the technology will not be good enough to take over completely, or the decisions will have human consequences that are too important to hand over entirely to a machine. There's just one problem: when humans and semi-intelligent systems try to work together, things do not always turn out well. As in almost all of today's autonomous cars, a back-up driver was there to step in if the software failed. The so-called Level 3 system is designed to drive itself in most situations but hand control back to a human when confronted by situations it cannot handle.

"If you're only needed for a minute a day, it won't work," says Stefan Heck, chief executive of Nauto, a US start-up whose technology is used to prevent professional drivers from becoming distracted. Without careful design, the intelligent systems making their way into the world could provoke a backlash against the technology. Preventing that will require more realistic expectations of the new autonomous systems, as well as careful design to make sure they mesh with the human world. Does the AI make us feel more involved, or is it like dealing with an alien species?

Research from Stanford University has shown that it takes at least six seconds for a human driver to recover their awareness and take back control, says Mr Heck. But even when there is enough time for human attention to be restored, the person stepping into a situation may see things differently from the machine, making the handover far from seamless.

"We need to work on a shared meaning between software systems and people; this is a very difficult problem," says Mr Sikka. A second type of human/machine cooperation is designed to make sure that a sensitive task always depends on a person, even in situations where an automated system has done all the preparatory work and would be quite capable of completing the task itself. Military drones, where human pilots, often based thousands of miles away, are called on to make the decision to fire at a target, are one example. Both show how AI can make humans far more effective without robbing them of control, says Mr Heck.

"You can't say the technology itself can only be used in a defensive way and under human control." A final type of human-in-the-loop system involves the use of AI that is not capable of handling a task entirely on its own but is used as an aid to human decision-making. Algorithms that crunch data and make recommendations, or direct people on which step to take next, are creeping into everyday life. The algorithms, though, are only as good as the data they are trained on, and they are not good at dealing with new situations.

People required to trust these systems are often also required to take them on faith. The outcome of these computer-aided decisions may well end up being worse than those based on purely human analysis, he says. Sometimes people will blindly follow the machine; other times people will say: "Hang on, that doesn't look right."

But what happens when the stakes are higher? IBM made medical diagnostics one of the main goals for Watson, the system first created to win a TV game show and then repurposed to become what it calls a more general cognitive system. Simply saying they'll still make the decisions doesn't make it so. Similar worries surfaced in the 1980s, when the field of AI was dominated by expert systems designed to guide their human users through a decision tree to reach the correct answer in any situation.

But the latest AI, based on machine learning, looks set to become far more widely adopted, and it may be harder to second-guess.

Non-experts may feel reluctant to second-guess a machine whose workings they do not understand. Technicians had no way of identifying the flaw and the machine stayed in use much longer as a result, says Mr Nourbakhsh.

Some experts, however, say headway is being made and that it will not be long before machine learning systems are able to point to the factors that led them to a particular decision. Like many working in the field, he expresses optimism that humans and machines, working together, will achieve far more than either could have done alone.

He had already founded and sold off several successful consumer technology companies, but as he grew older he wanted to do something more meaningful, that is, he wanted to build a product that would serve the people that technology startups had often ignored. Both my friend and I were entering the age at which our parents needed more help going about their daily lives, and he decided to design a product that would make life easier for the elderly.

It sounded like a wonderful product, one that would have a real market right now.

But once those material needs were taken care of, what these people wanted more than anything was true human contact, another person to trade stories with and relate to. If he had come to me just a few years earlier, I likely would have recommended some technical fix, maybe something like an AI chat bot that could simulate a basic conversation well enough to fool the human on the other end.

But there remains one thing that only human beings are able to create and share with one another: love.

Despite what science-fiction films like Her, in which a man and his artificially intelligent computer operating system fall in love, portray, AI has no ability or desire to love or be loved.

I firmly believe we must forge a new synergy between artificial intelligence and the human heart, and look for ways to use the forthcoming material abundance generated by artificial intelligence to foster love and compassion in our societies.

Excerpted with permission from The Tech Whisperer: On Digital Transformation and the Technologies that Enable It, Jaspreet Bindra, Penguin Portfolio.

More:

This chapter on the future of Artificial Intelligence was written by Artificial Intelligence - Scroll.in

Posted in Artificial Intelligence | Comments Off on This chapter on the future of Artificial Intelligence was written by Artificial Intelligence – Scroll.in

Artificial Intelligence in education: Who should be the custodian of the new gold? – CNBCTV18

Posted: at 5:48 am

It is common knowledge now that a homogenous, rote-learning education system might suit some children but not all, thereby isolating many and preventing them from achieving their full potential. What we need to offer our children is a fun experience while learning: an experience that is immersive, experiential, self-paced, interactive and designed specifically for each child.

Fortunately, the global community has awakened to this crisis in education and has called for quality and inclusive education for all, which is the crux of the United Nations Sustainable Development Goal (SDG) 4. I perceive personalised learning as the foremost means of answering this global call and truly achieving this goal for all children across the world.

Social and institutional implications

The focus here, however, is not on highlighting the merits of personalised learning but on determining whether the concept is feasible and, if so, what its social and institutional implications are.

The definite answer is "Yes", and information technology is the key to making it happen. Specifically, I am referring to the internet and the growth of Artificial Intelligence (AI), which offer the possibility of harnessing the collective wisdom of the many for the benefit of the individual.

The internet can be seen as offering two levels of information. The first is the actual content that is made available for learners to access; let's call this first-order information. The second level of information is partially hidden: it is not readily available to everyone and relates to the behavioural aspects of the users accessing and using content; this is the data that is harnessed by AI. Let's call this second-order information.

An article in the May 17, 2017, issue of The Economist considers this second-order information about users accessing and using information on the internet the "new gold". Ben Rossi, in his article "Data revolution: the gold rush of the 21st century", estimates that the amount of data accumulated between 2011 and 2013 was more than nine times the data collected up to 2011; the total is expected to reach 44 zettabytes by 2020.

Personalised BOT

Such information has immense utility when it comes to education. Imagine a scenario where a child in rural India is having trouble with introductory algebra, and the teacher's limited knowledge base makes it hard for the child to find an answer. We can expect the teacher to have only a finite set of approaches to teaching algebra, as she or he is constrained by the human brain. Things will be very different, however, if the child is given access to an online system, which I will tentatively call the Global Intelligent Education Platform (GIEP). As part of this system, the child is paired with a personalised BOT right after she/he enters school.

A BOT can be described as a computer program (a set of algorithms) that is able to support and guide a user or users in accomplishing a task or automating repetitive tasks, and that may grow its own intelligence after mining and analysing huge amounts of data. A BOT is a product of AI!

This BOT develops a keen understanding of the child's attributes and learning preferences by evaluating data about the child's ongoing learning experiences. It also has access to an infinite set of possible interventions, arising from learner-centric data derived from the experiences of millions of other children learning algebra or any other topic worldwide, to help the child overcome learning problems. Personalised learning, the holy grail of education, is a definite reality in this hypothetical scenario. As I perceive it, making this a reality for children today is a distinct possibility.

Before that, however, we must overcome some ideological challenges related to the ownership of the information that the BOT will access. As we have visualised it, the personalised BOT's capacity to impart learning and customise solutions will only be as strong as the amount of information it can access. Therefore, the strength of the GIEP will depend on whether the information generated by learners all across the world is made accessible to every individual learner: a pure social good.

Role of inter-governmental organisations

Who, then, should be the custodians or managers of this new gold? In many ways, the knowledge available can be considered a global commons, as described by the late Nobel laureate Elinor Ostrom.

Who, then, can provide and manage this commons? Can governments provide this service? The answer is both yes and no. Yes, because governments do have the mandate to provide social goods; no, because in this case the commons transcends national boundaries. If I need an analogy, I would use the global climate system: a global commons which, if not managed properly, can lead to climate change.

The unambiguous solution to this dilemma is that the responsibility be taken up by an inter-governmental organisation such as the United Nations or one or more of its specialised agencies such as the United Nations Educational, Scientific and Cultural Organisation (UNESCO).

To answer the two fundamental questions posed in the title of this article, I would say that the global community should own the global knowledge commons, and that this knowledge should be managed by an inter-governmental agency such as the United Nations.

Original post:

Artificial Intelligence in education: Who should be the custodian of the new gold? - CNBCTV18

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence in education: Who should be the custodian of the new gold? – CNBCTV18

Artificial intelligence outperforms clinicians on decision of where to send post-operative patients, pilot shows – News – McKnight’s Long Term Care…

Posted: at 5:48 am

Artificial intelligence (AI) won the battle of man versus machine after a pilot study found that the technology outperformed clinicians in triaging post-operative patients for intensive care.

The AI correctly triaged 41 of 50 patients (82% accuracy) during the study, while surgeons correctly triaged 35 of 50, for an accuracy rate of 70%. The rate of incorrect triage decisions was correspondingly lowest for the AI, at 18%, versus 30% for the surgeons.

The findings could lead to wider use of AI in gathering a patient's clinical information to determine whether they need intensive or post-operative care.

"The algorithm will be improved and perfected as the machine analyzes more patients, and testing at other sites will validate the AI model. Certainly, as shown in this study, the concept is valid and may be extrapolated to any hospital," said study co-author Marcovalerio Melis, MD.

Details from the pilot study were presented during the American College of Surgeons Clinical Congress 2019 this week.

More here:

Artificial intelligence outperforms clinicians on decision of where to send post-operative patients, pilot shows - News - McKnight's Long Term Care...

Posted in Artificial Intelligence | Comments Off on Artificial intelligence outperforms clinicians on decision of where to send post-operative patients, pilot shows – News – McKnight’s Long Term Care…

The end of humanity: will artificial intelligence free us, enslave us or exterminate us? – The Times

Posted: at 5:48 am

The Berkeley professor Stuart Russell tells Danny Fortson why we are at a dangerous crossroads in our development of AI

The Sunday Times, October 27 2019, 12:01am

Stuart Russell has a rule. "I won't do an interview until you agree not to put a Terminator on it," says the renowned British computer scientist, sitting in a spare room at his home in Berkeley, California. "The media is very fond of putting a Terminator on anything to do with artificial intelligence."

The request is a tad ironic. Russell, after all, was the man behind Slaughterbots, a dystopian short film he released in 2017 with the Future of Life Institute. It depicts swarms of autonomous mini-drones, small enough to fit in the palm of your hand and armed with a lethal explosive charge, hunting down student protesters, congressmen, anyone really, and exploding in their faces. It wasn't exactly Arnold Schwarzenegger blowing people away but

Go here to see the original:

The end of humanity: will artificial intelligence free us, enslave us or exterminate us? - The Times

Posted in Artificial Intelligence | Comments Off on The end of humanity: will artificial intelligence free us, enslave us or exterminate us? – The Times

Chatbots and artificial intelligence influence in education | Opinion – Indiana Statesman

Posted: at 5:48 am

Chatbots can be used for several purposes, such as helping customers and answering complex FAQs.

They have even been used to help pick candidates in recruitment processes, so it is no surprise that the educational system is trying to implement chatbots.

The scope of application could cover administration, with the aim of facilitating procedures; date reminders; assistance in reinforcing educational content; and mentoring and accompaniment actions.

Properly trained with a huge quantity of data, a chatbot could ease both the educational process of the student and the tasks of the teacher.

This artificial assistant could respond to demand 24/7, allowing professors to focus on the most qualitative tasks.

There is still reluctance from students and teachers to interact with machines, but once chatbots demonstrate their efficiency and gain the confidence of both parties, we will perhaps see a boost in their use in the educational field.

Here are some applications of both chatbots and artificial intelligence within the educational area that could have an astounding impact on the whole industry:

Essay Scoring.

Feedback on individually written essays is a time-consuming job that many educators are grappling with, and the problem is even bigger in massive open online courses.

Because there are often more than 1000 students in one class, there is clearly no realistic way for written essays to be given individual feedback.

Innovators have flirted with the artificial intelligence (AI) industry to combat this problem, and a solution is close to hand.

By feeding thousands of essays to a machine-learning algorithm, many believe, there is a good chance of replacing human input on essay feedback with AI systems.

Learning Through Chatbots.

Intelligent tutoring systems are a common application of artificial intelligence that provide students with a customized learning environment by analyzing their responses and how they go through the learning material.

Likewise, chatbots with artificial intelligence software can be used to teach students by converting a lecture into a series of messages that make it look like a regular chat conversation.

The bot will constantly determine the student's level of understanding and thus present the next section of the lecture.
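
As an illustration of the pattern just described, a minimal adaptive lecture bot might look like the sketch below; all content, names and thresholds are invented and do not reflect how Botsify or any particular platform is implemented:

```python
# Minimal sketch of an adaptive lecture chatbot: deliver a lecture as
# chat-sized messages, quiz after each section, and repeat or advance
# based on the student's score. All content and thresholds are invented.
LECTURE = [
    {"messages": ["A variable names a value.", "Example: x = 3"],
     "quiz": [("What does x = 3 do?", "assigns 3 to x")]},
    {"messages": ["Loops repeat work.", "Example: for i in range(3): ..."],
     "quiz": [("How many iterations does range(3) give?", "3")]},
]

PASS_THRESHOLD = 0.5  # advance only if at least half the answers are right

def run_lecture(lecture, ask=input, say=print):
    section = 0
    while section < len(lecture):
        for msg in lecture[section]["messages"]:
            say(msg)                      # the lecture arrives as chat messages
        quiz = lecture[section]["quiz"]
        correct = sum(
            ask(question + " ").strip().lower() == answer
            for question, answer in quiz
        )
        if correct / len(quiz) >= PASS_THRESHOLD:
            section += 1                  # understood: present the next section
        else:
            say("Let's go over that again.")  # repeat the current section

if __name__ == "__main__":
    run_lecture(LECTURE)
```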

Botsify is a chatbot for education that works in a similar way.

In the form of text, pictures, videos or a combination of these, Botsify introduces a specific topic to the students.

Students take quizzes after studying the subject and send the findings to their teachers. The teachers can easily monitor the grades of the students as well.

Enhance student engagement.

Students are familiar with instant messaging sites and social media nowadays.

Whether they want to chat, solve problems or find the best help, they turn to a digital help desk.

This can be used to increase students' training and interest in a topic.

Teachers and students can use these messages to communicate about classes, offices, other students and different activities.

Students would find it easy to learn about tasks, due dates or other important events.

CourseQ is a chatbot created to provide a simple way to talk to students, groups and teachers.

It can be used by a group to transmit messages and respond to queries from students.

Students may use it to ask class questions, and teachers may use it to interact with students, ask questions and address their concerns.

Better student support.

Here, chatbots can bring tremendous value.

More use can be made of chatbots that support students during the admissions process by providing all the necessary information about their courses, modules and faculty.

The bots can also act as campus guides when students arrive on campus.

They will help the students learn more about scholarships, hostels, library membership, etc.

Efficient Teaching Assistants.

Students also post questions on the web and look for someone to help them complete the activities and resolve their concerns.

Moreover, new educators need help to ease their hectic schedules.

Bots are used as electronic teaching aids to perform teachers' repetitive tasks.

Such bots are used to answer questions on the course module, classes, assignments and deadlines.

Instructors can also track the students' learning progress. Chatbots can provide the students with direct reviews.

Lastly, chatbots will assess the educational needs of the students and prescribe learning material accordingly.

See the rest here:

Chatbots and artificial intelligence influence in education | Opinion - Indiana Statesman

Posted in Artificial Intelligence | Comments Off on Chatbots and artificial intelligence influence in education | Opinion – Indiana Statesman

SmartStream Introduce a New Artificial Intelligence Module to Capture Missed Payments and Receipts – Business Wire

Posted: at 5:48 am

LONDON--(BUSINESS WIRE)--SmartStream Technologies, the financial Transaction Lifecycle Management (TLM) solutions provider, today completed a proof of concept for an artificial intelligence (AI) and machine learning module within its existing TLM Cash and Liquidity Management solution for receipts and payments, which are essential for any business in terms of liquidity risk and regulatory reporting.

Technology that meets the market demand for forecasting liquidity has been the backbone of SmartStream's intraday liquidity management solution. The next phase of the solution's development is about predicting the settlement of cash-flows. SmartStream has been working on a proof of concept with its clients for profiling and predicting intraday settlement activity, which includes identifying missed payments and receipts planned for settlement within the current date. Cash management teams will gain greater visibility into the payment process and manage liquidity risk more efficiently, minimising the potential for payments to be missed.

Andreas Burner, Chief Innovation Officer, SmartStream, states: "This proof of concept is clearly another important step towards ensuring that our clients are keeping pace with what the regulators are demanding, and in particular the questioning of a bank's position and the management of its outstanding balances. By combining our recent achievements in AI with SmartStream's many years of experience in this area, the Vienna-based Innovation Lab developed this new AI cash and liquidity prediction module. The technology continuously learns data patterns, so the service continues to improve and become more efficient."

The new TLM Cash and Liquidity Management AI and machine learning module is an important development for any financial institution with a treasury department: its ability to predict when credit is going to arrive gives the treasurer more control over cash-flows. The proprietary algorithm uses the data to predict the settlement time of receipts on an intraday basis. The core of the module is underpinned by sophisticated machine learning technology that continuously improves, meaning the predictions become more accurate and treasurers can make more informed decisions.
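
SmartStream's algorithm is proprietary and not described in further detail, so the following is only a generic sketch of the pattern (learn settlement times from historical cash-flow records, then predict for new receipts); every feature name and number here is hypothetical:

```python
# Illustrative sketch only: SmartStream's module is proprietary. This
# shows the general pattern of predicting intraday settlement times of
# receipts from historical cash-flow records. Features are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: counterparty id, currency id, amount, hour sent.
X = np.column_stack([
    rng.integers(0, 20, n),          # counterparty
    rng.integers(0, 5, n),           # currency
    rng.lognormal(10, 1, n),         # amount
    rng.uniform(7, 17, n),           # hour the instruction was sent
])
# Synthetic target: minutes until the receipt actually settled.
y = 30 + 5 * X[:, 0] + 0.001 * np.sqrt(X[:, 2]) + rng.normal(0, 10, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

# Predicted settlement time for a new receipt; flows predicted to settle
# after a cut-off could be flagged to the cash management team.
print("predicted minutes to settle:", model.predict(X_test[:1])[0])
```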

Nadeem Shamim, Head of Cash & Liquidity Management, SmartStream, says: "Things are going to get tighter in terms of managing liquidity. Collateral is expensive, capital is expensive and there is currently a big drive to reduce excessive use of capital. This is an area where AI and predictive analytics can manage liquidity buffers more efficiently, and that can result in significant savings."

AI and machine learning provide banks with the opportunity to look at reducing the liquidity buffer. The rigorous analysis of unstructured data and learned settlement predictions reduces costs. It also offers another tool for mitigating the impact of reputational risk as it relates to the ability to meet payment obligations, by allowing greater visibility into exposure limits with predicted forecasting. The new SmartStream user interface enables users to drill down into individual cash-flows.


View post:

SmartStream Introduce a New Artificial Intelligence Module to Capture Missed Payments and Receipts - Business Wire

Posted in Artificial Intelligence | Comments Off on SmartStream Introduce a New Artificial Intelligence Module to Capture Missed Payments and Receipts – Business Wire

Artificial intelligence expert: True artificial intelligence should also have a consciousness, but we are far from that – The Slovak Spectator

Posted: at 5:48 am

Artificial intelligence is an issue that has gained much popularity in the past few years.

This is also evident in the number of technologies referring to artificial intelligence (AI). Autonomous cars and personal assistants like Apple's Siri are often spoken about, while machine learning, deep learning and neural networks are frequently featured in written text. What do these terms mean, and what is the difference between them? How far has technology based on elements of AI progressed? We discussed these topics in a series of interviews with Juraj Jánošík, an expert on artificial intelligence at the ESET company.

If we can simulate human intelligence, consciousness and thinking with some technology, we achieve artificial intelligence. There is a term for it - artificial general intelligence - but there is also a concept called super intelligence. While artificial general intelligence (AGI) is meant to imitate human thinking, including its faults, super intelligence (SI) should go even further and exceed the limits of human consciousness and thinking, and considerably surpass them. However, there are more philosophical discourses involved, and we have to admit that currently, we are still far behind, even in the development of AGI.

These terms are frequently confused, even by professionals. Simply put, artificial intelligence is an umbrella notion. It includes a wide range of topics that also cover the issues of robotics, machine learning and so on. Thus, machine learning is just one sphere of AI, and currently, it is probably gaining the most attention. Deep learning, on the other hand, is just one part of machine learning. This sphere is inspired by how the brain functions and tries to simulate the connection between neurons in the brain.

The idea of machine learning is quite simple. We have a lot of data available, and through ML, we want to make a compact representation of it. This means that if I have a huge amount of data, I do not have to sort through it all on my own. It is enough for me to take a smaller sample, sort it and use an algorithm on it in order to assign the basic sorting/classification. Then, I let the learned algorithm work on another, smaller sample, and watch whether it sorts it according to my wish. If not, I adjust its behaviour, for example by specifying criteria. If I am satisfied with the algorithm's performance, I use it for the whole database, and the algorithm sorts it on its own in a much shorter time than any human would manage.

For example, if we want to teach a computer how to distinguish a cup, we load thousands or millions of photos of cups and glasses. From these pictures, the algorithm tries to create some sort of generalisation on its own. Then, when I show it a new photo of a cup, it will be able to tell the probability that it is a cup. If I am not content with the results, I can adjust the criteria, for example by telling it the object is a cup, and so on.
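
As a toy illustration of the workflow described above (label a small sample, train, check on a held-out sample, then let the model sort the rest and report probabilities), here is a minimal sketch using scikit-learn on synthetic data; nothing here comes from ESET's actual systems:

```python
# Toy version of the workflow described above: label a small sample,
# train a classifier, verify it on a held-out sample, then apply it to
# the full, unlabelled database. Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Pretend this is the "huge amount of data"; we only label part of it.
X, y = make_classification(n_samples=10000, n_features=8, random_state=0)
X_labelled, X_rest, y_labelled, _ = train_test_split(
    X, y, train_size=1000, random_state=0)

# Split the labelled part into a training sample and a checking sample.
X_train, X_check, y_train, y_check = train_test_split(
    X_labelled, y_labelled, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
print("accuracy on the checking sample:", model.score(X_check, y_check))

# If the accuracy is acceptable, sort the whole remaining database,
# reporting a probability for each item (the "cup" example).
probabilities = model.predict_proba(X_rest)[:, 1]
print("first item is class 1 with probability", probabilities[0])
```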

Currently, when AI is mentioned, it is machine learning that is talked about the most. It already functions on a regular basis: for example, recommending programmes to users on Netflix based on the programmes they have already seen. Mobile phones that categorise photographs, autonomous cars and cyber-security are also examples of machine learning we engage in. Right now, the biggest discussion in AI revolves around machine learning; global companies like Google and Apple are investing massively in these technologies.

Deep learning also interprets large volumes of data, of which we need to make a compact representation. This is called a model, which will then make predictions. However, we will not use tree algorithms but rather neural networks.

Neural networks are inspired by how the human brain works, by the functioning of neurons. The brain is basically a huge network of neurons, which is entered through some inputs. These inputs are evaluated in the brain, and then the brain sends the outputs into our organism. The neural network works the same way. We have some inputs that enter the network. The network assigns a certain significance to the entries, evaluates them, and then returns the outputs to us.

Let us try, for instance, to explain it through the example of a decision tree, which is a common classification algorithm in ML. In a decision tree, each decision takes me to another one, followed by another, similar to a tree growing. Either I climb onto one branch or onto another, and then I face the next branch, the next layer. AI during machine learning works in a similar way: either this or that, etc., round and round.

By contrast, when it comes to neural networks, the impulse enters something we can imagine as a network and crosses it, passing several layers simultaneously, or can even return back. There are even neural networks with cells that decide what to use in the current situation and what to set aside but remember for later use. So, it is closer to the real functioning of the brain, even though this is not an exact copy of how the brain works, of course.

Simply put, yes. This also implies a fundamental difference, which concerns interpretability. In the decision tree, I can find out retroactively why the AI decided the way it did. I can look back at individual steps and evaluate how it decided at each step. With neural networks, this is not so simple, as the path of the impulse is not direct. The impulse is evaluated many times, has a certain weight allocated, and the algorithm creates a generalisation. But I cannot say why it allocated certain weights to individual neurons, or determine why it decided the way it did. This is a big problem when applying these technologies to decisions involving humans, as we cannot say clearly why a given model has decided in a certain way.
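
The interpretability gap he mentions is easy to demonstrate. With scikit-learn, for instance, one can print the exact rule path a decision tree followed for a single input; no comparably direct read-out exists for a trained neural network. A minimal sketch, using the standard iris dataset:

```python
# Sketch of decision-tree interpretability: recover the exact sequence
# of threshold tests the tree applied to one sample. Neural networks
# offer no comparably direct read-out of "why".
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

sample = X[0:1]
node_path = tree.decision_path(sample).indices  # nodes visited, in order
feature, threshold = tree.tree_.feature, tree.tree_.threshold

for node in node_path:
    if feature[node] >= 0:  # negative values mark leaves; no test there
        went_left = sample[0, feature[node]] <= threshold[node]
        print(f"node {node}: feature {feature[node]} "
              f"{'<=' if went_left else '>'} {threshold[node]:.2f}")
print("prediction:", tree.predict(sample)[0])
```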

Yes. For example, in the banking sector, AI is used to evaluate a client's creditworthiness. This is a very sensitive issue, in that people want to know why the bank has not approved their credit application. Hardly anyone wishes to hear that it was artificial intelligence that decided this and that, moreover, we are unable to explain why.

Apart from this, there is also the issue of input data. AI can learn incorrect generalisations based on the data available, for example racism. Statistically, the input data may imply that there is higher probability of a specific group of the population not repaying a mortgage. An incorrect selection of data can lead to prejudice in the decision-making process of the resulting model. However, we try to prevent this in our work.

Technically, we could already apply machine learning and deep learning in banking, but this has not been done on a mass scale for the abovementioned reasons.

Yes, there are still some other ways. Many algorithms used today are old: some were established back in the 1950s, or even earlier. Basically, everything we draw from today goes back to the 1950s, 1960s, or 1970s. The last considerable innovations date back to the 1980s and 1990s. Since then, we essentially only improve what has already been invented. Or, we have found practical use for algorithms which were only on paper.

Paradoxically, the development of ML was aided by computer games, as they drove increases in computing performance, especially in graphics chips. Another factor was the arrival of technologies associated with big data and big-capacity, fast repositories. Until then, there was also a problem with databases. The development of these two spheres, i.e. computing performance and databases, has led to the current situation: most companies focus precisely on machine learning, which inevitably depends on them. But there are definitely more approaches.

Well, not just the games, but we owe them considerable credit for this. When we reduce machine learning to its complete mathematical extreme, it is basically composed of matrices. Simply put, we need to calculate with decimal numbers and multiply them in huge quantities. This is what effectively happens at the computing level. And at this, computers are much better than humans. Exactly the same happens with graphics cards when rendering the environment in which games are played, or when watching an HD video. Basically, these are the same mathematical operations.
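
To make the point concrete, the core of one neural-network layer is a single matrix multiply plus a nonlinearity, the same bulk arithmetic a graphics card performs when rendering a scene. A minimal sketch:

```python
# The core of one neural-network layer is a single matrix multiply
# plus a nonlinearity: exactly the bulk arithmetic GPUs are built for.
import numpy as np

batch, n_in, n_out = 64, 512, 256
x = np.random.rand(batch, n_in)      # 64 inputs of 512 features each
W = np.random.rand(n_in, n_out)      # layer weights
b = np.random.rand(n_out)            # layer bias

layer_output = np.maximum(x @ W + b, 0.0)   # ReLU(xW + b)
print(layer_output.shape)                    # (64, 256)
```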

Of course, algorithms have been developing and improving, and new ones have even appeared. So, you cannot say that we have made zero progress. However, it is true that even the current models of AI, commonly used in today's products, were generated in the 1990s. The development of machine learning, widely popular today, occurred thanks to the development of the technologies mentioned above. When it comes to ideas, we have not moved on fundamentally in the past 20 to 30 years. I have not seen any revolutionary idea that would considerably change the development of AI. We all rather work on the foundations laid in the past, and we are improving them.

Let us use the very popular algorithm LSTM, or long short-term memory, as an example. This is a type of deep learning based on neural networks, which is used to process sequential data, i.e. mainly images and sound. It is exactly this algorithm that is used when creating fake videos, the so-called deep fakes that are very popular now. The video of Barack Obama that spread around the world, in which he says something he never said in reality, was produced in part with an algorithm invented by two people at Graz University back in 1997. However, the algorithm didn't become popular until recently. In the 1990s, most people and companies could not afford a computing device that could manage such a performance. Today, it is much more available.
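
For readers curious what using an LSTM looks like in practice, a minimal sequence model takes only a few lines in a framework such as PyTorch; the shapes here are arbitrary:

```python
# Minimal LSTM usage sketch (PyTorch): process a batch of sequences and
# keep the final hidden state as a summary of each sequence.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

batch, seq_len = 4, 10
x = torch.randn(batch, seq_len, 16)     # 4 sequences of 10 steps each
outputs, (h_n, c_n) = lstm(x)

print(outputs.shape)  # torch.Size([4, 10, 32]): output at every step
print(h_n.shape)      # torch.Size([1, 4, 32]): final hidden state
```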

There is an older approach that is effective and is often used in robotics or in industrial management: genetic algorithms. This approach is inspired by the replication and division of cells, which sometimes involve mutations. Similarly, with an algorithm we define some population, we change the functions over time, and watch how these changes, or mutations, change the results.

I can use a slightly bizarre example from the stock exchange. In order to predict how the stock exchange will develop over time, we need to follow numerous indices. We seek the right function for determining the best moment to conclude a deal, to sell shares, etc. For this, we need to follow many parameters and search for a balance between them based on previous data. Thus, we create a function, enter input data, and observe how the whole system functions and how it changes. Afterwards, we make changes to this function, i.e. the mutations, and watch how the system has changed and how it is developing. We do this again and again, until we find a suitable model which will at least partially represent reality.
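
A toy version of this loop (mutate candidate parameters, keep the candidates that score better against past data) fits in a few lines; the fitness function below is a stand-in, not a real trading model:

```python
# Toy genetic-algorithm loop: evolve a parameter vector by mutation and
# selection. The fitness function is a stand-in for "how well does this
# rule fit previous market data", not a real trading model.
import random

def fitness(params):
    # Hypothetical score: distance of the rule's parameters from an
    # unknown optimum we pretend past data implies (closer is better).
    target = [0.3, -1.2, 2.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

population = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(20)]

for generation in range(100):
    # Selection: keep the better-scoring half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Mutation: each survivor spawns a slightly perturbed child.
    children = [[p + random.gauss(0, 0.1) for p in ind] for ind in survivors]
    population = survivors + children

print("best parameters found:", max(population, key=fitness))
```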

There is an approach called good-old-fashioned AI. In Slovak, it is usually defined as an expert system. In this case, an expert who understands AI defines fixed rules, according to which the programme will behave. These rules can change in the process, but they will again be changed by the expert, not by the AI itself. Thus, this is human supervision of artificial intelligence.

Another popular approach is represented by Markov chains, whose foundations were laid by the Russian mathematician Andrey Markov at the beginning of the 20th century and which are nowadays widely used as statistical models of real processes. They are used in robotics and finance, to optimise queues at airports, and in the PageRank algorithm behind Google's search engine. These methods became the basis for the area of machine learning known as reinforcement learning. Reinforcement learning, combined with expert systems, was used, for example, for AlphaGo.
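
PageRank is itself a tidy example of a Markov chain at work: the rank vector is the stationary distribution of a "random surfer" hopping between linked pages, and it can be computed by simple power iteration. A minimal sketch on a four-page link graph:

```python
# PageRank as a Markov chain: power-iterate toward the stationary
# distribution of a "random surfer" on a tiny 4-page link graph.
import numpy as np

# links[i][j] = 1 if page i links to page j
links = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

# Row-normalise into transition probabilities, add damping (d = 0.85).
P = links / links.sum(axis=1, keepdims=True)
d, n = 0.85, len(links)
G = d * P + (1 - d) / n     # the "Google matrix"

rank = np.full(n, 1 / n)    # start from a uniform distribution
for _ in range(100):
    rank = rank @ G         # one step of the Markov chain

print("PageRank scores:", rank.round(3))
```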

For instance, the media broke the story about artificial intelligence defeating the best player of the game Go. AI Watson from IBM was also highly publicised. These forms of artificial intelligence combine machine learning and expert systems. Their use is limited, but within defined boundaries they have an excellent understanding. However, that is all. The bottom line is that we have AI that can defeat someone in games but that cannot make decisions in other spheres.

Watson, for instance, is good at putting things into context. The paradox, though, is that it cannot decide based on what it has discovered. So, it is not a conscious or purposeful activity. Watson is great at diagnosing an MRI. Analysis implies that it has a higher rate of success than most radiologists. This is understandable, as a radiologist's effectiveness is derived from their experience, from how many X-rays they have already seen. This is, essentially, machine learning.

Moreover, radiologists' decision-making is often impacted by human weaknesses like fatigue, current mood, or whether the person is hungry or thirsty. Watson is not affected by such factors. It is enough to pour thousands or millions of X-ray images classified as good, bad, or whatever into the AI. Based on these images, Watson is able to predict a diagnosis with a very high success rate. It has the capacity to see more images than a single radiologist can see in their lifetime. Moreover, it can even recognise a change in a single pixel, which is close to impossible for humans.

ML is popular because it is apt for a wide range of tasks we face in everyday life. For instance, it can make predictions based on previous data and look for anomalies, and it has computer vision, which has been functioning for many years. So, ML is not popular because it is the best form of AGI.

Many wise people even consider ML a dead end. Returning to the cup example I mentioned: if you want to teach a child what a cup is, you do not show them a million cups. The human brain does not work like this.

This is a good question. There are two ways to view it. What does an insufficient computing performance mean? Some researchers claim that if we wanted to translate the human brain into a computer, we could model it on the currently best-performing computer in the world. One human brain on the best-performing and most expensive supercomputer: that does not seem very effective.

This is another problem. An approximate estimate can be made. The performance of modern computers is measured in units called FLOPS (floating-point operations per second). Roughly speaking, this is the number of operations with decimal numbers a computer can do in a second. The best-performing supercomputers have computing performance in the tens of petaFLOPS, or something crazy like that. In other words, these supercomputers can calculate simulations of the atmosphere or nuclear explosions with complex equations with a billion parameters. The human brain would do such calculations at only an estimated speed of 0.001 FLOPS. But the human brain can do other things today's computers would not be able to simulate, as they are too complicated.
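
For scale, the prefixes work out as follows:

\[
1\ \text{PFLOPS} = 10^{15}\ \text{floating-point operations per second};
\qquad \text{tens of PFLOPS} \approx 10^{16}\ \text{operations per second}.
\]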

This is not the focal point; the debate centres more on consciousness or decision-making. These are things we do not understand properly. We do not know how consciousness works, but we do know that it does not work the way ML does. Nobody looks at 500,000 photos of cats to recognise a cat on the street. That is why we cannot speak about AI yet; there are many challenges, and we are only talking about simulating a single average human brain.

We have just touched on it: we do not know how consciousness works. Nobody knows what consciousness is. There are philosophical definitions but we lack a mathematical model we would be able to use. We have no clear definition.

This is another question in this debate. If we want to achieve full-fledged AI, it has to have consciousness. Without it, it will be a mere list of rules which the machine follows, and its decision-making will not be independent. For AI to be independent, it has to have its own ability to think, just like a human. And, of course, we humans make mistakes as well, so if AI is designed by us, it will probably not have perfect thinking and will probably make mistakes, just like us. And if it did not make them, we would already be talking about super intelligence.

If we talk about creating AI like human intelligence, then it should have all these requirements, like a personality, and should make mistakes and learn lessons from them.

Exactly. Each form of artificial intelligence develops in a certain way. One has grown up in one laboratory, another in a different one. This development is distinctive, and different AI learn on different inputs, offering a different kind of evaluation, just like humans. Otherwise, we would be talking about super intelligence, which always gives perfect outputs.

I do not think so. There is a group of people who dream about it, but we also know of quite a big group that does not agree with it. This group includes many respected people, like the late Stephen Hawking, Elon Musk, and Bill Gates, who think we should not take this path.

I see it pragmatically. Why should humans do monotonous, boring things if a computer can do them better? I would rather have my MRI evaluated by a really good computer with a high success rate than by an average or below-average radiologist.

Thus, the key question is: what AI are we discussing? Do we just want intelligent help from a computer, artificial general intelligence, or super intelligence? From my point of view, the main goal now is to create AI that helps us with problems we cannot solve, or solves them much more effectively. We have not made any further progress beyond this.

Yes, it is possible. The question, rather, is: do we want it at all? As I already indicated, I see it from a practical point of view. Elements of AI can help us greatly in everyday life, and that is exactly what we here, at ESET, are working on. Why not use it?

Juraj Jánošík received a Bachelor's degree in applied informatics and a Master's degree in robotics at the Slovak University of Technology in Bratislava. In 2008, he joined the ESET company as a malicious-code analyst. Since 2013, he has been leading the team responsible for automatic detection of threats and artificial intelligence. He is currently responsible for integrating machine learning into the detection kernel. He regularly lectures at specialist conferences around the world.

28. Oct 2019 at 6:00

See the rest here:

Artificial intelligence expert: True artificial intelligence should also have a consciousness, but we are far from that - The Slovak Spectator

Posted in Artificial Intelligence | Comments Off on Artificial intelligence expert: True artificial intelligence should also have a consciousness, but we are far from that – The Slovak Spectator
