The Weight of the Universe: Physicists Challenge the Standard Model of Cosmology – SciTechDaily

The Universe contains unimaginably many objects. Cosmologists are trying to weigh them all. Credit: ESO/T. Preibisch

Results from physicists in Bochum have challenged the Standard Model of Cosmology. Infrared data, which have recently been included in the analysis, could be decisive.

Bochum cosmologists headed by Professor Hendrik Hildebrandt have gained new insights into the density and structure of matter in the Universe. Several years ago, Hildebrandt had already been involved in a research consortium that pointed out discrepancies in the data between different groups: the values determined for matter density and structure differed depending on the measurement method. A new analysis, which includes additional infrared data, has made the differences stand out even more. They could indicate a flaw in the Standard Model of Cosmology.

Rubin, the science magazine of Ruhr-Universität Bochum, has published a report on Hendrik Hildebrandt's research. The latest analysis of the research consortium, called the Kilo-Degree Survey, was published in the journal Astronomy & Astrophysics in January 2020.

Cosmologist Hendrik Hildebrandt is looking for answers to fundamental questions about the Universe, for example how high the density of matter in space is. Credit: Roberto Schirdewahn

Research teams can calculate the density and structure of matter based on the cosmic microwave background, a radiation that was emitted shortly after the Big Bang and can still be measured today. This is the method used by the Planck Research Consortium.

The Kilo-Degree Survey team, as well as several other groups, determined the density and structure of matter using the gravitational lensing effect: as high-mass objects deflect light from galaxies, these galaxies appear in a distorted form in a different location than they actually are when viewed from Earth. Based on these distortions, cosmologists can deduce the mass of the deflecting objects and thus the total mass of the Universe. In order to do so, however, they need to know the distances between the light source, the deflecting object and the observer, among other things. The researchers determine these distances with the help of redshift, which means that the light of distant galaxies arrives on Earth shifted into the red range.
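The deflection described above can be made quantitative. For a point-like lens of mass M, general relativity gives the bending angle, and the lens equation shows explicitly why the distances between source, lens, and observer must be known:

```latex
% Bending angle for light passing a point mass M at impact parameter b:
\hat{\alpha} = \frac{4GM}{c^2 b}

% Lens equation relating the true source position \beta to the observed
% position \theta, with angular-diameter distances D_s (observer-source)
% and D_{ls} (lens-source):
\beta = \theta - \frac{D_{ls}}{D_s}\,\hat{\alpha}(\theta)
```

The distance ratio in the second equation is exactly the quantity that the redshift measurements described next are used to pin down.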

In order to determine the density of matter in the Universe using the gravitational lensing effect, cosmologists look at distant galaxies, which usually appear in the shape of an ellipse. These ellipses are randomly oriented in the sky. On its way to Earth, the light from the galaxies passes high-mass objects, such as clusters of galaxies that contain large quantities of invisible dark matter. As a result, the light is deflected, and the galaxies appear distorted when viewed from Earth. Since the light travels a long way, it is repeatedly deflected by high-mass objects. Light from galaxies that are close to each other mostly passes the same objects and is thus deflected in a similar way. Neighboring galaxies therefore tend to be distorted in a similar way and point in the same direction, although the effect is exaggerated here. Researchers exploit this tendency in order to deduce the mass of the deflecting objects. Credit: Agentur der RUB

To determine distances, cosmologists therefore take images of galaxies at different wavelengths, for example one in the blue, one in the green and one in the red range; they then determine the brightness of the galaxies in the individual images. Hendrik Hildebrandt and his team also include several images from the infrared range in order to determine the distance more precisely.
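Distance determination from multi-band brightness is known as photometric redshift estimation. A minimal sketch of the idea: features in a galaxy's spectrum shift redward with redshift, changing its colors, so the observed color can be matched against template colors on a redshift grid. All numbers and the color-redshift relation below are invented for illustration, not real filter data:

```python
# Toy photometric-redshift estimate. A galaxy's color (brightness
# difference between two bands) changes with redshift; we match the
# observed color against a grid of template colors.

def template_color(z):
    # HYPOTHETICAL relation: color grows roughly linearly with redshift
    # for this toy template. Real pipelines use measured filter curves.
    return 0.4 + 1.2 * z

def photo_z(observed_color, z_grid):
    # Pick the grid redshift whose template color best matches the data.
    return min(z_grid, key=lambda z: abs(template_color(z) - observed_color))

z_grid = [i / 100 for i in range(0, 201)]      # z = 0.00 ... 2.00
print(photo_z(template_color(0.73), z_grid))   # recovers z = 0.73
```

Adding infrared bands, as Hildebrandt's team does, extends the wavelength coverage and so tightens this kind of template match.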

Previous analyses had already shown that the microwave background data from the Planck Consortium systematically deviate from the gravitational lensing effect data. Depending on the data set, the deviation was more or less pronounced; it was most pronounced in the Kilo-Degree Survey. "Our data set is the only one based on the gravitational lensing effect and calibrated with additional infrared data," says Hendrik Hildebrandt, Heisenberg professor and head of the RUB research group Observational Cosmology in Bochum. "This could be the reason for the greater deviation from the Planck data."

To verify this discrepancy, the group evaluated the data set of another research consortium, the Dark Energy Survey, using a similar calibration. These values, too, deviated even more strongly from the Planck values.

High-mass objects in the Universe are not perfect lenses. As they deflect light, they create distortions; the resulting images look as if seen through the foot of a wine glass. Credit: Roberto Schirdewahn

Scientists are currently debating whether the discrepancy between the data sets is actually an indication that the Standard Model of Cosmology is wrong. The Kilo-Degree Survey team is already working on a new analysis of a more comprehensive data set that could provide further insights; it is expected to deliver even more precise data on matter density and structure in spring 2020.

Reference: "KiDS+VIKING-450: Cosmic shear tomography with optical and infrared data" by H. Hildebrandt, F. Köhlinger, J. L. van den Busch, B. Joachimi, C. Heymans, A. Kannawadi, A. H. Wright, M. Asgari, C. Blake, H. Hoekstra, S. Joudaki, K. Kuijken, L. Miller, C. B. Morrison, T. Tröster, A. Amon, M. Archidiacono, S. Brieden, A. Choi, J. T. A. de Jong, T. Erben, B. Giblin, A. Mead, J. A. Peacock, M. Radovich, P. Schneider, C. Sifón and M. Tewes, 13 January 2020, Astronomy & Astrophysics. DOI: 10.1051/0004-6361/201834878

Hot Super-Earth Discovered Orbiting Ancient Star | Astronomy – Sci-News.com

An international team of astronomers has discovered a close-in super-Earth exoplanet in the HD 164922 planetary system.

An artist's impression of the super-Earth exoplanet HD 164922d. Image credit: Sci-News.com.

HD 164922 is a bright G9-type star located approximately 72 light-years away in the constellation of Hercules.

Also known as Gliese 9613 or LHS 3353, the star is slightly smaller and less massive than the Sun and is 9.6 billion years old.

HD 164922 is known to host two massive planets: the temperate sub-Neptune HD 164922c and the Saturn-mass planet HD 164922b in a wide orbit.

The sub-Neptune is 12.9 times more massive than Earth, and orbits the parent star once every 75.8 days at a distance of 0.35 AU (astronomical units).

The Saturn-like planet has a mass 0.3 times that of Jupiter and an orbital period of 1,201 days at a distance of 2.2 AU.

In a new study, Dr. Serena Benatti from the INAF Astronomical Observatory of Palermo and colleagues searched for additional low-mass planets in the inner region of the HD 164922 system.

The astronomers analyzed 314 spectra of the host star collected by HARPS-N (High Accuracy Radial velocity Planet Searcher for the Northern hemisphere), a spectrograph on the Telescopio Nazionale Galileo at the Roque de los Muchachos Observatory, La Palma, Canary Islands, Spain.

"We monitored this target in the framework of the Global Architecture of Planetary Systems (GAPS) project, focused on finding close-in low-mass companions in systems with outer giant planets," they said.

The team detected an additional inner super-Earth with a minimum mass of 4 times that of the Earth.

Named HD 164922d, the planet orbits the star once every 12.5 days at a distance of 0.1 AU.
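The reported periods and separations can be cross-checked with Kepler's third law, P² = a³/M in units of years, AU, and solar masses. A short sketch follows; the stellar mass of ~0.9 solar masses is an assumption based on the article's "slightly less massive than the Sun," not a value the article gives:

```python
import math

# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[M_sun].
# ASSUMPTION: stellar mass ~0.9 M_sun; the article only says the star
# is slightly smaller and less massive than the Sun.
M_STAR = 0.9

def period_days(a_au, m_star=M_STAR):
    """Orbital period in days for a semi-major axis in AU."""
    return math.sqrt(a_au ** 3 / m_star) * 365.25

print(round(period_days(0.10), 1))  # ~12.2 days, vs. reported 12.5 (planet d)
print(round(period_days(0.35), 1))  # ~79.7 days, vs. reported 75.8 (planet c)
print(round(period_days(2.20)))     # ~1256 days, vs. reported 1201 (planet b)
```

All three agree with the published periods to within a few percent, consistent with the assumed stellar mass being only approximate.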

"This target will not be observed with NASA's Transiting Exoplanet Survey Satellite (TESS), at least in Cycle 2, to verify if it transits," the researchers said.

"Dedicated observations with ESA's CHaracterising ExOPlanet Satellite (CHEOPS) could be proposed, but they can be severely affected by the uncertainty on the transit time."

The teams paper will be published in the journal Astronomy & Astrophysics.

_____

S. Benatti et al. 2020. The GAPS Programme at TNG XXIII. HD 164922 d: a close-in super-Earth discovered with HARPS-N in a system with a long-period Saturn mass companion. A&A, in press; arXiv: 2005.03368

Exploring Astronomy Club and enduring COVID-19 – The Mesa Press

Astronomy Club is the only club at San Diego Mesa College that allows you to explore all that lies beyond the planet on which we live. Between the study, exploration and discovery of countless planets, stars, galaxies, comets, asteroids, and the infinite concept of space itself, it's easy to see why astronomy is such a uniquely sought-after field of study. Marie Yokers, a student majoring in astrophysics, is the Astronomy Club's current president. Upon attending the virtual presentation given by Jonny Kim, "An Evening with an Astronaut," on April 29, I noticed the event was organized by the San Diego Mesa Astronomy Club and reached out to its president.

Astronomy Club was originally founded in the fall semester of 2018 by Alexander Beltzer-Sweeney as founding president, with Ana Parra, Alex Hewett and Danny Rosales fulfilling the remaining crucial positions within the club.

The idea of a club based in science may sound off-putting to some, but Yokers extended this message to those unsure: "I want to mimic what Dr. Kim had said, which was, 'Space is for everyone. It doesn't belong to anyone.' The demographic absolutely reflects this sentiment. People come in from all walks of life, all ages, and all different experiences."

Space is vast, unknown, abstract, daunting and even confusing to some, but why is it important? Astronomy may not have the daily applications of mathematics or English, but it encompasses a broader realm of both academics and interest.

Yokers described the importance of this complex subject, stating, "Astronomy has played such a deep role in the development of the human race with agriculture, travel, culture, religion, etc." This raises the question: if astronomy is a more complex, all-encompassing subject, then how is it any less important than other subjects deemed essential? It isn't any less important. Astronomy is a field of study that observes all that lies outside of our atmosphere, and utilizes the knowledge and practices of physics, biology, geology and mathematics to continually broaden our understanding of the universe. These key traits, the nagging acknowledgement that there are many things we don't have answers for, and the fascination with the possibility of life found outside of the world we know make astronomy a study, field, and practice all its own.

According to the American Astronomical Society, astronomy is a rather small field in terms of careers, which incidentally leads to high levels of competition for open positions.

If you browse classes online, Astronomy is a class offered at Mesa, so how is Astronomy Club different from the class? First, it's a club, and beyond that, Astronomy Club has its own constitution, which includes the following two goals: "To promote interest in astronomy and related space sciences on the San Diego Mesa College campus" and "Provide opportunities for members to learn more about astronomy & related space sciences through club outings, lectures, work-based learning opportunities, and internships."

Astronomy Club operates through a balance of volunteering, education and discussions, though due to COVID-19, shifts have been made. If you find yourself wondering what types of things happen in Astronomy Club, Yokers identified a few, including film discussions, attending talks such as the one held at the Fleet Science Center earlier this year, attending the Astronomy Association's Star Parties, which allow amateur astronomers to observe, practice and congregate in a fun learning environment, and various other activities. In the words of Yokers, "Basically, if it's space-related, we try to jump in to learn and have fun!"

After taking a two-year hiatus from school to work, Yokers returned in 2018 to revisit her interest in astrophysics and took Astronomy 101 at Mesa. About her choice to enroll in the class: "It was the first class that I actually had a passion to do well in, and it was the first class that I really connected with the professor (Dr. Stojimirovic). I confided in Dr. Stojimirovic about wanting to pursue astrophysics as a career and she really helped push me in the correct direction."

It was at this point that Yokers found a role in Astronomy Club as treasurer and grew with it. Yokers went on to say, "If I did not take that chance, I would not have met the great network of people that I have so much to credit to today."

Busy class or work schedules, the idea of exploring personal interests, and the pressure to pursue the right education and career path can get overwhelming. Finding encouragement or inspiration from your family, a club, a friend or a professor can give you the extra boost you may have needed.

At the moment, COVID-19 has taken a toll on classes, jobs, student clubs, businesses, and leisurely activities alike, yet strides are being made to ensure Astronomy Club's continuance. Don't settle for locking yourself in your room with your now dust-coated textbooks. Yokers encouraged, "My motto for the club post-virus has been 'keep moving forward.' Would I step away from my physics homework for this event? If it's a yes, then the event is a go."

At this time, Astronomy Club consists mainly of movie nights, game nights and discussions, with the occasional lecture found easily online. Among the present changes, Yokers mentioned that voting for new officers will hopefully take place within the next two weeks, inviting anyone interested to reach out to the club email, astroclubmesa@gmail.com.

Astronomy Club meetings happen every Wednesday from 5:30 to 7 p.m., through Zoom for the time being. Once students are able to return to campus, the club meetings will be at the STEM Center.

With many student clubs derailed by COVID-19, and social distancing leading to feelings of loneliness and even lack of direction or drive, Astronomy Club will take you to the cosmos.

What is Artificial Intelligence? | Azure Blog and Updates …

It has been said that Artificial Intelligence will define the next generation of software solutions. If you are even remotely involved with technology, you will almost certainly have heard the term with increasing regularity over the last few years. It is likely that you will also have heard different definitions for Artificial Intelligence offered, such as:

"The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings." – Encyclopedia Britannica

"Intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans." – Wikipedia

How useful are these definitions? What exactly are "tasks commonly associated with intelligent beings"? For many people, such definitions can seem too broad or nebulous. After all, there are many tasks that we can associate with human beings! What exactly do we mean by "intelligence" in the context of machines, and how is this different from the tasks that many traditional computer systems are able to perform, some of which may already seem to have some level of intelligence in their sophistication? What exactly makes the Artificial Intelligence systems of today different from sophisticated software systems of the past?

It could be argued that any attempt to try to define Artificial Intelligence is somewhat futile, since we would first have to properly define "intelligence," a word which conjures a wide variety of connotations. Nonetheless, this article attempts to offer a more accessible definition for what passes as Artificial Intelligence in the current vernacular, as well as some commentary on the nature of today's AI systems, and why they might be more aptly referred to as "intelligent" than previous incarnations.

Firstly, it is interesting and important to note that the technical difference between what was referred to as Artificial Intelligence more than 20 years ago and traditional computer systems is close to zero. Prior attempts to create intelligent systems, known as expert systems at the time, involved the complex implementation of exhaustive rules that were intended to approximate intelligent behavior. For all intents and purposes, these systems did not differ from traditional computers in any drastic way other than having many thousands more lines of code. The problem with trying to replicate human intelligence in this way was that it required far too many rules and ignored something very fundamental to the way intelligent beings make decisions, which is very different from the way traditional computers process information.

Let me illustrate with a simple example. Suppose I walk into your office and I say the words "Good Weekend?" Your immediate response is likely to be something like "yes" or "fine thanks." This may seem like very trivial behavior, but in this simple action you will have immediately demonstrated a behavior that a traditional computer system is completely incapable of. In responding to my question, you have effectively dealt with ambiguity by making a prediction about the correct way to respond. It is not certain that by saying "Good Weekend" I actually intended to ask you whether you had a good weekend. Here are just a few possible intents behind that utterance:

And more.

The most likely intended meaning may seem obvious, but suppose that when you respond with "yes," I had responded with "No, I mean it was a good football game at the weekend, wasn't it?" It would have been a surprise, but without even thinking, you will absorb that information into a mental model, correlate the fact that there was an important game last weekend with the fact that I said "Good Weekend?" and adjust the probability of the expected response for next time accordingly, so that you can respond correctly next time you are asked the same question. Granted, those aren't the thoughts that will pass through your head! You happen to have a neural network (aka your brain) that will absorb this information automatically and learn to respond differently next time.

The key point is that even when you do respond next time, you will still be making a prediction about the correct way in which to respond. As before, you won't be certain, but if your prediction fails again, you will gather new data, which leads to my suggested definition of Artificial Intelligence, as it stands today:

Artificial Intelligence is the ability of a computer system to deal with ambiguity, by making predictions using previously gathered data, and learning from errors in those predictions in order to generate newer, more accurate predictions about how to behave in the future.

This is a somewhat appropriate definition of Artificial Intelligence because it is exactly what AI systems today are doing, and more importantly, it reflects an important characteristic of human beings which separates us from traditional computer systems: human beings are prediction machines. We deal with ambiguity all day long, from very trivial scenarios such as the above, to more convoluted scenarios that involve playing the odds on a larger scale. This is in one sense the essence of reasoning. We very rarely know whether the way we respond to different scenarios is absolutely correct, but we make reasonable predictions based on past experience.

Just for fun, let's illustrate the earlier example with some code in R! If you are not familiar with R, but would like to follow along, see the instructions on installation. First, let's start with some data that represents information in your mind about when a particular person has said "good weekend?" to you.

In this example, we are saying that GoodWeekendResponse is our score label (i.e. it denotes the appropriate response that we want to predict). For modelling purposes, there have to be at least two possible values, in this case "yes" and "no." For brevity, the response in most cases is "yes."

We can fit the data to a logistic regression model:

Now what happens if we try to make a prediction on that model, where the expected response is different than we have previously recorded? In this case, I am expecting the response to be "Go England!" Below is some more code to add the prediction. For illustration we just hardcode the new input data; output is shown in bold:

The initial prediction "yes" was wrong, but note that in addition to predicting against the new data, we also incorporated the actual response back into our existing model. Also note that the new response value "Go England!" has been learnt, with a probability of 50 percent based on current data. If we run the same piece of code again, the probability that "Go England!" is the right response based on prior data increases, so this time our model chooses to respond with "Go England!", because it has finally learnt that this is most likely the correct response!
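The original R listings did not survive extraction. As a stand-in, here is a minimal Python sketch of the same predict-observe-update loop; it uses simple response counts rather than the article's logistic regression, and the class name and data are illustrative:

```python
from collections import Counter

# Minimal stand-in for the article's R example: predict the most likely
# response from prior observations, then fold each actual response back
# into the model so a wrong prediction changes future behavior.

class ResponsePredictor:
    def __init__(self, observed):
        self.counts = Counter(observed)

    def predict(self):
        # Most frequently observed response so far.
        return self.counts.most_common(1)[0][0]

    def observe(self, actual):
        # Learn from the actual response, right or wrong.
        self.counts[actual] += 1

model = ResponsePredictor(["yes", "yes", "yes", "no"])
print(model.predict())            # "yes" -- best guess from prior data
for _ in range(4):
    model.observe("Go England!")  # the surprise response is absorbed
print(model.predict())            # now "Go England!" is the top response
```

As in the article's example, no rule for "Go England!" was ever written; the model simply shifted its prediction once the data favored the new response.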

Do we have Artificial Intelligence here? Well, clearly there are different levels of intelligence, just as there are with human beings. There is, of course, a good deal of nuance that may be missing here, but nonetheless this very simple program will be able to react, with limited accuracy, to data coming in related to one very specific topic, as well as learn from its mistakes and make adjustments based on predictions, without the need to develop exhaustive rules to account for the different responses that are expected for different combinations of data. This is the same principle that underpins many AI systems today, which, like human beings, are mostly sophisticated prediction machines. The more sophisticated the machine, the more it is able to make accurate predictions based on a complex array of data used to train various models, and the most sophisticated AI systems of all are able to continually learn from faulty assertions in order to improve the accuracy of their predictions, thus exhibiting something approximating human intelligence.

You may be wondering, based on this definition, what the difference is between machine learning and Artificial Intelligence. After all, isn't this exactly what machine learning algorithms do: make predictions based on data using statistical models? This very much depends on the definition of machine learning, but ultimately most machine learning algorithms are trained on static data sets to produce predictive models, so machine learning algorithms only facilitate part of the dynamic in the definition of AI offered above. Additionally, machine learning algorithms, much like the contrived example above, typically focus on specific scenarios, rather than working together to create the ability to deal with ambiguity as part of an intelligent system. In many ways, machine learning is to AI what neurons are to the brain: a building block of intelligence that can perform a discrete task, but that may need to be part of a composite system of predictive models in order to really exhibit the ability to deal with ambiguity across an array of behaviors that might approximate intelligent behavior.

There are a number of practical advantages in building AI systems, but as discussed and illustrated above, many of these advantages are pivoted around time to market. AI systems enable the embedding of complex decision making without the need to build exhaustive rules, which traditionally can be very time consuming to procure, engineer and maintain. Developing systems that can learn and build their own rules can significantly accelerate organizational growth.

Microsoft's Azure cloud platform offers an array of discrete and granular services in the AI and Machine Learning domain that allow AI developers and data engineers to avoid reinventing wheels and to consume reusable APIs. These APIs allow AI developers to build systems which display the type of intelligent behavior discussed above.

If you want to dive in and learn how to start building intelligence into your solutions with the Microsoft AI platform, including pre-trained AI services like Cognitive Services and the Bot Framework, as well as deep learning tools like Azure Machine Learning, Visual Studio Code Tools for AI, and Cognitive Toolkit, visit AI School.

MS in Artificial Intelligence | Artificial Intelligence

The Master of Science in Artificial Intelligence (M.S.A.I.) degree program is offered by the interdisciplinary Institute for Artificial Intelligence. Areas of specialization include automated reasoning, cognitive modeling, neural networks, genetic algorithms, expert databases, expert systems, knowledge representation, logic programming, and natural-language processing. Microelectronics and robotics were added in 2000.

Admission is possible in every semester, but Fall admission is preferable. Applicants seeking financial assistance should apply before February 15, but assistantships are sometimes awarded at other times. Applicants must include a completed application form, three letters of recommendation, official transcripts, Graduate Record Examinations (GRE) scores, and a sample of their scholarly writing on any subject (in English). Only the General Test of the GRE is required for the M.S.A.I. program. International students must also submit results of the TOEFL and a statement of financial support. Applications must be completed at least six weeks before the proposed registration date.

No specific undergraduate major is required for admission, but admission is competitive. We are looking for students with a strong preparation in one or more relevant background areas (psychology, philosophy, linguistics, computer science, logic, engineering, or the like), a demonstrated ability to handle all types of academic work (from humanities to mathematics), and an excellent command of written and spoken English.

For more information regarding applications, please visit the MS Program Admissions and Information for International Students pages.

Requirements for the M.S.A.I. degree include: interdisciplinary foundational courses in computer science, logic, philosophy, psychology, and linguistics; courses and seminars in artificial intelligence programming techniques, computational intelligence, logic and logic programming, natural-language processing, and knowledge-based systems; and a thesis. There is a final examination covering the program of study and a defense of the written thesis.

For further information on course and thesis requirements, please visit the Course & Thesis Requirements page.

The Artificial Intelligence Laboratories serve as focal points for the M.S.A.I. program. AI students have regular access to PCs running current Windows technology, and a wireless network is available for students with laptops and other devices. The Institute also features facilities for robotics experimentation and a microelectronics lab. The University of Georgia libraries began building strong AI and computer science collections long before the inception of these degree programs. Relevant books and journals are located in the Main and Science libraries (the Science library is conveniently located in the same building complex as the Institute for Artificial Intelligence and the Computer Science Department). The University's library holdings total more than 3 million volumes.

Graduate assistantships, which include a monthly stipend and remission of tuition, are available. Assistantships require approximately 13-15 hours of work per week and permit the holder to carry a full academic program of graduate work. In addition, graduate assistants pay a matriculation fee and all student fees per semester.

For an up-to-date description of tuition and fees for both in-state and out-of-state students, please visit the site of the Bursar's Office.

On-campus housing, including a full range of University-owned married student housing, is available to students. Student fees include use of a campus-wide bus system and some city bus routes. More information regarding housing is available here: University of Georgia Housing.

The University of Georgia has an enrollment of over 34,000, including approximately 8,000 graduate students. Students are enrolled from all 50 states and more than 100 countries. Currently, there is a very diverse group of students in the AI program. Women and international students are well represented.

Additional information about the Institute and the MSAI program, including policies for current students, can be found in the AI Student Handbook.

What Are the Advantages of Artificial Intelligence …

The general benefit of artificial intelligence, or AI, is that it replicates decisions and actions of humans without human shortcomings, such as fatigue, emotion and limited time. Machines driven by AI technology are able to perform consistent, repetitious actions without getting tired. It is also easier for companies to get consistent performance across multiple AI machines than it is across multiple human workers.

Companies incorporate AI into production and service-based processes. In a manufacturing business, AI machines can churn out a high, consistent level of production without needing a break or taking time off like people. This efficiency improves the cost-basis and earning potential for many companies. Mobile devices use intuitive, voice-activated AI applications to offer users assistance in completing tasks. For example, users of certain mobile phones can ask for directions or information and receive a vocal response.

The premise of AI is that it models human intelligence. Though imperfections exist, there is often a benefit to AI machines making decisions that humans struggle with. AI machines are often programmed to follow statistical models in making decisions. Humans may struggle with personal implications and emotions when making similar decisions. Famous scientist Stephen Hawking uses AI to communicate with a machine, despite suffering from a motor neuron disease.

AI Tutorial | Artificial Intelligence Tutorial – Javatpoint

The Artificial Intelligence tutorial provides an introduction to AI which will help you to understand the concepts behind Artificial Intelligence. In this tutorial, we have also discussed various popular topics such as the history of AI, applications of AI, deep learning, machine learning, natural language processing, reinforcement learning, Q-learning, intelligent agents, various search algorithms, etc.

Our AI tutorial is prepared from an elementary level, so you can easily understand the complete tutorial, from basic concepts to high-level concepts.

In today's world, technology is growing very fast, and we are getting in touch with different new technologies day by day.

Here, one of the booming technologies of computer science is Artificial Intelligence, which is ready to create a new revolution in the world by making intelligent machines. Artificial Intelligence is now all around us. It is currently at work in a variety of subfields, ranging from general to specific, such as self-driving cars, playing chess, proving theorems, playing music, painting, etc.

AI is one of the most fascinating and universal fields of computer science, and it has great scope in the future. AI aims to make a machine work like a human.

Artificial Intelligence is composed of two words, "Artificial" and "Intelligence," where "Artificial" means "man-made" and "Intelligence" means "thinking power"; hence AI means "a man-made thinking power."

So, we can define AI as:

Artificial Intelligence exists when a machine can have human-based skills such as learning, reasoning, and solving problems.

With Artificial Intelligence, you do not need to preprogram a machine for every task; instead, you can create a machine with algorithms that allow it to act with its own intelligence. That is the power of AI.

AI is not, in fact, an entirely new idea: according to Greek myth, there were mechanical men in ancient times that could work and behave like humans.

Before learning about Artificial Intelligence, we should understand why AI is important and why it is worth learning. The following are some of the main reasons to learn about AI:

The following are the main goals of Artificial Intelligence:

Artificial Intelligence is not just a branch of computer science; it is vast and draws on many other disciplines. To create AI, we first need to understand how intelligence is composed: intelligence is an intangible faculty of the brain that combines reasoning, learning, problem-solving, perception, language understanding, and more.

To achieve these capabilities in a machine or piece of software, Artificial Intelligence requires the following disciplines:

The following are some of the main advantages of Artificial Intelligence:

Every technology has some disadvantages, and the same goes for Artificial Intelligence. Advantageous as it is, AI still has drawbacks that we need to keep in mind while creating an AI system. The following are the disadvantages of AI:

Before learning about Artificial Intelligence, you should have fundamental knowledge of the following, so that you can understand the concepts easily:

Our AI tutorial is designed specifically for beginners, and it also includes some advanced concepts for professionals.

We are confident you will have no difficulty following our AI tutorial, but if you spot any mistake, kindly report the problem through the contact form.


It’s Called Artificial Intelligence – but What Is Intelligence? – WIRED

Elizabeth Spelke, a cognitive psychologist at Harvard, has spent her career testing the world's most sophisticated learning system: the mind of a baby.

Gurgling infants might seem like no match for artificial intelligence. They are terrible at labeling images, hopeless at mining text, and awful at videogames. Then again, babies can do things beyond the reach of any AI. By just a few months old, they've begun to grasp the foundations of language, such as grammar, and they've started to understand how the physical world works and how to adapt to unfamiliar situations.

Yet even experts like Spelke don't understand precisely how babies (or adults, for that matter) learn. That gap points to a puzzle at the heart of modern artificial intelligence: we're not sure what to aim for.

Consider one of the most impressive examples of AI, AlphaZero, a program that plays board games with superhuman skill. After playing thousands of games against itself at hyperspeed, and learning from winning positions, AlphaZero independently discovered several famous chess strategies and even invented new ones. It certainly seems like a machine eclipsing human cognitive abilities. But AlphaZero needs to play millions more games than a person during practice to learn a game. Most tellingly, it cannot take what it has learned from the game and apply it to another area.

To some members of the AI priesthood, that calls for a new approach. "What makes human intelligence special is its adaptability: its power to generalize to never-seen-before situations," says François Chollet, a well-known AI engineer and the creator of Keras, a widely used framework for deep learning. In a November research paper, he argued that it's misguided to measure machine intelligence solely according to its skills at specific tasks. "Humans don't start out with skills; they start out with a broad ability to acquire new skills," he says. "What a strong human chess player is demonstrating isn't the ability to play chess per se, but the potential to acquire any task of a similar difficulty. That's a very different capability."

Chollet posed a set of problems designed to test an AI program's ability to learn in a more generalized way. Each problem requires arranging colored squares on a grid based on just a few prior examples. It's not hard for a person. But modern machine-learning programs, trained on huge amounts of data, cannot learn from so few examples. As of late April, more than 650 teams had signed up to tackle the challenge; the best AI systems were getting about 12 percent correct.
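The format of these few-shot grid problems can be illustrated with a toy sketch. The code below is a hypothetical, minimal stand-in (not the actual benchmark's data format or API): it represents a task as a few input-output grid pairs and "learns" by searching a tiny hand-written space of candidate rules.

```python
# Toy, hypothetical sketch of a few-shot grid task (not the real
# benchmark format). Grids are lists of rows; each task supplies a few
# input -> output examples from which the rule must be inferred.

CANDIDATE_RULES = {
    "identity": lambda g: g,
    "flip_horizontal": lambda g: [row[::-1] for row in g],
    "flip_vertical": lambda g: g[::-1],
}

def infer_rule(train_pairs):
    """Return the name of the first candidate rule consistent with
    every training pair, or None if nothing fits."""
    for name, rule in CANDIDATE_RULES.items():
        if all(rule(inp) == out for inp, out in train_pairs):
            return name
    return None

# Two examples are enough to pin down the rule in this tiny space.
train_pairs = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),
    ([[4, 5, 6]], [[6, 5, 4]]),
]
rule = infer_rule(train_pairs)                        # "flip_horizontal"
prediction = CANDIDATE_RULES[rule]([[7, 8], [9, 0]])  # [[8, 7], [0, 9]]
```

A person solves such a task by simply spotting the transformation; this sketch only succeeds because the true rule happens to sit in its tiny hand-coded list, which is one way of seeing why learning the rule from a few examples is so hard for machines.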

A self-driving car cannot intuit from common sense what will happen if a truck spills its load.

It isn't yet clear how humans solve these problems, but Spelke's work offers a few clues. For one thing, it suggests that humans are born with an innate ability to quickly learn certain things, like what a smile means or what happens when you drop something. It also suggests we learn a lot from each other. One recent experiment showed that 3-month-olds appear puzzled when someone grabs a ball in an inefficient way, suggesting that they already appreciate that people cause changes in their environment. Even the most sophisticated and powerful AI systems on the market can't grasp such concepts. A self-driving car, for instance, cannot intuit from common sense what will happen if a truck spills its load.

Josh Tenenbaum, a professor in MIT's Center for Brains, Minds & Machines, works closely with Spelke and uses insights from cognitive science as inspiration for his programs. He says much of modern AI misses the bigger picture, likening it to a Victorian-era satire about a two-dimensional world inhabited by simple geometrical people. "We're sort of exploring Flatland, only some dimensions of basic intelligence," he says. Tenenbaum believes that, just as evolution has given the human brain certain capabilities, AI programs will need a basic understanding of physics and psychology in order to acquire and use knowledge as efficiently as a baby. And to apply this knowledge to new situations, he says, they'll need to learn in new ways: for example, by drawing causal inferences rather than simply finding patterns. "At some point, you know, if you're intelligent, you realize maybe there's something else out there," he says.

This article appears in the June issue.


Artificial intelligence is struggling to cope with how the world has changed – ZDNet

From our attitude towards work to our grasp of what two metres look like, the coronavirus pandemic has made us rethink how we see the world. But while we've found it hard to adjust to the new reality, it's been even harder for the narrowly designed artificial intelligence models that have been created to help organisations make decisions. Based on data that described the world before the crisis, these won't be making correct predictions anymore, pointing to a fundamental problem in the way AI is being designed.

David Cox, IBM director of the MIT-IBM Watson AI Lab, explains that faulty AI is particularly problematic in the case of so-called black box predictive models: those algorithms which work in ways that are not visible, or understandable, to the user. "It's very dangerous," Cox says, "if you don't understand what's going on internally within a model in which you shovel data on one end to get a result on the other end. The model is supposed to embody the structure of the world, but there is no guarantee that it will keep working if the world changes."

The COVID-19 crisis, according to Cox, has only once more highlighted what AI experts have argued for decades: that algorithms should be more explainable.

For example, if you were building a computer program that was a complete black box, aimed at predicting what the stock market would be like based on past data, there is no guarantee it's going to continue to produce good predictions in the current coronavirus crisis, he argues.

"What you actually need to do is build a broader model of the economy that acknowledges supply and demand, understands supply chains, and incorporates that knowledge, which is closer to something that an economist would do. Then you can reason about the situation more transparently," he says.

"Part of the reason why those models are hard to trust with narrow AIs is because they don't have that structure. If they did it would be much easier for a model to provide an explanation for why they are making decisions. These models are experiencing challenges now. COVID-19 has just made it very clear why that structure is important," he warns.

It's important not only because the technology would perform better and gain in reliability, but also because businesses would be far less reluctant to adopt AI if they trusted the tool more. Cox pulls out his own statistics on the matter: while 95% of companies believe that AI is key to their competitive advantage, only 5% say they've extensively implemented the technology.

While the numbers differ from survey to survey, the conclusion has been the same for some time now: there remains a significant gap between the promise of AI and its reality for businesses. And part of the reason that industry is struggling to deploy the technology boils down to a lack of understanding of AI. If you build a great algorithm but can't explain how it works, you can't expect workers to incorporate the new tool into their business flow. "If people don't understand or trust those tools, it's going to be a lost cause," says Cox.

Explaining AI is one of the main focuses of Cox's work. The MIT-IBM Watson AI Lab, which he co-directs, comprises 100 AI scientists across the US university and IBM Research, and is now in its third year of operation. The Lab's motto, displayed prominently on its website, is self-explanatory: "AI science for real-world impact".

Back in 2017, IBM announced a $240 million investment over ten years to support research by the firm's own researchers, as well as MIT's, in the newly founded Watson AI Lab. From the start, the collaboration's goal has had a strong industry focus, with the idea of unlocking the potential of AI for "business and society". The Lab's focus is not on "narrow AI", the limited form of the technology that most organizations know today; instead, the researchers are striving for "broad AI", which can learn efficiently and flexibly across multiple tasks and data streams, and ultimately has huge potential for businesses. "Broad AI is next," is the Lab's promise.

The only way to achieve broad AI, explains Cox, is to bridge research and industry. The reason that AI, like many innovations, remains stubbornly stuck in the lab is that the academics behind the technology struggle to identify and respond to the real-world needs of businesses. Incentives are misaligned; the result is that organizations see the potential of the tool but struggle to use it. AI exists and it is effective, but it is still not designed for business.

Before he joined IBM, Cox spent ten years as a professor at Harvard University. "Coming from academia and now working for IBM, my perspective on what's important has completely changed," says the researcher. "It has given me a much clearer picture of what's missing."

The partnership between IBM and MIT is a big shift from the traditional way that academia functions. "I'd rather be there in the trenches, developing those technologies directly with the academics, so that we can immediately take it back home and integrate it into our products," says Cox. "It dramatically accelerates the process of getting innovation into businesses."

IBM has now expanded the collaboration to some of its customers through a member program, which means that researchers in the Lab benefit from the input of players from different industries. From Samsung Electronics to Boston Scientific to banking company Wells Fargo, companies in various fields and locations can explain their needs, and the challenges they encounter, to the academics working in the Watson AI Lab. In turn, the members can take the intellectual property generated in the Lab and run with it even before it becomes an IBM product.

Cox is adamant, however, that the MIT-IBM Watson AI Lab was also built with blue-sky research compatibility in mind. The researchers in the lab are working on fundamental, cross-industry problems that need to be solved in order to make AI more applicable. "Our job isn't to solve customer problems," says Cox. "That's not the right use for the tool that is MIT. There are brilliant people in MIT that can have a hugely disruptive impact with their ideas, and we want to use that to resolve questions like: why is it that AI is so hard to use or impact in business?"

Explainability of AI is only one area of focus. There is also AutoAI, for example, which consists of using AI to build AI models and would let business leaders engage with the technology without having to hire expensive, highly skilled engineers and software developers. Then there is the issue of data labeling: according to Cox, up to 90% of a data science project consists of meticulously collecting, labeling and curating the data. "Only 10% of the effort is the fancy machine-learning stuff," he says. "That's insane. It's a huge inhibitor to people using AI, let alone to benefiting from it."

Doing more with less data, in fact, was one of the key features of the Lab's latest research project, dubbed Clevrer, in which an algorithm can recognize objects and reason about their behaviors in physical events from videos. This model is a neuro-symbolic one, meaning that the AI can learn unsupervised, by looking at content and pairing it with questions and answers; ultimately, it requires far less training data and manual annotation.

All of these issues have been encountered one way or another not only by IBM, but by the companies that signed up to the Lab's member program. "Those problems just appear again and again," says Cox, whether you are operating in electronics, med-tech or banking. Hearing similar feedback from all areas of business only emboldened the Lab's researchers to double down on the problems that mattered.

The Lab has about 50 projects running at any given time, carefully selected every year by both MIT and IBM on the basis that they should be both intellectually interesting and effective in tackling the problem of broad AI. Cox maintains that within this portfolio, some ideas are very ambitious and even border on blue-sky research; they are balanced, on the other hand, by other projects that are more likely to provide near-term value.

Although more prosaic than the idea of preserving purely blue-sky research, putting industry and academia in the same boat might indeed be the most pragmatic solution in accelerating the adoption of innovation and making sure AI delivers on its promise.


Powering the Artificial Intelligence Revolution – HPCwire

It has been observed by many that we are at the dawn of the next industrial revolution: the Artificial Intelligence (AI) revolution. The benefits delivered by this intelligence revolution will be many: in medicine, improved diagnostics and precision treatment; better weather forecasting; and self-driving vehicles, to name a few. However, one of the costs of this revolution is going to be increased electrical consumption by the data centers that power it. Data center power usage is projected to double over the next 10 years and is on track to consume 11% of worldwide electricity by 2030. Beyond AI adoption, other drivers of this trend are the movement to the cloud and the increased power usage of CPUs, GPUs and other server components, which are becoming more powerful and smarter.

AI's two basic elements, training and inference, each consume power differently. Training involves computationally intensive matrix operations over very large data sets, often measured in terabytes to petabytes. These data sets can range from online sales records to captured video feeds to ultra-high-resolution images of tumors. AI inference is computationally much lighter, but can run indefinitely as a service, drawing a lot of power when hit with a large number of requests. Think of a facial-recognition application for building security: it runs continuously, but would stress the compute and storage resources at 8:00 am and again at 5:00 pm as people come and go from work.

However, getting a good handle on power usage in AI is difficult. Energy consumption is not among the standard metrics tracked by job schedulers; while such tracking can be set up, it is complicated and vendor-dependent. This means that most users are flying blind when it comes to energy usage.

To map out AI energy requirements, Dr. Miro Hodak led a team of Lenovo engineers and researchers that looked at the energy cost of an often-used AI workload. The study, "Towards Power Efficiency in Deep Learning on Data Center Hardware" (registration required), was recently presented at the 2019 IEEE International Conference on Big Data and published in the conference proceedings. It examines the energy cost of training the ResNet50 neural net on the ImageNet dataset of more than 1.3 million images, using a Lenovo ThinkSystem SR670 server equipped with four Nvidia V100 GPUs. AC data from the server's power supply indicates that 6.3 kWh of energy, enough to power an average home for six hours, is needed to fully train this AI model. In practice, training runs like these are repeated multiple times to tune the resulting models, so the actual energy costs are several times higher.

The study breaks down the total energy into its components, as shown in Fig. 1. As expected, the bulk of the energy is consumed by the GPUs. However, given that the GPUs handle all of the computationally intensive parts, their 65% share of the energy is lower than might be expected. This shows that simplistic estimates of AI energy costs using only GPU power are inaccurate and miss significant contributions from the rest of the system. Besides the GPUs, the CPU and memory account for almost a quarter of the energy use, and 9% of the energy is spent on AC-to-DC power conversion (in line with the 80 PLUS Platinum certification of the SR670's PSUs).
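As a quick sanity check on those figures, the per-component energies implied by the study's percentages can be computed directly. A minimal sketch (the 25% CPU-plus-memory share below is an approximation of the article's "almost a quarter"):

```python
# Back-of-the-envelope breakdown of the reported 6.3 kWh training run.
# Shares come from the article; 0.25 approximates "almost a quarter".
total_kwh = 6.3
shares = {"GPUs": 0.65, "CPU + memory": 0.25, "AC-to-DC conversion": 0.09}

# Energy attributed to each component, in kWh.
component_kwh = {name: share * total_kwh for name, share in shares.items()}
# GPUs ~4.10 kWh, CPU + memory ~1.58 kWh, conversion ~0.57 kWh

accounted = sum(component_kwh.values())  # ~6.24 kWh of the 6.3 kWh total
```

The shares sum to roughly 99%, with the small remainder attributable to other server components and rounding in the published figures.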

The study also investigated ways to decrease the energy cost through system tuning, without changing the AI workload. We found that two types of system settings make the most difference: UEFI settings and OS-level GPU settings. ThinkSystem servers provide four UEFI running modes: Favor Performance, Favor Energy, Maximum Performance and Minimum Power. As shown in Table 1, the last option is the best and provides up to 5% energy savings. On the GPU side, 16% of the energy can be saved by capping the V100 frequency at 1005 MHz, as shown in Figure 2. Taken together, these tunings decreased energy usage by 22% while increasing runtime by 14%. Alternatively, if that runtime cost is unacceptable, a second set of tunings saves 18% of the energy while increasing runtime by only 4%. This demonstrates that there is plenty of room on the system side for improvements in energy efficiency.
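Since energy is average power multiplied by runtime, the two tuning options quoted above can also be compared on power draw. A small sketch of that arithmetic:

```python
# Energy = average power x runtime, so a tuned run's average power
# relative to baseline is (1 - energy_saving) / (1 + runtime_increase).
def avg_power_ratio(energy_saving: float, runtime_increase: float) -> float:
    return (1 - energy_saving) / (1 + runtime_increase)

aggressive = avg_power_ratio(0.22, 0.14)  # ~0.68: roughly 32% lower average power
moderate = avg_power_ratio(0.18, 0.04)    # ~0.79: roughly 21% lower average power
```

Both settings cut average power by more than they cut energy, because the saved energy is spread over a longer run.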

Energy usage in HPC has been a visible challenge for over a decade, and Lenovo has long been a leader in energy-efficient computing, whether through our innovative Neptune liquid-cooled system designs or through Energy-Aware Runtime (EAR) software, a technology developed in collaboration with Barcelona Supercomputing Center (BSC). EAR analyzes user applications to find the optimum CPU frequencies to run them at. For now, EAR is CPU-only, but investigations into extending it to GPUs are ongoing. The results of our study show that this is a very promising way to bring energy savings to both HPC and AI.

Enterprises are not used to grappling with the large power profiles that AI requires in the way HPC users have become accustomed to, and scaling out AI solutions will only make the problem more acute. The industry is beginning to respond: MLPerf, currently the leading collaborative project for AI performance evaluation, is preparing new specifications for power efficiency. For now, these are limited to inference workloads and will most likely be voluntary, but they represent a step in the right direction.

So, in order to enjoy those precise weather forecasts and self-driving cars, we'll need to solve the power challenges they create. Today, as the power profile of CPUs and GPUs surges ever upward, enterprise customers face a trade-off between three factors: system density (the number of servers in a rack), performance and energy efficiency. Indeed, many enterprises are accustomed to filling up rack after rack with low-cost, adequately performing systems that have little to no impact on the electric bill. Unfortunately, until the power dilemma is solved, those users must be content with choosing only two of those three factors.


An AI future set to take over post-Covid world – The Indian Express

Updated: May 18, 2020 10:03:39 pm

Written by Seuj Saikia

Rabindranath Tagore once said, "Faith is the bird that feels the light when the dawn is still dark." The darkness that looms over the world at this moment is the curse of the COVID-19 pandemic, while the bird of human freedom finds itself caged under lockdown, unable to fly. Enthused by the beacon of hope, human beings will soon start picking up the pieces of a shared future for humanity, but perhaps only to find a new, unfamiliar world order with far-reaching consequences that transcend society, politics and economy.

Crucially, a technology that had till now been crawling, or at best walking slowly, will start sprinting. In fact, a paradigm shift in the economic relationships of mankind is going to be witnessed in the form of accelerated adoption of artificial intelligence (AI) technologies in the modes of production of goods and services. The fourth Industrial Revolution, as the AI era is referred to, was already underway before the pandemic, built on the backward linkages of cloud computing and big data. However, the imperative of continued social distancing has made an AI-driven economic world order today's reality.

Setting aside the oft-discussed prophecies of a robot-human tussle, even if we simply focus on the present pandemic, we see millions of students accessing their education through ed-tech apps, mothers buying groceries on apps and making cashless payments through fintech platforms, and employees attending video conferences on yet other apps. None of this is a new phenomenon, but the scale at which it is happening is unparalleled in human history. The alternate universe of AI, machine learning, cloud computing, big data, 5G and automation is getting closer to us every day. And so is a clash between humans (labour) and robots (plant and machinery).

This clash might very well be fuelled by automation. Any Luddite will recall the misadventures of the 19th-century textile mills. However, the automation that we are talking about now is founded on the citadel of artificially intelligent robots. Eventually, this might merge the two factors of production into one, thereby making labour irrelevant. As factories around the world start to reboot post COVID-19, there will be hard realities to contend with: Shortage of migrant labourers in the entire gamut of the supply chain, variations of social distancing induced by the fears of a second virus wave and the overall health concerns of humans at work. All this combined could end up sparking the fire of automation, resulting in subsequent job losses and possible reallocation/reskilling of human resources.

In this context, a potential counter to such employment upheavals is the idea of cash transfers to the population in the form of Universal Basic Income (UBI). As drastic changes in the production processes lead to a more cost-effective and efficient modern industrial landscape, the surplus revenue that is subsequently earned by the state would act as a major source of funds required by the government to run UBI. Variants of basic income transfer schemes have existed for a long time and have been deployed to unprecedented levels during this pandemic. Keynesian macroeconomic measures are increasingly being seen as the antidote to the bedridden economies around the world, suffering from near-recession due to the sudden ban on economic activities. Governments would have to be innovative enough to pump liquidity into the system to boost demand without harming the fiscal discipline. But what separates UBI from all these is its universality, while others remain targeted.

This new economic world order would widen the cracks of existing geopolitical fault lines, particularly between the US and China, two behemoths of the AI realm. Datanomics has taken such a high place in the valuation spectrum that the most valued companies in the world are tech giants like Apple, Google, Facebook, Alibaba and Tencent. Interestingly, they are also the ones at the forefront of AI innovation. Data has become the new oil. What transports data are not pipelines but fibre-optic cables and associated communication technologies. The ongoing fight over the introduction of 5G technology, central to automation and remote command-and-control architecture, might enter a new phase of hostility, especially after the controversial role played by the secretive Chinese state in the COVID-19 crisis.

The issues affecting common citizens, such as privacy, national security and rising inequality, will take on new dimensions. It is pertinent to mention that AI is not all bad: as an imperative change that human civilisation is going to experience, it has its advantages. Take the COVID-19 crisis as an example. Amidst all the chaos, big data has enabled countries to do contact tracing effectively, and 3D printers produced much-needed PPE at the local level in the absence of the usual supply chains. That is why the World Economic Forum (WEF) argues that agility, scalability and automation will be the buzzwords of this new era of business, and that those who have these capabilities will be the winners.

But there are losers in this, too. In this case, the developing world would be the biggest loser. The problem of inequality, which has already reached epic proportions, could be further worsened in an AI-driven economic order. The need of the hour is to prepare ourselves and develop strategies that would mitigate such risks and avert any impending humanitarian disaster. To do so, in the words of computer scientist and entrepreneur Kai-Fu Lee, the author of AI Superpowers, we have to give centrality to our heart and focus on the care economy which is largely unaccounted for in the national narrative.

(The writer is assistant commissioner of income tax, IRS. Views are personal)


A New Way To Think About Artificial Intelligence With This ETF – MarketWatch

Among the myriad thematic exchange traded funds investors have to consider, artificial intelligence products are numerous and some are catching on with investors.

Count the ROBO Global Artificial Intelligence ETF THNQ, +2.46%, as the latest member of the artificial intelligence ETF fray. THNQ, which debuted earlier this week, comes from a good gene pool, as its stablemate, the Robo Global Robotics and Automation Index ETF ROBO, -0.37%, was the original robotics ETF and remains one of the largest.

That's relevant because artificial intelligence and robotics are themes that frequently intersect with each other. Home to 72 stocks, the new THNQ follows the ROBO Global Artificial Intelligence Index.

Adding to the case for A.I., even with a new product such as THNQ, is that the technology has hundreds, if not thousands, of applications supporting its growth.

Companies developing AV technology are mainly relying on machine learning or deep learning, or both, according to IHS Markit. A major difference between the two is that, while deep learning can automatically discover the features to be used for classification in unsupervised exercises, machine learning requires these features to be labeled manually under more rigid rulesets. In contrast to machine learning, deep learning requires significant computing power and training data to deliver more accurate results.

Like its stablemate ROBO, THNQ offers wide reach, with exposure to 11 sub-groups. These include big data, cloud computing, cognitive computing, e-commerce and other consumer angles, and factory automation, among others. Of course, semiconductors are part of the THNQ fold, too.

The exploding use of AI is ushering in a new era of semiconductor architectures and computing platforms that can handle the accelerated processing requirements of an AI-driven world, according to ROBO Global. To tackle the challenge, semiconductor companies are creating new, more advanced AI chip engines using a whole new range of materials, equipment, and design methodologies.

While THNQ is a new ETF, investors may do well not to dwell on its newness and instead focus on the fact that the AI boom is in its nascent stages.

Historically, the stock market tends to under-appreciate the scale of opportunity enjoyed by leading providers of new technologies during this phase of development, notes THNQ's issuer. This fact creates a remarkable opportunity for investors who understand the scope of the AI revolution, and who take action at a time when AI is disrupting industry as we know it and forcing us to rethink the world around us.

The new ETF charges 0.68% per year, or $68 on a $10,000 investment. That's in line with rival funds.
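The fee arithmetic generalizes to any position size; a one-line sketch:

```python
# Annual cost of an ETF's expense ratio: position size times the ratio.
def annual_fee(investment: float, expense_ratio: float) -> float:
    return investment * expense_ratio

annual_fee(10_000, 0.0068)  # about $68 per year, as quoted for THNQ
```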

2020 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.


Artificial intelligence-based imaging reconstruction may lead to incorrect diagnoses, experts caution – Radiology Business

Artificial intelligence-based techniques, used to reconstruct medical images, may actually be leading to incorrect diagnoses.

That's according to the results of a new investigation led by experts at the University of Cambridge. Scientists there devised a series of tests to assess such imaging reconstruction and discovered numerous artefacts and other errors, according to their study, published May 11 in the Proceedings of the National Academy of Sciences.

This issue seemed to persist across different types of AI, they noted, and may not be easily remedied.

"There's been a lot of enthusiasm about AI in medical imaging, and it may well have the potential to revolutionize modern medicine; however, there are potential pitfalls that must not be ignored," co-author Anders Hansen, PhD, from Cambridge's Department of Applied Mathematics and Theoretical Physics, said in a statement. "We've found that AI techniques are highly unstable in medical imaging, so that small changes in the input may result in big changes in the output."

To reach their conclusions, Hansen and coinvestigators from Norway, Portugal, Canada, and the United Kingdom used several assessments to pinpoint flaws in AI algorithms. They targeted CT, MR, and nuclear magnetic resonance imaging, and tested the algorithms for instabilities tied to movement, to small structural changes, and to the number of samples.


Artificial Intelligence in the Covid Frontline – Morningstar

From chatbots to Amazon Alexa, artificial intelligence has become a normal part of everyday life that we now take for granted. But now in the middle of the coronavirus pandemic, it is being used to save lives.

AI, for example, is at the heart of the NHS track-and-trace app, which is being trialled on the Isle of Wight before a nationwide rollout. Users of the service input their symptoms into a smartphone, then an algorithm looks at who they've had contact with and alerts them to the potential risks of catching or spreading the virus.
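The risk logic described above can be sketched in a few lines. The scoring rule, thresholds, and user IDs below are purely illustrative and are not how the NHS app actually works:

```python
from datetime import date, timedelta

def exposure_risk(contacts, symptomatic_ids, window_days=14):
    """Score a user's exposure risk from their recent contact log.

    `contacts` is a list of (other_user_id, contact_date) pairs; each
    contact with a symptomatic user inside the window adds to the score.
    """
    today = date(2020, 5, 18)  # fixed "today" so the sketch is deterministic
    cutoff = today - timedelta(days=window_days)
    score = sum(
        1
        for other_id, when in contacts
        if other_id in symptomatic_ids and when >= cutoff
    )
    return "high" if score >= 3 else "low" if score == 0 else "medium"

contacts = [("u1", date(2020, 5, 10)), ("u2", date(2020, 5, 16)),
            ("u3", date(2020, 4, 1))]
# u1 counts (symptomatic, recent); u3 is symptomatic but outside the window.
print(exposure_risk(contacts, {"u1", "u3"}))  # medium
```

A real app would of course derive contacts from Bluetooth proximity rather than an explicit log, and the epidemiological weighting is far more involved.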

For Chris Ford, manager of the Smith & Williamson Artificial Intelligence fund, this is a pivotal moment for AI, especially as we are now willing to share our data with the government for the greater good. He argues that the Covid-19 crisis has accelerated the cultural acceptance of AI's role in our lives, from the sudden and widespread use of telemedicine to the use of computers for speedy diagnosis and the search for a vaccine. "There's a renewed focus and vigour that has been absent before in how we approach AI," he says.

But there are misunderstandings about what AI is. Defined by Stanford University as the science and engineering of making intelligent machines, it is now seeping into so many aspects of our lives that a complete definition is hard to pin down. There is also confusion over whether it is good for us, with negative perceptions of "robots taking human jobs" balanced by medical breakthroughs such as the discovery of new antibiotics and robotic surgery.

Robotics and automation are boom areas of AI, with the iShares Automation and Robotics ETF (RBOT) holding over $2 billion in assets, but they are not the only game in town, says S&W's Ford. "Not all robotics have artificial intelligence, and not all AI platforms are robotic," he says. For investors it's been relatively easy to ride the trend by backing big tech firms like Microsoft (MSFT), Amazon (AMZN), Apple (AAPL) and Google parent company Alphabet (GOOGL), which have invested billions in AI in its many forms.

Many of the pioneers in AI are not on the radar of retail investors, but their work will have a profound impact on our lives. One such area is autonomous and semi-autonomous vehicles, which Google and Tesla (TSLA) are backing to be the next game-changing technology. With 1.3 million people losing their lives in traffic accidents worldwide every year, 90% of which are down to human error, there is clearly scope for technology to drive better than us. AI has come a long way in recent years in the field of image recognition, which teaches cars how to assess and react to certain hazards.

Image recognition was arguably the most impactful first-wave application of AI technology, argues Xuesong Zhao, manager of the Polar Capital Automation and Artificial Intelligence fund. Tom Riley, co-manager of the Neutral-rated Axa Framlington Robotech fund, agrees, saying that vision systems have come on leaps and bounds recently. He holds Japan's Keyence (6861), which develops and manufactures automation sensors and vision systems used in the automotive industry. As the dominant player in the machine vision market, the company has been assigned a narrow moat by Morningstar analysts.

Modern cars already have some element of AI, particularly in hazard awareness and automatic parking, but Riley says drivers are not yet ready for the full hands-off, eyes-off autonomous driving experience. Still, S&W's Ford argues that fully autonomous vehicles may become mainstream sooner than we think, in five to 10 years' time rather than 20.

Some of AI's most high-profile wins to date have been in the medical sphere, and that is where many fund managers are focused. Robots are now routinely used alongside surgeons, and Nasdaq-listed Intuitive Surgical (ISRG) makes the Da Vinci robots that perform millions of surgical operations every year. The company is the fourth largest holding in the Axa fund. Axa's Riley has positioned around 20% of the fund in the healthcare sector because he thinks it provides useful diversification away from the tech giants.

Ford also owns US firm iRhythm (IRTC), which uses an AI platform to warn people that they are at risk of cardiac arrhythmia, irregular heart rhythms that can potentially be fatal. He cites this as an example of AI's strength in capturing large amounts of real-time data and improving how it interprets the information.

Away from robotic surgery and self-driving cars, where else do fund managers see future opportunities? Polar Capital's Xuesong thinks natural language processing (NLP) is likely to be the next growth area for AI, although not without its challenges. He thinks that teaching computers to read and analyse documents would be truly transformational in many industries. He cites legal, financial and insurance companies as some of the biggest beneficiaries of this trend in the coming years. For example, complex fraud trials often involve millions of documents; having a computer sift through them would speed up the legal proceedings and keep costs down.

Ford, meanwhile, thinks industries such as mining and oil, which have so far been late adopters of AI, could start to change, and also expects greater use of AI in education. That trend could be accelerated by the Covid-19 crisis, where schools and universities have been forced to go virtual in the lockdown. AI, then, could be a natural next step for students to work semi-independently with tailored curriculums.

"AI is only as good as the data on which it stands," Ford says. And with younger people less reluctant to share their data than older tech users, AI is only going to improve in the coming years.



Artificial Intelligence in Cancer: How Is It Used in Practice? – Cancer Therapy Advisor

Artificial intelligence (AI) comprises a type of computer science that develops entities, such as software programs, that can intelligently perform tasks or make decisions.1 The development and use of AI in health care is not new; the first ideas that created the foundation of AI were documented in 1956, and automated clinical tools that were developed between the 1970s and 1990s are now in routine use. These tools, such as the automated interpretation of electrocardiograms, may seem simple, but are considered AI.

Today, AI is being harnessed to help with big problems in medicine, such as processing and interpreting large amounts of data in research and in clinical settings, including reading imaging or results from broad genetic-testing panels.1 In oncology, AI is not yet being used broadly, but its use is being studied in several areas.

Screening and Diagnosis

There are several AI platforms approved by the US Food and Drug Administration (FDA) to assist in the evaluation of medical imaging, including for identifying suspicious lesions that may be cancer.2 Some platforms help to visualize and manipulate images from magnetic resonance imaging (MRI) or computed tomography (CT) and flag suspicious areas. For example, there are several AI platforms for evaluating mammography images that, in some cases, help to diagnose breast abnormalities. There is also an AI platform that helps to analyze lung nodules in individuals who are being screened for lung cancer.1,3

AI is also being studied in other areas of cancer screening and diagnosis. In dermatology, skin lesions are biopsied based on a dermatologist's or primary care provider's assessment of the appearance of the lesion.1 Studies are evaluating the use of AI to either supplement or replace the work of the clinician, with the ultimate goal of making the overall process more efficient.

Big Data

As technology has improved, we now have the ability to create a vast amount of data. This highlights a challenge: individuals have limited capabilities to assess large chunks of data and identify meaningful patterns. AI is being developed and used to help mine these data for important findings, process and condense the information the data represent, and look for meaningful patterns.
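As a toy illustration of mining a record set for patterns, the sketch below counts which features co-occur with a label of interest. The variant names, records, and outcomes are invented for the example:

```python
from collections import Counter

# Synthetic "large data set": each record lists gene variants and an outcome.
records = [
    {"variants": {"BRCA1", "TP53"}, "outcome": "responder"},
    {"variants": {"TP53"}, "outcome": "non-responder"},
    {"variants": {"BRCA1"}, "outcome": "responder"},
    {"variants": {"KRAS"}, "outcome": "non-responder"},
]

# Count how often each variant appears among responders.
responder_counts = Counter(
    v for r in records if r["outcome"] == "responder" for v in r["variants"]
)
print(responder_counts.most_common(1))  # [('BRCA1', 2)]
```

Real research tools apply the same counting-and-ranking idea at vastly larger scale, with statistical controls for confounding that a sketch like this omits.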

Such tools would be useful in the research setting, as scientists look for novel targets for new anticancer therapies or to further their understanding of underlying disease processes. AI would also be useful in the clinical setting, especially now that electronic health records are being used and real-world data are being generated from patients.


Patent Analytics Market to Reach USD 1,668.4 Million by 2027; Integration of Machine Learning and Artificial Intelligence to Spur Business…

Pune, May 18, 2020 (GLOBE NEWSWIRE) -- The global patent analytics market size is predicted to reach USD 1,668.4 million by 2027, exhibiting a CAGR of 12.4% during the forecast period. The increasing advancement and integration of machine learning, artificial intelligence, and neural networks by enterprises will have a positive impact on the market during the forecast period. Moreover, the growing need of companies to protect intellectual assets will bolster healthy growth of the market in the forthcoming years, states Fortune Business Insights in a report titled "Patent Analytics Market Size, Share and Industry Analysis, By Component (Solutions and Services), By Services (Patent Landscapes/White Space Analysis, Patent Strategy and Management, Patent Valuation, Patent Support, Patent Analytics, and Others), By Enterprise Size (Large Enterprises, Small & Medium Enterprises), By Industry (IT and Telecommunications, Healthcare, Banking, Financial Services and Insurance (BFSI), Automotive, Media and Entertainment, Food and Beverages, and Others), and Regional Forecast, 2020-2027". The market size stood at USD 657.9 million in 2019. The rapid adoption of the Intellectual Property (IP) system to retain an innovation-based advantage in business will aid the expansion of the market.

Get Sample PDF Brochure: https://www.fortunebusinessinsights.com/enquiry/request-sample-pdf/patent-analytics-market-102774

An Overview of the Impact of COVID-19 on this Market:

The emergence of COVID-19 has brought the world to a standstill. We understand that this health crisis has brought an unprecedented impact on businesses across industries. However, this too shall pass. Rising support from governments and several companies can help in the fight against this highly contagious disease. There are some industries that are struggling and some are thriving. Overall, almost every sector is anticipated to be impacted by the pandemic.

We are making continuous efforts to help your business sustain and grow during the COVID-19 pandemic. Based on our experience and expertise, we will offer you an impact analysis of the coronavirus outbreak across industries to help you prepare for the future.

Click here to get the short-term and long-term impact of COVID-19 on this market. Please visit: https://www.fortunebusinessinsights.com/patent-analytics-market-102774

Market Driver:

Integration of Artificial Intelligence to Improve Market Prospects

The implementation of artificial intelligence technology for analyzing patent data will support the expansion of the market. AI-based semantic search uses an artificial neural network to enhance patent discovery by improving accuracy and efficiency. For instance, in February 2018, PatSeer announced the unveiling of ReleSense, an AI-driven NLP engine. The engine utilizes more than 12 million semantic rules to learn from publicly available patents, scientific journals, clinical trials, and associated data sources. ReleSense, with its wide range of AI-driven capabilities, offers everything from classification and search via APIs to predictive analytics for apt IP solutions. The growing application of AI for domain-specific analytics will augur well for the market in the forthcoming years. Furthermore, growing government initiatives to promote patent filing will boost the patent analytics market share during the forecast period. For instance, the Government of India introduced a new scheme named Innovative/Creative India to make people aware of patents and IP laws and to support patent analytics. In addition, the growing preference for language models and neural network intelligence for accurate, deep, and complete data insights will encourage the market.

Speak to Analyst: https://www.fortunebusinessinsights.com/enquiry/speak-to-analyst/patent-analytics-market-102774

Regional Analysis:

Implementation of Advanced Technologies to Promote Growth in North America

The market in North America stood at USD 209.2 million and is expected to grow rapidly during the forecast period owing to the presence of major companies in the US such as IBM Corporation and Amazon.com, Inc. The implementation of advanced technologies, including IoT, big data, and artificial intelligence, by major companies will aid growth in the region.

Considering this, the U.S. is expected to showcase higher growth in patent filing. As per the World Intellectual Property Organization (WIPO), in 2018 the U.S. filed 230,085 patent applications across several domains. Asia Pacific is predicted to witness tremendous growth during the forecast period. The growth is attributed to China, which accounts for a major share of global patent filings. According to WIPO, the intellectual property (IP) office in China accounted for a 46.6% global share of patent registrations in 2018. Growing government initiatives concerning patents and IP laws in India will significantly enable speedy growth in Asia Pacific.

Key Development:

March 2018: Ipan GmbH announced its collaboration with Patentsight, Corsearch, and Uppdragshuset to introduce an open IP platform named the IP-x-change platform. The platform offers prior-art search, automatic data verification tools, and smart docketing tools integrated in real time to optimize IP management.

List of Key Companies Operating in the Patent Analytics Market:

Quick Buy Patent Analytics Market Research Report: https://www.fortunebusinessinsights.com/checkout-page/102774


Get your Customized Research Report: https://www.fortunebusinessinsights.com/enquiry/customization/patent-analytics-market-102774

Have a Look at Related Research Insights:

Intellectual Property Software Market Size, Share and Global Trend By Deployment (On-premises & Cloud-based solutions), By Services (Development & Implementation Services, Consulting Services, Maintenance & Support Services), By Applications (Patent Management, Trademark Management and others), By Industry Vertical (Healthcare, Electronics and others) and Geography Forecast till 2025

About Us:

Fortune Business Insights offers expert corporate analysis and accurate data, helping organizations of all sizes make timely decisions. We tailor innovative solutions for our clients, assisting them in addressing challenges distinct to their businesses. Our goal is to empower our clients with holistic market intelligence, providing a granular overview of the market they are operating in.

Our reports contain a unique mix of tangible insights and qualitative analysis to help companies achieve sustainable growth. Our team of experienced analysts and consultants use industry-leading research tools and techniques to compile comprehensive market studies, interspersed with relevant data.

At Fortune Business Insights, we aim at highlighting the most lucrative growth opportunities for our clients. We therefore offer recommendations, making it easier for them to navigate through technological and market-related changes. Our consulting services are designed to help organizations identify hidden opportunities and understand prevailing competitive challenges.

Contact Us:
Fortune Business Insights Pvt. Ltd.
308, Supreme Headquarters,
Survey No. 36, Baner,
Pune-Bangalore Highway,
Pune-411045, Maharashtra, India.
Phone: US: +1-424-253-0390 | UK: +44-2071-939123 | APAC: +91-744-740-1245
Email: sales@fortunebusinessinsights.com
Fortune Business Insights | LinkedIn | Twitter | Blogs

Read Press Release: https://www.fortunebusinessinsights.com/press-release/patent-analytics-market-9910


Five Important Subsets of Artificial Intelligence – Analytics Insight

As a simple definition, Artificial Intelligence is the ability of a machine or computer device to imitate human intelligence (cognitive processes), learn from experience, adapt to the most recent data, and perform human-like activities.

Artificial Intelligence executes tasks intelligently, yielding enormous accuracy, flexibility, and productivity for the entire system. Tech chiefs are looking for ways to implement artificial intelligence technologies in their organizations to reduce friction and add value; for example, AI is firmly established in the banking and media industries. There is a wide array of techniques within the space of artificial intelligence, such as linguistics, bias, vision, robotics, planning, natural language processing, and decision science. Let us learn about some of the major subfields of AI in depth.

ML is perhaps the subset of AI most applicable to the average enterprise today. As explained in the "Executive's guide to real-world AI", our recent research report conducted by Harvard Business Review Analytic Services, ML is a mature technology that has been around for many years.

ML is a part of AI that enables computers to self-learn from data and apply that learning without human intervention. When confronting a situation in which a solution is hidden in a large data set, ML is a go-to. "ML excels at processing that data, extracting patterns from it in a fraction of the time a human would take, and delivering otherwise inaccessible insight," says Ingo Mierswa, founder and president of the data science platform RapidMiner. ML powers risk analysis, fraud detection, and portfolio management in financial services; GPS-based predictions in travel; and targeted marketing campaigns, to list a few examples.
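The pattern-extraction idea Mierswa describes can be sketched with one of the simplest learning methods, a nearest-centroid classifier. The features, labels, and data below are entirely synthetic and stand in for real fraud-detection signals:

```python
def centroid(points):
    """Average a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(labeled):
    """labeled: list of (features, label); learn one centroid per label."""
    by_label = {}
    for feats, label in labeled:
        by_label.setdefault(label, []).append(feats)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, feats):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], feats))

# Features: (amount in $1000s, transactions per hour) -- made-up data.
model = train([((0.1, 1), "ok"), ((0.2, 2), "ok"),
               ((9.0, 30), "fraud"), ((8.0, 25), "fraud")])
print(predict(model, (7.5, 20)))  # fraud
```

Production fraud systems use far richer models, but the workflow is the same: fit parameters to labeled history, then classify new data without human intervention.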

Joining cognitive science and machines to perform tasks, the neural network is a part of artificial intelligence that draws on neuroscience (the branch of biology concerned with the nerves and nervous system of the human brain). A neural network imitates the human mind, where the brain contains a vast number of neurons, by encoding neuron-like units into a system or machine.

Neural networks and machine learning combined tackle many intricate tasks with ease, and a large number of these tasks can be automated. NLTK is the holy-grail library used in NLP; master all the modules in it and you'll be a professional text analyzer in no time. Other Python libraries include pandas, NumPy, TextBlob, matplotlib, and wordcloud.

An explainer article by AI software company Pathmind offers a useful analogy: think of a set of Russian dolls nested within one another. Deep learning is a subset of machine learning, and machine learning is a subset of AI, which is an umbrella term for any computer program that does something smart.

Deep learning uses so-called neural networks, which learn by processing the labeled data supplied during training, using this answer key to work out which attributes of the input are needed to construct the correct output, according to one explanation from DeepAI. Once a sufficient number of examples has been processed, the neural network can begin to process new, unseen inputs and successfully return accurate results.
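The train-on-labeled-examples loop described here can be illustrated with a single artificial neuron rather than a deep network. This toy sketch learns the logical AND function from four labeled examples via gradient descent (learning rate and epoch count are arbitrary choices):

```python
import math

# Labeled training data: ((x1, x2), target) for the AND function.
data = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 0), ((1.0, 1.0), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.5

for _ in range(2000):
    for (x1, x2), target in data:
        # Forward pass: weighted sum through a sigmoid activation.
        out = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        # Backward pass: cross-entropy gradient for a sigmoid unit.
        err = out - target
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# After training, the rounded outputs reproduce the answer key.
preds = [round(1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b))))
         for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1]
```

A deep network stacks many such units in layers and learns the weights of all of them at once, but the "process labeled examples, nudge weights toward the correct output" loop is the same.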

Deep learning powers product and content recommendations for Amazon and Netflix. It works in the background of Google's voice- and image-recognition algorithms. Its ability to break down large amounts of high-dimensional data makes deep learning particularly well suited to supercharging preventive maintenance systems.

Robotics has risen as a very hot field of artificial intelligence. A fascinating field of research and development, it mostly focuses on designing and building robots. Robotics is an interdisciplinary field of science and engineering that combines mechanical engineering, electrical engineering, computer science, and many other disciplines. It covers the design, manufacture, operation, and use of robots, and deals with the computer systems for their control, intelligent outputs, and information processing.

Robots are often deployed to carry out tasks that may be difficult for people to perform consistently. Major robotics applications include assembly lines for automobile manufacturing and moving large objects in space for NASA. Artificial intelligence researchers are also creating robots that use machine learning to interact at social levels.

Have you ever tried learning another language by labeling the items in your home with the local-language and translated words? It seems to be an effective vocabulary builder because you see the words again and again. The same is true of computers powered by computer vision. They learn by labeling or classifying the various objects they come across and grasping their meaning, but at a much faster pace than people (like those robots in science-fiction movies).

The OpenCV tool enables the processing of images by applying mathematical operations to them. Recall that elective subject from engineering days called Fuzzy Logic? That approach is used in image processing, making it much simpler for computer vision specialists to fuzzify or blur readings that can't be placed in a crisp Yes/No or True/False classification. OpenTLA is used for video tracking, the procedure of locating a moving object (or objects) using a camera's video stream.
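For a sense of what a crisp (non-fuzzy) classification of pixels looks like, here is binary thresholding written out in plain Python on a toy pixel grid; OpenCV's `cv2.threshold` performs the equivalent operation on real images:

```python
# A grayscale "image" as a nested list of 0-255 intensities (toy data).
image = [
    [ 10,  40, 200],
    [ 30, 220, 250],
    [  5,  60, 180],
]

def threshold(img, cutoff=128):
    """Crisp binarisation: every pixel becomes exactly 0 or 255,
    depending on which side of the cutoff it falls."""
    return [[255 if px >= cutoff else 0 for px in row] for row in img]

print(threshold(image))
# [[0, 0, 255], [0, 255, 255], [0, 0, 255]]
```

A fuzzy approach would instead assign each pixel a degree of membership between 0 and 1 rather than forcing this hard Yes/No split.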



Ethical artificial intelligence: Could Switzerland take the lead? – swissinfo.ch


The debate on contact-tracing highlights the urgency of tackling unregulated technologies like artificial intelligence (AI). With a strong democracy and reputation for first-class research, Switzerland has the potential to be at the forefront of shaping ethical AI.

"Artificial intelligence is either the best or the worst thing ever to happen to humanity," the prominent scientist Stephen Hawking, who died in 2018, once said.

An expert group set up by the European Commission presented a draft of ethics guidelines for trustworthy AI at the end of 2018, but as of yet there is no agreed global strategy for defining common principles, which would include rules on transparency, privacy protection, fairness, and justice.

Thanks to its unique features (a strong democracy, its position of neutrality, and world-class research), Switzerland is well positioned to play a leading role in shaping a future of AI that adheres to ethical standards. The Swiss government recognizes the importance of AI in moving the country forward and, with that in mind, has been involved in discussions at the international level.

What is AI?

There is no single accepted definition of Artificial Intelligence. Often, it is divided into two categories: Artificial General Intelligence (AGI), which strives to closely replicate human behaviour, and Narrow Artificial Intelligence, which focuses on single tasks, such as face recognition, automated translation and content recommendations, such as videos on YouTube.

However, on the domestic front the debate has just begun, albeit in earnest, as Switzerland and other nations are confronted with privacy concerns surrounding the use of new technologies, like contact-tracing apps, whether they use AI or not, to stop the spread of Covid-19.

The European initiative, the Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) initiative, advocated a centralized data approach that raised concerns about its transparency and governance. However, it was derailed when a number of nations, including Switzerland, decided in favour of a decentralized and privacy-enhancing system called DP-3T (Decentralized Privacy-Preserving Proximity Tracing). The final straw for PEPP-PT came when Germany decided to exit as well.
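The core of the decentralized approach is that a phone keeps a secret key to itself, broadcasts only short-lived identifiers derived from it, and does exposure matching locally. The sketch below illustrates that idea only; it is heavily simplified relative to the actual DP-3T construction (which uses a proper PRF/PRG key schedule, not raw SHA-256):

```python
import hashlib

def next_day_key(key: bytes) -> bytes:
    # Each day's key is a one-way hash of the previous day's, so publishing
    # one day key does not reveal the keys for earlier days.
    return hashlib.sha256(key).digest()

def ephemeral_ids(day_key: bytes, count: int):
    # Derive short rotating broadcast IDs from the day key.
    return [hashlib.sha256(day_key + bytes([i])).digest()[:16]
            for i in range(count)]

sk0 = b"\x00" * 32          # illustrative secret key, not a real one
sk1 = next_day_key(sk0)
beacons_heard = set(ephemeral_ids(sk1, 4))  # IDs our phone overheard nearby

# Later, a diagnosed user uploads sk1; every phone re-derives that user's
# ephemeral IDs locally and checks for overlap with what it heard.
matches = beacons_heard & set(ephemeral_ids(sk1, 4))
print(len(matches))  # 4
```

The privacy property is that the server only ever sees day keys of users who choose to report a diagnosis; the contact graph itself never leaves the phones.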

"Europe has engaged in a vigorous and lively debate over the merits of the centralized and decentralized approach to proximity tracing. This debate has been very beneficial as it made the issues aware to a broad population and demonstrated the high level of concern with which these apps are being designed and constructed. People will use the contact-tracing app only if they feel that they don't have to sacrifice their privacy to get out of isolation," said Jim Larus. Larus is Dean of the School of Computer and Communication Sciences (IC) at EPFL Lausanne and a member of the group that initially started the DP3T effort at EPFL.

According to a recent survey, nearly two-thirds of Swiss citizens said they were in favour of contact tracing. The DP-3T app is currently being tested on a trial basis while waiting for the definition of the legal conditions for its widespread use, as decided by the Swiss parliament. However, the debate highlights the urgency of answering questions surrounding the ethics and governance of unregulated technologies.


The "Swiss way"

Artificial intelligence was included for the first time in the Swiss government's strategy to create the right conditions to accelerate the digital transformation of society.

Last December, a working group delivered its report, called "Challenges of Artificial Intelligence", to the Federal Council (executive body). The report stated that Switzerland was ready to exploit the potential of AI, but the authors decided not to specifically highlight the ethical issues and social dimension of AI, focusing instead on various AI use cases and the challenges arising from them.

"In Switzerland, the central government does not impose an overarching ethical vision for AI. It would be incompatible with our democratic traditions if the government prescribed this top-down," Daniel Egloff, Head of Innovation of the State Secretariat for Education, Research and Innovation (SERI) told swissinfo.ch. Egloff added that absolute ethical principles are difficult to establish since they could change from one technological context to another. "An ethical vision for AI is emerging in consultations among national and international stakeholders, including the public, and the government is taking an active role in this debate," he added.

Seen in a larger context, the government insists it is very involved internationally when it comes to discussions on ethics and human rights. Ambassador Thomas Schneider, Director of International Affairs at the Federal Office of Communications (OFCOM), told swissinfo.ch that Switzerland in this regard "is one of the most active countries in the Council of Europe, in the United Nations and other fora". He also added that it's OFCOM's and the Foreign Ministry's ambition to turn Geneva into a global centre of technology governance.

Just another buzzword?

How is it possible, then, to define what's ethical or unethical when it comes to technology? According to Pascal Kaufmann, neuroscientist and founder of the Mindfire Foundation for human-centric AI, the concept of ethics applied to AI is just another buzzword: "There is a lot of confusion on the meaning of AI. What many call 'AI' has little to do with Intelligence and much more with brute force computing. That's why it makes little sense to talk about ethical AI. In order to be ethical, I suggest to hurry up and create AI for the people rather than for autocratic governments or for large tech companies. Inventing ethical policies doesn't get us anywhere and will not help us create AI."

Anna Jobin, a postdoc at the Health Ethics and Policy Lab at the ETH Zurich, doesn't see it the same way. Based on her research, she believes that ethical considerations should be part of the development of AI: "We cannot treat AI as purely technological and add some ethics at the end, but ethical and social aspects need to be included in the discussion from the beginning." Because AI's impact on our daily lives will only grow, Jobin thinks that citizens need to be engaged in debates on new technologies that use AI and that decisions about AI should include civil society. However, she also recognizes the limits of listing ethical principles if there is a lack of ethical governance.

For Peter Seele, professor of Business Ethics at USI, the University of Italian-speaking Switzerland, the key to resolving these issues is to place business, ethics, and law on an equal footing. "Businesses are attracted by regulations. They need a legal framework to prosper. Good laws that align business and ethics create the ideal environment for all actors," he said. The challenge is to find a balance between the three pillars.


The perfect combination

Even if the Swiss approach mainly relies on self-regulation, Seele argues that establishing a legal framework would give a significant impulse to the economy and society.

If Switzerland were to take a lead role in defining ethical standards, its political system based on direct democracy and democratically controlled cooperatives could play a central role in laying the foundation for the democratization of AI and the personal data economy. As the Swiss Academy of Engineering Sciences (SATW) suggested in a whitepaper at the end of 2019, the model for that could be the Swiss MIDATA, a nonprofit cooperative that ensures citizens' sovereignty over the use of their data, acting as a trustee for data collection. Owners of a data account can become members of MIDATA and participate in the democratic governance of the cooperative. They can also allow selective access to their personal data for clinical studies and medical research purposes.

The emergence of an open data ecosystem fostering the participation of civil society is raising awareness of the implications of the use of personal data, especially for health reasons, as in the case of the contact-tracing app. Even if it's argued that the favoured decentralized system does a better job preserving fundamental rights than a centralized approach, there are concerns about susceptibility to cyber attacks.

The creation of a legal basis for AI could ignite a public debate on the validity and ethics of digital systems.




Ethical artificial intelligence: Could Switzerland take the lead? - swissinfo.ch

The Future of Artificial Intelligence: Edge Intelligence – Analytics Insight

With the advances in deep learning, recent years have seen enormous growth in artificial intelligence (AI) applications and services, ranging from personal assistants to recommendation systems to video and audio surveillance. More recently, with the proliferation of mobile computing and the Internet of Things (IoT), billions of mobile and IoT devices have been connected to the Internet, generating vast volumes of data at the network edge.

Driven by this trend, there is a pressing need to push the frontiers of AI to the network edge in order to fully unlock the potential of this edge big data. To meet this need, edge computing, an emerging paradigm that pushes computing tasks and services from the network core to the network edge, is widely regarded as a promising solution. The resulting interdiscipline, edge AI or edge intelligence (EI), is beginning to attract enormous interest.

However, research on EI is still in its infancy, and a dedicated venue for exchanging its latest advances is much needed by both the computer systems and AI communities. The spread of EI does not mean, of course, that there is no future for centralized cloud intelligence (CI). The orchestrated use of edge and cloud virtual resources is in fact expected to create a continuum of intelligent capabilities and functions across all cloudified infrastructures. This is one of the major challenges for a successful, future-proof deployment of 5G.

Given expanding markets and growing service and application demands on computation and data, several factors and advantages are driving the growth of edge computing. Because of the shifting need for reliable, adaptable, and contextual data, much processing is moving to the device itself, resulting in improved performance and response times (under a few milliseconds), lower latency, higher power efficiency, improved security since data is kept on the device, and cost savings as transfers to data centers are minimized.

Probably the greatest advantage of edge computing is the ability to obtain real-time results for time-sensitive needs. In many cases, sensor data can be collected, analyzed, and acted upon immediately, without being sent to a distant cloud data center. Scalability across edge devices to speed up local decision-making is essential. The ability to provide immediate, reliable information builds confidence, increases customer engagement, and in many cases saves lives. Just think of all the settings in which an immediate understanding of diagnostics and equipment performance is critical: industrial operations, home security, aviation, automotive, smart cities, and health care.
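The local collect-analyze-act loop can be sketched in a few lines of Python. This is a minimal illustration of the idea rather than any specific product's API; the function name, threshold, and sensor readings are all invented for the example.

```python
# Minimal sketch of edge-side processing: analyze sensor readings
# locally and forward only the anomalous ones to the cloud.

ALERT_THRESHOLD = 80.0  # hypothetical temperature limit in degrees Celsius

def process_locally(readings, threshold=ALERT_THRESHOLD):
    """Return (local_summary, alerts_to_upload).

    Everything is computed on-device; only the alerts would ever
    leave the device, which is where the latency, bandwidth, and
    privacy savings of edge computing come from.
    """
    alerts = [r for r in readings if r > threshold]
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    return summary, alerts

summary, alerts = process_locally([21.5, 22.0, 85.3, 23.1])
print(summary["count"])  # 4
print(alerts)            # [85.3]
```

The design point is that the full reading stream never leaves the device: the cloud sees only the one out-of-range value, so the decision latency is bounded by local compute rather than by the network round trip.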

Indeed, recent advances in AI may have a far-reaching effect on various subfields of networking. For example, traffic prediction and classification are two of the most studied applications of AI in the networking field. Deep learning is also offering promising solutions for efficient resource management and network adaptation, already improving network performance today in areas such as traffic scheduling, routing, and TCP congestion control.
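To give a flavor of the traffic-prediction use case mentioned above: real systems use deep-learning models, but the core idea of forecasting the next interval's load from recent observations can be illustrated with a simple exponential-smoothing baseline. The traffic figures and the `alpha` value here are made up for the example.

```python
def ewma_forecast(samples, alpha=0.5):
    """One-step traffic forecast via exponential smoothing.

    samples: observed load per interval (e.g., Mbit/s).
    alpha:   smoothing factor; higher values react faster to change.
    """
    forecast = samples[0]
    for x in samples[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

# Hypothetical per-minute link load in Mbit/s.
load = [100.0, 120.0, 110.0, 130.0]
print(ewma_forecast(load))  # 120.0
```

A scheduler or congestion controller would compare such a forecast against link capacity to decide how to allocate resources for the next interval; a learned model plays the same role with a far richer view of the traffic.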

On the other hand, it is still challenging today to build a real-time framework around heavy computational loads and big data. This is where edge computing enters the scene. An orchestrated execution of AI methods across computing resources in the cloud as well as at the edge, where most data is produced, will help in this direction. In addition, gathering and filtering the large amounts of data that contain both network profiles and performance measurements remains crucial, and becomes far more costly once the need for data labelling is considered. Even these bottlenecks could be addressed by enabling EI ecosystems capable of fostering win-win collaborations between network and service providers, OTTs, technology providers, integrators, and users.

A further dimension is that network-embedded pervasive intelligence (cloud computing integrated with edge intelligence in the network nodes and ever-smarter terminals) could also pave the way for exploiting the achievements of the emerging distributed ledger technologies and platforms.

Edge computing provides an alternative to the long-distance transfer of data between connected devices and remote cloud servers. With a database management system on the edge devices, organizations can achieve immediate insight and control, and DBMS performance no longer depends on latency, data rate, and bandwidth. Edge computing also reduces threats through a comprehensive security approach: it provides an environment in which to manage the cybersecurity efforts of both the intelligent edge and the intelligent cloud, and unified management systems can provide intelligent threat protection.
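The on-device DBMS idea can be sketched with Python's built-in sqlite3 module, used here purely as a stand-in for an embedded edge database; the schema and readings are invented for the example.

```python
import sqlite3

# An in-memory database standing in for on-device storage:
# queries are answered locally, with no round trip to a cloud server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts INTEGER, sensor TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [(1, "temp", 21.5), (2, "temp", 22.0), (3, "temp", 85.3)],
)

# Local analytics: latest value and on-device average for one sensor.
latest = conn.execute(
    "SELECT value FROM readings WHERE sensor = 'temp' ORDER BY ts DESC LIMIT 1"
).fetchone()[0]
avg = conn.execute(
    "SELECT AVG(value) FROM readings WHERE sensor = 'temp'"
).fetchone()[0]
print(latest)         # 85.3
print(round(avg, 1))  # 42.9
```

Because both the data and the query engine live on the device, query latency is independent of network conditions, and raw readings never have to leave the device unless the application chooses to upload them.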

It also supports compliance with regulations such as the General Data Protection Regulation (GDPR) that govern the use of private data. Companies that fail to comply risk significant fines. Edge computing offers various controls that can help companies protect private data and achieve GDPR compliance.

Innovative organizations such as Amazon, Google, Apple, BMW, Volkswagen, Tesla, Airbus, Fraunhofer, Vodafone, Deutsche Telekom, Ericsson, and Harting are now embracing AI at the edge and placing their bets on it. Some of these organizations are forming trade associations, such as the European Edge Computing Consortium (EECC), to educate small, medium-sized, and large enterprises and drive the adoption of edge computing in manufacturing and other industrial markets.


The Future of Artificial Intelligence: Edge Intelligence - Analytics Insight

Atlas Shrugged – CliffsNotes

The story of Atlas Shrugged takes place in the United States at an unspecified future time. Dagny Taggart, vice president in charge of operations for Taggart Transcontinental Railroad, seeks to rebuild the crumbling track of the Rio Norte Line that serves Ellis Wyatt's oil fields and the booming industrial areas of Colorado. The country is in a downward economic spiral with businesses closing and men out of work. Other countries in the world have become socialist Peoples' States and are destitute. Colorado, based on Wyatt's innovative method of extracting oil from shale, is the last great industrial center on earth. Dagny intends to provide Colorado the train service it requires, but her brother James Taggart, president of Taggart Transcontinental, tries to block her from getting new rails from Rearden Steel, the last reliable steel manufacturer. James wants to do business with the inefficient Associated Steel, which is run by his friend Orren Boyle. Dagny wants the new rail to be made of Rearden Metal, a new alloy that Hank Rearden developed after ten years of experiment. Because the metal has never been tried and has been denounced by metallurgists, James won't accept responsibility for using it. Dagny, who studied engineering in college, has seen the results of Rearden's tests. She accepts the responsibility and orders the rails made of Rearden Metal.

Worsening the economic depression in the U.S. is the unexplained phenomenon of talented men retiring and disappearing. For example, Owen Kellogg, a bright young Taggart employee for whom Dagny had great hopes, tells her that he is leaving the railroad. McNamara, a contractor who was supposed to rebuild the Rio Norte Line, retires unexpectedly. As more great men disappear, the American people become increasingly pessimistic. Dagny dislikes the new phrase that has crept into the language and signifies people's sense of futility and despair. Nobody knows the origin or exact meaning of the question "Who is John Galt?," but people use the unanswerable question to express their sense of hopelessness. Dagny rejects the widespread pessimism and finds a new contractor for the Rio Norte Line.

The crisis for Taggart Transcontinental worsens when the railroad's San Sebastian Line proves to be worthless and is nationalized by the Mexican government. The line, which cost millions of dollars, was supposed to provide freight service for the San Sebastian Mines, a new venture by Francisco d'Anconia, the wealthiest copper industrialist in the world. Francisco was Dagny's childhood friend and her former lover, but she now regards him as a worthless playboy. In this latest venture, d'Anconia has steered investors completely wrong, causing huge financial losses and a general sense of unrest.

James Taggart, in an attempt to recover the railroad's losses on the San Sebastian Line, uses his political friendships to influence the vote of the National Alliance of Railroads. The Alliance passes what's known as the "Anti-dog-eat-dog rule," prohibiting "cutthroat" competition. The rule puts the superb Phoenix-Durango Railroad, Taggart Transcontinental's competitor for the Colorado freight traffic, out of business. With the Phoenix-Durango line gone, Dagny must rebuild the Rio Norte Line quickly.

Dagny asks Francisco, who is in New York, what his purpose was in building the worthless Mexican mines. He tells her that it was to damage d'Anconia Copper and Taggart Transcontinental, as well as to cause secondary destructive consequences. Dagny is dumbfounded, unable to reconcile such a destructive purpose from the brilliant, productive industrialist Francisco was just ten years earlier. Not long after this conversation, Francisco appears at a celebration for Hank Rearden's wedding anniversary. Rearden's wife Lillian, his mother, and his brother are nonproductive freeloaders who believe that the strong are morally obliged to support the weak. Rearden no longer loves and cannot respect them, but he pities their weakness and carries them on his back. Francisco meets Rearden for the first time and warns him that the freeloaders have a weapon that they are using against him. Rearden questions why Francisco has come to the party, but Francisco says that he merely wished to become acquainted with Rearden. He won't explain his presence any further.

Although public opinion and an incompetent contractor are working against them, Dagny and Rearden build the Rio Norte Line. Rearden designs an innovative bridge for the line that takes advantage of the properties that his new metal possesses. The State Science Institute, a government research organization, tries to bribe and threaten Rearden to keep his metal off the market, but he won't give in. The Institute then issues a statement devoid of factual evidence that alleges possible weaknesses in the structure of Rearden Metal. Taggart stock crashes, the contractor quits, and the railroad union forbids its employees to work on the Rio Norte Line. When Dr. Robert Stadler, a brilliant theoretical scientist in whose name the State Science Institute was founded, refuses to publicly defend Rearden Metal even though he knows its value, Dagny makes a decision. She tells her brother that she will take a leave of absence, form her own company, and build the Rio Norte Line on her own. She signs a contract saying that when the line is successfully completed, she'll turn it back over to Taggart Transcontinental. Dagny chooses to name it the John Galt Line in defiance of the general pessimism that surrounds her.

Rearden and the leading businessmen of Colorado invest in the John Galt Line. Rearden feels a strong sexual attraction to Dagny but, because he regards sex as a demeaning impulse, doesn't act on his attraction. The government passes the Equalization of Opportunity Bill that prevents an individual from owning companies in different fields. The bill prohibits Rearden from owning the mines that supply him with the raw materials he needs to make Rearden Metal. However, Rearden creates a new design for the John Galt Line's Rearden Metal Bridge, realizing that if he combines a truss with an arch, it will enable him to maximize the best qualities of the new metal.

Dagny completes construction of the Line ahead of schedule. She and Rearden ride in the engine cab on the Line's first train run, which is a resounding success. Rearden and Dagny have dinner at Ellis Wyatt's home to celebrate. After dinner, Dagny and Rearden make love for the first time. The next day, Rearden is contemptuous of them both for what he considers their low urges, but Dagny is radiantly happy. She rejects Rearden's estimate, knowing that their sexual attraction is based on mutual admiration for each other's noblest qualities.

Dagny and Rearden go on vacation together, driving around the country looking at abandoned factories. At the ruins of the Twentieth Century Motor Company's factory in Wisconsin, they find the remnant of a motor with the potential to change the world. The motor was able to draw static electricity from the atmosphere and convert it to usable energy, but now it is destroyed.

Realizing how much the motor would benefit the transportation industry, Dagny vows to find the inventor. At the same time, she must fight against new proposed legislation. Various economic pressure groups, seeking to cash in on the industrial success of Colorado, want the government to force the successful companies to share their profits. Dagny knows that the legislation would put Wyatt Oil and the other Colorado companies out of business, destroy the Rio Norte Line, and remove the profit she needs to rebuild the rest of the transcontinental rail system, but she's powerless to prevent the legislation.

Dagny continues her nationwide quest to find the inventor of the motor, and she finally finds the widow of the engineer who ran the automobile company's research department. The widow tells Dagny that a young scientist working for her husband invented the motor. She doesn't know his name, but she provides a clue that leads Dagny to a cook in an isolated Wyoming diner. The cook tells Dagny to forget the inventor of the motor because he won't be found until he chooses. Dagny is shocked to discover that the cook is Hugh Akston, the world's greatest living philosopher. She goes to Cheyenne and discovers that Wesley Mouch, the new economic coordinator of the country, has issued a series of directives that will result in the strangling of Colorado's industrial success. Dagny rushes to Colorado but arrives too late. Ellis Wyatt, in defiance of the government's edict, set fire to his oil wells and retired.

Months later, the situation in Colorado continues to deteriorate. With the Wyatt oil wells out of business, the economy struggles. Several of the other major industrialists have retired and disappeared; nobody knows where they've gone. Dagny is forced to cut trains on the Colorado schedule. The one bright spot of her work is her continued search for the inventor of the motor. She speaks to Robert Stadler who recommends a young scientist, Quentin Daniels of the Utah Institute of Technology, as a man capable of undertaking the motor's reconstruction.

The State Science Institute orders 10,000 tons of Rearden Metal for a top-secret project, but Rearden refuses to sell it to them. Rearden sells to Ken Danagger, the country's best producer of coal, an amount of Rearden Metal that the law deems illegal. Meanwhile, at the reception for James Taggart's wedding, Francisco d'Anconia publicly defends the morality of producing wealth. Rearden overhears what Francisco says and finds himself increasingly drawn to this supposedly worthless playboy. The day following the reception, Rearden's wife discovers that he's having an affair, but she doesn't know with whom. A manipulator who seeks control over her husband, Lillian uses guilt as a weapon against him.

Dr. Ferris of the State Science Institute tells Rearden that he knows of the illegal sale to Ken Danagger and will take Rearden to trial if he refuses to sell the Institute the metal it needs. Rearden refuses, and the government brings charges against him and Danagger. Dagny, in the meantime, has become convinced that a destroyer is loose in the world, some evil creature that is deliberately luring away the brains of the world for a purpose she cannot understand. Her diligent assistant, Eddie Willers, knows that Dagny's fears are justified. He eats his meals in the workers' cafeteria, where he has befriended a nameless worker. Eddie tells the worker about Dagny's fear that Danagger is next in line for the destroyer, that he'll be the next to retire and disappear. Dagny races to Pittsburgh to meet with Danagger and convince him to stay, but she's too late. Someone has already met with Danagger and convinced him to retire. In a mood of joyous serenity, Danagger tells Dagny that nothing could convince him to remain. The next day, he disappears.

Francisco visits Rearden and empathizes with the pain he has endured because of the invention of Rearden Metal. Francisco begins to ask Rearden what could make such suffering worthwhile when an accident strikes one of Rearden's furnaces. Francisco and Rearden race to the scene and work arduously to make the necessary repairs. Afterward, when Rearden asks him to finish his question, Francisco says that he knows the answer and departs.

At his trial, Rearden states that he doesn't recognize his deal with Danagger as a criminal action and, consequently, doesn't recognize the court's right to try him. He says that a man has the right to own the product of his effort and to trade it voluntarily with others. The government has no moral basis for outlawing the voluntary exchange of goods and services. The government, he says, has the power to seize his metal by force, and they have the power to compel him at the point of a gun. But he won't cooperate with their demands, and he won't pretend that the process is civil. If the government wishes to deal with men by compulsion, it must do so openly. Rearden states that he won't help the government pretend that his trial is anything but the initiation of a forced seizure of his metal. He says that he's proud of his metal, he's proud of his mills, he's proud of every penny that he's earned by his own hard work, and he'll not cooperate by voluntarily yielding one cent that is his. Rearden says that the government will have to seize his money and products by force, just like the robber it is. At this point, the crowd bursts into applause. The judges recognize the truth of what Rearden says and refuse to stand before the American people as open thieves. In the end, they fine Rearden and suspend the sentence.

Because of the new economic restrictions, the major Colorado industrialists have all retired and disappeared. Freight traffic has dwindled, and Taggart Transcontinental has been forced to shut down the Rio Norte Line. The railroad is in terrible condition: It is losing money, the government has convinced James Taggart to grant wage raises, and there is ominous talk that the railroad will be forced to cut shipping rates. At the same time, Wesley Mouch is desperate for Rearden to cooperate with the increasingly dictatorial government. Because Rearden came to Taggart's wedding celebration, Mouch believes that Taggart can influence Rearden. Mouch implies that a trade is possible: If Taggart can convince Rearden to cooperate, Mouch will prevent the government from forcing a cut in shipping rates. Taggart appeals to Lillian for help, and Lillian discovers that Dagny Taggart is her husband's lover.

In response to devastating economic conditions, the government passes the radical Directive 10-289, which requires that all workers stay at their current jobs, all businesses remain open, and all patents and inventions be voluntarily turned over to the government. When she hears the news, Dagny resigns from the railroad. Rearden doesn't resign from Rearden Steel, however, because he has two weeks to sign the certificate turning his metal over to the government, and he wants to be there to refuse when the time is up. Dr. Floyd Ferris of the State Science Institute comes to Rearden and says that the government has evidence of his affair with Dagny Taggart and will make it public, dragging Dagny's name through the gutter, if he refuses to sign over his metal. Rearden now knows that his desire for Dagny is the highest virtue he possesses and is free of all guilt regarding it, but he's a man who pays his own way. He knows that he should have divorced Lillian long ago and openly declared his love for Dagny. His guilt and error gave his enemies this weapon. He must pay for his own error and not allow Dagny to suffer, so he signs.

Dagny has retreated to a hunting lodge in the mountains that she inherited from her father. She's trying to decide what to do with the rest of her life when word reaches her that a train wreck of enormous proportions has destroyed the famed Taggart Tunnel through the heart of the Rockies, making all transcontinental traffic impossible on the main track. She rushes back to New York to resume her duties, and she reroutes all transcontinental traffic. She receives a letter from Quentin Daniels telling her that, because of Directive 10-289, he's quitting. Dagny plans to go west to inspect the track and to talk to Daniels.

On the train ride west, Dagny rescues a hobo who is riding the rails. He used to work for the Twentieth Century Motor Company. He tells her that the company put into practice the communist slogan, "From each according to his ability, to each according to his need," a scheme that resulted in enslaving the able to the unable. The first man to quit was a young engineer, who walked out of a mass meeting saying that he would put an end to this once and for all by "stopping the motor of the world." The bum tells her that as the years passed and they saw factories close, production drop, and great minds retire and disappear, they began to wonder if the young engineer, whose name was John Galt, succeeded.

On her trip west, Dagny's train is stalled when the crew abandons it. She finds an airplane and continues on to Utah to find Daniels, but she learns at the airport that Daniels left with a visitor in a beautiful plane. Realizing that the visitor is the "destroyer," she gives chase, flying among the most inaccessible peaks of the Rockies. Her plane crashes.

Dagny finds herself in Atlantis, the hidden valley to which the great minds have gone to escape the persecution of a dictatorial government. She finds that John Galt does exist and that he's the man she's been seeking in two ways: He is both the inventor of the motor and the "destroyer," the man draining the brains of the world. All the great men she admires are here: inventors, industrialists, philosophers, scientists, and artists. Dagny learns that the brains are on strike. They refuse to think, create, and work in a world that forces them to sacrifice themselves to society. They're on strike against the creed of self-sacrifice, in favor of a man's right to his own life.

Dagny falls in love with Galt, who has loved and watched her for years. But Dagny is a scab, the most dangerous enemy of the strike, and Galt won't touch her yet. Dagny has the choice to join the strike and remain in the valley or go back to her railroad and the collapsing outside world. She is torn, but she refuses to give up the railroad and returns. Although Galt's friends don't want him to expose himself to the danger, he returns as well, so he can be near at hand when Dagny decides she's had enough.

When she returns, Dagny finds that the government has nationalized the railroad industry and controls it under a Railroad Unification Plan. Dagny can no longer make business decisions based on matters of production and profit; she is subject to the arbitrary whims of the dictators. The government wants Dagny to make a reassuring speech to the public on the radio and threatens her with the revelation of her affair with Rearden. On the air, Dagny proudly states that she was Rearden's lover and that he signed his metal over to the government only because of a blackmail threat. Before being cut off the air, Dagny succeeds in warning the American people about the ruthless dictatorship that the United States government is becoming.

Because of the government's socialist policies, the collapse of the U.S. economy is imminent. Francisco d'Anconia destroys his holdings and disappears because his properties worldwide are about to be nationalized. He leaves nothing to the "looters," the parasites who feed off the producers, wiping out millions of dollars belonging to corrupt American investors like James Taggart. Meanwhile, politicians use their economic power to create their own personal empires. In one such scheme, the Taggart freight cars needed to haul the Minnesota wheat harvest to market are diverted to a project run by the relatives of powerful politicians. The wheat rots at the Taggart stations, the farmers riot, farms shut down (as do many of the companies providing them with equipment), people lose their jobs, and severe food shortages result.

During an emergency breakdown at the Taggart Terminal in New York City, Dagny finds that John Galt is one of the railroad's unskilled laborers. She sees him in the crowd of men ready to carry out her commands. After completing her task, Dagny walks into the abandoned tunnels, knowing that Galt will follow. They make love for the first time, and he then returns to his mindless labor.

The government smuggles its men into Rearden's mills, pretending that they're steelworkers. The union of steelworkers asks for a raise, but the government refuses, making it sound as if the refusal comes from Rearden. When Rearden rejects the Steel Unification Plan the government wants to spring on him, they use the thugs they've slipped into his mills to start a riot. The pretense of protecting Rearden is the government's excuse for taking over his mills. But Francisco d'Anconia, under an assumed name, has taken a job at Rearden's mills. He organizes the workers, and they successfully defend the mills against the government's thugs. Afterward, Francisco tells Rearden the rest of the things he wants him to know. Rearden retires, disappears, and joins the strike.

Mr. Thompson, the head of state, is set to address the nation regarding its dire economic conditions. But before he begins to speak, he is preempted, cut off the air by a motor of incalculable power. John Galt addresses the nation instead. Galt informs citizens that the men of the mind are on strike, that they require freedom of thought and action, and that they refuse to work under the dictatorship in power. The thinkers won't return, Galt says, until human society recognizes an individual's right to live his own life. Only when the moral code of self-sacrifice is rejected will the thinkers be free to create, and only then will they return.

The government rulers are desperate. Frantically, they seek John Galt. They want him to become economic dictator of the country so the men of the mind will come back and save the government, but Galt refuses. Realizing that Dagny thinks the same way that Galt does, the government has her followed. Mr. Thompson makes clear to Dagny that certain members of the government fear and hate Galt, and that if they find him first, they may kill him. Terrified, Dagny goes to Galt's apartment to see if he's still alive. The government's men follow her and take Galt into custody, and the rulers attempt to convince Galt to take charge of the country's economy. He refuses. They torture him, yet still he refuses. In the end, the strikers come to his rescue. Francisco and Rearden, joined now by Dagny, assault the grounds of the State Science Institute where Galt is held captive. They kill some guards and incapacitate others, release Galt, and return to the valley. Dagny and Galt are united. Shortly after, the final collapse of the looters' regime occurs, and the men of the mind are free to return to the world.


Atlas Shrugged - CliffsNotes