Artificial intelligence Facts for Kids

Artificial intelligence (AI) is the ability of a computer program or a machine to think and learn. It is also a field of study which tries to make computers "smart". John McCarthy came up with the name "artificial intelligence" in 1955.

In general use, the term "artificial intelligence" means a machine which mimics human cognition. At least some of the things we associate with other minds, such as learning and problem solving, can be done by computers, though not in the same way as we do.

An ideal (perfect) intelligent machine is a flexible agent which perceives its environment and takes actions to maximize its chance of success at some goal. As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence": it is just a routine technology.

At present we use the term AI for successfully understanding human speech, competing at a high level in strategic game systems (such as Chess and Go), self-driving cars, and interpreting complex data. Some people also consider AI a danger to humanity if it progresses unabated.

An extreme goal of AI research is to create computer programs that can learn, solve problems, and think logically. In practice, however, most applications have focused on problems which computers do well. Searching databases and doing calculations are things computers do better than people. On the other hand, "perceiving its environment" in any real sense is far beyond present-day computing.

AI involves many different fields like computer science, mathematics, linguistics, psychology, neuroscience, and philosophy. Eventually researchers hope to create a "general artificial intelligence" which can solve many problems instead of focusing on just one. Researchers are also trying to create creative and emotional AI which can possibly empathize or create art. Many approaches and tools have been tried.

Objects that look and act like humans appear in the stories of every major civilization. The first appearance of artificial intelligence is in Greek myths, like Talos of Crete or the bronze robot of Hephaestus. Humanoid robots were built by Yan Shi, Hero of Alexandria, and Al-Jazari. Sentient machines became popular in fiction during the 19th and 20th centuries with the stories of Frankenstein and R.U.R.

Formal logic was developed by ancient Greek philosophers and mathematicians. This study of logic produced the idea of a computer in the 19th and 20th century. Mathematician Alan Turing's theory of computation said that any mathematical problem could be solved by processing 1's and 0's. Advances in neurology, information theory, and cybernetics convinced a small group of researchers that an electronic brain was possible.

AI research really started with a conference at Dartmouth College in 1956. It was a month-long brainstorming session attended by many people with interests in AI. At the conference they wrote programs that were amazing at the time, beating people at checkers or solving word problems. The Department of Defense started giving a lot of money to AI research and labs were created all over the world.

Unfortunately, researchers really underestimated just how hard some problems were. The tools they had used still did not give computers things like emotions or common sense. Mathematician James Lighthill wrote a report on AI saying that "in no part of the field have discoveries made so far produced the major impact that was then promised". The U.S. and British governments wanted to fund more productive projects. Funding for AI research was cut, starting an "AI winter" where little research was done.

AI research revived in the 1980s because of the popularity of expert systems, which simulated the knowledge of a human expert. By 1985, a billion dollars had been spent on AI. New, faster computers convinced the U.S. and British governments to start funding AI research again. However, the market for Lisp machines collapsed in 1987 and funding was pulled again, starting an even longer AI winter.

AI revived again in the 1990s and early 2000s with its use in data mining and medical diagnosis. This was possible because of faster computers and a focus on solving more specific problems. In 1997, Deep Blue became the first computer to beat a reigning world chess champion, Garry Kasparov. Faster computers, advances in deep learning, and access to more data have made AI popular throughout the world. In 2011 IBM Watson beat the top two Jeopardy! players, Brad Rutter and Ken Jennings, and in 2016 Google's AlphaGo beat top Go player Lee Sedol 4 games out of 5.

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.
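
To make that one-liner concrete, here is a minimal illustrative sketch in Python (the concepts and relation names are invented for the example, not taken from any particular ontology standard):

```python
# Illustrative only: a toy ontology as (concept, relation, concept) triples.
ontology = {
    ("chess", "is_a", "board_game"),
    ("board_game", "is_a", "game"),
    ("chessboard", "part_of", "chess"),
}

def is_a(concept, ancestor, facts):
    """Follow "is_a" links to test whether `concept` falls under `ancestor`."""
    parents = {c2 for (c1, rel, c2) in facts if c1 == concept and rel == "is_a"}
    return ancestor in parents or any(is_a(p, ancestor, facts) for p in parents)

print(is_a("chess", "game", ontology))  # True: chess -> board_game -> game
```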


Understanding the Four Types of Artificial Intelligence

The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do. How much longer can it be before they walk among us?

The new White House report on artificial intelligence takes an appropriately skeptical view of that dream. It says the next 20 years likely won't see machines exhibit broadly-applicable intelligence comparable to or exceeding that of humans, though it does go on to say that in the coming years, machines will reach and exceed human performance on more and more tasks. But its assumptions about how those capabilities will develop missed some important points.

As an AI researcher, I'll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call the boring kind of AI. It dismissed in half a sentence my branch of AI research, into how evolution can help develop ever-improving AI systems, and how computational models can help us understand how our human intelligence evolved.

The report focuses on what might be called mainstream AI tools: machine learning and deep learning. These are the sorts of technologies that have been able to play Jeopardy! well, and beat human Go masters at the most complicated game ever invented. These current intelligent systems are able to handle huge amounts of data and make complex calculations very quickly. But they lack an element that will be key to building the sentient machines we picture having in the future.

We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us and us from them.

There are four types of artificial intelligence: reactive machines, limited memory, theory of mind and self-awareness.

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM's chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best moves from among the possibilities.

But it doesn't have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same position three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
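
As an illustration of that "present moment only" behavior, here is a minimal reactive-agent sketch in Python. It is not Deep Blue's actual algorithm, which used deep game-tree search; it only shows the defining trait: score the currently visible options and keep no memory between decisions.

```python
# A minimal sketch of the reactive pattern described above: the agent sees
# only the current state, scores every available action, and retains nothing
# between decisions. The toy "game" and numbers are hypothetical.
def reactive_choice(state, actions, apply_action, evaluate):
    """Return the action whose immediate outcome scores highest."""
    return max(actions, key=lambda a: evaluate(apply_action(state, a)))

# Toy usage: the state is a number, actions adjust it, and the evaluation
# rewards being close to 10. No history is consulted, only `state`.
best = reactive_choice(
    state=7,
    actions=[-1, +1, +2, +5],
    apply_action=lambda s, a: s + a,
    evaluate=lambda s: -abs(10 - s),
)
print(best)  # 2, since 7 + 2 lands closest to 10
```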

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn't rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a representation of the world.

The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The innovation in Deep Blue's design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcomes. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.
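
That "narrowing the view" idea can be sketched generically as a beam-style cut: expand only the few most promising moves at each level and discard the rest. The sketch below is an illustrative stand-in, not Deep Blue's actual selective-search machinery.

```python
# Illustrative beam-style pruning: at each level of a depth-limited search,
# expand only the `beam` best-rated moves and discard the rest.
def best_score(state, depth, moves, step, rate, beam=3):
    options = moves(state)
    if depth == 0 or not options:
        return rate(state)
    # Keep only the most promising moves, as judged by the heuristic `rate`.
    top = sorted(options, key=lambda m: rate(step(state, m)), reverse=True)[:beam]
    return max(best_score(step(state, m), depth - 1, moves, step, rate, beam)
               for m in top)
```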

Similarly, Google's AlphaGo, which has beaten top human Go experts, can't evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue's, using a neural network to evaluate game developments.

These methods do improve the ability of AI systems to play specific games better, but they can't be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world, meaning they can't function beyond the specific tasks they're assigned and are easily fooled.

They can't interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: you want your autonomous car to be a reliable driver. But it's bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won't ever be bored, or interested, or sad.

This Type II class, limited memory, contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars' speed and direction. That can't be done in just one moment, but rather requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving car's preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They're included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren't saved as part of the car's library of experience it can learn from, the way human drivers compile experience over years behind the wheel.
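
A hedged sketch of that kind of transient, limited memory: a short rolling buffer of recent observations is enough to estimate another car's speed, yet nothing is retained as long-term experience. All names and numbers below are illustrative.

```python
from collections import deque

# "Limited memory" in miniature: a short, transient buffer of recent
# observations supports a speed estimate, but old entries simply fall off
# and nothing becomes long-term experience.
class VehicleTracker:
    def __init__(self, horizon=5):
        self.recent = deque(maxlen=horizon)  # old entries are discarded

    def observe(self, timestamp, position):
        self.recent.append((timestamp, position))

    def speed_estimate(self):
        if len(self.recent) < 2:
            return None
        (t0, p0), (t1, p1) = self.recent[0], self.recent[-1]
        return (p1 - p0) / (t1 - t0)  # average speed over the buffer only

tracker = VehicleTracker()
for t, p in [(0.0, 0.0), (0.5, 6.0), (1.0, 12.5)]:
    tracker.observe(t, p)
print(tracker.speed_estimate())  # 12.5 units/sec over the last second
```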

So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations.

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called "theory of mind": the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This was crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other's motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they'll have to be able to understand that each of us has thoughts and feelings and expectations for how we'll be treated. And they'll have to adjust their behavior accordingly.

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.

This is, in a sense, an extension of the theory of mind possessed by Type III artificial intelligences. Consciousness is also called "self-awareness" for a reason. ("I want that item" is a very different statement from "I know I want that item.") Conscious beings are aware of themselves, know about their internal states, and are able to predict the feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that's how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step toward understanding human intelligence in its own right. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

This article was originally published on The Conversation.


Artificial intelligence project lets Holocaust survivors …

Millions perished in the Holocaust, but a group of survivors will now be able to live on, at least via real-time video conversations about their experiences and perspectives, forever. In an innovative attempt to harness the artificial intelligence technologies of the present and the future to keep alive the stories of the past, Holocaust survivors may be the first people ever to be able to continue carrying on conversations (virtually, at least) even after their deaths. Lesley Stahl reports on this fascinating project on the next edition of 60 Minutes, Sunday, April 5 at 7 p.m., ET/PT on CBS.

Heather Maio had worked for years on Holocaust-related exhibits and knew that "Schindler's List" director Steven Spielberg had created the Shoah Foundation to record the testimonies of thousands of Holocaust survivors. But Maio wanted to create something more interactive. "I wanted to talk to a Holocaust survivor like I would today, with that person sitting right in front of me," she told Stahl. Maio believed that artificial intelligence technology could make her notion realizable, so she pitched her idea to Stephen Smith, the executive director of the USC Shoah Foundation in Los Angeles, and now her husband.

Smith was intrigued, but some of his colleagues initially feared it could cheapen, or "Disney-fy," the Holocaust. "We had a lot of pushback on this project," Smith said. "'Is it the right thing to do? Are we trying to keep them alive beyond their deaths?' Everyone had questions except for one group of people, the survivors themselves, who said, 'Where do I sign up?'" So far, more than 20 interviews, including one with a 93-year-old U.S. Army veteran who helped liberate a concentration camp, have been recorded. Each subject spends a full five days answering questions in an attempt to record responses to every question conceivable. The questions are then logged and alternative questions are entered into the database. Each interview is recorded with more than 20 cameras so that as technology advances and 3D, hologram-type display becomes the norm, all required angles will be available.

Three of the survivors interviewed have since died. One of them was Eva Kor, who appeared on 60 Minutes in 1992 to tell her story of having been experimented on, along with her identical twin sister, by Nazi S.S. physician Josef Mengele. Kor died last summer, but using the Shoah Foundation's technology, Stahl was able to conduct another 60 Minutes interview with Kor's digital image. What was Mengele like? "He had a gorgeous face, a movie star face, and very pleasant, actually," Kor's digital image told Stahl. "Dark hair, dark eyes. When I looked into his eyes, I could see nothing but evil. People say that the eyes are the center of the soul, and in Mengele's case, that was correct."

Stahl interviewed the first Holocaust survivor filmed for the project, Pinchas Gutter, who was sent to the Majdanek concentration camp at age 11 and was the only member of his family to survive. Gutter was asked some 2,000 questions. Stahl spoke to him in person, but she also spoke to his digital image, which can now be seen in Holocaust museums in Dallas, Indiana and Illinois, where visitors can ask him their own questions. As many may wonder far into the future, Stahl asked Gutter how he can still have faith in God after the horrors he experienced. "How can you possibly not believe in God?" Gutter's digital image replied. "God gave human beings the knowledge of right and wrong and he allowed them to do what they wished on this earth, to find their own way. To my mind, when God sees what human beings are up to, especially things like genocide, he weeps."


4 Main Types of Artificial Intelligence – G2

Although AI is undoubtedly multifaceted, there are specific types of artificial intelligence under which extended categories fall.

What are the four types of artificial intelligence?

There is a plethora of terms and definitions in AI that can make it difficult to navigate the difference between categories, subsets, or types of artificial intelligence; and no, they're not all the same. Some subsets of AI include machine learning, big data, and natural language processing (NLP); however, this article covers the four main types of artificial intelligence: reactive machines, limited memory, theory of mind, and self-awareness.

These four types of artificial intelligence comprise smaller aspects of the general realm of AI.

Reactive machines are the most basic type of AI system. This means that they cannot form memories or use past experiences to influence present decisions; they can only react to currently existing situations, hence "reactive." An existing form of a reactive machine is Deep Blue, a chess-playing supercomputer created by IBM in the mid-1990s.

Deep Blue was created to play chess against a human competitor with the intent to defeat the competitor. It was programmed with the ability to identify a chess board and its pieces while understanding the pieces' functions. Deep Blue could make predictions about what moves it should make and the moves its opponent might make, thus having an enhanced ability to predict, select, and win. In their 1997 rematch, Deep Blue defeated Russian chess grandmaster Garry Kasparov 3½ games to 2½, becoming the first computer to defeat a reigning world chess champion in a match.

Deep Blue's skill at accurately and successfully playing chess matches highlights its reactive abilities. In the same vein, its reactive mind also indicates that it has no concept of past or future; it only comprehends and acts on the presently existing world and components within it. To simplify, reactive machines are programmed for the here and now, but not the before and after.

Reactive machines have no concept of the world and therefore cannot function beyond the simple tasks for which they are programmed. A characteristic of reactive machines is that no matter the time or place, these machines will always behave the way they were programmed. There is no growth with reactive machines, only stagnation in recurring actions and behaviors.

Limited memory comprises machine learning models that derive knowledge from previously learned information, stored data, or events. Unlike reactive machines, limited memory machines learn from the past by observing actions or the data they are fed in order to build experiential knowledge.

Although limited memory builds on observational data in conjunction with pre-programmed data the machines already contain, these sample pieces of information are fleeting. An existing form of limited memory is autonomous vehicles.

Autonomous vehicles, or self-driving cars, use the principle of limited memory in that they depend on a combination of observational and pre-programmed knowledge. To observe and understand how to properly drive and function among human-dependent vehicles, self-driving cars read their environment, detect patterns or changes in external factors, and adjust as necessary.

Not only do autonomous vehicles observe their environment, but they also observe the movement of other vehicles and people in their line of vision. Previously, driverless cars without limited memory AI took as long as 100 seconds to react and make judgments on external factors. Since the introduction of limited memory, reaction time on machine-based observations has dropped sharply, demonstrating the value of limited memory AI.


What constitutes theory of mind is decision-making ability equal to that of a human mind, but in machines. While there are some machines that currently exhibit humanlike capabilities (voice assistants, for instance), none are fully capable of holding conversations relative to human standards. One component of human conversation is having emotional capacity, or sounding and behaving like a person would in standard conventions of conversation.

This future class of machine ability would include understanding that people have thoughts and emotions that affect behavioral output and thus influence a theory of mind machine's thought process. Social interaction is a key facet of human interaction, so to make theory of mind machines tangible, the AI systems that control the now-hypothetical machines would have to identify, understand, retain, and remember emotional output and behaviors while knowing how to respond to them.

From this, said theory of mind machines would have to be able to use the information derived from people and adapt it into their learning centers to know how to communicate with people and handle different situations. Theory of mind is a highly advanced form of proposed artificial intelligence that would require machines to thoroughly acknowledge rapid shifts in emotional and behavioral patterns in humans, and also understand that human behavior is fluid; thus, theory of mind machines would have to be able to learn rapidly at a moment's notice.

Some elements of theory of mind AI currently exist or have existed in the recent past. Two notable examples are the robots Kismet and Sophia, created in 2000 and 2016, respectively.

Kismet, developed by Professor Cynthia Breazeal, was capable of recognizing human facial signals (emotions) and could replicate said emotions with its face, which was structured with human facial features: eyes, lips, ears, eyebrows, and eyelids.

Sophia, on the other hand, is a humanoid bot created by Hanson Robotics. What distinguishes her from previous robots is her physical likeness to a human being as well as her ability to see (image recognition) and respond to interactions with appropriate facial expressions.


These two humanlike robots are samples of movement toward full theory of mind AI systems materializing in the near future. While neither fully holds the ability to have full-blown human conversation with an actual person, both robots have aspects of emotive ability akin to that of their human counterparts, one step toward seamlessly assimilating into human society.

Self-aware AI involves machines that have human-level consciousness. This form of AI is not currently in existence, but would be considered the most advanced form of artificial intelligence known to man.

Facets of self-aware AI include the ability to not only recognize and replicate humanlike actions, but also to think for itself, have desires, and understand its feelings. Self-aware AI, in essence, is an advancement and extension of theory of mind AI. Where theory of mind only focuses on the aspects of comprehension and replication of human practices, self-aware AI takes it a step further by implying that it can and will have self-guided thoughts and reactions.

We are presently in tier three of the four types of artificial intelligence, so believing that we could potentially reach the fourth (and final?) tier of AI doesn't seem like a far-fetched idea.

But for now, it's important to focus on perfecting all aspects of types two and three in AI. Sloppily speeding through each AI tier could be detrimental to the future of artificial intelligence for generations to come.



Artificial Intelligence Market by Size, Share, Analysis …

CHAPTER 1: Introduction

1.1. Report description
1.2. Key benefits for stakeholders
1.3. Key market segments
1.4. Research methodology

1.4.1. Secondary research
1.4.2. Primary research
1.4.3. Analyst tools & models

CHAPTER 2: Executive summary

2.1. CXO perspective

CHAPTER 3: Market overview

3.1. Market definition and scope
3.2. Key findings

3.2.1. Top investment pockets
3.2.2. Top winning strategies

3.3. Porter's five forces analysis
3.4. Market share analysis, 2017
3.5. Market dynamics

3.5.1. Drivers

3.5.1.1. Increase in investment in AI technologies
3.5.1.2. Growth in demand for analyzing and interpreting large amounts of data
3.5.1.3. Increased customer satisfaction and increased adoption of reliable cloud applications

3.5.2. Restraint

3.5.2.1. Lack of trained and experienced staff

3.5.3. Opportunities

3.5.3.1. Increased adoption of AI in developing regions
3.5.3.2. Developing smarter robots

CHAPTER 4: Artificial intelligence market, by technology

4.1. Market overview

4.1.1. Market size and forecast, by technology

4.2. Machine learning

4.2.1. Key market trends, growth factors, and opportunities
4.2.2. Market size and forecast, by region
4.2.3. Market analysis, by country

4.3. Natural language processing (NLP)

4.3.1. Key market trends, growth factors, and opportunities
4.3.2. Market size and forecast, by region
4.3.3. Market analysis, by country

4.4. Image processing

4.4.1. Key market trends, growth factors, and opportunities
4.4.2. Market size and forecast, by region
4.4.3. Market analysis, by country

4.5. Speech recognition

4.5.1. Key market trends, growth factors, and opportunities
4.5.2. Market size and forecast, by region
4.5.3. Market analysis, by country

CHAPTER 5: Artificial intelligence market, BY industry vertical

5.1. Market overview

5.1.1. Market size and forecast, by industry vertical

5.2. Media & advertising

5.2.1. Key market trends, growth factors, and opportunities
5.2.2. Market size and forecast, by region
5.2.3. Market analysis, by country

5.3. BFSI

5.3.1. Key market trends, growth factors, and opportunities
5.3.2. Market size and forecast, by region
5.3.3. Market analysis, by country

5.4. IT & telecom

5.4.1. Key market trends, growth factors, and opportunities
5.4.2. Market size and forecast, by region
5.4.3. Market analysis, by country

5.5. Retail

5.5.1. Key market trends, growth factors, and opportunities
5.5.2. Market size and forecast, by region
5.5.3. Market analysis, by country

5.6. Healthcare

5.6.1. Key market trends, growth factors, and opportunities
5.6.2. Market size and forecast, by region
5.6.3. Market analysis, by country

5.7. Automotive & transportation

5.7.1. Key market trends, growth factors, and opportunities
5.7.2. Market size and forecast, by region
5.7.3. Market analysis, by country

5.8. Other

5.8.1. Key market trends, growth factors, and opportunities
5.8.2. Market size and forecast, by region
5.8.3. Market analysis, by country

CHAPTER 6: Artificial intelligence market, BY region

6.1. Market overview
6.2. North America

6.2.1. Key market trends, growth factors, and opportunities
6.2.2. Market size and forecast, by technology
6.2.3. Market size and forecast, by industry vertical
6.2.4. Market size and forecast, by country

6.2.4.1. U.S.

6.2.4.1.1. U.S. market size and forecast, by technology
6.2.4.1.2. U.S. market size and forecast, by industry vertical

6.2.4.2. Canada

6.2.4.2.1. Canada market size and forecast, by technology
6.2.4.2.2. Canada market size and forecast, by industry vertical

6.2.4.3. Mexico

6.2.4.3.1. Mexico market size and forecast, by technology
6.2.4.3.2. Mexico market size and forecast, by industry vertical

6.3. Europe

6.3.1. Key market trends, growth factors, and opportunities
6.3.2. Market size and forecast, by technology
6.3.3. Market size and forecast, by industry vertical
6.3.4. Market size and forecast, by country

6.3.4.1. Germany

6.3.4.1.1. Germany market size and forecast, by technology
6.3.4.1.2. Germany market size and forecast, by industry vertical

6.3.4.2. UK

6.3.4.2.1. UK market size and forecast, by technology
6.3.4.2.2. UK market size and forecast, by industry vertical

6.3.4.3. France

6.3.4.3.1. France market size and forecast, by technology
6.3.4.3.2. France market size and forecast, by industry vertical

6.3.4.4. Russia

6.3.4.4.1. Russia market size and forecast, by technology
6.3.4.4.2. Russia market size and forecast, by industry vertical

6.3.4.5. Rest of Europe

6.3.4.5.1. Rest of Europe market size and forecast, by technology
6.3.4.5.2. Rest of Europe market size and forecast, by industry vertical

6.4. Asia-Pacific

6.4.1. Key market trends, growth factors, and opportunities
6.4.2. Market size and forecast, by technology
6.4.3. Market size and forecast, by industry vertical
6.4.4. Market size and forecast, by country

6.4.4.1. China

6.4.4.1.1. China market size and forecast, by technology
6.4.4.1.2. China market size and forecast, by industry vertical

6.4.4.2. Japan

6.4.4.2.1. Japan market size and forecast, by technology
6.4.4.2.2. Japan market size and forecast, by industry vertical

6.4.4.3. India

6.4.4.3.1. India market size and forecast, by technology
6.4.4.3.2. India market size and forecast, by industry vertical

6.4.4.4. Australia

6.4.4.4.1. Australia market size and forecast, by technology
6.4.4.4.2. Australia market size and forecast, by industry vertical
6.4.4.4.3. Rest of Asia-Pacific market size and forecast, by technology
6.4.4.4.4. Rest of Asia-Pacific market size and forecast, by industry vertical

6.5. LAMEA

6.5.1. Key market trends, growth factors, and opportunities
6.5.2. Market size and forecast, by technology
6.5.3. Market size and forecast, by industry vertical
6.5.4. Market size and forecast, by country

6.5.4.1. Latin America

6.5.4.1.1. Latin America market size and forecast, by technology
6.5.4.1.2. Latin America market size and forecast, by industry vertical

6.5.4.2. Middle East

6.5.4.2.1. Middle East market size and forecast, by technology
6.5.4.2.2. Middle East market size and forecast, by industry vertical

6.5.4.3. Africa

6.5.4.3.1. Africa market size and forecast, by technology
6.5.4.3.2. Africa market size and forecast, by industry vertical

CHAPTER 7: Company profiles

7.1. Alphabet Inc. (Google Inc.)

7.1.1. Company overview
7.1.2. Company snapshot
7.1.3. Operating business segments
7.1.4. Product portfolio
7.1.5. Business performance
7.1.6. Key strategic moves and developments

7.2. Apple Inc.

7.2.1. Company overview
7.2.2. Company snapshot
7.2.3. Operating business segments
7.2.4. Product portfolio
7.2.5. Business performance
7.2.6. Key strategic moves and developments

7.3. Baidu, Inc.

7.3.1. Company overview
7.3.2. Company snapshot
7.3.3. Operating business segments
7.3.4. Product portfolio
7.3.5. Business performance
7.3.6. Key strategic moves and developments

7.4. International Business Machines Corporation

7.4.1. Company overview
7.4.2. Company snapshot
7.4.3. Operating business segments
7.4.4. Product portfolio
7.4.5. Business performance
7.4.6. Key strategic moves and developments

7.5. IPsoft Inc.

7.5.1. Company overview
7.5.2. Company snapshot
7.5.3. Product portfolio
7.5.4. Key strategic moves and developments

7.6. Microsoft Corporation

7.6.1. Company overview
7.6.2. Company snapshot
7.6.3. Operating business segments
7.6.4. Product portfolio
7.6.5. Business performance
7.6.6. Key strategic moves and developments

7.7. MicroStrategy Incorporated

7.7.1. Company overview
7.7.2. Company snapshot
7.7.3. Operating business segment
7.7.4. Product portfolio
7.7.5. Business performance
7.7.6. Key strategic moves and developments

7.8. NVIDIA Corporation

7.8.1. Company overview
7.8.2. Company snapshot
7.8.3. Operating business segments
7.8.4. Product portfolio
7.8.5. Business performance
7.8.6. Key strategic moves and developments

7.9. Qlik Technologies Inc.

7.9.1. Company overview
7.9.2. Company snapshot
7.9.3. Operating business segments
7.9.4. Product portfolio
7.9.5. Key strategic moves and developments


Artificial intelligence – Simple English Wikipedia, the …

Artificial intelligence (AI) is the ability of a computer program or a machine to think and learn. It is also a field of study which tries to make computers "smart". They work on their own without being explicitly programmed with commands. John McCarthy came up with the name "artificial intelligence" in 1955.

In general use, the term "artificial intelligence" means a program which mimics human cognition. At least some of the things we associate with other minds, such as learning and problem solving, can be done by computers, though not in the same way as we do.[1] Andreas Kaplan and Michael Haenlein define AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation".[2]

An ideal (perfect) intelligent machine is a flexible agent which perceives its environment and takes actions to maximize its chance of success at some goal or objective.[3] As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence": it is just a routine technology.

At present we use the term AI for successfully understanding human speech,[1] competing at a high level in strategic game systems (such as Chess and Go), self-driving cars, and interpreting complex data.[4] Some people also consider AI a danger to humanity if it continues to progress at its current pace.[5]

An extreme goal of AI research is to create computer programs that can learn, solve problems, and think logically.[6][7] In practice, however, most applications have focused on problems which computers do well. Searching databases and doing calculations are things computers do better than people. On the other hand, "perceiving its environment" in any real sense is far beyond present-day computing.

AI involves many different fields like computer science, mathematics, linguistics, psychology, neuroscience, and philosophy. Eventually researchers hope to create a "general artificial intelligence" which can solve many problems instead of focusing on just one. Researchers are also trying to create creative and emotional AI which can possibly empathize or create art. Many approaches and tools have been tried.

Borrowing from the management literature, Kaplan and Haenlein classify artificial intelligence into three different types of AI systems: analytical, human-inspired, and humanized artificial intelligence.[8] Analytical AI has only characteristics consistent with cognitive intelligence: it generates a cognitive representation of the world and uses learning based on past experience to inform future decisions. Human-inspired AI has elements from cognitive as well as emotional intelligence: it understands human emotions in addition to cognitive elements and considers them in its decision making. Humanized AI shows characteristics of all types of competencies (i.e., cognitive, emotional, and social intelligence) and is able to be self-conscious and self-aware in interactions with others.

The first appearance of artificial intelligence is in Greek myths, like Talos of Crete or the bronze robot of Hephaestus. Humanoid robots were built by Yan Shi, Hero of Alexandria, and Al-Jazari. Sentient machines became popular in fiction during the 19th and 20th centuries with the stories of Frankenstein and Rossum's Universal Robots.

Formal logic was developed by ancient Greek philosophers and mathematicians. This study of logic produced the idea of a computer in the 19th and 20th century. Mathematician Alan Turing's theory of computation said that any mathematical problem could be solved by processing 1's and 0's. Advances in neurology, information theory, and cybernetics convinced a small group of researchers that an electronic brain was possible.

AI research really started with a conference at Dartmouth College in 1956. It was a month-long brainstorming session attended by many people with interests in AI. At the conference they wrote programs that were amazing at the time, beating people at checkers or solving word problems. The Department of Defense started giving a lot of money to AI research and labs were created all over the world.

Unfortunately, researchers really underestimated just how hard some problems were. The tools they had used still did not give computers things like emotions or common sense. Mathematician James Lighthill wrote a report on AI saying that "in no part of the field have discoveries made so far produced the major impact that was then promised".[9] The U.S. and British governments wanted to fund more productive projects. Funding for AI research was cut, starting an "AI winter" where little research was done.

AI research revived in the 1980s because of the popularity of expert systems, which simulated the knowledge of a human expert. By 1985, a billion dollars had been spent on AI. New, faster computers convinced U.S. and British governments to start funding AI research again. However, the market for Lisp machines collapsed in 1987 and funding was pulled again, starting an even longer AI winter.

AI revived again in the 1990s and early 2000s with its use in data mining and medical diagnosis. This was possible because of faster computers and a focus on solving more specific problems. In 1997, Deep Blue became the first computer to beat a reigning world chess champion, Garry Kasparov. Faster computers, advances in deep learning, and access to more data have made AI popular throughout the world.[10] In 2011 IBM Watson beat the top two Jeopardy! players, Brad Rutter and Ken Jennings, and in 2016 Google's AlphaGo beat top Go player Lee Sedol 4 games out of 5.


A.I. Artificial Intelligence movie review (2001) | Roger Ebert

In the final act, events take David and Teddy in a submersible to the drowned Coney Island, where they find not only Geppetto's workshop but a Blue Fairy. A collapsing Ferris wheel pins the submarine, and there they remain, trapped and immobile, for 2,000 years, as above them an ice age descends and humans become extinct. David is finally rescued by a group of impossibly slender beings that might be aliens, but are apparently very advanced androids. For them, David is an incalculable treasure: "He is the last who knew humans." From his mind they download all of his memories, and they move him into an exact replica of his childhood home. This reminded me of the bedroom beyond Jupiter constructed for Dave by aliens in Kubrick's "2001." It has the same purpose, to provide a familiar environment in an incomprehensible world. It allows these beings, like the unseen beings in "2001," to observe and learn from behavior.

Watching the film again, I asked myself why I wrote that the final scenes are "problematical," go over the top, and raise questions they aren't prepared to answer. This time they worked for me, and had a greater impact. I began with the assumption that the skeletal silver figures are indeed androids, of a much more advanced generation than David's. They too must be programmed to know, love, and serve Man. Let's assume such instructions would be embedded in their programming DNA. They now find themselves in a position analogous to David in his search for his Mommy. They are missing an element crucial to their function.

After some pseudoscientific legerdemain involving a lock of Monica's hair, they are able to bring her back after 2,000 years of death--but only for 24 hours, which is all the space-time continuum permits. Do they do this to make David happy? No, because why would they care? And is a computer happier when it performs its program than when it does not? No. It is either functioning or not functioning. It doesn't know how it feels.

Here is how I now read the film: These new-generation mechas are advanced enough to perceive that they cannot function with humans in the absence of humans, and I didn't properly reflect this in my original review of the film. David is their only link to the human past. Whatever can be known about humans, he is an invaluable source. In watching his 24 hours with Mommy, they observe him functioning at the top of his ability.


Drug research turns to artificial intelligence in COVID-19 fight – Business in Vancouver

Handol Kim, CEO of Variational AI Inc.: "Even if we're able to collapse the front end, you still have five or six years of clinical trials and who knows if we need a drug in five or six years for COVID-19?" | Photo: Rob Kruyt

Variational AI Inc.'s bread and butter lies in novel drug discovery, specifically using artificial intelligence (AI) to compress the years-long preclinical process to perhaps a single year.

But in the midst of a pandemic, even a year might be too long to find a treatment for COVID-19, according to CEO Handol Kim.

"Even if we're able to collapse the front end, you still have five or six years of clinical trials and who knows if we need a drug in five or six years for COVID-19?" he said.

"We thought, 'Well, the fastest way to do this is repurposing existing drugs.'"

The pitch caught the interest of the Digital Technology Supercluster, which last month committed to spending $60 million of its $153 million budget to develop partnerships across its networks to address issues brought on by the pandemic.

Variational AI's partnership with adMare BioInnovations Inc., a not-for-profit organization that helps commercialize academic research, was among the first to get the nod from the Vancouver-based supercluster.

In pharmaceutical development, the way small molecules bind to a target such as a protein can be likened to how a key must fit a lock exactly for anything to happen.

If successful, the molecules can prevent the target (or receptor) from doing something, or else excite the protein into doing something more; in other words, they become the basis of a treatment.

That research takes years, which is why Variational AI is using artificial intelligence to accelerate the process. But the company's algorithm can also take all the approved drugs in the world and use AI to see how they bind and determine which drugs would be most effective against the virus.

No clinical trials are necessary if the drugs have already been approved by authorities.
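
In code terms, the repurposing screen described above can be pictured as a ranking loop over the catalogue of approved drugs. The sketch below is only a guess at the general shape of such a pipeline, not Variational AI's proprietary system; `predict_affinity` stands in for whatever model scores how strongly a molecule binds the viral target.

```python
# Hypothetical shape of a drug-repurposing screen: score every approved drug
# against a target with a learned affinity model, then rank the candidates.
def rank_repurposing_candidates(approved_drugs, target, predict_affinity, top=10):
    scored = [(drug, predict_affinity(drug, target)) for drug in approved_drugs]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # strongest predicted binders first
    return scored[:top]
```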

"While novel drugs and vaccines that are being developed towards COVID-19 are moving along, those kinds of activities, particularly drug development, take many, many years," said Lana Janes, a venture partner at adMare, whose organization is shepherding tech company Variational AI through the pharma world.

"It makes sense to apply it [Variational AI's technology] right now."

This early effort comes after Ottawa mandated on March 20 that the nation's five superclusters reach out to their 1,800 members to come up with ways to tackle COVID-19.

Since then, the Digital Technology Supercluster has been reviewing more than 300 submissions from its 500 members.

The Digital Technology Supercluster has officially given four projects the go-ahead, including the Variational AI-led effort.

Separate local initiatives have also caught the eye of the federal government.

Prime Minister Justin Trudeau announced on May 3 that Vancouver-based AbCellera Biologics Inc. would receive $175 million as it pursues the quick development, manufacturing and distribution of therapeutic antibodies.

Last month American pharmaceutical company Eli Lilly and Co. (NYSE:LLY) partnered with AbCellera to develop a new drug for the treatment and prevention of the COVID-19 virus.

Eli Lilly will use AbCellera's platform to zero in on antibodies generated in a natural immune response to the coronavirus.

The goal is to develop a new drug to treat people who have become infected with the virus.

Clinical trials are expected to begin as early as July.

Kim, meanwhile, said the fact that Variational AI is a tech company wading into pharmaceutical waters allows his team to look at drug development in a different way than the incumbents do.

"What's unprecedented is the speed of the research that has been mobilized across the globe to try and fight this pandemic," Janes said. "It's broken down those silos that can sometimes exist in science."


Artificial Intelligence is Evolving to Process the World Like Humans – Interesting Engineering

As engineers and researchers work on developing and perfecting their machine learning and AI algorithms, the end goal is ultimately to recreate the human brain. The most perfect AI imaginable would be able to process the world around us through typical sensory input but leverage the storage and computing strengths of supercomputers.

With that end goal in mind, it's not hard to understand the ways that AI is evolving as it continues to be developed. Deep learning AI is able to interpret patterns and derive conclusions. In essence, it's learning how to mimic the way that humans process the world around us.

That said, from the outset, AIs generally need typical computer input, like coded data. Developing AIs that can process the world through audio and visual input, sensory input, is a much harder task.

In order to understand artificial intelligence in the context of a perception-based interface, we need to understand what the end goal is. We need to understand how the brain is modeled and works.

Our brains are essentially the world's most powerful supercomputers, except for the fact that they're made out of organic material, rather than silicon and other materials.

Our right brain is largely perception-based: it's focused on the interpretation of environmental inputs like taste, feel, sound, sight, etc. Our left brain, on the other hand, is focused on rational thought. Our senses provide patterns to our right brain, and to our left brain, those senses provide the rationale for decision making. In a sense, we have two AIs in our head that work together to create a logical, yet also emotionally swayed, machine.


Human intelligence, and our definition of what an intelligent thing is, all draw back to how we ourselves process the world. In order for artificial intelligence to truly succeed, that is, to be the best version of itself that it can be, it needs to be intelligent from a human perspective.

All this draws back to modern AI in a simple way: AI is programmed in how to make a decision. Machine learning algorithms allow code to be pseudo-organically generated so that algorithms can "learn" in a sense. All of this programming is based on reasoning, on "if, then, do this."

Arguably, our brain's decision-making process is just as much based on emotions and feeling as it is on reason. Emotional intelligence is a significant portion of what makes intelligence. It's the ability to read a situation, to understand other humans' emotions and reactions. In order for AIs to evolve and be the best possible algorithm, they need to be able to process sensory input and emotion.

Most artificial intelligence systems are primarily created on the foundation of deep learning algorithms. This is the means of exposing a computer program to thousands of examples and having the AI learn how to solve problems through this process. Deep learning can be boiled down to teaching a computer how to be smart.

After any given deep learning phase for an AI, the system can perceive the inputs that it was trained on and make decisions based on them. The decision-making tree that the AI forms from traditional deep learning mimics the way the right side of our brain works. It is based on the perception of inputs, of pseudo-senses.
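
The learn-from-examples loop can be shown in a few lines. The toy below trains a single artificial neuron on labeled points; real deep learning stacks many such units and uses gradient-based optimizers, but the repeated expose-and-adjust shape is the same.

```python
# A deliberately tiny illustration of "learning from examples": a one-neuron
# classifier nudged toward the right answer on each labeled example.
def train(examples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:                # label is 0 or 1
            pred = 1 if x[0] * w[0] + x[1] * w[1] + b > 0 else 0
            err = label - pred                   # 0 when the guess is correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy dataset: points above the line y = x are labeled 1.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), 0), ((2.0, 1.0), 0)]
w, b = train(data)
print(w, b)  # learned weights separating the two classes
```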


Deep learning is a way of getting computers to reason, not just with if-then statements, but through the understanding of the situation. That said, the current situations AIs are being trained on aren't as complex as interpreting a conversation with Becky to see if she's into you. Rather, it's more along the lines of: is this a dark cat, a black bag, or the night sky? Primitive, but still sensory perception...

While deep learning is currently heavily focused on one pathway, meaning AIs are developing specialties, eventually it won't be too far-fetched to start training AIs on multiple things at once, just like a toddler might learn colors and numbers at the same time. Expanding this out, as computer processing power grows, perhaps accelerated by practical quantum computing, there's no question that AIs will evolve to become more human.

Advanced AI will continue to deal with understanding and processing patterns from the world around us. Through this, it will develop more complex models of how to process that information. In a sense, AIs are like toddlers, but soon they're going to be teenagers, and eventually, they may graduate with a doctorate. All figuratively, of course... though an age where an AI graduates from a university probably isn't that far off.


When we think about intelligent humans, we usually think of the most rationally minded people. Yet we miss out on what is so unique about human intelligence: creativity. In a sense, we take our creativity for granted, yet it is the thing that makes us the most intelligent of living beings. Our ability to process situations, not just understand what the sum of two numbers is, is what makes us uniquely intelligent. So uniquely intelligent that we can design and create artificially intelligent beings that will soon be able to match our human intelligence.

While modern AIs are primarily focused on singular strands of intelligence, whether that be finding which picture contains a bicycle or which email is spam, we're already training AIs to be all-around smart, humanly smart.


Women wanted: Why now could be a good time for women to pursue a career in AI – CNBC

The coronavirus pandemic has upended countless jobs and even entire industries, leaving many wondering which will emerge out of the other side.

One industry likely to endure or even thrive under the virus, however, is artificial intelligence (AI), which could offer a glimpse into one of the rising careers of the future.

"This outbreak is creating overwhelming uncertainty and also greater demand for AI," IBM's vice president of data and AI, Ritika Gunnar told CNBC Make It.

Already, AI has been deployed sweepingly to help tackle the pandemic. Hospitals use the technology to diagnose patients, governments employ it in contact-tracing apps and companies rely on it to support the biggest work-from-home experiment in history.

And that demand is only set to rise. Market research company International Data Corporation says it expects the number of AI jobs globally to grow 16% this year.

That could create new opportunities in an otherwise challenging jobs market. But the industry will need more women, in particular, if it is to overcome some of its historic bias challenges.

"In order to remove bias from AI, you need diverse perspectives among the people working on it. That means more women, and more diversity overall, in AI," said Gunnar.

The industry has been making progress lately. In a new report released Wednesday, IBM found that the majority (85%) of AI professionals think the industry has become more diverse over recent years, which has had a positive impact on the technology.

Of the more than 3,200 people surveyed across North America, Europe and India, 86% said they are now confident in AI systems' ability to make decisions without bias.


However, Lisa Bouari, executive director at OutThought AI Assistants and a recipient of IBM's Women Leaders in AI awards, said more needs to be done to encourage women into the industry and keep them there.

"Attracting and retaining women are two halves of the same issue supporting a greater balance of women in AI," said Bouari. "The issues highlighted in the report around career progression, and hurdles, hold the keys to helping women stay in AI careers, and ultimately attracting more women as the status quo evolves."

For Gunnar, that means getting more women and girls excited about AI from a young age.

"We should expose girls to AI, math and science at a much earlier age so they have a support system in place," said Gunnar.

Indeed, IBM's report noted that although more women have been drawn to the industry over recent years, they did not consider AI a viable career path until later in life due to a lack of support during early education.

A plurality of men (46%) said they became interested in a tech career in high school or earlier, while a majority of women (53%) only considered it a possible path during their undergraduate degree or grad school.

But Bouari said she's hopeful that the current surge in demand for AI can help drive the industry forward.

"The AI opportunities from this crisis are numerous and the career opportunities are there if we can successfully move hurdles and adopt it efficiently," she said.



Artificial Intelligence Used to Identify Light Sources With Far Fewer Measurements – Unite.AI

Julien Rebetez is the Lead Software & Machine Learning Engineer at Picterra. Picterra provides a geospatial cloud-based platform specially designed for training deep learning based detectors, quickly and securely.

Without a single line of code and with only a few human-made annotations, Picterra's users build and deploy unique, actionable and ready-to-use deep learning models.

It automates the analysis of satellite and aerial imagery, enabling users to identify objects and patterns.

What is it that attracted you to machine learning and AI?

I started programming because I wanted to make video games and got interested in computer graphics at first. This led me to computer vision, which is kind of the reverse process where instead of having the computer create a fake environment, you have it perceive the real environment. During my studies, I took some Machine Learning courses and I got interested in the computer vision angle of it. I think what's interesting about ML is that it's at the intersection between software engineering, algorithms and math, and it still feels kind of magical when it works.

Youve been working on using machine learning to analyze satellite image for many years now. What was your first project?

My first exposure to satellite imagery was the Terra-i project (to detect deforestation) and I worked on it during my studies. I was amazed at the amount of freely available satellite data that is produced by the various space agencies (NASA, ESA, etc.). You can get regular images of the planet for free every day or so, and this is a great resource for many scientific applications.

Could you share more details regarding the Terra-i project?

The Terra-i project (http://terra-i.org/terra-i.html) was started by Professor Andrez Perez-Uribe, from HEIG-VD (Switzerland), and is now led by Louis Reymondin, from CIAT (Colombia). The idea of the project is to detect deforestation using freely available satellite images. At the time, we worked with MODIS imagery (250m pixel resolution) because it provided uniform and predictable coverage, both spatially and temporally. We would get a measurement for each pixel every few days, and from this time series of measurements you can try to detect anomalies, or novelties as we sometimes call them in ML.

This project was very interesting because the amount of data was a challenge at the time, and there was also some software engineering involved to make it work on multiple computers and so on. On the ML side, it used a Bayesian neural network (not very deep at the time) to predict what the time series of a pixel should look like. If the measurement didn't match the prediction, then we would have an anomaly.
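
The prediction-versus-measurement loop Rebetez describes can be sketched in a few lines. The snippet below is purely illustrative: a simple seasonal average stands in for the Bayesian neural network, and the period and threshold values are assumptions, not the project's settings.

```python
import numpy as np

def detect_novelties(series, period=23, threshold=3.0):
    """Flag measurements that deviate from the seasonal expectation.

    series: per-pixel measurements (e.g. one value every ~16 days)
    period: observations per seasonal cycle (assumed value)
    threshold: deviation, in standard deviations, that counts as an anomaly
    """
    series = np.asarray(series, dtype=float)
    n = len(series)
    anomalies = []
    for i in range(n):
        # Expected value: mean of all observations at the same seasonal phase,
        # standing in for the Bayesian neural network's prediction.
        phase = np.arange(i % period, n, period)
        expected = series[phase].mean()
        spread = series[phase].std() + 1e-9
        if abs(series[i] - expected) > threshold * spread:
            anomalies.append(i)
    return anomalies
```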

As part of this project, I also worked on cloud removal. We took a traditional signal processing approach there, where you have a time series of measurements and some of them will be completely off because of a cloud. We used a Fourier-based approach (HANTS) to clean the time series before detecting novelties in it. One of the difficulties is that if we cleaned it too strongly, we'd also remove novelties, so it took quite a few experiments to find the right parameters.
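
The trade-off he mentions, clean too aggressively and the novelties disappear along with the clouds, shows up even in a toy harmonic-smoothing sketch. This is an assumed simplification of the HANTS idea, not the project's implementation; the harmonic count, iteration count, and tolerance are all placeholder values.

```python
import numpy as np

def harmonic_clean(series, n_harmonics=2, n_iter=3, tol=0.1):
    """Fit a few harmonics by least squares, iteratively dropping points
    that fall far below the fit (clouds usually depress the signal)."""
    y = np.asarray(series, dtype=float)
    t = np.arange(len(y))
    # Design matrix: a constant term plus sine/cosine pairs.
    cols = [np.ones_like(y)]
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k * t / len(y)
        cols += [np.cos(w), np.sin(w)]
    A = np.column_stack(cols)
    keep = np.ones(len(y), dtype=bool)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        fit = A @ coef
        # Too large a tol leaves clouds in; too small removes real novelties.
        keep &= y > fit - tol
    return fit, keep
```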

You also designed and implemented a deep learning system for automatic crop type classification from aerial (drone) imagery of farm fields. What were the main challenges at the time?

This was my first real exposure to Deep Learning. At the time, I think the main challenges were more about getting the framework to run and properly use a GPU than about the ML itself. We used Theano, which was one of the ancestors of TensorFlow.

The goal of the project was to classify the type of crop in a field from drone imagery. We tried an approach where the deep learning model used color histograms as inputs, as opposed to just the raw image. To make this work reasonably quickly, I remember having to implement a custom Theano layer, all the way down to some CUDA code. That was a great learning experience at the time and a good way to dig a bit into the technical details of Deep Learning.
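
For readers unfamiliar with the idea, a color-histogram input can be computed in a few lines. This sketch only illustrates the feature itself (the bin count is an assumption); the original work implemented it as a custom Theano/CUDA layer rather than in NumPy.

```python
import numpy as np

def color_histogram_features(image, bins=32):
    """image: H x W x 3 uint8 array -> concatenated per-channel histograms."""
    feats = []
    for channel in range(3):
        hist, _ = np.histogram(image[..., channel], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())  # normalize away the patch size
    return np.concatenate(feats)  # feature vector of length 3 * bins

# Such a vector can be fed to a small classifier in place of raw pixels.
patch = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(color_histogram_features(patch).shape)  # (96,)
```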

You're officially the Lead Software and Machine Learning Engineer at Picterra. How would you best describe your day-to-day activities?

It really varies, but a lot of it is about keeping an eye on the overall architecture of the system and the product in general, and communicating with the various stakeholders. Although ML is at the core of our business, you quickly realize that most of the time is not spent on ML itself, but on all the things around it: data management, infrastructure, UI/UX, prototyping, understanding users, etc. This is quite a change from academia or previous experience in bigger companies, where you are much more focused on a specific problem.

What's interesting about Picterra is that we not only run Deep Learning models for users, but we actually allow them to train their own. That is different from a lot of typical ML workflows, where the ML team trains a model and then publishes it to production. What this means is that we cannot manually play with the training parameters as you often do. We have to find a training method that will work for all of our users. This led us to create what we call our experiment framework, which is a big repository of datasets that simulates the training data our users would build on the platform. We can then easily test changes to our training methodology against these datasets and evaluate whether they help or not. So instead of evaluating a single model, we are really evaluating an architecture plus a training methodology.
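
In outline, the experiment framework he describes amounts to scoring one fixed training recipe across many datasets. The sketch below is illustrative only; every name in it is invented for the example, with toy stand-ins for the model and the metric.

```python
from collections import namedtuple

Dataset = namedtuple("Dataset", ["train", "test"])

def evaluate_recipe(train_fn, metric_fn, datasets):
    """Train with the same recipe on every dataset; return the mean score."""
    scores = [metric_fn(train_fn(ds.train), ds.test) for ds in datasets]
    return sum(scores) / len(scores)

# Toy stand-ins: the "model" is the training data's mean, and the metric is
# the negative absolute error against the test data's mean.
repo = [Dataset([1.0, 2.0], [1.5]), Dataset([4.0, 6.0], [5.0])]
train = lambda data: sum(data) / len(data)
metric = lambda model, test: -abs(model - sum(test) / len(test))
print(evaluate_recipe(train, metric, repo))  # closer to 0 is better
```

Comparing a candidate recipe's mean score against the current recipe's tells you whether a methodology change helps overall, rather than on one hand-picked dataset.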

The other challenge is that our users are not ML practitioners, so they don't necessarily know what a training set is, what a label is, and so on. Building a UI that allows non-ML practitioners to build datasets and train ML models is a constant challenge, and there is a lot of back-and-forth between the UX and ML teams to make sure we guide users in the right direction.

Some of your responsibilities include prototyping new ideas and technologies. What are some of the more interesting projects that you have worked on?

I think the most interesting one at Picterra was the Custom Detector prototype. A year and a half ago, we had built-in detectors on the platform: those were detectors that we trained ourselves and made accessible to users. For example, we had a building detector, a car detector, etc.

This is actually the typical ML workflow: you have some ML engineer develop a model for a specific case and then you serve it to your clients.

But we wanted to do something different and push the boundaries a bit. So we said: what if we allow users to train their own models directly on the platform? There were a few challenges to make this work. First, we didn't want this to take multiple hours: if you want to keep a feeling of interactivity, training should take a few minutes at most. Second, we didn't want to require thousands of annotations, which is typically what you need for large Deep Learning models.

So we started with a super simple model, did a bunch of tests in Jupyter, and then tried to integrate it into our platform and test the whole workflow, with a basic UI and so on. At first, it wasn't working very well in most cases, but there were a few cases where it would work. This gave us hope, and we started iterating on the training methodology and the model. After some months, we were able to reach a point where it worked well, and we now have users using this all the time.

What was interesting about this is the double challenge of keeping the training fast (currently a few minutes), and therefore the model not too complex, while at the same time making it complex enough that it works and solves users' problems. On top of that, it works with few (under 100) labels in a lot of cases.

We also applied many of Google's Rules of Machine Learning, in particular the ones about implementing the whole pipeline and metrics before starting to optimize the model. It puts you into a systems-thinking mode, where you figure out that not all of your problems should be handled by the core ML: some can be pushed to the UI, some handled in pre- or post-processing, etc.

What are some of the machine learning technologies that are used at Picterra?

In production, we are currently using PyTorch to train and run our models. We also use TensorFlow from time to time, for some specific models developed for clients. Other than that, it's a pretty standard scientific Python stack (numpy, scipy) with some geospatial libraries (gdal) thrown in.

Can you discuss how Picterra works in the backend once someone uploads images and wishes to train the neural network to properly annotate objects?

Sure. First, when you upload an image, we process it and store it in the Cloud-Optimized GeoTIFF (COG) format on our blobstore (Google Cloud Storage), which allows us to quickly access blocks of the image later on without having to download the whole image. This is a key point because geospatial imagery can be huge: we have users routinely working with 50,000 × 50,000 pixel images.
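
To make the block-access point concrete, here is a minimal windowed read using the open-source rasterio library (the file name is a placeholder). COGs are tiled internally, so a read like this fetches only the bytes covering the requested window rather than the entire file.

```python
import rasterio
from rasterio.windows import Window

# "example_cog.tif" is a placeholder path; COGs work equally well over HTTP.
with rasterio.open("example_cog.tif") as src:
    # Read a 512 x 512 block starting at column 10000, row 20000: only the
    # internal tiles covering this window are fetched, not the whole image.
    block = src.read(window=Window(10000, 20000, 512, 512))
    print(block.shape)  # (bands, 512, 512)
```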

So then, to train your model, you create your training dataset through our web UI by defining three types of areas.

Once you have created this dataset, you can simply click "Train" and we'll train a detector for you. What happens next is that we enqueue a training job and have one of our GPU workers pick it up (new GPU workers are started automatically if there are many concurrent jobs), train your model, save its weights to the blobstore, and finally predict in the testing area to display on the UI. From there, you can iterate on your model: typically, you'll spot some mistakes in testing areas and add training areas to help the model improve.
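
That enqueue-and-pick-up flow can be sketched with an in-process queue. This is an illustrative toy, not Picterra's code; the job fields and helper functions are invented for the example, with stubs where the real system would train on a GPU and write to the blobstore.

```python
import queue

jobs = queue.Queue()

def train_detector(dataset):      # stub: the real worker trains on a GPU
    return {"weights": f"trained on {dataset}"}

def save_weights(key, model):     # stub: the real worker writes to the blobstore
    print("saved", key)

# Clicking "Train" enqueues a job like this one.
jobs.put({"detector_id": 42, "dataset": "user-annotations"})

# Each GPU worker runs a loop of this shape.
while not jobs.empty():
    job = jobs.get()
    model = train_detector(job["dataset"])
    save_weights(f"weights/{job['detector_id']}", model)
    # ...then predict over the testing areas so the UI can display results.
```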

Once you are happy with your model's score, you can run it at scale. From the user's point of view, this is really simple: just click "Detect" next to the image you want to run it on. But it's a bit more involved under the hood if the image is large. To speed things up, handle failures, and avoid detections taking multiple hours, we break large detections down into grid cells and run an independent detection job for each cell. This allows us to run very large-scale detections. For example, we had a customer run detection over the whole country of Denmark on 25cm imagery, which is in the range of terabytes of data for a single project. We've covered a similar project in this Medium post.
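
The grid-cell decomposition is easy to picture with a small helper; the 10,000-pixel cell size and the image dimensions below are made-up numbers, not Picterra's actual tiling parameters.

```python
def grid_cells(width, height, cell=10_000):
    """Yield (col, row, w, h) tiles covering a width x height raster."""
    for row in range(0, height, cell):
        for col in range(0, width, cell):
            yield col, row, min(cell, width - col), min(cell, height - row)

# Each cell becomes an independent job: cells run in parallel, and a failed
# cell can be retried on its own instead of restarting the whole detection.
jobs = list(grid_cells(width=400_000, height=300_000))
print(len(jobs), "independent detection jobs")  # 40 x 30 = 1200 cells
```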

Is there anything else that you would like to share about Picterra?

I think what's great about Picterra is that it is a unique product, at the intersection of ML and geospatial. What differentiates us from other companies that process geospatial data is that we equip our users with a self-serve platform. They can easily find locations, analyze patterns, and detect and count objects on Earth observation imagery. It would be impossible without machine learning, but our users don't even need basic coding skills: the platform does the work based on a few human-made annotations. For those who want to go deeper and learn the core concepts of machine learning in the geospatial domain, we have launched a comprehensive online course.

It is also worth mentioning that the possible applications of Picterra are endless: detectors built on the platform have been used in city management, precision agriculture, forestry management, humanitarian and disaster risk management, farming, etc., to name just the most common applications. We are basically surprised every day by what our users are trying to do with our platform. You can give it a try and let us know how it worked on social media.

Thank you for the great interview and for sharing with us how powerful Picterra is. Readers who wish to learn more should visit the Picterra website.

Read this article:

Artificial Intelligence Used to Identify Light Sources With Far Fewer Measurements - Unite.AI

The Impending Artificial Intelligence Revolution in Healthcare – Op-Ed – HIT Consultant

Harjinder Sandhu, CEO of Saykara

For at least a decade, healthcare luminaries have been predicting the coming AI revolution. In other fields, AI has evolved beyond the hype and has begun to showcase real and transformative applications: autonomous vehicles, fraud detection, personalized shopping, virtual assistants, and so on. The list is long and impressive. But in healthcare, despite the expectations and the tremendous potential in improving the delivery of care, the AI revolution is just getting started. There have been definite advancements in areas such as diagnostic imaging, logistics within healthcare, and speech recognition for documentation. Still, the realm of AI technologies that impact the cost and quality of patient care continues to be rather narrow today.

Why has AI been slow in delivering change to the care processes of healthcare? With a wealth of new AI algorithms and computing power ready to take on new challenges, the limiting factor in AI's successful application has been the availability of meaningful data sets to train on. This is surprising to many, given that EHRs were supposed to have solved the data barrier.

The promise of EHRs was that they would create a wealth of actionable data that could be leveraged for better patient care. Unfortunately, this promise never fully materialized. Most of the interesting information that can be captured in the course of patient care either is not captured at all or is captured minimally or inconsistently. Often, just enough information is recorded in the EHR to support billing, and in plain text (not actionable) form. Worse, documentation requirements have had a serious impact on physicians, on whom it ultimately fell to input much of that data. Burnout and job dissatisfaction among physicians have become endemic.

EHRs didn't create the documentation challenge, but using an EHR in the exam room can significantly detract from patient care. Speech recognition has come a long way, although it hasn't changed the fundamental dynamic of the screen interaction that takes away from the patient. Indeed, using speech recognition, physicians stare at the screen even more intently, as they must be mindful of mistakes that the speech recognition system may generate.

Having been involved in the advancement of speech recognition in the healthcare domain, and having witnessed its successes and failures, I continue to believe that the next stage in the evolution of this technology is to free physicians from the tyranny of the screen: to evolve from speech recognition systems to AI-based virtual scribes that listen to doctor-patient conversations, create notes, and enter orders.

Using a human scribe solves a significant part of the problem for physicians: scribes relieve the physician of having to enter data manually. For many physicians, a scribe has allowed them to reclaim their work lives (they can focus on patients rather than computers) as well as their personal lives (fewer evening hours completing patient notes). However, the inherent cost of both training and then employing a scribe has led to many efforts to build digital counterparts: AI-based scribes that can replicate the work of a human scribe.

Building an AI scribe is hard. It requires a substantially more sophisticated system than the current generation of speech recognition systems. Interpreting natural language conversation is one of the next major frontiers for AI in any domain. The current generation of virtual assistants, like Alexa and Siri, simplifies the challenge by putting boundaries on speech, forcing a user, for example, to express a single idea at a time, within a few seconds, and within the boundaries of a list of skills that these systems know how to interpret.

In contrast, an AI system that is listening to doctor-patient conversations must deal with the full complexity of human speech and narrative. A patient visit could last five minutes or an hour, the speech involves at least two parties (the doctor and the patient), and a patient's visit can meander into irrelevant details and branches that don't necessarily contribute to the physician's diagnosis.

As a result of the complexity of conversational speech, it is still quite early for fully autonomous AI scribes. In the meantime, augmented AI scribes, AI systems augmented by human power, are filling in the gaps of AI competency and allowing these systems to succeed while incrementally chipping away at the goal of making these systems fully autonomous. These systems are beginning to do more than simply relieve doctors of the burden of documentation, though that is obviously important. The real transformative impact will be from capturing a comprehensive set of data about a patient journey in a structured and consistent fashion and putting that into the medical records, thereby building a base for all other AI applications to come.

About Harjinder Sandhu

Harjinder Sandhu is CEO of Saykara, a company leveraging the power and simplicity of the human voice to make delivering great care easier while streamlining physician workflow.

Follow this link:

The Impending Artificial Intelligence Revolution in Healthcare - Op-Ed - HIT Consultant

Is artificial intelligence the answer to the care sector amid COVID-19? – Descrier

It is clear that the health and social care sectors in the United Kingdom have long suffered from systematic neglect, and this has predictably resulted in dramatic workforce shortages. These shortages have been exacerbated by the current coronavirus crisis and will be further compounded by the stricter immigration rules coming into force in January 2021. The Home Office is reportedly considering an unexpected solution to this: replacing staff with tech and artificial intelligence.

To paraphrase Aneurin Bevan, the mark of a civilised society is how it treats its sick and vulnerable. As a result, whenever technology is broached in healthcare, people are sceptical, particularly if it means removing that all-important human touch.

Such fears are certainly justified. Technology and AI have become fraught with issues: there is a wealth of evidence showing that algorithms can absorb the unconscious human biases of their designers, particularly around gender and race. Even the Home Office has been found using discriminatory algorithms to scan and evaluate visa applications, while a similar algorithm used in hospitals in the US was found to be systematically discriminating against black people, as the software was more likely to refer white patients to care programmes.

Such prejudices present a clear case against AI in healthcare. Indeed, technology is by no means a quick fix for staff shortages and should never be used at the expense of human interaction, especially in areas as emotionally intensive as care.

However, this does not mean that the introduction of AI into the UK care sector is necessarily a slippery slope to a techno-dystopia. Robotics has already made vital changes in the healthcare sector: surgical robots, breast cancer scanners and algorithms that can detect even the early stages of Alzheimer's have proved revolutionary. The coronavirus crisis itself has reinforced just how much we rely on technology, as we keep in touch with our loved ones and work from home.

Yet in a more dramatic example of the potential help AI could deliver in the UK, robots have been used to disinfect the streets of China amid the coronavirus pandemic, and one hospital at the centre of the outbreak in Wuhan outnumbered its doctor workforce with robotic aides to slow the spread of infection.

Evidently, if used correctly, AI and automation could improve care and ease the burden on staff in the UK. The Institute for Public Policy Research has even calculated that 30% of the work done by adult social care staff could be automated, saving the sector £6 billion. It is important to stress, though, that this initiative cannot be used as a cost-cutting exercise: if money is saved by automation, it should be put back into the care sector to improve both the wellbeing of those receiving care and the working conditions of the carers themselves.

There is much that care robots cannot do, but they can provide some level of companionship and can assist with medication preparation, while smart speakers can remind or alert patients. AI can realistically monitor vulnerable patients' safety around the clock while allowing them to maintain their privacy and sense of independence.

There are examples of tech being used in social care around the world that demonstrate the positive effect it can have. Japan, specifically, has implemented a robot called Robear that helps carry patients from their beds to their wheelchairs; a bionic suit called HAL that assists with motor tasks; and Paro, a baby harp seal bot that serves as a therapeutic companion and has been shown to alleviate anxiety and depression in dementia sufferers. Another, a humanoid called Pepper, has been introduced as an entertainer, cleaner and corridor monitor to great success.

It is vital, though, that if automation and AI are to be introduced on a wide scale into the care sector, they must work in harmony with human caregivers. They could transform the care sector for the better if used properly; however, the current government does not view it this way, and the focus on automation is being ushered in to coincide with the immigration rules that will bar migrant carers from entry. Rolling out care robots across the nation on such a huge scale in the next nine months is mere blue-sky thinking; replacing the flesh-and-blood hard graft of staff with robots is therefore far-fetched at best, and at worst disastrous for a sector suffering under a 110,000-strong staff shortage. Besides, robots still disappointingly lack the empathy required for the job and simply cannot give the personal, compassionate touch that is so important; they can only ease the burden on carers, not step into their shoes.

While in the long term it is possible that automation in the care sector could help ease the burden on staff and plug gaps as and when needed, the best course of action currently attainable to solve the care crisis is for the government to reconsider just who it classifies as low skilled in relation to immigration, as some Conservative MPs have already suggested.

To remedy the failing care sector, the government should both invest in home-grown talent and relax restrictions on carers from overseas seeking to work in the country. A renovation of the care sector is needed: higher wages, more reasonable hours, more secure contracts, and the introduction of a care worker visa are what is so desperately needed. If this is implemented in conjunction with support from AI and automation, we could see the growing and vibrant care sector for which this country is crying out.

Excerpt from:

Is artificial intelligence the answer to the care sector amid COVID-19? - Descrier

Cryptocurrency Market Capitalizations | CoinMarketCap

#   Market Cap          Circulating Supply        Change (24h)
1   $178,091,943,201    18,370,175 BTC            -1.81%
2   $23,454,459,129     110,853,677 ETH           0.13%
3   $9,788,746,425      44,112,853,111 XRP *      2.52%
4   $6,362,837,597      6,361,032,509 USDT *      -0.30%
5   $4,949,058,674      18,402,238 BCH            6.22%
6   $3,911,928,819      18,400,765 BSV            1.41%
7   $3,114,696,521      64,690,056 LTC            2.37%
8   $2,671,488,827      155,536,713 BNB *         0.99%
9   $2,567,303,989      922,473,488 EOS *         1.97%
10  $2,036,556,704      709,840,732 XTZ *         5.12%
11  $1,469,639,681      20,232,489,050 XLM *      1.31%
12  $1,438,953,776      350,000,000 LINK *        8.71%
13  $1,351,750,650      25,927,070,538 ADA        0.92%
14  $1,132,236,400      17,553,482 XMR            -0.21%
15  $1,115,912,568      16,603,196,347 CRO *      1.02%
16  $1,094,681,294      66,682,072,191 TRX *      1.59%
17  $1,060,173,208      999,498,893 LEO *         -0.26%
18  $931,930,134        222,668,093 HT *          -0.25%
19  $835,831,909        116,313,299 ETC           2.50%
20  $788,511,027        70,538,831 NEO *          12.68%
21  $770,077,833        9,484,216 DASH            3.58%
22  $706,145,799        706,239,390 USDC *        -0.34%
23  $675,761,431        288,208,798 HEDG *        -1.78%
24  $581,292,063        2,779,530,283 MIOTA *     13.89%
25  $535,204,624        190,688,439 ATOM *

More here:

Cryptocurrency Market Capitalizations | CoinMarketCap

Bitcoin Tops $10,000 First Time Since February, Before Halving – Bloomberg

The world's biggest cryptocurrency briefly rallied back above $10,000 ahead of a technical event seen by some as a catalyst for longer-term price gains.

Bitcoin rose as much as 2.7% to a high of $10,070 on Friday in Asia trading, briefly taking it into five figures for the first time since Feb. 24, and was holding at $9,929 at 7:40 a.m. in New York. That's before the cryptocurrency's upcoming halving, when the rewards miners receive for processing transactions will be cut in half as soon as next week, an intentional feature of Bitcoin designed to control inflation.

"Bitcoin trades sentiment-driven at its peaks and valleys, and the post-halving hangover is part of the normal price ebbs and flows on top of Bitcoin's fundamental value," said Jehan Chu, managing partner at blockchain investment and advisory firm Kenetic Capital.

The cryptocurrency has more than doubled in price since mid-March, joining a wider rally in global equities after getting rocked by coronavirus-related volatility that's depressed economic growth, consumption activity and corporate earnings. It traded around $10,500 on Feb. 13.

"Markets have been bullish since the March lows and this is across asset classes, including crypto," said Vijay Ayyar, Singapore-based head of business development at crypto exchange Luno. "Money-printing by the Fed and other central banks globally have given a lot of confidence to investors that the economy will be supported no matter what."

Paul Tudor Jones, founder and chief executive of Tudor Investment Corp., said he bought Bitcoin futures as a hedge against inflation he sees being stoked by massive fiscal spending and bond-buying by central banks to combat the pandemic. Jones previously dabbled in Bitcoin in 2017, doubling his money before exiting the trade near its peak at almost $20,000.

"Bitcoin will likely see sub-$10,000 levels post-halving, but the surge in institutional interest from investors like Paul Tudor Jones is undeniable validation for Bitcoin," Chu said.

While Bitcoin has been notoriously volatile over the years and crashed spectacularly after a peak near $20,000 in December 2017, it has also slowly been making inroads. Regulated exchanges have gradually been offering more in the way of products like futures and options around the asset and institutional interest has been building.

Cryptocurrencies still have their fair share of skeptics, from Warren Buffett to Nouriel Roubini. And data last month from PricewaterhouseCoopers LLP showed the industry struggled to attract mainstream investment last year as global fund raising and deals both dried up, including a 76% collapse in M&A value to $451 million from almost $1.9 billion the year before.

"With the Bitcoin halving fast approaching, we believe a short-term pullback is highly likely immediately post-halving, as traders begin taking profits," said Lennard Neo, head of research at Stack AM Pte., which provides cryptocurrency trackers and index funds. "In the longer term, we can expect Bitcoin to register significant price appreciation toward the end of 2020 and early 2021."

View original post here:

Bitcoin Tops $10,000 First Time Since February, Before Halving - Bloomberg

Cryptocurrency market jumps by over $13 billion driven by bitcoin as major technical event approaches – CNBC

A rally in bitcoin led the cryptocurrency market higher ahead of a major technical event for the digital coin and as industry participants report an increased interest from institutional investors.

Bitcoin crossed $10,000 on Friday morning Singapore time, the first time it has hit that price since February, according to data from CoinDesk. The cryptocurrency had pared some of those gains and was trading around $9,900.75 as of 1:39 p.m. Singapore time, still representing a more than 6.4% rise from the day before.

The entire market capitalization, or value, of the cryptocurrency market had jumped by more than $13 billion from the day before, as of around 1:39 p.m. Singapore time. That move had been largely driven by bitcoin, which makes up most of that figure. The value of the entire market stood at $268.07 billion.

Industry participants said that a number of factors, from supportive central bank monetary policy to increased interest from institutional investors, have fed into the bitcoin rally.

Bitcoin suffered two bouts of intense selling in March, sending it to a low of around $3,867, a price not seen since March 2019. Since then, the price has rallied over 150%.

Meanwhile, stock markets, which also saw sharp drops in March, have recovered. The Dow Jones Industrial Average is up 28.4% since its March low.

"Overall markets have been bullish since the March lows and this is across asset classes, including crypto," Vijay Ayyar, head of business development at cryptocurrency exchange Luno, told CNBC. "Money printing by the Fed and other central banks globally have given a lot of confidence to investors that the economy will be supported no matter what."

The U.S. Federal Reserve has announced a number of unprecedented measures to help cushion the economic blow from the coronavirus outbreak. Other central banks around the world, including the European Central Bank (ECB), have unveiled their own stimulus packages. Central bank policies are seen as supportive of risk assets like stocks.

Part of the rise in bitcoin's price since the March low has been anticipation of a technical event known as "halving."

Bitcoin is not issued by a centralized authority like fiat currencies are, which is why it is often called a "decentralized" cryptocurrency. Instead, it is governed by code and is underpinned by a technology known as blockchain.

In the world of bitcoin, so-called miners with specialized high-powered computers compete with each other to solve complex math problems to validate bitcoin transactions. Whoever "wins" this race gets rewarded in newly minted bitcoin. This "mining" activity happens in blocks; a block is essentially a group of transactions joined into one.
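
The "race" is a brute-force search for a lucky number. The toy below captures the idea with a single SHA-256 pass and an easy difficulty; real Bitcoin mining double-hashes a block header against a far harder target, so treat this strictly as an illustration.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 20) -> int:
    """Brute-force a nonce whose hash falls below the target."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # the "winning" nonce earns the block reward
        nonce += 1

print(mine(b"example block"))  # takes about 2**20 hashes on average
```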

Currently, these miners receive 12.5 bitcoin per block mined. The rewards are halved every few years to keep a lid on inflation. On May 12, the reward per block will be cut in half again, to 6.25 new bitcoin.

The effect is that the supply of new bitcoin coming onto the market is reduced. Previous halving events, which happen every four years, have preceded big price increases in bitcoin.
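
The schedule behind this is fixed in Bitcoin's consensus rules: the block subsidy started at 50 BTC and halves every 210,000 blocks, roughly every four years. A two-line function reproduces the 12.5 to 6.25 BTC cut described above.

```python
def block_subsidy(height: int) -> float:
    """New BTC per block: starts at 50 and halves every 210,000 blocks."""
    return 50.0 / 2 ** (height // 210_000)

for height in (0, 210_000, 420_000, 630_000):
    print(height, block_subsidy(height), "BTC")
# 0 -> 50.0, 210000 -> 25.0, 420000 -> 12.5, 630000 -> 6.25 (May 2020)
```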

"For the past few weeks, we have seen additional players enter the BTC market as prices have trended upward in anticipation of the halving event as bulls saw this as an opportunity to buy BTC ahead of a price pop and what many expect will be significant price appreciation," Matthew Dibb, co-founder of Stack, a bitcoin index fund provider, told CNBC. BTC refers to bitcoin's currency code like USD for the U.S. dollar.

"This has undoubtedly continued into this week and may even carry over the weekend as the halving draws closer."

Dibb said there are other factors at play as well, including more institutional money flowing into bitcoin.

Paul Tudor Jones, a high-profile Wall Street hedge fund manager, revealed in a message that one of his funds holds a low single-digit percentage in futures on the cryptocurrency, Bloomberg News reported.

"The news that renowned investor, Paul Tudor Jones, has backed bitcoinpublicly praising the asset for its properties as a store of value has almost certainly helped catalyse BTC's sudden movement into the US$10,000 zone," Dibb said.

"With monetary easing policies and 'unlimited' economic stimuli being recently unveiled across the world, fiat currencies seem set to weaken substantially. This has, in turn, led to bitcoin's narrative as a 'store of value' to gain added traction amongst investors who are seeking to hedge against volatility in traditional markets."

Bitcoin has often been compared to gold as a so-called safe haven asset during turbulent times for other risky assets like stock markets. However, recently, bitcoin has fallen and risen when stock markets have.

Bitcoin has always been known as a very volatile asset subject to huge price swings. In 2017, bitcoin saw somewhat of a frenzy that sent its price from under $1,000 at the start of the year to a record high of over $19,700 in December that year.

However, in 2018 the price of bitcoin came crashing down to just over $3,000 by mid-December.

Dibb believes that the recent rally is different from what was seen in 2017.

"This market is not moving purely on the back of retail speculationand it is primarily Bitcoin which is experiencing gains, not the altcoin market," Dibb said referring to smaller digital coins. "It is only now that we are really beginning to see institutional and accredited investors operating within the Bitcoin space, bringing a level of market maturity and financial understanding which was all but absent from the cryptocurrency sector as late as 2017 and 2018."

However, the risk of a substantial drop remains.

"We have gone from 3K to 10K in 2 months, too fast, too soon. There will be a pullback, and that will determine what kind of crash it is," Luno's Ayyar said.

"We could pull back to 8K, hold, and them move higher to 15K. Or we could go right back down to 3K as well. At this pointthough, one has to be bullish, unless, we see a violent move down. I think the current run up though is part of a larger move up, so don't think we'll see 3K again anytime soon. But if we do run up to 15-20K, then likelihood of a big move down and larger correction is higher."

Read the rest here:

Cryptocurrency market jumps by over $13 billion driven by bitcoin as major technical event approaches - CNBC

All Facts and Figure of Cryptocurrency is Clear as Real – AMBCrypto

Introduction

Many people view bitcoin as a black-market currency, which is probably due to a lack of understanding of the concept. The one market where bitcoin has long been widely accepted is the darknet, where everything was available for sale with complete anonymity.

Before we discuss the facts of bitcoin further, note that allin1bitcoins offers some more detail on them. First, though, let us explain what the dark web actually is and why it is popular among its users. An anonymous network like the darknet helps you hide your identity; likewise, a bitcoin exchange on the darknet helps you keep your transaction history hidden.

Are All Bitcoin Activities Illegal?

Since masses of people believe that bitcoin is illegal, it is important to explain that the reality is different. You may assume that all bitcoin activity involves illegal dealings, but here is a fact check.

Bitcoin is not as anonymous as you might think: the darknet is far more anonymous, and bitcoin is nowhere near it in that respect. The blockchain keeps bitcoins safe and accounted for, whether in use or in storage.

There is no physical record connecting people with their wallets apart from the blockchain, and it stores only the sender's address and the receiver's address. The blockchain keeps the coin holder's identity as anonymous as possible and never reveals their name; your address only becomes visible when bitcoins move in or out in a transaction.

However, governments and law enforcement in many countries have been able to trace this activity and the people behind the anonymous darknet, though, since it is one of the largest platforms, filtering out everyone has not been possible. Many bitcoins from the darknet were seized by governments and later auctioned off to bidders. While seizing the coins and tracing the addresses from which they were sent, investigators found that the people who exchanged them were darknet dealers.

Is it Really Anonymous?

From what we have described above, we hope it is clear that the darknet is not as anonymous as it is assumed to be. Everything can be traced, though only by highly expert professionals working with extreme precision. Some of the addresses used for bitcoin transactions on the darknet have been dissected and found to be fake addresses operated in the name of the FBI.

Conclusion

It may sound crude, but the reality is that the dark web needs bitcoin to thrive, and bitcoin, in turn, has leaned on the dark web; they are mutually dependent, and that is unlikely to change. Bitcoin is decentralized, while the dark web can be anonymous but is not easy to decentralize. A lost bitcoin is a total loss to the system, whereas a fault in the dark web can be stripped down and minimized.

Disclaimer: This is a paid post and should not be considered as news/advice.

Read more from the original source:

All Facts and Figure of Cryptocurrency is Clear as Real - AMBCrypto

PiixPay Allows Customers to Conveniently Use Cryptocurrency to Handle Bills and Recurring Payments – CardRates.com

In a Nutshell: PiixPay is a fintech platform that allows users to easily pay bills using cryptocurrency. The technology converts Bitcoin, Bitcoin Cash, Litecoin, or Dash to euros and sends the fiat money to the user's bank account. PiixPay's Instafill feature allows deposits made to the user's crypto wallet to be automatically converted to euros and deposited into his or her bank account. The platform promotes the adoption of cryptocurrency by both consumers and merchants by making it simpler to use and demonstrating its real-world use.

When PiixPay co-founders Evald-Hannes Kree and Raivo Malter began mining Bitcoin back in 2013, they recognized cryptocurrency's value as a decentralized currency system.

But they also recognized how a lack of infrastructure was keeping crypto from reaching its full potential as a form of digital currency.

In seeking to fill that gap, they created PiixPay.

The Estonia-based platform allows users to make bank payments using the Bitcoin, Bitcoin Cash, Litecoin, and Dash cryptocurrencies, converting users' crypto to euros in 102 countries.

Saar said the founders were motivated to create the platform for their own personal use but quickly realized the value in scaling the technology so other crypto enthusiasts could also use their digital currencies in the real world.

"They started it for fun, for themselves. Like a real startup, it must arise out of a need, and the need is your own, and you want to solve it," Saar said. "Then you understand that there are many more people in the world facing the same problem, so you say, let's solve it for them too."

"PiixPay is an all-new crypto-payment platform that allows anyone to send and receive invoices in the form of digital assets across the globe," according to an article on the PiixPay blog. "It makes use of a standardized currency rate so that customers can lock in a fixed exchange rate, thereby minimizing currency losses for all involved parties."

To use the platform, users begin by entering the specifics for the invoice they wish to pay, as well as their name and contact information.

"Once all of the paperwork is over, the payee needs to send across a fixed number of bitcoins to a specific wallet address that is provided to them," according to the article. "After processing all of the data-sets, PiixPay then carries out the payment in the form of a SEPA [Single Euro Payments Area] bank transaction (using the euro as its currency standard)."

Saar said that when the company began in 2014, Bitcoin was the only cryptocurrency PiixPay users could use to pay bills, but due to the notoriously slow speed of the original crypto, the founders added the options available today.

He also said that paying bills is the most common use case for PiixPay, and the easiest example to explain to people how the platform works, although there are uses for it beyond making payments.

The company explains on its website how the platform is economically viable and offers convenience to what can be a complicated process.

"Owing to the various monetary regulations that exist within Eastern Europe, SEPA transfers are usually cumbersome," according to the website. "However, PiixPay helps push these transactions in a way that all invoices are cleared within a period of 1-3 working days."

PiixPay's Instafill service connects a user's crypto wallet address to a bank account to easily convert crypto to euros.

"Each time coins are sent to that dedicated address, the payment processor will exchange the cryptocurrency and send the fiat to your bank account," according to the website. "You can also check the status of any payment at all times."

Saar said this process is extremely fast because PiixPay is constantly exchanging cryptocurrency and maintains buffer funds in Europe, so as soon as the wallet receives cryptocurrency, the company is already making a payout to the user's IBAN (International Bank Account Number).

"This is the fastest way for people who are getting paid in cryptocurrency to have it in their bank account without doing anything else," Saar said. "You just provide your company with your wallet address, and every time a payment is made, you don't have to wait. Your euros will be in your IBAN account."

He said this is a very popular use among PiixPay users.

The platform is also easy to use for merchants, according to the company.

If users wish to apply for a merchant account, they can do so by sending the folks over at PiixPay an email describing the nature of their business setup, along with their contact details, according to the website.

And the open-source API is available to everyone to utilize and modify to fit their own needs and specifications.

Platform users may also select the company's convenient PDF invoicing option for greater ease of use.

"When an invoice is generated, companies can directly forward these documents to the PiixPay processing module, which will then scan the QR code and convert the payment amount into the correct number of Bitcoins (based on the current exchange rate)," according to the website.
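
The conversion step itself is simple arithmetic. The sketch below is a toy illustration, not PiixPay's code: the exchange rate and the fee are made-up numbers, and the article does not describe the company's actual fee structure. Divide the euro amount by the locked-in rate to get the BTC the payer must send.

```python
def invoice_btc_amount(amount_eur: float, eur_per_btc: float,
                       fee_pct: float = 1.0) -> float:
    """EUR invoice -> BTC to send at a locked-in rate (fee is hypothetical)."""
    gross = amount_eur * (1 + fee_pct / 100)
    return round(gross / eur_per_btc, 8)  # BTC is divisible to 8 decimals

print(invoice_btc_amount(250.0, 8500.0))  # -> 0.02970588
```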

Saar also discussed the evolving nature of the cryptocurrency ecosystem and how platforms like PiixPay can help lead to more consumers adopting digital currencies.

"It's a question of the egg and the chicken," he said. "If there is no merchant accepting crypto, it means there is no service and no means to spend your crypto."

And PiixPay is essentially introducing cryptocurrency to people who are on the old monetary system, Saar said.

"But I think this is a very necessary service at the moment because it gives people some kind of peace of mind, as cryptocurrency can be seen as volatile," he said. "Merchants want to be protected against falling rates. And bookkeepers and accountants want to see some steady assets."

Saar said knowing how to deal with swift increases or decreases in the value of Bitcoin is still a gray area for many accountants.

The more merchants and accountants who can see real-world uses of cryptocurrencies, the better, Saar said.

The advantage of having these services automatically convert cryptocurrency to fiat and deposit it directly into bank accounts so that users can pay bills is that it simplifies the manual process of exchanging into fiat and sending via bank transfer, which can become very complex and time-consuming, according to DashNews.org.

Removing some of the complexities from the process increases the likelihood that the average consumer will adopt cryptocurrency.

Saar said there are a lot of possibilities with Bitcoin and other cryptocurrencies, but it is important to help educate the general public about how they work.

Eventually, receiving your paycheck in crypto or sending friends and family crypto will be as common as seeing a direct deposit in your bank account.

Originally posted here:

PiixPay Allows Customers to Conveniently Use Cryptocurrency to Handle Bills and Recurring Payments - CardRates.com

Bitcoin (BTC) Remains a Widely-Used Cryptocurrency for Dark Web Transactions, a New Report Claims – Crowdfund Insider

A recent report from the Rand (Research And Development) Corporation, an American non-profit global policy think tank that's funded by the US government (as well as private endowment corporations, universities, and private individuals), claims that Bitcoin (BTC), the flagship cryptocurrency, is being used to carry out a relatively large number of dark web transactions.

Rand Corporation's study looked into the use of privacy-oriented digital currencies, such as Monero (XMR) and Zcash (ZEC), to facilitate dark web transactions.

The Electric Coin Company, the firm behind the development of Zcash, had commissioned the research study, which was published on May 6, 2020.

The report says that Zcash has only a minor presence on the dark web, which suggests that it may have been seen as a less attractive option to dark web users and is used less often compared to other cryptocurrencies, particularly Bitcoin and Monero.

The report acknowledges that there may be some indications or anecdotal evidence that Zcash could have been used or promoted for use in illicit activities.

However, the report claims there is no evidence of widespread illicit use of Zcash. It goes on to clarify:

[The] absence of evidence does not equate to evidence of absence; enduring vigilance against malicious use of this cryptocurrency is nonetheless important.

Erik Silfversten, an analyst at Rand Europe, says that there wasn't any significant evidence that Zcash had been used to carry out illicit transactions; however, he admitted that this doesn't mean the cryptocurrency isn't used for illegal activity at all.

Silfversten added:

We have to look at technology as a neutral, that it could be used for a wide variety of applications, and then we have to look at the actual evidence.

In January 2020, Chainalysis, a leading blockchain analytics and cybersecurity firm, reported that it traced $2.8 billion in Bitcoin (BTC) that criminals moved onto cryptocurrency exchanges in 2019. The company claims that most of these transactions went through Binance and Huobi, two of the world's largest crypto trading platforms.

Chainalysis management noted:

While exchanges have always been a popular off-ramp for illicit cryptocurrency, they've taken in a steadily growing share since the beginning of 2019. Over the course of the entire year, we traced $2.8 billion in Bitcoin that moved from criminal entities to exchanges.

Read more:

Bitcoin (BTC) Remains a Widely-Used Cryptocurrency for Dark Web Transactions, a New Report Claims - Crowdfund Insider

Bitcoin’s halving might see a large influx of investors wanting a piece of the cryptocurrency market – Mashable SE Asia

Cryptocurrency is the out-of-control child of the investment world. Its prices fluctuate by the minute, and it has been deemed one of the riskier investments to make. Even Warren Buffett, one of the richest men in the world, discourages people from investing in it.

But every four years, the reward for mining bitcoin is cut in half, a design intended to keep the supply of new coins in check. Mining then yields only half as many new coins, which slows the growth of the coin supply and guards against price inflation.

But the halving brings along another factor: it tends to attract more investors.

eToro analyst Simon Peters said, "During and after the first halving in 2012, the key investors were those already involved in the asset class. The bitcoin investor base was almost exclusively made up of those in the know: blockchain scientists and data programmers, as well as libertarians interested in the idea of a monetary system outside of political influence and central bank control."

Around the 2012 halving, the price of one coin sat at about US$13 and peaked at US$230 just six months later. Four years later, in 2016, the reward was halved again, and that's when people started to pay more attention to it; the price eventually rocketed to about US$9,800 per coin.

This was one of the reasons cryptocurrencies were put on the map, and more people began looking for ways to sink their hands into this digital gold mine.

During the halving, eToro saw that 50 percent of the investors in Malaysia were millennials.

Peters added, "Alongside the computer programmers and blockchain scientists were ordinary people, from management consultants to electricians and hairdressers. Suddenly bitcoin was on everyone's lips."

Since then, the crypto industry has matured, with talks of regulation, institutional investors entering the market and even central banks expressing an interest in the asset class. Combine this with another price rally expected after the 2020 halving, and we could be on the precipice of crypto becoming a mainstay of investors' portfolios in the same way as stocks, bonds and commodities.

eToro believes that with every bitcoin halving, the technology and pricing will improve, along with adoption and regulation.

The halving will happen on May 12, which could see the price of a bitcoin drop to about US$3,000 from its current price of US$9,900.

If you plan to invest in bitcoin when the halving happens, use services that have been endorsed by your country's government, and only invest what you're willing to lose.

Read more:

Bitcoin's halving might see a large influx of investors wanting a piece of the cryptocurrency market - Mashable SE Asia