The Evolutionary Perspective
Category Archives: AI
Posted: February 10, 2017 at 3:14 am
If you accept that business is always evolving, learning and changing, then you won't be surprised by this forecast. Think ultimate velocity. Think the next wave of digital disruption--one that makes mobile, big data and the cloud seem like old news. The competitive landscape of companies, markets and individuals just got very complex and interesting. Artificial intelligence (AI) is the new competitive advantage. Our civilization is heading for a reality check.
We will need to make a call very soon about how the AI Wars will play out. Do we want a Human-Centric Future, enabled by AI but not replaced by it? This will be a central question in the debate over AI in work, society and business. We need to consider the future trends in AI that would challenge the Human-Centric Future.
AI may be both our greatest competition and our greatest creation.
We have entered a new era--the AI Wars. Artificial intelligence, in the form of current computer programs that deliver machine learning, natural language processing, neural networks and cognitive computing, is fast emerging as a competitive force in every industry, nation and market. The only question that matters is: are you Future Ready? How will you adapt, and how will you integrate AI into your business or career, as you prepare for the AI Wars?
Amazon is using Alexa to compete against all of the other retailers on the planet and Google Home. Tesla's AI downloads updated geo-intelligence to compete against all the other car brands that don't update via the cloud. IBM's Watson is automating decision analysis that competes with clinics and hospitals not enabled by its cognitive computer. This is just the beginning of the AI Wars. Companies that are using AI to compete will shape the future of AI.
There are companies using AI for diagnosing disease, deciphering law, designing fashion, writing films, drafting music, reading taxes or figuring out if you're a terrorist, fraudster or threat. AI is everywhere. If you are within sight of a video camera or cell phone, living in a city, driving in a car or traveling by transit, online or off--unless you are on Mars--you are likely exposed to AI in real time. You may not even know it.
Here's a forecast--every job a human can do will be augmented by AI (increased intelligence assets) and possibly replaced by it. Companies will use AI to outcompete other companies. Nations will use AI to compete against other nations. AI-augmented humans will outcompete the Naturals--humans not augmented by AI.
We must prepare now for this extreme future possibility. AI is the ultimate competitor and collaborator of humans. AI is the game changer of the future, and it is coming sooner than we think. Smart AI is an investment every organization and nation needs to make now, so we can shape the future of AI to become Human-Centric.
Now the challenge is: how will we redesign organizations, alliances, markets, work and careers in a world where AI is a partner, enabler, producer and, yes, a competitor? We need to redesign our civilization to keep pace with the advancement of AI. Now, I am not a dystopian. I believe we need to prepare smarter to meet these challenges, but they are coming. No denial needed. Most of what AI will bring will be productive and positive. Some of the developments will pose difficulty, challenge and threat.
Artificial intelligence will be the most powerful competitive force of the future, influencing every business, market, security domain, creative field and profession--from law, medicine and engineering to gaming and entertainment. AI that can deliver solutions faster, more cost-effectively and with greater quality than humans is coming. This is the inevitable end game of digital transformation.
Geopolitical power will be shaped not just by economics, wealth and might but by AI. Thinking machines that can outthink the competition mean a new world of geopolitical intelligence that may evolve beyond states, law, human knowledge and understanding. How do we figure out what we cannot understand? When AI writes its own rules, operating systems and behaviors that we don't understand, how will we realize that we have created a potential competitor, not just a collaborator? The AI Wars are coming.
The ultimate digital disruption is coming. I am not saying that AI will replace human jobs, but rather that it could happen if we don't plan ahead--become Future Ready and redesign our world to anticipate this future. Companies will compete using AI, and some are doing so even today. Predictive analytics and big data driven by AI are a competitive differentiator. Make sure you are in this game--shape this future.
Even if AI surpasses humans in an autonomous world of smart technology that is faster than humans, we should hold to a Human-Centric Future. We should be ready for this future, as we are creating it now. I remain positive and suggest that the future is best served by humanity using AI to fix the grand challenges that face our world--hunger, security, water, disease, poverty and sustainability. We could use some help, and I advocate for AI to be directed to help enable humans to fix the planet. Makes sense to this futurist.
This Blogger's Books and Other Items from...
Future Smart: Managing the Game-Changing Trends that Will Transform Your World
by James Canton
Posted: at 3:14 am
In the future, it's likely that many aspects of human society will be controlled either partly or wholly by artificial intelligence. AI computer agents could manage systems from the quotidian (e.g., traffic lights) to the complex (e.g., a nation's whole economy), but leaving aside the problem of whether or not they can do their jobs well, there is another challenge: will these agents be able to play nice with one another? What happens if one AI's aims conflict with another's? Will they fight, or work together?
Google's AI subsidiary DeepMind has been exploring this problem in a new study published today. The company's researchers decided to test how AI agents interacted with one another in a series of "social dilemmas." This is a rather generic term for situations in which individuals can profit from being selfish, but where everyone loses if everyone is selfish. The most famous example of this is the prisoner's dilemma, where two individuals can choose to betray one another for a prize, but lose out if both choose this option.
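The structure of the dilemma is easy to make concrete. Below is a minimal sketch in Python, using the standard textbook payoff values (illustrative numbers, not figures from the DeepMind study):

```python
# Illustrative prisoner's dilemma payoffs: (row player, column player).
# "C" = cooperate (stay silent), "D" = defect (betray the other).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both do fairly well
    ("C", "D"): (0, 5),  # the lone defector profits at the cooperator's expense
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: everyone loses
}

def payoff(a, b):
    """Return the (player_a, player_b) rewards for one round."""
    return PAYOFFS[(a, b)]

# Defecting is individually tempting (5 > 3 and 1 > 0), yet mutual
# defection (1, 1) is worse for both than mutual cooperation (3, 3) --
# exactly the tension the researchers wanted their agents to face.
```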
As explained in a blog post from DeepMind, the company's researchers tested how AI agents would perform in these sorts of situations by dropping them into a pair of very basic video games.
In the first game, Gathering, two players have to collect apples from a central pile. They have the option of tagging the other player with a laser beam, temporarily removing them from the game and giving the first player a chance to collect more apples.
In the second game, Wolfpack, two players have to hunt a third in an environment filled with obstacles. Points are claimed not just by the player that captures the prey, but by all players near the prey when it's captured.
What the researchers found was interesting, but perhaps not surprising: the AI agents altered their behavior, becoming more cooperative or antagonistic, depending on the context.
For example, in the Gathering game, when apples were in plentiful supply, the agents didn't really bother zapping one another with the laser beam. But when stocks dwindled, the amount of zapping increased. Most interesting, perhaps, was that when a more computationally powerful agent was introduced into the mix, it tended to zap the other player regardless of how many apples there were. That is to say, the cleverer AI decided it was better to be aggressive in all situations.
AI agents varied their strategy based on the rules of the game
Does that mean that the AI agent thinks being combative is the best strategy? Not necessarily. The researchers hypothesize that the increase in zapping behavior by the more advanced AI was simply because the act of zapping itself is computationally challenging. The agent has to aim its weapon at the other player and track their movement--activities that require more computing power and that take up valuable apple-gathering time. Unless the agent knows these strategies will pay off, it's easier just to cooperate.
Conversely, in the Wolfpack game, the cleverer the AI agent, the more likely it was to cooperate with other players. As the researchers explain, this is because learning to work with the other player to track and herd the prey requires more computational power.
The results of the study, then, show that the behavior of AI agents changes based on the rules they're faced with. If those rules reward aggressive behavior ("Zap that player to get more apples!"), the AI will be more aggressive; if they reward cooperative behavior ("Work together and you both get points!"), it will be more cooperative.
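That conclusion can be illustrated with a small sketch: the same greedy "pick whatever pays most" agent flips from aggression to cooperation when only the payoff table changes. The two tables below are hypothetical, chosen to mimic the incentives of the two games:

```python
def best_response(payoffs, opponent_action):
    """Pick the action that maximizes this player's own reward,
    assuming the opponent's action is fixed."""
    return max("CD", key=lambda a: payoffs[(a, opponent_action)][0])

# Rules that reward aggression: defecting/zapping ("D") always pays more.
aggressive = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Rules that reward teamwork: points are shared, so cooperating dominates.
cooperative = {("C", "C"): (5, 5), ("C", "D"): (3, 1),
               ("D", "C"): (1, 3), ("D", "D"): (0, 0)}

print(best_response(aggressive, "C"))   # -> D: zap when zapping pays
print(best_response(cooperative, "C"))  # -> C: cooperate when sharing pays
```

The agent's code never changes; only the reward structure does, which is the study's point in miniature.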
That means part of the challenge in controlling AI agents in the future will be making sure the right rules are in place. As the researchers conclude in their blog post: "As a consequence [of this research], we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet--all of which depend on our continued cooperation."
Posted: at 3:14 am
A lot of things happened in 2016.
For starters, 2016 was the year when the filter bubble popped and the fake news controversy shook the media industry. Following the U.S. elections, Facebook came under fire for having influenced the results by enabling the spread of fake news on its platform. A report by BuzzFeed showed how fake stories, such as Pope Francis endorsing Donald Trump, received considerably more engagement than true stories from legitimate media outlets like the New York Times and the Washington Post. Mark Zuckerberg was quick to dismiss the claim, but considering that nearly half of all Americans get their news primarily from the platform, it is very reasonable to believe Facebook did play a role in the elections.
The fake news controversy led to a lot of discussion and some great ideas on how to face it. Under the spotlight, both Facebook and Google reacted by banning fake news sites from advertising with them. Facebook also went a step further by introducing new measures to limit the spread of fake news on its platform, such as the ability for users to report dubious content, which then shows a "disputed" warning label next to it.
While those are promising first steps, I am afraid they won't be enough. I believe our current misinformation problem is only the tip of a massive iceberg, and this looming disaster starts with AI.
2016 was also the year when AI became mainstream. Following a long period of disappointments, AI is making a comeback thanks to recent breakthroughs such as deep learning. Now, rather than having to code the solution to a problem, it is possible to teach the computer to solve the problem on its own. This game-changing approach is enabling incredible products that would have been thought impossible just a few years ago, such as voice-controlled assistants like Amazon Echo and self-driving cars.
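A toy example of the "teach rather than code" shift: instead of writing the rule for logical OR by hand, we show a one-neuron perceptron labeled examples and let it fit the rule itself. This is a minimal sketch of the idea, far simpler than the deep learning behind the products above:

```python
# Train a single artificial neuron from labeled examples rather than
# hand-coding the rule it should follow.
def train_perceptron(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1        # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The computer is never told "this is logical OR"; it infers it from data.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, `predict` reproduces OR on all four inputs, even though no OR logic was ever written down.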
While this is great, AI is also enabling some impressive but downright scary new tools for manipulating media. These tools have the power to forever change how we perceive and consume information.
For instance, a few weeks ago, Adobe announced VoCo, a "Photoshop for speech." In other words, VoCo is an AI-powered tool that can replicate human voices. All you need is to feed the software a 20-minute-long audio recording of someone talking. The AI will analyze it and learn how that person talks. Then, just type anything, and the computer will read your words in that person's voice. Fundamentally, Adobe built VoCo to help sound editors easily fix audio mistakes in podcasts or movies. However, as you can guess, the announcement led to major concerns about the potential implications of the technology, from reducing trust in journalism to causing major security threats.
This isn't the end of it. What we can do with audio, we can also do with video:
Face2Face is an AI-powered tool that can do real-time video reenactment. The process is roughly the same as VoCo's: feed the software a video recording of someone talking, and it will learn the subtle ways that person's face moves and operates. Then, using face-tracking tech, you can map your face to that person's, essentially making them do anything you want with an uncanny level of realism.
Combine VoCo and Face2Face, and you get something very powerful: the ability to manipulate a video to make someone say exactly what you want in a way that is nearly indistinguishable from reality.
It doesn't stop here. AI is enabling many other ways to impersonate you. For instance, researchers created an AI-powered tool that can imitate any handwriting, potentially allowing someone to manipulate legal and historical documents or create false evidence to use in court. Even creepier, a startup created an AI-powered memorial chatbot: software that can learn everything about you from your chat logs, and then allow your friends to chat with your digital self after you die.
Remember the first time you realized that you'd been had? That you saw a picture you thought was real, only to realize it was photoshopped? Well, here we go again.
Back in the day, people used to say that "the camera cannot lie." Thanks to the invention of the camera, it was possible, for the first time, to capture reality as it was. Consequently, it wasn't long before photos became the most trusted pieces of evidence one could rely upon. Phrases like "photographic memory" are a testament to that. Granted, people have historically manipulated photos, but those edits were rare and required the tedious work of experts. This isn't the case anymore.
Today's generation knows very well that the camera does lie, all the time. With the widespread adoption of photo-editing tools such as Photoshop, manipulating and sharing photos has become one of the Internet's favorite hobbies. By making it so easy to manipulate photos, these tools also made it much harder to differentiate fake photos from real ones. Today, when we see a picture that seems very unlikely, we naturally assume that it is photoshopped, even if it looks very real.
With AI, we are heading toward a world where this will be the case with every form of media: text, voice, video, etc. To be fair, tools like VoCo and Face2Face aren't entirely revolutionary. Hollywood has been doing voice and face replacement for many years. However, what is new is that you no longer need professionals and powerful computers to do it. With these new tools, anyone will be able to achieve the same results using a home computer.
VoCo and Face2Face might not give the most convincing results right now, but the technology will inevitably improve and, at some point, be commercialized. This might take a year, or maybe 10 years, but it is only a matter of time before any angry teenager can get their hands on AI-powered software that can manipulate any media in ways that are indistinguishable from the original.
Given how well fake news tends to perform online, and that our trust in the media industry is at an all-time low, this is troubling. Consider, for instance, how such a widespread technology could impact:
In 2016, Oxford Dictionaries chose "post-truth" as the international word of the year, and for good reason. Today, it seems we are increasingly living in a kingdom of bullshit, where the White House spreads "alternative facts" and everything is a matter of opinion.
Technology isn't making any of this easier. As it improves our lives, it is also increasingly blurring the line between truth and falsehood. Today, we live in a world of Photoshop, CGI, and AI-powered beautifying selfie apps. The Internet promised to democratize knowledge by enabling free access to information. By doing so, it also opened up a staggering floodgate of information that includes loads of rumors, misinformation, and outright lies.
Social media promised to make us more open and connected to the world. It also made us more entrenched in digital echo chambers, where shocking, offensive, and humiliating lies are systematically reinforced, generating a ton of money for their makers in the process. Now AI is promising, among other things, to revolutionize how we create and edit media. By doing so, it will also make distortion and forgery much easier.
This doesn't mean any of these technologies are bad. Technology, by definition, is a means to solve a problem, and solving problems is always a good thing. As with everything that improves the world, technological innovation often comes with undesired side effects that tend to grab the headlines. However, in the long run, technology's benefit to society far outweighs its downsides. The worldwide quality of life has been getting better by almost any possible metric: education, life expectancy, income, and peace are better than they have ever been in history. Technology, despite its faults, is playing a huge role in all of these improvements.
This is why I believe we should push for the commercialization of tools like VoCo or Face2Face. The technology works. We can't prevent those who want to use it for evil from getting their hands on it. If anything, making these tools available to everyone will make the public aware of their existence and, by extension, aware of the easily corruptible nature of our media. Just like with Photoshop and digital photography, we will collectively adapt to a world where written, audio, and video content can be easily manipulated by anyone. In the end, we might even end up having some fun with it.
Posted: at 3:14 am
The Axon AI group will include about 20 programmers and engineers. They'll be tasked with developing AI capabilities specifically for public safety and law enforcement. The backbone of the Axon AI platform comes from Dextro Inc. Their computer-vision and deep learning system can search the visual contents of a video feed in real time. Technology from the Fossil Group, which Taser also acquired, will support Dextro's search capability by "improving the accuracy, efficiency and speed of processing images and video," according to the company's press release.
The AI platform is the latest addition to Taser's Axon ecosystem, which includes everything from body and dash cameras to evidence and interview logging. Altogether, the Axon system handles 5.2 petabytes of data from more than half of the nation's major city police departments.
With the new AI system in place, law enforcement could finally get a handle on all that footage. "Axon AI will greatly reduce the time spent preparing videos for public information requests or court submission," Taser CEO, Rick Smith, said in a statement. "This will lay the foundation for a future system where records are seamlessly recorded by sensors rather than arduously written by police officers overburdened by paperwork."
Posted: at 3:14 am
February 9, 2017--When faced with a challenge, what's a tech company to do? Turn to technology, Facebook suggests.
Following criticism that its ad-approval process was failing to weed out discriminatory ads, Facebook has revised its approach to advertising, the company announced on Wednesday. In addition to updating its policies about how advertisers can use data to target users, the social media giant plans to implement a high-tech solution: machine learning.
In recent years, artificial intelligence has climbed off the pages of science fiction novels and into myriad aspects of everyday life, from internet searches to health care decisions to traffic recommendations. But Facebook's new ad-approval algorithms wade into greener territory as the company attempts to utilize machine learning to address, or at least not contribute to, social discrimination.
"Machine learning has been around for half a century at least, but we're only now starting to use it to make a social difference," Geoffrey Gordon, an associate professor in the Machine Learning Department at Carnegie Mellon University in Pittsburgh, Penn., tells The Christian Science Monitor in a phone interview. "It's going to become increasingly important."
Though analysts caution that machine learning has its limits, such an approach also carries tremendous potential for addressing these types of challenges. With that in mind, more companies, particularly in the tech sector, are likely to deploy similar techniques.
Facebook's change of strategy, intended to make the platform more inclusive, follows the discovery that some of its ads were specifically excluding certain racial groups. In October, the nonprofit investigative news site ProPublica tested the company's ad-approval process with an ad for a renter event that explicitly excluded African-Americans. The Fair Housing Act of 1968 prohibits discrimination or showing preference to anyone on the basis of race, making that ad illegal--but it was nevertheless approved within 15 minutes, ProPublica reported.
Why? Because while Facebook doesn't ask users to identify their race and bars advertisers from directing their content at specific races, the company has a host of information about users on file: pages they like, what languages they use, and so on. This kind of information is important to advertisers, since it means they can improve their chances of making a sale by targeting their ads toward people who are more likely to buy their product.
But by creating a demographic picture of a user, this data may make it possible to determine an individual's race, and then improperly exclude or target individuals. The company's updated policies emphasize that advertisers cannot discriminate against users on the basis of personal attributes, which Facebook says include "race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, medical or genetic condition."
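A tiny sketch shows how little it takes for proxy features to leak an attribute nobody declared. The page names and group labels here are entirely made up for illustration:

```python
from collections import Counter

# Hypothetical training data: each user's set of page likes plus a
# known group label (stand-ins for any demographic attribute).
training = [
    ({"page_a", "page_b"}, "group1"),
    ({"page_a", "page_c"}, "group1"),
    ({"page_d", "page_e"}, "group2"),
    ({"page_d", "page_f"}, "group2"),
]

def guess_group(likes):
    """Vote for each group by counting overlaps with its members' likes."""
    votes = Counter()
    for pages, group in training:
        votes[group] += len(likes & pages)
    return votes.most_common(1)[0][0]

# No one declared a group, yet a single like already gives it away:
print(guess_group({"page_a"}))  # -> group1
```

Real ad systems build far richer demographic pictures from thousands of such signals, which is exactly why seemingly neutral targeting data can enable the exclusion described above.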
There's a fine line between appropriate use of such information and discrimination, as Facebook's head of US multicultural sales, Christian Martinez, explained following the ProPublica investigation: a merchant selling hair care products that are designed for black women will need to reach that constituency, while an apartment building that won't rent to black people or an employer that only hires men could use the information for negative exclusion.
For Facebook, the challenge is maintaining that advertising advantage while preventing discrimination, particularly where it's illegal. That's where machine learning comes in.
"We're beginning to test new technology that leverages machine learning to help us identify ads that offer housing, employment or credit opportunities--the types of advertising stakeholders told us they were concerned about," the company said in a statement on Wednesday.
"The computer is just looking for patterns in data that you supply to it," explains Professor Gordon.
That means Facebook can decide which areas it wants to focus on--namely, ads that offer housing, employment or credit opportunities, according to the company--and then supply hundreds of examples of these types of ads to a computer.
If a human teaches the computer by initially labeling each ad as discriminatory or nondiscriminatory, "a computer can learn to go from the text of the advertising to a prediction of whether it's discriminatory or not," Gordon says.
This kind of machine learning--known as supervised learning--already has dozens of applications, from determining which emails are spam to recognizing faces in a photo.
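Supervised learning at its most minimal looks something like the sketch below: a handful of hand-labeled example ads, a word-count "model," and a prediction for new text. The ads and labels are invented for illustration; a production system would use far richer features and models:

```python
from collections import Counter

# Hypothetical hand-labeled training examples, as described above.
labeled_ads = [
    ("apartment for rent no families", "discriminatory"),
    ("hiring young men only", "discriminatory"),
    ("spacious apartment for rent", "ok"),
    ("hiring experienced engineers", "ok"),
]

def train(examples):
    """Count how often each word appears under each label."""
    counts = {"discriminatory": Counter(), "ok": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    """Score each label by overlap with its training vocabulary."""
    def score(label):
        return sum(counts[label][w] for w in text.split())
    return max(counts, key=score)

model = train(labeled_ads)
print(classify(model, "no families need apply"))  # -> discriminatory
```

The prediction comes purely from word patterns in the labeled data, which also previews the limitation Gordon raises next: if the pattern of incoming ads shifts away from the training examples, the scores stop being informative.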
But there are certainly limits to its effectiveness, Gordon adds.
"You're not going to do better than your source of information," he explains. Teaching the machine to recognize discriminatory ads requires lots of examples of similar ads.
"If the distribution of ads that you see changes, the machine learning might stop working," Gordon explains, noting that changing strategies on the part of content producers can often get them past AI filters, like your email spam filter. Insufficient understanding of details on the part of machines can also lead to high-profile problems, like Google Photos, which in 2015 mistakenly labeled black people as gorillas.
Teaching the machine also means having a person take the time to go through hundreds of ads and label them, as well as continue to check and correct a machine's work. That makes the system vulnerable to human biases.
"That process of refinement involves sorting, labeling and tagging, which is difficult to do without using assumptions about ethnicity, gender, race, religion and the like," explains Amy Webb, founder and CEO of the Future Today Institute, in an email to the Monitor. "The system learns through a process of real-time experimenting and testing, so once bias creeps in, it can be difficult to remove it."
More overt bias issues have already been observed with AI bots, like Tay, Microsoft's chatbot, which repeated the Nazi slogans fed to it by Twitter users. While this bias may be more subtle, since it is presumably unintentional, it could conceivably create persistent problems.
Unbiased machine learning is the subject of a lot of current research, says Gordon. One answer, he suggests, is having a lot of teachers, since that offers a consensus view of discrimination that may be less vulnerable to individual biases.
Since October, Facebook has been working with civil rights groups and government organizations to strengthen its nondiscrimination policies. Despite potential obstacles, those groups seem pleased with the progress that the AI system and associated steps represent.
"We like Facebook for following up on its commitment to combatting discriminatory targeting in online advertisements," Wade Henderson, president and chief executive officer of the Leadership Conference on Civil and Human Rights, said in a statement on Wednesday.
And machine learning is likely to become a component in other companies' efforts to combat discrimination, as well as to perform a host of other functions. Though he notes that tech companies are typically fairly secretive about their plans, Gordon suggests that such projects are probably already underway at many of them.
"Facebook isn't the only company doing this--as far as I know, all of the tech companies are considering a similar ... question," he concludes.
But is the ability to target advertising on social media platforms really worth the trouble? Professor Webb, who also teaches at the NYU School of Business, sounds a note of caution.
"My behavior in Facebook is not an accurate representation of who I really am, how I think, and how I act--and that's true of most people," she writes. "We sometimes like, comment and post authentically, but more often we're revealing just the aspirational versions of ourselves." That may ultimately not be useful for would-be advertisers.
Posted: at 3:14 am
You'd be forgiven for finding little exceptional about the latest defeat of an arsenal of poker champions by the computer algorithm Libratus in Pittsburgh last week. After all, in the last decade or two, computers have made a habit of crushing board game heroes. And at first blush, this appears to be just another iteration in that all-too-familiar story. Peel back a layer, though, and the most recent AI victory is as disturbing as it is compelling. Let's explore the compelling side of the equation before digging into the disturbing implications of the Libratus victory.
By now, many of us are familiar with the idea of AI helping out in healthcare. For the last year or so, IBM has been bludgeoning us with TV commercials about its Jeopardy-winning Watson platform, now being put to use to help oncologists diagnose and treat cancer. And while I wish to take nothing away from that achievement, Watson is a question-answering system with no capacity for strategic thinking. The latter topic belongs to a class of situations more germane to the field of game theory. Game theory is usually tucked under the sub-genre of economics, for it deals with how entities make strategic decisions in the pursuit of self-interest. It's also the discipline from which the AI poker-playing algorithm Libratus gets its smarts.
What does this have to do with health care and the flu? Think of disease as a game between strategic entities. Picture a virus as one player, a player with a certain set of attack and defense strategies. When the virus encounters your body, a game ensues, in which your body defends with its own strategies and hopefully prevails. This game has been going on a long time, with humans having only a marginal ability to control the outcome. Our body's natural defenses have been developed over evolutionary time, and thus have a limited ability to make on-the-fly adaptations.
But what if we could recruit computers to be our allies in this game against viruses? And what if the same reasoning ability that allowed Libratus to prevail over the best poker minds in the world could tackle how to defeat a virus or a bacterial infection? This is in fact the subject of a compelling research paper by Tuomas Sandholm, the designer of the Libratus algorithm. In it, he explains at length how an AI algorithm could be used for drug design and disease prevention.
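Libratus is reported to be built on counterfactual regret minimization; the regret-matching idea at its core can be sketched on a trivially small game, rock-paper-scissors, standing in for any two-player zero-sum game. This is only an illustration of the principle, not Sandholm's actual system:

```python
import random

# Regret matching: after each round, shift probability toward the actions
# you "regret" not having played, and average the strategies over time.
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def utility(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def get_strategy(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / ACTIONS] * ACTIONS

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = get_strategy(regrets)
        strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]
        my = rng.choices(range(ACTIONS), weights=strat)[0]
        opp = rng.choices(range(ACTIONS), weights=strat)[0]  # self-play
        for a in range(ACTIONS):
            # Regret = what playing a would have earned vs. what I earned.
            regrets[a] += utility(a, opp) - utility(my, opp)
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy
```

In self-play, the average strategy converges toward the game's equilibrium (here, roughly a third on each action); the same regret machinery, scaled enormously, is what lets CFR-style solvers approximate equilibria in games as large as no-limit poker.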
With only the health of the entire human race at stake, it's hard to imagine a rationale that would discourage us from making use of such a strategic superpower. Now for the disturbing part of the story, and the so-called fable of the sparrows recounted by Nick Bostrom in his singular work Superintelligence: Paths, Dangers, and Strategies. In the preface to the book, he tells of a group of sparrows who recruit a baby owl to help defend them against other predators, not realizing the owl might one day grow up and devour them all. In Libratus--an algorithm that is in essence a universal strategic game-playing machine, likely capable of besting humankind in any number of real-world strategic games--we may have finally met our owl. And while the end of the story between ourselves and Libratus has yet to be determined, prudence would surely advise we tread carefully.
Posted: at 3:14 am
Dynatrace Drives Digital Innovation With AI Virtual Assistant
And then there's davis, the artificial intelligence (AI)-driven interface to Dynatrace's deep, real-time knowledge about application and infrastructure performance. Interact with davis via Amazon Alexa's soothing, vaguely British female voice, asking ...
Posted: at 3:14 am
I spent most of last week in the Midtown Hilton in New York City attending Legaltech 2017, or Legalweek: The Experience, or some sort of variation of the two. For the most part, it pretty much had the same feel as every other Legaltech I've attended. But I agree with my fellow Above the Law tech columnist, Bob Ambrogi, that ALM deserves kudos for trying to change the focus a bit. It may take a year or two of experimentation to get it right, but at least they're trying.
This year, one of the topics that popped up over and over throughout the conference was artificial intelligence and its potential impact on the practice of law. In part the AI focus was attributable to the keynote speaker on the opening day of the conference, Andrew McAfee, author of The Second Machine Age (affiliate link). His talk focused on ways that AI would disrupt business as usual in the years to come. His predictions were in part premised on his assertion that key technologies had improved greatly in recent years, and that as a result we're in the midst of a convergence of these technologies such that AI is finally coming of age.
I was particularly excited about this keynote since I'd started reading McAfee's book in mid-December after Klaus Schauser, the CTO of AppFolio, MyCase's parent company, recommended it to me. As McAfee explains in his book, it's abundantly clear that AI is already having an incredible impact on other industries.
But what about the legal industry? I started mulling over this issue last September after attending ILTA in D.C. and writing about a few different legal software platforms grounded in AI concepts. Because I find this topic so interesting, I decided to home in on it during my interviews at Legaltech as well, which I livestreamed via Periscope.
First I met with Mark Noel, managing director of professional services at Catalyst Repository Systems. After he shared the news of Catalyst's latest release, Insight Enterprise, a platform for corporate general counsel designed to centralize and streamline discovery processes, we turned to AI and his thoughts on how it will affect the legal industry over the next year. He believes that AI will eventually manage the more tedious parts of practicing law, thus allowing lawyers to focus on the analytical aspects that tend to be more interesting: "Some of the types of tasks lawyers are best at, I don't see AI taking over anytime soon. A lot of what lawyers work with is justice, fairness, and equity, which are more abstract. The ultimate goal of legal practice the human practitioner is going to have to do, but the grunt work and repeatable stuff like discovery, which is becoming more onerous because of growing data volumes, those are the kinds of things these tools can take over for us." You can watch the full interview here.
Next I spoke with AJ Shankar, the founder of Everlaw, an ediscovery platform that recently rolled out an integrated litigation case management tool as well, which I wrote about here. According to AJ, AI is undergoing a renaissance across many different industries. But when it comes to the legal space, it's a different story: "AI is not ready to make the tough judgments that lawyers make, but it is ready to augment human processes. AI will become a very important assistant for you. It will work hand in hand with humans, who will then provide the valuable context." You can watch the full interview here.
I also met with Jack Grow, the president of LawToolBox, which provides calendaring and docketing software, and he talked to me about their latest integration with DocuSign. Then we moved on to AI, and Jack suggested that in the short term, the focus would be on aggregating the data needed to build useful AI platforms for the legal industry: "Over the next year software vendors will figure out how to collect better data that can be consumed for analysis later on, so it can be put into an algorithm to make better use of it. They'll be building the foundation and infrastructure so that they can later take advantage of artificial intelligence." You can watch the full interview here.
And last but certainly not least, I spoke with Jeremiah Kelman, the president of Everchron, a company that I've covered previously, which provides a collaborative case management platform for litigators. Jeremiah predicts that AI will provide very targeted and specific improvements for lawyers: "Replacement of lawyers sounds interesting, but it's more about leveraging the information you have and the data that is out there and using it to provide insights and give direction to lawyers as they do their tasks and speed up what they do. From research, ediscovery, case management, and things across the spectrum, we'll see it in targeted areas and you'll get the most impact from leveraging and improving within the existing framework." You can watch the full interview here.
Nicole Black is a Rochester, New York attorney and the Legal Technology Evangelist at MyCase, web-based law practice management software. She's been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, and is a co-author of Social Media for Lawyers: the Next Frontier and Criminal Law in New York. She's easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter @nikiblack and she can be reached at firstname.lastname@example.org.
Posted: February 9, 2017 at 6:13 am
Artificial intelligence is a young field full of nearly unlimited potential that remains largely misunderstood by most people. We've come a long way since Watson won Jeopardy in 2011 and IBM formed the business unit with over $1 billion in investments. AI is no longer a one-trick pony. AI technology from IBM Watson and multiple companies such as WayBlazer and SparkCognition has moved firmly into the real world. It is now being used for a variety of daily applications including:
We have no doubt come a good distance on what is indeed a very long road. My colleagues at Intel believe that AI will be bigger than the Internet. Software that can understand context and learn about users as individuals is an entirely new paradigm for computing. But many dangers and problems lie ahead, if we don't look past the hype and focus on five key areas:
1. Applying AI. It all starts with what you are trying to achieve. Companies are struggling to generate business value with AI. Data scientists are overwhelmed by the complexity and quantity of data, and line-of-business executives for their part are underwhelmed by the tangible output of those data scientists. (See the recently published Harvard Business Review article, "Why You're Not Getting Value from Your Data Science.") Machine learning teams are struggling to identify business problems with clear, solvable outcomes. What is needed is a clear set of high-value use cases by industry and process domain where AI can create demonstrable business value.
2. Building AI. We have a global talent shortage, and the demand for data scientists continues to grow rapidly, far outpacing the anemic growth in supply. A McKinsey study predicts that by 2018 the number of data science jobs in the United States alone will exceed 490,000, but there will be fewer than 200,000 available data scientists to fill these positions. Globally, demand for data scientists is projected to exceed supply by more than 50 percent by 2018.
In addition, the training offered at universities is too focused on the mathematical and research aspects of AI and machine learning. Largely missing are strategy, design, insights, and change management. This oversight may have serious consequences for graduating students and their future employers: without a multi-disciplinary approach, we will be graduating data scientists capable of designing an algorithm that is mathematically elegant but doesn't make strategic sense for the business.
3. Testing AI. Quality assurance is one of the most important parts of software development. Products must pass a number of tests before they reach the real world: unit testing, boundary testing, stress testing, and other practices. In addition, we need systems that deliver the required training data for machine learning. AI is not deterministic, meaning you can receive different results from the same input data when training it. The software learns in different, nuanced ways each time it is trained. So we need new types of software testing that start with an initial "ground truth" and then verify whether the AI system is doing its job.
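A hedged sketch of what such a ground-truth test might look like (the "model" here is a stand-in threshold classifier with randomized initialization, not a real learner): instead of asserting on exact learned parameters, the test asserts that every training run clears an accuracy bar on a hand-labelled set.

```python
import random

def train_classifier(data, seed):
    """Stand-in for a non-deterministic training run: a threshold
    classifier whose fitted threshold depends on the random seed."""
    rng = random.Random(seed)
    # Random initialisation nudges the "learned" threshold slightly,
    # mimicking run-to-run training variance.
    return 0.5 + rng.uniform(-0.05, 0.05)

def accuracy(threshold, ground_truth):
    """Score a model against a held-out ground-truth set of
    (value, label) pairs, where label is True iff value is 'large'."""
    hits = sum((value > threshold) == label for value, label in ground_truth)
    return hits / len(ground_truth)

# Hand-labelled ground truth: the verification oracle.
ground_truth = [(0.1, False), (0.2, False), (0.4, False),
                (0.7, True), (0.8, True), (0.9, True)]

# Two training runs produce different models...
model_a = train_classifier(ground_truth, seed=1)
model_b = train_classifier(ground_truth, seed=2)
assert model_a != model_b

# ...so the test asserts behaviour, not weights: each run must
# clear the accuracy bar on the ground-truth set.
for model in (model_a, model_b):
    assert accuracy(model, ground_truth) >= 0.95
```

The key design choice is that the assertion targets observable behaviour against the ground truth, tolerating the run-to-run variance that makes traditional exact-output tests unusable for trained systems.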
4. Governing AI. Every transformative tool that people have created, from the steam engine to the microprocessor, augments human capabilities. Successful use of these tools requires proper governance, and AI is no different; we need governance to ensure that AI is developed the right way and for the right reasons. As the UX designer Mark Rolston wrote last year on Co.Design, "The coming tidal wave of [AI-based decision support software] threatens to give very few people a phenomenal amount of suggestive power over a great many people: the kind of power that is hard to trace and almost impossible to stop."
AI systems should be manageable and able to clearly explain their actions. Algorithm development has so far been driven by the goal of improving performance, at the expense of credibility and traceability, which means we end up with opaque "black boxes." We are already seeing such black boxes rejected by users, regulators, and companies, as they fail the regulatory, compliance and risk requirements of corporations dealing with sensitive personal health and financial information. This issue will only get bigger as AI leads to new processes and longer chains of responsibility.
Last year's White House report on "Preparing for the Future of Artificial Intelligence" outlined key areas of governance:
5. Experiencing AI. One of the biggest stories at the 2017 Consumer Electronics Show in Las Vegas was the exponential growth of Amazon's Alexa ecosystem. It foretold a future of endless smart home and office products accessible via voice, gesture, and other ways through Amazon Echo. Another tech giant, chipmaker Nvidia, presented an expansive vision for homes, offices, and cars controlled by AI assistants. Meanwhile holographic projection, VR headsets, and "merged" reality technologies like Intel's Project Alloy showed that the fundamental way we experience computers is evolving.
When it comes to experiencing AI, researchers tend to focus on creating better algorithms. But there's really much more to be done here. The quality of the user experience determines both the usefulness of the product and its rate of adoption, and this is why I believe design is the next frontier of AI. At the machine intelligence firm CognitiveScale, where I'm chairman, we are facing this challenge with cognitive computing, the type of AI software we create for multinational banks, retailers, healthcare providers, and others. Like a lot of enterprise systems today, our software is cloud-based. So how do you make something as nebulous-sounding as a "cognitive cloud" into something that a user would be thrilled to welcome into her daily life?
"Cognitive design" is the subject of a longer article, but here I will hint that a key strategy is to focus on the micro-interactions between man and machinethe fleeting moments that add up to make engagement with an AI system delightful. Just as designers use tools like journey maps to develop a human-centered experience around a particular product or service, companies must practice "cognitive design thinking"creating an experience between man and machine that builds efficacy, trust, and an emotional bond. In the end, outcomes are determined as much by the human element as by the software element.
All of this only touches the surface of the issues and difficulties that lie ahead. AI isn't just software, and it isn't just about making things easier. Its potential for radical social and economic change is enormous, and it will touch every aspect of our personal and public lives, which is why we need to think carefully and ethically about how we apply, build, test, govern, and experience machine intelligence.
Posted: at 6:13 am
On the left, 8x8 images; in the middle, the images generated by Google; and on the right, the original 32x32 faces. Photograph: Google
Google's neural networks have achieved the dream of CSI viewers everywhere: the company has revealed a new AI system capable of enhancing an eight-pixel-square image, increasing the resolution 16-fold and effectively restoring lost data.
The neural network could be used to increase the resolution of blurred or pixelated faces, in a way previously thought impossible; a similar system was demonstrated for enhancing images of bedrooms, again creating a 32x32 pixel image from an 8x8 one.
Google's researchers describe the neural network as "hallucinating" the extra information. The system was trained by being shown innumerable images of faces, so that it learns typical facial features. A second portion of the system, meanwhile, focuses on comparing 8x8 pixel images with all the possible 32x32 pixel images they could be shrunken versions of.
The two networks working in harmony effectively redraw their best guess of what the original facial image would be. The system allows for a huge improvement over old-fashioned methods of up-sampling: where an older system might simply look at a block of red in the middle of a face, make it 16 times bigger, and blur the edges, Google's system is capable of recognising that it is likely to be a pair of lips, and drawing the image accordingly.
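For contrast, the "old-fashioned" baseline the article describes is easy to sketch (a minimal nearest-neighbour up-sampler on a made-up 2x2 "image"; Google's neural approach itself is not reproduced here):

```python
def upsample_nearest(image, factor):
    """Old-fashioned up-sampling: repeat each pixel `factor` times
    in both directions. No new information is created."""
    out = []
    for row in image:
        stretched = [pixel for pixel in row for _ in range(factor)]
        out.extend([stretched[:] for _ in range(factor)])
    return out

# A toy 2x2 grayscale "image"; scaling by a factor of 4 mirrors the
# article's 8x8 -> 32x32 jump (16x the pixel count).
small = [[10, 200],
         [60, 120]]
big = upsample_nearest(small, 4)
print(len(big), len(big[0]))  # 8 8
print(big[0])                 # [10, 10, 10, 10, 200, 200, 200, 200]
```

Every output pixel here is just a copy of an input pixel, which is exactly why such methods look blocky; the neural system instead fills the gaps with learned guesses about plausible facial detail.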
Of course, the system isn't capable of magic. While it can make educated guesses based on knowledge of what faces generally look like, it sometimes won't have enough information to redraw a face that is recognisably the same person as the original image. And sometimes it just plain screws up, creating inhuman monstrosities. Nonetheless, the system works well enough to fool people around 10% of the time for images of faces.
Running the same system on pictures of bedrooms works even better: test subjects were unable to correctly pick the original image almost 30% of the time. A score of 50% would indicate the system was creating images indistinguishable from reality.
Although this system exists at the extreme end of image manipulation, neural networks have also presented promising results for more conventional compression purposes. In January, Google announced it would use a machine learning-based approach to compress images on Google+ four-fold, saving users bandwidth by limiting the amount of information that needs to be sent. The system then makes the same sort of educated guesses about what information lies between the pixels to increase the resolution of the final picture.