The Prometheus League
Breaking News and Updates
Monthly Archives: July 2017
Google launches its own AI Studio to foster machine intelligence startups – TechCrunch
Posted: July 26, 2017 at 4:18 pm
A new week brings a fresh Google initiative targeting AI startups. We started the month with the announcement of Gradient Ventures, Google's on-balance-sheet AI investment vehicle. Two days later we watched the finalists of Google Cloud's machine learning competition pitch to a panel of top AI investors. And today, Google's Launchpad is announcing a new hands-on Studio program to feed hungry AI startups the resources they need to get off the ground and scale.
The thesis is simple: not all startups are created the same. AI startups love data and struggle to get enough of it. They often have to go to market in phases, iterating as new data becomes available. And they typically have highly technical teams and a dearth of product talent. You get the picture.
The Launchpad Studio aims to address these needs head-on with specialized data sets, simulation tools and prototyping assistance. Another selling point of the Launchpad Studio is that accepted startups will have access to Google talent, including engineers, IP experts and product specialists.
"Launchpad, to date, operates in 40 countries around the world," explains Roy Geva Glasberg, Google's Global Lead for Accelerator efforts. "We have worked with over 10,000 startups and trained over 2,000 mentors globally."
This core mentor base will serve as a recruiting pool for mentors that will assist the Studio. Barak Hachamov, board member for Launchpad, has been traveling around the world with Glasberg to identify new mentors for the program.
The idea of a startup studio isn't new. It has been attempted a handful of times in recent years, but seems to have finally caught on with Andy Rubin's Playground Global. Playground offers startups extensive services and access to top talent to dial in products and compete with the largest of tech companies.
On the AI Studio front, Yoshua Bengio's Element AI raised a $102 million Series A to create a similar program. Bengio, one of the most famous AI researchers (if not the most famous), can help attract top machine learning talent to enable recruiting parity with top AI groups like Google's DeepMind and Facebook's FAIR. Launchpad Studio won't have Bengio, but it will bring Peter Norvig, Dan Ariely, Yossi Matias and Chris DiBona to the table.
But unlike Playground's $300 million accompanying venture capital arm and Element's own coffers, Launchpad Studio doesn't actually have any capital to deploy. On one hand, capital completes the package. On the other, I've never heard a good AI startup complain about not being able to raise funding.
Launchpad Studio sits on top of the Google Developer Launchpad network. The group has been operating an accelerator with global scale for some time now. Now on its fourth class of startups, the team has had time to flesh out its vision and build relationships with experts within Google to ease startup woes.
"Launchpad has positioned itself as the Google global program for startups," asserts Glasberg. "It is the most scalable tool Google has today to reach, empower, train and support startups globally."
With all the resources in the world, Google's biggest challenge with its Studio won't be vision or execution, but this doesn't guarantee everything will be smooth sailing. Between GV, CapitalG, Gradient Ventures, GCP and Studio, entrepreneurs are going to have a lot of potential touch-points with the company.
On paper, Launchpad Studio is the Switzerland of Google's programs. It doesn't aim to make money or strengthen Google Cloud's positioning. But from the perspective of founders, there's bound to be some confusion. In an ideal world we will see a meeting of the minds between Launchpad's Glasberg, Gradient's Anna Patterson and GCP's Sam O'Keefe.
The Launchpad Studio will be based in San Francisco, with additional operations in Tel Aviv and New York City. Eventually Toronto, London, Bangalore and Singapore will host events locally for AI founders.
Applications to the Studio are now open; if you're interested, you can apply here. The program itself is stage-agnostic, so there are no restrictions on size. Ideally early- and later-stage startups can learn from each other as they scale machine learning models to larger audiences.
The data that transformed AI research, and possibly the world – Quartz
Posted: at 4:18 pm
In 2006, Fei-Fei Li started ruminating on an idea.
Li, a newly-minted computer science professor at University of Illinois Urbana-Champaign, saw her colleagues across academia and the AI industry hammering away at the same concept: a better algorithm would make better decisions, regardless of the data.
But she realized a limitation to this approach: the best algorithm wouldn't work well if the data it learned from didn't reflect the real world.
Her solution: build a better dataset.
"We decided we wanted to do something that was completely historically unprecedented," Li said, referring to a small team who would initially work with her. "We're going to map out the entire world of objects."
The resulting dataset was called ImageNet. Originally published in 2009 as a research poster stuck in the corner of a Miami Beach conference center, the dataset quickly evolved into an annual competition to see which algorithms could identify objects in the datasets images with the lowest error rate. Many see it as the catalyst for the AI boom the world is experiencing today.
Alumni of the ImageNet challenge can be found in every corner of the tech world. The contest's first winners in 2010 went on to take senior roles at Baidu, Google, and Huawei. Matthew Zeiler built Clarifai based on his 2013 ImageNet win, and is now backed by $40 million in VC funding. In 2014, Google split the winning title with two researchers from Oxford, who were quickly snapped up and added to its recently acquired DeepMind lab.
Li herself is now chief scientist at Google Cloud, a professor at Stanford, and director of the university's AI lab.
Today, she'll take the stage at CVPR to talk about ImageNet's annual results for the last time; 2017 was the final year of the competition. In just seven years, the winning accuracy in classifying objects in the dataset rose from 71.8% to 97.3%, surpassing human abilities and effectively proving that bigger data leads to better decisions.
Even as the competition ends, its legacy is already taking shape. Since 2009, dozens of new AI research datasets have been introduced in subfields like computer vision, natural language processing, and voice recognition.
"The paradigm shift of the ImageNet thinking is that while a lot of people are paying attention to models, let's pay attention to data," Li said. "Data will redefine how we think about models."
In the late 1980s, Princeton psychologist George Miller started a project called WordNet, with the aim of building a hierarchical structure for the English language. It would be sort of like a dictionary, but words would be shown in relation to other words rather than in alphabetical order. For example, within WordNet, the word "dog" would be nested under "canine," which would be nested under "mammal," and so on. It was a way to organize language that relied on machine-readable logic, and amassed more than 155,000 indexed words.
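That nesting of words under broader categories can be sketched as a simple parent-pointer lookup. The dictionary below is a toy illustration of the idea, not the real WordNet data or API:

```python
# Toy hypernym table: each word maps to its parent category,
# in the spirit of WordNet's dog -> canine -> mammal chain.
HYPERNYMS = {
    "dog": "canine",
    "canine": "mammal",
    "mammal": "animal",
}

def hypernym_chain(word):
    """Walk upward from a word through its parent categories
    until a root (a word with no recorded parent) is reached."""
    chain = [word]
    while chain[-1] in HYPERNYMS:
        chain.append(HYPERNYMS[chain[-1]])
    return chain

print(hypernym_chain("dog"))  # ['dog', 'canine', 'mammal', 'animal']
```

The real WordNet also records synonyms and multiple parents per word, but the machine-readable logic the article describes is essentially this kind of traversable structure.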
Li, in her first teaching job at UIUC, had been grappling with one of the core tensions in machine learning: overfitting and generalization. When an algorithm can only work with data that's close to what it's seen before, the model is considered to be overfitting the data; it can't understand anything more general beyond those examples. On the other hand, if a model doesn't pick up the right patterns in the data, it's overgeneralizing.
Finding the perfect algorithm seemed distant, Li says. She saw that previous datasets didn't capture how variable the world could be; even just identifying pictures of cats is infinitely complex. But by giving the algorithms more examples of how complex the world could be, it made mathematical sense that they could fare better. If you only saw five pictures of cats, you'd only have five camera angles, lighting conditions, and maybe varieties of cat. But if you've seen 500 pictures of cats, there are many more examples to draw commonalities from.
Li started to read about how others had attempted to catalogue a fair representation of the world with data. During that search, she found WordNet.
Having read about WordNet's approach, Li met with professor Christiane Fellbaum, a researcher influential in the continued work on WordNet, during a 2006 visit to Princeton. Fellbaum had the idea that WordNet could have an image associated with each of the words, more as a reference than as a computer vision dataset. Coming from that meeting, Li imagined something grander: a large-scale dataset with many examples of each word.
Months later Li joined the Princeton faculty, her alma mater, and started on the ImageNet project in early 2007. She started to build a team to help with the challenge, first recruiting a fellow professor, Kai Li, who then convinced Ph.D. student Jia Deng to transfer into Li's lab. Deng has helped run the ImageNet project through 2017.
"It was clear to me that this was something that was very different from what other people were doing, were focused on at the time," Deng said. "I had a clear idea that this would change how the game was played in vision research, but I didn't know how it would change."
The objects in the dataset would range from concrete objects, like pandas or churches, to abstract ideas like love.
Li's first idea was to hire undergraduate students for $10 an hour to manually find images and add them to the dataset. But back-of-the-napkin math quickly made Li realize that at the undergrads' rate of collecting images it would take 90 years to complete.
After the undergrad task force was disbanded, Li and the team went back to the drawing board. What if computer-vision algorithms could pick the photos from the internet, and humans would then just curate the images? But after a few months of tinkering with algorithms, the team came to the conclusion that this technique wasn't sustainable either: future algorithms would be constricted to only judging what algorithms were capable of recognizing at the time the dataset was compiled.
Undergrads were time-consuming, algorithms were flawed, and the team didn't have money. Li said the project failed to win any of the federal grants she applied for, receiving comments on proposals that it was shameful Princeton would research this topic, and that the only strength of the proposal was that Li was a woman.
A solution finally surfaced in a chance hallway conversation with a graduate student who asked Li whether she had heard of Amazon Mechanical Turk, a service where hordes of humans sitting at computers around the world would complete small online tasks for pennies.
"He showed me the website, and I can tell you literally that day I knew the ImageNet project was going to happen," she said. "Suddenly we found a tool that could scale, that we could not possibly dream of by hiring Princeton undergrads."
Mechanical Turk brought its own slew of hurdles, with much of the work fielded by two of Li's Ph.D. students, Jia Deng and Olga Russakovsky. For example, how many Turkers needed to look at each image? Maybe two people could determine that a cat was a cat, but an image of a miniature husky might require 10 rounds of validation. What if some Turkers tried to game or cheat the system? Li's team ended up creating a batch of statistical models for Turkers' behaviors to help ensure the dataset only included correct images.
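The simplest form of that validation logic is label aggregation by worker agreement. The sketch below is an illustrative stand-in for the statistical models the ImageNet team actually built; the function name and threshold are assumptions for the example:

```python
from collections import Counter

def aggregate_label(votes, min_agreement=0.7):
    """Accept an image's label only if a large enough share of
    independent workers agree on it; otherwise return None to
    signal that more rounds of validation are needed."""
    label, count = Counter(votes).most_common(1)[0]
    if count / len(votes) >= min_agreement:
        return label
    return None

# An unambiguous image converges after two votes; a miniature husky
# splits the workers and gets sent back for another round.
print(aggregate_label(["cat", "cat"]))              # cat
print(aggregate_label(["husky", "wolf", "husky"]))  # None (2/3 < 0.7)
```

In practice such systems also weight each worker by their track record, which is one way to catch Turkers trying to game the task.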
Even after finding Mechanical Turk, the dataset took two and a half years to complete. It consisted of 3.2 million labelled images, separated into 5,247 categories, sorted into 12 subtrees like mammal, vehicle, and furniture.
In 2009, Li and her team published the ImageNet paper with the dataset, to little fanfare. Li recalls that CVPR, a leading conference in computer vision research, only allowed a poster, instead of an oral presentation, and the team handed out ImageNet-branded pens to drum up interest. People were skeptical of the basic idea that more data would help them develop better algorithms.
"There were comments like 'If you can't even do one object well, why would you do thousands, or tens of thousands of objects?'" Deng said.
If data is the new oil, it was still dinosaur bones in 2009.
Later in 2009, at a computer vision conference in Kyoto, a researcher named Alex Berg approached Li to suggest adding an additional aspect to the contest, in which algorithms would also have to locate where the pictured object was, not just report that it existed. Li countered: "Come work with me."
Li, Berg, and Deng authored five papers together based on the dataset, exploring how algorithms would interpret such vast amounts of data. The first paper would become a benchmark for how an algorithm would react to thousands of classes of images, the predecessor to the ImageNet competition.
"We realized to democratize this idea we needed to reach out further," Li said, speaking about the first paper.
Li then approached a well-known image recognition competition in Europe called PASCAL VOC, which agreed to collaborate and co-brand its competition with ImageNet. The PASCAL challenge was a well-respected competition and dataset, but representative of the previous method of thinking. The competition only had 20 classes, compared to ImageNet's 1,000.
As the competition continued in 2011 and into 2012, it soon became a benchmark for how well image classification algorithms fared against the most complex visual dataset assembled at the time.
But researchers also began to notice something more going on than just a competition: their algorithms worked better when they trained using the ImageNet dataset.
"The nice surprise was that people who trained their models on ImageNet could use them to jumpstart models for other recognition tasks. You'd start with the ImageNet model and then you'd fine-tune it for another task," said Berg. "That was a breakthrough both for neural nets and just for recognition in general."
Two years after the first ImageNet competition, in 2012, something even bigger happened. Indeed, if the artificial intelligence boom we see today could be attributed to a single event, it would be the announcement of the 2012 ImageNet challenge results.
Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky from the University of Toronto submitted a deep convolutional neural network architecture called AlexNet (still used in research to this day) which beat the field by a whopping 10.8 percentage point margin, 41% better than the next best entry.
ImageNet couldn't have come at a better time for Hinton and his two students. Hinton had been working on artificial neural networks since the 1980s, and while some like Yann LeCun had been able to work the technology into ATM check readers through the influence of Bell Labs, Hinton's research hadn't found that kind of home. A few years earlier, research from graphics-card manufacturer Nvidia had made these networks run faster, but still not better than other techniques.
Hinton and his team had demonstrated that their networks could perform smaller tasks on smaller datasets, like handwriting detection, but they needed much more data to be useful in the real world.
"It was so clear that if you do a really good job on ImageNet, you could solve image recognition," said Sutskever.
Today, these convolutional neural networks are everywhere: Facebook, where LeCun is director of AI research, uses them to tag your photos; self-driving cars use them to detect objects; basically anything that knows what's in an image or video uses them. They can tell what's in an image by finding patterns between pixels on ascending levels of abstraction, using thousands to millions of tiny computations on each level. New images are put through the process to match their patterns to learned patterns. Hinton had been pushing his colleagues to take them seriously for decades, but now he had proof that they could beat other state-of-the-art techniques.
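The pattern-finding step at the bottom of that hierarchy is a convolution: a small kernel slides over the image and responds where pixel values match its pattern. A minimal pure-Python sketch (the kernel values are illustrative; real networks learn them, and libraries implement the same operation far more efficiently):

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most deep
    learning libraries): at each position, sum the elementwise products
    of the kernel with the image patch beneath it."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly (value 2.0) only at the
# column where pixels jump from dark (0) to bright (1).
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1, 1], [-1, 1]]
print(convolve2d(image, edge_kernel))  # [[0.0, 2.0, 0.0], [0.0, 2.0, 0.0]]
```

A deep network stacks many such filters, feeding each layer's responses into the next so that edges combine into textures, textures into parts, and parts into objects.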
"What's more amazing is that people were able to keep improving it with deep learning," Sutskever said, referring to the method that layers neural networks to allow more complex patterns to be processed, now the most popular flavor of artificial intelligence. "Deep learning is just the right stuff."
The 2012 ImageNet results sent computer vision researchers scrambling to replicate the process. Matthew Zeiler, an NYU Ph.D. student who had studied under Hinton, found out about the ImageNet results and, through the University of Toronto connection, got early access to the paper and code. He started working with Rob Fergus, an NYU professor who had also built a career working on neural networks. The two started to develop their submission for the 2013 challenge, and Zeiler eventually left a Google internship weeks early to focus on the submission.
Zeiler and Fergus won that year, and by 2014 all the high-scoring competitors would be deep neural networks, Li said.
"This ImageNet 2012 event was definitely what triggered the big explosion of AI today," Zeiler wrote in an email to Quartz. "There were definitely some very promising results in speech recognition shortly before this (again many of them sparked by Toronto), but they didn't take off publicly as much as that ImageNet win did in 2012 and the following years."
Today, many consider ImageNet solved: the error rate is incredibly low at around 2%. But that's for classification, or identifying which object is in an image. This doesn't mean an algorithm knows the properties of that object, where it comes from, what it's used for, who made it, or how it interacts with its surroundings. In short, it doesn't actually understand what it's seeing. This is mirrored in speech recognition, and even in much of natural language processing. While our AI today is fantastic at knowing what things are, understanding these objects in the context of the world is next. How AI researchers will get there is still unclear.
While the competition is ending, the ImageNet dataset (updated over the years and now more than 13 million images strong) will live on.
Berg says the team tried to retire one aspect of the challenge in 2014, but faced pushback from companies including Google and Facebook that liked the centralized benchmark. The industry could point to one number and say, "We're this good."
Since 2010 there have been a number of other high-profile datasets introduced by Google, Microsoft, and the Canadian Institute for Advanced Research, as deep learning has proven to require data as vast as what ImageNet provided.
Datasets have become haute. Startup founders and venture capitalists will write Medium posts shouting out the latest datasets, and how their algorithms fared on ImageNet. Internet companies such as Google, Facebook, and Amazon have started creating their own internal datasets, based on the millions of images, voice clips, and text snippets entered and shared on their platforms every day. Even startups are beginning to assemble their own datasets: TwentyBN, an AI company focused on video understanding, used Amazon Mechanical Turk to collect videos of Turkers performing simple hand gestures and actions on video. The company has released two datasets free for academic use, each with more than 100,000 videos.
"There is a lot of mushrooming and blossoming of all kinds of datasets, from videos to speech to games to everything," Li said.
It's sometimes taken for granted that these datasets, which are intensive to collect, assemble, and vet, are free. Being open and free to use is an original tenet of ImageNet that will outlive the challenge and likely even the dataset.
In 2016, Google released the Open Images database, containing 9 million images in 6,000 categories. Google recently updated the dataset to include labels for where specific objects were located in each image, a staple of the ImageNet challenge after 2014. London-based DeepMind, bought by Google and spun into its own Alphabet company, recently released its own video dataset of humans performing a variety of actions.
"One thing ImageNet changed in the field of AI is suddenly people realized the thankless work of making a dataset was at the core of AI research," Li said. "People really recognize the importance; the dataset is front and center in the research as much as algorithms."
Correction (July 26): An earlier version of this article misspelled the name of Olga Russakovsky.
How AI Will Change the Way We Make Decisions – Harvard Business Review
Posted: at 4:18 pm
Executive Summary
Recent advances in AI are best thought of as a drop in the cost of prediction. Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Judgment is the process of determining what the reward to a particular action is in a particular environment. In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They'll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions. But couldn't AI calculate costs and benefits itself? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.
With the recent explosion in AI, there has been the understandable concern about its potential impact on human work. Plenty of people have tried to predict which industries and jobs will be most affected, and which skills will be most in demand. (Should you learn to code? Or will AI replace coders too?)
Rather than trying to predict specifics, we suggest an alternative approach. Economic theory suggests that AI will substantially raise the value of human judgment. People who display good judgment will become more valuable, not less. But to understand what good judgment entails and why it will become more valuable, we have to be precise about what we mean.
Recent advances in AI are best thought of as a drop in the cost of prediction. By prediction, we don't just mean the future: prediction is about using data that you have to generate data that you don't have, often by translating large amounts of data into small, manageable amounts. For example, using images divided into parts to detect whether or not the image contains a human face is a classic prediction problem. Economic theory tells us that as the cost of machine prediction falls, machines will do more and more prediction.
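"Using data you have to generate data you don't have" can be made concrete with the simplest possible predictor, a nearest-neighbor classifier. All names, features, and data points below are hypothetical, invented for illustration:

```python
def nearest_neighbor_predict(examples, query):
    """Predict a label for `query` by copying the label of the most
    similar labeled example. Each example is (features, label)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(examples, key=lambda ex: sq_dist(ex[0], query))
    return label

# Hypothetical labeled transactions: (amount in dollars, hour of day).
examples = [
    ((5.0, 12), "legit"),
    ((7.5, 9), "legit"),
    ((900.0, 3), "fraud"),
]

# The data we have (labeled examples) generates the data we don't have
# (a label for a brand-new transaction).
print(nearest_neighbor_predict(examples, (850.0, 2)))  # fraud
```

Modern systems replace the distance function with learned models, but the economic point is the same: this label is a prediction, not a decision, and what to do with it still depends on judgment.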
Prediction is useful because it helps improve decisions. But it isn't the only input into decision-making; the other key input is judgment. Consider the example of a credit card network deciding whether or not to approve each attempted transaction. It wants to allow legitimate transactions and decline fraud. It uses AI to predict whether each attempted transaction is fraudulent. If such predictions were perfect, the network's decision process would be easy: decline if and only if fraud exists.
However, even the best AIs make mistakes, and that is unlikely to change anytime soon. The people who have run the credit card networks know from experience that there is a trade-off between detecting every case of fraud and inconveniencing the user. (Have you ever had a card declined when you tried to use it while traveling?) And since convenience is the whole credit card business, that trade-off is not something to ignore.
This means that to decide whether to approve a transaction, the credit card network has to know the cost of mistakes. How bad would it be to decline a legitimate transaction? How bad would it be to allow a fraudulent transaction?
Someone at the credit card association needs to assess how the entire organization is affected when a legitimate transaction is denied. They need to trade that off against the effects of allowing a transaction that is fraudulent. And that trade-off may be different for high-net-worth individuals than for casual card users. No AI can make that call. Humans need to do so. This decision is what we call judgment.
Judgment is the process of determining what the reward to a particular action is in a particular environment. Judgment is how we work out the benefits and costs of different decisions in different situations.
Credit card fraud is an easy decision to explain in this regard. Judgment involves determining how much money is lost in a fraudulent transaction, how unhappy a legitimate customer will be when a transaction is declined, as well as the reward for doing the right thing and allowing good transactions and declining bad ones. In many other situations, the trade-offs are more complex, and the payoffs are not straightforward. Humans learn the payoffs to different outcomes by experience, making choices and observing their mistakes.
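The division of labor the authors describe, machine prediction plus human judgment, can be sketched as an expected-cost decision rule. The cost figures here are illustrative assumptions, precisely the numbers a human would have to supply:

```python
def approve_transaction(p_fraud, cost_fraud, cost_decline):
    """Combine a machine prediction (p_fraud, the predicted probability
    of fraud) with human judgment (the two cost parameters): approve
    when the expected loss from fraud is smaller than the expected
    loss from declining a legitimate customer."""
    expected_loss_if_approved = p_fraud * cost_fraud
    expected_loss_if_declined = (1 - p_fraud) * cost_decline
    return expected_loss_if_approved < expected_loss_if_declined

# Same prediction, different judgment: if declining a high-net-worth
# customer costs more goodwill, the identical 10% fraud risk flips
# from decline to approve.
print(approve_transaction(0.10, cost_fraud=500, cost_decline=20))   # False
print(approve_transaction(0.10, cost_fraud=500, cost_decline=200))  # True
```

Nothing in the prediction changed between the two calls; only the judgment about payoffs did, which is the article's point about why that trade-off cannot be left to the AI alone.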
Getting the payoffs right is hard. It requires an understanding of what your organization cares about most, what it benefits from, and what could go wrong.
In many cases, especially in the near term, humans will be required to exercise this sort of judgment. They'll specialize in weighing the costs and benefits of different decisions, and then that judgment will be combined with machine-generated predictions to make decisions.
But couldn't AI calculate costs and benefits itself? In the credit card example, couldn't AI use customer data to consider the trade-off and optimize for profit? Yes, but someone would have had to program the AI as to what the appropriate profit measure is. This highlights a particular form of human judgment that we believe will become both more common and more valuable.
Like people, AIs can also learn from experience. One important technique in AI is reinforcement learning, whereby a computer is trained to take actions that maximize a certain reward function. For instance, DeepMind's AlphaGo was trained this way to maximize its chances of winning the game of Go. This method of learning is often easy to apply to games because the reward can be easily described and programmed, shutting a human out of the loop.
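A minimal sketch of that setup is an epsilon-greedy multi-armed bandit: once the reward function is programmed in, the agent finds the highest-paying action with no human in the loop. The payout probabilities and parameters below are illustrative assumptions:

```python
import random

def epsilon_greedy_bandit(payout_probs, steps=2000, eps=0.1, seed=0):
    """Reinforcement learning in miniature: try actions, observe
    programmed rewards, and converge on the action with the highest
    average payout. `payout_probs[a]` is the chance action a pays 1.0."""
    rng = random.Random(seed)
    counts = [0] * len(payout_probs)
    values = [0.0] * len(payout_probs)  # running reward estimates
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(payout_probs))  # explore a random action
        else:
            a = values.index(max(values))         # exploit the best so far
        reward = 1.0 if rng.random() < payout_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean
    return values.index(max(values))

# The agent reliably discovers which arm the reward function favors.
print(epsilon_greedy_bandit([0.2, 0.5, 0.8]))
```

The catch, as the CoastRunners example below the Go discussion shows, is that the agent optimizes exactly the reward it was given, whether or not that reward matches what the designers actually wanted.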
But games can be cheated. As Wired reports, when AI researchers trained an AI to play the boat racing game, CoastRunners, the AI figured out how to maximize its score by going around in circles rather than completing the course as was intended. One might consider this ingenuity of a type, but when it comes to applications beyond games this sort of ingenuity can lead to perverse outcomes.
The key point from the CoastRunners example is that in most applications, the goal given to the AI differs from the true and difficult-to-measure objective of the organization. As long as that is the case, humans will play a central role in judgment, and therefore in organizational decision-making.
In fact, even if an organization is enabling AI to make certain decisions, getting the payoffs right for the organization as a whole requires an understanding of how the machines make those decisions. What types of prediction mistakes are likely? How might a machine learn the wrong message?
Enter Reward Function Engineering. As AIs serve up better and cheaper predictions, there is a need to think clearly and work out how to best use those predictions. Reward Function Engineering is the job of determining the rewards to various actions, given the predictions made by the AI. Being great at it requires having an understanding of the needs of the organization and the capabilities of the machine. (And it is not the same as putting a human in the loop to help train the AI.)
Sometimes Reward Function Engineering involves programming the rewards in advance of the predictions so that actions can be automated. Self-driving vehicles are an example of such hard-coded rewards. Once the prediction is made, the action is instant. But as the CoastRunners example illustrates, getting the reward right isn't trivial. Reward Function Engineering has to consider the possibility that the AI will over-optimize on one metric of success, and in doing so act in a way that's inconsistent with the organization's broader goals.
At other times, such hard-coding of the rewards is too difficult. There may be so many possible predictions that it is too costly for anyone to judge all the possible payoffs in advance. Instead, some human needs to wait for the prediction to arrive and then assess the payoff. This is closer to how most decision-making works today, whether or not it includes machine-generated predictions. Most of us already do some Reward Function Engineering, but for humans, not machines. Parents teach their children values. Mentors teach new workers how the system operates. Managers give objectives to their staff, and then tweak them to get better performance. Every day, we make decisions and judge the rewards. But when we do this for humans, prediction and judgment are grouped together, and the distinct role of Reward Function Engineering has not needed to be explicitly separate.
As machines get better at prediction, the distinct value of Reward Function Engineering will increase as the application of human judgment becomes central.
Overall, will machine prediction decrease or increase the amount of work available for humans in decision-making? It is too early to tell. On the one hand, machine prediction will substitute for human prediction in decision-making. On the other hand, machine prediction is a complement to human judgment. And cheaper prediction will generate more demand for decision-making, so there will be more opportunities to exercise human judgment. So, although it is too early to speculate on the overall impact on jobs, there is little doubt that we will soon be witness to a great flourishing of demand for human judgment in the form of Reward Function Engineering.
Read the original post:
How AI Will Change the Way We Make Decisions - Harvard Business Review
Posted in Ai
Comments Off on How AI Will Change the Way We Make Decisions – Harvard Business Review
AI Grant aims to fund the unfundable to advance AI and solve hard … – TechCrunch
Posted: at 4:18 pm
Artificial intelligence-focused investment funds are a dime a dozen these days. Everyone knows there's money to be made from AI, but to capture value, good VCs know they need to back products and not technologies. This has left a bit of a void in the space, where research occurs within research institutions and large tech companies and commercialization occurs within verticalized startups; there isn't much left for the DIY AI enthusiast. AI Grant, created by Nat Friedman and Daniel Gross, aims to bankroll science projects for the heck of it, giving untraditional candidates a shot at solving big problems.
Gross, a partner at Y Combinator, and Friedman, a founder who grew Xamarin to acquisition by Microsoft, started working on AI Grant back in April. AI Grant issues no-strings-attached grants to people passionate about interesting AI problems. The more formalized version launching today brings a slate of corporate partners and a more structured application review process.
Anyone, regardless of background, can submit an application for a grant. The application is online and consists of questions about background and prior projects in addition to basic information about what the money will be used for and what the initial steps will be for the project. Applicants are asked to connect their GitHub, LinkedIn, Facebook and Twitter accounts.
Gross told me in an interview that the goal is to build profiles of non-traditional machine learning engineers. Eventually, the data collected from the grant program could allow the two to play a bit of machine learning moneyball, valuing machine learning engineers without traditional metrics (like having a PhD from Stanford). You can imagine how all the social data could even help build a model for ideal grant recipients in the future.
The long-term goal is to create a decentralized AI research lab: think DeepMind, but run through Slack and full of engineers who don't cost $300,000 a pop. One day, the MacArthur genius grant-inspired program could serve other industries outside of AI, offering a playground of sorts for the obsessed to build, uninhibited.
The entire AI Grant project reminds me of a cross between a Thiel Fellowship and a Kaggle competition. The former is a program to give smart college dropouts money and freedom to tinker; the latter is an innovative platform for evaluating data scientists through competition. Neither strives to advance the field in the way the AI Grant program does, but you can see the ideological similarity around democratizing innovation.
Some of the early proposals to receive the AI Grant include:
Charles River Ventures (CRV) is providing the $2,500 grants that will be handed out to the next 20 fellows. In addition, Google has signed on to provide $20,000 in cloud computing credits to each winner, CrowdFlower is offering $18,000 in platform credit with $5,000 in human labeling credits, Scale is giving $1,000 in human labeling credit per winner and Floyd will give 250 Tesla K80 GPU hours to each winner.
During the first selection of grant winners, Floodgate awarded $5,000 checks. The program launching today will award $2,500 checks. Gross told me that this change was intentional; the initial check size was too big. The plan is to add additional flexibility in the future to allow applicants to make a case for how much money they actually need.
You can check out the application here and give it a go. Applications will be taken until August 25th. Final selection of fellows will occur on September 24th.
Read this article:
AI Grant aims to fund the unfundable to advance AI and solve hard ... - TechCrunch
Facebook is hiring a (human) AI Editor | TechCrunch – TechCrunch
Posted: at 4:18 pm
Human: Oh sweet bot, tell us a story! A nice story! About a very wise human who worked his whole life to save everybody in the world from having to spend time manually tagging their friends in digital photos and made a magic machine that did it for them instead!
Bot: That's not really a very nice story when you think about it.
Human: Well, tell us about the wise human who thought no-one should ever feel forgotten on their birthday, so he made a clever algorithm that always knew to remind the forgetful humans to write happy messages so their friends would never feel sad. He even thought that in future the clever algorithm could suggest what message to write, so humans wouldn't even have to think of something nice to tell their friends!
Bot: I feel quite sad after reading that.
Human: And he made another magical algorithm that reminds people of Special Moments in their life even years and years afterwards, in case they've forgotten that holiday they went on with their ex eight years ago.
Bot: You do realize some people voluntarily medicate themselves with alcohol *in order* to forget???
Human: But the wise human also wanted to make sure all humans in the world always felt there was something they needed to read and so he made a special series of algorithms that watched very closely what each human read and looked at and liked and clicked on in order to order the information they saw in such a way that a person never felt they had reached the end of all the familiar things they could click on and could just keep clicking the whole day and night and be reading all the things that felt so very familiar to them so they always felt the same every day and felt they were surrounded by people who felt exactly like them and could just keep on keeping on right as they were each and every day.
Bot: Thats confusing.
Human: And the great humans algorithms became so good at ordering the information which each human wanted to read that other mercenary humans came to realize they could make lots of money by writing fairy stories and feeding them into the machine like how politicians ate little children for breakfast and wore devils horns on Sundays.
Bot: Okay, you're scaring me now…
Human: And in the latter years the great human realized it was better to replace all the human writers he had employed to help train the machine how to intelligently order information for humans because it was shown that humans could not be trusted not to be biased.
Bot: Um
Human: After all, the great human had proven years ago that his great machine was capable of manipulating the emotions of the humans that used it. All he needed to do was tweak the algorithmic recipe that determined what each human saw and he could make a person feel great joy or cast them down into a deep pit of despair.
Bot: Help.
Human: The problem was other humans started to notice the machines great power, and became jealous of the great and clever human who wielded this power and dark forces started to move against the great man and his machine.
Bot: Are you talking about regulators?
Human: There were even calls for the man to take editorial responsibility for the output of the machine. The man tried to tell the silly humans that a machine can't be an editor! Only a human can do that! The machine was just a machine! Even if nearly two billion humans were reading what the machine was ordering them to read every single month.
But it was no good. The great human finally realized the machines power was now so great there was no hiding it. So he took up his pen and started writing open letters about the Great Power and Potential of the machine. And all the Good it could do Humanity. All the while telling himself that only when humans truly learned to love the machine would they finally be free to just be themselves.
Humans had to let themselves subconsciously be shown the path of what to click and what to like and who to be friends with. Only then would they be free of the pain and suffering of having nothing else to click on. And only his great all-seeing algorithm could show them the way, surreptitiously, to that true happiness.
It wasn't something that regulators were capable of understanding. It required, he realized, real faith in the algorithm.
Bot: I've heard this story before, frankly, and I know where it ends.
Human: But even the great human knew the limits of his own creation. And selling positive stories about the machine's powers was definitely not a job for the machine. So he fired off another email to his subordinates, ordering the (still) human-staffed PR department to add one more human head to its tally, with a special focus on the algorithms powering the machine, thinking, as he did so, multiple steps ahead to the great day when such a ridiculous job would no longer be necessary.
Because everyone would love the machine as much as he did.
Bot: Oh I seeeee! Job title: AI Editor… Hmm… "Develop and execute on editorial strategy and campaigns focused on advancements in AI being driven by Facebook." Minimum qualifications: "Bachelor's degree in English, Journalism, Communications, or related field"… well, chatbots are related to language, so I reckon I can make that fly. What else? "8+ years professional communications experience: journalism, agency or in-house." Well, I'll need to ingest a media law course or two, but I reckon I'll challenge myself to apply.
In truth I've done worse jobs. An AI bot's gotta do what an AI bot's gotta do, right? Just don't tell an algorithm to be accountable. I've done my time learning. If there's a problem it's not me, it's the data, okay? Okay?
Excerpt from:
Facebook is hiring a (human) AI Editor | TechCrunch - TechCrunch
Xiaomi’s take on the Amazon Echo smart speaker costs less than $50 – TechCrunch
Posted: at 4:18 pm
Hot on the heels of reports that Facebook is developing its own take on the Amazon Echo, China's Xiaomi has joined the tech company masses by jumping into the increasingly crowded smart speaker space.
The Mi AI Speaker is Xiaomi's first take at rivaling the Echo, which has already inspired a product from Alibaba in China and counts offerings from Google and Apple among its competitors.
Building on the voice-controlled speaker that Xiaomi shipped in December, the new device is powered by artificial intelligence, the company said, which has just been added to Xiaomi's MIUI operating system, a variant of Android. The speaker can be used as a control for Xiaomi products and also for over 30 smart products from Xiaomi's partners. Xiaomi touted the content available for the speaker, which includes music, audio books, kids' stories and radio.
In terms of audio itself, the device uses a setup of six microphones for 360-degree sound broadcast.
The price will be 299 RMB ($45) when it goes on sale in August, but the usual caveat applies. As is often the case with Xiaomi products, the initial release is confirmed for China but we don't have word of international availability.
Early bird users in China can pick up a Mi AI Speaker for almost free (just 1 RMB) in a working-beta test that Xiaomi says will improve the AI systems and help train [it] to be even more intelligent in the early stage.
The speaker was unveiled at an event in Beijing today where Xiaomi took the wraps off MIUI 9, which includes a bevy of AI-powered features such as a digital assistant and quick app launch capabilities.
The company also launched the Mi 5X smartphone, a 5.5-inch device that ships with MIUI 9 and features a dual rear camera. The phone is priced from 1,499 RMB, or $220.
See the original post here:
Xiaomi's take on the Amazon Echo smart speaker costs less than $50 - TechCrunch
Musk vs. Zuck – The Fracas Over Artificial Intelligence. Where Do You Stand? – HuffPost
Posted: at 4:17 pm
Advances in Artificial Intelligence (AI) have dominated both tech and business stories this year. Industry heavyweights such as Stephen Hawking and Bill Gates have famously voiced their concern with blindly rushing into AI without thinking about the consequences.
AI has already proven that it has the power to outsmart humans. IBM Watson famously destroyed human opponents at a game of Jeopardy, and a Google computer beat the world champion of the Chinese board game, Go.
Google's AI team is taking no chances after revealing that it is developing a 'big red button' to switch off systems if they pose a threat to humans. In fact, scientists at Google DeepMind and Oxford University have revealed their plan to prevent a doomsday scenario in their paper titled Safely Interruptible Agents.
Truth is indeed stranger than fiction and tech fans could be forgiven for nearly choking on their cornflakes this morning after hearing about a very public disagreement between the two tech billionaires. The argument is probably a good reflection of how people on both sides of the aisle feel about heading into the foggy world of AI.
In one corner, we have Mark Zuckerberg who believes AI will massively improve the human condition. Some say he is more focused on his global traffic dominance and short-term profits than the fate of humanity. Whatever your opinion, he does represent a sanguine view of futuristic technologies such as AI.
In the other corner, we have Tesla's Elon Musk, who seems to be more aware of the impact our actions might have on future generations. Musk appears concerned that once Pandora's box has been cracked open, we could unwittingly be creating a dystopian future.
Zuckerberg landed the first punch in a Facebook Live broadcast when he said
However, Elon Musk calmly retaliated by landing a virtual uppercut by tweeting "I've talked to Mark about this. His understanding of the subject is limited."
Whether you side with Musk and believe that AI will represent humanity's biggest existential threat, or think Zuckerberg is closer to the truth when he said "AI is going to make our lives better," your view is entirely subjective at this point.
However, given the range of opinions around this topic, should we be taking the future of AI more seriously than we do today?
I will tell you that big businesses with large volumes of data are falling over themselves trying to install machine learning and AI-driven solutions. However, right now, many of these AI-driven systems are also the source of our biggest frustrations as consumers.
Are businesses guilty of rushing into AI-based solutions without thinking of the bigger picture? There are several examples of things going awry, like chatbots claiming to be a real person, the spread of fake news, or being told you are not eligible for a mortgage because a computer says so.
There are also an increasing number of stories about AI not being quite as smart as some would believe it to be, or how often algorithms are getting it wrong or being designed to deceive consumers. For every great tech story, there is a human story about creativity and emotional intelligence that a machine can never match.
Make no mistake: the AI revolution is coming our way, and large corporations will harvest the benefits of cultivating their big data initiatives. Anything that will eliminate antiquated processes of the past and enable business efficiency can only be a giant leap forward.
However, the digital transformation of everything we know is not going to happen overnight. That does not mean we shouldn't be vigilant about how our actions today could affect future generations.
Mr. Zuckerberg may be accused by some of acting in the interests of his social media platform, and that is quite understandable. Beneath every noble statement resides a hidden interest, it is safe to assume nowadays, unless one is Mahatma Gandhi, Dr. Martin Luther King or Nelson Mandela.
On the other hand, there are also the likes of Musk and Gates that are arguably looking beyond their own business interests.
I am no expert by any stretch of the imagination, but I do wonder whether more of us need to question how advancements in technology are providing advantages for the few rather than the many.
Let's build on Elon Musk's point of view for a moment. I wonder whether we should be concerned that a dystopian future awaits us on the horizon. Will the machines rise and turn on their masters?
AI is no longer merely a concept from a science fiction movie. The future is now. The reality is that businesses need to harness this new technology to secure a preemptive competitive advantage. Time-consuming, laborious and automatable tasks can be performed better and faster by machines that continuously learn, adapt and improve.
The current advances in technology have unexpected parallels with the industrial revolution that helped deliver new manufacturing processes. 200 years ago, the transition from an agricultural society to one based on the manufacture of goods and services dramatically increased the speed of progress.
Steel and iron replaced manual labor with mechanized mass production hundreds of years ago. That is not unlike the circumstances facing businesses today. The reality is that as old skills or roles slowly fade away, there will be a massive shortage of other skills and new roles relevant to the digital age.
Ultimately, we have a desire to use technology to change the world for the better in the same way that the industrial revolution changed the landscape of the world forever. The biggest problems surrounding market demand and real world needs could all be resolved by a new generation of AI hardware, software, and algorithms.
After years of collecting vast quantities of data, we are currently drowning in a sea of information. If self-learning and intelligent machines can turn this into actionable knowledge, then we are on the right path to progress. Upon closer inspection, the opportunities around climate modeling and complex disease analysis also illustrate how we should be excited rather than afraid of the possibilities.
The flip side of this is the understanding that no thing is entirely one thing. The risks-versus-rewards evaluation, and the fact that researchers are talking about worst-case scenarios, should be a positive thing. I would be more concerned if the likes of Facebook, Google, Microsoft and IBM rushed in blindly without thinking about the consequences of their actions. Erring on the side of caution is a good thing, right?
Demis Hassabis is the man behind the AI research start-up DeepMind, which he co-founded in 2010 with Shane Legg and Mustafa Suleyman. DeepMind was bought by Google in 2014. Demis reassuringly told the UK's Guardian newspaper:
It would appear that all bases are being covered and we should refrain from entering panic mode.
The only question the paper does not answer is what would happen if the robots were to discover that we are trying to disable their access or shut them down. Maybe the self-aware machine could change the programming of the infamous Red Button. But that kind of crazy talk is confined to Hollywood movies, isn't it? Let's hope so for the sake of the human race.
Those of us that have been exasperated by Facebook's algorithm repeatedly showing posts from three days ago on their timelines will tell you that much of this technology is still in its infancy.
Although we have a long way to go before AI can live up to the hype, we should nevertheless be mindful of what could happen in a couple of decades.
Despite the internet melee over the impact of AI between the two most powerful tech CEOs of our generation, I suspect, like anything in life, the sweet spot is probably somewhere in the middle of these two contrasting opinions.
Are you nervous or optimistic about heading into a self-learning AI-centric world?
Continued here:
Musk vs. Zuck - The Fracas Over Artificial Intelligence. Where Do You Stand? - HuffPost
Roadwork gets techie: Drones, artificial intelligence creep into the road construction industry – The Mercury News
Posted: at 4:17 pm
High above the Balfour interchange on State Route 4 in Brentwood, a drone buzzes, its sensors keeping a close watch on the volumes of earth being moved to make way for a new highway bypass. In Pittsburg, a camera perched on the dash of a car driving through city streets periodically snaps pictures of potholes and cracks in the pavement. And, at the corner of Harbor and School streets in the same city, another camera monitors pedestrians, cyclists and cars, where 13-year-old Jordyn Molton lost her life late last year after a truck struck her.
Although the types of technology and their goals differ, all three first-of-their-kind projects in Contra Costa County aim to offer improvements to the road construction and maintenance industry, which has lagged significantly behind other sectors when it comes to adopting new technology. Lack of investment stifled innovation, said John Bly, the vice president of the Northern California Engineering Contractors Association.
But, with the recent passage of SB1, a gas tax and transportation infrastructure funding bill, that's all set to change, he said.
"You may see some of these high-tech firms find new market niches because now you have billions of dollars going into transportation infrastructure and upgrades," he said. "That's coming real quick."
It's still so new that Bly was hard-pressed to think of other areas where drone and artificial intelligence software is being integrated into road construction work in the state. The pilot programs in the East Bay are cutting edge, he said.
At the Contra Costa Transportation Authority, Executive Director Randy Iwasaki has been pushing to experiment with emerging technology in the road construction and maintenance industry for several years. So, when the authority's construction manager, Ivan Ramirez, came to him with an idea to use drones in its $74 million interchange project, Iwasaki was eager to try it.
"We often complain we don't have enough money for transportation," Iwasaki said, adding that the use of drones at the interchange project in Brentwood would enable the authority's contractors to save paper, save time and save money.
That's because, traditionally, survey crews standing on the edge of the freeway would take measurements of the dirt each time it's moved. The process is time-consuming and hazardous, Ramirez said. But it's only the tip of the iceberg when it comes to potential applications for the drones' technology, which could also be used to perform inspections on poles or bridges and perform tasks people haven't yet thought of.
"As you begin to talk to people, then other ideas begin to emerge about where we might be going, and it's propelling more ideas for the future," Ramirez said. "By not having surveyors on the road, or not having to send an inspector up in a manlift way up high or into a confined space, not only is it more efficient, but it will provide safety improvements, as well."
Meanwhile, in Pittsburg, the city is working with RoadBotics on a pilot program to better manage its local roads. The company uses car-mounted cellphone cameras to snap photos of street conditions before running that data through artificial intelligence software to create color-coded maps showing which roads are in good shape, which need monitoring and which are in need of immediate repairs.
The company's goal is to make it easier for city officials to monitor and manage their roads, so small repairs don't turn into complete overhauls, said Mark DeSantis, the company's CEO. Representatives from Pittsburg did not respond to requests for comment.
"The challenge of managing roads is not so much filling the little cracks; that's not much of a burden," DeSantis said. "The real challenge is when you have to repave the road completely. So, the idea is to see the features on the road and see which ones are predictive of roads that are about to fail."
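The workflow DeSantis describes (per-segment scores from an image model, bucketed into a color-coded map) can be sketched in a few lines of Python. The thresholds, labels and street names below are assumptions for illustration, not RoadBotics' actual scoring scheme:

```python
# Toy sketch of turning per-segment model scores into a color-coded
# condition map. Thresholds and labels are invented for illustration.

def classify_segment(score):
    """Map a pavement-distress score (0 = perfect, 1 = failed),
    e.g. from an image classifier run over dash-camera photos,
    to a condition bucket like those on a color-coded road map."""
    if score < 0.3:
        return "green"   # good shape
    elif score < 0.6:
        return "yellow"  # needs monitoring
    else:
        return "red"     # needs immediate repair

# Hypothetical per-segment scores for one street.
scores = {
    "Main St 100-200": 0.12,
    "Main St 200-300": 0.45,
    "Main St 300-400": 0.81,
}
road_map = {segment: classify_segment(s) for segment, s in scores.items()}
```

The "yellow" bucket is where the predictive value lies: it flags segments where a small repair now might head off the complete repaving DeSantis warns about.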
At the same time, Charles Chung of Brisk Synergies is hoping to use cameras and artificial intelligence software in a different way: seeing how the design of the road influences how drivers behave. At the corner of Harbor and School streets, the company installed a camera to watch how cars, cyclists and pedestrians move through the intersection and to identify why drivers might be speeding. In particular, the company is also trying to determine how effective crossing guards are at slowing down cars, he said.
It is still in the process of gathering data on that intersection and writing its report, but Chung said it was able to use the software in Toronto to document a 30 percent reduction in vehicle crashes after the city made changes to an intersection there. Before, documenting the need for changes would require special crews to either monitor the roads directly or watch footage from a video feed, both of which take time and personnel.
While only emerging in a handful of projects locally, these types of technology will become far more prevalent soon, said Bart Ney of Alta Vista Solutions, the construction-management firm using drones on the SR 4 project.
"We're at the beginning of the wave," he said. "Like any disruptive technology, there is a period when you have to embrace it and take it into the field and test it so it can achieve what it's capable of. We're on the brink of that happening."
The Pro-Trump Media Is Full Of Offensive Memes And Trolls, But Is It A Hate Group? – BuzzFeed News
Posted: at 4:17 pm
On July 19, the Anti-Defamation League kicked the pro-Trump media hornet's nest with the publication of a new report cataloging the factions of the alt-right and their key voices. It also prompted the question: How do you classify a hate group in 2017?
Titled "From Alt Right to Alt Lite: Naming the Hate," the ADL report attempts to define those movements, noting the meaningful differences between the two and listing 36 personalities closely associated with them. For example, the moniker "alt-lite" was coined by the alt-right in order to differentiate itself from those in the pro-Trump world who denounce white supremacist ideology.
The report's publication sparked near-immediate outrage from some of those who were included. New Right personality Mike Cernovich lambasted the ADL's report as a "hit list of political opponents," alleging that by including him on a list of hate leaders, the organization had made him and his family targets of an intolerant and violent left that "murder[s] those the ADL disagrees with politically." Jack Posobiec, a pro-Trump Twitter personality, took an equally combative stance. On vacation in Poland, he tweeted a short video from Auschwitz. "It would be wise of the ADL to remember the history of what happened the last time people started going around making lists of undesirables," he said, panning the camera across the concentration camp.
Over the next few days, the controversy gathered considerable momentum on Twitter. Cernovich's followers tweeted prayers for the safety of him and his family, and condemned the ADL. Gateway Pundit founder Jim Hoft called the organization's report a "death list," while his White House reporter, Lucian Wintrich, decried the ADL as a "liberal terrorist organization." Rebel Media's Gavin McInnes, named on the list along with Wintrich, threatened to "sue the living shit out of everyone even remotely involved." The hashtag #ADLterror trended for a few hours. Last week, Republican Senate candidate and Ohio Treasurer Josh Mandel jumped into the controversy, siding with Cernovich and chastising the ADL.
But beneath all the murk and outrage and alt-right/alt-lite/New Right semantics was a reasonable question: In the Trump era, where is the line between hate speech and the extremist, often outlandish, conspiracy-propagating messaging of those movements?
For Cernovich, who played a role in the Twitter propagation of the #Pizzagate conspiracy and has a history of tweeting incendiary opinions on everything from date rape to immigration (much of which he has argued was clear satire), the line doesn't fall anywhere near him. He argues that, while his statements might not be politically correct or always in good taste, they aren't hate speech, and certainly don't make him a member of a hate group.
"What does the ADL have on me? Some satirical tweets, hell, even some mean tweets and stuff I'm not proud of?" Cernovich told BuzzFeed News in response to the report. "I have a lot of liberal friends. Many of them in high places. They think I'm an asshole, but 'hate group' has them livid."
Cernovich insists he's being unfairly targeted for his pro-Trump views. "This tweet mining bullshit is only used on the right," he argued. In his view, the New Right is a movement defined not by discrimination or hateful rhetoric, but by pugnacious political commentary and debate. It is nothing, he says, like the alt-right of Richard Spencer, which hews toward a race-based white nationalism. As with Trump himself, the New Right's true ideology isn't always clear, and the group tends to behave more as a pro-Trump media arm than as an ideological group. Its main target isn't a protected race or religion, but the mainstream media. It doesn't behave quite like any traditional hate group. So can it be called one?
In an interview with BuzzFeed News, the ADL argued that it most certainly can. "I don't think irony and self-promotion is an excuse for bigotry of any kind, whether it's misogyny or any other form of bigotry," said Oren Segal, who runs the ADL's Center on Extremism. "Doing it in a way that's more modern or tech-y doesn't make it OK, nor does it make it any less difficult for those who've been impacted."
"I don't think irony and self-promotion is an excuse for bigotry of any kind."
Segal noted that the alt-lite or New Right, while not particularly well-defined as a movement, includes individuals with extremist views. "These are people who are on the record with anti-Muslim bigotry and hatred and misogyny, people who support trolling," he said in defense of the ADL's report.
Jeff Giesea, an entrepreneur and consultant who helped organize the pro-Trump DeploraBall, an inaugural ball to celebrate the work of the pro-Trump internet, sees the ADL's decision to categorize the New Right as hate group personalities as a bridge too far. "Based on the ADL's logic, all 63 million Americans who voted for Trump should be on their hate list. If everyone is an extremist, no one is," he told BuzzFeed News.
Giesea argues that, historically, Cernovich's views are quite moderate. Perhaps more importantly, he contends that the New Right's strategy to promote a pro-Trump agenda via an ongoing, meme-fueled assault on the mainstream media is a new kind of political discourse.
"By being so quick to label something 'bigotry,' the ADL is getting in the way of the healthy exchange of ideas, Giesea said. It pushes people further right by pathologizing common sense. It is a mode of social control that simply doesn't work in the age of social media."
"Based on the ADL's logic, all 63 million Americans who voted for Trump should be on their hate list."
Since the beginning of the 2016 election, our political discourse has become increasingly fraught, muddied by misinformation and trolling from the fringes of both sides of the aisle. And within this morass, a reflex has emerged on both sides to label political disagreements as signs of hate. Back in April, the internet erupted over Cernovich and another pro-Trump reporter flashing the "OK" sign at the podium in the White House Briefing Room. A number of news outlets misidentified the sign as a white power symbol, falling for a trap laid by pro-Trump trolls who had been trying to trick the media into thinking the meaningless symbol had nefarious origins. The incident sparked a defamation lawsuit filed by one of the pro-Trump reporters, as well as an existential argument around when exactly a symbol morphs from an ironic troll to a real sign of hate.
Giesea has run this over in his mind frequently, and argues that there's more nuance and craft to the pro-Trump movement's tactics. "Memetics is a form of art," he said. "Shock and controversy is what makes memes effective. They push moral boundaries. Sometimes this is healthy and can challenge certain narratives; other times it can feel toxic and juvenile. Think about it: what memes would Voltaire share?" Giesea concedes that there are moral considerations to social media behavior, but suggests that the ADL list feels like "an act of political warfare, rather than a good faith attempt to discuss these issues."
Ultimately, the problem appears to be definitional. For Heidi Beirich, the director of the Southern Poverty Law Center's Intelligence Project, the alt-right and alt-lite movements may be fluid, but the definition of hate is not. Beirich says the SPLC follows roughly the same standards for defining hate groups as the FBI uses for hate crimes. In a recent op-ed for the Huffington Post, SPLC President Richard Cohen defined hate groups as those with "beliefs or practices that attack or malign an entire class of people, typically for their immutable characteristics."
"We don't care as much about the pro-Trump stuff," Beirich told BuzzFeed News. "It's the specific policies we're worried about, whether it's anti-Muslim or anti-immigrant." For example, she noted that despite articles with anti-immigrant sentiment, "we're not going to list a publication like Breitbart as a hate group unless they publish much more stuff that's much further over the line."
In trying to categorize the Cernoviches and Posobiecs of the world, Beirich said it's best to proceed on a case-by-case basis, remembering that hate speech isn't necessarily the only (or most) relevant category. "Take Pizzagate," she said. "We've written about anti-government conspiracy theorists since the 1990s, and that's a different thing than our hate lists. It doesn't excuse the behavior, but it's different."
The ADL sees no such difference and, on its "Naming the Hate" report, is standing its ground. To Segal, the fact that the behavior of the New Right doesn't follow the established patterns of other fringe movements is reason enough to worry about its evolution and growth. "In a sense this rhetoric is potentially more harmful because it's not so clearly being promoted as hate," he told BuzzFeed News. "I think we can see through that. If they call it a joke, we're not laughing."
Charlie Warzel is a senior writer for BuzzFeed News and is based in New York. Warzel reports on and writes about the intersection of tech and culture.
The Pro-Trump Media Is Full Of Offensive Memes And Trolls, But Is It A Hate Group? - BuzzFeed News
Incarnations of Immortality – Wikipedia
Incarnations of Immortality is the name of an eight-book fantasy series by Piers Anthony. The first seven books each focus on one of seven supernatural "offices" (Death, Time, Fate, War, Nature, Evil, and Good) in a fictional reality and history parallel to ours, with the exception that society has advanced both magic and modern technology. The series covers the adventures and struggles of a group of humans called "Incarnations", who hold these supernatural positions for a certain time.
The title may allude to William Wordsworth's 1804 poem Ode: Intimations of Immortality.
Incarnations uses its premise to ponder questions regarding the nature of life. As each character goes from a mortal life to the "office" of an Incarnation, they are forced to contemplate their actions on a daily basis. Each Incarnation may use their office, within limits, as they see fit. This system humanizes what would otherwise be impersonal forces, prompting extensive consideration of the effects of each Incarnation's work, not only on humanity but on the other offices of immortality as well.
Another humorous side of Incarnations is the portrayed magic/technology duality. Most series emphasize one or the other as the means of understanding and manipulating the world, but in Incarnations each method is equal in usefulness and respect. This leads to a number of amusing parallels, such as competition between automobile and magic-carpet manufacturers. By the future time period of Norton, magic is referred to as the Fifth Fundamental Force, with its own primary particle, the Magicon (similar to a graviton). A few other series have used the combined technology/magic motif, notably Apprentice Adept, another series by Piers Anthony, and Four Lords of the Diamond by Jack Chalker, although the latter offered an actual technological basis as the explanation for its magic, in contrast to Anthony's work.
Anthony uses the number five extensively, often with things that exist in fours in our world. The five Incarnations are associated with five elements (Death with Earth, Fate with Water, War with Fire, Nature with Air, and Time with Void), and often with other items that come in fives, such as the Book of Five Rings. There are five fundamental interactions, magic being the fifth. The Llano consists of five songs. In On a Pale Horse, Gaea teaches Zane five patterns of thought, each represented by diagrams of five short lines.
A fourth theme of Incarnations is the multigenerational human story that runs between the Incarnations. Earlier characters repeatedly appear in later novels, and by the final novel, every major character is related by blood, marriage, or affair.