Red Raiders’ Gray competing for Team Virgin Islands at FIBA AmeriCup Championships – LubbockOnline.com

Justin Gray will get some international basketball experience before suiting up for the Texas Tech men's basketball team in the fall.

The senior guard was picked to play for the Virgin Islands National Team in the International Basketball Federation (FIBA) AmeriCup Championships, set to start this weekend in Bahía Blanca, Argentina.

The Virgin Islands, slated to compete in Group B, is scheduled for pool play Sunday against Canada before taking on Venezuela on Monday, followed by host squad Argentina on Tuesday. All three games are expected to be carried live on FIBA's YouTube channel: http://www.youtube.com/FIBA.

After pool play is completed, the top team from each group, along with Argentina, advances to the semifinal round set for Sept. 2.

Gray gets a chance to play with his older brother, Johnathan, who was a member of Cornell's 2010 NCAA Sweet 16 squad.

Carlos Silva Jr., A-J Media


Tech Companies and Censorship: Where Should We Draw The Line? – Inc.com

This has been a tough week.

It started with the terrible events last weekend in Charlottesville, Va., where clashes involving neo-Nazi and white supremacist groups erupted into violence and led to the death of one protester.

Throughout the week, the story continued to gain steam as President Trump commented on the incident, then made a second comment, then held an unprecedented press conference that even members of his own party condemned.

As prominent CEOs on the President's manufacturing council began to drop out, several tech companies began or intensified their crackdown on hate speech and their banning of alt-right and neo-Nazi websites. According to PBS News, here are just a few big names and their actions:

Cloudflare, a company that provides security services to protect internet companies from hackers, joined the movement by dropping The Daily Stormer from its network services. The move was a bit of a surprise, because Matthew Prince, co-founder and CEO of Cloudflare, has long been an advocate of free speech, saying that "a website is speech, it is not a bomb."

Cloudflare took the action, however, because management determined that The Daily Stormer was harassing individuals who were reporting the site as abusive. Prince was also clear that he and the company found the content on the site "abhorrent and vile," and in a company memo he stated that "the tipping point for us making this decision was that the team behind Daily Stormer made the claim that we were secretly supporters of their ideology ... we could not remain neutral after these claims of secret support by Cloudflare."

While these actions by tech companies were seen by most as the proper and moral thing to do, some have rightly questioned whether businesses in general should have such a significant influence on the fundamental right of free speech online, with the power to censor it or even remove it altogether.

Prince goes on to say that entrepreneurs, and society at large, need to ask who should be responsible for policing and regulating online content. "I sit in a very privileged position," said Prince. "I see about 10 percent of all online traffic, and I can make a decision whether they can be online anymore. And I'm not sure I am the one who should be making that kind of decision."

The question for all of us is: who should it be?

We have all been afforded the freedom of speech and expression, a unique, precious and delicate gift. We have also been afforded, through the sacrifice of many generations, the right to life, liberty and the pursuit of happiness.

When these two rights intersect and conflict, we need a moral standard, not the Constitution, to moderate.

Of course, the question then becomes: who gets to decide the moral standard?

Luckily, we have a democratic system in place that allows the country's citizens to select representatives who serve as the lawmakers who mold this standard. Is our system flawed? Absolutely. But as Winston Churchill astutely observed, "Democracy is the worst form of government, except for all the others."

When it comes to tech companies, or any company for that matter, they have an obligation to follow the law, and that is about it. As Prince contends, the right policy is for content providers to be "content neutral." The community can be policed by its users in the form of reporting reprehensible content, and companies have an obligation to engage experts and law enforcement authorities to determine what should be removed.

Of course, some companies may wish to write and maintain an internal code of conduct, and as long as that code does not infringe upon or otherwise break the law, a company has every right to do so. Customers who disagree can exercise their own freedom of speech to voice their opinion or simply "protest with their wallets."

This debate will surely not end anytime soon, and by all indications, it is just getting started.

What do you think? Should censorship be managed by companies, or should content continue to be protected under the right to free speech? Please share your (constructive and civil) comments below.


At Beijing book fair, publishers admit self-censorship – Yahoo News

Beijing (AFP) - Just days after the world's oldest publisher briefly caved in to Chinese censorship demands, international publishing houses are courting importers at a Beijing book fair, with some admitting they keep sensitive topics off their pages.

The censorship controversy that hit Cambridge University Press (CUP) sent a chill along the stands staffed by publishers from nearly 90 countries at the Beijing International Book Fair, which opened on Wednesday.

But some acknowledged their companies have already resorted to self-censorship to ensure that their books do not cause offense and can be published in China.

CUP had made similar arguments when it initially complied with a Chinese import agency's demand to block articles from its China Quarterly journal, before reversing course on Monday after coming under fire from the academic community.

Terry Phillips, business development director of British-based Innova Press, was candid about it as he prepared to meet a Chinese counterpart at the fair's section for overseas publishers.

"We frequently exercise self-censorship to adapt to different markets. Every country has different sets of requirements about what they consider appropriate for education materials," Phillips told AFP.

"But as authors, I think we also have a responsibility to find ways to teach good citizenship and human rights," he said.

John Lowe, managing director of Mosaic8, an Asian educational publishing specialist based in Tokyo, said the authorities govern the distribution of the International Standard Book Number (ISBN) that companies need for their books to be sold in China.

"So it is in publishers' interest to not publish something that would anger authorities," Lowe said.

"You don't mention the three 'Ts': Tiananmen, Tibet and Taiwan. But it's usually fine to discuss human rights issues generally," Lowe said.

- CUP quiet -

The 300 articles that were temporarily removed from China Quarterly's website in China included texts on the 1989 Tiananmen Square protests, the status of Tibet, the self-ruled island of Taiwan and the Chinese democracy movement.

CUP had said last Friday that it wanted "to ensure that other academic and educational materials we publish remain available to researchers and educators in this market".

In an about-face, the publisher announced on Monday that it was restoring access to the articles after international academics criticised CUP for succumbing to Chinese pressure and launched a petition demanding that it reverse course.

But the US-based Association for Asian Studies revealed this week that CUP had received a request from China's General Administration of Press and Publication to remove 100 articles from another publication, the Journal of Asian Studies.

Cambridge University officials said they would discuss the censorship issue with the importer at the book fair, which runs until Sunday, after expressing concern about "the recent increase in requests of this nature".

Rita Yan, a CUP coordinator at the publisher's booth, told AFP that the censorship issue "wasn't affecting our activities at the book fair."

Yan declined to comment further and said CUP's managing director of academic publishing was unable to speak with the press because she was occupied with meetings.

- Censorship: 'A selling point' -

Other publishers participating in the fair said the uproar has created an atmosphere of anxiety about censorship.

"Currently, we don't have any problems, but in the future, we don't know," said Ding Yueting, a marketer for Wiley, an educational publisher and research service based in New Jersey.

A representative of a large American publishing house, who requested anonymity because she was not authorised to speak to the press, said: "We're nervous about whether there will be increased censorship requests from Chinese agencies in the future."

But a representative of another major American publisher, who also requested anonymity, said that a factor influencing self-censorship decisions is that there would be "no point" in producing books that will likely get banned.

"It would be embarrassing to go through the trouble of translating a book from English to Chinese, and then be unable to publish it in China," he said.

"On the other hand, books that are censored in China often sell better abroad," he said.

"It's usually a major selling point."


Cambridge University Press battles censorship in China – The Economist


10+ Years of Activists Silenced: Internet Intermediaries’ Long History of Censorship – EFF

Recent decisions by technology companies, especially upstream infrastructure technology companies, to drop neo-Nazis as customers have captured public attention, and for good reason. The content being blocked is vile and horrific, there is growing concern about hate groups across the country, and the nation is focused on issues of racism and protest.

But this is a dangerous moment for Internet expression and the power of private platforms that host much of the speech on the Internet. People cheering for companies who have censored content in recent weeks may soon find the same tactic used against causes they love. We must be careful about what we are asking these companies to do and carefully review the processes they use to do it. A look at previous examples that EFF has handled in the past 10+ years can help demonstrate why we are so concerned.

This isn't just a slippery-slope fear about potential future harm. Complaints to various kinds of intermediaries have been occurring for over a decade. It's clear that Internet technology companies, especially those further upstream like domain name registrars, are simply not equipped or competent to distinguish between good complaints and bad in the U.S., much less around the world. They also have no strong mechanisms for allowing due process or correcting mistakes. Instead, they merely react to where the pressure is greatest or where their business interests lie.

Here are just a few cases from the last decade that EFF has handled or assisted with, where complaints went upstream to website hosts and DNS providers, impacting activist groups specifically. And this is not to mention the many times direct user platforms like Facebook and Twitter have censored content from artists, activists, and others.

You'll notice that the complainers in these cases are powerful corporations. That's not a coincidence. Large companies have the time, money, and scary lawyers to pressure intermediaries to do their bidding, something smaller communities rarely have.

The story gets much more frightening when governments enter the conversation. All of the major technology companies publish transparency reports documenting the many efforts made by governments around the world to require the companies to take down their customers speech.[1]

China ties the domain name system to tracking systems and censorship. Russia-backed groups flag Ukrainian speech, Chinese groups flag Tibetan speech, Israeli groups flag Palestinian speech, just to name a few. Every state has some reason to try to bend the core intermediaries to its agenda, which is why EFF, along with a number of international organizations, created the Manila Principles to set out the basic rules for intermediaries to follow when responding to these governmental pressures. Those concerned about the position of the current U.S. government with regard to Black Lives Matter, Antifa groups, and similar left-leaning communities should take note: efforts to urge the current U.S. government to treat them as hate groups have already begun.

Will the Internet remain a place where small, marginalized voices get heard? For every tech CEO now worried about neo-Nazis there are hundreds of decisions made to silence voices that are made outside of public scrutiny with no transparency into decision-making or easy ways to get mistakes corrected. We understand the impulse to cheer any decisions to stand up against horrific speech, but if we embrace upstream intermediary censorship, it may very well come back to haunt us.


World’s oldest publisher reverses ‘shameful’ China censorship – CNNMoney

The university press, which describes itself as the oldest publishing house in the world, had admitted to blocking online access in China to academic works on Tiananmen Square, the Cultural Revolution and Tibet.

The University of Cambridge said in a statement on Monday that its academic leadership and the publisher had agreed to reinstate the blocked content "with immediate effect" to "uphold the principle of academic freedom."

The censored academic articles appeared in the highly regarded journal China Quarterly. Its editor, Tim Pringle, said the reversal followed a "justifiably intense reaction from the global academic community and beyond."

"Access to published materials of the highest quality is a core component of scholarly research," he said in a statement on Monday. "It is not the role of respected global publishing houses ... to hinder such access."

The decision to censor the articles drew condemnation from academics around the world.

It represented "a craven, shameful and destructive concession" to the Chinese government's "growing censorship regime," Georgetown University professor James Millward wrote in an open letter published over the weekend.

By Monday, an online petition threatening a boycott of the publisher and its journals had gathered hundreds of signatures.


The not-for-profit publisher had defended its action as necessary to ensure that China doesn't block "entire collections of content." It said it would never proactively censor its own content.

But many prominent academics blasted the move.

"Chinese students and scholars reading a censored version of The China Quarterly will encounter only historical facts and scholarly analyses approved by political authorities," Greg Distelhorst of MIT and Jessica Chen Weiss of Cornell wrote in a letter to Cambridge University Press.

"This censored history of China will literally bear the seal of Cambridge University," they said.

The Cambridge press, which has been operating since the reign of Queen Elizabeth I in the 16th century, has run into a challenge faced by other global publishers: obey China's censors or be locked out of its giant market.


Foreign authors who wish to publish books in China must allow their works to be altered by censors. Top news organizations like The New York Times have had their websites blocked in China for years after publishing articles that upset the ruling Communist Party.

"Western institutions have the freedom to choose," said an English-language opinion article published Sunday by Global Times, a provocative but state-sanctioned Chinese tabloid. "If they don't like the Chinese way, they can stop engaging with us. If they think China's internet market is so important that they can't miss out, they need to respect Chinese law and adapt to the Chinese way."

China's General Administration of Press and Publication, a regulatory body, didn't respond to requests for comment Monday.


Submitting to Beijing's demands was "a misguided, if understandable, economic decision that does harm to the Press' reputation and integrity," said Jonathan Sullivan, director of the China Policy Institute at the University of Nottingham.

"This is not the first time Beijing has leveraged the economic power of the Chinese market for political gains," he wrote in a blog post. "The fear is that it won't be the last time that Western academia is the target."

-- Serena Dong contributed to this report.

CNNMoney (London) First published August 21, 2017: 1:12 PM ET


Delingpole: Thomas Wictor Is the Latest Victim of Google Censorship – Breitbart News

YouTube has suspended his account, allegedly because he violated its terms of use; but really, he suspects, for the crime of being a Trump supporter who speaks unpalatable truths about leftist evils.

If you're unfamiliar with Thomas Wictor, you're missing a treat. He's a Venezuelan-born recluse with a rich and varied past who, besides being the world's greatest (and only) expert on World War I flamethrowers, also happens to produce some of the most fascinating Twitter threads and social media video commentary you will ever see on subjects ranging from Antifa to Pallywood to what's really going on in Syria and Iraq.

Some of his output is so kooky and recondite that, quite possibly, it strays into the realm of conspiracy theory.

But with Wictor you can never be quite sure because his exposition is so thorough and well-documented.

One of his specialties is forensic video analysis. This is how I first came across him, a few years back, when I wrote my first Breitbart News story based on his research. It concerned the four Palestinian boys supposedly blown up on a beach by Israeli artillery during the last Gaza conflict but really, or so Wictor claimed, murdered by Hamas who then exploited the dead children for propaganda purposes.

More recently he has attracted a big following on Twitter thanks to his epic threads which examine the truth behind various news stories, especially ones relating either to the Middle East or Antifa's domestic terrorism.

This, he believes, is what got him into trouble with the left's political correctness sentinels.

He told me:

I was able to prove at least three attempted murders by Antifa at Berkeley on April 15, 2017. In the video above [now deleted by YouTube], the Antifa member used a Fairbairn-Sykes fighting knife.

The Fairbairn-Sykes is a double-edged stabbing weapon. It produces deep wounds that bleed heavily, making it hard to save the victim. It was only incompetence on the part of Antifa and sheer luck that the free-speech supporter didn't die. The Antifa member stabbed four times. That's attempted murder in the first degree.

The reason I came to the attention of Google was that Donald Trump Jr retweeted me. After that, my YouTube account came under almost daily assault until it was terminated.

On Twitter, I support Jews, Shia Muslims, Sunni Muslims, Christians, blacks, whites; I see no religion or color. My blog posts were all technical.

The last detail is important because, according to Google's explanations as to why his YouTube account was first closed temporarily and then permanently, his videos had "inappropriate content."

Eventually, Wictor's account was killed with death-by-faceless-bureaucracy. (I've included the full private thread of Wictor's communication with me because it's so classically Thomas Wictor.)

This is the internet's loss and reflects ill on both YouTube and Google.

Happily, his Twitter threads are still operative, and today's is another classic. It concerns a story about State Rep. Beth Fukumoto (D-Hawaii) and her claim, reported in the Huffington Post, that she received racist correspondence from a Trump supporter.

Wictor has strong suspicions that it is a hoax because, using photos of the letter and envelope from the internet, he has subjected the correspondence to forensic analysis.

Read the full thread to find out why he thinks it is fake. It's classic Wictor.



In reversal, Cambridge University Press restores articles after China censorship row – Washington Post

BEIJING - Cambridge University Press reversed course Monday after facing a major backlash from academics over its decision to bow to Chinese government demands to censor an important academic journal.

The British-based publisher announced Friday it had removed 300 articles and book reviews from a version of the China Quarterly website available in China at the request of the government. But on Monday, it rescinded that decision after outrage from the international academic community.

It said the original move had been only a temporary decision pending discussion with the academic leadership of the University of Cambridge and a scheduled meeting with the Chinese importer in Beijing.

"Academic freedom is the overriding principle on which the University of Cambridge is based," it said in a statement. "Therefore, while this temporary decision was taken in order to protect short-term access in China to the vast majority of the Press's journal articles, the University's academic leadership and the Press have agreed to reinstate the blocked content, with immediate effect, so as to uphold the principle of academic freedom on which the University's work is founded."

The articles touched on topics deemed sensitive to the Communist Party, including the crackdown on pro-democracy demonstrations in Tiananmen Square in 1989, policies toward Tibetan and Uighur ethnic minorities, Taiwan and the 1966-76 Cultural Revolution.

Tim Pringle, editor of China Quarterly, applauded the decision to reverse course.

"Access to published materials of the highest quality is a core component of scholarly research," he said in a statement published online. "It is not the role of respected global publishing houses such as CUP to hinder such access. The China Quarterly will continue to publish articles that make it through our rigorous double-blind peer review process, regardless of topic or sensitivity."

The demand to remove the articles came from China's General Administration of Press and Publication, which warned that if they were not removed, the entire website would be made unavailable in China.

The articles would still have been available on a version of China Quarterly accessible outside China. But academics around the world had accused CUP of selling out and becoming complicit in censoring Chinese academic debate and history.

In an open letter published on Medium.com, James A. Millward, a professor of history at Georgetown University, had called the original decision "a craven, shameful and destructive concession" to the People's Republic of China's "growing censorship regime."

Millward said the decision to agree to censorship was a clear violation of academic independence inside and outside China.

He added it was akin to the New York Times or the Economist publishing versions of their papers inside China omitting content deemed offensive to the Communist Party.

"It is noteworthy that the topics and peoples CUP has so blithely chosen to censor comprise mainly minorities and the politically disadvantaged. Would you censor content about Black Lives Matter, Mexican immigrants or Muslims in your American publication list if Trump asked you to do [so]?" he asked.

In another open letter, MIT assistant professor Greg Distelhorst and Cornell associate professor Jessica Chen Weiss had warned: "The censored history of China will literally bear the seal of Cambridge University."

In a tweet, James Leibold, an associate professor at Melbourne's La Trobe University, whose scholarship about the Xinjiang region was among the censored articles, had called the decision "a shameful act."

And a petition circulated among academics warned that Cambridge University Press could face a boycott if it continued to acquiesce to the Chinese government's demands.

"It is disturbing to academics and universities worldwide that China is attempting to export its censorship on topics that do not fit its preferred narrative," wrote Christopher Balding, an associate professor at Peking University HSBC School of Business in Shenzhen, China, the petition's originator.

"If Cambridge University Press acquiesces to the demands of the Chinese government, we as academics and universities reserve the right to pursue other actions, including boycotts of Cambridge University Press and related journals."

The petition requested that only academics and people working in higher education sign, and that they give their affiliation. It had attracted 635 signatures on Change.org, although it could not immediately be established how many signatories were academics.

Later, Balding welcomed CUP's change of heart, but added: "These are issues Western institutions need to rethink. Just assuming there will be continued liberalization is not an accurate assessment."

In an editorial, China's state-run Global Times newspaper also cast the issue as a matter of principle and said that Western institutions can leave if they don't like it.

Western institutions have the freedom to choose, it wrote. If they don't like the Chinese way, they can stop engaging with us. If they think China's Internet market is so important that they can't miss out, they need to respect Chinese law and adapt to the Chinese way.

"It doesn't matter if some articles on the China Quarterly disappear on the Chinese Internet. But it is a matter of principle. Time will tell whose principles cater more to this era," it added.

Experts said China's decision was part of a broader crackdown on free expression in China under President Xi Jinping that has intensified this year as the Communist Party becomes more confident and less inclined to compromise.

In the past, China's system of censorship, nicknamed the Great Firewall of China, has concentrated mainly on Chinese-language material, and has been less preoccupied with blocking English-language material, which is accessed only by a narrow elite. But that may now be changing.

"The China Quarterly is very reputable within academic circles, and it does not promote the positive energy that China wants to see," said Qiao Mu, a former professor at Beijing Foreign Studies University who was demoted and ultimately left the university after criticizing the government. "Instead, it touches on historical reflection, talks about the Cultural Revolution and other errors that China has made in the past. These are things that China does not like and does not want to be discussed."

Qiao said the initial decision might have seemed wise for the publisher as a company, since China is a huge market. But it would have had a negative effect on already limited academic freedom in China.

"For Chinese academics, the effect is mainly psychological," he said. "They will think more when doing research and impose stricter self-censorship."

Internet companies have also faced similar dilemmas: Google chose to withdraw from China rather than submit to censorship, and has been displaced here by a censored Chinese search engine, Baidu.com. But LinkedIn has submitted to censorship and continues to operate here. Apple recently complied with a demand from the Chinese government to remove from its App Store in China many VPN (virtual private network) applications that netizens use to access blocked websites.

Millward had argued that Cambridge as a whole has more power than it perhaps realized in a battle of wills with the Chinese Communist Party (CCP).

"China is not going to ban everything branded Cambridge from the Chinese realm, because to do so would turn this into a big, public issue, and that is precisely what the authorities hope to avoid," he wrote.

"To do so would, moreover, pit the CCP against a household name that every Chinese person who knows anything about education reveres as one of the world's oldest and best universities. And Chinese, probably more than anyone else, revere universities, especially name-brand ones."

Cambridge University Press has made available a complete list of the articles that the Chinese government wanted censored here.

Luna Lin contributed to this report.


Rewriting history is a form of censorship – The Journal

I am disgusted by the actions of lawless individuals and groups destroying statues and symbols of our history.

Do these fools think they can somehow change our history by toppling a few bronze replicas of Civil War soldiers? They justify their actions by wrapping themselves in the banner of anti-racism, yet we do not see them attacking statues of Presidents Woodrow Wilson or Franklin D. Roosevelt, who were both racists. Wilson was responsible for re-segregating our armed forces in World War I, and Roosevelt imprisoned thousands of loyal Japanese-American citizens during World War II but did not arrest people of German or Italian descent.

Now we hear they want to destroy the Jefferson Memorial and rename Washington, D.C. Others have sprayed paint on the Lincoln Memorial. Where are these attempts to destroy the reputations of great men like Thomas Jefferson, Benjamin Franklin and even George Washington leading us?

Is this a prelude to tearing up our beloved Constitution and Bill of Rights?

Vince Young

Dolores

Continue reading here:

Rewriting history is a form of censorship - The Journal

Expect to spend more on health care in retirement even if you’re well – CNBC

Fidelity's calculations include premiums, cost-sharing provisions and out-of-pocket costs associated with Medicare parts A, B and D but do not include other health expenses such as over-the-counter medications, dental services and long-term care. "Estimates are calculated for 'average' retirees, but may be more or less depending on actual health status, area of residence and longevity," according to the release.

Intimidating as retirement health-care figures may be, experts say there are a variety of ways to anticipate them in your overall retirement plan and, potentially, reduce them.

Health savings accounts, or HSAs, can be a smart tool, Stavisky said. These accounts, which are paired with high-deductible health plans, have a triple tax advantage: Contributions are tax deductible, grow tax free and can also be withdrawn tax-free for qualified medical costs.

"Given that $275,000 figure, the odds of you having too much money in a health savings account are pretty limited," he said.
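The triple tax advantage described above can be made concrete with a toy comparison against an ordinary taxable account. All the figures below other than the $3,400 contribution limit (a 24% marginal income-tax rate, 15% tax on gains, 5% annual return, 20 years) are illustrative assumptions, not from the article.

```python
# Toy comparison of saving $3,400/yr (the 2017 individual HSA limit) in an
# HSA versus a fully taxable account.

def hsa_balance(contribution, rate, years):
    # Contributions are pre-tax, growth is untaxed, and withdrawals for
    # qualified medical costs are also untaxed.
    balance = 0.0
    for _ in range(years):
        balance = (balance + contribution) * (1 + rate)
    return balance

def taxable_balance(contribution, rate, years, income_tax, gains_tax):
    # Contributions come from after-tax dollars and each year's growth is
    # taxed (a simplification of how taxable investing actually works).
    after_tax = contribution * (1 - income_tax)
    balance = 0.0
    for _ in range(years):
        balance = (balance + after_tax) * (1 + rate * (1 - gains_tax))
    return balance

hsa = hsa_balance(3400, 0.05, 20)
taxable = taxable_balance(3400, 0.05, 20, income_tax=0.24, gains_tax=0.15)
print(f"HSA: ${hsa:,.0f}  taxable: ${taxable:,.0f}")
```

Under these assumptions the HSA ends up well ahead, which is the sense in which the odds of having "too much" in one are limited.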

More here:

Expect to spend more on health care in retirement even if you're well - CNBC

Time to get serious about ‘health care for all,’ says California Assembly leader who blocked it before – Sacramento Bee


Sacramento Bee

Time to get serious about 'health care for all,' says California Assembly leader who blocked it before

Assembly Speaker Anthony Rendon said Thursday it's time for the state Legislature to have a serious discussion on how to create a universal health care system for all of California. Rendon has been under fire from the California Nurses Association ...

Related coverage: "Single payer, revived? California lawmakers to hold health care hearings this fall" (The Mercury News); "California lawmakers to hold universal health care hearings" (KCRA Sacramento); "California Assembly Speaker Revives Bid for Universal Health Care" (Courthouse News Service); Long Beach Post.

Go here to read the rest:

Time to get serious about 'health care for all,' says California Assembly leader who blocked it before - Sacramento Bee

NRCC chair: ‘Clear’ mistake by Republicans to push healthcare reform first – Washington Examiner

COLUMBUS, OHIO -- Rep. Steve Stivers, R-Ohio, says it was a mistake for House Republicans to try to tackle healthcare reform first this year instead of other topics, including tax reform, and cast doubt on the idea that the Senate would be able to pass any Obamacare repeal bill this year.

"Oh, in hindsight, it's clear," Stivers, the chairman of the National Republican Congressional Committee, told the Washington Examiner when asked if it was a mistake to attempt healthcare reform first. "But it is what it is. You had to do them in some order."

"I would argue healthcare is pretty much..." Stivers said before catching himself. "We're shifting focus," he added, referring to issues like tax reform and other more achievable goals.

When asked if healthcare could resurface, he said it's possible, but said, "I doubt it."

"I think we're moving on to tax reform," he said. "It's time to move on to things we can get done and the Senate can get done. The Senate couldn't pass the skinny repeal bill. It is what it is. It's time to move on. We have precious time given to us by our voters. We need to focus on the things we can get done."

Stivers said healthcare is a tricky issue for Republicans as 2018 approaches, one that suggests that the GOP is unable to govern. The Senate has been unable to coalesce around a bill since the House passed the American Health Care Act in early May.

"I think it worries some people because some American citizens are losing confidence in our ability to get things done," Stivers said. "We need to reclaim their confidence and regain their confidence by getting things done. Healthcare was always the hardest of all the topics we were dealing with. There's just no consensus. We had a plan out there, but it didn't have 218 co-sponsors. It passed the House, but it didn't have the 50 votes needed, plus the vice president, in the Senate."

"In the end, the American people, I think, are willing to forgive us for not getting everything done, but they're not willing to accept us getting nothing done," he said. "It raises the stakes on tax reform. It raises the stakes on infrastructure. It raises the stakes on some welfare reform that we're going to do as part of tax reform. So it makes it important that we get those things done."

His comments come just weeks before Republicans are expected to push a tax reform plan for the first time since 1986. White House press secretary Sarah Sanders said Thursday that there could be an announcement next week on a tax reform proposal from the White House. Lawmakers, including House Speaker Paul Ryan, are also expected to be involved on the issue after they return to Washington after Labor Day.

The rest is here:

NRCC chair: 'Clear' mistake by Republicans to push healthcare reform first - Washington Examiner

Retiree health care costs up 6%, new study finds – InvestmentNews

In a perfect world, the largest expenses in retirement would be for fun things like travel and entertainment. In the real world, retiree health-care costs can take an unconscionably big bite out of savings.

A 65-year-old couple retiring this year will need $275,000 to cover health-care costs throughout retirement, Fidelity Investments said in its annual cost estimate, out this morning. That stunning number is about 6 percent higher than it was last year. Costs would be about half that amount for a single person, though women would pay a bit more than men since they live longer.

You might think that number looks high. At 65, you're eligible for Medicare, after all. But monthly Medicare premiums for Part B (which covers doctor's visits, surgeries, and more) and Part D (drug coverage) make up 35 percent of Fidelity's estimate. The other 65 percent is the cost-sharing, in and out of Medicare, in co-payments and deductibles, as well as out-of-pocket payments for prescription drugs.

And that doesn't include dental care or nursing-home and long-term care costs.

Retirees can buy supplemental, or Medigap, insurance to cover some of the things Medicare doesn't, but those premiums would lead back to the same basic estimate, said Adam Stavisky, senior vice president for Fidelity Benefits Consulting.

The 6 percent jump in Fidelity's estimate mirrors the average annual 5.5 percent inflation rate for medical care that HealthView Services, which makes health-care cost projection software, estimates for the next decade. A recent report from the company drilled into which health-care costs will grow the fastest.

It estimates a long-term inflation rate of 7.2 percent for Medigap premiums and 8 percent for Medicare Part D. For out-of-pocket costs, the company estimates inflation rates of 3.7 percent for prescription drugs, 5 percent in dental, hearing, and vision services, 3 percent for hospitals, and 3.4 percent for doctor's visits and tests.

Cost-of-living-adjustments on Social Security payments, meanwhile, are expected to grow by 2.6 percent, according to the HealthView Services report.

What's really sobering is the impact of inflation on Fidelity's retiree health-care cost estimates over the years. From 2002, when Fidelity first did an estimate, to its latest projection, the number is up 70 percent.

"It's the power of compounding," Stavisky said. "It's great for investing and brutal for health-care costs."
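The compounding effect is easy to check with a quick calculation: back out the annual growth rate implied by a 70% rise between 2002 and 2017 (a 15-year window, inferred from the years the article gives), then project the $275,000 figure forward at the 5.5% medical-inflation rate cited for the next decade.

```python
# Implied annual growth from a 70% total rise over 15 years, and a
# 10-year forward projection of the $275,000 estimate at 5.5%/yr.

years = 2017 - 2002
implied_rate = 1.70 ** (1 / years) - 1        # ~3.6% per year
projected = 275_000 * 1.055 ** 10             # estimate 10 years out
print(f"implied annual growth: {implied_rate:.1%}")
print(f"$275,000 at 5.5%/yr for 10 years: ${projected:,.0f}")
```

Even a modest-looking annual rate compounds into a startling total, which is Stavisky's point.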

In its 2017 statement, Fidelity brings up a fairly hot topic in health-care circles, health savings accounts (HSAs), as a way employers are helping workers manage costs. (Others might describe the plans as shifting more of the rapidly rising costs of health care onto employees.) HSAs are tax-advantaged accounts to which employees can contribute a certain amount of pre-tax dollars each year to use for medical costs. Employers usually kick in some money, too. The 2017 contribution limit is $3,400 for singles and $6,750 for a person with a family.

HSAs usually accompany high-deductible health plans, which are becoming far more common. (For a good comparison of health-care savings account providers, see Morningstar's 2017 Health Savings Account Landscape.) In return for low premiums, employees have high deductibles to cover before insurance kicks in. In 2017, annual deductibles are at least $1,300 for a single person, with a maximum out-of-pocket expense of $6,550. For a family, the minimum deductible is $2,600, with an out-of-pocket cap of $13,100.
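The 2017 deductible and out-of-pocket limits above amount to a simple range check, sketched here. The function name and dictionary layout are illustrative, not from any real API, and this ignores the other IRS conditions a plan must meet.

```python
# Check whether a plan's numbers fall within the 2017 high-deductible
# health plan (HDHP) ranges quoted in the article.

LIMITS_2017 = {
    "single": {"min_deductible": 1300, "max_out_of_pocket": 6550},
    "family": {"min_deductible": 2600, "max_out_of_pocket": 13100},
}

def qualifies_as_hdhp(coverage, deductible, out_of_pocket_max):
    limits = LIMITS_2017[coverage]
    return (deductible >= limits["min_deductible"]
            and out_of_pocket_max <= limits["max_out_of_pocket"])

print(qualifies_as_hdhp("single", 2000, 6000))   # within both limits
print(qualifies_as_hdhp("family", 2000, 14000))  # out-of-pocket cap exceeded
```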

Part of the logic behind HSAs is that employees will be better health-care consumers under such plans. And they might, if being an informed, effective consumer weren't extremely difficult and time-consuming in the murky world of American health-care pricing.

And it's not available to everyone. In the real world, high costs and steep deductibles discourage many people from using the health care they've bought, starting a cascade of ills. Staying healthy can be expensive.

On the bright side, financial planners love that HSAs are "triple tax-advantaged." Money goes in pre-tax, earnings on that money aren't taxed, and the money can be used, without being taxed, for qualified medical expenses. If people have the means to pay for health-care costs out of pocket and leave the HSA money growing tax-free, it can be another tax-advantaged way to save for retirement.

Health-care costs will likely keep climbing, so one of the best investments anyone can make is to work at staying healthy, if possible. For a sense of how much health care could cost you in retirement, and how staying healthy can lower those costs, try AARP's health-care costs calculator. It provides a rough cost estimate based on your height, weight, gender, and state. Users can add in various health conditions to see how much they might add to projected health-care costs in retirement, or subtract from them if, for instance, an overweight person slimmed down.

Whether you're 60 or 25 or somewhere in between, the prospect of retirement should be more inspiring (cities to visit, languages to learn, books to read or to write) than the anxious business of war-gaming what your health will be like 10 or 20 or 30 years out. But if paying closer attention to your body and mind now means more money for travel and growth and relaxation after a long, hard working life, it's not a bad trade-off.

Read the original here:

Retiree health care costs up 6%, new study finds - InvestmentNews

Streamlining health care – Stowe Today

Vermont residents spend more on health care than the national or regional averages, federal data show.

Community Health Services of Lamoille Valley hopes to reverse that trend, at least at the local level, by streamlining services with a new $5.5 million home.

The one-story building, scheduled to open in January, will provide one-stop shopping for patients and lower costs, says CEO Kevin Kelley, who noted that the community health organization has not raised its rates for services in a number of years.

"We are forward-thinking five, 10, 15 years down the road, because health care is changing," Kelley said. He hopes the new space will draw additional qualified professionals to the area, making a larger impact on the people that Community Health serves, currently 69 percent of Lamoille County residents with a primary-care doctor.

Community Health Services is a federally qualified health center that's designed to ensure that Lamoille County residents have easy access to high-quality, timely, comprehensive health care at an affordable price.

It encompasses six diverse medical practices (Appleseed Pediatrics, Stowe Family Practice, Morrisville Family Health Care, the Behavioral Health & Wellness Center, the Neurological Clinic and the Community Dental Clinic) and over the last decade has doubled its staff from 70 employees to 142.

It has also boosted the number of annual patient visits from 14,965 to 17,418. While that's an increase of only about 16 percent, it has helped double the organization's annual revenue over the past 10 years to $15.6 million.

As the agency has grown, the organization's five locations across Morrisville have nearly burst at the seams.

"We are totally out of space," Kelley said.

The new 27,000-square-foot structure on a 4-acre field the organization owns at 407 Washington Highway, near the current home of Morrisville Family Health Care, should help alleviate the space problem.

And, for patients, it will consolidate the services provided in the five leased buildings into one owned by Community Health Services.

"When we lease properties, we can't control increases in costs," Kelley said. But the single building will have a fixed-rate mortgage, be energy-efficient and provide other fixed costs that can reduce the impact on patients.

It will also help alleviate parking problems at Morrisville Family Health Care with 103 new spaces, compared to just 22 at the current building, and allow patients to simply walk across the hall to other services they need.

And if their primary doctor recommends they see a specialist, a conference room will offer telemedicine (remote diagnosis and treatment) so the patient doesn't have to leave the building.

In the past, Community Health Services has offered telemedicine only for psychiatric care.

The new building will also house Appleseed and the neurology clinic, now in the basement of Morrisville Family Health, allowing patients with neurological disorders, such as epilepsy or multiple sclerosis, to walk into a first-floor clinic rather than deal with stairs or an elevator.

Eventually, Morrisville Family Health Care will shuffle across the street, increasing its capacity with three exam rooms instead of two, as well as another procedure room, and the medically assisted treatment team, which helps patients with addiction, will increase its capacity in the new building.

Behavioral Health & Wellness will move too, from Northgate Plaza to Washington Highway, in the building where Morrisville Family Health currently lives, creating a hub of patient care centered just across from Copley Hospital.

In the end, only Stowe Family Practice and the Community Dental Clinic will remain at their current locations.

Community Health Services has recently teamed up Appleseed Pediatrics with the Lamoille Family Center to bring a new program, Developmental Understanding and Legal Collaboration for Everyone, to Lamoille County, ensuring that newborns and their families receive high-quality medical care as well as the social services and community support they need during the first six months of the newborn's life.

It's a three-year pilot program, and the Lamoille Valley is one of five communities participating countrywide.

In line with that mission, Community Health Services will bring the special supplemental nutrition program for Women, Infants and Children, a federal program not under the health service conglomerate's control, into the building for a few days a week.

While all these services will still function as separate entities, having one central location will allow for better collaboration on patient care, and pooling resources should provide greater efficiencies and savings.

Now, Kelley and his team are working with Copley Hospital to find any services that can be moved across the street to the health center to provide even further ease of access, especially when the cold winter months could dissuade patients from walking across the street.

Community Health Services is already working with Copley to reduce emergency-room use for primary care by placing a social worker in the emergency department to talk to patients who don't have health insurance or aren't being treated by a primary care physician.

So far, 58 percent of the patients the social worker has spoken with have set up primary care physicians, and scheduled follow-up appointments with them instead of with emergency room doctors.

As a federally qualified health center, the organization has been awarded a $1 million federal grant toward the project cost. The Vermont Economic Development Authority has also awarded the project $1.5 million in partial financing; Union Bank is providing the rest of the financing for the building itself.

Community Health Services of Lamoille Valley is kick-starting a campaign to raise $200,000 for ancillary equipment in the new building thats not covered by current funding.

"We are reaching out for the first time to the community for funding," Kelley said. "It's an opportunity for patients to give back to help sustain health care for the future."

Donations can be made at chslv.org/gift-giving.

Read more:

Streamlining health care - Stowe Today

GOP can improve health care and lower taxes | Opinion – Sun Sentinel

Congressional Republicans are gearing up for a battle over tax reform. Nearly everyone in the caucus would like to slash corporate and individual taxes. But they will need to close some loopholes in the tax code if they hope to offset the revenue they will lose by lowering rates.

One of the sacred cows Republicans ought to target is the "employer exclusion," which exempts employer-sponsored health benefits from income and payroll taxes. By effectively subsidizing health insurance, the exclusion has exacerbated our nation's health cost crisis.

Taxing health benefits would make America's health insurance market fairer and more economically efficient. The exclusion is a relic of the World War II era, when employers began offering workers generous health benefits to get around government wage and price controls.

Before then, most people did not receive health insurance through their jobs. The employer exclusion has since become the single largest break in the tax code. This year, the federal government will forego about $350 billion because of the exclusion.

Businesses understandably want to preserve this loophole, which helps them recruit and retain workers. Employees like the loophole as well. To them, a dollar of tax-free health benefits is worth more than a dollar of taxable income.

The exclusion might be popular, but it is bad public policy.

First, it is deeply unfair to Americans whose employers do not offer health coverage. Many of these folks do not receive a subsidy to buy their own policies on the individual market.

Second, it distorts the labor market. Over 155 million people receive health insurance through their jobs. By tethering health insurance to employers, the government has made it less likely these folks will seek out new jobs or start their own businesses, since they would have to give up their health plans.

Third, it is highly regressive. In 2016, the top two-fifths of earners received nearly 70 percent of the benefit from the tax break. The bottom fifth of the income distribution, meanwhile, captured one-half of 1 percent of the exclusion's benefits.

Worst of all, the loophole drives up health costs. When employers pick up most of the cost of coverage ("first dollar coverage"), people have less incentive to consume health care responsibly. This leads to wasteful spending that inflates insurance premiums.

The average employer contribution to a family insurance plan more than tripled between 1999 and 2016, rising from $4,247 to $12,865. The explosive growth in premiums has left businesses with less money for wages. Combined salaries, wages, and bonuses increased just 58 percent from 1999 to 2015.
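The gap between those two growth figures is clearer as compound annual rates: the premium jump works out to roughly 6.7% per year, versus about 2.9% per year for the 58% wage growth over a similar period.

```python
# Compound annual growth rate (CAGR) for the premium and wage figures
# quoted above.

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

premium_growth = cagr(4247, 12865, 2016 - 1999)   # employer contribution
wage_growth = cagr(1.00, 1.58, 2015 - 1999)       # 58% total wage growth
print(f"premiums: {premium_growth:.1%}/yr  wages: {wage_growth:.1%}/yr")
```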

In short, the federal government is sacrificing hundreds of billions of dollars a year to subsidize needlessly lavish health coverage for wealthy folks. It's hard to imagine a loophole more deserving of the axe.

It would be politically impossible to do away with the loophole all at once. But Republicans could start the reform process by capping the exclusion at $8,000 for individual plans and $20,000 for family plans. These limits are slightly higher than the average premium for an employer-sponsored plan. So the majority of workers wouldn't be affected.

To keep up with the gradual rise in healthcare costs over time, these caps could grow at the inflation rate plus 1 percent. Such a reform would not ban employers from offering extravagant health benefits. But it would stop subsidizing such decisions with taxpayer dollars.
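The mechanics of such a cap can be sketched with a projection. The 2% inflation rate and the $4,000 worker share of the family premium below are hypothetical assumptions; the 6.7% premium growth rate is the historical CAGR implied by the figures earlier in this piece. Because the premium grows faster than the cap, the tax-free portion of lavish plans shrinks over time.

```python
# Project the proposed $20,000 family cap at "inflation plus 1 percent"
# and compare with a family premium growing at ~6.7%/yr.

def project(value, rate, years):
    return value * (1 + rate) ** years

cap_growth = 0.02 + 0.01          # assumed 2% inflation, plus 1 point
years = 25
family_cap = project(20_000, cap_growth, years)
family_premium = project(12_865 + 4_000, 0.067, years)  # employer + assumed worker share
print(f"cap after {years} yrs: ${family_cap:,.0f}; premium: ${family_premium:,.0f}")
```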

Many employers would respond by sponsoring less comprehensive high-deductible plans, and pay workers higher wages instead. These high-deductible plans, especially if paired with tax-advantaged Health Savings Accounts, would encourage workers to shop around for health care. And that would put downward pressure on overall healthcare spending.

It's time for Congress to restore some fairness and fiscal discipline to our health care sector by capping the employer exclusion. Lawmakers could use the tens of billions in new revenue to finance permanent tax cuts that boost economic growth, increase wages, and create jobs.

Sally C. Pipes is president, CEO, and Thomas W. Smith Fellow in Health Care Policy at the Pacific Research Institute. Her latest book is "The Way Out of Obamacare" (Encounter 2016). Follow her on Twitter @sallypipes.

See the original post:

GOP can improve health care and lower taxes | Opinion - Sun Sentinel

The AI Revolution: The Road to Superintelligence | Inverse

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are.

This article originally appeared on Wait But Why by Tim Urban. This is Part 1; Part 2 is here.

We are on the edge of change comparable to the rise of human life on Earth.

-Vernor Vinge

What does it feel like to stand here?

It seems like a pretty intense place to be standing, but then you have to remember something about what it's like to stand on a time graph: you can't see what's to your right. So here's how it actually feels to stand there:

Which probably feels pretty normal

Imagine taking a time machine back to 1750, a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It's impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone's face and chat with them even though they're on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.

This experience for him wouldn't be surprising or shocking or even mind-blowing; those words aren't big enough. He might actually die.

But here's the interesting thing: if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he'd take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things, but he wouldn't die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much less different than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he'd be impressed with how committed Europe turned out to be with that new imperialism fad, and he'd have to do some major revisions of his world map conception. But watching everyday life go by in 1750 (transportation, communication, etc.) definitely wouldn't make him die.

No, in order for the 1750 guy to have as much fun as we had with him, he'd have to go much farther back, maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world, from a time when humans were, more or less, just another animal species, saw the vast human empires of 1750, with their towering churches, their ocean-crossing ships, their concept of being inside, and their enormous mountain of collective, accumulated human knowledge and discovery, he'd likely die.

And then what if, after dying, he got jealous and wanted to do the same thing? If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he'd show the guy everything and the guy would be like, "Okay, what's your point? Who cares?" For the 12,000 BC guy to have the same fun, he'd have to go back over 100,000 years and get someone he could show fire and language to for the first time.

In order for someone to be transported into the future and die from the level of shock they'd experience, they have to go enough years ahead that a "die level" of progress, or a Die Progress Unit (DPU), has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.

This pattern, human progress moving quicker and quicker as time goes on, is what futurist Ray Kurzweil calls human history's Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies, because they're more advanced. 19th-century humanity knew more and had better technology than 15th-century humanity, so it's no surprise that humanity made far more advances in the 19th century than in the 15th century; 15th-century humanity was no match for 19th-century humanity.
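The shrinking DPU can be sketched as a toy model: if the rate of progress doubles every fixed interval, the time needed to accumulate one DPU's worth of progress falls geometrically. The doubling interval and DPU size below are arbitrary illustration values, not Kurzweil's actual numbers.

```python
# Toy model of accelerating returns: time to accumulate one DPU of
# progress when the rate of progress doubles every doubling_time years.

def time_to_next_dpu(current_rate, dpu_size, doubling_time):
    """Years to accumulate dpu_size units of progress, starting at
    current_rate units/year, with the rate doubling every doubling_time
    years. Integrated numerically in small time steps."""
    elapsed, progress, rate = 0.0, 0.0, current_rate
    step = 0.01
    while progress < dpu_size:
        progress += rate * step
        rate *= 2 ** (step / doubling_time)
        elapsed += step
    return elapsed

first = time_to_next_dpu(1.0, 100.0, 50.0)
# Once the rate has already doubled four times, the same DPU arrives
# roughly ten times sooner:
later = time_to_next_dpu(16.0, 100.0, 50.0)
print(round(first), round(later))
```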

This works on smaller scales too. The movie Back to the Future came out in 1985, and "the past" took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes, but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones; today's Marty McFly, a teenager born in the late '90s, would be much more out of place in 1985 than the movie's Marty McFly was in 1955.

This is for the same reason we just discussed: the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985, because the former was a more advanced world; so much more change happened in the most recent 30 years than in the prior 30.

So advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?

Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000; in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century's worth of progress happened between 2000 and 2014, and that another 20th century's worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century's worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.
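The first of those ratios is a simple consistency check: if a whole century's progress could be redone in 20 years at the year-2000 rate, that rate is 100/20 = 5 times the century's average, matching the "five times faster" figure.

```python
# Sanity-check the "five times faster" claim from the quoted figures.
century_years = 100
years_at_2000_rate = 20
speedup = century_years / years_at_2000_rate
print(speedup)  # 5.0
```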

If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015; i.e., the next DPU might only take a couple decades, and the world in 2050 might be so vastly different from today's world that we would barely recognize it.

This isn't science fiction. It's what many scientists smarter and more knowledgeable than you or I firmly believe, and if you look at history, it's what we should logically predict.

So then why, when you hear me say something like "the world 35 years from now might be totally unrecognizable," are you thinking, "Cool... but nahhhhhhh"? Three reasons we're skeptical of outlandish forecasts of the future:

1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th-century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It's most intuitive for us to think linearly, when we should be thinking exponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They'd be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they're moving now.

2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way a little segment of a huge circle looks almost like a straight line up close. Second, exponential growth isn't totally smooth and uniform. Kurzweil explains that progress happens in S-curves:

An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases: slow growth (the early phase of exponential growth), rapid growth (the late, explosive phase), and a leveling off as the particular paradigm matures.

If you look only at very recent history, the part of the S-curve you're on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smartphones. That was Phase 2: the growth-spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that's missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.
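The three phases show up in a logistic function, the standard mathematical form of an S-curve. Sampling the growth increment per step illustrates why a short window taken from Phase 1 or Phase 3 can make overall progress look stalled; the midpoint and steepness values here are arbitrary.

```python
import math

def logistic(t, midpoint=50.0, steepness=0.15):
    # Standard logistic S-curve rising from ~0 to ~1.
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

# Growth achieved in each unit time step across the curve.
increments = [logistic(t + 1) - logistic(t) for t in range(100)]
early = sum(increments[:20])     # phase 1: slow ramp-up
middle = sum(increments[40:60])  # phase 2: growth spurt
late = sum(increments[80:])      # phase 3: leveling off
print(round(early, 3), round(middle, 3), round(late, 3))
```

Most of the total growth lands in the middle window, even though the endpoints of the curve look nearly flat.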

3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as "the way things happen." We're also limited by our imagination, which takes our experience and uses it to conjure future predictions, but often, what we know simply doesn't give us the tools to think accurately about the future. When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, "That's stupid; if there's one thing I know from history, it's that everybody dies." And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented, either.

So while nahhhhh might feel right as you read this post, it's probably actually wrong. The fact is, if we're being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they'll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human, kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what's going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that's coming next.

If you're like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you've been hearing it mentioned by serious people, and you don't really quite get it.

There are three reasons a lot of people are confused about the term AI:

1) We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.

2) AI is a broad topic. It ranges from your phone's calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.

3) We use AI all the time in our daily lives, but we often don't realize it's AI. John McCarthy, who coined the term "Artificial Intelligence" in 1956, complained that "as soon as it works, no one calls it AI anymore." Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to "insisting that the Internet died in the dot-com bust of the early 2000s."

So let's clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not, but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body, if it even has a body. For example, the software and data behind Siri is AI, the woman's voice we hear is a personification of that AI, and there's no robot involved at all.

Secondly, you've probably heard the term "singularity" or "technological singularity." This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It's been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don't apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology's intelligence exceeds our own, a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly infinite pace, and after which we'll be living in a whole new world. I found that many of today's AI thinkers have stopped using the term, and it's confusing anyway, so I won't use it much here (even though we'll be focusing on that idea throughout).

Finally, while there are many different types or forms of AI, since AI is a broad concept, the critical categories we need to think about are based on an AI's caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There's AI that can beat the world chess champion in chess, but that's the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it'll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board, a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we're yet to do it. Professor Linda Gottfredson describes intelligence as "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Artificial Superintelligence ranges from a computer that's just a little smarter than a human to one that's trillions of times smarter across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words "immortality" and "extinction" will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI, ANI, in many ways, and it's everywhere. The AI Revolution is the road from ANI, through AGI, to ASI, a road we may or may not survive but that, either way, will change everything.

Let's take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing, like the chess engines and Siri-style assistants mentioned above.

ANI systems as they are now aren't especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial-markets disaster (like the 2010 Flash Crash, when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).

But while ANI doesn't have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively harmless ANI as a precursor of the world-altering hurricane that's on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or, as Aaron Saenz sees it, our world's ANI systems are like the amino acids in the early Earth's primordial ooze: the inanimate stuff of life that, one unexpected day, woke up.

Why It's So Hard

Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down: all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.

What's interesting is that the hard parts of trying to build AGI (a computer as smart as humans in general, not just at one narrow specialty) are not intuitively what you'd think they are. Build a computer that can multiply two ten-digit numbers in a split second? Incredibly easy. Build one that can look at a dog and answer whether it's a dog or a cat? Spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old's picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things like calculus, financial market strategy, and language translation are mind-numbingly easy for a computer, while easy things like vision, motion, movement, and perception are insanely hard for it. Or, as computer scientist Donald Knuth puts it, "AI has by now succeeded in doing essentially everything that requires thinking but has failed to do most of what people and animals do without thinking."

What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it's not that software is dumb for not being able to figure out the slanty-word recognition test when you sign up for a new account on a site; it's that your brain is super impressive for being able to.

On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven't had any time to evolve a proficiency at them, so a computer doesn't need to work too hard to beat us. Think about it: which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwritings and it could instantly know it was a B?

One fun example: when you look at this, you and a computer both can figure out that it's a rectangle with two distinct shades, alternating:

Tied so far. But if you pick up the black and reveal the whole image...

you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees, a variety of two-dimensional shapes in several different shades, which is actually what's there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray. And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is: a photo of an entirely black, 3-D rock:

And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.

Daunting.

So how do we get there?

First Key to Creating AGI: Increasing Computational Power

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.

Ray Kurzweil came up with a shortcut by taking someone's professional estimate for the cps of one structure and that structure's weight compared to that of the whole brain, and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark: around 10^16, or 10 quadrillion cps.
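That shortcut is simple enough to sketch in a couple of lines. The figures below are illustrative placeholders, not Kurzweil's actual per-structure estimates:

```python
def extrapolate_brain_cps(structure_cps, structure_mass_fraction):
    """Scale one brain structure's estimated cps up by that structure's
    share of the whole brain (Kurzweil's proportional shortcut)."""
    return structure_cps / structure_mass_fraction

# Hypothetical example: a structure estimated at 1e15 cps that makes up
# ~10% of the brain implies ~1e16 cps for the whole brain.
whole_brain = extrapolate_brain_cps(1e15, 0.10)
```

Repeating this for several structures and checking that the totals land in the same ballpark is what gives the 10-quadrillion-cps figure its (rough) credibility.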

Currently, the world's fastest supercomputer, China's Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.

Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human level (10 quadrillion cps), that'll mean AGI could become a very real part of life.

Moore's Law is a historically reliable rule that the world's maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil's cps/$1,000 metric, we're currently at about 10 trillion cps/$1,000, right on pace with the predicted trajectory.

So the world's $1,000 computers are now beating the mouse brain, and they're at about a thousandth of human level. This doesn't sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.

So on the hardware side, the raw power needed for AGI is technically available now, in China, and we'll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn't make a computer generally intelligent; the next question is, how do we bring human-level intelligence to all that power?
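The timeline above is just doubling arithmetic, which you can check in a few lines. This is a sketch of the extrapolation, not Kurzweil's actual model; the doubling period is left as a parameter, since price-performance in his charts improves faster than the classic two-year hardware figure:

```python
import math

def years_until(target_cps, current_cps, doubling_years):
    """Years for cps per $1,000 to reach a target under steady doubling."""
    doublings = math.log2(target_cps / current_cps)
    return doublings * doubling_years

# From ~10 trillion (1e13) cps/$1,000 in 2015 to the brain's ~1e16:
# log2(1000) is about 10 doublings, so a roughly one-year doubling
# period in price-performance lands near 2025.
years = years_until(1e16, 1e13, doubling_years=1.0)
```

With the conservative two-year doubling period instead, the same gap takes about twenty years, which shows how sensitive these forecasts are to the assumed rate.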

Second Key to Creating AGI: Making it Smart

This is the icky part. The truth is, no one really knows how to make it smart; we're still debating how to make a computer human-level intelligent and capable of knowing what a dog, a weirdly written B, and a mediocre movie are. But there are a bunch of far-fetched strategies out there, and at some point, one of them will work. Here are the three most common strategies I came across:

1) Plagiarize the brain.

This is like scientists toiling over how the kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can't do nearly as well as that kid, and then they finally decide, "k, fuck it, I'm just gonna copy that kid's answers." It makes sense: we're stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030. Once we do that, we'll know all the secrets of how the brain runs so powerfully and efficiently, and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor "neurons," connected to each other with inputs and outputs, and it knows nothing, like an infant brain. The way it learns is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it's told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it's told it was wrong, those pathways' connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways, and the machine has become optimized for the task. The brain learns a bit like this, but in a more sophisticated way, and as we continue to study the brain, we're discovering ingenious new ways to take advantage of neural circuitry.
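The strengthen-when-right, weaken-when-wrong loop just described can be shown with a toy single-neuron network (a perceptron). This is a minimal illustration of the feedback idea, not how real neural networks are built (those use many neurons and gradient-based training). Here the network organizes itself to compute logical AND from feedback alone:

```python
import random

def train_perceptron(examples, epochs=20, lr=0.1, seed=0):
    """Toy one-neuron network: connections that produced a right answer
    are strengthened, ones that produced a wrong answer are weakened."""
    random.seed(seed)
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # random initial wiring
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out            # feedback: +1, 0, or -1
            w[0] += lr * err * x[0]       # adjust the contributing connections
            w[1] += lr * err * x[1]
            b += lr * err
    return lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Learn logical AND purely from trial and feedback:
net = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
```

After training, the network fires only for the (1, 1) input, even though it started with random connections and was never told the rule explicitly.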

More extreme plagiarism involves a strategy called "whole brain emulation," where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We'd then have a computer officially capable of everything the brain is capable of; it would just need to learn and gather information. If engineers get really good, they'd be able to emulate a real brain with such exact accuracy that the brain's full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he'd probably be really excited about.

How far are we from achieving whole brain emulation? Well, so far, we've just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress: now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

2) Try to make evolution do what it did before but for us this time.

So if we decide the smart kid's test is too hard to copy, we can try to copy the way he studies for the tests instead.

Here's something we know: building a computer as powerful as the brain is possible; our own brain's evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulate evolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird's wing-flapping motions; often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.

So how can we simulate evolution to build AGI? The method, called genetic algorithms, would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures perform by living life and are evaluated by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
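The performance-evaluate-breed loop above can be sketched in a few lines. This toy genetic algorithm (an illustration with arbitrary parameters, not a serious AGI method) evolves random bit strings toward a target pattern: the fittest half survives each round, and the rest are replaced by children made by merging half of each of two parents, with occasional mutation:

```python
import random

def evolve(target, pop_size=40, generations=300, seed=1):
    """Toy genetic algorithm: breed the fittest bit strings, drop the rest."""
    random.seed(seed)
    n = len(target)
    fitness = lambda g: sum(a == b for a, b in zip(g, target))  # evaluation
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:
            break                                  # a perfect candidate evolved
        survivors = pop[:pop_size // 2]            # selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)           # breeding: merge half of each
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # occasional random mutation
                i = random.randrange(n)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve([1, 0, 1, 1, 0, 0, 1, 0])
```

The automated evaluate-and-breed cycle the paragraph calls for is exactly the body of that loop; the hard part in practice is designing a fitness function that actually measures intelligence rather than an 8-bit pattern.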

The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.

But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly; it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn't aim for anything, including intelligence; sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence, like revamping the ways cells produce energy, when we can remove those extra burdens and use things like electricity. No doubt we'd be much, much faster than evolution, but it's still not clear whether we'll be able to improve upon evolution enough to make this a viable strategy.

3) Make this whole thing the computers problem, not ours.

This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we'd build a computer whose two major skills would be doing research on AI and coding changes into itself, allowing it to not only learn but to improve its own architecture. We'd teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job: figuring out how to make themselves smarter. More on this later.

All of This Could Happen Soon

Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:

1) Exponential growth is intense, and what seems like a snail's pace of advancement can quickly race upwards.

2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.

At some point, we'll have achieved AGI: computers with human-level general intelligence. Just a bunch of people and computers living together in equality.

Oh actually not at all.

The thing is, AGI with a level of intelligence and computational capacity identical to a human's would still have significant advantages over humans.

AI, which will likely get to AGI by being programmed to self-improve, wouldn't see human-level intelligence as some important milestone; it's only a relevant marker from our point of view, and it wouldn't have any reason to stop at our level. And given the advantages over us that even a human-intelligence-equivalent AGI would have, it's pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.

This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we're aware of about any animal's intelligence is that it's far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans.

So as AI zooms upward in intelligence toward us, we'll see it as simply becoming smarter, for an animal. Then, when it hits the lowest capacity of humanity (Nick Bostrom uses the term "the village idiot"), we'll be like, "Oh wow, it's like a dumb human. Cute!" The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range, so just after hitting village-idiot level and being declared to be AGI, it'll suddenly be smarter than Einstein and we won't know what hit us.

And what happens after that?

I hope you enjoyed normal time, because this is when this topic gets unnormal and scary, and it's gonna stay that way from here forward. I want to pause here to remind you that every single thing I'm going to say is real: real science and real forecasts of the future from a large array of the most respected thinkers and scientists. Just keep remembering that.

Anyway, as I said above, most of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn't involve self-improvement would now be smart enough to begin self-improving if they wanted to.

And here's where we get to an intense concept: recursive self-improvement. It works like this:

An AI system at a certain level, let's say human village idiot, is programmed with the goal of improving its own intelligence. Once it does, it's smarter; maybe at this point it's at Einstein's level, so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make even bigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion, and it's the ultimate example of The Law of Accelerating Returns.
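A toy model makes the "bigger leaps" dynamic concrete. Assume, purely for illustration, that each self-improvement step adds a fixed fraction of the system's current intelligence; then the leaps grow geometrically, and even an enormous gap closes in a surprisingly modest number of steps:

```python
def steps_to_level(start, target, gain_per_step=0.1):
    """Count self-improvement steps under the toy assumption that each
    step adds 10% of the system's *current* intelligence, so smarter
    systems make bigger absolute leaps."""
    level, steps = start, 0
    while level < target:
        level *= 1 + gain_per_step  # the leap scales with current intelligence
        steps += 1
    return steps

# From human level (1.0) to 170,000x human: only a few hundred steps.
steps = steps_to_level(1.0, 170_000.0)
```

The 10% gain and the 170,000x figure (borrowed from the scenario below) are arbitrary, but the shape of the result is not: under any compounding assumption, the distance from village idiot to far-beyond-Einstein is a short stretch of the curve.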

There is some debate about how soon AI will reach human-level general intelligence. The median year in a survey of hundreds of scientists about when they believed we'd be more likely than not to have reached AGI was 2040; that's only 25 years from now, which doesn't sound that huge until you consider that many of the thinkers in this field think it's likely that the progression from AGI to ASI happens very quickly. Like, this could happen:

It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian economics. In our world, smart means a 130 IQ and stupid means an 85 IQ; we don't have a word for an IQ of 12,952.

What we do know is that humans' utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim, and this might happen in the next few decades.

Read the original here:

The AI Revolution: The Road to Superintelligence | Inverse

Friendly artificial intelligence – Wikipedia

A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive rather than negative effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behaviour and ensuring it is adequately constrained.

The term was coined by Eliezer Yudkowsky[1] to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:[2]

Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism designto define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.

'Friendly' is used in this context as technical terminology, and picks out agents that are safe and useful, not necessarily ones that are "friendly" in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.[3]

The roots of concern about artificial intelligence are very old. Kevin LaGrandeur showed that the dangers specific to AI can be seen in ancient literature concerning artificial humanoid servants such as the golem, or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those stories, the extreme intelligence and power of these humanoid creations clash with their status as slaves (which by nature are seen as sub-human), and cause disastrous conflict.[4] By 1942 these themes prompted Isaac Asimov to create the "Three Laws of Robotics" - principles hard-wired into all the robots in his fiction, intended to prevent them from turning on their creators or allowing them to come to harm.[5]

In modern times as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has said that superintelligent AI systems with goals that are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:

Basically we should assume that a 'superintelligence' would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is 'human friendly.'

Ryszard Michalski, a pioneer of machine learning, taught his Ph.D. students decades ago that any truly alien mind, including a machine mind, was unknowable and therefore dangerous to humans.[citation needed]

More recently, Eliezer Yudkowsky has called for the creation of friendly AI to mitigate existential risk from advanced artificial intelligence. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[6]

Steve Omohundro says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of basic "drives", such as resource acquisition, because of the intrinsic nature of goal-driven systems and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior.[7][8]

Alexander Wissner-Gross says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold.[9][10]

Luke Muehlhauser, writing for the Machine Intelligence Research Institute, recommends that machine ethics researchers adopt what Bruce Schneier has called the "security mindset": Rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests even an AI that only makes accurate predictions and communicates via a text interface might cause unintended harm.[11]

Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, coherent extrapolated volition is people's choices and the actions people would collectively take if "we knew more, thought faster, were more the people we wished we were, and had grown up closer together."[12]

Rather than a Friendly AI being designed directly by human programmers, it is to be designed by a "seed AI" programmed to first study human nature and then produce the AI which humanity would want, given sufficient time and insight, to arrive at a satisfactory answer.[12] The appeal to an objective though contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.

Ben Goertzel, an artificial general intelligence researcher, believes that friendly AI cannot be created with current human knowledge. Goertzel suggests humans may instead decide to create an "AI Nanny" with "mildly superhuman intelligence and surveillance powers", to protect the human race from existential risks like nanotechnology and to delay the development of other (unfriendly) artificial intelligences until and unless the safety issues are solved.[13]

Steve Omohundro has proposed a "scaffolding" approach to AI safety, in which one provably safe AI generation helps build the next provably safe generation.[14]

Stefan Pernar argues along the lines of Meno's paradox to point out that attempting to solve the FAI problem is either pointless or hopeless, depending on whether one assumes a universe that exhibits moral realism or not. In the former case a transhuman AI would independently reason itself into the proper goal system, and assuming the latter, designing a friendly AI would be futile to begin with since morals cannot be reasoned about.[15]

James Barrat, author of Our Final Invention, suggested that "a public-private partnership has to be created to bring A.I.-makers together to share ideas about security, something like the International Atomic Energy Agency, but in partnership with corporations." He urges AI researchers to convene a meeting similar to the Asilomar Conference on Recombinant DNA, which discussed risks of biotechnology.[14]

John McGinnis encourages governments to accelerate friendly AI research. Because the goalposts of friendly AI aren't necessarily clear, he suggests a model more like the National Institutes of Health, where "Peer review panels of computer and cognitive scientists would sift through projects and choose those that are designed both to advance AI and assure that such advances would be accompanied by appropriate safeguards." McGinnis feels that peer review is better "than regulation to address technical issues that are not possible to capture through bureaucratic mandates". McGinnis notes that his proposal stands in contrast to that of the Machine Intelligence Research Institute, which generally aims to avoid government involvement in friendly AI.[16]

According to Gary Marcus, the annual amount of money being spent on developing machine morality is tiny.[17]

Some critics believe that both human-level AI and superintelligence are unlikely, and that therefore friendly AI is unlikely. Writing in The Guardian, Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty, and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence.[18]

Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful.[19] Other critics question whether it is possible for an artificial intelligence to be friendly. Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible to ever guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power. They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes, but certainty and consensus on how one values the different outcomes."[20]

Read the original here:

Friendly artificial intelligence - Wikipedia

Why won’t everyone listen to Elon Musk about the robot apocalypse? – Ladders

When Elon Musk is not the billionaire CEO running three companies, he has a side hustle as our greatest living prophet of the upcoming war between humans and machines.

In his latest public testimony about the dark future that awaits us all, Musk urged the United Nations to ban artificially intelligent killer robots. And he and other fellow prophets emphasized that we have no time. No time.

In an open letter to the U.N., Musk, along with 115 other experts in robotics, warned of a grim future in which artificial superintelligence leads to lethal autonomous weapons that would bring "the third revolution in warfare."

And to add urgency to the matter, the letter said that this future wasn't distant science fiction; it was a near and present danger.

"Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," the letter states. "We do not have long to act. Once this Pandora's box is opened, it will be hard to close."

Although lethal autonomous weapons are not mainstream yet, they do already exist. Samsung's SGR-A1 sentry robot is reportedly used by the South Korean army to monitor the Korean Demilitarized Zone with guns capable of autonomous firing. Taranis, an unmanned combat air vehicle, is being developed by the U.K. So autonomous weapons are already here. It remains to be seen, however, if this brings a new world war.

This is not the first time Musk has sounded the alarm on machines taking over. Here's a look at all the ways Musk has tried to convince humanity of its impending doom.

And he hasn't been mild in his warnings. If you're going to get people to pay attention to your robot visions, you need to raise the stakes.

That's what Musk did when he told Massachusetts Institute of Technology students in 2014 that artificial intelligence was "our biggest existential threat." And in case he didn't get the students' attention there, Musk reached for a metaphor of good and evil.

"With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like, yeah, he's sure he can control the demon. Didn't work out," Musk said.

So what can humans use as prayer beads against these robotic demons? Musk thinks that we'll need to use artificial intelligence to beat artificial intelligence. In a Vanity Fair profile of his artificial intelligence ambitions, Musk said that the human-A.I. collective could beat rogue algorithms that could arise with artificial superintelligence.

If you're not sold on A.I. being an existential threat to humanity, are you more alarmed when you consider a world where our robot overlords treat us like pets? This is an argument Musk tried in 2017.

When Musk founded his brain-implant company, Neuralink, in 2017, he needed to explain why developing a connection between brains and machines was necessary. As part of this press tour, he talked with Wait But Why about the background behind his latest company and the existential risk we're facing with artificial intelligence.

"We're going to have the choice of either being left behind and being effectively useless, or like a pet, you know, like a house cat or something, or eventually figuring out some way to be symbiotic and merge with AI," Musk told the blog. "A house cat's a good outcome, by the way."

Musk meant that being housecats for the demonic robot overlords is the best possible outcome, of course. But it's also worth considering that housecats are not only well-treated and largely adored, but also, by acclamation, came to dominate the internet. Humanity could do worse.

Our impending irrelevance means we'll have to become cyborgs to stay useful to this world, according to Musk. While computers can communicate at a trillion bits per second, Musk has said, we flawed humans are built with much slower bandwidth: our puny brains and sluggish fingers process information far more slowly. We will need to evolve past this to stay useful.

To do this, humans and robots will need to form a merger so that we can achieve "a symbiosis between human and machine intelligence," one that "maybe solves the control problem and the usefulness problem," Musk told an audience at the World Government Summit in Dubai in 2017, according to CNBC.

In other words, one day in the future, humans will have to join forces with artificial intelligence to keep up with the times, or become the collared felines Musk fears we'll become without intervention.

What will it take for our robot prophet to be heard, so that his proclamations don't keep falling on deaf ears?

Although Musk may seem like a minority voice now, his ideas about the threat of artificial intelligence are becoming more mainstream. For instance, his suggestion that we are living right now in a computer simulation staged by future scientists has been widely adopted.

Although Facebook CEO Mark Zuckerberg disagrees with Musk's dark future, more tech leaders are siding with Musk when it comes to killer robots. Alphabet's artificial intelligence expert, Mustafa Suleyman, was one of the U.N. open letter's signatories. In the past, Bill Gates has said that the intelligence in A.I. is strong enough to be a concern.

So we can laugh now at these outlandish science fiction worlds where we're robots' domestic pets. But Musk has been sounding the alarm for years and he has held firm to his beliefs. What may be one man's outlier theory now may become a reality in the future. If nothing else, he's making sure you listen.

Monica Torres is a reporter for Ladders. She is based in New York City and can be reached at mtorres@theladders.com.

Visit link:

Why won't everyone listen to Elon Musk about the robot apocalypse? - Ladders

Being human in the age of artificial intelligence – Science Weekly podcast – The Guardian

In 2014, a new research and outreach organisation was born in Boston. Calling itself The Future of Life Institute, its founders included Jaan Tallinn - who helped create Skype - and a physicist from Massachusetts Institute of Technology. That physicist was Professor Max Tegmark.

With a mission to help safeguard life and develop optimistic visions of the future, the Institute has focused largely on Artificial Intelligence (AI). Of particular concern is the potential for AI to leapfrog humans and achieve so-called superintelligence, something discussed in depth in Tegmark's latest book, Life 3.0. This week Ian Sample asks the physicist and author what would happen if we did manage to create superintelligent AI. Do we even know how to build human-level AI? And with no sign of computers outsmarting us yet, why talk about it now?

The rest is here:

Being human in the age of artificial intelligence - Science Weekly podcast - The Guardian

Infographic: Visualizing the Massive $15.7 Trillion Impact of AI – Visual Capitalist (blog)

on August 21, 2017 at 12:24 pm

For the people most immersed in the tech sector, it's hard to think of a more controversial topic than the ultimate impact of artificial intelligence (AI) on society.

By eventually empowering machines with a level of superintelligence, there are many different possible outcomes, ranging from Kurzweil's technological singularity to the more dire predictions popularized by Elon Musk.

Despite this wide gap in potential outcomes, most technologists do agree on one thing: AI will have a profound impact on society and the way we do business.

Today's infographic comes from the Extraordinary Future 2017, a new conference in Vancouver, BC that focuses on emerging technologies such as AI, autonomous vehicles, fintech, and blockchain tech.

In the infographic below, we look at recent projections from PwC and Accenture regarding AI's economic impact, as well as the industries and countries that will be the most profoundly affected.

According to PwC's most recent report on the topic, the impact of artificial intelligence (AI) will be transformative.

By 2030, AI is expected to provide a $15.7 trillion boost to GDP worldwide, the equivalent of adding 13 new Australias to the global economy.

Where will AI's impact be most pronounced?

According to PwC, China will be the region receiving the most economic benefit ($7.0 trillion) from AI being integrated into various industries:

Further, the global growth from AI can be divided into two major areas, according to PwC: labor productivity improvements ($6.6 trillion) and increased consumer demand ($9.1 trillion).
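These figures can be sanity-checked against the headline number. The Australia comparison below assumes an Australian GDP of roughly $1.2 trillion, a figure I am supplying for illustration rather than one taken from the PwC report:

```python
# PwC's two components of AI-driven global growth, in $ trillions.
labor_productivity = 6.6
consumer_demand = 9.1

total = labor_productivity + consumer_demand
print(round(total, 1))  # 15.7 -- matches the headline figure

# "13 new Australias": assumes Australian GDP of roughly $1.2 trillion.
australia_gdp = 1.2
print(round(total / australia_gdp))  # 13
```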

But how will AI impact industries on an individual level?

For that, we turn to Accenture's recent report, which breaks down a similar projection of $14 trillion of gross value added (GVA) by 2035, with estimates for AI's impact on specific industries.

Manufacturing will see nearly $4 trillion in growth from AI alone, and many other industries will undergo significant changes as well.

To learn more about other tech that will have a big impact on our future, see a Timeline of Future Technology.

Jeff Desjardins is a founder and editor of Visual Capitalist, a media website that creates and curates visual content on investing and business.

See original here:

Infographic: Visualizing the Massive $15.7 Trillion Impact of AI - Visual Capitalist (blog)