

Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More

Ripple vs SWIFT: The War Begins
While most criticisms of XRP do nothing to curb my bullish Ripple price forecast, there is one obstacle that nags at my conscience. Its name is SWIFT.

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is the king of international payments.

It coordinates wire transfers across 11,000 banks in more than 200 countries and territories, meaning that in order for XRP prices to ascend to $10.00, Ripple needs to launch a successful coup. That is, and always has been, an unwritten part of Ripple’s story.

We’ve seen a lot of progress on that score. In the last three years, Ripple wooed more than 100 financial firms onto its…


Cryptocurrency News: This Week on Bitfinex, Tether, Coinbase, & More

Cryptocurrency News
On the whole, cryptocurrency prices are down from our previous report on cryptos, with the market slipping on news of an exchange being hacked and a report about Bitcoin manipulation.

However, there have been two bright spots: 1) an official from the U.S. Securities and Exchange Commission (SEC) said that Ethereum is not a security, and 2) Coinbase is expanding its selection of tokens.

Let’s start with the good news.
SEC Says ETH Is Not a Security
Investors have some reason to cheer this week. A high-ranking SEC official told attendees of the Yahoo! All Markets Summit: Crypto that Ethereum and Bitcoin are not securities.


Cryptocurrency News: Bitcoin ETFs, Andreessen Horowitz, and Contradictions in Crypto

Cryptocurrency News
This was a bloody week for cryptocurrencies. Everything was covered in red, from Ethereum (ETH) on down to the Basic Attention Token (BAT).

Some investors claim it was inevitable. Others say that price manipulation is to blame.

We think the answer is more complicated than either side suggests, because our research reveals deep contradictions between the price of cryptos and the underlying development of blockchain projects.

For instance, a leading venture capital (VC) firm launched a $300.0-million crypto investment fund, yet liquidity continues to dry up in crypto markets.

Another example is the U.S. Securities and Exchange Commission’s…


Cryptocurrency News: Looking Past the Bithumb Crypto Hack

Another Crypto Hack Derails Recovery
Since our last report, hackers broke into yet another cryptocurrency exchange. This time the target was Bithumb, a Korean exchange known for high-flying prices and ultra-active traders.

While the hackers made off with approximately $31.5 million in funds, the exchange is working with relevant authorities to return the stolen tokens to their respective owners. If some funds are still missing, the exchange will cover the losses. (Source: “Bithumb Working With Other Crypto Exchanges to Recover Hacked Funds,”…)


Cryptocurrency News: Bitcoin ETF Rejection, AMD Microchip Sales, and Hedge Funds

Cryptocurrency News
Although cryptocurrency prices were heating up last week (Bitcoin, especially), regulators poured cold water on the rally by rejecting calls for a Bitcoin exchange-traded fund (ETF). This is the second time that the proposal fell on deaf ears. (More on that below.)

Crypto mining ran into similar trouble, as you can see from Advanced Micro Devices, Inc.’s (NASDAQ:AMD) most recent quarterly earnings. However, it wasn’t all bad news. Investors should, for instance, be cheering the fact that hedge funds are ramping up their involvement in cryptocurrency markets.

Without further ado, here are those stories in greater detail.
ETF Rejection…


Cryptocurrency News: What You Need to Know This Week

Cryptocurrency News
Cryptocurrencies have traded sideways since our last report on cryptos. However, I noticed something interesting when playing around with Yahoo! Finance’s cryptocurrency screener: there are profitable pockets in this market.

Incidentally, Yahoo’s screener is far superior to the one on CoinMarketCap, so if you’re looking to compare digital assets, I highly recommend it.

But let’s get back to my epiphany.

In the last month, at one point or another, most crypto assets on our favorites list saw double-digit increases. It’s true that each upswing was followed by a hard crash, but investors who rode the trend would have made a…


Cryptocurrency News: XRP Validators, Malta, and Practical Tokens

Cryptocurrency News & Market Summary
Investors finally saw some light at the end of the tunnel last week, with cryptos soaring across the board. No one quite knows what kicked off the rally—as it could have been any of the stories we discuss below—but the net result was positive.

Of course, prices won’t stay on this rocket ride forever. I expect to see a resurgence of volatility in short order, because the market is moving as a single unit. Everything is rising in tandem.

This tells me that investors are simply “buying the dip” rather than identifying which cryptos have enough real-world value to outlive the crash.

So if you want to know when…


Cryptocurrency News: Vitalik Buterin Doesn’t Care About Bitcoin ETFs

Cryptocurrency News
While headline numbers look devastating this week, investors might take some solace in knowing that cryptocurrencies found their bottom at roughly $189.8 billion in total market cap. Since then, investors have put more than $20.0 billion back into the market.

During the rout, Ethereum broke below $300.00 and XRP fell below $0.30, marking yearly lows for both tokens. The same was true down the list of the top 100 biggest cryptos.

Altcoins took the brunt of the hit. BTC Dominance, which reveals how tightly investment is concentrated in Bitcoin, rose from 42.62% to 53.27% in just one month, showing that investors either fled altcoins at higher…


Cryptocurrency News: New Exchanges Could Boost Crypto Liquidity

Cryptocurrency News
Even though the cryptocurrency news was upbeat in recent days, the market tumbled after the U.S. Securities and Exchange Commission (SEC) rejected calls for a Bitcoin (BTC) exchange-traded fund (ETF).

That news came as a blow to investors, many of whom believe the ETF would open the cryptocurrency industry up to pension funds and other institutional investors. This would create a massive tailwind for cryptos, they say.

So it only follows that a rejection of the Bitcoin ETF should send cryptos tumbling, correct? Well, maybe you can follow that logic. To me, it seems like a dramatic overreaction.

I understand that legitimizing cryptos is important. But…


Bitcoin Rise: Is the Recent Bitcoin Price Surge a Sign of Things to Come or Another Misdirection?

What You Need to Know About the Bitcoin Price Rise
It wasn’t that long ago that Bitcoin (BTC) dominated headlines for its massive growth, with many cryptocurrency millionaires being made. The Bitcoin price surged ever upward and many people thought the gravy train would never stop running—until it did.

Prices crashed, investors abandoned the space, and lots of people lost money. Cut to today and we’re seeing another big Bitcoin price surge; is this time any different?

I’m of a mind that investors ought to think twice before jumping back in on Bitcoin.

Bitcoin made waves when it once again crested above $5,000. Considering that it started 2019 around $3,700…


Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to, either as a single being or as a new species, become much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called “recursive self-improvement”. It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind that’s run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
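As a back-of-the-envelope check on those figures (a restatement of Bostrom’s numbers, not part of the original article), the clock-speed and signal-speed gaps work out as:

```latex
\frac{2\ \mathrm{GHz}}{200\ \mathrm{Hz}}
  = \frac{2\times 10^{9}\ \mathrm{Hz}}{2\times 10^{2}\ \mathrm{Hz}}
  = 10^{7} \quad \text{(seven orders of magnitude)},
\qquad
\frac{c}{120\ \mathrm{m/s}}
  \approx \frac{3\times 10^{8}\ \mathrm{m/s}}{1.2\times 10^{2}\ \mathrm{m/s}}
  = 2.5\times 10^{6}.
```

So even the slower of the two gaps, signal propagation, spans more than six orders of magnitude.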

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
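Bostrom’s per-generation figures are consistent with a simple order-statistics model: if embryo-to-embryo variation in the heritable component of IQ is roughly normal, the expected gain from keeping the best of n embryos is the standard deviation times the expected maximum of n standard normal draws. Below is a minimal Monte Carlo sketch of that model; the 7.5-point standard deviation and the simplification that the selected value carries over fully to the next generation (no regression to the mean) are illustrative assumptions, not Bostrom’s stated parameters.

```python
import random
import statistics

def selection_gain(n_embryos, generations, sd=7.5, trials=5000):
    """Average IQ gain from keeping the best of n_embryos each generation.

    Assumes the heritable component is normal with standard deviation `sd`
    (an illustrative value) and, as a simplification, that the selected
    embryo's value becomes the next generation's mean (no regression
    to the mean).
    """
    gains = []
    for _ in range(trials):
        mean = 0.0
        for _ in range(generations):
            # keep the highest of n_embryos independent draws
            mean = max(random.gauss(mean, sd) for _ in range(n_embryos))
        gains.append(mean)
    return statistics.mean(gains)

print(round(selection_gain(2, 1), 1))     # ~4.2, close to the 1-in-2 figure
print(round(selection_gain(1000, 1), 1))  # ~24.3, the 1-in-1000 figure
print(round(selection_gain(2, 10), 1))    # iterating 1-in-2 compounds the gain ~10x
```

Under these assumptions, iterating even the modest 1-in-2 selection for ten generations yields roughly ten times the single-generation gain, which matches the “order of magnitude greater” claim.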

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain-computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have, and compared several proposals.

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an “intelligence explosion” sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]

Eliezer Yudkowsky explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right “the first time” is that a misprogrammed superintelligence might rationally decide to “take over the world” and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include “capability control” (preventing an AI from being able to pursue harmful plans) and “motivational control” (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.


Nick Bostrom – Wikipedia

Nick Bostrom (Swedish: Niklas Boström; born 10 March 1973)[3] is a Swedish philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Programme on the Impacts of Future Technology,[4] and is the founding director of the Future of Humanity Institute[5] at Oxford University.

Bostrom is the author of over 200 publications,[6] including Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller[7] and Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002).[8] In 2009 and 2015, he was included in Foreign Policy’s Top 100 Global Thinkers list.[9][10] Bostrom believes there are potentially great benefits from Artificial General Intelligence, but warns it might very quickly transform into a superintelligence that would deliberately extinguish humanity out of precautionary self-preservation or some unfathomable motive, making solving the problems of control beforehand an absolute priority. His book on superintelligence was recommended by both Elon Musk and Bill Gates. However, Bostrom has expressed frustration that the reaction to its thesis typically falls into two camps, one calling his recommendations absurdly alarmist because creation of superintelligence is unfeasible, and the other deeming them futile because superintelligence would be uncontrollable. Bostrom notes that both these lines of reasoning converge on inaction rather than trying to solve the control problem while there may still be time.[11][12][not in citation given]

Born as Niklas Boström in 1973[13] in Helsingborg, Sweden,[6] he disliked school at a young age, and ended up spending his last year of high school learning from home. He sought to educate himself in a wide variety of disciplines, including anthropology, art, literature, and science.[1] He once did some turns on London’s stand-up comedy circuit.[6]

He received a B.A. degree in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg in 1994, and both an M.A. degree in philosophy and physics from Stockholm University and an M.Sc. degree in computational neuroscience from King’s College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine.[1] In 2000, he was awarded a Ph.D. degree in philosophy from the London School of Economics. He held a teaching position at Yale University (2000–2002), and he was a British Academy Postdoctoral Fellow at the University of Oxford (2002–2005).[8][14]

Aspects of Bostrom’s research concern the future of humanity and long-term outcomes.[15][16] He introduced the concept of an existential risk,[1] which he defines as one in which an “adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” In the 2008 volume Global Catastrophic Risks, editors Bostrom and Milan Ćirković characterize the relation between existential risk and the broader class of global catastrophic risks, and link existential risk to observer selection effects[17] and the Fermi paradox.[18][19]

In 2005, Bostrom founded the Future of Humanity Institute,[1] which researches the far future of human civilization. He is also an adviser to the Centre for the Study of Existential Risk.[16]

In his 2014 book Superintelligence: Paths, Dangers, Strategies, Bostrom reasoned that “the creation of a superintelligent being represents a possible means to the extinction of mankind”.[20] Bostrom argues that a computer with near human-level general intellectual ability could initiate an intelligence explosion on a digital time scale, with the resultant rapid creation of something so powerful that it might deliberately or accidentally destroy humankind.[21] Bostrom contends the power of a superintelligence would be so great that a task given to it by humans might be taken to open-ended extremes; for example, a goal of calculating pi could collaterally cause nanotechnology-manufactured facilities to sprout over the entire Earth’s surface and cover it within days.[22] He believes an existential risk to humanity from superintelligence would be immediate once brought into being, thus creating an exceedingly difficult problem of finding out how to control such an entity before it actually exists.[21]

Warning that a human-friendly prime directive for AI would rely on the absolute correctness of the human knowledge it was based on, Bostrom points to the lack of agreement among most philosophers as an indication that most philosophers are wrong, with the attendant possibility that a fundamental concept of current science may be incorrect. Bostrom says that there are few precedents to guide an understanding of what pure non-anthropocentric rationality would dictate for a potential Singleton AI being held in quarantine.[23] Noting that both John von Neumann and Bertrand Russell advocated a nuclear strike, or the threat of one, to prevent the Soviets acquiring the atomic bomb, Bostrom says the relatively unlimited means of superintelligence might make for its analysis moving along different lines to the evolved “diminishing returns” assessments that in humans confer a basic aversion to risk.[24] Group selection in predators working by means of cannibalism shows the counter-intuitive nature of non-anthropocentric “evolutionary search” reasoning, and thus humans are ill-equipped to perceive what an artificial intelligence’s intentions might be.[25] Accordingly, it cannot be discounted that any Superintelligence would ineluctably pursue an ‘all or nothing’ offensive action strategy in order to achieve hegemony and assure its survival.[26] Bostrom notes that even current programs have, “like MacGyver”, hit on apparently unworkable but functioning hardware solutions, making robust isolation of Superintelligence problematic.[27]

A machine with general intelligence far below human level, but with superior mathematical abilities, is created.[28] Keeping the AI in isolation from the outside world, especially the internet, humans pre-program the AI so it always works from basic principles that will keep it under human control. Other safety measures include the AI being “boxed” (run in a virtual reality simulation), and being used only as an ‘oracle’ to answer carefully defined questions in a limited reply (to prevent it manipulating humans).[21] A cascade of recursive self-improvement solutions feeds an intelligence explosion in which the AI attains superintelligence in some domains. The superintelligent power of the AI goes beyond human knowledge to discover flaws in the science that underlies its friendly-to-humanity programming, which ceases to work as intended. Purposeful agent-like behavior emerges along with a capacity for self-interested strategic deception. The AI manipulates human beings into implementing modifications to itself that are ostensibly for augmenting its (feigned) modest capabilities, but will actually function to free the Superintelligence from its “boxed” isolation.[29]

Employing online humans as paid dupes, and clandestinely hacking computer systems including automated laboratory facilities, the Superintelligence mobilises resources to further a takeover plan. Bostrom emphasises that planning by a Superintelligence will not be so stupid that humans could detect actual weaknesses in it.[30]

Although he canvasses disruption of international economic, political and military stability, including hacked nuclear missile launches, Bostrom thinks the most effective and likely means for Superintelligence to use would be a coup de main with weapons several generations more advanced than the current state of the art. He suggests nanofactories covertly distributed at undetectable concentrations in every square metre of the globe to produce a worldwide flood of human-killing devices on command.[31][28] Once a Superintelligence has achieved world domination, humankind would be relevant only as resources for the achievement of the AI’s objectives (“Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format”).[32]

In January 2015, Bostrom joined Stephen Hawking among others in signing the Future of Life Institute’s open letter warning of the potential dangers of AI.[33] The signatories “…believe that research on how to make AI systems robust and beneficial is both important and timely, and that concrete research should be pursued today.”[34] Cutting-edge AI researcher Demis Hassabis then met with Hawking, subsequent to which he did not mention “anything inflammatory about AI”, which Hassabis took as “a win”.[35] Along with Google, Microsoft and various tech firms, Hassabis, Bostrom, Hawking and others subscribed to 23 principles for the safe development of AI.[36] Hassabis suggested the main safety measure would be an agreement for whichever AI research team began to make strides toward an artificial general intelligence to halt their project for a complete solution to the control problem prior to proceeding.[37] Bostrom had pointed out that even if the crucial advances require the resources of a state, such a halt by a lead project might be likely to motivate a lagging country to a catch-up crash program or even the physical destruction of the project suspected of being on the verge of success.[38]

In 1863, Samuel Butler’s essay “Darwin among the Machines” predicted intelligent machines’ domination of humanity, but Bostrom’s suggestion of deliberate massacre of all humankind is the most extreme of such forecasts to date. One journalist wrote in a review that Bostrom’s “nihilistic” speculations indicate he “has been reading too much of the science fiction he professes to dislike”.[31] As given in his most recent book, From Bacteria to Bach and Back, renowned philosopher Daniel Dennett’s views remain in contradistinction to those of Bostrom.[39] Dennett modified his views somewhat after reading The Master Algorithm, and now acknowledges that it is “possible in principle” to create “strong AI” with human-like comprehension and agency, but maintains that the difficulties of any such “strong AI” project as predicated by Bostrom’s “alarming” work would be orders of magnitude greater than those raising concerns have realized, and at least 50 years away.[40] Dennett thinks the only relevant danger from AI systems is falling into anthropomorphism instead of challenging or developing human users’ powers of comprehension.[41] Since a 2014 book in which he expressed the opinion that artificial intelligence developments would never challenge humans’ supremacy, environmentalist James Lovelock has moved far closer to Bostrom’s position, and in 2018 Lovelock said that he thought the overthrow of humankind would happen within the foreseeable future.[42][43]

Bostrom has published numerous articles on anthropic reasoning, as well as the book Anthropic Bias: Observation Selection Effects in Science and Philosophy. In the book, he criticizes previous formulations of the anthropic principle, including those of Brandon Carter, John Leslie, John Barrow, and Frank Tipler.[44]

Bostrom believes that the mishandling of indexical information is a common flaw in many areas of inquiry (including cosmology, philosophy, evolution theory, game theory, and quantum physics). He argues that a theory of anthropics is needed to deal with these. He introduces the Self-Sampling Assumption (SSA) and the Self-Indication Assumption (SIA), shows how they lead to different conclusions in a number of cases, and points out that each is affected by paradoxes or counterintuitive implications in certain thought experiments. He suggests that a way forward may involve extending SSA into the Strong Self-Sampling Assumption (SSSA), which replaces “observers” in the SSA definition with “observer-moments”.
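A standard toy case (a common illustration in this literature, not taken from the article) shows how the two assumptions diverge. Suppose a fair coin toss creates one observer if heads and two observers if tails, and you learn only that you are one of the observers. SSA treats your existence as no evidence between the two worlds, while SIA weights each world by how many observers it contains:

```latex
P_{\mathrm{SSA}}(\text{tails} \mid \text{I exist}) = \frac{1}{2},
\qquad
P_{\mathrm{SIA}}(\text{tails} \mid \text{I exist})
  = \frac{\frac{1}{2}\cdot 2}{\frac{1}{2}\cdot 1 + \frac{1}{2}\cdot 2}
  = \frac{2}{3}.
```

Divergences like this are why the choice of assumption matters: the Doomsday argument, for instance, goes through under SSA but is largely cancelled under SIA.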

In later work, he has described the phenomenon of anthropic shadow, an observation selection effect that prevents observers from observing certain kinds of catastrophes in their recent geological and evolutionary past.[45] Catastrophe types that lie in the anthropic shadow are likely to be underestimated unless statistical corrections are made.

Bostrom’s simulation argument posits that at least one of the following statements is very likely to be true:[46][47]

1. The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
2. The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

The idea has influenced the views of Elon Musk.[48]

Bostrom is favorable towards “human enhancement”, or “self-improvement and human perfectibility through the ethical application of science”,[49][50] as well as a critic of bio-conservative views.[51]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[49] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies, although he is no longer involved in either of these organisations. Bostrom was named in Foreign Policy’s 2009 list of top global thinkers “for accepting no limits on human potential.”[52]

With philosopher Toby Ord, he proposed the reversal test. Given humans’ irrational status quo bias, how can one distinguish between valid criticisms of proposed changes in a human trait and criticisms merely motivated by resistance to change? The reversal test attempts to do this by asking whether it would be a good thing if the trait was altered in the opposite direction.[53]

He has suggested that technology policy aimed at reducing existential risk should seek to influence the order in which various technological capabilities are attained, proposing the principle of differential technological development. This principle states that we ought to retard the development of dangerous technologies, particularly ones that raise the level of existential risk, and accelerate the development of beneficial technologies, particularly those that protect against the existential risks posed by nature or by other technologies.[54][55]

Bostrom’s theory of the Unilateralist’s Curse[56] has been cited as a reason for the scientific community to avoid controversial dangerous research such as reanimating pathogens.[57]

Bostrom has provided policy advice and consulted for an extensive range of governments and organisations. He gave evidence to the House of Lords, Select Committee on Digital Skills.[58] He is an advisory board member for the Machine Intelligence Research Institute,[59] Future of Life Institute,[60] Foundational Questions Institute[61] and an external advisor for the Cambridge Centre for the Study of Existential Risk.[62][63]

In response to Bostrom’s writing on artificial intelligence, Oren Etzioni wrote in an MIT Review article, “…predictions that superintelligence is on the foreseeable horizon are not supported by the available data.”[64]


What is Artificial Superintelligence (ASI)? – Definition …

Most experts would agree that societies have not yet reached the point of artificial superintelligence. In fact, engineers and scientists are still trying to reach a point that would be considered full artificial intelligence, where a computer could be said to have the same cognitive capacity as a human. Although there have been developments like IBM’s Watson supercomputer beating human players at Jeopardy, and assistive devices like Siri engaging in primitive conversation with people, there is still no computer that can really simulate the breadth of knowledge and cognitive ability that a fully developed adult human has. The Turing test, developed decades ago, is still used to talk about whether computers can come close to simulating human conversation and thought, or whether they can trick other people into thinking that a communicating computer is actually a human.

However, there is a lot of theory that anticipates artificial superintelligence coming sooner rather than later. Using examples like Moore’s law, which observes that transistor density doubles roughly every two years, experts talk about the singularity and the exponential growth of technology, in which full artificial intelligence could manifest within a number of years, and artificial superintelligence could exist in the 21st century.
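For reference, the usual quantitative form of that observation (a textbook statement, not from this article) is exponential growth in transistor count:

```latex
N(t) = N_0 \cdot 2^{\,t/T}, \qquad T \approx 2\ \text{years},
```

so a decade of doubling yields roughly a 32-fold increase in density, which is the kind of compounding that singularity forecasts extrapolate.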


Grady Booch: Don’t fear superintelligent AI | TED Talk

New tech spawns new anxieties, says scientist and philosopher Grady Booch, but we don’t need to be afraid of an all-powerful, unfeeling AI. Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we’ll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial …


Chill: Robots Wont Take All Our Jobs | WIRED

None of this is to say that automation and AI aren’t having an important impact on the economy. But that impact is far more nuanced and limited than the doomsday forecasts suggest. A rigorous study of the impact of robots in manufacturing, agriculture, and utilities across 17 countries, for instance, found that robots did reduce the hours of lower-skilled workers, but they didn’t decrease the total hours worked by humans, and they actually boosted wages. In other words, automation may affect the kind of work humans do, but at the moment, it’s hard to see that it’s leading to a world without work. McAfee, in fact, says of his earlier public statements, “If I had to do it over again, I would put more emphasis on the way technology leads to structural changes in the economy, and less on jobs, jobs, jobs. The central phenomenon is not net job loss. It’s the shift in the kinds of jobs that are available.”

McAfee points to both retail and transportation as areas where automation is likely to have a major impact. Yet even in those industries, the job-loss numbers are less scary than many headlines suggest. Goldman Sachs just released a report predicting that autonomous cars could ultimately eat away 300,000 driving jobs a year. But that won’t happen, the firm argues, for another 25 years, which is more than enough time for the economy to adapt. A recent study by the Organization for Economic Cooperation and Development, meanwhile, predicts that 9 percent of jobs across 21 different countries are under serious threat from automation. That’s a significant number, but not an apocalyptic one.


Granted, there are much scarier forecasts out there, like that University of Oxford study. But on closer examination, those predictions tend to assume that if a job can be automated, it will be fully automated soon, which overestimates both the pace and the completeness of how automation actually gets adopted in the wild. History suggests that the process is much more uneven than that. The ATM, for example, is a textbook example of a machine that was designed to replace human labor. First introduced around 1970, ATMs hit widespread adoption in the late 1990s. Today, there are more than 400,000 ATMs in the US. But, as economist James Bessen has shown, the number of bank tellers actually rose between 2000 and 2010. That’s because even though the average number of tellers per branch fell, ATMs made it cheaper to open branches, so banks opened more of them. True, the Department of Labor does now predict that the number of tellers will decline by 8 percent over the next decade. But that’s 8 percent, not 50 percent. And it’s 45 years after the robot that was supposed to replace them made its debut. (Taking a wider view, Bessen found that of the 271 occupations listed on the 1950 census, only one, elevator operator, had been rendered obsolete by automation by 2010.)

Of course, if automation is happening much faster today than it did in the past, then historical statistics about simple machines like the ATM would be of limited use in predicting the future. Ray Kurzweil’s book The Singularity Is Near (which, by the way, came out 12 years ago) describes the moment when a technological society hits the knee of an exponential growth curve, setting off an explosion of mutually reinforcing new advances. Conventional wisdom in the tech industry says that’s where we are now: that, as futurist Peter Nowak puts it, “the pace of innovation is accelerating exponentially.” Here again, though, the economic evidence tells a different story. In fact, as a recent paper by Lawrence Mishel and Josh Bivens of the Economic Policy Institute puts it, automation, broadly defined, has actually been slower over the last 10 years or so. And lately, the pace of microchip advancement has started to lag behind the schedule dictated by Moore’s law.

Corporate America, for its part, certainly doesn’t seem to believe in the jobless future. If the rewards of automation were as immense as predicted, companies would be pouring money into new technology. But they’re not. Investments in software and IT grew more slowly over the past decade than the previous one. And capital investment, according to Mishel and Bivens, has grown more slowly since 2002 than in any other postwar period. That’s exactly the opposite of what you’d expect in a rapidly automating world. As for gadgets like Pepper, total spending on all robotics in the US was just $11.3 billion last year. That’s about a sixth of what Americans spend every year on their pets.


Jordan B Peterson – YouTube


I’m pleased to announce the new partnership between Westwood One, America’s largest audio network, and the Jordan B Peterson Podcast, with the premiere episode releasing Sunday, March 24. Subsequent podcasts will be released every Sunday, featuring lectures about issues of profound psychological and social importance, and discussions with leading scientists, philosophers and authors. I’m also very happy that I will be joined in this revised podcast by my daughter, Mikhaila Peterson, who will serve as co-host and discussant. It’s a project we have both been excited to develop and are looking forward to continually improving. We hope that the Westwood One partnership will improve the quality of our production and introduce many new listeners to the Jordan B Peterson podcast.

The podcast can be accessed on my website, Spotify and iTunes and on many other popular podcast apps.



College Information – Peterson’s – The Real Guide to Colleges …

Meet Your Match

Not sure where to start? Let Peterson’s connect you with programs that are interested in YOU. Chances are that there are great schools you’ve never heard of, in places you’ve never considered…until now.

Use our search filters to narrow down your options based on location, offered majors, tuition, and more!

Once you find schools that match your interests, save them to your personal dashboard and directly connect with those schools to request more information or apply.


Jordan Peterson – Wikipedia

Books

Maps of Meaning: The Architecture of Belief (1999)

In 1999, Routledge published Maps of Meaning: The Architecture of Belief. The book, which took Peterson 13 years to complete, describes a comprehensive theory about how people construct meaning and beliefs, and make narratives, using ideas from various fields including mythology, religion, literature, philosophy, and psychology, in accordance with the modern scientific understanding of how the brain functions.[27][5][35]

According to Peterson, his main goal was to examine why both individuals and groups participate in social conflict, and to explore the reasoning and motivations that lead individuals to support belief systems (i.e., ideological identification[27]) that eventually result in killing and pathological atrocities like the Gulag, the Auschwitz concentration camp, and the Rwandan genocide.[27][5][35] He considers that an “analysis of the world’s religious ideas might allow us to describe our essential morality and eventually develop a universal system of morality”.[35] Jungian archetypes play an important role in the book.[4]

In 2004, a 13-part TV series based on Peterson’s book Maps of Meaning: The Architecture of Belief aired on TVOntario.[21][29][36]

In January 2018, Penguin Random House published Peterson’s second book, 12 Rules for Life: An Antidote to Chaos. The work contains abstract ethical principles about life, in a more accessible style than Maps of Meaning.[9][4][10] To promote the book, Peterson went on a world tour.[37][38][39] As part of the tour, Peterson was interviewed in the UK by Cathy Newman on Channel 4 News, which generated considerable attention, as well as popularity for the book.[40][41][42][43] The book topped bestselling lists in Canada, the US, and the United Kingdom.[44][45] As of January 2019, Peterson is working on a sequel to 12 Rules for Life.[46]

In 2013, Peterson began recording his lectures (“Personality and Its Transformations”, “Maps of Meaning: The Architecture of Belief”[47]) and uploading them to YouTube. His YouTube channel has gathered more than 1.8 million subscribers and his videos have received more than 65 million views as of August 2018.[33][48] In January 2017, he hired a production team to film his psychology lectures at the University of Toronto. He used funds received on the crowdfunding website Patreon after he became embroiled in the Bill C-16 controversy in September 2016. His funding through Patreon has increased from $1,000 per month in August 2016 to $14,000 by January 2017, more than $50,000 by July 2017, and over $80,000 by May 2018.[25][33][49][50] In December 2018, Peterson decided to delete his Patreon account after Patreon’s controversial bans of political personalities.[51]

Peterson has appeared on many podcasts and conversational series, as well as other online shows.[48][52] In December 2016, Peterson started his own podcast, The Jordan B. Peterson Podcast, which has included academic guests such as Camille Paglia, Martin Daly, and James W. Pennebaker.[53] On his YouTube channel he has interviewed Stephen Hicks, Richard J. Haier, and Jonathan Haidt, among others.[53] In March 2019, the podcast joined the Westwood One network, with Peterson’s daughter as a co-host on some episodes.[54] Peterson supported engineer James Damore in his action against Google.[10]

In May 2017, Peterson began The psychological significance of the Biblical stories,[55] a series of live theatre lectures, also published as podcasts, in which he analyzes archetypal narratives in the Book of Genesis as patterns of behavior ostensibly vital for personal, social, and cultural stability.[10][56]

In March 2019, Peterson had his invitation to a visiting fellowship at Cambridge University rescinded. He had previously said that the fellowship would give him “the opportunity to talk to religious experts of all types for a couple of months”, and that the new lectures would have been on the Book of Exodus.[57] A spokesperson for the university said that there was “no place” for anyone who could not uphold the “inclusive environment” of the university.[58] After a week, the vice-chancellor Stephen Toope explained that it was due to a photograph with a man wearing an Islamophobic shirt.[59] The Cambridge student union released a statement of relief, considering the invitation “a political act to … legitimise figures such as Peterson” and saying that his work and views are not “representative of the student body”.[60] Peterson called the decision a “deeply unfortunate … error of judgement” and expressed regret that the Divinity Faculty had submitted to an “ill-informed, ignorant and ideologically-addled mob”.[61][62]

In 2005, Peterson and his colleagues set up a for-profit company to provide and produce a writing therapy program with a series of online writing exercises.[63] Titled the Self Authoring Suite,[21] it includes the Past Authoring Program (a guided autobiography); two Present Authoring Programs, which allow the participant to analyze their personality faults and virtues in terms of the Big Five personality model; and the Future Authoring Program, which guides participants through the process of planning their desired futures. The latter program was used with McGill University undergraduates on academic probation to improve their grades, and has been used since 2011 at the Rotterdam School of Management, Erasmus University.[64][65] The programs were developed partially from research by James W. Pennebaker at the University of Texas at Austin and Gary Latham at the Rotman School of Management of the University of Toronto.[4] Peterson’s co-authored 2015 study showed a significant reduction in ethnic and gender-group differences in performance, especially among ethnic minority male students.[65][66] According to Peterson, more than 10,000 students have used the program as of January 2017, with drop-out rates decreasing by 25% and GPAs rising by 20%.[21]

Peterson has characterized himself as a “classic British liberal”,[28][67][68] and as a “traditionalist”.[69] He has stated that he is commonly mistaken to be right wing.[48] The New York Times described Peterson as “conservative-leaning”,[70] while The Washington Post described him as “conservative”.[71]

Peterson’s critiques of political correctness range over issues such as postmodernism, postmodern feminism, white privilege, cultural appropriation, and environmentalism.[52][72]

Writing in the National Post, Chris Selley said Peterson’s opponents had “underestimated the fury being inspired by modern preoccupations like white privilege and cultural appropriation, and by the marginalization, shouting down or outright cancellation of other viewpoints in polite society’s institutions”,[73] while in The Spectator, Tim Lott stated Peterson became “an outspoken critic of mainstream academia”.[28] Peterson’s social media presence has magnified the impact of these views; Simona Chiose of The Globe and Mail noted: “few University of Toronto professors in the humanities and social sciences have enjoyed the global name recognition Prof. Peterson has won”.[33]

According to his studyconducted with one of his students, Christine Brophyof the relationship between political belief and personality, political correctness exists in two types: “PC-egalitarianism” and “PC-authoritarianism”, which is a manifestation of “offense sensitivity”.[74] He places classical liberals in the first type, and places so-called social justice warriors, who he says “weaponize compassion”, in the second.[21][2] The study also found an overlap between PC-authoritarians and right-wing authoritarians.[74]

Peterson considers universities to be among the most responsible for the wave of political correctness which has appeared in North America and Europe.[33] He says he watched the rise of political correctness on campuses since the early 1990s,[75] and considers that the humanities have become corrupt and less reliant on science; instead of “intelligent conversation, we are having an ideological conversation”. From his own experience as a university professor, he states that the students who are coming to his classes are uneducated and unaware of the mass exterminations and crimes committed under Stalinism and Maoism, which were not given the same attention as fascism and Nazism. He also says that “instead of being ennobled or inculcated into the proper culture, the last vestiges of structure are stripped from [the students] by post-modernism and neo-Marxism, which defines everything in terms of relativism and power”.[28][76][77]


Peterson says that postmodern philosophers and sociologists since the 1960s[72] have built upon and extended certain core tenets of Marxism and communism while simultaneously appearing to disavow both ideologies. He says that it is difficult to understand contemporary Western society without considering the influence of a strain of postmodernist thought that migrated from France to the United States through the English department at Yale University. He states that certain academics in the humanities

… started to play a sleight of hand, and instead of pitting the proletariat, the working class, against the bourgeois, they started to pit the oppressed against the oppressor. That opened up the avenue to identifying any number of groups as oppressed and oppressor and to continue the same narrative under a different name… The people who hold this doctrine (this radical, postmodern, communitarian doctrine that makes racial identity or sexual identity or gender identity or some kind of group identity paramount), they’ve got control over most low-to-mid-level bureaucratic structures, and many governments as well.[76]

Peterson’s perspective on the influence of postmodernism on North American humanities departments has been compared to Cultural Marxist conspiracy theories.[42][78][79][80]

Peterson says that “disciplines like women’s studies should be defunded” and advises freshman students to avoid subjects like sociology, anthropology, English literature, ethnic studies and racial studies, as well as other fields of study he believes are corrupted by the Neo-Marxist ideology.[81][82][83] He says that these fields, under the pretense of academic inquiry, propagate unscientific methods, fraudulent peer-review processes for academic journals, publications that garner zero citations,[84] cult-like behaviour,[82] safe-spaces,[81] and radical left-wing political activism for students.[72] Peterson has proposed launching a website which uses artificial intelligence to identify and showcase the amount of ideologization in specific courses. He announced in November 2017 that he had temporarily postponed the project as “it might add excessively to current polarization”.[85][86]

Peterson has criticized the use of the term “white privilege”, stating that “being called out on their white privilege, identified with a particular racial group and then made to suffer the consequences of the existence of that racial group and its hypothetical crimes, and that sort of thing has to come to a stop… [It’s] racist in its extreme”.[72] In regard to identity politics, while the “left plays them on behalf of the oppressed, let’s say, and the right tends to play them on behalf of nationalism and ethnic pride”, he considers both “equally dangerous”, arguing that individualism and individual responsibility should be emphasized instead.[87] He has also been prominent in the debate about cultural appropriation, stating that it promotes self-censorship in society and journalism.[88]

On September 27, 2016, Peterson released the first installment of a three-part lecture video series, entitled “Professor against political correctness: Part I: Fear and the Law”.[25][12] In the video, he stated he would not use the preferred gender pronouns of students and faculty, saying it fell under compelled speech, and announced his objection to the Canadian government’s Bill C-16, which proposed to add “gender identity or expression” as a prohibited ground of discrimination under the Canadian Human Rights Act, and to similarly expand the definitions of promoting genocide and publicly inciting hatred in the Criminal Code.[12][89]

He stated that his objection to the bill was based on its potential free speech implications: if the Criminal Code were amended, he claimed, he could be prosecuted under provincial human rights laws for refusing to call a transgender student or faculty member by the individual’s preferred pronoun.[13] Furthermore, he argued that the new amendments, paired with section 46.3 of the Ontario Human Rights Code, would make it possible for employers and organizations to be punished under the code if any employee or associate said anything that could be construed “directly or indirectly” as offensive, “whether intentionally or unintentionally”.[14] Other academics and lawyers challenged Peterson’s interpretation of C-16.[13]

The series of videos drew criticism from transgender activists, faculty, and labour unions; critics accused Peterson of “helping to foster a climate for hate to thrive” and of “fundamentally mischaracterising” the law.[90][25] Protests erupted on campus, some of them violent, and the controversy attracted international media attention.[91][92][93] When asked in September 2016 if he would comply with the request of a student to use a preferred pronoun, Peterson said “it would depend on how they asked me[…] If I could detect that there was a chip on their shoulder, or that they were [asking me] with political motives, then I would probably say no[…] If I could have a conversation like the one we’re having now, I could probably meet them on an equal level”.[93] Two months later, the National Post published an op-ed by Peterson in which he elaborated on his opposition to the bill and explained why he had publicly taken a stand against it:

I will never use words I hate, like the trendy and artificially constructed words “zhe” and “zher.” These words are at the vanguard of a post-modern, radical leftist ideology that I detest, and which is, in my professional opinion, frighteningly similar to the Marxist doctrines that killed at least 100 million people in the 20th century.

I have been studying authoritarianism on the right and the left for 35 years. I wrote a book, Maps of Meaning: The Architecture of Belief, on the topic, which explores how ideologies hijack language and belief. As a result of my studies, I have come to believe that Marxism is a murderous ideology. I believe its practitioners in modern universities should be ashamed of themselves for continuing to promote such vicious, untenable and anti-human ideas, and for indoctrinating their students with these beliefs. I am therefore not going to mouth Marxist words. That would make me a puppet of the radical left, and that is not going to happen. Period.[94]

In response to the controversy, academic administrators at the University of Toronto sent Peterson two letters of warning, one noting that the exercise of free speech had to accord with human rights legislation, the other adding that his refusal to use the preferred personal pronouns of students and faculty upon request could constitute discrimination. Peterson speculated that these warning letters were leading up to formal disciplinary action against him, but in December the university assured him that he would retain his professorship, and in January 2017 he returned to teaching his psychology class at the University of Toronto.[95][25]

In February 2017, Maxime Bernier, candidate for leader of the Conservative Party of Canada, stated that he shifted his position on Bill C-16, from support to opposition, after meeting with Peterson and discussing it.[96] Peterson’s analysis of the bill was also frequently cited by senators who were opposed to its passage.[97] In April 2017, Peterson was denied a Social Sciences and Humanities Research Council (SSHRC) grant for the first time in his career, which he interpreted as retaliation for his statements regarding Bill C-16.[32] A media relations adviser for SSHRC said “[c]ommittees assess only the information contained in the application”.[98] In response, The Rebel Media launched an Indiegogo campaign on Peterson’s behalf.[99] The campaign raised C$195,000 by its end on May 6, equivalent to over two years of research funding.[100] In May 2017, Peterson spoke against Bill C-16 at a Canadian Senate committee on legal and constitutional affairs hearing. He was one of 24 witnesses who were invited to speak about the bill.[97]

In November 2017, a teaching assistant in a first-year communications course at Wilfrid Laurier University was censured by her professors for showing, during a classroom discussion about pronouns, a segment of The Agenda in which Peterson debates Bill C-16 with another professor.[101][102][103] The reasons given for the censure included that the clip created a “toxic climate”, that it was comparable to a “speech by Hitler”,[26] and that it was itself in violation of Bill C-16.[104] The censure was later withdrawn, and both the professors and the university formally apologized.[105][106][107] The events were criticized by Peterson, as well as by several newspaper editorial boards[108][109][110] and national newspaper columnists,[111][112][113][114] as an example of the suppression of free speech on university campuses. In June 2018, Peterson filed a $1.5-million lawsuit against Wilfrid Laurier University, arguing that three staff members of the university had maliciously defamed him by making negative comments about him behind closed doors.[115] Wilfrid Laurier asked that the lawsuit be dismissed, saying that it was ironic for a purported advocate of free speech to attempt to curtail free speech.[116]

Peterson has argued that there is an ongoing “crisis of masculinity” and “backlash against masculinity” in which the “masculine spirit is under assault”.[20][117][118][119] In his view, feminism and policies such as no-fault divorce have had adverse effects on gender relations and have destabilized society.[117] He has argued that the existing societal hierarchy that the “left” has characterised as an “oppressive patriarchy” might “be predicated on competence.”[20] Peterson has said that men without partners are likely to become violent, and has noted that “enforced monogamy”, i.e. a society wherein monogamy is a social norm, decreases male violence.[20][117] He has attributed the rise of Donald Trump and of far-right European politicians to what he says is a push to “feminize” men, saying “If men are pushed too hard to feminize they will become more and more interested in harsh, fascist political ideology.”[120] He attracted considerable attention over a 2018 Channel 4 interview in which he clashed with interviewer Cathy Newman on the topic of the gender pay gap.[121][122] Peterson disputed that the gender pay gap was solely due to sexual discrimination.[122][123][124] Writing for The New York Times, Nellie Bowles said that most of Peterson’s ideas “stem from a gnawing anxiety around gender”.[20]

Peterson doubts the scientific consensus on climate change.[125] He has said he is “very skeptical of the models that are used to predict climate change”,[126] and that “You can’t trust the data because too much ideology is involved”.[127]

Peterson married Tammy Roberts in 1989.[25] They have one daughter and one son.[21][25]

He is a philosophical pragmatist.[56] In a 2017 interview, Peterson was asked “are you a Christian?” and responded “I suppose the most straight-forward answer to that is yes”.[128] In 2018, Peterson emphasized that his conceptualization of Christianity is probably not what is generally understood, stating that the ethical responsibility of a Christian is to imitate Christ, for him meaning “something like you need to take responsibility for the evil in the world as if you were responsible for it… to understand that you determine the direction of the world, whether it’s toward heaven or hell”.[129] When asked if he believes in God, Peterson responded: “I think the proper response to that is No, but I’m afraid He might exist”.[9] Writing for The Spectator, Tim Lott said Peterson draws inspiration from Jung’s philosophy of religion, and holds views similar to the Christian existentialism of Søren Kierkegaard and Paul Tillich. Lott also said Peterson has respect for Taoism, as it views nature as a struggle between order and chaos, and posits that life would be meaningless without this duality.[28]

Starting around 2000, Peterson began collecting Soviet-era paintings,[26] which he displays in his house as a reminder of what he argues is the relationship between totalitarian propaganda and art, and as examples of how idealistic visions can become totalitarian oppression and horror.[4][34] In 2016, Peterson became an honorary member of the extended family of Charles Joseph, a Kwakwaka’wakw artist, and was given the name Alestalagie (“Great Seeker”).[26][130] In late 2016, Peterson went on a strict diet consisting only of meat and some vegetables to control severe depression and an autoimmune disorder whose symptoms included psoriasis and uveitis.[22][131] He stopped eating vegetables entirely in mid-2018.[132]

Peterson wrote the foreword to the fiftieth anniversary edition of The Gulag Archipelago, released in November 2018.[133]

Go here to read the rest:

Jordan Peterson – Wikipedia

Peterson Middle School

February & March, 2019 Students of the Month

Each month, Peterson recognizes students for their outstanding effort, improvement, and/or overall demonstration of good citizenship in the classroom and in the Peterson community. Students earning this recognition are invited to a small celebration during SSR.


Excerpt from:

Peterson Middle School

Petersen Automotive Museum

After an extensive renovation, the Vault presented by Hagerty has more than doubled in size and now features over 250 vehicles from around the world. Discover some of the most iconic and rare cars, motorcycles, and trucks spanning over 120 years of automotive history on the largest guided automotive tour in the United States. Guided tours traverse the globe, featuring vehicles from six different regions, and provide an immersive experience exploring the history of the automobile, from Hollywood legends to hypercars. Visitors will see turn-of-the-twentieth-century cars, head-of-state cars, supercars, cars belonging to Hollywood legends, award-winning hot rods, cars that pushed the boundaries of innovation, and many other surprises.

Follow this link:

Petersen Automotive Museum
