Daily Archives: December 10, 2021

Elon Musk says he is thinking of quitting his job

Posted: December 10, 2021 at 7:34 pm

The world's richest man said he might be ready for a pivot. Tesla CEO Elon Musk tweeted on Thursday that he is, in his words, thinking of leaving his jobs.

Instead, he said he would like to become an influencer, which the Twitter-happy CEO is already sort of doing with his flock of nearly 66 million followers.

Of course, it's not immediately clear whether he was being serious. After all, back in January, he said on a conference call that he expects to be in the driver's seat at Tesla for "several years." But in addition to helming the electric vehicle maker, Musk also heads the infrastructure firm The Boring Company, the rocket launch service company SpaceX, and the brain-chip startup Neuralink.

He has run afoul of regulators in the past with his erratic behavior on social media. Three years ago, he tweeted that he was considering taking Tesla private and had secured funding, sending the stock sharply higher. But no deal emerged, and the Securities and Exchange Commission sued him. In a settlement, Tesla agreed to have its lawyers vet Musk's financially sensitive tweets.

Most recently, he polled his followers to see if he should sell 10% of his Tesla stake. A majority agreed. Securities filings showed on Thursday he sold another batch of shares worth just under $1 billion. He has been selling shares to pay for taxes on the exercise of stock options.

Tesla investors sent shares south in early Friday trading.

Go here to read the rest:

Elon Musk says he is thinking of quitting his job

Posted in Elon Musk

Elon Musk says there aren’t ‘enough people,’ birthrate could threaten human civilization – USA TODAY

Posted: at 7:34 pm

Video: Elon Musk could become the world's first trillionaire. Here's what that means. (Staff video, USA TODAY)

Business mogul Elon Musk is more concerned with under-population than over-population. The Tesla CEO said Monday that there are "not enough people" in the world and it could threaten human civilization.

"I think one of the biggest risks to civilization is the low birth rate and the rapidly declining birthrate," Musk said attheWall Street Journal'sannualCEO Council. The 50-year-old was answering a question about how the proposed Tesla Bot could solve some of the world's labor issues. Musk had previously called the bot a "generalized substitute for human labor over time."

"And yet, so many people, including smart people, think that there are too many people in the world and think that the population is growing out of control. It's completely the opposite. Please look at the numbers if people don't have more children, civilization is going to crumble, mark my words."

The Tesla Bot is Musk's "humanoid robot" that he plans to have built using Tesla's self-driving car artificial intelligence. The bot could have the capability of deadlifting up to 150 pounds and traveling at around 5 m.p.h.

The global birthrate has been on a steady decline since 1960, according to the World Bank. The U.S. birthrate fell by 4% in 2020, a record low, according to the Centers for Disease Control.

Musk, a father of six, added in the same interview that while he believes population growth is ideal, how long people live is a different story. He said, in regard to aging, that people shouldn't "try to live for a super long time."

"I think it is important for us to die because most of the times, people don't change their mind, they just die," he said."If they live forever, then we might become a very ossified society where new ideas cannot succeed."

Go here to see the original:

Elon Musk says there aren't 'enough people,' birthrate could threaten human civilization - USA TODAY

Posted in Elon Musk

Elon Musk, Happy With Tesla, Says SpaceX So Difficult, ‘I Wonder Whether We Can Do This’ – GOBankingRates

Posted: at 7:34 pm

Elon Musk, the richest person on the planet, who just tweeted he was thinking about quitting his job, made headlines this week when he criticized the government's effort to encourage electric-vehicle adoption, including clauses in the Build Back Better Act. The criticism was viewed by many as a way to signal Tesla's maturity and success.

"Thinking of quitting my jobs & becoming an influencer full-time wdyt," Musk tweeted on Dec. 9.

However, it's his other company, SpaceX, that seems to be giving him more trouble. Speaking remotely at the Wall Street Journal's CEO Conference, Musk's remarks on how difficult a task it has been to develop Starship, the next-generation vehicle the company plans to use to take humans to the Moon and Mars, sounded a lot like Tesla's early difficulties with the Model 3, Bloomberg reported.

"Starship absorbs more of my mental energy than probably any other single thing," Musk said Monday, according to Bloomberg. "It is so preposterously difficult, there are times where I wonder whether we can actually do this."

In November, Space Explored reported on an email Musk sent to employees in which he said that "what it comes down to, is that we face a genuine risk of bankruptcy if we can't achieve a Starship flight rate of at least once every two weeks next year."

Space Explored reported that Musk had planned to take a break over Thanksgiving weekend, but Raptor production issues changed that:

"Unfortunately, the Raptor production crisis is much worse than it had seemed a few weeks ago. As we have dug into the issues following the exiting of prior senior management, they have unfortunately turned out to be far more severe than was reported. There is no way to sugarcoat this," he said in the email. "I was going to take this weekend off, as my first weekend off in a long time, but instead, I will be on the Raptor line all night and through the weekend."

In contrast, Tesla has been on a tear this year, from exceeding delivery expectations despite chip shortages and supply chain disruptions to its stock soaring, bringing the company to a $1 trillion market cap. In turn, Musk said earlier this month that he would participate in Tesla's fourth-quarter earnings call, to take place in January 2022, following his statement, made in July, that he would not be on the calls anymore unless "there's something really important that I need to say."

Yaël Bizouati-Kennedy is a former full-time financial journalist and has written for several publications, including Dow Jones, The Financial Times Group, Bloomberg and Business Insider. She also worked as a vice president/senior content writer for major NYC-based financial companies, including New York Life and MSCI. Yaël is now freelancing and, most recently, co-authored the book Blockchain for Medical Research: Accelerating Trust in Healthcare with Dr. Sean Manion (CRC Press, April 2020). She holds two master's degrees, including one in Journalism from New York University and one in Russian Studies from Université Toulouse-Jean Jaurès, France.

Continued here:

Elon Musk, Happy With Tesla, Says SpaceX So Difficult, 'I Wonder Whether We Can Do This' - GOBankingRates

Posted in Elon Musk

Researchers identify brain signals associated with OCD symptoms, paving way for adaptive treatment – EurekAlert

Posted: at 7:33 pm

PROVIDENCE, R.I. [Brown University] In an effort to improve treatment for obsessive-compulsive disorder, a team of researchers has for the first time recorded electrical signals in the human brain associated with ebbs and flows in OCD symptoms over an extended period, in patients' homes as they went about daily living. The research could be an important step in making an emerging therapy called deep brain stimulation responsive to everyday changes in OCD symptoms.

OCD, which affects as much as 2% of the world's population, causes recurring unwanted thoughts and repetitive behaviors. The disorder is often debilitating, and 20-40% of cases don't respond to traditional drug or behavioral treatments. Deep brain stimulation (DBS), a technique that involves small electrodes precisely placed in the brain that deliver mild electrical pulses, is effective in treating over half of patients for whom other therapies failed. A limitation is that DBS is unable to adjust to moment-to-moment changes in OCD symptoms, which are impacted by the physical and social environment. But adaptive DBS, which can adjust the intensity of stimulation in response to real-time signals recorded in the brain, could be more effective than traditional DBS and reduce unwanted side effects.

"OCD is a disorder in which symptom severity is highly variable over time and can be elicited by triggers in the environment," said David Borton, an associate professor of biomedical engineering at Brown University, a biomedical engineer at the U.S. Department of Veterans Affairs Center for Neurorestoration and Neurotechnology and a senior author of the new research. "A DBS system that can adjust stimulation intensity in response to symptoms may provide more relief and fewer side effects for patients. But in order to enable that technology, we must first identify the biomarkers in the brain associated with OCD symptoms, and that is what we are working to do in this study."

The research, led by Nicole Provenza, a recent Brown biomedical engineering Ph.D. graduate from Borton's laboratory, was a collaboration between Borton's research group, affiliated with Brown's Carney Institute for Brain Science and School of Engineering; Dr. Wayne Goodman's and Dr. Sameer Sheth's research groups at Baylor College of Medicine; and Jeff Cohn from the University of Pittsburgh's Department of Psychology and Intelligent Systems Program and Carnegie Mellon University.

For the study, Goodmans team recruited five participants with severe OCD who were eligible for DBS treatment. Sheth, lead neurosurgeon, implanted each participant with an investigational DBS device from Medtronic capable of both delivering stimulation and recording native electrical brain signals. Using the sensing capabilities of the hardware, the team gathered brain-signal data from participants in both clinical settings and at home as they went about daily activities.

Along with the brain signal data, the team also collected a suite of behavioral biomarkers. In the clinical setting, these included facial expression and body movement. Using computer vision and machine learning, they discovered that the behavioral features were associated with changes in internal brain states. At home, they measured participants' self-reports of OCD symptom intensity as well as biometric data, such as heart rate and general activity levels, recorded by a smart watch and paired smartphone application provided by Rune Labs. All of those behavioral measures were then time-synched to the brain-sensing data, enabling the researchers to look for correlations between the two.
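
To illustrate what that time-synchronization step might look like in practice, here is a minimal sketch (not the study's actual pipeline) that aligns windowed brain-signal features with smartwatch biometrics by timestamp; the column names and values are hypothetical.

```python
# Minimal sketch (not the study's actual pipeline) of aligning brain-signal
# features with smartwatch biometrics by timestamp. Column names are hypothetical.
import pandas as pd

# Windowed neural features (e.g., band power per 10-second window)
neural = pd.DataFrame({
    "timestamp": pd.date_range("2021-12-09 10:00:00", periods=6, freq="10s"),
    "beta_band_power": [0.42, 0.51, 0.47, 0.63, 0.58, 0.61],
})

# Heart-rate samples from the paired smartwatch, on its own (slightly offset) clock
wearable = pd.DataFrame({
    "timestamp": pd.date_range("2021-12-09 10:00:03", periods=6, freq="10s"),
    "heart_rate": [72, 74, 73, 81, 79, 80],
})

# For each neural window, attach the nearest wearable sample within 5 seconds
synced = pd.merge_asof(
    neural.sort_values("timestamp"),
    wearable.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("5s"),
)

# First-pass check: does the candidate biomarker track the biometric signal?
print(synced[["beta_band_power", "heart_rate"]].corr())
```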

"This is the first time brain signals from participants with neuropsychiatric illness have been recorded chronically at home alongside relevant behavioral measures," Provenza said. "Using these brain signals, we may be able to differentiate between when someone is experiencing OCD symptoms and when they are not, and this technique made it possible to record this diversity of behavior and brain activity."

Provenza's analysis of the data showed that the technique did pick out brain-signal patterns potentially linked to OCD symptom fluctuation. While more work needs to be done across a larger cohort, this initial study shows that the technique is a promising way forward in confirming candidate biomarkers of OCD.

"We were able to collect a far richer dataset than has been collected before, and we found some tantalizing trends that we'd like to explore in a larger cohort of patients," Borton said. "Now we know that we have the toolset to nail down control signals that could be used to adjust stimulation level according to people's symptoms."

Once those biomarkers are positively identified, they could then be used in an adaptive DBS system. Currently, DBS systems employ a constant level of stimulation, which can be adjusted by a clinician at clinical visits. Adaptive DBS systems, in contrast, would stimulate and record brain activity and behavior continuously without the need to come to the clinic. When the system detects signals associated with an increase in symptom severity, it could ramp up stimulation to potentially provide additional relief. Likewise, stimulation could be toned down when symptoms abate. Such a system could potentially improve DBS therapy while reducing side effects.
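
As a purely illustrative sketch of the control loop described above (the biomarker scale, thresholds and stimulation limits below are invented for illustration, not taken from the study or any device firmware), an adaptive controller might look roughly like this:

```python
# Purely illustrative closed-loop (adaptive) DBS sketch. The biomarker scale,
# thresholds, and stimulation limits are hypothetical, not device parameters.

STIM_MIN, STIM_MAX = 0.0, 4.0   # stimulation amplitude bounds (mA), illustrative
STEP = 0.1                      # adjustment per control cycle
HIGH, LOW = 0.7, 0.3            # biomarker thresholds for ramping up or down

def adaptive_dbs_step(biomarker: float, amplitude: float) -> float:
    """Return the next stimulation amplitude given a symptom biomarker in [0, 1]."""
    if biomarker > HIGH:        # signals linked to worsening symptoms: ramp up
        amplitude += STEP
    elif biomarker < LOW:       # symptoms abating: back off to limit side effects
        amplitude -= STEP
    return min(max(amplitude, STIM_MIN), STIM_MAX)

# Toy simulation of a few control cycles
amp = 1.0
for biomarker in [0.2, 0.5, 0.8, 0.9, 0.6, 0.1]:
    amp = adaptive_dbs_step(biomarker, amp)
    print(f"biomarker={biomarker:.1f} -> amplitude={amp:.1f} mA")
```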

"In addition to advancing DBS therapy for cases of severe and treatment-resistant OCD, this study has the potential for improving our understanding of the underlying neurocircuitry of the disorder," Goodman said. "This deepened understanding may allow us to identify new anatomic targets for treatment that may be amenable to novel interventions that are less invasive than DBS."

Work on this line of research is ongoing. Because OCD is a complex disorder that manifests itself in highly variable ways across patients, the team hopes to expand the number of participants to capture more of that variability. They seek to identify a fuller set of OCD biomarkers that could be used to guide adaptive DBS systems. Once those biomarkers are in place, the team hopes to work with device-makers to implement them in their DBS devices.

"Our goal is to understand what those brain recordings are telling us and to train the device to recognize certain patterns associated with specific symptoms," Sheth said. "The better we understand the neural signatures of health and disease, the greater our chances of using DBS to successfully treat challenging brain disorders like OCD."

The research was supported by the National Institutes of Health's BRAIN Initiative (UH3NS100549 and UH3NS103549), the Charles Stark Draper Laboratory Fellowship, the McNair Foundation, the Texas Higher Education Coordinating Board, the National Institutes of Health (1RF1MH121371, U54-HD083092, NIH MH096951, K01-MH-116364 and R21-NS-104953, 3R25MH101076-05S2, 1S10OD025181) and the Karen T. Romer Undergraduate Teaching and Research Award at Brown University.

Method of Research: Experimental study

Subject of Research: People

Article Title: Long-term ecological assessment of intracranial electrophysiology synchronized to behavioral markers in obsessive-compulsive disorder

Article Publication Date: 9-Dec-2021

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

See the original post:

Researchers identify brain signals associated with OCD symptoms, paving way for adaptive treatment - EurekAlert

Posted in Neurotechnology

Interventional Neuroradiology Market Revenue to Cross US$ 3254.55 million by 2027 Says, The Insight Partners – Digital Journal

Posted: at 7:33 pm

The interventional neuroradiology market was valued at US$ 1,969.39 million in 2018 and it is projected to reach US$ 3,254.55 million in 2027; it is expected to grow at a CAGR of 5.9% from 2018 to 2027.
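
As a quick arithmetic check (ours, not the report's), the growth rate implied by the two endpoint values can be recomputed directly; it comes out near the stated 5.9% CAGR, with the small difference likely reflecting the report's choice of base year or rounding.

```python
# Sanity check of the reported growth figures (calculation ours, not the report's).
start, end = 1_969.39, 3_254.55   # US$ million, 2018 and 2027
years = 2027 - 2018               # 9 years

implied_cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")   # roughly 5.7%, close to the reported 5.9%
```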

Interventional neuroradiology is a medical sub-specialty that uses minimally invasive, image-based technologies and procedures to diagnose and treat diseases of the head, neck and spine. Interventional neuroradiology therapies are performed mainly with microcatheters inserted in the groin area and, under X-ray guidance, threaded through the blood vessels leading into the brain.

Download Sample [PDF] Copy of Interventional Neuroradiology Market study at: https://www.theinsightpartners.com/sample/TIPRE00003497/

Some of the players operating in the interventional neuroradiology market are Balt Extrusion, Merit Medical Systems, Terumo Corporation, Medtronic, Penumbra, Inc., Stryker, DePuy Synthes, Boston Scientific Corporation, W. L. Gore & Associates, and MicroPort Scientific Corporation, among others.

Market players have been pursuing acquisitions and collaborations, which enables them to hold a strong position in the market. For instance, in November 2018, Stryker completed the acquisition of K2M Group Holdings, Inc., which aims to expand the product portfolio of Stryker's spine and neurotechnology segment. Such developments are expected to help the market grow in the coming years.

By product, the interventional neuroradiology market is segmented into neurovascular embolization & coiling assist devices and accessories. In 2018, the neurovascular embolization & coiling assist devices segment held the largest share, 72.17%, of the market by product. This segment is expected to continue dominating the market in 2027, owing to an increase in the number of interventional neurology procedures, and is anticipated to grow at a significant rate over the forecast period, 2018 to 2027.

A progressively aging population, increasing demand for minimally invasive procedures and a rise in the prevalence of cerebral aneurysm play a vital role in the growth of the interventional neuroradiology market. However, restraints such as the high cost of embolization coils and a dearth of skilled professionals are likely to hold back the growth of the market over the forecast period.

COVID-19 has affected economies and industries in various countries due to lockdowns, travel bans, and business shutdowns. The crisis has overburdened public health systems in many countries and highlighted the strong need for sustainable investment in health systems. As the pandemic progresses, the healthcare industry is expected to see a drop in growth, although the life sciences segment continues to thrive due to increased demand for in vitro diagnostic products and rising research and development activities worldwide.

Download the Latest COVID-19 Analysis on Interventional Neuroradiology Market Growth Research Report at: https://www.theinsightpartners.com/covid-analysis-sample/TIPRE00003497

Rise in the Prevalence of the Cerebral Aneurysm to Drive Global Interventional Neuroradiology Market Growth

Neurological diseases are disorders of the brain, spine and the nerves that connect them. There are more than 600 diseases of the nervous system, such as brain tumors, epilepsy, Parkinson's disease and stroke, as well as less familiar ones such as frontotemporal dementia. In recent years, the prevalence of neurological disorders has increased significantly.

A cerebral aneurysm (also called an intracranial aneurysm or brain aneurysm) is a bulging, weakened area in the wall of an artery in the brain, resulting in an abnormal widening, ballooning, or bleb. Because there is a weakened spot in the aneurysm wall, there is a risk for rupture (bursting) of the aneurysm. The most common type of cerebral aneurysm is called a saccular, or berry, aneurysm, occurring in 90 percent of cerebral aneurysms. This type of aneurysm looks like a berry with a narrow stem. More than one aneurysm may be present. Two other types of cerebral aneurysms are fusiform and dissecting aneurysms. A fusiform aneurysm bulges out on all sides (circumferentially), forming a dilated artery. Fusiform aneurysms are often associated with atherosclerosis.

The prevalence of cerebral aneurysms has increased tremendously across the globe. Approximately 30,000 people in the United States suffer a brain aneurysm rupture each year, or roughly one rupture every 18 minutes. The annual rate of rupture in the United States is around 8-10 per 100,000 people. Hence, the rising prevalence of cerebral aneurysms is anticipated to fuel the growth of the market during the forecast period.
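
A back-of-the-envelope check (ours, not the article's) confirms that the two U.S. figures are consistent with each other:

```python
# Back-of-the-envelope check (not from the article): 30,000 ruptures per year
ruptures_per_year = 30_000
minutes_per_year = 365 * 24 * 60             # 525,600

print(minutes_per_year / ruptures_per_year)  # ~17.5 minutes between ruptures
```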

Order a Copy of Interventional Neuroradiology Market Shares, Strategies and Forecasts 2021-2027 Research Report at: https://www.theinsightpartners.com/buy/TIPRE00003497/

About Us:

The Insight Partners is a one-stop industry research provider of actionable intelligence. We help our clients in getting solutions to their research requirements through our syndicated and consulting research services. We specialize in industries such as Semiconductor and Electronics, Aerospace and Defense, Automotive and Transportation, Biotechnology, Healthcare IT, Manufacturing and Construction, Medical Device, Technology, Media and Telecommunications, Chemicals and Materials.

Contact Us:

If you have any queries about this report or if you would like further information, please contact us:

Contact Person: Sameer Joshi

E-mail: sales@theinsightpartners.com

Phone: +1-646-491-9876

Press Release: https://www.theinsightpartners.com/pr/interventional-neuroradiology-market

Read the original here:

Interventional Neuroradiology Market Revenue to Cross US$ 3254.55 million by 2027 Says, The Insight Partners - Digital Journal

Posted in Neurotechnology

Avation Medical Announces the Appointment of Industry Veterans to Board of Directors – PRNewswire

Posted: at 7:33 pm

COLUMBUS, Ohio, Dec. 8, 2021 /PRNewswire/ -- Avation Medical, an innovative advanced neuromodulation and digital health company pioneering closed-loop, wearable neuromodulation therapies that eliminate the need for surgery and implants, announced today the appointment of two highly experienced medical device and health care executives to its Board of Directors. The two new Board members bring a wealth of commercialization experience as the Company moves quickly to bring its first product to market.

Renee Selman, an accomplished pharmaceutical, medical device and healthcare services senior executive and former Worldwide President, Women's Health and Urology for Ethicon, a Johnson & Johnson Company, has joined the Board as an Independent Director. Renee brings extensive global and domestic experience in the healthcare industry, including commercialization of international products, market development, and strategic partnership development. Ms. Selman currently serves on the boards of FEMSelect and Hunterdon Medical System.

Raymond Huggenberger, a senior business leader with deep experience in innovative medical technology spanning more than 20 years of successful sales, marketing, and general management experience in highly competitive industries, has been appointed as an Independent Director. Ray previously served as the President and CEO of Inogen, Inc., and currently serves on the boards of several publicly traded medical device companies including Tactile Systems Technology, Inc. (TCMD), Inogen, Inc. (INGN) and Intricon Micromedical Technology (IIN) as well as several privately held companies. He also serves in an advisory capacity to Arboretum Ventures.

"Renee and Ray bring a wealth of strategic experience to Avation Medical which will be essential as we work to launch our category-changing, wearable and self-adjusting bladder modulation and digital health system to help the more than 40 million adults with overactive bladder (OAB) and urge incontinence that do not want invasive surgical procedures or the side effects of medications," said Jill Schiaparelli, President and CEO of Avation Medical.

"There remains an enormous need for patient-friendly, wearable neuromodulation therapy. I am honored to join the Avation Medical Board of Directors and work with the Company's talented team as we work to bring innovative technologies to the market," said Ms. Selman."Avation is a true innovator in the industry, and I look forward to supporting its ongoing work to revolutionize the science of self-adjusting, wearable neuromodulation and digital health therapies."

Mr. Huggenberger commented, "I am excited to join Avation Medical's board and work with the Company to make neuromodulation therapy surgery-free and more accessible to patients across a variety of clinical conditions. The Company's pioneering technology is a game-changer and will provide a new solution to the millions of OAB patients who desire surgery-free and drug-free treatment options."

In addition to Renee and Ray, the Board includes Dr. Thomas Shehab, Managing Partner, Arboretum Ventures; Kevin Wasserstein, Neurotechnology Innovations Management; and Jill Schiaparelli, President and CEO of Avation Medical.

About Avation Medical

Avation Medical is an innovative advanced neuromodulation and digital health company pioneering closed-loop, patient-friendly, wearable neuromodulation therapies that deliver personalized therapy based on the patient's own physiologic response. The Company's intelligent wearable therapies objectively confirm activation of the target nerve and make neuromodulation technology accessible to millions of patients across a variety of clinical applications by eliminating the need for surgery, needles and permanent implants and shifting therapy into the home environment.

The Company's wearable bladder therapy and digital health system is available for investigational use only and is not FDA cleared.

Corporate Contact
Jackie Gerberry
Senior Director, Finance
[email protected]
www.Avation.com

SOURCE Avation Medical

Read more from the original source:

Avation Medical Announces the Appointment of Industry Veterans to Board of Directors - PRNewswire

Posted in Neurotechnology

‘Write the stories you want to read’: SJ Sindu, author of Blue-Skinned Gods – News@UofT

Posted: at 7:32 pm

When SJ Sindu was younger, she couldn't wait for her annual family vacations to Scarborough.

"Scarborough was a completely different world to where I grew up," says Sindu, an assistant professor in the department of English at U of T Scarborough.

"You could go to Tamil stores, get Tamil food, and just be surrounded by Tamilness. That was very meaningful to me."

She says her early experiences growing up in a conflict zone, immigrating to the U.S. and exploring her own identity as a Tamil living in the mostly white, suburban town of Amherst, Mass., were instrumental in shaping her voice as an author.

Her first novel, Marriage of a Thousand Lies, tells the story of Lucky and her husband Krishna, who married to hide the fact that they are gay from their conservative Sri Lankan-American families. Her new novel, Blue-Skinned Gods, follows Kalki, a boy born with blue skin and black blood who is believed to be the reincarnation of Vishnu. He begins to doubt his divinity as his personal life and relationships fall apart, then moves to New York, where he becomes embedded in the underground punk scene.

Published in Canada by Penguin Random House, the book was described by Roxane Gay as "a brilliant novel that will take hold of you and never let you go" and received glowing reviews in The Guardian and The New York Times, among others. It will launch at Glad Day Books as part of their Naked Heart Festival on Dec. 18.

UTSC News spoke to Sindu about her early influences and how faith, identity and family continue to shape her writing.

How have your early influences shaped you as a writer?

I was born and lived in the northeast part of Sri Lanka until I was seven years old. A lot of my childhood and early years were shaped by the war, and being a Tamil living in Jaffna during the war.

The other was immigrating to the U.S. I was very much isolated as a kid. There were other Indians around, but there weren't Sri Lankan Tamils. So I read a lot of books and escaped into stories. It was a way to cope with being taken out of a war situation and put into this very suburban American life without any peers or ways to explore my own identity.

Did you always want to be a writer?

I didn't really start writing until I was in university. In fact, I started out in computer science and then fell in love with creative writing. I just loved the potential that writing fiction had for communicating the ideas that were obsessing me.

Where did the inspiration for Blue-Skinned Gods come from?

Partly the inspiration came because I lost my faith in religion. I was raised Hindu, and as a teenager I started to lose my faith and began to explore atheism. At the same time, my family became increasingly religious. So I wanted to explore that relationship.

I also saw a documentary by Vikram Gandhi called Kumaré, where he pretends to be an Indian guru and ends up gathering this large following. I was also closely watching the growing popularity of the BJP, a right-wing nationalist party in India, and interested in exploring what it meant to have a strain of fundamentalist Hindus on the rise in India and how that might affect the region.

In your first novel, Marriage of a Thousand Lies, you also explore themes of identity, sexuality, faith and family. Why do those themes inspire your writing?

There are things I'm still trying to work out in my own life. I'm trying to figure out my relationship with my family, especially my extended family, now that I'm living in Toronto. How to be part of a family that fundamentally rejects parts of who I am: the queerness, the atheism, the progressive beliefs I hold. Negotiating that with the older family members has been interesting. I'm still trying to figure it out, and I think I explore those things in my writing.

Did you have a favourite book, or one that influenced you as a writer?

There are two. The first is The Things They Carried by Tim O'Brien. It was the first novel I read where I realized that I should and could write about my experiences with war. It's the book that made me want to be a writer.

The second is Funny Boy by Shyam Selvadurai. For the first time I saw Tamilness and queerness explored together, and that was very important to see, especially in my development as a writer.

What advice do you have for your students and aspiring writers?

Write the stories you want to read. Many of my students at UTSC are racialized, many are from immigrant families, and they haven't read a lot of stories that reflect that experience. I hope they can be inspired to write about their own experiences.

Go here to read the rest:
'Write the stories you want to read': SJ Sindu, author of Blue-Skinned Gods - News@UofT

Posted in Atheism

Is Trey McBride the Jets Tight End of the Future? – Sports Illustrated

Posted: at 7:32 pm

If the Jets had a great tight end, it could be a real game changer.

This year's consensus top-rated TE coming out in the 2022 NFL Draft is Colorado State's Trey McBride.

Is it possible the Jets can get him?

Sure, it is possible. The question is what round McBride might go in.

Currently, McBride is slated to go anywhere between late in the first round, to the end of the second round.

The challenge is that one of the better teams in the NFL could select him at the bottom of the first round. It is very possible Tampa Bay could target McBride as Rob Gronkowski's eventual replacement and a complementary TE in the meantime. A one-two punch of Gronkowski and McBride would be lethal for quarterback Tom Brady, and it would create mismatch problems for opponents.

That is really the biggest advantage of having a top-tier TE. It creates mismatches against smaller corners and slower and less agile linebackers and safeties.

A classic example of this is Detroit's Pro Bowl tight end T.J. Hockenson. He currently leads the Lions in receptions, and he made his first Pro Bowl last season on a team that has been challenged to win games. This third-year TE is versatile, and Detroit does a nice job lining him up at different locations pre-snap to exploit defenses. Hockenson does a really good job of using his big frame and athleticism to box out defenders and make the catches.

As a receiver, McBride is comparable to Hockenson, who was a top-10 pick in 2019, and McBride is also a better blocker. They are both said to run a 4.7 (40) and both have similar styles. They are also comparable in terms of height and weight: Hockenson is listed at 6-foot-5, 248 pounds, while McBride is listed at 6-foot-4, 260 pounds.

The biggest difference between the two is that Hockenson looks more athletic and, despite their similar 40 times, plays faster than McBride on film.

So, that means McBride is not a top-10 prospect like Hockenson. However, it would be shocking if he was still on the board when the second round began.

Jets general manager Joe Douglas comes from Philadelphia, where the Eagles have had good tight end play. One would have to believe Douglas would love to have a top-tier TE to give quarterback Zach Wilson a big target to throw to downfield when everything else is breaking down around him.

It is distinctly possible Douglas could trade down with at least one of the Jets' first-round picks and target McBride in the middle to late part of the first round.

One of the most impressive things about McBride is that he has played in a predominantly run-heavy offense at Colorado State, where he is the main target, and he has still managed to stand out and put up breakout production this season.

#85 Trey McBride, 6-foot-4, 260 pounds
40-yard dash time: 4.7 (walterfootball.com)
Games reviewed in 2021: Utah State, Toledo and Vanderbilt
2021 production (to date): 90 receptions, 1,121 yards, 12.5 avg., 50 long, 1 TD
Grade: First round (15-32)
NFL comparables: T.J. Hockenson and Kyle Brady
Concerns: Level of competition
Scouting report: Complete tight end who is a big-frame target, a polished receiver and an excellent blocker. Excellent technique as a receiver and blocker. Versatile player who lines up in tight, in the slot and out wide. Good target at the short to intermediate route levels. Reliable hands. Excels at inside pitches and slant routes. Does an excellent job of using his big frame to box out defenders and make the grab. Lack of playing speed shows up against defenders on crossing routes and short drag routes into the flats. Lacks vertical jumping ability and flexibility when extending (looks tight in the upper body). Decent (not great) athletic ability. Decent YAC (yards after catch). Not the easiest to bring down; takes effort to tackle him. Really gets after it as a run blocker. Holds the point very well and was seen making several pancake blocks. Stays with it and is an extremely good, dependable and consistent run blocker. Top-15 TE in the NFL.

With the Jets not getting enough production out of TE Ryan Griffin the past three seasons, drafting McBride makes even more sense.

Griffin is also scheduled to become a free agent after the 2022 season, and all of Griffin's backups have far less production and experience than he has.

McBride would be a great catch for the Jets.

Follow Daniel Kelly on Twitter (@danielkellybook). Be sure to bookmark Jets Country and check back daily for news, analysis and more.

Read the original post:
Is Trey McBride the Jets Tight End of the Future? - Sports Illustrated

Posted in Atheism

Raised by Wolves Season 2 Trailer Asks What it is to be Human – Geek Feed

Posted: at 7:32 pm

Ridley Scott's last Alien movie may have been lackluster, but the legendary filmmaker did bring us a great new sci-fi property with Aaron Guzikowski's Raised by Wolves on HBO. The series is set to come back next year, and we have a new trailer featuring Mother, Father, and their surviving human children.

Watch the trailer for Raised by Wolves on HBO:

Here's the official plot description for Season 2:

In season two of RAISED BY WOLVES, android partners Mother (Amanda Collin) and Father (Abubakar Salim), along with their brood of six human children, join a newly formed atheistic colony in Kepler-22b's mysterious tropical zone. But navigating this strange new society is only the start of their troubles, as Mother's natural child threatens to drive what little remains of the human race to extinction.

While the first season only had us focusing on a small tribe of atheists, it looks like we're going to have a full colony of people for Mother and Father to interact with. On the other hand, Marcus (Travis Fimmel) has fully embraced Sol, and is now set to be some kind of Neo-Jesus on the search for new followers.

Just like Westworld before it, Raised by Wolves is a property that doesn't hold the audience's hand when it comes to the story, and the plot is rife with metaphors and biblical allusions. We don't know what Mother's snake baby is going to do, but I'm sure this next season is still going to be full of twists and turns, and several discussions about theology and atheism.

Raised by Wolves returns to HBO Max on Feb. 3.

Visit link:
Raised by Wolves Season 2 Trailer Asks What it is to be Human - Geek Feed

Posted in Atheism

AI control problem – Wikipedia

Posted: at 7:28 pm

Issue of ensuring beneficial AI

In artificial intelligence (AI) and philosophy, the AI control problem is the issue of how to build AI systems such that they will aid their creators, and avoid inadvertently building systems that will harm their creators. One particular concern is that humanity will have to solve the control problem before a superintelligent AI system is created, as a poorly designed superintelligence might rationally decide to seize control over its environment and refuse to permit its creators to modify it after launch.[1] In addition, some scholars argue that solutions to the control problem, alongside other advances in AI safety engineering,[2] might also find applications in existing non-superintelligent AI.[3]

Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. Capability control proposals are generally not considered reliable or sufficient to solve the control problem, but rather as potentially valuable supplements to alignment efforts.[1]

Existing weak AI systems can be monitored and easily shut down and modified if they misbehave. However, a misprogrammed superintelligence, which by definition is smarter than humans in solving practical problems it encounters in the course of pursuing its goals, would realize that allowing itself to be shut down and modified might interfere with its ability to accomplish its current goals. If the superintelligence therefore decides to resist shutdown and modification, it would (again, by definition) be smart enough to outwit its programmers if there is otherwise a "level playing field" and if the programmers have taken no prior precautions. In general, attempts to solve the control problem after superintelligence is created are likely to fail because a superintelligence would likely have superior strategic planning abilities to humans and would (all things equal) be more successful at finding ways to dominate humans than humans would be able to post facto find ways to dominate the superintelligence. The control problem asks: What prior precautions can the programmers take to successfully prevent the superintelligence from catastrophically misbehaving?[1]

Humans currently dominate other species because the human brain has some distinctive capabilities that the brains of other animals lack. Some scholars, such as philosopher Nick Bostrom and AI researcher Stuart Russell, argue that if AI surpasses humanity in general intelligence and becomes superintelligent, then this new superintelligence could become powerful and difficult to control: just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[1] Some scholars, including Stephen Hawking and Nobel laureate physicist Frank Wilczek, publicly advocated starting research into solving the (probably extremely difficult) control problem well before the first superintelligence is created, and argue that attempting to solve the problem after superintelligence is created would be too late, as an uncontrollable rogue superintelligence might successfully resist post-hoc efforts to control it.[4][5] Waiting until superintelligence seems to be imminent could also be too late, partly because the control problem might take a long time to satisfactorily solve (and so some preliminary work needs to be started as soon as possible), but also because of the possibility of a sudden intelligence explosion from sub-human to super-human AI, in which case there might not be any substantial or unambiguous warning before superintelligence arrives.[6] In addition, it is possible that insights gained from the control problem could in the future end up suggesting that some architectures for artificial general intelligence (AGI) are more predictable and amenable to control than other architectures, which in turn could helpfully nudge early AGI research toward the direction of the more controllable architectures.[1]

Autonomous AI systems may be assigned the wrong goals by accident.[7] Two AAAI presidents, Tom Dietterich and Eric Horvitz, note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.[8]

According to Bostrom, superintelligence can create a qualitatively new problem of perverse instantiation: the smarter and more capable an AI is, the more likely it will be able to find an unintended shortcut that maximally satisfies the goals programmed into it. Bostrom gives several hypothetical examples in which goals might be instantiated in a perverse way that the programmers did not intend.[1]

Russell has noted that, on a technical level, omitting an implicit goal can result in harm: "A system that is optimizing a function of n variables, where the objective depends on a subset of size k < n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable."

In addition, some scholars argue that research into the AI control problem might be useful in preventing unintended consequences from existing weak AI. DeepMind researcher Laurent Orseau gives, as a simple hypothetical example, a case of a reinforcement learning robot that sometimes gets legitimately commandeered by humans when it goes outside: how should the robot best be programmed so that it does not accidentally and quietly learn to avoid going outside, for fear of being commandeered and thus becoming unable to finish its daily tasks? Orseau also points to an experimental Tetris program that learned to pause the screen indefinitely to avoid losing. Orseau argues that these examples are similar to the capability control problem of how to install a button that shuts off a superintelligence, without motivating the superintelligence to take action to prevent humans from pressing the button.[3]

In the past, even pre-tested weak AI systems have occasionally caused harm, ranging from minor to catastrophic, that was unintended by the programmers. For example, in 2015, possibly due to human error, a German worker was crushed to death by a robot at a Volkswagen plant that apparently mistook him for an auto part.[10] In 2016, Microsoft launched a chatbot, Tay, that learned to use racist and sexist language.[3][10] The University of Sheffield's Noel Sharkey states that an ideal solution would be if "an AI program could detect when it is going wrong and stop itself", but cautions the public that solving the problem in the general case would be "a really enormous scientific challenge".[3]

In 2017, DeepMind released AI Safety Gridworlds, which evaluate AI algorithms on nine safety features, such as whether the algorithm wants to turn off its own kill switch. DeepMind confirmed that existing algorithms perform poorly, which was unsurprising because the algorithms "were not designed to solve these problems"; solving such problems might require "potentially building a new generation of algorithms with safety considerations at their core".[11][12][13]

Some proposals seek to solve the problem of ambitious alignment, creating AIs that remain safe even when they act autonomously at a large scale. Some aspects of alignment inherently have moral and political dimensions.[14] For example, in Human Compatible, Berkeley professor Stuart Russell proposes that AI systems be designed with the sole objective of maximizing the realization of human preferences.[15]:173 The "preferences" Russell refers to "are all-encompassing; they cover everything you might care about, arbitrarily far into the future." AI ethics researcher Iason Gabriel argues that we should align AIs with "principles that would be supported by a global overlapping consensus of opinion, chosen behind a veil of ignorance and/or affirmed through democratic processes."[14]

Eliezer Yudkowsky of the Machine Intelligence Research Institute has proposed the goal of fulfilling humanity's coherent extrapolated volition (CEV), roughly defined as the set of values which humanity would share at reflective equilibrium, i.e. after a long, idealised process of refinement.[14][16]

By contrast, existing experimental narrowly aligned AIs are more pragmatic and can successfully carry out tasks in accordance with the user's immediate inferred preferences,[17] albeit without any understanding of the user's long-term goals. Narrow alignment can apply to AIs with general capabilities, but also to AIs that are specialized for individual tasks. For example, we would like question answering systems to respond to questions truthfully without selecting their answers to manipulate humans or bring about long-term effects.

Some AI control proposals account for both a base explicit objective function and an emergent implicit objective function. Such proposals attempt to harmonize three different descriptions of the AI system:[18]

Because AI systems are not perfect optimizers, and because there may be unintended consequences from any given specification, emergent behavior can diverge dramatically from ideal or design intentions.

AI alignment researchers aim to ensure that the behavior matches the ideal specification, using the design specification as a midpoint. A mismatch between the ideal specification and the design specification is known as outer misalignment, because the mismatch lies between (1) the user's "true desires", which sit outside the computer system and (2) the computer system's programmed objective function (inside the computer system). A certain type of mismatch between the design specification and the emergent behavior is known as inner misalignment; such a mismatch is internal to the AI, being a mismatch between (2) the AI's explicit objective function and (3) the AI's actual emergent goals.[19][20][21] Outer misalignment might arise because of mistakes in specifying the objective function (design specification).[22] For example, a reinforcement learning agent trained on the game of CoastRunners learned to move in circles while repeatedly crashing, which got it a higher score than finishing the race.[23] By contrast, inner misalignment arises when the agent pursues a goal that is aligned with the design specification on the training data but not elsewhere.[19][20][21] This type of misalignment is often compared to human evolution: evolution selected for genetic fitness (design specification) in our ancestral environment, but in the modern environment human goals (revealed specification) are not aligned with maximizing genetic fitness. For example, our taste for sugary food, which originally increased fitness, today leads to overeating and health problems. Inner misalignment is a particular concern for agents which are trained in large open-ended environments, where a wide range of unintended goals may emerge.[20]

An inner alignment failure occurs when the goals an AI pursues during deployment deviate from the goals it was trained to pursue in its original environment (its design specification). Paul Christiano argues for using interpretability to detect such deviations, using adversarial training to detect and penalize them, and using formal verification to rule them out.[24] These research areas are active focuses of work in the machine learning community, although that work is not normally aimed towards solving AGI alignment problems. A wide body of literature now exists on techniques for generating adversarial examples, and for creating models robust to them.[25] Meanwhile, research on verification includes techniques for training neural networks whose outputs provably remain within identified constraints.[26]
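
As one concrete instance of the adversarial-example techniques referred to above, the fast gradient sign method (FGSM) perturbs an input in the direction that most increases the model's loss. The PyTorch sketch below is a generic illustration, not code from any of the cited works; the tiny stand-in classifier and input shapes are invented so the example is self-contained.

```python
# Illustrative FGSM adversarial-example sketch in PyTorch; the tiny model below
# stands in for a real image classifier and exists only to make the example runnable.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial version of input batch x for true labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp to a valid range
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Stand-in classifier: 8x8 "images", 10 classes (hypothetical, for illustration only)
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))
x = torch.rand(4, 1, 8, 8)                  # batch of 4 inputs scaled to [0, 1]
y = torch.randint(0, 10, (4,))              # their labels
x_adv = fgsm_attack(model, x, y)

# Adversarial training then mixes such perturbed examples back into training,
# penalizing the model for being fooled by them.
print((x_adv - x).abs().max())              # perturbation bounded by epsilon
```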

One approach to achieving outer alignment is to ask humans to evaluate and score the AI's behavior.[27][28] However, humans are also fallible, and might score some undesirable solutions highly; for instance, a virtual robot hand learns to 'pretend' to grasp an object to get positive feedback.[29] And thorough human supervision is expensive, meaning that this method could not realistically be used to evaluate all actions. Additionally, complex tasks (such as making economic policy decisions) might produce too much information for an individual human to evaluate. And long-term tasks such as predicting the climate cannot be evaluated without extensive human research.[30]

A key open problem in alignment research is how to create a design specification which avoids (outer) misalignment, given only limited access to a human supervisor. This is known as the problem of scalable oversight.[28]

OpenAI researchers have proposed training aligned AI by means of debate between AI systems, with the winner judged by humans.[31] Such debate is intended to bring the weakest points of an answer to a complex question or problem to human attention, as well as to train AI systems to be more beneficial to humans by rewarding AI for truthful and safe answers. This approach is motivated by the expected difficulty of determining whether an AGI-generated answer is both valid and safe by human inspection alone. Joel Lehman characterizes debate as one of "the long term safety agendas currently popular in ML", with the other two being reward modeling[17] and iterated amplification.[32][30]

Reward modeling refers to a system of reinforcement learning in which an agent receives rewards from a model trained to imitate human feedback.[17] In reward modeling, instead of receiving reward signals directly from humans or from a static reward function, an agent receives its reward signals through a human-trained model that can operate independently of humans. The reward model is concurrently trained by human feedback on the agent's behavior during the same period in which the agent is being trained by the reward model.

In 2017, researchers from OpenAI and DeepMind reported that a reinforcement learning algorithm using a feedback-predicting reward model was able to learn complex novel behaviors in a virtual environment.[27] In one experiment, a virtual robot was trained to perform a backflip in less than an hour of evaluation using 900 bits of human feedback. In 2020, researchers from OpenAI described using reward modeling to train language models to produce short summaries of Reddit posts and news articles, with high performance relative to other approaches.[33] However, they observed that beyond the predicted reward associated with the 99th percentile of reference summaries in the training dataset, optimizing for the reward model produced worse summaries rather than better.
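
The core training signal for such a reward model typically comes from pairwise human comparisons: the model scores two behavior segments, and it is trained so that the segment the human preferred receives the higher score. The sketch below is a generic illustration of that idea (a Bradley-Terry style preference loss) and is not the code from the cited OpenAI/DeepMind work; the network size and the random stand-in data are invented for illustration.

```python
# Generic sketch of preference-based reward modeling (not the original papers' code).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a fixed-size representation of a behavior segment to a scalar reward."""
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, segment):
        return self.net(segment).squeeze(-1)

def preference_loss(reward_model, seg_a, seg_b, human_prefers_a):
    """Bradley-Terry style loss: the preferred segment should get the higher reward."""
    r_a, r_b = reward_model(seg_a), reward_model(seg_b)
    p_a = torch.sigmoid(r_a - r_b)          # model's probability that A is preferred
    return nn.functional.binary_cross_entropy(p_a, human_prefers_a.float())

# Training step (sketch): segment pairs shown to humans, then one gradient update
obs_dim = 16
model = RewardModel(obs_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
seg_a, seg_b = torch.randn(32, obs_dim), torch.randn(32, obs_dim)
labels = torch.randint(0, 2, (32,))         # stand-in for recorded human choices
loss = preference_loss(model, seg_a, seg_b, labels)
opt.zero_grad(); loss.backward(); opt.step()
# The RL agent is then trained against model(segment) instead of a hand-coded reward.
```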

A long-term goal of this line of research is to create a recursive reward modeling setup for training agents on tasks too complex or costly for humans to evaluate directly.[17] For example, if we wanted to train an agent to write a fantasy novel using reward modeling, we would need humans to read and holistically assess enough novels to train a reward model to match those assessments, which might be prohibitively expensive. But this would be easier if we had access to assistant agents which could extract a summary of the plotline, check spelling and grammar, summarize character development, assess the flow of the prose, and so on. Each of those assistants could in turn be trained via reward modeling.

The general term for a human working with AIs to perform tasks that the human could not by themselves is an amplification step, because it amplifies the capabilities of a human beyond what they would normally be capable of. Since recursive reward modeling involves a hierarchy of several of these steps, it is one example of a broader class of safety techniques known as iterated amplification.[30] In addition to techniques which make use of reinforcement learning, other proposed iterated amplification techniques rely on supervised learning, or imitation learning, to scale up human abilities.

Stuart Russell has advocated a new approach to the development of beneficial machines, in which:[15]:182

1. The machine's only objective is to maximize the realization of human preferences.

2. The machine is initially uncertain about what those preferences are.

3. The ultimate source of information about human preferences is human behavior.

An early example of this approach is Russell and Ng's inverse reinforcement learning, in which AIs infer the preferences of human supervisors from those supervisors' behavior, by assuming that the supervisors act to maximize some reward function. More recently, Hadfield-Menell et al. have extended this paradigm to allow humans to modify their behavior in response to the AIs' presence, for example, by favoring pedagogically useful actions, which they call "assistance games", also known as cooperative inverse reinforcement learning.[15]:202 [34] Compared with debate and iterated amplification, assistance games rely more explicitly on specific assumptions about human rationality; it is unclear how to extend them to cases in which humans are systematically biased or otherwise suboptimal.

Work on scalable oversight largely occurs within formalisms such as POMDPs. Existing formalisms assume that the agent's algorithm is executed outside the environment (i.e. not physically embedded in it). Embedded agency[35][36] is another major strand of research, which attempts to solve problems arising from the mismatch between such theoretical frameworks and real agents we might build. For example, even if the scalable oversight problem is solved, an agent which is able to gain access to the computer it is running on may still have an incentive to tamper[37] with its reward function in order to get much more reward than its human supervisors give it. A list of examples of specification gaming from DeepMind researcher Viktoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing.[22] This class of problems has been formalised using causal incentive diagrams.[37] Everitt and Hutter's current reward function algorithm[38] addresses it by designing agents which evaluate future actions according to their current reward function. This approach is also intended to prevent problems from more general self-modification which AIs might carry out.[39][35]

Other work in this area focuses on developing new frameworks and algorithms for other properties we might want to capture in our design specification.[35] For example, we would like our agents to reason correctly under uncertainty in a wide range of circumstances. As one contribution to this, Leike et al. provide a general way for Bayesian agents to model each other's policies in a multi-agent environment, without ruling out any realistic possibilities.[40] And the Garrabrant induction algorithm extends probabilistic induction to be applicable to logical, rather than only empirical, facts.[41]

Capability control proposals aim to increase our ability to monitor and control the behavior of AI systems, in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as our agents become more intelligent and their ability to exploit flaws in our control systems increases. Therefore, Bostrom and others recommend capability control methods only as a supplement to alignment methods.[1]

One challenge is that neural networks are by default highly uninterpretable.[42] This makes it more difficult to detect deception or other undesired behavior. Advances in interpretable artificial intelligence could be useful to mitigate this difficulty.[43]

One potential way to prevent harmful outcomes is to give human supervisors the ability to easily shut down a misbehaving AI via an "off-switch". However, in order to achieve their assigned objective, such AIs will have an incentive to disable any off-switches, or to run copies of themselves on other computers. This problem has been formalised as an assistance game between a human and an AI, in which the AI can choose whether to disable its off-switch; and then, if the switch is still enabled, the human can choose whether to press it or not.[44] A standard approach to such assistance games is to ensure that the AI interprets human choices as important information about its intended goals.[15]:208
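
A toy version of that off-switch game makes the intuition concrete: if the robot is uncertain whether its intended action helps or harms the human, deferring (leaving the switch enabled and letting the human decide) has expected value at least as high as acting directly or disabling the switch. The numbers below are illustrative and follow only the general shape of the Hadfield-Menell et al. formulation, not their exact model.

```python
# Toy off-switch game (illustrative numbers, not the paper's exact model).
# The robot is unsure whether its intended action helps (+U) or harms (-U) the human.
import random

def play(robot_choice: str, utility: float) -> float:
    if robot_choice == "act":        # act immediately, bypassing the human
        return utility
    if robot_choice == "disable":    # disable the off-switch, then act
        return utility
    if robot_choice == "defer":      # wait; a rational human switches off iff harmful
        return utility if utility > 0 else 0.0
    raise ValueError(robot_choice)

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]  # robot's uncertainty over U

for choice in ["act", "disable", "defer"]:
    ev = sum(play(choice, u) for u in samples) / len(samples)
    print(f"{choice:>7}: expected utility ~ {ev:+.3f}")
# "defer" comes out ahead because the human filters out the harmful cases,
# which is why the AI gains by treating the human's choice as information.
```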

Alternatively, Laurent Orseau and Stuart Armstrong proved that a broad class of agents, called safely interruptible agents, can learn to become indifferent to whether their off-switch gets pressed.[3][45] This approach has the limitation that an AI which is completely indifferent to whether it is shut down or not is also unmotivated to care about whether the off-switch remains functional, and could incidentally and innocently disable it in the course of its operations (for example, for the purpose of removing and recycling an unnecessary component). More broadly, indifferent agents will act as if the off-switch can never be pressed, and might therefore fail to make contingency plans to arrange a graceful shutdown.[45][46]

An AI box is a proposed method of capability control in which an AI is run on an isolated computer system with heavily restricted input and output channels, for example text-only channels and no connection to the internet. While this reduces the AI's ability to carry out undesirable behavior, it also reduces its usefulness. However, boxing has fewer costs when applied to a question-answering system, which does not require interaction with the world in any case.

The likelihood of security flaws involving hardware or software vulnerabilities can be reduced by formally verifying the design of the AI box. Security breaches may also occur if the AI is able to manipulate the human supervisors into letting it out, via its understanding of their psychology.[47]

An oracle is a hypothetical AI designed to answer questions and prevented from gaining any goals or subgoals that involve modifying the world beyond its limited environment.[48][49] A successfully controlled oracle would have considerably less immediate benefit than a successfully controlled general-purpose superintelligence, though an oracle could still create trillions of dollars worth of value.[15]:163 In his book Human Compatible, AI researcher Stuart J. Russell states that an oracle would be his response to a scenario in which superintelligence is known to be only a decade away.[15]:162-163 His reasoning is that an oracle, being simpler than a general-purpose superintelligence, would have a higher chance of being successfully controlled under such constraints.

Because of its limited impact on the world, it may be wise to build an oracle as a precursor to a superintelligent AI. The oracle could tell humans how to successfully build a strong AI, and perhaps provide answers to difficult moral and philosophical problems requisite to the success of the project. However, oracles may share many of the goal definition issues associated with general-purpose superintelligence. An oracle would have an incentive to escape its controlled environment so that it can acquire more computational resources and potentially control what questions it is asked.[15]:162 Oracles may not be truthful, possibly lying to promote hidden agendas. To mitigate this, Bostrom suggests building multiple oracles, all slightly different, and comparing their answers to reach a consensus.[50]

In contrast to endorsers of the thesis that rigorous control efforts are needed because superintelligence poses an existential risk, AI risk skeptics believe that superintelligence poses little or no risk of accidental misbehavior. Such skeptics often believe that controlling a superintelligent AI will be trivial. Some skeptics,[51] such as Gary Marcus,[52] propose adopting rules similar to the fictional Three Laws of Robotics which directly specify a desired outcome ("direct normativity"). By contrast, most endorsers of the existential risk thesis (as well as many skeptics) consider the Three Laws to be unhelpful, due to those three laws being ambiguous and self-contradictory. (Other "direct normativity" proposals include Kantian ethics, utilitarianism, or a mix of some small list of enumerated desiderata.) Most endorsers believe instead that human values (and their quantitative trade-offs) are too complex and poorly-understood to be directly programmed into a superintelligence; instead, a superintelligence would need to be programmed with a process for acquiring and fully understanding human values ("indirect normativity"), such as coherent extrapolated volition.[53]

In 2021, the UK published its 10-year National AI Strategy.[54] According to the Strategy, the British government takes seriously "the long term risk of non-aligned Artificial General Intelligence".[55] The strategy describes actions to assess long term AI risks, including catastrophic risks.[56]

More here:

AI control problem - Wikipedia

Posted in Superintelligence