Voyager Therapeutics Provides Update on AbbVie Vectorized Antibody Collaborations – GlobeNewswire

CAMBRIDGE, Mass., Aug. 03, 2020 (GLOBE NEWSWIRE) -- Voyager Therapeutics, Inc. (NASDAQ: VYGR), a clinical-stage gene therapy company focused on developing life-changing treatments for severe neurological diseases, today announced the termination of its tau and alpha-synuclein vectorized antibody collaborations with AbbVie. Voyager retains full rights to the vectorization technology and certain novel vectorized antibodies developed as part of the collaborations.

"Our efforts to harness AAV-based gene therapy to produce antibodies directly in the brain and overcome major limitations with delivery of current biologics across the blood-brain barrier have been highly productive," said Omar Khwaja, M.D., Ph.D., Chief Medical Officer and Head of R&D at Voyager. "Through the tau and alpha-synuclein collaborations, we believe we have made considerable progress against targets for neurodegenerative diseases with this novel approach, reinforcing our enthusiasm for its potential to deliver therapeutically efficacious levels of biologics to the brain and central nervous system. We believe our continued work on discovery and design of novel AAV capsids with substantially improved blood-brain barrier penetrance will also considerably broaden the potential of AAV-based gene therapy, including vectorized antibodies or other biologics, for the treatment of severe neurological diseases."

The tau and alpha-synuclein research collaborations were formed in 2018 and 2019, respectively. Under the terms of the collaboration agreements, Voyager received upfront payments to perform research and preclinical development of vectorized antibodies directed against tau and alpha-synuclein. With the conclusion of the collaborations, Voyager has regained full clinical development and commercialization rights to certain product candidates developed within the context of the collaboration for the tau program. Voyager is free to pursue vectorized antibody programs for tau and alpha-synuclein alone or in collaboration with another partner.

Voyager does not anticipate any changes to its cash runway guidance due to the termination of the agreements. As of March 31, 2020, the Company had cash, cash equivalents and marketable debt securities of $250.9 million, which, along with amounts expected to be received for reimbursement of development costs from Neurocrine Biosciences, is expected to be sufficient to meet Voyager's projected operating expenses and capital expenditure requirements into mid-2022.

About Voyager Therapeutics

Voyager Therapeutics is a clinical-stage gene therapy company focused on developing life-changing treatments for severe neurological diseases. Voyager is committed to advancing the field of AAV gene therapy through innovation and investment in vector engineering and optimization, manufacturing, and dosing and delivery techniques. Voyager's wholly owned and partnered pipeline focuses on severe neurological diseases for which effective new therapies are needed, including Parkinson's disease, Huntington's disease, Friedreich's ataxia, and other severe neurological diseases. For more information, please visit http://www.voyagertherapeutics.com or follow @VoyagerTx on Twitter and LinkedIn.

Forward-Looking Statements

This press release contains forward-looking statements for the purposes of the safe harbor provisions under The Private Securities Litigation Reform Act of 1995 and other federal securities laws. The use of words such as "may," "might," "will," "would," "should," "expect," "plan," "anticipate," "believe," "estimate," "undoubtedly," "project," "intend," "future," "potential," or "continue," and other similar expressions are intended to identify forward-looking statements. For example, all statements Voyager makes regarding the ability of Voyager to maintain research and development activities currently included within the collaboration agreements with AbbVie; Voyager's ability to advance its AAV-based gene therapies and its ability to continue to develop its gene therapy platform; the scope of the intellectual property rights and other rights that will be available to Voyager following the termination of the AbbVie collaboration agreements; the anticipated effects of the termination of the AbbVie collaboration agreements on Voyager's anticipated financial results, including Voyager's available cash, cash equivalents and marketable debt securities; and Voyager's ability to fund its operating expenses with its current cash, cash equivalents and marketable debt securities through a stated time period are forward-looking. All forward-looking statements are based on estimates and assumptions by Voyager's management that, although Voyager believes such forward-looking statements to be reasonable, are inherently uncertain. All forward-looking statements are subject to risks and uncertainties that may cause actual results to differ materially from those that Voyager expected. Such risks and uncertainties include, among others, the continued cooperation of AbbVie in activities arising from the termination of the AbbVie collaboration agreements; the development of the gene therapy platform; Voyager's scientific approach and general development progress; Voyager's ability to create and protect its intellectual property; and the sufficiency of Voyager's cash resources. These statements are also subject to a number of material risks and uncertainties that are described in Voyager's most recent Quarterly Report on Form 10-Q filed with the Securities and Exchange Commission, as updated by its subsequent filings with the Securities and Exchange Commission. All information in the press release is as of the date of this press release, and any forward-looking statement speaks only as of the date on which it was made. Voyager undertakes no obligation to publicly update or revise this information or any forward-looking statement, whether as a result of new information, future events or otherwise, except as required by law.

Investors: Paul Cox, VP, Investor Relations, 857-201-3463, pcox@vygr.com

Media: Sheryl Seapy, W2O pure, 949-903-4750, sseapy@purecommunications.com


And they’re off! Campaign signs popping up – Las Cruces Bulletin

By Mike Cook

As of Aug. 7, there are 88 days until the Tuesday, Nov. 3, General Election. Yard signs and billboards are allowed 90 days before an election, and they have already begun to appear.

There are 28 federal, statewide and local races on Doña Ana County ballots, featuring 59 candidates: 28 Democrats, 25 Republicans, four Libertarians, one Constitution Party candidate and one declined-to-state (DTS) candidate. There are eight incumbent local district judges and one state Supreme Court justice up for voter retention.

There also will be five ballot initiatives for voters to consider: two constitutional amendments and three statewide bond issues that would allocate $200 million for senior centers, libraries, colleges and universities across the state.

Because state legislators and county commissioners are elected by districts, not everyone will see the same names on their ballots. Voters will choose from the same group of candidates for president, U.S. Senate and U.S. House New Mexico district two, county clerk and treasurer and Third Judicial District attorney, and will vote up or down for statewide and local judicial retentions.

But depending on where they live, voters will see different candidates in the six state Senate and eight state House of Representatives races that include Doña Ana County. Three of five county commission seats are also on this year's ballot. The other two commission seats along with the county sheriff, assessor and probate judge will be up in 2022.

Democrats are unopposed in one statewide and two local races: Court of Appeals position three, district attorney and county commission district two. Gerald Byers, who also had no primary opponent, will succeed Mark D'Antonio, who is retiring after two four-year terms, as district attorney.

Anthony Mayor Diane Murillo-Trujillo defeated incumbent Ramon Gonzalez in the June county commission district two Primary and will become a member of the commission next January.

The four Libertarian candidates are running for president, U.S. Senate, U.S. House, Court of Appeals position two (a write-in candidate whose status is being evaluated by the New Mexico Secretary of State's office) and county commission district four.

The lone Constitution Party candidate is running for president and the only DTS candidate is running for U.S. House district two.

There are 22 incumbents running: 21 Democrats hoping to hold U.S. House district two, two state Supreme Court and three Court of Appeals seats, four state Senate and eight state House seats, one county commission seat, county clerk and county treasurer; and two Republicans, President Donald Trump and state Sen. Ron Griggs of Alamogordo, whose district includes two of Doña Ana County's 170 precincts.

Two long-time state Senate Democrats, John Arthur Smith (32 years) of Deming and Mary Kay Papen (20 years) of Las Cruces, lost in the Primary, along with County Commissioner Gonzalez. Another county commissioner, Isabella Solis, was elected to the commission as a Democrat in 2016, switched to Republican in 2019 and chose to run for state representative this year instead of running for re-election to the commission.


What’s on the ballot? A rundown of races and issues facing Greene County voters on Tuesday – News-Leader


On Tuesday, Missouri voters will head to the polls to cast their ballots.

There are several primaries for federal, state and local races on the ballot, as well as a state constitutional amendment and, for Springfield voters, a question about fees for short-term lenders.

Greene County polling sites will have cleaning supplies, hand sanitizer and gloves on hand when residents show up to vote in the primary on Tuesday. (Photo: Nathan Papes/Springfield News-Leader)

Here's a rundown of what's on the ballot.

Primary races for Missouri governor, lieutenant governor, secretary of state, treasurer and attorney general are all up for grabs.

The Republicans running are:

Governor

Lieutenant Governor

Secretary of State

Treasurer

Attorney General

The Democrats running are:

Governor

Lieutenant Governor

Secretary of State

Treasurer

Attorney General

The Libertarian candidates running are:

Governor

Lieutenant Governor

Secretary of State

Treasurer

Attorney General

Green Party candidates running are:

Governor

Lieutenant Governor

Secretary of State

Treasurer

There is one Constitution Party candidate, Paul Venable, who is running for secretary of state.

The only federal nomination up for grabs in this election is one encompassing Greene, Polk, Christian, Taney, Stone, Barry, McDonald, Newton, Jasper and Lawrence counties, as well as the southwest corner of Webster County.

The Republicans running are:

Democrat Teresa Montseny is running unopposed in her party's primary, as is Libertarian candidate Kevin Craig.

Several Greene County state seats are up for grabs this election. If you don't know your district, you can find out at https://house.mo.gov/legislatorlookup.aspx.

District 130

There are three Republicans running for this open seat, which covers Republic, Willard and western Greene County. They are:

Democrat Dave Gragg is running unopposed in his party's primary.

District 131

There are two Republican candidates running for this open seat, which covers northern Springfield and north-central Greene County. They are:

Democrat Allison Schoolcraft is unopposed in her party's primary.

District 132

Both incumbent Democrat Crystal Quade and Republican Sarah Semple are running unopposed in their primaries for this seat, which covers parts of north and northwest Springfield.

District 133

Both incumbent Republican Curtis Trent and Democratic candidate Cindy Slimp are running unopposed in their primaries for this seat, which includes west and southwest Springfield and extends down to the city of Battlefield.

District 134

There are two Republican candidates running for this open seat, which covers south-central Springfield, running from Bass Pro Shops to the James River. They are:

Democrat Derrick Nowlin is running unopposed in his party's primary.

District 135

Incumbent Republican Steve Helms, Democratic candidate Betsy Fogle and Green Party candidate Vicke Kepling are each running unopposed in their primaries for this seat, which covers east Springfield.

District 136

Incumbent Republican Craig Fishel and Democratic candidate Jeff Munzinger are each running unopposed in their primaries for this seat, which covers southeast Springfield and Greene County.

District 137

Incumbent Republican John F. Black and Democratic candidate Raymond Lampert are each running unopposed in their primaries for this district, which covers parts of northeast Greene County and western Webster County.

There are several county races up for grabs on the ballot.

Greene County Sheriff Jim Arnott, Treasurer Justin Hill and Public Administrator Sherri Eagon Martin, all Republicans, are running unopposed.

District 1 Commissioner

Two people are running on the Republican ballot for the first commission district, which covers Western Greene County. They are:

Democratic candidate Wes Zongker is running unopposed in his party's primary.

District 2 Commissioner

Incumbent Republican John Russell and Libertarian candidate Cecil A. Ince are each running unopposed in their party's primaries.

Assessor

There are three Republican candidates running for Greene County Assessor. They are:

Constitutional amendment No. 2

This issue will go to all voters across the state, asking whether they want to amend the state's constitution to allow people ages 19 to 64 with incomes at or below 133 percent of the federal poverty line to qualify for health care coverage.

The debate about expansion has been lengthy, but a News-Leader series examining the impact found:

When voters go to the ballot box, they should mark "Yes" if they support expansion, or "No" if they don't. That ballot language is as follows:

"Do you want to amend the Missouri Constitution to:

State government entities are estimated to have one-time costs of approximately $6.4 million and an unknown annual net fiscal impact by 2026 ranging from increased costs of at least $200 million to savings of $1 billion. Local governments expect costs to decrease by an unknown amount."

City of Springfield Question 1

Voters in the city of Springfield will also consider their own ballot initiative, which would require short-term lending establishments, such as payday or car title lenders, to pay an annual registration fee of $5,000.

The proposal, which city voters will see on the Aug. 4 ballot, was approved in May by City Council along with a bill requiring lenders to advertise interest rates, disclose how long it will take people to pay off a loan and provide clear explanations about the agreement the borrower is signing.

The fee is intended to make sure lenders comply with city requirements, and the money will be used to provide alternatives to short-term lenders, help people get out of debt and educate the community about the reality of taking out a payday or car title loan.

Voters who support imposing the fee should vote "Yes," and those who don't should vote "No." The ballot language is as follows:

"Shall the City of Springfield, Missouri, be authorized to impose a fee for a Short-Term Loan Establishment permit in the amount of $5,000 annually, new or renewal, or $2,500 for a permit issued with less than 6 months remaining in the calendar year?"

Polling places citywide are open from 6 a.m. to 7 p.m. Tuesday.

To find your polling place, visit https://greenecountymo.gov/county_clerk/election/precinct_information.php or call 417-868-4060.

Voters should remember to bring a valid state or federal ID with them to the polling place, such as a driver's license, military ID or passport.

If you don't have a government-issued ID, you can bring a voter registration card, a Missouri university, college, vocational or technical school ID or a current utility bill, bank statement, government check, paycheck or other government document containing your name and address.

People voting in the city should also remember to wear a mask, which is required by ordinance. Hand sanitizer and other cleaning supplies will also be available at polling places.

Katie Kull covers local government for the News-Leader. Got a story to tell? Give her a call at 417-408-1025 or email her at kkull@news-leader.com. You can also support local journalism at News-Leader.com/subscribe.


COVID-19 and Maine’s budget crisis require action on health care costs – Bangor Daily News

Gov. Janet Mills has led Maine's response to the health crisis with compassion and clarity. Yet Maine is not immune from the seismic impacts of COVID-19. A $1.4 billion budget shortfall is estimated over the next three years, including a loss of more than $520 million this fiscal year. Maine holds the unsavory distinction of the greatest racial disparity in COVID-19 infection rates, with Black Mainers more than 20 times more likely to contract the virus than their white neighbors. A recent report shows 14,000 Mainers will be newly uninsured after tens of thousands have lost employer-provided insurance since February.

Health insurance companies are proposing raising rates for small businesses, with initial filings showing the highest requested increase for 2021 topping out at over 10 percent, following rate hikes in the double digits last year for many plans. The Maine Health Data Organization reports that overall, the 25 most costly drugs in Maine increased in cost by nearly 11 percent last year and the cost per person increased by 27 percent. In 2018, Maine's per-capita health expenditures were 10 percent higher than the U.S. average.

Our most recent polling shows over two-thirds of Mainers are concerned about not being able to afford health coverage, copays and deductibles. Nearly three-quarters are concerned about prescription drug prices, with two out of three worried they won't be able to afford the medicine they need. These concerns are growing with more Mainers losing coverage.

State policymakers have taken significant steps to improve health care affordability, and this moment calls for continued action to control rising costs and expand accessibility without cutting vital access to programs. We need solutions that not only stop the spread of the virus but make sure Maine can reopen its doors and stay open. This is especially important as vulnerable Mainers return to work, caring for older Mainers and providing other essential services.

Expanded MaineCare is helping thousands access the coverage and care they need, and laws enacted last year improve affordability and access to health care in Maine's individual and small business markets. Bipartisan support of measures to address skyrocketing prescription drug prices, including the creation of a Prescription Drug Affordability Board to help contain drug costs in public health programs, shows Maine policymakers can work together to address the problems we face. And that work must continue with urgency.

It starts with our federal lawmakers. Initial increases in federal match rates for state Medicaid programs have been helpful but are nowhere close to what is needed to help fill the gaps in state revenue. The HEROES Act passed by the House includes increased Medicaid funding to help avoid devastating health care cuts at the state level, but the Senate's HEALS Act does not.

State policymakers have an opportunity to address rising costs with Senate President Troy Jackson's bill, LD 2110, An Act to Lower Health Care Costs. It passed in the Maine House and Senate, but sits awaiting final action as the Legislature contemplates a special session. The bill creates an independent entity to examine and identify ways to lower health care costs. It would also provide staffing to Maine's Prescription Drug Affordability Board, which has only met once since its creation due in part to the lack of dedicated staff.

With an ongoing pandemic and revenue losses, reining in health care costs while also ensuring access to care has never been more important. A similar effort in Massachusetts has already produced very promising results.

There are real opportunities to protect the health care gains we have made in Maine and to help those who are going without. I am more than hopeful and confident policy makers at both the federal and state level will put politics aside and work together to protect the health and well-being of the people they represent.

Ann Woloson is the executive director of Consumers for Affordable Health Care.


Messenger: From COVID to Medicaid expansion, Missouri governor’s race revolves around health care – STLtoday.com

It's that process that creates the dichotomy Silvey lamented. The reason that lawmakers are out of touch with statewide voters isn't just because of the state's longstanding rural-urban divide, it's also because they long ago gerrymandered legislative districts to protect incumbent Republicans. Doing so made the districts look less like their actual communities and created primaries where, in most cases, only the most extreme Republican could win.

There are very few legislative districts left in Missouri that could elect a thoughtful Republican voice like Silvey or Barnes, and that puts the state at a loss.

So in November, as Parson is running from his COVID-19 record and his opposition to providing health care to the working poor, the bipartisan coalition that passed Medicaid expansion, passed the minimum wage, fought right-to-work and supported medical marijuana will be back to defend Clean Missouri.

"I think you will see similar voices of support for the Vote No on Amendment 3 campaign that Missouri saw with Medicaid expansion," says political strategist Sean Nicholson, who is getting the Clean Missouri band back together. "There will be business and labor groups and community groups. There is a disconnect between what the Legislature has been working on and where the people are at."

"The people," says Silvey, "want health care. They want the government to solve problems." Yes, even Republicans. Medicaid expansion passed overwhelmingly in Kansas City, St. Louis and Columbia, but it also passed in the two Republican hotbeds of St. Charles County and Greene County.


Hospital’s food delivery service is a blessing | Health Care – Grand Haven Tribune

Editor's note: This is the fourth in a series celebrating our local health care workers.

I never thought spending a week in the North Ottawa Community Hospital intensive care unit with my almost 98-year-old mom would feel like such a blessing.


Through the past week … my mom every day, but also the honor of seeing the kind, compassionate and competent staff members at NOCH at work.

Donna Bullock, the service representative for North Ottawa Community Hospital's food service department, said she loves the family environment in her workplace.

Every patient needs nourishment, and the dial-up and order-what-you-want plan offers amazing flexibility and menu items ranging from made-to-order omelettes, sandwiches and soups to baked fish, steak, stir fry and pizza; and an abundance of side dishes, drinks …

Food services representative Donna Bullock often answers the 5328 extension when we call to place an order. Bullock, who has worked at NOCH for three years, said she loves the atmosphere of the small-town hospital.

"I like the family environment of the hospital," she said. "I think we're really lucky … community."


Big Tech’s assault on free speech | TheHill – The Hill

For years, there have been whispers about Big Tech's tendency to muffle those who dare to challenge mainstream liberal orthodoxy. In 2018, the Pew Research Center found, 72% of the public thinks it likely that social media platforms actively censor political views that those companies find objectionable. By a four-to-one margin, respondents were more likely to say Big Tech supports the views of liberals over conservatives than vice versa.

As the 2020 elections approach, Big Tech has upped the ante in its limiting of free speech. This is a dangerous development that undermines the fundamental principles upon which the United States was founded. If left unchecked, it could lead to an Orwellian nightmare and, ultimately, to the end of the republic as we know it.

In the past few years, there have been countless cases of social media giants Facebook, Instagram, Twitter and YouTube muzzling conservatives and libertarians for apparent political motives. For example, it is well documented that Twitter uses shadow bans to prevent users from sharing their posts with the hundreds of millions of Twitter users.

Somehow, shadow bans overwhelmingly have been applied to those on the right end of the political spectrum. Coincidence? I think not.

Although those on the left claim this is exaggerated, it happens all the time. And it seems that Twitter and others are clamping down more and more on prominent users who have the audacity to question the so-called consensus on a variety of issues.

Recently, Twitter has come under increased scrutiny because it has targeted conservatives such as Donald Trump Jr. who have posted material that questions the mainstream narrative about protests, coronavirus treatments, the wisdom of lockdowns and several other pressing issues.

The Trump Jr. case is particularly spine-chilling because all the president's son did was post a video from a group of doctors who presented a case for using hydroxychloroquine as a treatment for COVID-19. According to Twitter, "Tweets with the video are in violation of our Covid-19 misinformation policy. We are taking action in line with our policy here."

Shortly after, Facebook and YouTube also scrubbed the video. Although this may seem like no big deal, it certainly is.

In 2020, most Americans receive their news via social media. The sheer power held by these companies concerning the flow of information is mind-boggling. And they can use their power to shift public opinion, as demonstrated in the 2010 election when Facebook launched a get-out-the-vote campaign that it claims resulted in 343,000 more voters going to the polls.

If Facebook and other social media giants can nudge Americans to vote, how long before they also shift public opinion in the direction they desire? It seems as if this Rubicon may have already been crossed.

In some ways, Google has more power over information than the social media companies because Google completely dominates internet searches. Over the past year, Google's market share of worldwide internet searches has hovered around 92 percent.

According to a recent study titled "An analysis of political bias in search engine results," Google's top search results were almost 40% more likely to contain pages with a Left or Far Left slant than they were pages from the right. Moreover, 16% of political keywords contained no right-leaning pages at all within the first page of results.

In other words, according to that study, Google's algorithm is politically biased to favor the left over the right. Maybe that explains why Google and other Big Tech companies contribute so much money to the Democratic Party compared to the Republican Party.

According to the Center for Responsive Politics, 70 percent of donations by Facebook and its employees in the 2020 campaign cycle have gone to Democrats. Eighty-one percent of Google's political contributions have gone to Democrats. The same trend applies to Amazon (74 percent) and Apple (91 percent).

Fortunately, Big Tech's bias is becoming more and more apparent. Most Americans are well aware that, in general, Big Tech favors left-wing causes, politicians and opinions.

Since it seems that Congress is unwilling to do anything about this in the near future, the question is, what can and should we the people do about it?

Chris Talgo (ctalgo@heartland.org) is an editor at The Heartland Institute.


Influential think tank urges Govt to protect free speech in universities – The Christian Institute

The Government must legislate to ensure freedom of speech is protected in university students' unions, a leading think tank has said.

A report by the influential Policy Exchange said Parliament needed to make current legislation clearer and more robust, and impress upon universities and colleges their duty to ensure academic freedom and freedom of speech.

In recent years, students' unions in England have denied pro-life and Christian student groups access to funding and facilities such as stalls at freshers' fairs.

The report called for a new Director for Academic Freedom at the Office for Students to promote tolerance for viewpoint diversity in universities and students' unions.

The role would encourage compliance and investigate possible breaches.

It added that guidance should be updated to ensure students' unions fulfil their freedom of speech duties, and that universities and colleges are willing to support events in the face of intimidation and threats.

Policy Exchange has called for the Government to provide examples of sanctions that universities and colleges can apply to non-complying students' unions.

It stated that universities and colleges would be expected to impose fines against individual members of the University and those groups that fail to uphold freedom of speech, including fines for Student Unions who discriminate on grounds of viewpoint.

Where a Student Union denies a student group access to services, the report says there should be a process to appeal.

Education Secretary Gavin Williamson indicated in February that the Government is ready to defend students' rights to freedom of speech.

Writing in The Times, he said: "If universities don't take action, the government will. If necessary, I'll look at changing the underpinning legal framework, perhaps to clarify the duties of students' unions or strengthen free speech rights.

"I don't take such changes lightly, but I believe we have a responsibility to do whatever necessary to defend this right."

In 2017, Balliol College of Oxford University banned the Christian Union from its Freshers' Fair, because Christianity was labelled as "an excuse for homophobia and certain forms of neo-colonialism."

Organiser Freddy Potts claimed that the presence of CU members would be alienating for students and constituted a microaggression, but a backlash from Balliol students forced the organising committee to back down.



Assessing India's obsession with data localisation – Deccan Herald

Covid-19 has spawned contact-tracing worldwide, triggering collection and processing of personal data. Privacy protections surrounding this are nascent, raising significant concerns about their permanence in our society. The Supreme Court's landmark Puttaswamy judgement recognised privacy as intrinsic to personal liberty under Article 21.

Concurrently, it recognised that a legitimate interest, say, an epidemic, might restrain the right provided the doctrines of necessity and proportionality are satisfied. In this context, a recent order from the Kerala High Court in Balu Gopalakrishnan assumes significance.

The Kerala government contracted US-based Sprinklr Inc for Covid-related medical data analysis. Petitioners assailed this contract for lacking adequate privacy safeguards, arguing that the jurisdictional choice of New York virtually renders Indian citizens defenceless against a breach.

The court's order pervasively focuses on data localisation, the idea that data concerning Indian residents must reside within India to secure the jurisdiction of her courts. This sentiment has been echoed by Union ministers as well. We submit that data localisation is an anachronism, and severely inhibits privacy protections envisaged under the Constitution.

A comprehensive safeguard instead necessitates attaching jurisdiction through the residence of the data subject. In fact, Delhi's obsession with data localisation stalls the resolution of another obsolescence ailing India's privacy regime: the absence of data-protection legislation.

Currently, statutory protections are entirely contained within the Information Technology Act, 2000 (IT Act). Data localisation advocates, and respondents in Gopalakrishnan argue that localisation attaches jurisdiction using Section 75(2) of the IT Act, which applies the Act extra-territorially (outside India) if a breach involves a computer located in India.

Any reassurance from Section 75(2) is a facade. Consider this: Sprinklr decides to use a supercomputer in Ohio and copies data from Indian servers. The supercomputer in Ohio containing data of Indian nationals is breached. In such a case, Section 75(2) will not operate since the computer located in India was not breached, and absurdly, an Indian will be without remedy.

The IT Act was designed to facilitate e-commerce, not for data protection. Thus, virtually the entirety of its penal provisions is predicated on tangible loss (see Sections 43A, 66, 66C, 66D and 66E). Disclosure that someone is diabetic may not cause a loss but is still a privacy violation; yet the IT Act provides no remedy here.

Resolving these absurdities requires a fundamental re-imagination of our privacy jurisprudence. Jurisdiction should attach to any entity collecting, processing, and/or storing personal data based on the residence of the data subject, not its location. This approach allows greater flexibility for processing while also comprehensively protecting privacy.

The spatial approach of data-localisation is incongruent to the very concept of privacy. This was first enunciated by the US Supreme Court (Scotus) in Katz v United States, where wiretapping without entering a person's home was challenged as a violation of Fourth Amendment rights.

The Fourth Amendment is textually spatial; it protects against unreasonable search and seizure of someone's "persons, houses, papers, and effects." Drafted around 1791, its text could not possibly predict the intrusion that remote technologies can accomplish today.

Therefore, like data-localisation, it was written with spatial limitations and a literal interpretation renders it redundant today. Cognizant of this vulnerability, Scotus held that privacy attaches to people, not places, and therefore, wiretapping even absent a literal intrusion was unconstitutional.

The Indian Supreme Court, in Dist Registrar & Collector v Canara Bank, adopted Katz with approval, placing individuals at the locus of privacy. In Puttaswamy, Justice Chandrachud wrote, "Privacy is a concomitant of the right of the individual to exercise control over his or her personality." Justice Nariman distilled an informational aspect of privacy, distinct from an individual's physical body. As a principle seeking to preserve privacy, therefore, data localisation ignores its evolution and attempts to restrict it to an obsolete conception of tangibility and spatiality.

Restrictive view

To argue that Indian courts cannot pursue offenders abroad without data localisation is a restrictive view of jurisdiction. The Supreme Court in GVK Industries acknowledged Parliament's power to legislate extra-territorially for the interests or welfare of inhabitants of India. Article 73 of the Constitution makes the Union executive power contemporaneous with Parliament's legislative authority.

Therefore, where the welfare of Indians is concerned, legislative and executive powers extend outside India too. The Constitution's Fundamental Rights Charter is meant to check state authority. Consequently, it, too, must operate abroad if the state pursues extra-territorial acts.

Concluding otherwise would confer absolute impunity to state action abroad, even when it infringes the rights, interests or welfare of the people of India. The Constitution provides for writs under Articles 32 and 226 for enforcing rights of Indians, indicating that the jurisdiction of the Supreme Court and high courts would extend extra-territorially in such cases.

There is precedent for this understanding of jurisdiction. Section 4 of the IPC provides that an Indian citizen may be charged with an IPC offence committed while she is abroad, even if it is not an offence in that country. Parliament has therefore attempted to regulate the conduct of Indian citizens abroad to accord with India's standards of criminality. In such cases, Indian courts gain congruent jurisdiction already. For data protection, Europe's General Data Protection Regulation statutorily attaches jurisdiction based on the residence of the data subject, rejecting data-localisation. Under the Protective Principle, international law also permits extra-territorial jurisdiction of states for its own preservation or protecting its interests. Clearly, critical personal data of its residents is at the core of a state's interests.

In Maneka Gandhi, the SC noted that courts should "expand the reach and ambit" of Fundamental Rights, rather than "attenuate their meaning and content by a process of judicial construction." By relying on constricted and overly simplistic anachronisms like data-localisation, policy makers are turning away from this guiding principle.

(Maniktala is an LLB student, Campus Law Center, University of Delhi; Khurana is an LLM graduate from the UCLA School of Law, USA)


Alexa is starting to ask questions. How should we respond? – CNET

In the future, software in products like the Amazon Echo Studio will feature give-and-take conversations.

Two years ago, Amazon announced a new feature for Alexa: the ability to ask questions. Hunches, as they're called, have slowly rolled out since the announcement, and now it's fairly common to hear Alexa move outside her old "answer questions, obey commands" routine. The voice assistant usually asks these questions as follow-ups to your commands or questions, and they're a result of Alexa trying to anticipate your requests -- for instance, reminding you to lock the door at night.

Hunches are only the start.


During July's Alexa Live developers conference, Amazon announced another new upgrade: give-and-take conversations with the voice assistant. The tools for such conversation are already being implemented by third-party developers, and it wouldn't be a surprise to hear Alexa, in the next few months, begin to ask follow-up questions after you give the usual commands.

These might seem like incremental improvements, but they could dramatically change how we understand and use voice assistants. After all, we've seen movies in which AI creations banter with their creators, but few of us have actually spent time wondering if we'd actually want to spend much time chatting with Alexa over coffee each morning. And more importantly, we haven't grappled enough with the costs of such advances.

It's almost passé to talk about the immense troves of data companies like Amazon and Google can tap nowadays, but that data is the fuel powering the smart home's proverbial engine -- and Alexa is the fracking apparatus gathering it.

Amazon's release of the Echo Dot with Clock last year gave a small window into the usefulness of such data: Alexa fields questions about the time of day over a billion times per year, so Amazon built a device to answer that question more effectively. It's simple supply and demand, but where Amazon can quantify the demand with unprecedented precision.

2019's Echo Dot with Clock represents Amazon's data-gathering tools in action.

Now, Amazon is testing out more proactive behaviors for Alexa, having the assistant prompt users on occasion -- and the company can track in real time the rate of success in those predictions. People are responding positively (that is, affirming Alexa's suggested actions) "the vast majority of the time," according to Daniel Rausch, vice president of Smart Home at Amazon.

Rausch and I spoke on the phone before July's conference and he was as excited as ever about the innovations in the voice-driven smart home space. He said more developers than ever are designing Alexa skills and devices to work with the voice assistant -- over 750,000 were registered for the conference -- and it's cheaper than ever to incorporate Alexa-compatibility into any given device, at a jaw-dropping $4.

The growth in third-party development means the instant feedback loop, in which Amazon can roll out features, test them and receive immediate customer response data, is only growing in value for Amazon -- especially as they push deeper into uncharted consumer territory.

Amazon's voice assistant is making itself at home in more than the house, thanks to the Alexa app, Echo Auto and other out-of-home devices.

Perhaps, like the hours of time we spend on our phones each day, we'll arrive at a new norm without ever having time to seriously consider the route we're taking, the destination ahead. Or perhaps, the time to consider such things is now.

The EU is currently looking into Google, Amazon and other tech giants for precisely this kind of data-driven market dominance in the smart home space in Europe -- though the stated goal is to maintain healthy competition.

Another type of inquiry -- formal or informal -- is in order: What exactly could the unforeseen outcomes of expanded voice technology be? Is there a way to progress technologically without risking such outcomes?

Daniel Rausch and others at Amazon are typically hesitant to talk about specific goals in the far future, but the investment the tech giant is making into its voice technology tells us more than you might think about the vision Amazon is pursuing. It's a vision that's simultaneously exciting and concerning.

We're not likely to reach the sci-fi levels of Iron Man, Moon or Her too soon, but as we become more accustomed to a give-and-take mode of interacting with Alexa, we're moving toward voice technology taking a much more central spot in our daily lives. As Rausch told me over the phone, Alexa use has quadrupled in the past two years and the increase in Alexa use is non-linear: Growth over the next year will likely outpace growth over the past year.

As Alexa and other voice assistants find homes in new devices -- controlling our TVs, phones and even microwaves -- and as they also become more predictive and proactive in their interactions with us, we could see the voice landscape dramatically change in increasingly short periods of time.

The Amazon Basics microwave is likely only an early example of what will become normal over the next decade: voice-driven appliances.

More concretely: Within a year, we could conceivably see Alexa (and other voice assistants) hear you walk into the kitchen using abilities akin to Alexa Guard (which can distinguish between human and pet footsteps), ask if you'd like it to preheat the oven for your usual lunch and so on -- all unprompted. Many customers might be happy for such convenience, even given the cost to privacy it represents.

It's not just privacy at stake: People are turning to voice assistants for information on COVID-19, on mental health, on exercise and more -- and Alexa dutifully provides skills, sometimes hundreds of skills, to address such needs. As one Atlantic writer mused about the future of voice assistants, "With their perfect cloud-based memories, they will be omniscient; with their occupation of our most intimate spaces, they'll be omnipresent. And with their eerie ability to elicit confessions, they could acquire a remarkable power over our emotional lives."

As Alexa changes, so do we. Many of us who use voice assistants regularly have found tricks to interacting with them. Alexa never understands when I ask for the album KTSE by Teyana Taylor, for instance, so I have to play an individual song from it, then tell the assistant to "play this whole album." My wife, who is convinced Alexa is sexist for never understanding her commands as well as the assistant understands mine ("I have more practice," I always assure her, only mostly certain of myself), is much more willing to insult Alexa -- and, strangely enough, to apologize.

I worry about how our three- and four-year-old will interact with voice assistants and I honestly don't know what type of interaction is "right" anyway.

In short, Alexa, Google Assistant, Siri and any number of other assistants are changing privacy norms, changing culture and changing us.

Cameras connected to Alexa and other voice assistants only add another layer of complexity to the conversation.

Can we preserve our privacy -- and ourselves -- and also experience the convenience afforded by such advances? If we try, it will certainly slow things down -- something companies like Amazon are likely keen to avoid.

Privacy policy, messy as it may be, is important here. Bills like California's CCPA (which has only just started being enforced as of July) help cite businesses for violating user privacy or failing to properly inform users about the data being collected on them. Such bills, with the rapid expansion of voice and smart home technology, need to be living documents, developing alongside Alexa and other voice assistants, challenging them where appropriate.

On an individual level, it's still worth practicing privacy hygiene -- deleting apps from your phone if you don't use them regularly, opting for the strictest privacy options from social media and voice assistants and so on. More fundamentally, now is the best time to start asking ourselves what we want our futures to look like, and how much access voice assistants should have to our lives, our homes and our selves.

If a time traveler from the future had told us in 2007 the sleep problems and behavioral changes touch screens would usher into our lives, would it or should it have changed the trajectory of our phone innovations over the next thirteen years to 2020?

If the answer is yes, then another question is worth asking: As we see Amazon actively build toward a future that centrally positions its voice assistant in the home, should we do more to protect what privacy we have left?


From the Manhattan Project, a legacy of discovery and a national burden – Stars and Stripes

The bomb-bay doors on the B-29 Superfortress Bockscar swung open over Nagasaki, Japan, a little before noon on Aug. 9, 1945, and at 11:58 a.m. one 10,800-pound bomb fell away.

Minutes later, a 5,300-pound sphere of high explosives imploded inside the bomb casing. The blast squeezed a softball-sized, 13.6-pound plutonium core to the size of a tennis ball, a super-critical mass that started a chain reaction.

The resulting nuclear explosion killed approximately 39,000 people and injured another 25,000, according to the online Atomic Archive. It was the second use of a nuclear weapon in war and the first to employ a plutonium implosion device, still a mainstay of nuclear weapons technology.

Scientists and engineers of the Manhattan Project, the top-secret World War II nuclear weapons program, fused raw science and practical engineering to create the implosion bomb at Project Y, the Los Alamos laboratory in New Mexico. The Hanford Engineer Works along the banks of the Columbia River in central Washington produced the plutonium. The bomb was tested at an isolated desert flat near Alamogordo, N.M., known as Trinity Site.

Trinity Site today is a once-a-year tourist attraction. But 75 years later, national laboratories at Los Alamos and Hanford, part of an extensive network that is the Manhattan Project legacy, are still in business.

The two-year crash effort to build the bomb that encompassed a handful of locations nationwide has grown into 17 national laboratories and dozens of affiliated sites overseen by the Department of Energy on a budget this year of more than $34 billion.

They continue to design new weapons and maintain the nation's nuclear arsenal, but most of their work is geared toward basic science that yields amazing discoveries.

"There's a lot of impressive work going on at the lab outside of the nuclear weapons programs, whether it's on energy or on computing or on any number of scientific areas. They still maintain a high caliber of research in the national interest," said Steven Aftergood, a freedom-of-information advocate for the Federation of American Scientists. "I wouldn't want to overlook that."

On top of its work as a weapons designer, Los Alamos National Laboratory, where the critical work of the Manhattan Project took place, today engages in basic research in myriad topics, from black holes to cloud computing and climate change. The lab is also using genomics to diagnose cases of the coronavirus.

When the Cold War ended, lab experts also turned their expertise to helping the former Soviet Union dismantle its nuclear weapons.

Los Alamos laboratory may be the most famous Manhattan Project site, but it wasn't the only one and it wasn't even the first. That distinction belongs to Argonne National Laboratory, on the outskirts of Chicago, which grew out of physicist Enrico Fermi's search at the University of Chicago for the first sustained nuclear reaction.

"They were trying to figure out what the critical mass is, how much uranium 235 fissile core do you actually need to start a chain reaction," said Robert Rosner, former Argonne lab director.

Argonne is one of 10 national laboratories under the Department of Energy's Office of Science. While some, like Argonne, Hanford (today the Pacific Northwest National Laboratory) and Oak Ridge, have roots in the Manhattan Project, they no longer work primarily on weapons development. The Pacific Northwest lab, for example, played a part in the detection of gravity waves in 2015.

Argonne, originally known by its code name, the Metallurgical Lab, became the home of the civilian nuclear power program, Rosner said. It created the world's very first power reactor, the Experimental Breeder Reactor, at Argonne West, now the Idaho National Laboratory.

Three national laboratories are still primarily devoted to the work of nuclear weapons, including their non-nuclear components. Los Alamos, Lawrence Livermore National Laboratory in Livermore, Calif., and Sandia National Laboratory in Albuquerque, N.M., fall under the authority of the National Nuclear Security Administration.

The Manhattan Project employed as many as 130,000 people and cost nearly $2 billion, about $28.6 billion today. Work at Los Alamos alone cost taxpayers about $74 million, or $1.06 billion today, according to the Brookings Institution.

The Energy Department in fiscal year 2019 budgeted $2.9 billion for Los Alamos National Laboratory, of which 66%, or $1.9 billion, was intended for weapons programs.

At its height during World War II, Los Alamos employed about 5,000 people. "Today there are over 12,000 people in the lab, just the lab," Rosner said during a phone interview July 15.

In addition to the raw and applied sciences the labs produce, they preserve a model for integrating scientists, engineers and other experts across a variety of fields that is not widely practiced in the commercial world, Rosner said.

"Integrated teams are the secret behind national laboratories," he said. "Universities traditionally cannot do this, and the reason is that we're a silo. We have a physics department, a chemistry department … there's a math department."

"Academics find rewards in their own disciplines," said Rosner, who is now a professor of astronomy and astrophysics at the University of Chicago. Most physicists working at Los Alamos are astrophysicists, he said.

"Astrophysicists are a good example of that. Astronomers," Rosner said. "They're not thinking about money; they're thinking about the universe, right? The Big Bang."

Few commercial enterprises can afford research and development the way the labs do it, he said. The old Bell Laboratories, before its break-up in 1982, produced significant advances, such as the silicon chip.

"Ask yourself, does AT&T or Verizon or all of the other what used to be called Baby Bells, do they have big, basic research labs?" he said.

The uglier legacy left by the Manhattan Project and the weapons labs is written in starker terms, including cleanup decrees, damage awards and the burden of nuclear weapons themselves.

As the Cold War ended, public attention came to bear on health risks to workers at Los Alamos and other sites; the accumulation of toxic waste, documented or not; poor management; and a culture of secrecy.

The worst example, the Hanford Nuclear Reservation, is what remains of the dirty work of bombmaking: 586 square miles that include nine decommissioned reactors that produced weapons-grade plutonium and a staggering amount of radioactive waste, according to the Northwest Power and Conservation Council.

About 53 million gallons of chemicals used to separate plutonium from uranium remains stored in 177 underground tanks, of which 70 are leaking and sending a radioactive plume toward the nearby Columbia River, according to the council. The site, one of the most dangerous and polluted in the U.S., includes 1,700 individual waste sites and about 500 contaminated buildings.

At Los Alamos, self-appointed watchdog Greg Mello, founder of the Los Alamos Study Group, has documented decades of worker health problems, industrial accidents and toxic waste. He also campaigns against a program underway to expand the lab to make plutonium pits for a new generation of nuclear warheads.

"There's been a pretty high cost across the warhead complex for pursuing the nuclear arms race," Mello said by phone July 28.

Drawing on reports from the Department of Labor and by investigative journalists, he estimates the federal government has paid out billions for 1,599 death claims at Los Alamos alone from its beginnings through June 2016.

"This is a technology that has had horrible effects," Mello said. "Direct health effects, as well as, I would say, effects on world politics and on the shape of American democracy have been even worse."

Although a government program enacted in 2000 has paid thousands of claims by workers across the nuclear weapons complex for work-related illnesses, the link between some of those illnesses and weapons work is disputed by some as tenuous, at best.

However, some problems with the labs are indisputable. An era of mismanagement at Los Alamos gave rise in 2000 to the National Nuclear Security Administration, the new overseer within the Energy Department. The state of New Mexico has issued Los Alamos lab several cleanup decrees and federal audits have found mishandled or missing materials.

A 2018 report by the Energy Department inspector general, for example, found discrepancies in the way the Los Alamos lab handled beryllium, a toxic metallic element used in nuclear reactors.

"Los Alamos sometimes has problems accounting for nuclear materials," Aftergood said. He directs the Federation of American Scientists' Project on Government Secrecy. "Every now and then there's either an espionage case or an episode of misplaced classified records."

The worldwide nuclear stockpile peaked at more than 70,000 warheads around 1987, most of them held by the former Soviet Union, according to the federation. Today that arsenal is less than 20,000 warheads, including those held by China, Pakistan, India, North Korea, the U.K., France and Israel.

Part of the mission at weapons labs is stockpile stewardship, ensuring in an age of nuclear and thermonuclear test bans that aging weapons will work if deployed.

Tests above ground, underwater and in space were outlawed in 1963. The last U.S. nuclear test took place underground on Sept. 23, 1992. The Comprehensive Nuclear Test Ban Treaty has been awaiting Senate ratification since 1997.

The U.S. instead tests its weapons using supercomputer simulations fed by data collected from the real things.

"I understand fully why we have atomic weapons, nuclear weapons. This is not a mystery to me," said Rosner, who is also a member of the Bulletin of the Atomic Scientists, another group that sprang from the original Manhattan Project scientists, and chairs its Science and Security Board. "And if you'd asked me was it a good idea that we had the Manhattan Project, my answer is: Hell, yes."

Discoveries in nuclear physics made the bomb inevitable, he said. "It's one of those things; the genie's out of the bottle and here we are."

Unlike anti-weapons advocates, Rosner said he believes the U.S. will always have atomic weapons if potential adversaries have them, too. However, he's against a resumption of actual atomic testing, a move that would permit China, Russia and other nuclear powers to catch up with the U.S. edge in testing data.

He also believes in adhering to and renewing existing nuclear nonproliferation treaties.

"What has happened over the last five years? We're at the point of almost undoing everything that was done; something that took decades, you know, to put in place is basically now almost completely gone," he said.

Mello, an advocate for nuclear disarmament, agrees the U.S. seeking advantage by abrogating longstanding treaties "is a terrible idea, is stupid."

He said nuclear weapons are a national burden, and not just in terms of the health effects, toxic waste and expense surrounding them.

"We never became the kind of country that we might have become, since we devoted and still devote a majority of our discretionary income to military affairs," Mello said, "and the acme of violence of this is nuclear weapons."

ditzler.joseph@stripes.com | Twitter: @JosephDitzler

Read more from the original source:

From the Manhattan Project, a legacy of discovery and a national burden - Stars and Stripes

Shared Grid Offshore New York Cuts Over USD 500 Million in Costs (Report) – Offshore WIND

A multi-user, planned transmission system for offshore wind in New York could result in electric grid cost savings of over USD 500 million and significantly reduce environmental impacts and project risks, according to a new report prepared by The Brattle Group for Anbaric.

The Offshore Wind Transmission: An Analysis of Options for New York report evaluates the challenges of connecting each wind farm to shore individually, compared to a high-capacity transmission system serving multiple wind farms.

Relying on individual generator lead lines would require extensive onshore grid upgrades costing four times as much as a planned approach, the report states, emphasizing that, by using fewer cable routes and more robust grid connections, a planned transmission system reduces grid congestion and the need for expensive, disruptive onshore transmission upgrades, thus reducing the impacts on the marine environment and coastal communities.

According to the study, a planned transmission approach would reduce cabling by almost 60%, preventing 660 miles of seabed disturbance and reducing the impact on fisheries and marine ecosystems.

Additionally, the report finds that planned transmission would more fully utilize lease areas and more easily reduce offshore wind curtailment. This approach, using more efficient direct current technology, would deliver more power to shore than alternating current technology.

"Developing a shared ocean grid is critical to achieving New York's ambitious offshore wind goals," said Kevin Knobloch, President of Anbaric's New York OceanGrid.

"The next phase in achieving New York's goals depends on building transmission infrastructure in a way that reduces overall costs and feasibility risks; protects fisheries, coastal communities and the environment; and enables the developing offshore wind industry to scale."

Read more from the original source:

Shared Grid Offshore New York Cuts Over USD 500 Million in Costs (Report) - Offshore WIND

Voltaire to debut at Dogger Bank offshore the UK – Offshore Oil and Gas Magazine

The Voltaire will be the first ultra-low emission vessel.

(Courtesy Jan De Nul)

Offshore staff

LUXEMBOURG - SSE Renewables and Equinor have contracted Jan De Nul Group to transport and install the GE Haliade-X wind turbines at Dogger Bank A and Dogger Bank B in the UK North Sea.

This will be the first assignment for Jan De Nul's jackup installation vessel Voltaire. Installation is expected to start in 2023.

Located 130 km (81 mi) off the Yorkshire coast, Dogger Bank consists of three 1.2-GW phases: Dogger Bank A, Dogger Bank B, and Dogger Bank C. The final investment decision on Dogger Bank A and Dogger Bank B is expected in late 2020 and on Dogger Bank C in 2021.

SSE Renewables is leading the development and construction phases of the Dogger Bank wind farm. Equinor will lead on operations for its lifetime of at least 25 years.

When complete, Dogger Bank is expected to be the world's largest offshore wind farm and to power more than 4.5 million homes every year, around 5% of the UK's electricity needs.

08/07/2020

See the original post here:

Voltaire to debut at Dogger Bank offshore the UK - Offshore Oil and Gas Magazine

Heavyweights Join US Floating Wind Project – Offshore WIND

The University of Maine will collaborate with the Mitsubishi Corporation and RWE Renewables to develop UMaine's floating offshore wind technology demonstration project off the coast of Maine.

New England Aqua Ventus, LLC (NEAV), a joint venture between Diamond Offshore Wind, a subsidiary of the Mitsubishi Corporation, and RWE Renewables, will, as the developer, own and manage all aspects of permitting, construction and assembly, deployment, and ongoing operations for the project.

UMaine's Advanced Structures and Composites Center will continue with design and engineering, research and development, and post-construction monitoring.

The project will consist of a single semisubmersible concrete floating platform that will support a commercial 10-12 megawatt wind turbine and will be deployed in a state-designated area two miles south of Monhegan Island and 14 miles from the Maine coast.

The purpose of the demonstration project is to further evaluate the floating technology, monitor environmental factors, and develop best practices for offshore wind to coexist with traditional marine activities.

Construction, following all permitting, is expected to be completed in 2023.

Diamond Offshore Wind and RWE Renewables will invest USD 100 million to build the project and help demonstrate the technology at full scale.

The project is projected to produce more than USD 150 million in total economic output and create hundreds of Maine-based jobs during the construction period.

"We see great potential for floating wind farms worldwide, especially in countries like the U.S., with deeper coastal waters," said Sven Utermöhlen, chief operating officer Wind Offshore Global at RWE Renewables.

"This innovative project combines the University of Maine's knowledge with the state's maritime heritage, allowing RWE Renewables to gain the experience that can help us provide future opportunities to grow local economies and produce clean, renewable power."

NEAV will continue to involve Maine companies in permitting, construction and assembly, deployment, and ongoing operations and maintenance of the project. In addition, NEAV has committed to working with the University of Maine on research, development, and design to take the technology elsewhere in the US and the world.

The developers will also work with the University of Maine System, the Maine Community College System and Maine Maritime Academy to attract K-12 students to science, engineering and business programs, prepare college students and help to create a skilled workforce in Maine with the technical skills necessary to support offshore wind development and operation.

"This project south of Monhegan is a perfect opportunity to demonstrate a new technology that can be built in Maine, create jobs in Maine, and demonstrate how fishing and offshore wind can co-exist," said Chris Wissemann of Diamond Offshore Wind.

"Together with RWE, our engineers conducted an extensive due-diligence review of UMaine's VolturnUS floating wind technology, and believe it is a world leader in floating wind that reduces costs and creates local jobs. We are really focused on creating economic opportunities for Maine as this new carbon-free economy emerges."

The University of Maine has researched floating offshore wind technology since 2008. After winning funding from the U.S. Department of Energy (DOE), the university worked with Maine-based construction firm Cianbro to build and deploy the first grid-connected offshore wind turbine in the US in 2013, a one-eighth scale prototype of its VolturnUS floating hull technology.

The success of the project led to additional funding from the DOE to further advance the VolturnUS technology, which has been issued 43 patents to date. UMaine will continue to own its VolturnUS floating hull intellectual property and license it to NEAV for this project.

Excerpt from:

Heavyweights Join US Floating Wind Project - Offshore WIND

Galloper Makes Offshore Wind Scarecrow Tried-and-True – Offshore WIND

A scarecrow system installed on the substation of the Galloper offshore wind farm has reduced seabird guano on the structure from approximately 50-60% coverage to almost none in the last 12 months.

The Scaretech system was installed on the substation located 27km off the Suffolk coast in the UK in July last year to address the guano problem.

Seabird poo or guano is said to be a huge problem for the global offshore wind industry as it poses a serious health risk due to its carcinogenic qualities, and is expensive and unpleasant to remove.

The Scaretech device is based on a traditional scarecrow concept and adapted for the offshore environment of a wind farm or oil platform. It emits sporadic loud noises and high-intensity strobe lights which deter seabirds from landing on the structure.

"There is an abundance of seabass around our Galloper site, which attracts large numbers of seabirds. These in turn generate significant quantities of guano, which poses an unpleasant health and safety hazard for us," said Kieron Drew, Interim O&M Manager at Galloper.

"This is a new innovation for the wind industry and it certainly worked for us. Once we installed the Scaretech device, we saw dramatic reductions in the amount of guano. In fact, the problem is now almost non-existent."

See the original post here:

Galloper Makes Offshore Wind Scarecrow Tried-and-True - Offshore WIND

Trademark Board Harshes the Mellow of CBD Oil Manufacturer – JD Supra

The relationship between the cannabis industry and intellectual property laws in the United States is unique and complicated, in many ways mirroring the nation's collective views on the cannabis plant. This is unfortunate, in part because the law abhors uncertainty and, in part, because cannabis companies are currently undergoing a renaissance, which has fostered an explosion of novel and creative concepts that the intellectual property laws of this country were designed to protect.

A recent decision shows, however, that despite changes in cannabis laws in many states and the growing cannabis industry throughout the country, obtaining a federal trademark for hemp-derived products remains an uphill battle. On June 16, 2020, in In re Stanley Brothers Social Enterprises, LLC, the federal Trademark Trial and Appeal Board (TTAB) affirmed the refusal to register a trademark in connection with hemp oil extracts sold as an integral component of dietary and nutritional supplements, on the grounds that hemp oil extracts marketed and sold as dietary supplements were per se illegal under the Food, Drug & Cosmetic Act (FDCA).

In order for a trademark to qualify for federal registration, the mark must lawfully be used in commerce. When a trademark application is reviewed by the USPTO, however, the mark's use will be presumed lawful unless the application record indicates a violation of federal law. The USPTO will evaluate whether the goods or services associated with a mark are per se illegal.

Stanley Brothers Social Enterprises, LLC is a Colorado marijuana grower that produces various cannabis derivative products. One of those products is an oil extracted from the cannabis plant that is high in cannabidiol (CBD) content and low in tetrahydrocannabinol (THC). Stanley Brothers sought registration of the CW mark for its CBD oil, which was marketed as a dietary supplement that can be used to promote mind and body wellness. The examining attorney refused registration on the grounds that the mark's use in commerce was illegal because the goods are illegal under the FDCA and the Controlled Substances Act (CSA).

On appeal, the TTAB did not address the legality of the CBD oil products under the CSA. Rather, the board held that the CBD oil products were per se illegal under the FDCA and thus ineligible for trademark registration. In refusing registration, the TTAB focused on a provision of the FDCA that bans any food "to which has been added ... a drug or biological product for which substantial clinical investigations have been instituted and for which the existence of such investigations has been made public."

The TTAB was unpersuaded by Stanley Brothers' argument that their dietary supplements are not food under the FDCA. It ruled that the FDCA definition of foods includes certain products marketed as dietary supplements and affirmed the examining attorney's contention that hemp oil extracts, such as CBD oil, are food to which CBD has been added. Specifically, because Stanley Brothers identified their CW CBD oil products as an integral component of dietary and nutritional supplements, the products are deemed to be food under the FDCA. The TTAB also rejected Stanley Brothers' argument that the 2014 Farm Bill's Industrial Hemp Provision exempted it from the FDCA provision regarding food. The TTAB reasoned that the Industrial Hemp Provision permits authorized entities to grow or cultivate industrial hemp, but does not permit the distribution or sale of CBD in food when CBD is the subject of clinical investigation, even if the CBD is derived from industrial hemp, which falls outside the CSA. Stanley Brothers also argued that their product was on the market prior to the institution of any substantial clinical investigation; however, the TTAB found that this argument was unsupported by the evidence.

The ruling in In re Stanley is not an absolute bar on trademarks for CBD products; in fact, numerous trademark registrations for various CBD products, such as essential oils, have been issued. Nevertheless, companies in the hemp and cannabis industry will need to consider their trademark strategy and product marketing carefully. For now, at least, the USPTO has made it clear that marks for CBD products used in food and dietary supplements are not eligible for registration because such uses are illegal under the FDCA, but companies may still be able to acquire trademark protection for related or ancillary non-food CBD products.

Visit link:

Trademark Board Harshes the Mellow of CBD Oil Manufacturer - JD Supra

Mind Uploading

Welcome

Minduploading.org is a collection of pages and articles designed to explore the concepts underlying mind uploading. The articles are intended to be a readable introduction to the basic technical and philosophical topics covering mind uploading and substrate-independent minds. The focus is on careful definitions of the common terms and what the implications are if mind uploading becomes possible.

Mind uploading is an ongoing area of active research, bringing together ideas from neuroscience, computer science, engineering, and philosophy. This site refers to a number of participants and researchers who are helping to make mind uploading possible.

Realistically, mind uploading likely lies many decades in the future, but the short term offers the possibility of advanced neural prostheses that may benefit us.

Mind uploading is a popular term for a process by which the mind, a collection of memories, personality, and attributes of a specific individual, is transferred from its original biological brain to an artificial computational substrate. Alternative terms for mind uploading have appeared in fiction and non-fiction, such as mind transfer, mind downloading, off-loading, side-loading, and several others. They all refer to the same general concept of transferring the mind to a different substrate.

Once it is possible to move a mind from one substrate to another, it is then called a substrate-independent mind (SIM). The concept of SIM is inspired by the idea of designing software that can run on multiple computers with different hardware without needing to be rewritten. For example, Java's design principle of "write once, run anywhere" makes it a platform-independent system. In this context, substrate is a term referring to a generalized concept of any computational platform that is capable of universal computation.
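
To make the analogy concrete, here is a minimal, purely illustrative Python sketch (all class and method names are hypothetical) in which one "mind state" object is handed to two interchangeable "substrate" backends. The only point is that the state is decoupled from any particular runtime, in the same spirit as platform-independent software; it is not a claim about how minds actually work.

```python
# Minimal illustration of "substrate independence": the same state object
# can be executed by any backend that implements a common interface.
# All class and method names here are hypothetical, for illustration only.

class MindState:
    """A substrate-neutral bundle of state (stand-in for memories, etc.)."""
    def __init__(self, memories):
        self.memories = list(memories)

class SiliconSubstrate:
    def run(self, state: MindState) -> str:
        return f"silicon backend processing {len(state.memories)} memories"

class PhotonicSubstrate:
    def run(self, state: MindState) -> str:
        return f"photonic backend processing {len(state.memories)} memories"

state = MindState(memories=["first day of school", "a favorite song"])
for substrate in (SiliconSubstrate(), PhotonicSubstrate()):
    print(substrate.run(state))   # same state, different substrate
```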

We take the materialist position that the human mind is solely generated by the brain and is a function of neural states. Additionally, we assume that neural states are computational processes and that devices capable of universal computation are sufficient to generate the same kind of computational processes found in a brain.

Read more from the original source:

Mind Uploading

Mind uploading | Transhumanism Wiki | Fandom

In transhumanism and science fiction, mind uploading (also occasionally referred to by other terms such as mind transfer, whole brain emulation, or whole body emulation) refers to the hypothetical transfer of a human mind to a substrate different from a biological brain, such as a detailed computer simulation of an individual human brain.

The human brain contains a little more than 100 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The brain contains cell types other than neurons (such as glial cells), some of which are structurally similar to neurons, but the information processing of the brain is thought to be conducted by the network of neurons.

Current biomedical and neuropsychological thinking is that the human mind is a product of the information processing of this neural network. To use an analogy from computer science, if the neural network of the brain can be thought of as hardware, then the human mind is the software running on it.

Mind uploading, then, is the act of copying or transferring this "software" from the hardware of the human brain to another processing environment, typically an artificially created one.

The concept of mind uploading, then, is strongly mechanist, relying on several assumptions about the nature of human consciousness and the philosophy of artificial intelligence. It assumes that strong AI (machine intelligence) is not only possible but indistinguishable from human intelligence, and it denies the vitalist view of human life and consciousness.

Mind uploading is completely speculative at this point in time; no technology exists which can accomplish this.

The relationship between the human mind and the neural circuitry of the brain is currently poorly understood. Thus, most theoretical approaches to mind uploading are based on the idea of recreating or simulating the underlying neural network. This approach would theoretically eliminate the need to understand how such a system works if the component neurons and their connections can be simulated with enough accuracy.
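
As a rough illustration of what "simulating the component neurons and their connections" means in practice, the following toy Python sketch steps a small random network of threshold neurons forward in time. The network, the parameters and the spiking rule are arbitrary assumptions chosen for brevity; it is not a biologically calibrated model.

```python
# Toy illustration of the emulation idea: step a small random network of
# threshold neurons forward in time, without modeling what it "means".
# The network, parameters and spiking rule are arbitrary, not biological.
import random

random.seed(0)
N, STEPS, THRESHOLD, LEAK = 50, 200, 1.0, 0.9

# Sparse random connectivity: (pre, post) -> synaptic weight.
weights = {(i, j): random.uniform(0.0, 0.3)
           for i in range(N) for j in range(N)
           if i != j and random.random() < 0.1}

potential = [0.0] * N              # one membrane potential per neuron
total_spikes = 0
for step in range(STEPS):
    for i in range(5):             # external drive, standing in for input
        potential[i] += random.uniform(0.0, 0.5)
    fired = {i for i in range(N) if potential[i] >= THRESHOLD}
    total_spikes += len(fired)
    for i in fired:
        potential[i] = 0.0         # reset after the "action potential"
    for (pre, post), w in weights.items():
        if pre in fired:           # propagate spikes along synapses
            potential[post] += w
    potential = [v * LEAK for v in potential]   # leak back toward rest

print(f"{total_spikes} spikes across {N} neurons in {STEPS} steps")
```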

It is unknown how precise the simulation of such a neural network would have to be to produce a functional simulation of the brain. It is possible, however, that simulating the functions of a human brain at the cellular level might be much more difficult than creating a human level artificial intelligence, which relied on recreating the functions of the human mind, rather than trying to simulate the underlying biological systems.[citation needed]

Thinkers with a strongly mechanistic view of human intelligence (such as Marvin Minsky) or a strongly positive view of robot-human social integration (such as Hans Moravec and Ray Kurzweil) have openly speculated about the possibility and desirability of this.

In the case where the mind is transferred into a computer, the subject would become a form of artificial intelligence, sometimes called an infomorph or "nomorph." In a case where it is transferred into an artificial body, to which its consciousness is confined, it would also become a robot. In either case it might claim ordinary human rights, certainly if the consciousness within was feeling (or was doing a good job of simulating) as if it were the donor.

Uploading consciousness into bodies created by robotic means is a goal of some in the artificial intelligence community. In the uploading scenario, the physical human brain does not move from its original body into a new robotic shell; rather, the consciousness is assumed to be recorded and/or transferred to a new robotic brain, which generates responses indistinguishable from the original organic brain.

The idea of uploading human consciousness in this manner raises many philosophical questions which people may find interesting or disturbing, such as matters of individuality and the soul. Vitalists would say that uploading was a priori impossible. Many people also wonder whether, if they were uploaded, it would be their sentience uploaded, or simply a copy.

Even if uploading is theoretically possible, there is currently no technology capable of recording or describing mind states in the way imagined, and no one knows how much computational power or storage would be needed to simulate the activity of the mind inside a computer. On the other hand, advocates of uploading have made various estimates of the amount of computing power that would be needed to simulate a human brain, and based on this a number have estimated that uploading may become possible within decades if trends such as Moore's Law continue.[citation needed]
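
The kind of estimate these advocates make can be reduced to a few lines of arithmetic. In the sketch below, the figures for the brain's computational requirement, today's available compute and the doubling time are all assumptions inserted only to show the shape of the calculation, not endorsed values.

```python
# Back-of-the-envelope Moore's-law extrapolation.
# All three inputs are assumptions for illustration, not established figures.
import math

brain_ops_per_sec = 1e16      # assumed compute needed to emulate a brain
current_ops_per_sec = 1e13    # assumed compute of an affordable machine today
doubling_time_years = 2.0     # classic Moore's-law doubling period

doublings_needed = math.log2(brain_ops_per_sec / current_ops_per_sec)
years_needed = doublings_needed * doubling_time_years
print(f"~{doublings_needed:.1f} doublings, i.e. ~{years_needed:.0f} years")
# With these assumed inputs: ~10 doublings, roughly two decades.
```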

If it is possible for human minds to be modeled and treated as software objects which can be instanced multiple times, in multiple processing environments, many potentially desirable possibilities open up for the individual.

If the mental processes of the human mind can be disassociated from its original biological body, it is no longer tied to the limits and lifespan of that body. In theory, a mind could be voluntarily copied or transferred from body to body indefinitely and therefore become immortal, or at least exercise conscious control of its lifespan.

Alternatively, if cybernetic implants could be used to monitor and record the structure of the human mind in real time then, should the body of the individual be killed, such implants could be used to later instance another working copy of that mind. It is also possible that periodic backups of the mind could be taken and stored external to the body and a copy of the mind instanced from this backup, should the body (and possibly the implants) be lost or damaged beyond recovery. In the latter case, any changes and experiences since the time of the last backup would be lost.
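
In software terms, the periodic-backup idea is an ordinary checkpoint-and-restore scheme. The minimal sketch below (with hypothetical names throughout) shows why anything experienced after the most recent snapshot would be lost on restore.

```python
# Checkpoint/restore sketch of the "periodic backup" idea.
# Everything here is hypothetical and only illustrates why experiences
# after the most recent snapshot would be lost on restore.
import copy

class Mind:
    def __init__(self):
        self.experiences = []

    def live(self, event):
        self.experiences.append(event)

backups = []
mind = Mind()

mind.live("tuesday: learned to juggle")
backups.append(copy.deepcopy(mind))        # periodic external backup

mind.live("wednesday: met an old friend")  # never backed up

restored = copy.deepcopy(backups[-1])      # instance a copy from last backup
print(restored.experiences)                # wednesday's experience is gone
```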

Such possibilities have been explored extensively in fiction: This Number Speaks, Nancy Farmer's The House of the Scorpion, Newton's Gate, John Varley's Eight Worlds series, Greg Egan's Permutation City, Diaspora, Schild's Ladder and Incandescence, the Revelation Space series, Peter Hamilton's Pandora's Star duology, Bart Kosko's Fuzzy Time, Armitage III series, the Takeshi Kovacs universe, Iain M. Banks Culture novels, Cory Doctorow's Down and Out in the Magic Kingdom, and the works of Charles Stross. And in television sci-fi shows: Battlestar Galactica, Stargate SG-1, among others.

Another concept explored in science fiction is the idea of more than one running "copy" of a human mind existing at once. Such copies could either be full copies, or limited subsets of the complete mentality designed for particular limited functions. Such copies would allow an "individual" to experience many things at once, and later integrate the experiences of all copies into a central mentality at some point in the future, effectively allowing a single sentient being to "be many places at once" and "do many things at once".

The implications of such entities have been explored in science fiction. In his book Eon, Greg Bear uses the terms "partials" and "ghosts", while Charles Stross's novels Accelerando and Glasshouse deal with the concepts of "forked instances" of conscious beings as well as "backups".

In Charles Sheffield's Tomorrow and Tomorrow, the protagonist's consciousness is duplicated thousands of times electronically and sent out on probe ships and uploaded into bodies adapted to native environments of different planets. The copies are eventually reintegrated back into the "master" copy of the consciousness in order to consolidate their findings.

Such partial and complete copies of a sentient being again raise issues of identity and personhood: is a partial copy of a sentient being itself sentient? What rights might such a being have? Since copies of a personality are having different experiences, are they not slowly diverging and becoming different entities? At what point do they become different entities?

If the body and the mind of the individual can be disassociated, then the individual is theoretically free to choose their own incarnation. They could reside within a completely human body, within a modified physical form, or within simulated realities. Individuals might change their incarnations many times during their existence, depending on their needs and desires.

Choices of the individuals in this matter could be restricted by the society they exist within, however. In the novel Eon by Greg Bear, individuals could incarnate physically (within "natural" biological humans, or within modified bodies) a limited number of times before being legally forced to reside with the "city memory" as infomorphic "ghosts".

Once an individual is moved to a virtual simulation, the only input needed would be energy, which would be provided by the large computing device hosting those minds. All food, drink, movement, travel or any other imaginable thing would just need energy to provide those computations.

Almost all scientists, thinkers and intelligent people would be moved to this virtual environment once they die. In this virtual environment, their brain capacity would be expanded by the speed and storage of quantum computers. In a virtual environment, an idea and the final product are not different. In this way, more and more innovations would be sent to the real world, speeding up our technological development.

Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands of such a venture are likely to be immense.

Henry Markram, lead researcher of the "Blue Brain Project", has stated that "it is not [their] goal to build an intelligent neural network", based solely on the computational demands such a project would have[1].

Advocates of mind uploading point to Moore's law to support the notion that the necessary computing power may become available within a few decades, though it would probably require advances beyond the integrated circuit technology which has dominated since the 1970s. Several new technologies have been proposed, and prototypes of some have been demonstrated, such as the optical neural network based on the silicon-photonic chip (harnessing special physical properties of Indium Phosphide) which Intel showed the world for the first time on September 18, 2006.[3] Other proposals include three-dimensional integrated circuits based on carbon nanotubes (researchers have already demonstrated individual logic gates built from carbon nanotubes[4]) and also perhaps the quantum computer, currently being worked on internationally as well as most famously by computer scientists and physicists at the IBM Almaden Research Center, which promises to be useful in simulating the behavior of quantum systems; such ability would enable protein structure prediction which could be critical to correct emulation of intracellular neural processes.

Present methods require the use of massive computational power (as the Blue Brain Project does with IBM's Blue Gene supercomputer) because an essentially classical computing architecture must work serially through the quantum mechanical processes involved in ab initio protein structure prediction. Should the quantum computer become a reality, its capacity for exactly such rapid calculations of quantum mechanical physics may well help the effort by reducing the computational power required per unit of physical size and energy, which Markram warns would otherwise be needed (and is why he thinks an entire brain's simulation, let alone emulation at both cellular and molecular levels, would be difficult as well as unattractive to attempt). Reiteration may also be useful for distributed simulation of a common, repeated function (e.g., proteins).

Ultimately, nano-computing is projected by some[citation needed] to hold, in surplus, the requisite capacity for the computations per second estimated to be necessary. If Kurzweil's Law of Accelerating Returns (a variation on Moore's Law) shows itself to be true, the rate of technological development should accelerate exponentially towards the technological singularity, heralded by the advent of viable though relatively primitive mind uploading and/or "strong" (human-level) AI technologies; his prediction is that the Singularity may occur around the year 2045.[5]

The structure of a neural network is also different from classical computing designs. Memory in a classical computer is generally stored in a two state design, or bit, although one of the two components is modified in dynamic RAM and some forms of flash memory can use more than two states under some circumstances. Gates inside central processing units will often also use this two state or digital type of design as well. In some ways a neural network or brain could be thought of like a memory unit in a computer, but with an extremely vast number of states, corresponding with the total number of neurons. Beyond that, whether the action potential of a neuron will form, based upon the summation of the inputs of different dendrites, might be something that is more analog in nature than that which happens in a computer. One great advantage that a modern computer has over a biological brain, however, is that the speed of each electronic operation in a computer is many orders of magnitude faster than the time scales involved for the firing and transmission of individual nerve impulses. A brain, however, uses far more parallel processing than exists in most classical computing designs, and so each of the slower neurons can make up for it by operating at the same time.
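
The trade-off described here, slow but massively parallel neurons versus fast but largely serial logic, can be made concrete with rough numbers. The neuron count and firing rate in the sketch below are common ballpark figures used purely as assumptions.

```python
# Rough comparison of aggregate event rates: many slow neurons in parallel
# versus one fast serial processor.  Both figures are ballpark assumptions.
neurons = 8.6e10                 # often-quoted neuron count for a human brain
spikes_per_neuron_per_sec = 10   # assumed average firing rate (order of magnitude)
brain_events_per_sec = neurons * spikes_per_neuron_per_sec

cpu_ops_per_sec = 3e9            # one core at ~3 GHz, one operation per cycle

print(f"brain:  ~{brain_events_per_sec:.1e} spike events/s (massively parallel)")
print(f"1 core: ~{cpu_ops_per_sec:.1e} operations/s (serial)")
print(f"ratio:  ~{brain_events_per_sec / cpu_ops_per_sec:.0f}x more parallel events")
# Each neuron operates many orders of magnitude more slowly than a logic
# gate, but tens of billions of them work at once, which is the point above.
```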

There are many ethical issues concerning mind uploading. Viable mind uploading technology might challenge the ideas of human immortality, property rights, capitalism, human intelligence, an afterlife, and the Abrahamic view of man as created in God's image. These challenges often cannot be distinguished from those raised by all technologies that extend human technological control over human bodies, e.g. organ transplant. Perhaps the best way to explore such issues is to discover principles applicable to current bioethics problems, and question what would be permissible if they were applied consistently to a future technology. This points back to the role of science fiction in exploring such problems, as powerfully demonstrated in the 20th century by such works as Brave New World and Nineteen Eighty-Four, each of which frame current ethical problems in a future environment where those have come to dominate the society.

Another issue with mind uploading is whether an uploaded mind is really the "same" sentience, or simply an exact copy with the same memories and personality. Although this difference would be undetectable to an external observer (and the upload itself would probably be unable to tell), it could mean that uploading a mind would actually kill it and replace it with a clone. Some people would be unwilling to upload themselves for this reason. If their sentience is deactivated even for a nanosecond, they assert, it is permanently wiped out. Some more gradual methods may avoid this problem by keeping the uploaded sentience functioning throughout the procedure.

True mind uploading remains speculative. The technology to perform such a feat is not currently available; however, a number of possible mechanisms and research approaches have been proposed for developing mind uploading technology.

Since the function of the human mind, and how it might arise from the working of the brain's neural network, are poorly understood issues, many theoretical approaches to mind uploading rely on the idea of emulation. Rather than having to understand the functioning of the human mind, the structure of the underlying neural network is captured and simulated with a computer system. The human mind then, theoretically, is generated by the simulated neural network in the same fashion as it is generated by the biological neural network.

These approaches require only that we understand the nature of neurons and how their connections function, that we can simulate them well enough, that we have the computational power to run such large simulations, and that the state of the brain's neural network can be captured with enough fidelity to create an accurate simulation.

A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, thus capturing the structure of the neurons and their interconnections[6]. The exposed surface of frozen nerve tissue would be scanned (possibly with some variant of an electron microscope) and recorded, and then the surface layer of tissue removed (possibly with a conventional cryo-ultramicrotome if scanning along an axis, or possibly through laser ablation if scans are done radially "from the outside inwards"). While this would be a very slow and labor intensive process, research is currently underway to automate the collection and microscopy of serial sections[7]. The scans would then be analyzed, and a model of the neural net recreated in the system that the mind was being uploaded into.
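
Conceptually, the serial-sectioning workflow is a scan-record-ablate loop followed by reconstruction of the connection graph. The sketch below restates that loop as runnable Python, but every function in it is a placeholder standing in for hardware and image analysis that does not yet exist at the required fidelity.

```python
# Conceptual sketch of the serial-sectioning loop described above.
# Every function is a placeholder for hardware or image analysis that does
# not exist at the required fidelity; none of the names below are real tools.

def scan_exposed_surface(layer_index):
    # Stand-in for electron-microscope imaging of the current surface.
    return f"micrograph_of_layer_{layer_index}"

def remove_layer(layer_index):
    # Stand-in for ultramicrotome sectioning or laser ablation.
    return layer_index + 1

def extract_connections(micrograph):
    # Stand-in for segmentation/tracing of neurons and their synapses.
    return [(f"{micrograph}:neuron_a", f"{micrograph}:neuron_b", 0.5)]

TOTAL_LAYERS = 20            # a tiny tissue block, purely for illustration
layer, connectome = 0, []

while layer < TOTAL_LAYERS:
    image = scan_exposed_surface(layer)             # scan the exposed surface
    connectome.extend(extract_connections(image))   # record what was seen
    layer = remove_layer(layer)                     # expose the next layer

print(f"reconstructed {len(connectome)} placeholder connections "
      f"from {TOTAL_LAYERS} sections")
```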

There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique[7]. However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods which could then be read via confocal laser scanning microscopy[citation needed].

A more advanced hypothetical technique that would require nanotechnology might involve infiltrating the intact brain with a network of nanoscale machines to "read" the structure and activity of the brain in situ, much like the electrode meshes used in current brain-computer interface research, but on a much finer and more sophisticated scale. The data collected from these probes could then be used to build up a simulation of the neural network they were probing, and even check the behavior of the model against the behavior of the biological system in real time.

In his 1988 book Mind Children, Hans Moravec describes a variation of this process. In it, nanomachines are placed in the synapses of the outer layer of cells in the brain of a conscious living subject. The system then models the outer layer of cells and recreates the neural net processes in whatever simulation space is being used to house the uploaded consciousness of the subject. The nanomachines can then block the natural signals sent by the biological neurons, but send and receive signals to and from the simulated versions of the neurons. Which system is doing the processing, biological or simulated, can be toggled back and forth, both automatically by the scanning system and manually by the subject, until it has been established that the simulation's behavior matches that of the biological neurons and that the subjective mental experience of the subject is unchanged. Once this is the case, the outer layer of neurons can be removed and their function turned solely over to the simulated neurons. This process is then repeated, layer by layer, until the entire biological brain of the subject has been scanned, modeled, checked, and disassembled. When the process is completed, the nanomachines can be removed from the spinal column of the subject, and the mind of the subject exists solely within the simulated neural network.
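
Moravec's procedure amounts to an iterative replace-and-verify loop. The sketch below restates that control flow in runnable Python; the "layers", the simulator and the verification step are trivial placeholders for illustration, not claims about how such a system would actually work.

```python
# Runnable restatement of the layer-by-layer procedure described above.
# The "layers", the simulator and the verification step are all trivial
# placeholders; the point is only the toggle-verify-replace control flow.

biological_layers = [f"cortical_layer_{i}" for i in range(6)]
simulated_layers = []

def simulate(layer_name):
    # Placeholder: build a simulated model of one layer of cells.
    return {"models": layer_name, "source": "simulation"}

def behavior_matches(layer_name, model):
    # Placeholder verification: in Moravec's description this is where the
    # scanning system and the subject compare biological vs simulated output.
    return model["models"] == layer_name

while biological_layers:
    outermost = biological_layers[0]          # current outer layer of cells
    candidate = simulate(outermost)
    if not behavior_matches(outermost, candidate):
        raise RuntimeError(f"simulation of {outermost} failed verification")
    simulated_layers.append(candidate)        # hand processing to the model
    biological_layers.pop(0)                  # retire the biological layer

print(f"{len(simulated_layers)} layers now running in simulation only")
```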

Alternatively, such a process might allow for the replacement of living neurons with artificial neurons one by one while the subject is still conscious, providing a smooth transition from an organic to synthetic brain - potentially significant for those who worry about the loss of personal continuity that other uploading processes may entail. This method has been likened to upgrading the whole internet by replacing, one by one, each computer connected to it with similar computers using newer hardware.

While many people are more comfortable with the idea of the gradual replacement of their natural selves than they are with some of the more radical and discontinuous mental transfer, it still raises questions of identity. Is the individual preserved in this process, and if not, at what point does the individual cease to exist? If the original entity ceases to exist, what is the nature and identity of the individual created within the simulated neural network, or can any individual be said to exist there at all? This gradual replacement leads to a much more complicated and sophisticated version of the Ship of Theseus paradox.

It may also be possible to use advanced neuroimaging technology (such as Magnetoencephalography) to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. However, current imaging technology lacks the resolution needed to gather the information needed for such a scan.

Such a process would leave the original entity intact, but the existence, nature, and identity of the resulting being in the simulated network are still open philosophical questions.

Another recently conceived possibility[citation needed] is the use of genetically engineered viruses to attach to synaptic junctions, and then release energy-emitting molecular compounds, which could be detected externally, and used to generate a functional model of the synapses in question, and, given enough time, the whole brain and nervous system.

An alternate set of possible theoretical approaches to mind uploading would require that we first understand the functions of the human mind sufficiently well to create abstract models of parts, or the totality, of human mental processes. It would require that strong AI be not only a possibility, but that the techniques used to create a strong AI system could also be used to recreate a human type mentality.

Such approaches might be more desirable if the abstract models required less computational power to execute than the neural network simulation of the emulation techniques described above.

Another theoretically possible method of mind uploading from organic to inorganic medium, related to the idea described above of replacing neurons one at a time while consciousness remained intact, would be a much less precise but much more feasible (in terms of technology currently known to be physically possible) process of "cyborging". Once a given person's brain is mapped, it is replaced piece-by-piece with computer devices which perform the exact same function as the regions preceding them, after which the patient is allowed to regain consciousness and validate that there has not been some radical upheaval within his own subjective experience of reality. At this point, the patient's brain is immediately "re-mapped" and another piece is replaced, and so on in this fashion until the patient exists on a purely hardware medium and can be safely extricated from the remaining organic body.

However, critics contend[citation needed] that, given the significant level of synergy involved throughout the neural plexus, alteration of any given cell that is functionally correspondent with (a) neighboring cell(s) may well result in an alteration of its electrical and chemical properties that would not have existed without interference, and so the true individual's signature is lost. Revokability of that disturbance may be possible with damage anticipation and correction (seeing the original by the particular damage rendered unto it, in reverse chronological fashion), although this would be easier in a stable system, meaning a brain subjected to cryosleep (which would imbue its own damage and alterations).[citation needed]

It has also been suggested (for example, in Greg Egan's "jewelhead" stories[8]) that a detailed examination of the brain itself may not be required, that the brain could be treated as a black box instead and effectively duplicated "for all practical purposes" by merely duplicating how it responds to specific external stimuli. This leads into even deeper philosophical questions of what the "self" is.

On June 6, 2005 IBM and the Swiss Federal Institute of Technology in Lausanne announced the launch of a project to build a complete simulation of the human brain, entitled the "Blue Brain Project".[9] The project will use a supercomputer based on IBM's Blue Gene design to map the entire electrical circuitry of the brain. The project seeks to research aspects of human cognition, and various psychiatric disorders caused by malfunctioning neurons, such as autism. Initial efforts are to focus on experimentally accurate, programmed characterization of a single neocortical column in the brain of a rat, as it is very similar to that of a human but at a smaller scale, then to expand to an entire neocortex (the alleged seat of higher intelligence) and eventually the human brain as a whole.

It is interesting to note that the Blue Brain project seems to use a combination of emulation and simulation techniques. The first stage of their program was to simulate a neocortical column at the molecular level. Now the program seems to be trying to create a simplified functional simulation of the neocortical column in order to simulate many of them, and to model their interactions.

With most projected mind uploading technology it is implicit that "copying" a consciousness could be as feasible as "moving" it, since these technologies generally involve simulating the human brain in a computer of some sort, and digital files such as computer programs can be copied precisely. It is also possible that the simulation could be created without the need to destroy the original brain, so that the computer-based consciousness would be a copy of the still-living biological person, although some proposed methods such as serial sectioning of the brain would necessarily be destructive. In both cases it is usually assumed that once the two versions are exposed to different sensory inputs, their experiences would begin to diverge, but all their memories up until the moment of the copying would remain the same.

By many definitions, both copies could be considered the "same person" as the single original consciousness before it was copied. At the same time, they can be considered distinct individuals once they begin to diverge, so the issue of which copy "inherits" what could be complicated. This problem is similar to that found when considering the possibility of teleportation, where in some proposed methods it is possible to copy (rather than only move) a mind or person. This is the classic philosophical issue of personal identity. The problem is made even more serious by the possibility of creating a potentially infinite number of initially identical copies of the original person, which would of course all exist simultaneously as distinct beings.

Philosopher John Locke published "An Essay Concerning Human Understanding" in 1689, in which he proposed the following criterion for personal identity: if you remember thinking something in the past, then you are the same person as he or she who did the thinking. Later philosophers raised various logical snarls, most of them caused by applying Boolean logic, the prevalent logic system at the time. It has been proposed that modern fuzzy logic can solve those problems,[10] showing that Locke's basic idea is sound if one treats personal identity as a continuous rather than discrete value.

In that case, when a mind is copied -- whether during mind uploading, or afterwards, or by some other means -- the two copies are initially two instances of the very same person, but over time, they will gradually become different people to an increasing degree.
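
One way to picture identity as a continuous rather than discrete value is to score two minds by the overlap of their remembered events. The sketch below uses Jaccard similarity purely as an arbitrary illustrative measure; it is not drawn from Locke or from the cited fuzzy-logic proposal.

```python
# Toy illustration of "personal identity as a continuous value": score two
# minds by the overlap of their remembered events (Jaccard similarity).
# The measure and the data are arbitrary choices for illustration only.

def identity_score(memories_a, memories_b):
    a, b = set(memories_a), set(memories_b)
    return len(a & b) / len(a | b) if a | b else 1.0

original = {"childhood home", "first job", "upload day"}
copy_1 = set(original)                      # identical at the moment of copying
copy_2 = set(original) | {"life on mars"}   # same copy after years of divergence

print(identity_score(original, copy_1))     # 1.0  -> fully "the same person"
print(identity_score(original, copy_2))     # 0.75 -> partially the same person
```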

The issue of copying vs moving is sometimes cited as a reason to think that destructive methods of mind uploading such as serial sectioning of the brain would actually destroy the consciousness of the original and the upload would itself be a mere "copy" of that consciousness. Whether one believes that the original consciousness of the brain would transfer to the upload, that the original consciousness would be destroyed, or that this is simply a matter of definition and the question has no single "objectively true" answer, is ultimately a philosophical question that depends on one's views of philosophy of mind.

Because of these philosophical questions about the survival of consciousness, there are some who would feel more comfortable about a method of uploading where the transfer is gradual, replacing the original brain with a new substrate over an extended period of time, during which the subject appears to be fully conscious (this can be seen as analogous to the natural biological replacement of molecules in our brains with new ones taken in from eating and breathing, which may lead to almost all the matter in our brains being replaced in as little as a few months[11]). As mentioned above, this would likely take place as a result of gradual cyborging, either nanoscopically or macroscopically, wherein the brain (the original copy) would slowly be replaced bit by bit with artificial parts that function in a near-identical manner, and assuming this was possible at all, the person would not necessarily notice any difference as more and more of their brain became artificial. A gradual transfer also brings up questions of identity similar to the classical Ship of Theseus paradox, although the above-mentioned natural replacement of molecules in the brain through eating and breathing brings up these questions as well.

A computer capable of simulating a person may require microelectromechanical systems (MEMS), or else perhaps optical or nano computing for comparable speed and reduced size and sophisticated telecommunication between the brain and body (whether it exists in virtual reality, artificially as an android, or cybernetically as in sync with a biological body through a transceiver), but would not seem to require molecular nanotechnology.

If minds and environments can be simulated, the Simulation Hypothesis posits that the reality we see may in fact be a computer simulation, and that this is actually the most likely possibility.[12]

Uploading is a common theme in science fiction. Some of the earlier instances of this theme were in Roger Zelazny's 1968 novel Lord of Light and in Frederik Pohl's 1955 short story "Tunnel Under the World." A near miss was Neil R. Jones' 1931 short story "The Jameson Satellite", wherein a person's organic brain was installed in a machine, and Olaf Stapledon's "Last and First Men" (1930) had organic human-like brains grown into an immobile machine.

Another of the "firsts" is the novel Detta är verkligheten (This Is Reality), 1968, by the renowned philosopher and logician Bertil Mårtensson, in which he describes people living in an uploaded state as a means to control overpopulation. The uploaded people believe that they are "alive", but in reality they are playing elaborate and advanced fantasy games. In a twist at the end, the author changes everything into one of the best "multiverse" ideas of science fiction. Together with the 1969 book Ubik by Philip K. Dick, it takes the subject to its furthest point of all the early novels in the field.

Frederik Pohl's Gateway series (also known as the Heechee Saga) deals with a human being, Robinette Broadhead, who "dies" and, due to the efforts of his wife, a computer scientist, as well as the computer program Sigfrid von Shrink, is uploaded into the "64 Gigabit space" (now archaic, but Fred Pohl wrote Gateway in 1976). The Heechee Saga deals with the physical, social, sexual, recreational, and scientific nature of cyberspace before William Gibson's award-winning Neuromancer, and the interactions between cyberspace and "meatspace" commonly depicted in cyberpunk fiction. In Neuromancer, a hacking tool used by the main character is an artificial infomorph of a notorious cyber-criminal, Dixie Flatline. The infomorph only assists in exchange for the promise that he be deleted after the mission is complete.

In the 1982 novel Software, part of the Ware Tetralogy by Rudy Rucker, one of the main characters, Cobb Anderson, has his mind uploaded and his body replaced with an extremely human-like android body. The robots who persuade Anderson into doing this sell the process to him as a way to become immortal.

In the 1997 novel "Shade's Children" by Garth Nix, one of the main characters, Shade (a.k.a. Robert Ingman), is an uploaded consciousness that guides the other characters through the post-apocalyptic world in which they live.

The fiction of Greg Egan has explored many of the philosophical, ethical, legal, and identity aspects of mind uploading, as well as the financial and computing aspects (i.e., hardware, software, processing power) of maintaining "copies". In Egan's Permutation City and Diaspora, "copies" are made by computer simulation of scanned brain physiology. Also, in Egan's "Jewelhead" stories, the mind is transferred from the organic brain to a small, immortal backup computer at the base of the skull, with the organic brain then being surgically removed.

The Takeshi Kovacs novels by Richard Morgan are set in a universe where mind transfers are a part of standard life. With the use of cortical stacks, which record a person's memories and personality into a device implanted in the spinal vertebrae, it is possible to copy the individual's mind to a storage system at the time of death. The stack can be uploaded to a virtual reality environment for interrogation, entertainment, or to pass the time for long distance travel. The stack can also be implanted into a new body or "sleeve" which may or may not have biomechanical, genetic, or chemical "upgrades" since the sleeve could be grown or manufactured. Interstellar travel is most often accomplished by digitized human freight ("dhf") over faster-than-light needlecast transmission.

In the "Requiem for Homo Sapiens" series of novels by David Zindell (Neverness, The Broken God, The Wild, and War in Heaven), the verb "cark" is used for uploading one's mind (and also for changing one's DNA). Carking is done for soul-preservation purposes by the members of the Architects church, and also for more sinister (or simply unknowable) purposes by the various "gods" that populate the galaxy such gods being human minds that have now grown into planet- or nebula-sized synthetic brains. The climax of the series centers around the struggle to prevent one character from creating a Universal Computer (under his control) that will incorporate all human minds (and indeed, the entire structure of the universe).

In the popular computer game Total Annihilation, the 4,000-year war that eventually culminated with the destruction of the Milky Way galaxy was started over the issue of mind transfer, with one group (the Arm) resisting another group (the Core) who were attempting to enforce a 100% conversion rate of humanity into machines, because machines are durable and modular, thereby making it a "public health measure."

In the popular science fiction show Stargate SG-1 the alien race who call themselves the Asgard rely solely on cloning and mind transferring to continue their existence. This was not a choice they made, but a result of the decay of the Asgard genome due to excessive cloning, which also caused the Asgard to lose their ability to reproduce. In the episode "Tin Man", SG-1 encounter Harlan, the last of a race that transferred their minds to robots in order to survive. SG-1 then discover that their minds have also been transferred to robot bodies. Eventually they learn that their minds were copied rather than uploaded and that the "original" SG-1 are still alive.

The Thirteenth Floor is a film made in 1999 directed by Josef Rusnak. In the film, a scientific team discovers a technology to create a fully functioning virtual world which they could experience by taking control of the bodies of simulated characters in the world, all of whom were self-aware. One plot twist was that if the virtual body a person had taken control of was killed in the simulation while they were controlling it, then the mind of the simulated character the body originally belonged to would take over the body of that person in the "real world".

The Matrix is a film released the same year as The Thirteenth Floor that has the same kind of solipsistic philosophy. In The Matrix, the protagonist Neo finds out that the world he has been living in is nothing but a simulated dreamworld. However, this should be considered virtual reality rather than mind uploading, since Neo's physical brain is still required to host his mind. The mind (the information content of the brain) is not copied into an emulated brain in a computer. Neo's physical brain is connected to the Matrix via a brain-machine interface. Only the rest of the physical body is simulated. Neo is disconnected from this dreamworld by human rebels fighting against AI-driven machines in what seems to be a never-ending war. During the course of the movie, Neo and his friends are connected back into the Matrix dreamworld in order to fight the machine race.

In the series Battlestar Galactica the antagonists of the story are the Cylons, sentient computers created by man which developed to become nearly identical to human beings. When they die they rely on mind transferring to keep on living so that "death becomes a learning experience".

The 1995 movie Strange Days explores the idea of a technology capable of recording a conscious event. However, in this case, the mind itself is not uploaded into the device. The recorded event, whose time frame is limited to that of the recording session, is frozen in time on a data disc much like today's audio and video. Wearing the "helmet" in playback mode, another person can experience the external stimuli interpretation of the brain, the memories, the feelings, the thoughts and the actions that the original person recorded from his/her life. During playback, the observer temporarily quits his own memories and state of consciousness (the real self). In other words, one can "live" a moment in the life of another person, and one can "live" the same moment of his/her life more than once. In the movie, a direct link to a remote helmet can also be established, allowing another person to experience a live event.

Followers of the Raëlian religion advocate mind uploading, in combination with human cloning, as a means to achieve eternal life. Living inside a computer is also seen by followers as an imminent possibility.[13]

However, mind uploading is also advocated by a number of secular researchers in neuroscience and artificial intelligence, such as Marvin Minsky. In 1993, Joe Strout created a small web site called the Mind Uploading Home Page, and began advocating the idea in Cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites including MindUploading.org, run by Randal A. Koene, Ph.D., who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives.

Many Transhumanists look forward to the development and deployment of mind uploading technology, with many predicting that it will become possible within the 21st century owing to technological trends such as Moore's Law. Many view it as the end phase of the Transhumanist project, which might be said to begin with the genetic engineering of biological humans, continue with the cybernetic enhancement of genetically engineered humans, and culminate in the replacement of all remaining biological aspects.

The book Beyond Humanity: CyberEvolution and Future Minds, by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but it also deals with human mind transfer.

Raymond Kurzweil, a prominent advocate of transhumanism and of the likelihood of a technological singularity, has suggested that the easiest path to human-level artificial intelligence may lie in "reverse-engineering the human brain". He usually uses this phrase to refer to the creation of a new intelligence based on the general "principles of operation" of the brain, but he also sometimes uses it to refer to uploading individual human minds based on highly detailed scans and simulations. The idea is discussed, for example, on pp. 198-203 of his book The Singularity Is Near.

Hans Moravec describes and advocates mind uploading in both his 1988 book Mind Children: The Future of Robot and Human Intelligence and his 2000 book Robot: Mere Machine to Transcendent Mind. Moravec is referred to by Marvin Minsky in Minsky's essay Will Robots Inherit the Earth?[14]


See the original post here:

Mind uploading | Transhumanism Wiki | Fandom

Mind uploading – RationalWiki

Mind uploading is a science fiction trope and a popular desired actualization among transhumanists. It's also one of the hypothesised solutions for bringing people back from cryonics. It posits that your soul (sorry, your "mind pattern") can be implemented in a computer.

The first, and main, problem is that the "mind" isn't a physical thing. "Minds" are emergent properties of living brains. So what you would need to do is preserve all the electrical, chemical and physical information contained in a living, connected-up brain at one particular instant, then recreate that exact instantaneous set of electrical and chemical data in a new physical substrate and set it up so that it immediately creates the same set of emergent properties. This is not going to happen soon, and perhaps not ever.

Nevertheless, proponents will typically say that you just need to preserve a dead person's brain, slice it very thinly, scan each slice with microscopes, and reconstruct and run the connections on a computer. With continued exponential improvements in computing, this will soon be possible!

Except it isn't that simple. The brain is not a 'computer' as such, and neurons are much more complicated than the simplified 'neurons' of machine learning. It isn't feasible to preserve a dying brain before cell death destroys much of the information you are trying to get. Even if it were, preservation techniques only allow one to see the structure of the connections between neurons; further electrical and chemical detail is lost.

The brain, like any organ, works via biochemistry. It doesn't have a standardized computer architecture from which you can download data. Vital information about which molecules are present, how they are distributed and how they interact would need to be recorded, but this is heavily damaged by any preservation solution. There does not appear to be a way, even in theory, to preserve the biochemistry in a readable state. Not only that, but the brain is a wet, organic analogue processor; it will certainly not be possible to copy it to dry, inorganic digital silicon without massive changes to the enormous amounts of data you would need to obtain.

As biologist PZ Myers - who freezes zebrafish brains a whole lot, and would be delighted to have anything recoverable at the end - explained:

We don't have a method to lock down the state of a 1.5kg brain. What you're going to be recording is the dying brain, with cells spewing and collapsing and triggering apoptotic activity everywhere. And that's another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue? All the ones I know of involve chemically modifying the cells and proteins and fluid environment. Does anyone have a scanning technique that records a complete chemical breakdown of every complex component present?

The concept has been criticized further by Myers[2][3][4] and by neuroscientist Kenneth D. Miller.[5]

Additionally, computer emulations of brain activity, even if limited to just the connections between neurons, are not going to be affordable. The price of computing cannot keep falling the way it has, so the enormous supercomputers that would be required to run any uploaded mind would remain unaffordable, even in the future.
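
To give a sense of the scale behind this affordability argument, here is a rough back-of-envelope sketch in Python. Every number in it (neuron and synapse counts, average firing rate, operations per synaptic event) is a commonly cited order-of-magnitude assumption rather than a measurement, and the estimate covers only a bare connection-level emulation, none of the chemical detail described above.

```python
# Rough, order-of-magnitude estimate of the compute needed to emulate a brain
# at the level of spikes crossing synapses only. Every constant below is an
# assumption chosen for illustration, not a measurement.

NEURONS = 8.6e10              # assumed number of neurons in a human brain
SYNAPSES = 1.0e14             # assumed number of synapses
AVG_FIRING_RATE_HZ = 1.0      # assumed average spikes per neuron per second
OPS_PER_SYNAPTIC_EVENT = 10   # assumed floating-point ops to update one synapse

# Each spike touches, on average, SYNAPSES / NEURONS outgoing synapses.
synaptic_events_per_second = NEURONS * AVG_FIRING_RATE_HZ * (SYNAPSES / NEURONS)
flops_required = synaptic_events_per_second * OPS_PER_SYNAPTIC_EVENT

print(f"Synaptic events per second:  {synaptic_events_per_second:.1e}")
print(f"Sustained throughput needed: {flops_required:.1e} FLOP/s (~1 petaFLOP/s)")
```

Even under these charitable assumptions the figure lands in supercomputer territory, and capturing the chemical, epigenetic and ionic state that Myers describes would multiply it by many orders of magnitude.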

It seems likely that the best and most efficient medium for running a human mind is a human brain, so keep yours in good working order.

The less crazy transhumanists think that brain uploading would involve cutting up the brain.[6] The crazier ones think that nanotechnology would allow a slow and steady replacement of the brain's tissue with a computing substrate.[7]

Several metaphysical questions are brought up by the prospect of mind uploading. Like many such questions, these may not be objectively answerable, and philosophers would no doubt continue to debate them even if uploading somehow became a reality.

The first major philosophical question is more or less falsifiable: whether consciousness is artificially replicable in its entirety. In other words, assuming that consciousness is not magic, and that the brain is the seat of consciousness, does it depend on any special functions or quantum mechanical effects that cannot ever be replicated on another substrate? This question, of course, remains unanswered, although, considering the current state of cognitive science, it is not unreasonable to think that consciousness will be found to be replicable in the future.

Assuming that consciousness is proven to be artificially replicable, the second question is whether the "strong AI hypothesis" is justified or not: if a machine accurately replicates consciousness, such that it passes a Turing Test or is otherwise indistinguishable from a natural human being, is the machine really conscious, or is it a soulless mechanism that merely imitates consciousness?

Third, assuming that a machine can actually be conscious (which is no great stretch of the imagination, considering that the human brain is essentially a biological machine), is a copy of your consciousness really you? Is it even possible to copy consciousness? Is mind uploading really a ticket to immortality, in that "you" or your identity can be "uploaded"?

Advocates of mind uploading take the functionalist/reductionist approach of defining human existence as the identity, which is based on memories and personalities rather than physical substrates or subjectivity.[8] They believe that the identity is essential; the copy of the mind holds just as much claim to being that person as the original, even if both were to exist simultaneously. When the physical body of a copied person dies, nothing that defines the person as an individual has been lost. In this context, all that matters is that the memories and personality of the individual are preserved. As the recently murdered protagonist states in Down and Out in the Magic Kingdom, "I feel like me and no one else is making that claim. Who cares if I've been restored from a backup?"

Skeptics of mind uploading[9] question whether it is possible to transfer a consciousness from one substrate to another, and hold that this is critical to the life-extension application of mind uploading. On this view, the transfer of identity is like copying data from one computer hard drive to another: the new person would be a copy of the original, a new consciousness with the same identity. Mind uploading would then simply create a "mind-clone",[10] an artificial person with an identity gleaned from another.

The philosophical problem with uploading "yourself" to a computer is very similar to the "Swampman" and teleportation thought experiments.[11] Suppose Davidson goes hiking in the swamp and is struck and killed by a lightning bolt. At the same time, nearby in the swamp, another lightning bolt spontaneously rearranges a bunch of molecules such that, entirely by coincidence, they take on exactly the same form that Davidson's body had at the moment of his untimely death. This being, whom Davidson terms "Swampman," has, of course, a brain structurally identical to the one Davidson had, and will thus, presumably, behave exactly as Davidson would have. He will walk out of the swamp, return to Davidson's office at Berkeley, and write the same essays he would have written; he will interact like an amicable person with all of Davidson's friends and family, and so forth. This is one reason critics say it is not at all clear that the concept of mind uploading is even meaningful.[12] For the skeptic, the thought of permanently losing subjective consciousness (death) while another consciousness that shares their identity lives on yields no comfort. Daniel Dennett, in Consciousness Explained, has called into question the validity of these sorts of thought experiments altogether, maintaining that when a thought experiment is too far removed from the actual state of affairs, our intuitions cease to be meaningful.

Consciousness is currently (poorly) understood to be an epiphenomenon of brain activity, specifically of the cerebral cortex.[13] Identity and consciousness are distinct from one another, though presumably the former could not exist without the latter. Unlike an identity, which is a composition of information stored within a brain, a particular subjective consciousness is reasonably assumed to be an intrinsic property of a particular physical brain. Thus, even a perfect physical copy of that brain would not share its subjective consciousness. This holds true of all 'brains' (consciousness-producing machines), biological or otherwise: when and if non-biological brains are ever developed or discovered, it would be reasonable to assume that each would have its own intrinsic, non-transferable subjective consciousness, independent of its identity. It is likely that mind uploading would preserve an identity, if not the subjective consciousness that begot it. If identity rather than subjective consciousness is taken to be what is essential, then mind uploading succeeds, in the opinion of mind-uploading immortalist advocates.

Mind uploading also raises ethical issues, especially concerning duplicates of a given self, as well as the harmful things that could be done to what would now essentially be the equivalent of a computer file or program, things that (at least for now, and at least not so easily) cannot be done to a human mind: erasing it, or destroying the computer that runs or stores it, thereby killing the person for good; modifying its contents by deleting some and adding others; merging two or more selves into one, or splitting one into several; copying or moving it ad infinitum; tampering with its inputs, in effect sending someone to a "digital heaven" or a "digital hell" (or worse); tampering with how it experiences time by speeding up or slowing down the simulation, or trapping it in an infinite loop; infecting it with the equivalent of a computer virus (or of a neurological disease); and so on.

Believing that there is some mystical "essence" to consciousness that isn't preserved by copying is ultimately a form of dualism, however. Humans lose consciousness at least daily, yet still remain the same person in the morning. In the extreme, humans completely cease all activity, brain or otherwise, during deep hypothermic circulatory arrest, yet still remain the same person on resuscitation,[14] demonstrating that continuity of consciousness is not necessary for identity or personhood. Rather, the properties that make us identifiable as individuals are stored in the physical structure of the brain.

Ultimately, this is a subjective problem, not an objective one: If a copy is made of a book, is it still the same book? It depends if you subjectively consider "the book" to be the physical artifact or the information contained within. Is it the same book that was once held by Isaac Newton? No. Is it the same book that was once read by Isaac Newton? Yes.

See more here:

Mind uploading - RationalWiki

COVID-19 Daily Update 8-7-2020 – West Virginia Department of Health and Human Resources

The West Virginia Department of Health and Human Resources (DHHR) reports as of 10:00 a.m., on August 7, 2020, there have been 312,521 total confirmatory laboratory results received for COVID-19, with 7,433 total cases and 127 deaths.

DHHR has confirmed the deaths of an 81-year-old female from Pleasants County, a 66-year-old male from Mingo County and a 73-year-old male from Mingo County. "We offer our deepest sympathies to the families as our state grieves more losses due to COVID-19," said Bill J. Crouch, DHHR Cabinet Secretary.

In alignment with updated definitions from the Centers for Disease Control and Prevention, the dashboard includes probable cases, which are individuals that have symptoms and either serologic (antibody) or epidemiologic (e.g., a link to a confirmed case) evidence of disease, but no confirmatory test.

CASES PER COUNTY (Case confirmed by lab test/Probable case): Barbour (29/0), Berkeley (658/28), Boone (97/0), Braxton (8/0), Brooke (61/1), Cabell (364/9), Calhoun (6/0), Clay (17/1), Doddridge (5/0), Fayette (140/0), Gilmer (16/0), Grant (116/1), Greenbrier (91/0), Hampshire (76/0), Hancock (105/4), Hardy (57/1), Harrison (213/1), Jackson (162/0), Jefferson (288/6), Kanawha (885/13), Lewis (28/1), Lincoln (81/0), Logan (209/0), Marion (179/4), Marshall (126/4), Mason (54/0), McDowell (57/1), Mercer (177/0), Mineral (115/2), Mingo (156/2), Monongalia (918/17), Monroe (20/1), Morgan (25/1), Nicholas (35/1), Ohio (263/3), Pendleton (39/1), Pleasants (11/1), Pocahontas (40/1), Preston (101/21), Putnam (185/1), Raleigh (208/7), Randolph (204/4), Ritchie (3/0), Roane (15/0), Summers (7/0), Taylor (55/1), Tucker (11/0), Tyler (13/0), Upshur (36/3), Wayne (198/2), Webster (4/0), Wetzel (42/0), Wirt (6/0), Wood (231/12), Wyoming (31/0).

As case surveillance continues at the local health department level, it may reveal that those tested in a certain county are not residents of that county, or even of the state, as an individual in question may have crossed the state border to be tested. Such is the case of Preston County in this report.

Specifically regarding the change in cases for Grant and Pendleton counties in this report: when the tests were administered in these counties, the facility left some address fields blank; therefore, the address on file reverted to the historic address for the individual, which was not necessarily their current address.

Please note that delays may be experienced with the reporting of information from the local health department to DHHR. Visit the dashboard at http://www.coronavirus.wv.gov for more detailed information.

On July 24, 2020, Gov. Jim Justice announced that DHHR, the agency in charge of reporting the number of COVID-19 cases, would transition from providing twice-daily updates to one report every 24 hours. This became effective August 1, 2020.

Read the original post:

COVID-19 Daily Update 8-7-2020 - West Virginia Department of Health and Human Resources