China Is Installing “AI Guards” in Prison Cells

A Chinese prison's new surveillance system will monitor inmates constantly to prevent escape attempts — but perhaps at the cost of prisoners' mental health.

Escape-Proof Prison

If any of the inmates at Yancheng prison are considering an escape attempt, they’d better do it soon.

The Chinese prison is currently wrapping up months of construction on a new “smart” surveillance system designed to monitor prisoners at every moment, including while they are in their cells.

According to officials, this digital panopticon will make prison breaks virtually impossible — but it might also wreak havoc on prisoners’ psyches.

Under Constant Watch

On Monday, the South China Morning Post published a story detailing a new surveillance system at China’s Yancheng prison, which houses some of the nation’s most high-profile inmates.

According to the outlet’s sources, the Yanjiao-based facility is almost finished upgrading its surveillance system to include a network of cameras and sensors capable of constantly tracking inmates.

These cameras and sensors will feed into an AI system that uses facial identification and movement analysis technologies to monitor each individual inmate at the Chinese prison, producing a daily report about each one and flagging any unusual behavior.

“For instance,” project representative Meng Qingbiao told SCMP, “if an inmate has been spotted pacing up and down in a room for some time, the machine may regard the phenomenon as suspicious and suggest close-up checks with a human guard.”

Mental Turmoil

This isn’t the first example we’ve seen of prison officials attempting to make facilities “smart” — in February, Hong Kong’s Correctional Services Department announced the implementation of several technologies designed to help prisoners stay safe during their incarceration.

There’s a chance the Yancheng facility’s system could serve the same purpose — if the prisoner in Meng’s scenario was pacing due to thoughts of hurting themselves, for example, the flagging of their behavior and subsequent check by a human guard could prevent that.

However, Zhang Xuemin, a physiology professor at Beijing Normal University, told SCMP the new system will “definitely affect” the prisoners’ mental state.

And while he didn’t elaborate on what that effect might be, past research has shown that constant surveillance can increase a person’s stress and anxiety levels, while decreasing their trust in others — meaning the trade-off for an escape-proof “smart” prison might be the mental health of its inmates.

READ MORE: No escape? Chinese VIP jail puts AI monitors in every cell ‘to make prison breaks impossible’ [South China Morning Post]

More on prisons: Hong Kong Has a Plan to Make All of Its Prisons “Smart”

Watch Russia’s New Shotgun-Wielding Drone in Action

Russia's weapon maker Almaz-Antey built a drone that can fire a semi-automatic 12-gauge shotgun. Video footage shows the thing in action.

Death From Above

Earlier this year, Russian weapons manufacturer Almaz-Antey filed a patent for a new drone that was little more than a shotgun with wings.

When Futurism first reported on the drone, details were scarce. But now, video footage of flight tests has surfaced showing the drone — which looks like a murderous model plane — in action.

Balancing Act

The drone carries a 12-gauge Vepr-12 semi-automatic shotgun, Tom’s Guide reports. One might justifiably expect the recoil of a shotgun blast to send a drone veering off course, but that doesn’t seem to be the case.

Scientists from the Moscow Aviation Institute who worked on the drone published a press release saying that the drone can continue on its path after firing, and that its electric battery allows for 40 minutes of uninterrupted flight.

Human Touch

Video footage shows the drone taking off and landing with its nose — and barrel — pointed straight into the air.

But when all else fails, a video shows that a human can remove the drone’s wings and shoot it just like any other shotgun.

READ MORE: Watch Russia’s Flying Rifle In Action For the First Time Ever [Tom’s Guide]

More on the shotgun drone: Russian Arms Maker Invents Drone With Built-in Rifle

These $17,000 Mice Are Gene-Edited to Mimic Human Diseases

A fascinating new story looks at the growing market for mice that have been gene-edited to mimic human diseases ranging from prostate cancer to diabetes.

Gene Hackman

A fascinating new Bloomberg story looks at the growing market for mice scientists have gene-edited to mimic human diseases ranging from prostate cancer to diabetes.

Researchers are shelling out big money for the CRISPR-modified rodents, according to Bloomberg, sometimes paying as much as $17,000 for a pair — a medicine-disrupting development that’s projected to be a $1.59 billion industry by 2022.

Mouse Model

Bloomberg talked to Charles Lee, a former Harvard Medical School professor who’s using the gene-edited mice to develop new personalized cancer treatments in China.

“When you do these experiments, you want to be as close as possible to the way these tumors are growing in our bodies,” Lee told the magazine. “That’s not in a test tube or a petri dish. That’s in a living organism.”

READ MORE: China’s Selling Genetically-Modified Mice for $17,000 a Pair [Bloomberg]

More on mice: Scientists Give Mice “Super Vision” With Eye Injections

Europe Is Stockpiling Wind Energy by Converting It to Hydrogen

Right now there are 45 experimental projects in Europe working to generate hydrogen fuel using renewable sources of electricity.

Clean Chemistry

Ørsted, an energy company in Denmark, announced in March its new plans to convert electricity from its wind turbines into hydrogen fuel, joining the ranks of several other prominent European power companies.

While it’s expensive, stockpiling renewable electricity as hydrogen makes sense as Europe tries to reach its ambitious climate goals, according to Scientific American — it could be used for power on windless days instead of fossil fuels.

Wind Power

Ørsted’s plan is to use electricity generated from wind turbines to power electrolysis plants that split water into oxygen and useable hydrogen.

This means that the renewable electricity could also be used, albeit indirectly, to fuel cars that would otherwise have relied on fossil fuels.

One Of Many

SciAm reports that there are currently 45 European projects working to improve the renewable-to-hydrogen pipeline. The most complex and expensive hurdle is splitting the water.

But the cost of the equipment necessary to do so has dropped by about 40 percent over the last ten years, suggesting that renewable hydrogen fuel may come around sooner than expected.

READ MORE: Europe Stores Electricity in Gas Pipes [Scientific American]

More on hydrogen fuel: Toyota is Selling a Hydrogen Fuel Cell Car for $50,000

Restaurant Analyzes Your Bodily Fluids to Make Ultra-Nutritious Sushi

A futuristic restaurant plans to collect samples of diners' bodily fluids so it can create meals hyper-personalized to meet their nutritional needs.

Hyper-Personal

You’ll need more than a reservation to dine at Sushi Singularity — you’ll also need to be willing to share samples of your bodily fluids.

The futuristic restaurant, which is set to open in Tokyo in 2020, collects samples of reservation-holders’ saliva, feces, and urine two weeks prior to their visits. Then it analyzes the samples to determine each diner’s unique nutritional requirements, tailoring their meal to meet those needs.

“Hyper-personalisation will become common for future foods,” Open Meals, the design studio behind the restaurant, told Dezeen. “Based on DNA, urine, and intestinal tests, people will each have individual health IDs.”

Stunning Sushi

Not only will each meal be hyper-personalized, it will also be constructed using non-traditional tools, including a CNC machine and 3D printer.

“There will be 14 cylinders with different nutrients attached to the food-fabrication machine,” Open Meals said, “and when it 3D prints a dish of sushi, for example, some nutrients that are necessary for the customer will be added automatically.”

If the food Sushi Singularity serves is half as stunning as that featured in the restaurant’s promo video, it’ll be a futuristic feast for the eyes and tastebuds.

READ MORE: Sushi Singularity makes a bespoke dinner based on your bodily fluids [Dezeen]

More on futuristic food: AI Trained on Decades of Food Research Is Making Brand-New Foods

Scientists Gene-Hacked Bacteria to Make Bullet-Proof Spider Silk

A new genetic trick allowed researchers to use bacteria to manufacture super-strong spider silk faster than spiders do themselves.

Bacteria Farms

Scientists have figured out how to genetically alter bacteria to churn out super-strong spider silk.

Pound for pound, spider silk is much stronger than steel, but farming spiders is incredibly inefficient, according to a press release — so finding a way to mass produce the material could lead to super-strong fabrics and perhaps even next-generation space suits.

Genetic Trickery

Put as many spiders together as you’d need to farm silk, and they tend to eat each other. Edit the gene for spider silk production into bacteria as is — now a common manufacturing process — and it gets rejected.

“In nature, there are a lot of protein-based materials that have amazing mechanical properties, but the supply of these materials is very often limited,” lead researcher Fuzhong Zhang from Washington University in St. Louis said in the press release. “My lab is interested in engineering microbes so that we can not only produce these materials, but make them even better.”

To get around those limitations, the scientists chopped up the spider silk genes into smaller pieces that re-assembled once they had been integrated into the bacterial genome, in research set to be presented Tuesday at the American Chemical Society national Spring 2019 meeting.

Space Spiders

With their new methodology, the scientists managed to manufacture two grams of spider silk — just as strong as silk that actually came from a spider — for each liter of gene-spliced bacteria. That’s not all that much silk for an unsettling amount of bacteria, but the press release reports that it’s a vast improvement over other attempts to mass produce silk.

If this research scales up, though, NASA may want to bring the bacteria along on future missions to space, giving crew members a new supply of materials for repairs.

READ MORE: Bacterial factories could manufacture high-performance proteins for space missions [American Chemical Society newsroom via Phys.org]

More on bacteria: Scientists Gene-Edited Tequila Bacteria to Make Cannabinoids

New Singapore Law Would Force Facebook to Issue “Corrections”

Singapore's newly proposed fake news bill would force Facebook and other platforms to issue corrections alongside posts containing false statements.

You Asked for It

On Saturday, Facebook founder Mark Zuckerberg penned an op-ed in The Washington Post asking governments to create new rules and regulations for the internet.

Two days later, Singapore submitted legislation in parliament designed to govern how sites such as Zuckerberg’s handle “fake news” on their platforms.

If it passes, the bill would require sites to place warnings or “corrections” alongside any posts containing false statements, and force them to remove comments that any of the nation’s ministers believe are “against the public interest.”

Failure to comply with the fake news bill could result in fines or prison time — a startling escalation, albeit by a small country, of public concerns about online misinformation.

Censoring Speech

Singapore’s Law Minister K. Shanmugam claims any assertions that the fake news bill would hinder free speech are misguided.

“This legislation deals with false statements of facts,” he told reporters on Monday, according to Reuters. “It doesn’t deal with opinions, it doesn’t deal with viewpoints.”

Phil Robertson, deputy director of Human Rights Watch’s Asian division, disagrees with that assessment.

“This draft law will be a disaster for human rights, particularly freedom of expression and media freedom,” Robertson told Reuters. “The definitions in the law are broad and poorly defined, leaving maximum regulatory discretion to the government officers skewed to view as ‘misleading’ or ‘false’ the sorts of news that challenge Singapore’s preferred political narratives.”

READ MORE: Facebook, rights groups hit out at Singapore’s fake news bill [Reuters]

More on Facebook: Mark Zuckerberg Asks Governments for “New Rules” for the Internet

Computer chess – Wikipedia

Computer hardware and software capable of playing chess

Computer chess includes both hardware (dedicated computers) and software capable of playing chess. Computer chess provides opportunities for players to practice even in the absence of human opponents, and also provides opportunities for analysis, entertainment and training. Since around 2005, chess engines have been able to defeat even the strongest human players. Nevertheless, it is considered unlikely that computers will ever solve chess due to its computational complexity.

The idea of creating a chess-playing machine dates back to the eighteenth century. Around 1769, the chess-playing automaton called The Turk became famous before being exposed as a hoax. Before the development of digital computing, serious trials based on automata, such as El Ajedrecista of 1912, were too complex and limited to be useful for playing full games of chess. The field of mechanical chess research languished until the advent of the digital computer in the 1950s. Since then, chess enthusiasts and computer engineers have built, with increasing degrees of seriousness and success, chess-playing machines and computer programs.

Chess-playing computers and software came onto the market in the mid-1970s. There are many chess engines such as Stockfish, Crafty, Fruit and GNU Chess that can be downloaded from the Internet free of charge. Top programs such as Stockfish have surpassed even world champion caliber players.

CEGT,[42] CSS,[43] SSDF,[44] and WBEC[45] maintain rating lists allowing fans to compare the strength of engines. As of 3 February 2016, Stockfish is the top rated chess program on the IPON rating list.[46]

CCRL (Computer Chess Rating Lists) is an organisation that tests computer chess engines' strength by playing the programs against each other. CCRL was founded in 2006 by Graham Banks, Ray Banks, Sarah Bird, Kirill Kryukov and Charles Smith, and as of June 2012 its members are Graham Banks, Ray Banks (who only participates in Chess960, or Fischer Random Chess), Shaun Brewer, Adam Hair, Aser Huerga, Kirill Kryukov, Denis Mendoza, Charles Smith and Gabor Szots.[47]

The organisation runs three different lists: 40/40 (40 minutes for every 40 moves played), 40/4 (4 minutes for every 40 moves played), and 40/4 FRC (same time control but Chess960).[Note 1] Pondering (or permanent brain) is switched off and timing is adjusted to the AMD64 X2 4600+ (2.4 GHz) CPU by using Crafty 19.17 BH as a benchmark. Generic, neutral opening books are used (as opposed to the engine's own book) up to a limit of 12 moves into the game alongside 4 or 5 man tablebases.[47][48][49]

Using "ends-and-means" heuristics, a human chess player can intuitively determine optimal outcomes and how to achieve them regardless of the number of moves necessary, but a computer must be systematic in its analysis. Most players agree that looking at least five moves ahead (ten plies) when necessary is required to play well. Normal tournament rules give each player an average of three minutes per move. On average there are more than 30 legal moves per chess position, so a computer must examine a quadrillion possibilities to look ahead ten plies (five full moves); one that could examine a million positions a second would require more than 30 years.[50]
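
To make that arithmetic concrete, here is a small back-of-the-envelope check (a sketch added for illustration, not part of the original article):

# Rough check of the search-space arithmetic quoted above.
# Assumptions: ~30 legal moves per position, 10 plies deep,
# and an engine that evaluates one million positions per second.
branching_factor = 30
plies = 10
positions = branching_factor ** plies        # 30^10, roughly 5.9e14 (about a quadrillion)

positions_per_second = 1_000_000
seconds = positions / positions_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{positions:.2e} positions, about {years:.0f} years at 1M positions/s")
# -> roughly 5.9e14 positions and ~19 years; with a slightly higher
#    branching factor ("more than 30") the figure exceeds 30 years.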

After discovering refutation screening (the application of alpha-beta pruning to optimizing move evaluation) in 1957, a team at Carnegie Mellon University predicted that a computer would defeat the world human champion by 1967.[51] It did not anticipate the difficulty of determining the right order to evaluate branches. Researchers worked to improve programs' ability to identify killer heuristics, unusually high-scoring moves to reexamine when evaluating other branches, but into the 1970s most top chess players believed that computers would not soon be able to play at a Master level.[50] In 1968 International Master David Levy made a famous bet that no chess computer would be able to beat him within ten years,[6] and in 1976 Senior Master and professor of psychology Eliot Hearst of Indiana University wrote that "the only way a current computer program could ever win a single game against a master player would be for the master, perhaps in a drunken stupor while playing 50 games simultaneously, to commit some once-in-a-year blunder".[50]

In the late 1970s chess programs suddenly began defeating top human players.[50] The year of Hearst's statement, Northwestern University's Chess 4.5 at the Paul Masson American Chess Championship's Class B level became the first to win a human tournament. Levy won his bet in 1978 by beating Chess 4.7, but it achieved the first computer victory against a Master-class player at the tournament level by winning one of the six games.[6] In 1980 Belle began often defeating Masters. By 1982 two programs played at Master level and three were slightly weaker.[50]

The sudden improvement without a theoretical breakthrough surprised humans, who did not expect that Belle's ability to examine 100,000 positions a second (about eight plies) would be sufficient. The Spracklens, creators of the successful microcomputer program Sargon, estimated that 90% of the improvement came from faster evaluation speed and only 10% from improved evaluations. New Scientist stated in 1982 that computers "play terrible chess ... clumsy, inefficient, diffuse, and just plain ugly", but humans lost to them by making "horrible blunders, astonishing lapses, incomprehensible oversights, gross miscalculations, and the like" much more often than they realized; "in short, computers win primarily through their ability to find and exploit miscalculations in human initiatives".[50]

By 1982, microcomputer chess programs could evaluate up to 1,500 moves a second and were as strong as mainframe chess programs of five years earlier, able to defeat almost all players. While only able to look ahead one or two plies more than at their debut in the mid-1970s, doing so improved their play more than experts expected; seemingly minor improvements "appear to have allowed the crossing of a psychological threshold, after which a rich harvest of human error becomes accessible", New Scientist wrote.[50] While reviewing SPOC in 1984, BYTE wrote that "Computers (mainframes, minis, and micros) tend to play ugly, inelegant chess", but noted Robert Byrne's statement that "tactically they are freer from error than the average human player". The magazine described SPOC as a "state-of-the-art chess program" for the IBM PC with a "surprisingly high" level of play, and estimated its USCF rating as 1700 (Class B).[52]

At the 1982 North American Computer Chess Championship, Monroe Newborn predicted that a chess program could become world champion within five years; tournament director and International Master Michael Valvo predicted ten years; the Spracklens predicted 15; Ken Thompson predicted more than 20; and others predicted that it would never happen. The most widely held opinion, however, stated that it would occur around the year 2000.[53] In 1989, Levy was defeated by Deep Thought in an exhibition match. Deep Thought, however, was still considerably below World Championship Level, as the then reigning world champion Garry Kasparov demonstrated in two strong wins in 1989. It was not until a 1996 match with IBM's Deep Blue that Kasparov lost his first game to a computer at tournament time controls in Deep Blue - Kasparov, 1996, Game 1. This game was, in fact, the first time a reigning world champion had lost to a computer using regular time controls. However, Kasparov regrouped to win three and draw two of the remaining five games of the match, for a convincing victory.

In May 1997, an updated version of Deep Blue defeated Kasparov 3½–2½ in a return match. A documentary mainly about the confrontation was made in 2003, titled Game Over: Kasparov and the Machine. IBM keeps a web site of the event.

With increasing processing power and improved evaluation functions, chess programs running on commercially available workstations began to rival top flight players. In 1998, Rebel 10 defeated Viswanathan Anand, who at the time was ranked second in the world, by a score of 5–3. However, most of those games were not played at normal time controls. Out of the eight games, four were blitz games (five minutes plus five seconds Fischer delay (see time control) for each move); these Rebel won 3–1. Two were semi-blitz games (fifteen minutes for each side) that Rebel won as well (1½–½). Finally, two games were played as regular tournament games (forty moves in two hours, one hour sudden death); here it was Anand who won 1½–½.[54] In fast games, computers played better than humans, but at classical time controls at which a player's rating is determined the advantage was not so clear.

In the early 2000s, commercially available programs such as Junior and Fritz were able to draw matches against former world champion Garry Kasparov and classical world champion Vladimir Kramnik.

In October 2002, Vladimir Kramnik and Deep Fritz competed in the eight-game Brains in Bahrain match, which ended in a draw. Kramnik won games 2 and 3 by "conventional" anti-computer tactics: playing conservatively for a long-term advantage that the computer is not able to see in its game tree search. Fritz, however, won game 5 after a severe blunder by Kramnik. Game 6 was described by the tournament commentators as "spectacular." Kramnik, in a better position in the early middlegame, tried a piece sacrifice to achieve a strong tactical attack, a strategy known to be highly risky against computers, which are at their strongest defending against such attacks. True to form, Fritz found a watertight defense and Kramnik's attack petered out, leaving him in a bad position. Kramnik resigned the game, believing the position lost. However, post-game human and computer analysis has shown that the Fritz program was unlikely to have been able to force a win and that Kramnik effectively sacrificed a drawn position. The final two games were draws. Given the circumstances, most commentators still rate Kramnik the stronger player in the match.[citation needed]

In January 2003, Garry Kasparov played Junior, another chess computer program, in New York City. The match ended 3–3.

In November 2003, Garry Kasparov played X3D Fritz. The match ended 2–2.

In 2005, Hydra, a dedicated chess computer with custom hardware and sixty-four processors and also winner of the 14th IPCCC in 2005, defeated seventh-ranked Michael Adams 5½–½ in a six-game match (though Adams' preparation was far less thorough than Kramnik's for the 2002 series).[55]

In November–December 2006, World Champion Vladimir Kramnik played Deep Fritz. This time the computer won; the match ended 2–4. Kramnik was able to view the computer's opening book. In the first five games Kramnik steered the game into a typical "anti-computer" positional contest. He lost one game (overlooking a mate in one), and drew the next four. In the final game, in an attempt to draw the match, Kramnik played the more aggressive Sicilian Defence and was crushed.

There was speculation that interest in human-computer chess competition would plummet as a result of the 2006 Kramnik-Deep Fritz match.[56] According to Newborn, for example, "the science is done".[57]

Human-computer chess matches showed the best computer systems overtaking human chess champions in the late 1990s. For the 40 years prior to that, the trend had been that the best machines gained about 40 points per year in the Elo rating while the best humans only gained roughly 2 points per year.[58] The highest rating obtained by a computer in human competition was Deep Thought's USCF rating of 2551 in 1988 and FIDE no longer accepts human-computer results in their rating lists. Specialized machine-only Elo pools have been created for rating machines, but such numbers, while similar in appearance, should not be directly compared.[59] In 2016, the Swedish Chess Computer Association rated computer program Komodo at 3361.

Chess engines continue to improve. By 2009, chess engines running on slower hardware had reached the grandmaster level. A mobile phone won a category 6 tournament with a performance rating of 2898: the chess engine Hiarcs 13, running inside Pocket Fritz 4 on the mobile phone HTC Touch HD, won the Copa Mercosur tournament in Buenos Aires, Argentina with 9 wins and 1 draw on August 4–14, 2009.[33] Pocket Fritz 4 searches fewer than 20,000 positions per second.[60] This is in contrast to supercomputers such as Deep Blue that searched 200 million positions per second.

Advanced Chess is a form of chess developed in 1998 by Kasparov in which a human plays against another human, and both have access to computers to enhance their strength. The resulting "advanced" player was argued by Kasparov to be stronger than a human or computer alone, and this has been demonstrated on numerous occasions at Freestyle Chess events. In 2017, a win by a computer engine in the freestyle Ultimate Challenge tournament[41] was the source of a lengthy debate, in which the organisers declined to participate.

Players today are inclined to treat chess engines as analysis tools rather than opponents.[61]

The developers of a chess-playing computer system must decide on a number of fundamental implementation issues. These include how to represent the board, how to search the game tree, how to evaluate leaf positions, and how the program communicates with the user, each discussed below.

Computer chess programs usually support a number of common de facto standards. Nearly all of today's programs can read and write game moves as Portable Game Notation (PGN), and can read and write individual positions as Forsyth-Edwards Notation (FEN). Older chess programs often only understood long algebraic notation, but today users expect chess programs to understand standard algebraic chess notation.
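
As an illustration of what a FEN string carries, here is a minimal sketch in Python; the example string is the standard starting position, and the field names follow the FEN specification:

# A minimal sketch of reading a Forsyth-Edwards Notation (FEN) string.
# The example string is the standard chess starting position.
fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

placement, side_to_move, castling, en_passant, halfmove, fullmove = fen.split()

ranks = placement.split("/")          # 8 ranks, listed from rank 8 down to rank 1
print(side_to_move)                   # 'w'    -> White to move
print(castling)                       # 'KQkq' -> both sides may still castle either way
print(len(ranks))                     # 8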

Starting in the late 1990s, programmers began to develop engines (with a command-line interface which calculates which moves are strongest in a position) separately from graphical user interfaces (GUIs), which provide the player with a chessboard they can see and pieces that can be moved. Engines communicate their moves to the GUI using a protocol such as the Chess Engine Communication Protocol (CECP) or the Universal Chess Interface (UCI). By dividing chess programs into these two pieces, developers can write only the user interface, or only the engine, without needing to write both parts of the program. (See also chess engines.)
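
To make the engine/GUI split concrete, below is a heavily simplified sketch of the engine side of a UCI conversation. The command keywords (uci, isready, position, go, quit) are part of the real protocol; the engine name and the fixed reply are placeholders, not a real search:

import sys

# A heavily simplified sketch of the engine side of a UCI dialogue.
# A real engine parses "position" and searches; here a fixed reply
# stands in for the search (placeholder move, not a real engine).
def uci_loop():
    for line in sys.stdin:
        cmd = line.strip()
        if cmd == "uci":
            print("id name SketchEngine")
            print("id author example")
            print("uciok")
        elif cmd == "isready":
            print("readyok")
        elif cmd.startswith("position"):
            pass                      # a real engine would set up the board here
        elif cmd.startswith("go"):
            print("bestmove e2e4")    # placeholder: always answers 1.e4
        elif cmd == "quit":
            break
        sys.stdout.flush()

if __name__ == "__main__":
    uci_loop()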

Developers have to decide whether to connect the engine to an opening book and/or endgame tablebases or leave this to the GUI.

The data structure used to represent each chess position is key to the performance of move generation and position evaluation. Methods include pieces stored in an array ("mailbox" and "0x88"), piece positions stored in a list ("piece list"), collections of bit-sets for piece locations ("bitboards"), and Huffman-coded positions for compact long-term storage.
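
As a small illustration of the bitboard idea, the sketch below uses Python integers as 64-bit sets of squares; the square numbering (a1 = bit 0 through h8 = bit 63) is one common convention, not the only one:

# Sketch of the "bitboard" idea: one 64-bit integer per piece type,
# bit i set when a piece of that type stands on square i (a1 = bit 0 ... h8 = bit 63).
white_pawns   = 0x000000000000FF00     # pawns on rank 2 in the starting position
white_knights = (1 << 1) | (1 << 6)    # knights on b1 and g1

def squares(bb):
    """List the occupied square indices of a bitboard."""
    result = []
    while bb:
        lsb = bb & -bb                 # isolate the lowest set bit
        result.append(lsb.bit_length() - 1)
        bb ^= lsb
    return result

print(squares(white_pawns))             # [8, 9, 10, 11, 12, 13, 14, 15]
print(bin(white_pawns | white_knights)) # union of two piece sets with a single OR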

The first paper on the subject was by Claude Shannon in 1950.[62] He predicted the two main possible search strategies which would be used, which he labeled "Type A" and "Type B",[63] before anyone had programmed a computer to play chess.

Type A programs would use a "brute force" approach, examining every possible position for a fixed number of moves using the minimax algorithm. Shannon believed this would be impractical for two reasons.

First, with approximately thirty moves possible in a typical real-life position, he expected that searching the approximately 10^9 positions involved in looking three moves ahead for both sides (six plies) would take about sixteen minutes, even in the "very optimistic" case that the chess computer evaluated a million positions every second. (It took about forty years to achieve this speed.)
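
For readers who want to check Shannon's figure, a quick calculation under the stated assumptions (about thirty moves per position, six plies, one million evaluations per second) looks like this:

# Checking Shannon's "Type A" estimate quoted above.
positions = 30 ** 6                    # about 7.3e8, i.e. roughly 10^9
seconds = positions / 1_000_000        # at one million evaluations per second
print(positions, seconds / 60)         # ~729 million positions, ~12 minutes
# With "approximately thirty" moves the estimate lands in the ten-to-twenty
# minute range, consistent with the "about sixteen minutes" figure in the text.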

Second, it ignored the problem of quiescence: the need to evaluate only positions that lie at the end of an exchange of pieces or other important sequence of moves ('lines'). He expected that adapting type A to cope with this would greatly increase the number of positions needing to be looked at and slow the program down still further.

Instead of wasting processing power examining bad or trivial moves, Shannon suggested that "type B" programs would use two improvements: a quiescence search that keeps searching unstable lines (such as exchanges of pieces) until a quiet position is reached, and a selective search that considers only a limited number of plausible moves in each position.

This would enable them to look further ahead ('deeper') at the most significant lines in a reasonable time. The test of time has borne out the first approach; all modern programs employ a terminal quiescence search before evaluating positions. The second approach (now called forward pruning) has been dropped in favor of search extensions.
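
A terminal quiescence search of the kind described above can be sketched as follows; evaluate(), capture_moves(), make_move() and unmake_move() are assumed helper functions, not any particular engine's API:

# Sketch of a terminal quiescence search (negamax form, alpha-beta style).
# evaluate(), capture_moves(), make_move() and unmake_move() are assumed helpers.
def quiescence(position, alpha, beta):
    stand_pat = evaluate(position)       # static score if we stop searching here
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)

    # Only "noisy" moves (captures) are searched, so the line settles
    # into a quiet position before it is finally evaluated.
    for move in capture_moves(position):
        make_move(position, move)
        score = -quiescence(position, -beta, -alpha)
        unmake_move(position, move)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha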

Adriaan de Groot interviewed a number of chess players of varying strengths, and concluded that both masters and beginners look at around forty to fifty positions before deciding which move to play. What makes the former much better players is that they use pattern recognition skills built from experience. This enables them to examine some lines in much greater depth than others by simply not considering moves they can assume to be poor.

More evidence for this being the case is the way that good human players find it much easier to recall positions from genuine chess games, breaking them down into a small number of recognizable sub-positions, rather than completely random arrangements of the same pieces. In contrast, poor players have the same level of recall for both.

The problem with type B is that it relies on the program being able to decide which moves are good enough to be worthy of consideration ('plausible') in any given position and this proved to be a much harder problem to solve than speeding up type A searches with superior hardware and search extension techniques.

One of the few chess grandmasters to devote himself seriously to computer chess was former World Chess Champion Mikhail Botvinnik, who wrote several works on the subject. He also held a doctorate in electrical engineering. Working with relatively primitive hardware available in the Soviet Union in the early 1960s, Botvinnik had no choice but to investigate software move selection techniques; at the time only the most powerful computers could achieve much beyond a three-ply full-width search, and Botvinnik had no such machines. In 1965 Botvinnik was a consultant to the ITEP team in a US-Soviet computer chess match (see Kotok-McCarthy).

One developmental milestone occurred when the team from Northwestern University, which was responsible for the Chess series of programs and won the first three ACM Computer Chess Championships (1970–72), abandoned type B searching in 1973. The resulting program, Chess 4.0, won that year's championship and its successors went on to come in second in both the 1974 ACM Championship and that year's inaugural World Computer Chess Championship, before winning the ACM Championship again in 1975, 1976 and 1977.

One reason they gave for the switch was that they found it less stressful during competition, because it was difficult to anticipate which moves their type B programs would play, and why. They also reported that type A was much easier to debug in the four months they had available and turned out to be just as fast: in the time it used to take to decide which moves were worthy of being searched, it was possible just to search all of them.

In fact, Chess 4.0 set the paradigm that was and still is followed essentially by all modern chess programs today. Chess 4.0-type programs won out for the simple reason that they played better chess. Such programs did not try to mimic human thought processes, but relied on full-width alpha-beta and negascout searches. Most such programs (including all modern programs today) also included a fairly limited selective part of the search based on quiescence searches, and usually extensions and pruning (particularly null move pruning from the 1990s onwards) which were triggered based on certain conditions in an attempt to weed out or reduce obviously bad moves (history moves) or to investigate interesting nodes (e.g. check extensions, passed pawns on the seventh rank, etc.). Extension and pruning triggers have to be used very carefully, however. Over-extend and the program wastes too much time looking at uninteresting positions. If too much is pruned, there is a risk of cutting out interesting nodes. Chess programs differ in terms of how and what types of pruning and extension rules are included as well as in the evaluation function. Some programs are believed to be more selective than others (for example, Deep Blue was known to be less selective than most commercial programs because it could afford to do more complete full-width searches), but all have a base full-width search as a foundation and all have some selective components (Q-search, pruning/extensions).

Though such additions meant that the program did not truly examine every node within its search depth (so it would not be truly brute force in that sense), the rare mistakes due to these selective searches were found to be worth the extra time saved, because the program could search deeper. In that way chess programs can get the best of both worlds.

Furthermore, technological advances by orders of magnitude in processing power have made the brute force approach far more incisive than was the case in the early years. The result is that a very solid, tactical AI player, aided by some limited positional knowledge built in by the evaluation function and pruning/extension rules, began to match the best players in the world. It turned out to produce excellent results, at least in the field of chess, to let computers do what they do best (calculate) rather than coax them into imitating human thought processes and knowledge. In 1997, Deep Blue defeated World Champion Garry Kasparov, marking the first time a computer had defeated a reigning world chess champion at standard time controls.

Computer chess programs consider chess moves as a game tree. In theory, they examine all moves, then all counter-moves to those moves, then all moves countering them, and so on, where each individual move by one player is called a "ply". This evaluation continues until a certain maximum search depth or the program determines that a final "leaf" position has been reached (e.g. checkmate).

A naive implementation of this approach can only search to a small depth in a practical amount of time, so various methods have been devised to greatly speed the search for good moves.
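
One of the standard speed-ups alluded to here is alpha-beta pruning layered on a fixed-depth minimax (negamax) search; the sketch below assumes the same hypothetical helpers as the quiescence example above:

# Sketch of a fixed-depth negamax search with alpha-beta pruning, a classic
# way of "greatly speeding the search" over the plain game tree.
# legal_moves(), make_move(), unmake_move() and quiescence() are assumed helpers,
# not a specific engine's API.
def search(position, depth, alpha, beta):
    if depth == 0:
        return quiescence(position, alpha, beta)

    best = -10**9
    for move in legal_moves(position):
        make_move(position, move)
        score = -search(position, depth - 1, -beta, -alpha)   # one ply deeper
        unmake_move(position, move)

        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break            # cutoff: the opponent will avoid this line anyway
    return best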

The AlphaZero program uses a variant of Monte Carlo tree search without rollout.[64]

For most chess positions, computers cannot look ahead to all possible final positions. Instead, they must look ahead a few plies and compare the possible positions, known as leaves. The algorithm that evaluates leaves is termed the "evaluation function", and these algorithms are often vastly different between different chess programs.

Evaluation functions typically evaluate positions in hundredths of a pawn (called a centipawn), and consider material value along with other factors affecting the strength of each side. When counting up the material for each side, typical values for pieces are 1 point for a pawn, 3 points for a knight or bishop, 5 points for a rook, and 9 points for a queen. (See Chess piece relative value.) The king is sometimes given an arbitrary high value such as 200 points (Shannon's paper) or 1,000,000,000 points (1961 USSR program) to ensure that a checkmate outweighs all other factors (Levy & Newborn 1991:45). By convention, a positive evaluation favors White, and a negative evaluation favors Black.

In addition to points for pieces, most evaluation functions take many factors into account, such as pawn structure, the fact that a pair of bishops is usually worth more, that centralized pieces are worth more, and so on. The protection of kings is usually considered, as well as the phase of the game (opening, middle or endgame).
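
A toy evaluation function along these lines might look like the following sketch; the material values come from the text above, while the bishop-pair and centralization bonuses are illustrative numbers, not standard ones:

# Toy centipawn evaluation in the spirit described above: material plus a
# couple of simple positional bonuses. Pieces are given as (letter, square) pairs;
# the bonus values are illustrative, not standard.
PIECE_VALUES = {"P": 100, "N": 300, "B": 300, "R": 500, "Q": 900}  # centipawns

BISHOP_PAIR_BONUS = 30
CENTRAL_SQUARES = {"d4", "e4", "d5", "e5"}
CENTRALIZATION_BONUS = 10

def evaluate(white_pieces, black_pieces):
    """Positive scores favour White, negative favour Black (by convention)."""
    def side_score(pieces):
        score = sum(PIECE_VALUES.get(p, 0) for p, _sq in pieces)
        if sum(1 for p, _sq in pieces if p == "B") >= 2:
            score += BISHOP_PAIR_BONUS
        score += sum(CENTRALIZATION_BONUS
                     for _p, sq in pieces if sq in CENTRAL_SQUARES)
        return score
    return side_score(white_pieces) - side_score(black_pieces)

# Example: White has an extra knight on e4, all else equal in material.
print(evaluate([("N", "e4")], []))     # 310 centipawns in White's favour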

Endgame play had long been one of the great weaknesses of chess programs, because of the depth of search needed. Some otherwise master-level programs were unable to win in positions where even intermediate human players can force a win.

To solve this problem, computers have been used to analyze some chess endgame positions completely, starting with king and pawn against king. Such endgame tablebases are generated in advance using a form of retrograde analysis, starting with positions where the final result is known (e.g., where one side has been mated) and seeing which other positions are one move away from them, then which are one move from those, etc. Ken Thompson was a pioneer in this area.
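
The retrograde idea can be illustrated on a deliberately tiny stand-in for chess: the sketch below labels positions of a simple subtraction game by working backwards from the terminal loss, the same way tablebase generation works backwards from checkmate positions:

# Sketch of retrograde analysis on a toy game standing in for chess endgames:
# positions are integers 0..N, a move subtracts 1 or 2, and the player who
# cannot move (at 0) has lost. Real tablebases apply the same idea to chess
# positions, starting from checkmates and working backwards.
N = 20

def predecessors(pos):
    """Positions from which one legal move reaches `pos`."""
    return [p for p in (pos + 1, pos + 2) if p <= N]

def successors(pos):
    return [p for p in (pos - 1, pos - 2) if p >= 0]

result = {0: "loss"}                       # terminal position: side to move has lost
frontier = [0]
while frontier:
    new_frontier = []
    for pos in frontier:
        for pred in predecessors(pos):
            if pred in result:
                continue
            if result[pos] == "loss":
                result[pred] = "win"       # we can move into a lost position
                new_frontier.append(pred)
            elif all(result.get(s) == "win" for s in successors(pred)):
                result[pred] = "loss"      # every move leads to a won position
                new_frontier.append(pred)
    frontier = new_frontier

print([p for p in range(N + 1) if result.get(p) == "loss"])   # the multiples of 3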

The results of the computer analysis sometimes surprised people. In 1977 Thompson's Belle chess machine used the endgame tablebase for a king and rook against king and queen and was able to draw that theoretically lost ending against several masters (see Philidor position#Queen versus rook). This was despite not following the usual strategy to delay defeat by keeping the defending king and rook close together for as long as possible. Asked to explain the reasons behind some of the program's moves, Thompson was unable to do so beyond saying the program's database simply returned the best moves.

Most grandmasters declined to play against the computer in the queen versus rook endgame, but Walter Browne accepted the challenge. A queen versus rook position was set up in which the queen can win in thirty moves, with perfect play. Browne was allowed 2 hours to play fifty moves, otherwise a draw would be claimed under the fifty-move rule. After forty-five moves, Browne agreed to a draw, being unable to force checkmate or win the rook within the next five moves. In the final position, Browne was still seventeen moves away from checkmate, but not quite that far away from winning the rook. Browne studied the endgame, and played the computer again a week later in a different position in which the queen can win in thirty moves. This time, he captured the rook on the fiftieth move, giving him a winning position (Levy & Newborn 1991:144–48), (Nunn 2002:49).

Other positions, long believed to be won, turned out to take more moves against perfect play to actually win than were allowed by chess's fifty-move rule. As a consequence, for some years the official FIDE rules of chess were changed to extend the number of moves allowed in these endings. After a while the rule reverted to fifty moves in all positions: more such positions were discovered, complicating the rule still further, and the extensions made no difference in human play, as humans could not play the positions perfectly.

Over the years, other endgame database formats have been released including the Edward Tablebase, the De Koning Database and the Nalimov Tablebase which is used by many chess programs such as Rybka, Shredder and Fritz. Tablebases for all positions with six pieces are available.[65] Some seven-piece endgames have been analyzed by Marc Bourzutschky and Yakov Konoval.[66] Programmers using the Lomonosov supercomputers in Moscow have completed a chess tablebase for all endgames with seven pieces or fewer (trivial endgame positions are excluded, such as six white pieces versus a lone black king).[67][68] In all of these endgame databases it is assumed that castling is no longer possible.

Many tablebases do not consider the fifty-move rule, under which a game where fifty moves pass without a capture or pawn move can be claimed to be a draw by either player. This results in the tablebase returning results such as "forced mate in sixty-six moves" in some positions which would actually be drawn because of the fifty-move rule. One reason for this is that if the rules of chess were to be changed once more, giving more time to win such positions, it would not be necessary to regenerate all the tablebases. It is also very easy for the program using the tablebases to notice and take account of this 'feature', and in any case a program using an endgame tablebase will choose the move that leads to the quickest win (even if it would fall foul of the fifty-move rule with perfect play). If playing an opponent not using a tablebase, such a choice will give good chances of winning within fifty moves.

The Nalimov tablebases, which use state-of-the-art compression techniques, require 7.05 GB of hard disk space for all five-piece endings. To cover all the six-piece endings requires approximately 1.2 TB. It is estimated that a seven-piece tablebase requires between 50 and 200 TB of storage space.[69]

Endgame databases featured prominently in 1999, when Kasparov played an exhibition match on the Internet against the rest of the world. A seven-piece queen-and-pawn endgame was reached, with the World Team fighting to salvage a draw. Eugene Nalimov helped by generating the six-piece ending tablebase in which both sides had two queens, which was used heavily to aid analysis by both sides.

Many other optimizations can be used to make chess-playing programs stronger. For example, transposition tables are used to record positions that have been previously evaluated, to save recalculation of them. Refutation tables record key moves that "refute" what appears to be a good move; these are typically tried first in variant positions (since a move that refutes one position is likely to refute another). Opening books aid computer programs by giving common openings that are considered good play (and good ways to counter poor openings). Many chess engines use pondering to increase their strength.
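
A minimal transposition table can be sketched as a dictionary keyed by a Zobrist hash; the position representation here (a set of piece/square pairs) is a simplification, and a real engine would also hash side to move, castling rights and en passant state:

import random

# Minimal sketch of a transposition table keyed by Zobrist hashing.
# Positions are represented abstractly as a set of (piece, square) pairs.
random.seed(0)
ZOBRIST = {(piece, square): random.getrandbits(64)
           for piece in "PNBRQKpnbrqk" for square in range(64)}

def zobrist_key(position):
    key = 0
    for piece, square in position:
        key ^= ZOBRIST[(piece, square)]       # XOR is order-independent
    return key

transposition_table = {}                      # key -> (depth, score)

def store(position, depth, score):
    transposition_table[zobrist_key(position)] = (depth, score)

def probe(position, depth):
    entry = transposition_table.get(zobrist_key(position))
    if entry and entry[0] >= depth:           # only reuse deep-enough results
        return entry[1]
    return None

pos = {("P", 12), ("K", 4), ("k", 60)}
store(pos, depth=6, score=35)
print(probe(pos, depth=4))                    # 35: position already evaluated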

Of course, faster hardware and additional processors can improve chess-playing program abilities, and some systems (such as Deep Blue) use specialized chess hardware instead of only software. Another way to examine more chess positions is to distribute the analysis of positions to many computers. The ChessBrain project[70] was a chess program that distributed the search tree computation through the Internet. In 2004 the ChessBrain played chess using 2,070 computers.

It has been estimated that doubling the computer speed gains approximately fifty to seventy Elo points in playing strength (Levy & Newborn 1991:192).
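
That rule of thumb implies the gain scales with the logarithm of the speed-up, as in this small illustration (the 60-point midpoint is an assumption made for the example):

import math

# Each doubling of speed is worth roughly 50-70 Elo, so the gain from a
# faster machine scales with log2 of the speed-up.
def elo_gain(speedup, points_per_doubling=60):
    return points_per_doubling * math.log2(speedup)

print(round(elo_gain(2)))     # ~60  (one doubling)
print(round(elo_gain(8)))     # ~180 (three doublings)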

Chess engines have been developed to play some chess variants such as Capablanca Chess, but the engines are almost never directly integrated with specific hardware. Even for the software that has been developed, most will not play chess beyond a certain board size, so games played on an unbounded chessboard (infinite chess) remain virtually untouched by both chess computers and software.

Some chess-playing systems include custom hardware or run on supercomputers.

In the 1980s and early 1990s, there was a competitive market for dedicated chess computers. This market changed in the mid-90s when computers with dedicated processors could no longer compete with the fast processors in personal computers. Nowadays, most dedicated units sold are of beginner and intermediate strength.

Recently, some hobbyists have been using the Multi Emulator Super System to run the chess programs created for Fidelity or Hegener & Glaser's Mephisto computers on modern 64-bit operating systems such as Windows 10.[72] The author of Rebel, Ed Schröder, has also adapted three of the Mephisto programs he wrote for Hegener & Glaser to work as UCI engines.[73]

Some chess programs run only on obsolete hardware.

Other programs were written for MS-DOS, and can be run on 64-bit Windows 10 via emulators such as DOSBox or Qemu.[75]

Perhaps the most common type of chess software are programs that simply play chess. You make a move on the board, and the AI calculates and plays a response, and back and forth until one player resigns. Sometimes the chess engine, which calculates the moves, and the graphical user interface (GUI) are separate programs. A variety of engines can be imported into the GUI, so that you can play against different styles. Engines often have just a simple text command-line interface, while GUIs may offer a variety of piece sets, board styles or even 3D or animated pieces. Because recent engines are so strong, engines or GUIs may offer some way of limiting the engine's strength, so the player has a better chance of winning. Universal Chess Interface (UCI) engines such as Fritz or Rybka may have a built-in mechanism for reducing the Elo rating of the engine (via UCI's uci_limitstrength and uci_elo parameters). Some versions of Fritz have a Handicap and Fun mode for limiting the current engine or changing the percentage of mistakes it makes or changing its style. Fritz also has a Friend Mode where during the game it tries to match the level of the player.
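
As a sketch of how a GUI might use those options, the snippet below starts a hypothetical engine binary and sends standard UCI commands; the engine path is a placeholder, and the exact option names and Elo range supported vary by engine:

import subprocess

# Sketch of a GUI asking a UCI engine to limit its playing strength via the
# UCI_LimitStrength / UCI_Elo options mentioned above. "./engine" is a placeholder.
engine = subprocess.Popen(["./engine"], stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE, text=True)

def send(cmd):
    engine.stdin.write(cmd + "\n")
    engine.stdin.flush()

send("uci")
send("setoption name UCI_LimitStrength value true")
send("setoption name UCI_Elo value 1600")     # play at roughly club level
send("isready")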

Chess databases allow users to search through a large library of historical games, analyze them, check statistics, and draw up an opening repertoire. Chessbase (for PC) is perhaps the most common program for this amongst professional players, but there are alternatives such as Shane's Chess Information Database (Scid) [76] for Windows, Mac or Linux, Chess Assistant[77] for PC,[78] Gerhard Kalab's Chess PGN Master for Android[79] or Giordano Vicoli's Chess-Studio for iOS.[80]

Programs such as Playchess allow you to play games against other players over the internet.

Chess training programs teach chess. Chessmaster had playthrough tutorials by IM Josh Waitzkin and GM Larry Christiansen. Stefan Meyer-Kahlen offers Shredder Chess Tutor based on the Step coursebooks of Rob Brunia and Cor Van Wijgerden. World champion Magnus Carlsen's Play Magnus company recently released a Magnus Trainer app for Android and iOS. Chessbase has Fritz and Chesster for children. Convekta has a large number of training apps such as CT-ART and its Chess King line based on tutorials by GM Alexander Kalinin and Maxim Blokh.

There is also software for handling chess problems.

Well-known computer chess theorists include:

The prospects of completely solving chess are generally considered to be rather remote. It is widely conjectured that there is no computationally inexpensive method to solve chess even in the very weak sense of determining with certainty the value of the initial position, and hence the idea of solving chess in the stronger sense of obtaining a practically usable description of a strategy for perfect play for either side seems unrealistic today. However, it has not been proven that no computationally cheap way of determining the best move in a chess position exists, nor even that a traditional alpha-beta searcher running on present-day computing hardware could not solve the initial position in an acceptable amount of time. The difficulty in proving the latter lies in the fact that, while the number of board positions that could happen in the course of a chess game is huge (on the order of at least 10^43[82] to 10^47), it is hard to rule out with mathematical certainty the possibility that the initial position allows either side to force a mate or a threefold repetition after relatively few moves, in which case the search tree might encompass only a very small subset of the set of possible positions. It has been mathematically proven that generalized chess (chess played with an arbitrarily large number of pieces on an arbitrarily large chessboard) is EXPTIME-complete,[83] meaning that determining the winning side in an arbitrary position of generalized chess provably takes exponential time in the worst case; however, this theoretical result gives no lower bound on the amount of work required to solve ordinary 8x8 chess.

Gardner's Minichess, played on a 5x5 board with approximately 10^18 possible board positions, has been solved; its game-theoretic value is 1/2 (i.e. a draw can be forced by either side), and the forcing strategy to achieve that result has been described.

Progress has also been made from the other side: as of 2012, all 7 and fewer piece (2 kings and up to 5 other pieces) endgames have been solved.

A "chess engine" is software that calculates and orders which moves are the strongest to play in a given position. Engine authors focus on improving the play of their engines, often just importing the engine into a graphical user interface(GUI) developed by someone else. Engines communicate with the GUI by following standardized protocols such as the Universal Chess Interface developed by Stefan Meyer-Kahlen and Franz Huber or the Chess Engine Communication Protocol developed by Tim Mann for GNU Chess and Winboard. Chessbase has its own proprietary protocol, and at one time Millennium 2000 had another protocol used for ChessGenius. Engines designed for one operating system and protocol may be ported to other OS's or protocols.

In 1997, the Internet Chess Club released its first Java client for playing chess online against other people inside one's web browser.[84] This was probably one of the first chess web apps. Free Internet Chess Server followed soon after with a similar client.[85] In 2004, the International Correspondence Chess Federation opened up a web server to replace their email-based system.[86] Chess.com started offering Live Chess in 2007.[87] Chessbase/Playchess had long had a downloadable client, but they had a web interface by 2013.[88]

Another popular web app is tactics training. The now defunct Chess Tactics Server opened its site in 2006,[89] followed by Chesstempo the next year,[90] and Chess.com added its Tactics Trainer in 2008.[91] Chessbase added a tactics trainer web app in 2015.[92]

Chessbase took their chess game database online in 1998.[93] Another early chess game database was Chess Lab, which started in 1999.[94] New In Chess had initially tried to compete with Chessbase by releasing a NICBase program for Windows 3.x, but eventually decided to give up on software and instead focus on their online database, starting in 2002.[95]

One could play against the engine Shredder online from 2006.[96] In 2015, Chessbase added a play Fritz web app,[97] as well as My Games for storing one's games.[98]

Starting in 2007, Chess.com offered the content of the training program, Chess Mentor, to their customers online.[99] Top GMs such as Sam Shankland and Walter Browne have contributed lessons.

Nanochips & Smart Dust: New Face of the Human …

The human microchipping agenda has a new face: Nanochips & Smart Dust. What are they? Are you being set up to be a node on the grid? What can you do?

Nanochips and Smart dust are the new technological means for the advancement of the human microchipping agenda. Due to their incredibly tiny size, both nanochips and Smart dust have the capacity to infiltrate the human body, become lodged within it, and begin to set up a synthetic network on the inside which can be remotely controlled from the outside. Needless to say, this has grave freedom, privacy and health implications, because it means the New World Order would be moving from controlling the outside world (environment/society) to controlling the inside world (your body). This article explores what the advent of nanochips and Smart dust could mean for you.

Humanity's history is filled with examples of societies where the people were sharply divided into two categories: rulers and slaves. In the distant past, the slaves have usually been kept in place because the rulers had access to and control over the resources, such as money, food, water, weapons or other necessities of life (control of the environment). In our more recent history, control was implemented not only by monopolizing resources but also via propaganda (control of the mind). This has manifested itself in many ways, e.g. the caste system in India (you must remain in your position on the hierarchical ladder for life), the royal bloodlines in Rome, the Middle East and Europe (who claimed an inherent and divine right to rule), the centralization of power in Nazi Germany and Soviet Russia during the 1930s (where a single autocrat or a small committee decided the fate of millions), and finally in the West (especially in the US) with the advent of specialized PR and mind control techniques that were refined by the CIA. Projects like MKUltra gave the NWO controllers unheard-of power to remotely and subconsciously influence people without them ever knowing, including the ability to create sex slaves and sleeper assassins.

Project MKUltra was at its height 60+ years ago, and things have moved on a lot since then. We are now entering an era where technological advancements are giving the NWO conspirators influence over a new realm: control of the emotions, or, more accurately, control over the entire mind-emotion nexus in the human body. I am talking about microchips, tiny electronic devices which can be embedded under your skin, and which receive and transmit information. Although microchips have been around awhile, they are now outdated. What we are facing is something much tinier than a microchip, and therefore much more of a threat: nanochips and smart dust.

The components of a Smart dust sensor or mote. Image credit: CatchUpdates.com

So what is a nanochip? The prefix nano denotes something three orders of magnitude smaller than micro: nano means one billionth, while micro means one millionth. While microchips are about the size of a grain of rice and measured in millimeters, nanochips are completely invisible to the human eye. Some nanochips are far smaller than a human hair (e.g. the µ-chip, which measures 0.4 x 0.4 mm). In 2015, IBM announced that they had developed functional nanochips measuring just 7 nm or nanometers (7 billionths of 1 meter). In comparison, a strand of human DNA is about 2.5 nm and the diameter of a single red blood cell is about 7,500 nm! These nanochips power themselves from their environment (they don't need batteries) and have a 100-year life span. They are slated to be rolled out first on products (so the corporatocracy can have total knowledge of consumer behavior in real time) before they can be used inside people's bodies. Did you know that nerve cells grow onto and meld with the chip?

In this Leak Project video, the presenter claims that the NWO aims to introduce 100 trillion nanochips into the world, so that literally every single thing in the world is tagged, including you. He includes many patents and other docs as proof of this agenda. He singles out the company HP (Hewlett Packard) as being the executor of the plan to construct a synthetic central nervous system for the Earth, linking all resources and people in real time.

Smart dust. Image credit: Waking Times

You may already be familiar with the Smart agenda or, better put, the Smart Deception. For those new to this, the Smart agenda is to create a giant electromagnetic grid or network that encompasses the entire Earth. Everything that moves is to be made or injected with some kind of sensor or mote that connects it to the grid, including household products, appliances, food/drink items, animals, plants and humans too. Smart dust is another name for these motes, which will act as mini computers, broadcasting and receiving. They are small wireless microelectromechanical sensors (MEMS). As of 2013, a mote was about the size of a grain of rice, but with technology advancing all the time, these will keep on reducing in size. Motes can be ingested through food (as will be discussed below).

The Smart agenda is basically synonymous with the UN Agenda 21 or Agenda 2030, and the Smart grid is synonymous with the IoT (Internet of Things) which is also going to use the new 5G network to achieve its desired saturation levels.

While this kind of technology can be used for the benefit of mankind, like many things today, it has been weaponized. The existence of smart dust forms a massive threat against the sovereignty of every human being alive. What we are up against is nothing less than the attempted technological possession of humanity.

In a fundamental way, vaccines, GMOs, bioengineered food and geoengineering/chemtrails are all connected, as they are delivery systems whereby this miniature technology of nanochips and Smart dust is planned to be inserted into our bodies. Some chemtrails contain Smart dust motes which readily infiltrate the body, communicate with other motes in your body, set up their own network and which can, unfortunately, be remotely controlled. Even if you are fastidious about what you eat and what you expose yourself to, it is difficult to see how you can avoid breathing in a mote of smart dust that was dropped on you by a plane spraying chemtrails.

With nanochips and motes inside your body, the NWO criminals can combine the IoT smart grid with brain mapping and other technological information in their attempt to pull off their ultimate endgame: to remotely influence and control an entire population by overriding (and programming) the thoughts, feelings and actions of the masses.

(The rabbit hole definitely does not stop at nanochips and Smart dust. An entire new category of lifeforms is being forged via synthetic biology. Morgellons fibers are self-aware, self-replicating and are likely assisting the dark agenda to remotely control the thoughts, feelings and bodily functions of the entire population. This will be explored in other articles.)

Naturally, the full scope and goal of this agenda will not be revealed to the public as the technology is rolled out. Instead, we will continue to be told how wonderful, cool, trendy and efficient it all is. Note especially how all of this will be promoted under the banner of speed and convenience (while people unwittingly flush their freedom, health and privacy down the toilet). Yes, being surrounded by fields of manmade EM radiation everywhere you go will be disastrous for your health too.

The nanochips will also be pushed using peer pressure, encouraging people to get in the game out of social conformity. Like many governmental programs, the chips may initially be voluntary before they become mandatory. There is already a segment of society that is willingly chipping itself using tattoo ink. Recently, a company in Wisconsin (Three Square Market or 32M) introduced such an internal system and began encouraging its employees to get chipped. Although it was not mandatory, reportedly about half of them (41 out of 85) stepped forward and chose to get chipped!

So what can you do about this? Firstly, get informed and make sure you understand the true nature and danger of nanochip and smart technology. Secondly, make sure you never acquiesce to getting chipped, no matter what reason youre given. Doing so is tantamount to opening yourself to being remotely controlled without your knowledge. Thirdly, if you do discover a chip inside your body, get it removed. There are various ways to do. Some people crudely cut the chips out if they are large enough (i.e. a microchip instead of a nanochip). Other people claim you can used magnets such as neodymium magnets to render the nanochips useless. Hopefully, there will be intelligent inventors to step forward with new technologies that we can use to deactivate, disable and remove nanochips inside of our bodies.

The human microchipping agenda is really the same thing as the transhumanist agenda to turn mankind into machine, which will ultimately mean becoming not superhuman but subhuman.

We need to be very careful and think critically as we go forward into a world of fantastic technology. Like the surgeon's knife, it can heal or it can kill. Given everything we know, it would be naive to believe that nanochips will only be used for good. If we're not aware, this technology will be used by the power-hungry to enslave us by tricking us with promises of utopia. Nanobots are already being used in Western medicine for all sorts of diseases. Once the smart grid is established, how will you avoid being monitored, tracked and influenced 24/7, every day of the year?

No matter how good the technology becomes, it can never replace the spirit of consciousness inside of you, which is your true power.

*****

Want the latest commentary and analysis on Conspiracy, Health, Geopolitics, Sovereignty, Consciousness and more? Sign up for free blog updates!

Makia Freeman is the editor of alternative news / independent media site The Freedom Articles and senior researcher at ToolsForFreedom.com (Facebook here), writing on many aspects of truth and freedom, from exposing aspects of the worldwide conspiracy to suggesting solutions for how humanity can create a new system of peace and abundance.

Sources:

*http://www.newsweek.com/nano-chipnanochips7nm-chipnanotechnologymacbookiphone-6ibmmoore039s-law-602458

*https://www.youtube.com/watch?v=d5OeZWfRhhs

*http://freedom-articles.toolsforfreedom.com/smart-deception/

*http://freedom-articles.toolsforfreedom.com/agenda-21-human-habitat/

*http://freedom-articles.toolsforfreedom.com/5g-iot-technological-control-grid/

*https://www.youtube.com/watch?v=e-svcCIDvvk

*http://freedom-articles.toolsforfreedom.com/people-voluntarily-chipping-themselves/

*http://www.chicagotribune.com/bluesky/technology/ct-wisconsin-company-microchips-workers-20170801-story.html

*https://www.youtube.com/watch?v=_FEWrnPHFPw

*http://www.thetruthdenied.com/news/2014/11/11/how-to-remove-an-rfid-implant/comment-page-1/

Read this article:

Nanochips & Smart Dust: New Face of the Human ...

IVFML Season 2, Episode 9: Going From Childless To …

Erik and Melissa Jones were optimistic and grateful. After thinking they'd have to go through the long and expensive process of in vitro fertilization, their infertility problem was attributed to a seminal blockage in Erik's testicles, which could be cleared with a simple procedure that would be covered by insurance.

And if the issue was fixed, it could give them a chance to conceive naturally.

"It was kind of a weight off because [the urologist] seemed really optimistic," Melissa said. "We were really excited."

But after his outpatient procedure, Erik started experiencing severe abdominal pain. He later became constipated, and after two weeks, the pain was so intense that he wondered if he was about to die.

Egg retrievals, testicular surgeries and other infertility-related procedures are extremely safe, and deaths and near-deaths are exceedingly rare.

That's why Melissa was in shock as Erik's health kept deteriorating. First, he was diagnosed with a perforated bowel caused by the surgery, which had allowed fecal matter to leak into his abdomen for weeks. The toxins caused an infection to set in. That triggered sepsis, which is when the body over-responds to a threat, putting organs at risk of failure.

Once Erik was stabilized, doctors performed surgery to reroute his intestine to a colostomy bag attached to the outside of his abdomen. The colostomy bag would collect his waste for a year to allow his large intestine to heal from the perforation.

All of Erik's complications weighed heavily on Melissa and filled her with immense guilt.

"I just start bawling because I'm thinking, okay, my husband just did this procedure, mostly because I want to have children," Melissa recalled. "And with everything he's already been through, now he's going to be in surgery that he may not come out of."

Erik did come out of his surgery, and after a year, his colostomy reversal surgery was a success.

Immediately after his surgery, it was difficult for the couple to imagine pursuing any other medical interventions to conceive. Still, after time passed, Erik consented to more treatment. This time it was IVF, and it was Melissa who had to deal with all the appointments and procedures.

After several unsuccessful cycles, they finally made the decision to stop trying.

The Isolating Pain Of Involuntary Childlessness

While experiencing infertility itself doesn't have long-term psychological consequences, involuntary childlessness does, said infertility sociologist Larry Greil of Alfred University.

"The people who are distressed tend to be the people who wanted children but never had them," said Greil, explaining his 2003 study on the issue.

Whether it be through adoption, giving birth or some other means, many infertile people do end up having children. But while there is research on infertile women who end up giving birth (Greil estimates it's well below 50 percent), and research on infertile families who adopt children, there is no comprehensive estimate of how many infertile people become parents in the end.

"Strange as it may seem, no one has actually come up with a conclusive answer to the question: What percentage of infertile couples actually end up with a child?" Greil said. "Media reports give the impression that everyone comes out with a baby, and that impression is false."

Erik and Melissa suspected that their story, which ended with the failure of infertility treatment, was more common than success stories. Yet they couldn't find any support from others who had gone through something similar. Instead, they encountered hostility from infertile people for deciding to stop treatment, and disbelief and a lack of support from some friends or family who wanted them to just keep trying, despite Erik's near-death experience.

"Even with what we've been through, there's still people who have said, 'Don't quit, why are you quitting?'" Melissa said.

To create a support community for themselves and people like them, Erik and Melissa created a podcast called Living Childfree With Erik And Melissa, and are hoping that other people in similar situations will reach out about their own experiences.

"There's still sadness. We still feel like outcasts. We haven't really figured out that great path," Erik said.

"But for me, not to get too philosophical, but I like the idea of trying to figure it out," he continued. "Maybe Melissa and I won't figure it out, but maybe somebody coming behind us will."

IVFML Becoming Family is produced and edited by Anna Almendrala, Simon Ganz, Nick Offenberg and Sara Patterson. Send us an email at IVFML@huffpost.com.

Here is the original post:

IVFML Season 2, Episode 9: Going From Childless To ...

liberal | Definition of liberal in English by Oxford Dictionaries

adjective

1. Willing to respect or accept behaviour or opinions different from one's own; open to new ideas.
'liberal views towards divorce'
Synonyms: unbiased, unprejudiced, prejudice-free, accepting, non-partisan, neutral, non-aligned, non-judgemental, non-discriminatory, anti-discrimination, objective, disinterested, dispassionate, detached
'liberal citizenship laws'
Synonyms: tolerant, unprejudiced, unbigoted, broad-minded, open-minded, enlightened, forbearing
'a liberal democratic state'
Synonyms: progressive, advanced, modern, forward-looking, forward-thinking, progressivist, go-ahead, enlightened, reformist, radical

2. attributive (of education) Concerned with broadening a person's general knowledge and experience, rather than with technical or professional training.
'the provision of liberal adult education'

3. (especially of an interpretation of a law) Broadly construed or understood; not strictly literal.
'they could have given the 1968 Act a more liberal interpretation'
Synonyms: flexible, broad, loose, rough, non-restrictive, free, general, non-literal, non-specific, not literal, not strict, not close

4. Given, used, or occurring in generous amounts.
'liberal amounts of wine had been consumed'
Synonyms: abundant, copious, ample, plentiful, generous, lavish, luxuriant, profuse, considerable, prolific, rich
'Sam was too liberal with the wine'
Synonyms: generous, magnanimous, open-handed, unsparing, unstinting, ungrudging, lavish, free, munificent, bountiful, beneficent, benevolent, big-hearted, kind-hearted, kind, philanthropic, charitable, altruistic, unselfish

noun

1. A person of liberal views.
'a concern among liberals about the relation of the citizen to the state'

Origin: Middle English, via Old French from Latin liberalis, from liber 'free (man)'. The original sense was 'suitable for a free man', hence 'suitable for a gentleman' (one not tied to a trade), surviving in 'liberal arts'. Another early sense, 'generous' (compare with adjective sense 4), gave rise to an obsolete meaning 'free from restraint', leading to adjective sense 1 (late 18th century).

Read more:

liberal | Definition of liberal in English by Oxford Dictionaries

Joint School of Nanoscience & Nanoengineering – North …

The Joint School of Nanoscience and Nanoengineering (JSNN) is an academic collaboration between North Carolina Agricultural and Technical State University (NC A&T) and The University of North Carolina at Greensboro (UNCG). Located on the South Campus of Gateway University Research Park, JSNN builds on the strengths of the universities to offer innovative, cross-disciplinary graduate programs in the emerging areas of nanoscience and nanoengineering.

JSNN offers four degree programs: a Professional Science Master's (PSM) in Nanoscience, a Ph.D. in Nanoscience, an M.S. in Nanoengineering and a Ph.D. in Nanoengineering. Distance learning options are also in development.

JSNN has six research focus areas:

These technical areas afford numerous opportunities for collaboration with industrial partners.

JSNN is housed in a $56.3 million, 105,000-square-foot, state-of-the-art science and engineering research building with nanoelectronics and nanobio clean rooms, nanoengineering and nanoscience laboratories and extensive materials analysis facilities. JSNN's characterization capability includes a suite of microscopes from Carl Zeiss SMT, including the only Orion helium ion microscope in the southeast. A visualization center also allows three-dimensional imaging for modeling of nanotechnology problems.

JSNN collaborates with Guilford Technical Community College and Forsyth Technical Community College on an internship program that exposes students to the advanced technology at its facility. JSNN is also actively engaged in K-12 outreach with Guilford County Schools.

FOR MORE INFORMATION:

Phone: +1 (336) 285-2800

Web: http://jsnn.ncat.uncg.edu

Twitter: https://twitter.com/#JSNN2907

Facebook: https://www.facebook.com/JSNN2907

Rootle, the PBS KIDS 24/7 channel, moves from the screen to the scene, kicking off its new Block Party LIVE - College Edition series on Saturday, March 30, from 10 a.m. to 2 p.m., at North Carolina A&T's Alumni-Foundation Event Center.

Limited-resource and minority small farmers seeking new strategies to keep their farms viable have a new resource: the Small Farms Task Force, announced today by Cooperative Extension at North Carolina Agricultural and Technical State University.

Land O'Lakes will host the Bot Shot event April 7 in Minneapolis. The North Carolina A&T robotics team, AggieBots, has been chosen as an alternate team.

N.C. A&T hosts Small Farms Week March 24-30. The event is open to the public; please RSVP. Tickets available Feb. 25.

Read more:

Joint School of Nanoscience & Nanoengineering - North ...

Singularitarianism Research Papers – Academia.edu


Given the contemporary ambivalent standpoints toward the future of artificial intelligence, recently denoted as the phenomenon of Singularitarianism, Gregory Bateson's core theories of ecology of mind, schismogenesis, and double bind are hereby revisited, taken out of their respective sociological, anthropological, and psychotherapeutic contexts and recontextualized in the field of Roboethics as to a twofold aim: (a) the proposal of a rigid ethical standpoint toward both artificial and non-artificial agents, and (b) an explanatory analysis of the reasons bringing about such a polarized outcome of contradictory views in regard to the future of robots. Firstly, the paper applies the Batesonian ecology of mind to construct a unified roboethical framework which endorses a flat ontology embracing multiple forms of agency, borrowing elements from Floridi's information ethics, classic virtue ethics, Felix Guattari's ecosophy, Braidotti's posthumanism, and the Japanese animist doctrine of Rinri. The proposed framework wishes to act as a pragmatic solution to the endless dispute regarding the nature of consciousness or the natural/artificial dichotomy and as a further argument against the recognition of future artificial agency as a potential existential threat. Secondly, schismogenic analysis is employed to describe the emergence of the hostile human-robot cultural contact, tracing its origins in the early scientific discourse of man-machine symbiosis up to the contemporary countermeasures against superintelligent agents. Thirdly, Bateson's double bind theory is utilized as an analytic methodological tool of humanity's collective agency, leading to the hypothesis of collective schizophrenic symptomatology, due to the constancy and intensity of confronting messages emitted by either proponents or opponents of artificial intelligence. The double bind's treatment is the mirroring therapeutic double bind, and the article concludes by proposing the conceptual pragmatic imperative necessary for such a condition to follow: humanity's conscience of habitualizing danger and familiarization with its possible future extinction, as the result of a progressive blurrification between natural and artificial agency, succeeded by a totally non-organic intelligent form of agency.

Go here to see the original:

Singularitarianism Research Papers - Academia.edu

medicine | Definition, Fields, Research, & Facts | Britannica.com

Organization of health services

It is generally the goal of most countries to have their health services organized in such a way as to ensure that individuals, families, and communities obtain the maximum benefit from current knowledge and technology available for the promotion, maintenance, and restoration of health. In order to play their part in this process, governments and other agencies are faced with numerous tasks, including the following: (1) They must obtain as much information as is possible on the size, extent, and urgency of their needs; without accurate information, planning can be misdirected. (2) These needs must then be revised against the resources likely to be available in terms of money, manpower, and materials; developing countries may well require external aid to supplement their own resources. (3) Based on their assessments, countries then need to determine realistic objectives and draw up plans. (4) Finally, a process of evaluation needs to be built into the program; the lack of reliable information and accurate assessment can lead to confusion, waste, and inefficiency.

Health services of any nature reflect a number of interrelated characteristics, among which the most obvious, but not necessarily the most important from a national point of view, is the curative function; that is to say, caring for those already ill. Others include special services that deal with particular groups (such as children or pregnant women) and with specific needs such as nutrition or immunization; preventive services, the protection of the health both of individuals and of communities; health education; and, as mentioned above, the collection and analysis of information.

In the curative domain there are various forms of medical practice. They may be thought of generally as forming a pyramidal structure, with three tiers representing increasing degrees of specialization and technical sophistication but catering to diminishing numbers of patients as they are filtered out of the system at a lower level. Only those patients who require special attention either for diagnosis or treatment should reach the second (advisory) or third (specialized treatment) tiers where the cost per item of service becomes increasingly higher. The first level represents primary health care, or first contact care, at which patients have their initial contact with the health-care system.

Primary health care is an integral part of a country's health maintenance system, of which it forms the largest and most important part. As described in the declaration of Alma-Ata, primary health care should be based on "practical, scientifically sound and socially acceptable methods and technology made universally accessible to individuals and families in the community through their full participation and at a cost that the community and country can afford to maintain at every stage of their development." Primary health care in the developed countries is usually the province of a medically qualified physician; in the developing countries first-contact care is often provided by nonmedically qualified personnel.

The vast majority of patients can be fully dealt with at the primary level. Those who cannot are referred to the second tier (secondary health care, or the referral services) for the opinion of a consultant with specialized knowledge or for X-ray examinations and special tests. Secondary health care often requires the technology offered by a local or regional hospital. Increasingly, however, the radiological and laboratory services provided by hospitals are available directly to the family doctor, thus improving his service to patients and increasing its range. The third tier of health care, employing specialist services, is offered by institutions such as teaching hospitals and units devoted to the care of particular groups, such as women, children, patients with mental disorders, and so on. The dramatic differences in the cost of treatment at the various levels are a matter of particular importance in developing countries, where the cost of treatment for patients at the primary health-care level is usually only a small fraction of that at the third level; medical costs at any level in such countries, however, are usually borne by the government.

Ideally, provision of health care at all levels will be available to all patients; such health care may be said to be universal. The well-off, both in relatively wealthy industrialized countries and in the poorer developing world, may be able to get medical attention from sources they prefer and can pay for in the private sector. The vast majority of people in most countries, however, are dependent in various ways upon health services provided by the state, to which they may contribute comparatively little or, in the case of poor countries, nothing at all.

The costs to national economies of providing health care are considerable and have been growing at a rapidly increasing rate, especially in countries such as the United States, Germany, and Sweden; the rise in Britain has been less rapid. This trend has been the cause of major concern in both developed and developing countries. Some of this concern is based upon the lack of any consistent evidence to show that more spending on health care produces better health. There is a movement in developing countries to replace the type of organization of health-care services that evolved during European colonial times with a less expensive, and for them more appropriate, health-care system.

In the industrialized world the growing cost of health services has caused both private and public health-care delivery systems to question current policies and to seek more economical methods of achieving their goals. Despite expenditures, health services are not always used effectively by those who need them, and results can vary widely from community to community. In Britain, for example, between 1951 and 1971 the death rate fell by 24 percent in the wealthier sections of the population but by only half that in the most underprivileged sections of society. The achievement of good health is reliant upon more than just the quality of health care. Health entails such factors as good education, safe working conditions, a favourable environment, amenities in the home, well-integrated social services, and reasonable standards of living.

The developing countries differ from one another culturally, socially, and economically, but what they have in common is a low average income per person, with large percentages of their populations living at or below the poverty level. Although most have a small elite class, living mainly in the cities, the largest part of their populations live in rural areas. Urban regions in developing and some developed countries in the mid- and late 20th century have developed pockets of slums, which are growing because of an influx of rural peoples. For lack of even the simplest measures, vast numbers of urban and rural poor die each year of preventable and curable diseases, often associated with poor hygiene and sanitation, impure water supplies, malnutrition, vitamin deficiencies, and chronic preventable infections. The effect of these and other deprivations is reflected by the finding that in the 1980s the life expectancy at birth for men and women was about one-third less in Africa than it was in Europe; similarly, infant mortality in Africa was about eight times greater than in Europe. The extension of primary health-care services is therefore a high priority in the developing countries.

The developing countries themselves, lacking the proper resources, have often been unable to generate or implement the plans necessary to provide required services at the village or urban poor level. It has, however, become clear that the system of health care that is appropriate for one country is often unsuitable for another. Research has established that effective health care is related to the special circumstances of the individual country, its people, culture, ideology, and economic and natural resources.

The rising costs of providing health care have influenced a trend, especially among the developing nations, to promote services that employ less highly trained primary health-care personnel who can be distributed more widely in order to reach the largest possible proportion of the community. The principal medical problems to be dealt with in the developing world include undernutrition, infection, gastrointestinal disorders, and respiratory complaints, which themselves may be the result of poverty, ignorance, and poor hygiene. For the most part, these are easy to identify and to treat. Furthermore, preventive measures are usually simple and cheap. Neither treatment nor prevention requires extensive professional training: in most cases they can be dealt with adequately by the primary health worker, a term that includes all nonprofessional health personnel.

Those concerned with providing health care in the developed countries face a different set of problems. The diseases so prevalent in the Third World have, for the most part, been eliminated or are readily treatable. Many of the adverse environmental conditions and public health hazards have been conquered. Social services of varying degrees of adequacy have been provided. Public funds can be called upon to support the cost of medical care, and there are a variety of private insurance plans available to the consumer. Nevertheless, the funds that a government can devote to health care are limited and the cost of modern medicine continues to increase, thus putting adequate medical services beyond the reach of many. Adding to the expense of modern medical practices is the increasing demand for greater funding of health education and preventive measures specifically directed toward the poor.

In many parts of the world, particularly in developing countries, people get their primary health care, or first-contact care, where available at all, from nonmedically qualified personnel; these cadres of medical auxiliaries are being trained in increasing numbers to meet overwhelming needs among rapidly growing populations. Even among the comparatively wealthy countries of the world, containing in all a much smaller percentage of the world's population, escalation in the costs of health services and in the cost of training a physician has precipitated some movement toward reappraisal of the role of the medical doctor in the delivery of first-contact care.

In advanced industrial countries, however, it is usually a trained physician who is called upon to provide first-contact care. The patient seeking first-contact care can go either to a general practitioner or turn directly to a specialist. Which is the wiser choice has become a subject of some controversy. The general practitioner, however, is becoming rather rare in some developed countries. In countries where he does still exist, he is increasingly regarded as an obsolescent figure, because medicine covers an immense, rapidly changing, and complex field of which no physician can possibly master more than a small fraction. The very concept of the general practitioner, it is thus argued, may be absurd.

The obvious alternative to general practice is the direct access of a patient to a specialist. If a patient has problems with vision, he goes to an eye specialist, and if he has a pain in his chest (which he fears is due to his heart), he goes to a heart specialist. One objection to this plan is that the patient often cannot know which organ is responsible for his symptoms, and the most careful physician, after doing many investigations, may remain uncertain as to the cause. Breathlessness, a common symptom, may be due to heart disease, to lung disease, to anemia, or to emotional upset. Another common symptom is general malaise (feeling run-down or always tired); others are headache, chronic low backache, rheumatism, abdominal discomfort, poor appetite, and constipation. Some patients may also be overtly anxious or depressed. Among the most subtle medical skills is the ability to assess people with such symptoms and to distinguish between symptoms that are caused predominantly by emotional upset and those that are predominantly of bodily origin. A specialist may be capable of such a general assessment, but, often, with emphasis on his own subject, he fails at this point. The generalist with his broader training is often the better choice for a first diagnosis, with referral to a specialist as the next option.

It is often felt that there are also practical advantages for the patient in having his own doctor, who knows about his background, who has seen him through various illnesses, and who has often looked after his family as well. This personal physician, often a generalist, is in the best position to decide when the patient should be referred to a consultant.

The advantages of general practice and specialization are combined when the physician of first contact is a pediatrician. Although he sees only children and thus acquires a special knowledge of childhood maladies, he remains a generalist who looks at the whole patient. Another combination of general practice and specialization is represented by group practice, the members of which partially or fully specialize. One or more may be general practitioners, and one may be a surgeon, a second an obstetrician, a third a pediatrician, and a fourth an internist. In isolated communities group practice may be a satisfactory compromise, but in urban regions, where nearly everyone can be sent quickly to a hospital, the specialist surgeon working in a fully equipped hospital can usually provide better treatment than a general practitioner surgeon in a small clinic hospital.

Before 1948, general practitioners in Britain settled where they could make a living. Patients fell into two main groups: weekly wage earners, who were compulsorily insured, were on a doctor's panel and were given free medical attention (for which the doctor was paid quarterly by the government); most of the remainder paid the doctor a fee for service at the time of the illness. In 1948 the National Health Service began operation. Under its provisions, everyone is entitled to free medical attention from a general practitioner with whom he is registered. Though general practitioners in the National Health Service are not debarred from also having private patients, these must be people who are not registered with them under the National Health Service. Any physician is free to work as a general practitioner entirely independent of the National Health Service, though there are few who do so. Almost the entire population is registered with a National Health Service general practitioner, and the vast majority automatically sees this physician, or one of his partners, when they require medical attention. A few people, mostly wealthy, while registered with a National Health Service general practitioner, regularly see another physician privately; and a few may occasionally seek a private consultation because they are dissatisfied with their National Health Service physician.

A general practitioner under the National Health Service remains an independent contractor, paid by a capitation fee; that is, according to the number of people registered with him. He may work entirely from his own office, and he provides and pays his own receptionist, secretary, and other ancillary staff. Most general practitioners have one or more partners and work more and more in premises built for the purpose. Some of these structures are erected by the physicians themselves, but many are provided by the local authority, the physicians paying rent for using them. Health centres, in which groups of general practitioners work, have become common.

In Britain only a small minority of general practitioners can admit patients to a hospital and look after them personally. Most of this minority are in country districts, where, before the days of the National Health Service, there were cottage hospitals run by general practitioners; many of these hospitals continued to function in a similar manner. All general practitioners use such hospital facilities as X-ray departments and laboratories, and many general practitioners work in hospitals in emergency rooms (casualty departments) or as clinical assistants to consultants, or specialists.

General practitioners are spread more evenly over the country than formerly, when there were many in the richer areas and few in the industrial towns. The maximum allowed list of National Health Service patients per doctor is 3,500; the average is about 2,500. Patients have free choice of the physician with whom they register, with the proviso that they cannot be accepted by one who already has a full list and that a physician can refuse to accept them (though such refusals are rare). In remote rural places there may be only one physician within a reasonable distance.

Until the mid-20th century it was not unusual for the doctor in Britain to visit patients in their own homes. A general practitioner might make 15 or 20 such house calls in a day, as well as seeing patients in his office or surgery, often in the evenings. This enabled him to become a family doctor in fact as well as in name. In modern practice, however, a home visit is quite exceptional and is paid only to the severely disabled or seriously ill when other recourses are ruled out. All patients are normally required to go to the doctor.

It has also become unusual for a personal doctor to be available during weekends or holidays. His place may be taken by one of his partners in a group practice, a provision that is reasonably satisfactory. General practitioners, however, may now use one of several commercial deputizing services that employs young doctors to be on call. Although some of these young doctors may be well experienced, patients do not generally appreciate this kind of arrangement.

Whereas in Britain the doctor of first contact is regularly a general practitioner, in the United States the nature of first-contact care is less consistent. General practice in the United States has been in a state of decline in the second half of the 20th century, especially in metropolitan areas. The general practitioner, however, is being replaced to some degree by the growing field of family practice. In 1969 family practice was recognized as a medical specialty after the American Academy of General Practice (now the American Academy of Family Physicians) and the American Medical Association created the American Board of General (now Family) Practice. Since that time the field has become one of the larger medical specialties in the United States. The family physicians were the first group of medical specialists in the United States for whom recertification was required.

There is no national health service, as such, in the United States. Most physicians in the country have traditionally been in some form of private practice, whether seeing patients in their own offices, clinics, medical centres, or another type of facility and regardless of the patients income. Doctors are usually compensated by such state and federally supported agencies as Medicaid (for treating the poor) and Medicare (for treating the elderly); not all doctors, however, accept poor patients. There are also some state-supported clinics and hospitals where the poor and elderly may receive free or low-cost treatment, and some doctors devote a small percentage of their time to treatment of the indigent. Veterans may receive free treatment at Veterans Administration hospitals, and the federal government through its Indian Health Service provides medical services to American Indians and Alaskan natives, sometimes using trained auxiliaries for first-contact care.

In the rural United States first-contact care is likely to come from a generalist. The middle- and upper-income groups living in urban areas, however, have access to a larger number of primary medical care options. Children are often taken to pediatricians, who may oversee the child's health needs until adulthood. Adults frequently make their initial contact with an internist, whose field is mainly that of medical (as opposed to surgical) illnesses; the internist often becomes the family physician. Other adults choose to go directly to physicians with narrower specialties, including dermatologists, allergists, gynecologists, orthopedists, and ophthalmologists.

Patients in the United States may also choose to be treated by doctors of osteopathy. These doctors are fully qualified, but they make up only a small percentage of the country's physicians. They may also branch off into specialties, but general practice is much more common in their group than among M.D.s.

It used to be more common in the United States for physicians providing primary care to work independently, providing their own equipment and paying their own ancillary staff. In smaller cities they mostly had full hospital privileges, but in larger cities these privileges were more likely to be restricted. Physicians, often sharing the same specialties, are increasingly entering into group associations, where the expenses of office space, staff, and equipment may be shared; such associations may work out of suites of offices, clinics, or medical centres. The increasing competition and risks of private practice have caused many physicians to join Health Maintenance Organizations (HMOs), which provide comprehensive medical care and hospital care on a prepaid basis. The cost savings to patients are considerable, but they must use only the HMO doctors and facilities. HMOs stress preventive medicine and out-patient treatment as opposed to hospitalization as a means of reducing costs, a policy that has caused an increased number of empty hospital beds in the United States.

While the number of doctors per 100,000 population in the United States has been steadily increasing, there has been a trend among physicians toward the use of trained medical personnel to handle some of the basic services normally performed by the doctor. So-called physician extender services are commonly divided into nurse practitioners and physician assistants, both of whom provide similar ancillary services for the general practitioner or specialist. Such personnel do not replace the doctor. Almost all American physicians have systems for taking each other's calls when they become unavailable. House calls in the United States, as in Britain, have become exceedingly rare.

In Russia general practitioners are prevalent in the thinly populated rural areas. Pediatricians deal with children up to about age 15. Internists look after the medical ills of adults, and occupational physicians deal with the workers, sharing care with internists.

Teams of physicians with experience in varying specialties work from polyclinics or outpatient units, where many types of diseases are treated. Small towns usually have one polyclinic to serve all purposes. Large cities commonly have separate polyclinics for children and adults, as well as clinics with specializations such as women's health care, mental illnesses, and sexually transmitted diseases. Polyclinics usually have X-ray apparatus and facilities for examination of tissue specimens, facilities associated with the departments of the district hospital. Beginning in the late 1970s, there was a trend toward the development of more large, multipurpose treatment centres, first-aid hospitals, and specialized medicine and health care centres.

Home visits have traditionally been common, and much of the physician's time is spent in performing routine checkups for preventive purposes. Some patients in sparsely populated rural areas may be seen first by feldshers (auxiliary health workers), nurses, or midwives who work under the supervision of a polyclinic or hospital physician. The feldsher was once a lower-grade physician in the army or peasant communities, but feldshers are now regarded as paramedical workers.

In Japan, with less rigid legal restriction of the sale of pharmaceuticals than in the West, there was formerly a strong tradition of self-medication and self-treatment. This was modified in 1961 by the institution of health insurance programs that covered a large proportion of the population; there was then a great increase in visits to the outpatient clinics of hospitals and to private clinics and individual physicians.

When Japan shifted from traditional Chinese medicine with the adoption of Western medical practices in the 1870s, Germany became the chief model. As a result of German influence and of their own traditions, Japanese physicians tended to prefer professorial status and scholarly research opportunities at the universities or positions in the national or prefectural hospitals to private practice. There were some pioneering physicians, however, who brought medical care to the ordinary people.

Physicians in Japan have tended to cluster in the urban areas. The Medical Service Law of 1963 was amended to empower the Ministry of Health and Welfare to control the planning and distribution of future public and nonprofit medical facilities, partly to redress the urban-rural imbalance. Meanwhile, mobile services were expanded.

The influx of patients into hospitals and private clinics after the passage of the national health insurance acts of 1961 had, as one effect, a severe reduction in the amount of time available for any one patient. Perhaps in reaction to this situation, there has been a modest resurgence in the popularity of traditional Chinese medicine, with its leisurely interview, its dependence on herbal and other natural medicines, and its other traditional diagnostic and therapeutic practices. The rapid aging of the Japanese population as a result of the sharply decreasing death rate and birth rate has created an urgent need for expanded health care services for the elderly. There has also been an increasing need for centres to treat health problems resulting from environmental causes.

On the continent of Europe there are great differences both within single countries and between countries in the kinds of first-contact medical care. General practice, while declining in Europe as elsewhere, is still rather common even in some large cities, as well as in remote country areas.

In The Netherlands, departments of general practice are administered by general practitioners in all the medical schools (an exceptional state of affairs), and general practice flourishes. In the larger cities of Denmark, general practice on an individual basis is usual and popular, because the physician works only during office hours. In addition, there is a duty doctor service for nights and weekends. In the cities of Sweden, primary care is given by specialists. In the remote regions of northern Sweden, district doctors act as general practitioners to patients spread over huge areas; the district doctors delegate much of their home visiting to nurses.

In France there are still general practitioners, but their number is declining. Many medical practitioners advertise themselves directly to the public as specialists in internal medicine, ophthalmologists, gynecologists, and other kinds of specialists. Even when patients have a general practitioner, they may still go directly to a specialist. Attempts to stem the decline in general practice are being made by the development of group practice and of small rural hospitals equipped to deal with less serious illnesses, where general practitioners can look after their patients.

Although Israel has a high ratio of physicians to population, there is a shortage of general practitioners, and only in rural areas is general practice common. In the towns many people go directly to pediatricians, gynecologists, and other specialists, but there has been a reaction against this direct access to the specialist. More general practitioners have been trained, and the Israel Medical Association has recommended that no patient should be referred to a specialist except by the family physician or on instructions given by the family nurse. At Tel Aviv University there is a department of family medicine. In some newly developing areas, where the doctor shortage is greatest, there are medical centres at which all patients are initially interviewed by a nurse. The nurse may deal with many minor ailments, thus freeing the physician to treat the more seriously ill.

Nearly half the medical doctors in Australia are general practitioners (a far higher proportion than in most other advanced countries), though, as elsewhere, their numbers are declining. They tend to do far more for their patients than their counterparts in Britain, many performing such operations as removal of the appendix, gallbladder, or uterus, operations that elsewhere would be carried out by a specialist surgeon. Group practices are common.

See the original post:

medicine | Definition, Fields, Research, & Facts | Britannica.com

Political correctness (PC) | Britannica.com

Political correctness (PC), term used to refer to language that seems intended to give the least amount of offense, especially when describing groups identified by external markers such as race, gender, culture, or sexual orientation. The concept has been discussed, disputed, criticized, and satirized by commentators from across the political spectrum. The term has often been used derisively to ridicule the notion that altering language usage can change the public's perceptions and beliefs as well as influence outcomes.

The term first appeared in Marxist-Leninist vocabulary following the Russian Revolution of 1917. At that time it was used to describe adherence to the policies and principles of the Communist Party of the Soviet Union (that is, the party line). During the late 1970s and early 1980s the term began to be used wittily by liberal politicians to refer to the extremism of some left-wing issues, particularly regarding what was perceived as an emphasis on rhetoric over content. In the early 1990s the term was used by conservatives to question and oppose what they perceived as the rise of liberal left-wing curriculum and teaching methods on university and college campuses in the United States. By the late 1990s the usage of the term had again decreased, and it was most frequently employed by comedians and others to lampoon political language. At times it was also used by the left to scoff at conservative political themes.

Linguistically, the practice of what is called political correctness seems to be rooted in a desire to eliminate exclusion of various identity groups based on language usage. According to the Sapir-Whorf, or Whorfian, hypothesis, our perception of reality is determined by our thought processes, which are influenced by the language we use. In this way language shapes our reality and tells us how to think about and respond to that reality. Language also reveals and promotes our biases. Therefore, according to the hypothesis, using sexist language promotes sexism and using racial language promotes racism.

Those who are most strongly opposed to so-called political correctness view it as censorship and a curtailment of freedom of speech that places limits on debates in the public arena. They contend that such language boundaries inevitably lead to self-censorship and restrictions on behaviour. They further believe that political correctness perceives offensive language where none exists. Others believe that "political correctness" or "politically correct" has been used as an epithet to stop legitimate attempts to curb hate speech and minimize exclusionary speech practices. Ultimately, the ongoing discussion surrounding political correctness seems to centre on language, naming, and whose definitions are accepted.

See the original post here:

Political correctness (PC) | Britannica.com

Gene Therapy – REGENXBIO

A change or damage to a gene can affect the message the gene carries, and that message could be telling our cells to make a specific protein that the body needs in order to function properly. NAV Gene Therapy focuses on correcting these defects in genetic diseases by delivering a healthy, working copy of the gene to the cells in need of repair, which potentially enables the body to make the deficient protein. The NAV Technology Platform can also be used to deliver a gene that allows the body to produce a therapeutic protein to treat a specific disease.

Here's how the NAV Technology Platform works:

First, our scientists insert the gene of interest (that is, either the missing/defective gene or a gene to create a therapeutic protein) into a NAV Vector. A NAV Vector is a modified adeno-associated virus (AAV), which is not known to cause disease in humans. It is common for viruses to be used as vectors in gene and cell therapy. The NAV Vector acts as a delivery vehicle, transporting and unloading the gene into cells where the gene triggers production of the protein the body needs.

Our NAV Technology Platform includes more than 100 novel AAV vectors, including AAV8, AAV9 and AAVrh10, many of which are tailored to reach specific areas of the body where the gene is needed most. For example, gene therapy delivered to the liver has the potential to treat metabolic diseases like hemophilia, whereas gene therapy designed to reach the central nervous system (brain and spinal cord) may primarily impact symptoms of diseases that affect the brain and cognition.

Next, the NAV Vector is administered into the patient by injection or infusion, and is expected to make its way to cells that need the protein. The NAV Vector is designed to reach the target cells and deliver the gene it is carrying, enabling the cells to make the protein the body needs. These genes have the potential to correct disease by triggering production of a therapeutic protein or by allowing the body's natural mechanisms to work the way they were intended.

Because gene therapies may have a long-term effect, a single administration of NAV Gene Therapy has the potential to do the same work as years of conventional chronic therapies.

Learn more about gene therapy below:

Read the original:

Gene Therapy - REGENXBIO

What is Gene Therapy? | Pfizer: One of the world’s premier …

Gene therapy is a technology aimed at correcting or fixing a gene that may be defective. This exciting and potentially transformative area of research is focused on the development of potential treatments for monogenic diseases, or diseases that are caused by a defect in one gene.

The technology involves the introduction of genetic material (DNA or RNA) into the body, often through delivering a corrected copy of a gene to a patient's cells to compensate for a defective one, using a viral vector.


Viral vectors can be developed using adeno-associated virus (AAV), a naturally occurring virus which has been adapted for gene therapy use. Its ability to deliver genetic material to a wide range of tissues makes AAV vectors useful for transferring therapeutic genes into target cells. Gene therapy research holds tremendous promise in leading to the possible development of highly-specialized, potentially one-time delivery treatments for patients suffering from rare, monogenic diseases.

Pfizer aims to build an industry-leading gene therapy platform with a strategy focused on establishing a transformational portfolio through in-house capabilities, and enhancing those capabilities through strategic collaborations, as well as potential licensing and M&A activities.

We're working to access the most effective vector designs available to build a robust clinical stage portfolio, and employing a scalable manufacturing approach, proprietary cell lines and sophisticated analytics to support clinical development.

In addition, we're collaborating with some of the foremost experts in this field, including Spark Therapeutics, Inc., on a potentially transformative gene therapy treatment for hemophilia B, which received Breakthrough Therapy designation from the US Food and Drug Administration, and 4D Molecular Therapeutics, to discover and develop targeted next-generation AAV vectors for cardiac disease.

Gene therapy holds the promise of bringing true disease modification for patients suffering from devastating diseases, a promise we're working to see become a reality in the years to come.

Continue reading here:

What is Gene Therapy? | Pfizer: One of the world's premier ...

Gene Therapy Technology Explained

Virtually all cells in the human body contain genes, making them potential targets for gene therapy. However, these cells can be divided into two major categories: somatic cells (most cells of the body) or cells of the germline (eggs or sperm). In theory it is possible to transform either somatic cells or germ cells.

Gene therapy using germ line cells results in permanent changes that are passed down to subsequent generations. If done early in embryologic development, such as during preimplantation diagnosis and in vitro fertilization, the gene transfer could also occur in all cells of the developing embryo. The appeal of germ line gene therapy is its potential for offering a permanent therapeutic effect for all who inherit the target gene. Successful germ line therapies introduce the possibility of eliminating some diseases from a particular family, and ultimately from the population, forever. However, this also raises controversy. Some people view this type of therapy as unnatural, and liken it to "playing God." Others have concerns about the technical aspects. They worry that the genetic change propagated by germ line gene therapy may actually be deleterious and harmful, with the potential for unforeseen negative effects on future generations.

Somatic cells are nonreproductive. Somatic cell therapy is viewed as a more conservative, safer approach because it affects only the targeted cells in the patient, and is not passed on to future generations. In other words, the therapeutic effect ends with the individual who receives the therapy. However, this type of therapy presents unique problems of its own. Often the effects of somatic cell therapy are short-lived. Because the cells of most tissues ultimately die and are replaced by new cells, repeated treatments over the course of the individual's life span are required to maintain the therapeutic effect. Transporting the gene to the target cells or tissue is also problematic. Regardless of these difficulties, however, somatic cell gene therapy is appropriate and acceptable for many disorders, including cystic fibrosis, muscular dystrophy, cancer, and certain infectious diseases. Clinicians can even perform this therapy in utero, potentially correcting or treating a life-threatening disorder that may significantly impair a baby's health or development if not treated before birth.

In summary, the distinction is that the results of any somatic gene therapy are restricted to the actual patient and are not passed on to his or her children. All gene therapy performed on humans to date has been directed at somatic cells, whereas germline engineering in humans remains controversial and is prohibited in the European Union, for instance.

Somatic gene therapy can be broadly split into two categories:

Continue reading here:

Gene Therapy Technology Explained

The EU Approved a Ban on Single-Use Plastics

The EU voted on Wednesday to support plans for the elimination of most uses of single-use plastic, including cutlery, straws, and plastic plates.

Complete Ban

The European parliament voted Wednesday to support plans for the elimination of most uses of single-use plastic, ranging from cutlery and straws to coffee stirrers and plastic plates.

It’s a significant step that could encourage other governments around the globe to also commit to reducing the amount of plastics that end up in landfills, waterways, and oceans — but it’s not going to be instituted overnight.

Plastic-less

The European Parliament overwhelmingly backed the proposal when it first voted on it in October 2018. This week’s vote could lead to EU member states implementing a ban by 2021.

Also included are plans to improve the quality of tap water and reduce the use of plastic bottles. According to a statement, the proposal would “tighten the maximum limits for certain pollutants such as lead (to be reduced by half), harmful bacteria, and introduce new caps for most polluting substances found in tap water.”

The new plans would also require plastic bottles to be made up of 25 percent recycled material by 2025.

Last Straw

We’ve reported previously about large cities banning single-use plastic straws, which pose a serious threat to marine life. Seattle became the first major U.S. city to ban them in July 2018 to avoid dumping more plastic into our planet’s oceans.

But the entire EU backing a ban is a major move — and one that could push other areas around the world to follow suit.

READ MORE: Europe bans single-use plastics. And glitter could be next. [The Washington Post]

More on the plastic ban: The EU Just Voted to Completely Ban Single-Use Plastics

The post The EU Approved a Ban on Single-Use Plastics appeared first on Futurism.

Read this article:
The EU Approved a Ban on Single-Use Plastics

Mini Helicopter Destined for Mars Aces Flight Tests

After two days of testing, NASA is confident its four-pound Mars Helicopter is ready to begin its journey to the Red Planet.

Dynamic Duo

If all goes as planned, when NASA’s Mars 2020 rover reaches the Red Planet in February 2021, it’ll bring a tiny buddy along with it.

In May 2018, NASA announced plans to create the Mars Helicopter, a four-pound autonomous rotorcraft designed to accompany the Mars 2020 rover on its upcoming mission.

After nearly two months of testing, the agency announced on Thursday that the helicopter is now Mars-ready, meaning it’ll likely be the first heavier-than-air craft to take flight on another planet — opening new options for the future of off-world exploration.

First Flight

According to a press release from NASA’s Jet Propulsion Laboratory, the first step to testing the mini helicopter destined for Mars was creating a mini model of Mars on Earth.

To do that, the Mars Helicopter team removed all the nitrogen, oxygen, and other gases from the air within JPL’s 25-foot-wide Space Simulator vacuum chamber, replacing it with carbon dioxide to mimic the composition of Mars’ atmosphere.

To replicate Mars’ lower gravity, the team attached a motorized lanyard to the helicopter. By tugging upward on the helicopter, this lanyard served as an effective “gravity offload system.”
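For a rough sense of the numbers involved, here is a minimal back-of-the-envelope Python sketch — not NASA/JPL code. The figures for Earth and Mars surface gravity, the Mars-like chamber pressure (roughly 610 pascals), and the chamber temperature are assumptions based on commonly cited values; only the helicopter’s roughly four-pound mass comes from the article.

# Back-of-the-envelope sketch (not NASA/JPL code) of the two quantities the
# test setup has to reproduce: the upward pull the "gravity offload system"
# must supply, and the density of a thin, Mars-like CO2 atmosphere.
# Pressure and temperature below are assumed, commonly cited Mars-like values.

G_EARTH = 9.81   # m/s^2, Earth surface gravity
G_MARS = 3.71    # m/s^2, Mars surface gravity


def offload_force(mass_kg: float) -> float:
    """Upward force the lanyard must apply so the craft 'feels' Mars-level weight."""
    return mass_kg * (G_EARTH - G_MARS)


def co2_density(pressure_pa: float, temp_k: float) -> float:
    """Ideal-gas density of a pure CO2 atmosphere, in kg/m^3."""
    R = 8.314        # J/(mol*K), universal gas constant
    M_CO2 = 0.04401  # kg/mol, molar mass of CO2
    return pressure_pa * M_CO2 / (R * temp_k)


if __name__ == "__main__":
    mass = 4 * 0.4536  # the four-pound helicopter, converted to kilograms (~1.8 kg)
    print(f"Lanyard offload force: {offload_force(mass):.1f} N")         # ~11 N
    print(f"Mars-like CO2 density: {co2_density(610, 210):.4f} kg/m^3")  # ~0.015 kg/m^3

Run as written, the sketch suggests the lanyard only needs to supply on the order of eleven newtons of upward pull, and that the carbon dioxide the rotors bite into is less than two percent as dense as sea-level air on Earth — which is why the test was far more demanding than an ordinary indoor flight.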

“The gravity offload system performed perfectly, just like our helicopter,” test conductor Teddy Tzanetos said in the press release, later adding that it was “a heck of a first flight.”

Next Stop: Mars

The team tested the helicopter again the next day, and though the helicopter flew for less than one minute in total, the researchers claim it was long enough to ensure the craft’s more than 1,500 parts function as designed.

The team is now confident the mini helicopter is ready to begin its journey to Mars in July 2020, nestled under the Mars 2020 rover’s belly — conjuring images of a joey and its mother kangaroo.

A few months after the pair lands on Mars, the helicopter will set off on a series of test flights each up to 90 seconds long — and according to Thomas Zurbuchen, Associate Administrator for NASA’s Science Mission Directorate at NASA’s Washington headquarters, the implications of these flights could be profound.

“The ability to see clearly what lies beyond the next hill is crucial for future explorers,” Zurbuchen said in the press release first announcing the helicopter. “We already have great views of Mars from the surface as well as from orbit. With the added dimension of a bird’s-eye view from a ‘marscopter,’ we can only imagine what future missions will achieve.”

READ MORE: NASA’s Mars Helicopter Completes Flight Tests [Jet Propulsion Laboratory]

More on the Mars Helicopter: NASA Just Unveiled This Awesome, Tiny Helicopter That Will Cruise Over Mars

The post Mini Helicopter Destined for Mars Aces Flight Tests appeared first on Futurism.

Continue reading here:
Mini Helicopter Destined for Mars Aces Flight Tests