ChatGPT: The Most Advanced AI Chatbot in 2022

ChatGPT uses deep learning algorithms to generate text responses to prompts. The model is based on the GPT-3 architecture, a type of transformer model that uses self-attention mechanisms to process and generate text.

The GPT-3 architecture is a type of neural network composed of multiple layers of interconnected nodes. Each layer transforms the representation of the input text, capturing aspects such as the overall meaning, the syntactic structure, and the contextual information. As the input text passes through the network, the layers work together to generate a coherent and grammatically correct response.

One of the key features of the GPT-3 architecture is its ability to learn from large amounts of data. The ChatGPT model has been trained on a massive corpus of text data, which includes a wide range of topics and styles. As a result, the model is able to generate responses that are highly relevant to the prompt and that exhibit a level of knowledge and understanding that is similar to that of a human.

Another advantage of the GPT-3 architecture is its ability to handle long-range dependencies in the input text. This is important because many natural language tasks, such as language translation or text summarization, require the model to understand the overall meaning and context of the text in order to generate a correct response. The self-attention mechanisms in the GPT-3 architecture allow the model to capture these long-range dependencies and generate accurate and fluent responses.

Overall, the technical principle of ChatGPT is based on the GPT-3 architecture, which uses deep learning algorithms and self-attention mechanisms to generate human-like text responses to prompts. This allows the model to handle a wide range of natural language tasks, such as text generation and language translation, with high accuracy and fluency.
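To make the self-attention idea concrete, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. This is an illustration of the general mechanism described above, not OpenAI's actual implementation; the sequence length, model width, and random weights are placeholder assumptions.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Minimal scaled dot-product self-attention over one sequence.

    x has shape (seq_len, d_model). The weight matrices are random
    placeholders standing in for parameters a real model learns from data.
    """
    d_model = x.shape[-1]
    rng = np.random.default_rng(0)
    w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))

    q, k, v = x @ w_q, x @ w_k, x @ w_v             # queries, keys, values
    scores = q @ k.T / np.sqrt(d_model)             # all-pairs similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ v                              # each token mixes in every other token

tokens = np.random.default_rng(1).standard_normal((5, 8))  # 5 tokens, width 8
print(self_attention(tokens).shape)  # (5, 8)
```

Because every output position is a weighted blend of every input position, the mechanism can relate words that are far apart, which is the long-range dependency property discussed above.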

If you want to learn more about how ChatGPT works, go to the official OpenAI website or the OpenAI GitHub to find more technical articles about ChatGPT.

How Ashley Biden's Diary Made Its Way to Project Veritas

Roberta Kaplan, a lawyer for Ms. Biden, declined to comment.

The episode has its roots in the spring of 2020, as Ms. Biden's father was closing in on the Democratic presidential nomination. Ms. Biden, who has kept a low profile throughout her father's vice presidency and presidency, had left a job the year before working for a criminal justice group in Delaware.

She was living in Delray Beach, Fla., a small city between Miami and West Palm Beach, with a friend who had rented a palm-tree-lined two-bedroom house with a large swimming pool and wraparound driveway, according to people familiar with the events. Ms. Biden, who had little public role in her father's campaign, had earlier been in rehab in Florida in 2019, and the friend's house provided a haven where she could avoid the media and the glare of the campaign.

But in June, with the campaign ramping up, she headed to the Philadelphia area, planning to return to the Delray home in the fall before the lease expired in November. She decided to leave some of her belongings behind, including a duffel bag and another bag, people familiar with the events said.

Weeks after Ms. Biden headed to the Northeast, the friend who had been hosting Ms. Biden in the house allowed an ex-girlfriend named Aimee Harris and her two children to move in. Ms. Harris was in a contentious custody dispute and was struggling financially, according to Palm Beach County court records. At one point in February 2020, she had faced eviction while living at a rental property in nearby Jupiter.

Shortly after moving into the Delray home, Ms. Harris, whose social media postings and conversations with friends suggested that she was a fan of Mr. Trump, learned that Ms. Biden had stayed there previously and that some of her things were still there, according to two people familiar with the matter.

Exactly what happened next remains the subject of the federal investigation. But by September, the diary had been acquired from Ms. Harris and a friend by Project Veritas, whose operations against liberal groups and traditional news organizations had helped make it a favorite of Mr. Trump.

In a court filing, Project Veritas told a federal judge that around Sept. 3, 2020, someone the group described as a "tipster" called Project Veritas and left a voice message. The caller said a new occupant had moved into a place where Ashley Biden had previously been staying and found Ms. Biden's diary and other personal items.

Overview | Project Veritas

James O'Keefe established Project Veritas in 2011 as a non-profit journalism enterprise to continue his undercover reporting work. Today, Project Veritas investigates and exposes corruption, dishonesty, self-dealing, waste, fraud, and other misconduct in both public and private institutions to achieve a more ethical and transparent society.

Today, O'Keefe serves as the CEO and Chairman of the Board so that he can continue to lead and teach his fellow journalists, as well as protect and nurture the Project Veritas culture.

As a legally recognized and fully-reporting enterprise, Project Veritas is the most effective non-profit on the national scene, period.

Project Veritas journalists, working undercover on their own or by, with, and through idealistic insiders, bring to the American people the corrupt private truths hidden behind the walls of their institutions.

Throughout this website, there are in-depth and honest discussions of Project Veritas' successes, mistakes, and the lies opponents tell about O'Keefe and his organization.

Mostly, there are stories about successful impacts the organization has led at the local, state, and national levels: ending federal funding of the corrupt Association of Community Organizations for Reform Now, or ACORN; twice forcing the New Hampshire legislature to tighten voter ID laws, the second time overriding the governor's veto; exposing political bias in mainstream media outlets like CNN; and a report that led to ABC News suspending senior correspondent David Wright and taking him off all political coverage upon his return.

The biggest audience for any Project Veritas video came with the release of the hot-mic confession by ABC News anchor Amy Robach to her studio crew that she had the whole Jeffrey Epstein story, but her network suppressed it because of pressure from the British royal family.

Maybe better than that, the ABC News insider who gave Project Veritas the tape is still inside ABC News.

When Project Veritas takes on an investigation, the pattern is clear:

Project Veritas launches an investigation with the placement of our undercover journalists. The rollout of our findings creates a growing and uncontainable firestorm of press coverage.

Corruption is exposed, leaders resign, and organizations are shut down.

Project Veritas gets immediate, measurable, and impactful results, and our return on investment is unparalleled.

There are many ways to be a part of Project Veritas, from becoming an insider or undercover journalist to serving as a video editor or contributor.

Project Veritas is a registered 501(c)(3) organization. Project Veritas does not advocate specific resolutions to the issues that are raised through its investigations, nor do we encourage others to do so. Our goal is to inform the public of wrongdoing and allow the public to make judgments on the issues.

THE MISSION OF PROJECT VERITAS, INC. IS TO INVESTIGATE AND EXPOSE CORRUPTION, DISHONESTY, SELF-DEALING, WASTE, FRAUD, AND OTHER MISCONDUCT IN BOTH PUBLIC AND PRIVATE INSTITUTIONS IN ORDER TO ACHIEVE A MORE ETHICAL AND TRANSPARENT SOCIETY. ALSO, ENGAGE IN LITIGATION TO: PROTECT, DEFEND AND EXPAND HUMAN AND CIVIL RIGHTS SECURED BY LAW, SPECIFICALLY FIRST AMENDMENT RIGHTS, INCLUDING PROMOTING THE FREE EXCHANGE OF IDEAS IN A DIGITAL WORLD; COMBAT AND DEFEAT CENSORSHIP OF ANY IDEOLOGY; PROMOTE TRUTHFUL REPORTING; AND DEFEND FREEDOM OF SPEECH AND ASSOCIATION ISSUES, INCLUDING THE RIGHT TO ANONYMITY.

1. MORAL COURAGE - Courage is the virtue that sustains all others. We choose to overcome our fears.

2. WE ARE ALL LEADERS - Turning people into leaders. Completed staff work. Ownership.

3. COLLABORATION - Best not to work in silos. No one individual is as smart as all of us.

4. RESILIENCE - Persistence and determination alone are omnipotent. Never, ever, ever give up. We don't let mistakes or setbacks discourage us. Pursue perfection, knowing full well you will never attain it.

5. MISSION DRIVEN - The best people are motivated by purpose. We are passionate and truly believe in our cause. We must be externally focused, not internally focused.

6. MAKE THE STATUS QUO DO THE IMPOSSIBLE - We move mountains. Failure is not an option. We do whatever it takes.

7. THE TIP OF THE SPEAR - We are a loss leader. We do not shy away from conflict or litigation.

Rule #1: Truth is paramount. Our reporting is fact-based, with clear and irrefutable video and audio content. We never deceive our audience. We do not distort the facts or the context. We do not selectively edit.

Rule #2: We do not break the law. We maintain that one-party consent when recording someone is inherently moral and ethical. We never record when there is zero-party consent. In areas where we are required to have consent from all parties, we seek legal guidance regarding the expectation of privacy's impact on our right to record.

Rule #3: We adhere to the 1st Amendment rights of others. During our investigations we do not disrupt the peace. We do not infringe on the 1st Amendment rights of others.

Rule #4: The Zekman Test. The undercover investigations we pursue are judged by us to be of vital public interest and profound importance. The Zekman Test is our baseline: undercover investigative reporting is necessary because "there's no other way to get the story." Whereas the Society of Professional Journalists merely allows for undercover techniques when they are necessary to expose issues of vital public importance, we believe they are then not only allowed but required.

Rule #5: We protect the innocent when possible. Embarrassing private details are not to be investigated. We stay away from irrelevant, embarrassingly intimate details about private citizens' personal lives. We look for individual wrongdoing and judge its public importance. The irrelevant religious or sexual dispositions of our targets are not to be investigated.

Rule #6: Transparency. Our methods and tactics must be reasonable and defensible. We use the "Twelve Jurors on Our Shoulder" rule: the work has to be done with such a degree of integrity that it can withstand scrutiny in both law and ethics. We are comfortable with transparency. We must be ready to disclose our methods upon publication.

Rule #7: Verify and corroborate stories; evaluate the impact on third parties and the newsworthiness of statements alone. We consistently consider the probable truth or falsity of statements, examine any reasons to doubt the veracity of underlying assertions, and ask whether the assertions are newsworthy. When possible, we will confirm with our subjects that their statements captured on video are accurate and truthful. At the very least, we will give our subjects an opportunity to elaborate and/or respond. In all matters, we rely on the 1st Amendment to protect our ability to publish newsworthy items after our internal deliberations. On whether there is an obligation to ensure the veracity of statements made on video, we 1) consider whether the remarks may potentially impact an innocent third party (a factor in support of releasing the content) and 2) weigh the newsworthiness of the statement by itself (a factor against releasing the content).

Rule #8: Raw video. In certain circumstances we may release the raw video to the press and/or the public. But as a rule, we do not.

Rule #9: Subject anonymity. We investigate and question sources before promising anonymity. Once we confirm, we will do everything in our power to protect the identity of our confidential sources.

Rule #10: Being accountable. Admit mistakes and correct them promptly.

Rule #11: We do not manufacture content. We do not put words in our investigative subjects' mouths. We do not lead the horse to water. Our purpose is to elicit truth.

Rule #12: With great power comes great responsibility.

Excerpt from:

Overview | Project Veritas

Project Veritas video of Greenwich teacher prompts investigation

Editor's Note: This article is part of CT Mirror's Spanish-language news coverage developed in partnership with Identidad Latina Multimedia.

Attorney General William Tong opened an investigation Thursday into whether Project Veritas' hidden-camera video of a Greenwich assistant principal is evidence of illegal bias on the basis of political beliefs, age or religious affiliation.

"Discrimination, hate, bigotry against any person and against any religion, or on the basis of age or otherwise, is reprehensible and wrong," Tong said. "This video is disturbing. And if teachers, school staff or applicants for education jobs have been illegally discriminated against for any reason, I will take action."

Tong, a Democrat seeking a second term in November, said he was acting based on the publicly available video, not the calls for an investigation by Republicans, including the GOP's nominee for attorney general, Jessica Kordas.

"I want to make two points absolutely clear. I do not play politics with my enforcement authority. And I do not play politics with civil rights investigations," Tong said. "And I definitely do not play politics with schools, kids in schools, and teachers and students and families."

In the Project Veritas video, an assistant principal of the Cos Cob elementary school, Jeremy Boland, is seen telling a woman over drinks about using age and religion, among other things, to weed out conservative applicants for teaching jobs. Boland was suspended Wednesday.

Tong's inquiry is likely to be one of three: First Selectman Fred Camillo said he intends to hire outside counsel to investigate, and the Board of Education also is expected to investigate whether Boland was trying to impress a woman over drinks or actually had discriminated against applicants.

As an assistant principal, Boland plays a role on committees that screen and recommend hiring, but he has no authority to make hires. "Assists in the recruitment and selection of employees" is an element of the job description.

In one clip, Boland said he used Catholicism and age to judge if an applicant was likely to be politically conservative. He did not say how he discerned religious affiliation.

Tong declined to comment in detail on the scope or structure of the investigation. Specifically, he would not say if his office would seek all the video recorded of Boland and not just the brief cuts used in the 12-minute report posted Tuesday.

"But suffice it to say, we are going to investigate broadly the contents and circumstances of that video. And we're going to assess and analyze and review all of the available evidence," Tong said.

Project Veritas, which has been accused of using video clips out of context in previous exposés, declined Wednesday to explain how Boland came to be targeted or how the woman working for the group engaged him in what appears to be at least three conversations over drinks and meals.

Tong was elected in 2018 on a promise to seek greater authority to pursue civil penalties for hate crimes and civil rights offenses, and the legislature responded in 2021 by passing a bill that gives him that authority.

"I will not rush to judgment, and I will respect due process," Tong said Thursday. "I am not going to do anything different just because this is a political season and people want to see me reach one conclusion or another. I also want to make very clear we will conduct a thorough investigation and review and analyze all of the evidence. This will not happen overnight."

Project Veritas Says Justice Dept. Secretly Seized Its Emails

The conservative group Project Veritas said on Tuesday that the Justice Department began secretly seizing a trove of its internal communications in late 2020, just weeks after learning that the group had obtained a copy of President Biden's daughter's diary.

In a court filing, a lawyer for Project Veritas assailed the Justice Department's actions, which involved subpoenas, search warrants and court production orders that had not been previously disclosed, as well as gag orders imposed on Microsoft, whose servers housed the group's emails.

The disclosure underscored the scope and intensity of the legal battle surrounding the Justice Department's investigation into how Project Veritas, in the closing weeks of the 2020 presidential campaign, came into possession of a diary kept by Ashley Biden, the president's daughter, and other possessions she had stored at a house in Florida.

And it highlighted how the Justice Department has resisted demands by the conservative group, which regularly engages in sting operations and ambush interviews against news organizations and liberal groups and has targeted perceived political opponents, to be treated as a news organization entitled to First Amendment protections.

It is highly unusual for the Justice Department to obtain the internal communications of journalists, as federal prosecutors are supposed to follow special guidelines to ensure they do not infringe on First Amendment rights.

Since the investigation was disclosed last fall, federal prosecutors have repeatedly said that because they have evidence that the group may have committed a crime in obtaining Ms. Bidens belongings, Project Veritas is not entitled to First Amendment protections.

But Project Veritas, in its filing on Tuesday, said that prosecutors had failed to be forthcoming with a federal judge about the nature of their inquiry by choosing not to disclose the secret subpoenas and warrants.

"This is a fundamental, intolerable abridgment of the First Amendment by the Department of Justice," James O'Keefe, the group's founder and leader, said in a video.

In its court filing, Project Veritas asked a federal judge to intervene to stop the Justice Department from using the materials it had obtained from Microsoft in the investigation. The group said that federal prosecutors had obtained voluminous materials, which in many cases included the contents of emails, from Microsoft for eight of its employees, including Mr. O'Keefe.

The group also disclosed that Uber had told two of its operatives who are under investigation, Spencer Meads and Eric Cochran, that it had handed over information from their accounts in March of last year in response to demands from the government.

Microsoft said in response to questions about the matter that it had initially challenged the government's demands for Project Veritas's information, but the company declined to describe what that entailed.

"We've believed for a long time that secrecy should be the exception and used only when truly necessary," said Frank X. Shaw, a spokesman for Microsoft. "We always push back when the government is seeking the data of an enterprise customer under a secrecy order and always tell the customer as soon as we're legally able."

According to a person with direct knowledge of the matter, Microsoft had pushed back on the Justice Department's subpoenas and warrants when the company was served with them in late 2020 and early 2021. But the government refused to drop its demands, and Microsoft handed over the information that prosecutors were seeking, the person said.

Because of gag orders that had been imposed, Microsoft was barred from telling Project Veritas about the requests, the person said.

Shortly after the existence of the investigation was revealed publicly last fall, Microsoft asked the Justice Department whether it could tell Project Veritas about the requests, the person said. The department refused to lift the gag orders, the person said.

In response, Microsoft drafted a lawsuit against the Justice Department to try to get the gag orders lifted and told department officials that the company was prepared to file it. Soon afterward, the department went to court and had the gag orders lifted.

A little more than a week ago, Microsoft told Project Veritas about the warrants and subpoenas, the person said.

Project Veritas paid $40,000 for Ms. Biden's diary to a man and a woman from Florida who said that it had been obtained from a home where Ms. Biden had been staying until a few months earlier. Project Veritas also had possession of other items left at the house by Ms. Biden, and at the heart of the investigation is whether the group played a role in the removal of those items from the home.

Project Veritas has denied any wrongdoing and maintained that Ms. Biden's belongings had been abandoned. The group never published the diary.

Search warrants used in raids last fall on the homes of Mr. O'Keefe and two other Project Veritas operatives showed that the Justice Department was investigating conspiracy to transport stolen property and possession of stolen goods, among other crimes.

In response to the searches, a federal judge, at the urging of Project Veritas, appointed a special master to oversee what evidence federal prosecutors could keep from the dozens of cellphones and electronic devices the authorities had obtained.

Project Veritas said in its filing on Tuesday that, at the time the special master was appointed, the government should have revealed that it had conducted other searches that could have infringed on the group's First Amendment rights or swept in material protected by attorney-client privilege.

In the final year of the Trump administration, prosecutors in Washington, who were investigating a leak of classified information, secretly obtained court orders demanding that Google, which houses The New York Times's email accounts, hand over information from four Times reporters' accounts. In response to requests from Google, the Justice Department allowed it to alert The Times to the demands so the newspaper could fight the orders. A lawyer for The Times, David McCraw, secretly fought the demands, which the government ultimately dropped.

Rand Paul | Biography & Facts | Britannica

Rand Paul, byname of Randal Howard Paul (born January 7, 1963, Pittsburgh, Pennsylvania, U.S.), American politician who was elected as a Republican to the U.S. Senate in 2010 and began his term representing Kentucky the following year. He sought his party's nomination in the U.S. presidential election of 2016.

Rand, the middle of five children, was the son of Ron Paul, a physician who, while serving in the U.S. House of Representatives (1976–77, 1979–85, and 1997–2013), helped swing the Republican Party rightward and toward libertarianism. Rand attended but did not graduate from Baylor University, leaving upon his admission to medical school at Duke University. He earned a medical degree in 1988, and he went on to specialize in ophthalmology. In 1989 he met Kelley Ashby, and they married two years later.

After about 15 years of working in partnerships and clinics, Paul established his own medical practice in Bowling Green, Kentucky. In 1997 he broke away from the medical board with oversight for certification in his field, the American Board of Ophthalmology, and founded a rival certification authority, the National Board of Ophthalmology. The latter group, the board of which was made up entirely of members of his family, disbanded in 2011. He was also active in the Lions Club International, which runs eye banks and offers humanitarian aid related to eye care around the world.

While a college student, Paul was involved in several conservative organizations, and he worked for his father during the 1988 U.S. presidential election, when his father was campaigning on the Libertarian Party ticket. In 1994 Paul founded the antitaxation group Kentucky Taxpayers United, with himself at the head. Two years later he helped his father defeat an establishment Republican candidate when the elder Paul decided to run for Congress after an absence of more than a decade.

In 2009, riding a wave of anti-Washington sentiment, Rand Paul took advantage of the unpopularity of incumbent Senator Jim Bunning of Kentucky and announced that he was running for the seat. Bunning subsequently withdrew from the race, and Paul, aligned with the Tea Party movement, won the Republican primary. He then easily defeated the Democratic candidate in the 2010 general election, despite controversy over a campaign trail statement in which Paul questioned the constitutionality of the Civil Rights Act of 1964.

With Utah Senator Mike Lee, Paul founded the Tea Party Caucus upon entering the Senate in 2011. He soon became a vocal opponent of his party's leadership and establishment Republicans. Among the issues he pursued were massive cuts in federal spending. Consistent with his generally libertarian position, Paul's proposed cuts involved not only social programs but also defense allocations. In addition, he sought the abolishment of all foreign aid. Although Paul generally voted on the losing side in arguments over the budget, he was an influential voice on some issues, such as the government shutdown of 2013. Adopting philosophically consistent but not ideologically rigid positions, he forged unlikely alliances with such groups as the American Civil Liberties Union and with such individuals as Democratic Senator Patrick Leahy, with whom he introduced legislation softening mandatory minimum sentencing penalties in federal cases. In April 2015 Paul announced that he was entering the U.S. presidential election race of 2016. He suspended his campaign in February 2016. He subsequently offered a tepid endorsement for the party's nominee, Donald Trump, whom he once called "a delusional narcissist and orange-faced windbag."

After Trump won the presidential election, Paul became increasingly supportive of him, though he occasionally refused to back the administration's policies. While Paul voted for a massive tax reform bill in 2017, that year he also helped defeat a Republican-led effort to repeal and replace the Patient Protection and Affordable Care Act (PPACA; 2010); he opposed the proposed replacement plan, claiming it was too similar to the PPACA. In November 2017 Paul made additional news when he was attacked by his neighbour, who later pleaded guilty to felony assault; the altercation, which left Paul with bruised lungs and broken ribs, was allegedly motivated by a yard dispute.

In 2019 Trump was impeached by the House of Representatives following a whistleblower's allegation that Trump had extorted a foreign country to investigate one of his political rivals. The proceedings then moved to the Republican-controlled Senate, and Paul made headlines by revealing the alleged whistleblower's name, despite a law protecting the person's identity. In February 2020 Paul voted for Trump's acquittal; the president was acquitted in a near party-line vote. During this time, the coronavirus was spreading around the world, eventually becoming a global pandemic. As schools and businesses closed, the U.S. economy entered an economic downturn that rivaled the Great Depression. In March 2020 Paul became the first senator to test positive for the virus, and he went into a self-quarantine. The following month he resumed his public duties. Paul was elected to a third term in November 2022.

Paul wrote the books The Tea Party Goes to Washington (2011; with Jack Hunter), Government Bullies: How Everyday Americans Are Being Harassed, Abused, and Imprisoned by the Feds (2012; with Doug Stafford), and Taking a Stand: Moving Beyond Partisan Politics to Unite America (2015).

Rand Paul 2023: Wife, net worth, tattoos, smoking & body facts – Taddlr

On 7-1-1963 Rand Paul (nickname: Randal Howard Paul) was born in Pittsburgh, Pennsylvania, United States. He made his 1.5 million dollar fortune as a United States Senator. The politician is married to Kelley Ashby, his star sign is Capricorn and he is now 60 years of age.

Rand Paul Facts & Wiki

Birth date: 7-1-1963
Heritage/origin: American
Ethnicity: White
Religion: Christian
Residence: He owns a house in Lake Jackson, Texas, USA.
Relationship status: Married to Kelley Ashby (since 1990)
Sexuality: Straight
Kids: No

Will the marriage of American politician Rand Paul and his current wife, Kelley Ashby, survive 2023?

This friendly politician originating from Pittsburgh, Pennsylvania, United States has a slim body & square face type.

More:

Rand Paul 2023: Wife, net worth, tattoos, smoking & body facts - Taddlr

Is It Possible to Change the Structure Of The Brain With Meditation?

Yes, it is! Join us now to find out what happens in your brain when you meditate! 

Where do you stand when it comes to meditation? Have you tried it? Many people don't know this, but this process of just sitting and breathing mindfully can change the structure of your brain! Wondering how meditation can change the brain? In that case, you came to the right place!

Different types of meditation have existed for thousands of years. However, until recently, it was not common to meet many people who practice it in different parts of the world! Do you know someone who does?

We didn't, really, until we went to a self-defense meeting six or seven years ago! Honestly, we were a bit shocked because we expected only to learn Kung Fu, Muay Thai, and other martial arts. But the thing was that, although we did practice those things, the training included yoga, tai chi, meditation, etc., too!

At first, meditation seemed awkward, you know!  Why would we spend time just sitting and doing nothing, we thought. What the heck is the point of it?  But as we kept on attending these meetings, our perspective started to change!  What made us see things differently? 

Well, we have noticed that all the people who have been practicing meditation for some time have an incredible sense of calmness, compassion, and self-awareness. They appear as if traveling on some kind of cloud of tranquility above the storm of anxiety most of us feel all the time. This realization inspired curiosity, so we started to read and research to learn more about meditation. Somewhere along the way, we learned that meditation changes brain structure!

Honestly, in the beginning, it sounded like some kind of science fiction! Okay, meditation can help you relax, but how does it change your brain chemistry? It sounds ridiculous! But it isn't! Further investigation showed that meditation changes the brain indeed! Of course, it is not something that happens after a session or two! But long-term, properly done meditation has physical, tangible effects.

Now we know that this is incredible for many of you! So we decided to write an article about what actually happens and how long it takes for meditation to change the brain. If you are the least bit curious, you will enjoy this text!

Meditation Changes Brain Structure 

Before moving on to explaining how to practice meditation to change your brain, let's see what is happening. It is known that meditation renders a calming effect on the brain. People who practice meditation regularly are far less anxious or stressed than those who don't. But what does it mean to claim that meditation can change your brain? Well, as surprising as it may sound, various studies have shown that practicing mindfulness can change brain structures. A study we found in Psychiatry Research showed, based on scan analysis, that an eight-week mindfulness training program increased cortical thickness in the hippocampus. According to the scientists working on these issues, increases correlate with improved emotional regulation. Decreases, on the other hand, are attributed to an increased risk of developing negative emotions.

Studies suggest that different kinds of meditation change eight specific areas of the brain, which are related to the regulation of emotions, meta-awareness, memory processing, body awareness, etc. Dr. Kristin Naragon-Gainey, who was on teams that conducted some of the studies, says that there is no denying that regular meditation greatly changes aspects of brain functioning. It changes not only the areas of the brain but also the way they communicate with each other.

How Long Does It Take?

Many people told us, "Okay, I got it! But how many hours of meditation are necessary to change the brain?" It is essential to know that a single session of meditation can be enough for you to start feeling better. Some studies showed that people experience better mood, decreased stress, and lower blood pressure after one session. More long-term effects, such as increased focus, start kicking in after several weeks. But if the question is about lasting changes to brain structure, then you should know that you need to practice meditation for about eight consecutive weeks before you see any results.

Once you decide to start changing your brain chemistry with meditation, keep in mind that consistency is critical for best results. You won't notice any drastic changes if you practice meditation, say, once in a month or two! This is not to say that you have to do it every single day either! But the research shows that you need to practice 10–20 minutes at least three times per week if you want to enjoy all the benefits.

How to Make the Most of Meditation

  • For many people, the idea of sitting and doing nothing for thirty minutes seems impossible. But no one said that you have to meditate that long. Start with short, five-minute sessions, and do them once or twice a week. Setting a goal to practice meditation every day or for a long time is far-fetched for beginners and is likely to fail.
  • Designate a calm and safe space. Technically, you can meditate in the middle of the street if you want! But most people search for a quiet place where they can feel safe and focused.
  • Concentrate on breathing. If you are starting to practice meditation, you should know that the easiest way is to focus on your breathing. It is not uncommon for the mind to wander when you are a beginner. Breathing will help you refocus in this case.
  • Try guided meditation. Many people find it challenging to quiet down their thoughts in the beginning. In that case, you can try guided meditation. You can find some YouTube videos, download an app, or join a group.

Bottom Line

Let's be clear about something! Although meditation changes the brain, it is not a cure for any disorder or disease! But it can be beneficial if you practice it regularly! What does it mean? Well, any meditation, and especially mindfulness, can help you cope better with the daily stress you experience at work, anxiety, etc., by helping you focus on the here and now and on your needs! Have you practiced meditation so far?

About the author

My name is Rebecca Shinn. I am a consulting psychologist who occasionally writes articles for blogs. I hope this helps make psychology more accessible. I am fond of running and traveling.

Donald Trump Jr.’s Solution To Chinese Balloon Gets Mocked | HuffPost …

Chinese officials claim the balloon is just for research and not spying, but its presence has some people, such as Rep. James Comer (R-Ky.), worried that it is actually carrying bioweapons.

After the Pentagon decided against shooting down the balloon out of concerns of hurting people on the ground, Trump took to Twitter to suggest a plan that may not have been even slightly feasible as anything but red meat for his base.

Former President Donald Trump's eldest son advised Montana citizens to take matters into their own hands and shoot down the balloon themselves:

"If Joe Biden and his administration are too weak to do the obvious and shoot down an enemy surveillance balloon perhaps we just let the good people of Montana do their thing I imagine they have the capability and the resolve to do it all themselves."

Yes, he asked Montana residents to shoot their guns in the air at a balloon, and many Twitter users felt obliged to note the idiocy of the suggestion.

Many also pointed out a nagging issue: The balloon is extremely high in the sky.

Some people noted that having bullets falling from the sky after failing to hit a balloon miles above might not be safe for bystanders.

Others pointed to the possibility that the balloon might be holding dangerous cargo.

And one person tweeted that Trump's plan proved he was indeed his father's son.

Donald Trump had to be told a pool of reporters would no longer follow …

Former President Donald Trump wanted reporters to cover a private event he was hosting.

Advisers then had to explain why he could no longer call on a press pool for his events.

Advisers found reporters who happened to be working near the area for his event, the Washington Post reported.

Aides and advisers to former President Donald Trump said he had a difficult time transitioning from the White House to life as a private citizen, according to a new report from the Washington Post.

According to the Post, one example of this was when Trump wanted his team to call on a press pool, the reporters who travel with presidents, for an event at Mar-a-Lago. Advisers had to break the news to Trump that this was no longer a possibility.

"We had to explain to him that he didn't have a group standing around waiting for him anymore," an unnamed former aide told the Washington Post.

The advisers ended up pulling reporters who were near Mar-a-Lago for other reasons, two sources told the Post.

Once Trump left office, he was frustrated at his downsized life, which included a smaller Secret Service detail, no access to Air Force One, and little press coverage compared to when he was president, four unnamed advisers to Trump told the Post.

Trump has spent most of his post-presidency in isolation at Mar-a-Lago, playing golf six days a week and using dinner at the club as an opportunity to revel in the attention of admiring fans who applaud his entrances and exits from the dining room.

The praise he receives from guests at his Palm Beach, Florida, and Bedminster, New Jersey, clubs is how he gets the attention he became used to as president, an aide told the Post.

"The appetite for attention hasn't waned, but that's where he gets it now," an unnamed Trump confidant told the Washington Post."The networks don't carry his rallies. He doesn't get interviews anymore. He can't stand under the wing of Air Force One and gaggle [with reporters] for an hour."

He has also spent less time being challenged by aides and listening to opposition from political opponents, colleagues, and independent journalists, the Post reported.

Trump is now seeking a second term in the White House. On November 24, he announced his bid for president in 2024. Meanwhile, he continues to face mounting legal and political challenges.

The January 6 committee investigating Trump's role during the 2021 insurrection at the US Capitol is expected to recommend at least three criminal charges against the former president to the Department of Justice: insurrection, obstruction of an official proceeding, and conspiracy to defraud the US government.

Although the recommendations hold no legal weight, the committee hopes the action will influence Attorney General Merrick Garland to take action against the former president, Politico reported.

Trump is also still facing an investigation from the Department of Justice after the FBI, executing a search warrant, found classified documents that the former president took with him from the White House to his Mar-a-Lago home.

A representative for Trump did not immediately respond to Insider's request for comment.

Those Three Clever Dogs Trained To Drive A Car Provide Valuable Lessons For AI Self-Driving Cars – Forbes

Perhaps this dog would prefer driving the car, just like three dogs that were trained to do so.

We've all seen dogs traveling in cars, including how they like to peek out an open window and enjoy the fur-fluffing breeze and dwell in the cacophony of scents that blow along in the flavorful wind.

The life of a dog!

Dogs have also frequently been used as living props in commercials for cars, pretending in some cases to drive a car, such as the Subaru Barkleys advertising campaign that initially launched on TV in 2018 and continued in 2019, proclaiming that Subaru cars were officially "dog tested and dog approved."

Cute, clever, and memorable.

What you might not know or might not remember is that there were three dogs that were trained on driving a car and had their moment of unveiling in December of 2012 when they were showcased by driving a car on an outdoor track (the YouTube posted video has amassed millions of views).

Yes, three dogs named Monty, Ginny, and Porter were destined to become the first true car drivers on behalf of the entire canine family.

Monty at the time was an 18-month-old giant schnauzer cross, while the slightly younger Ginny at one year of age was a beardie whippet cross, and Porter was a youthful 10-month-old beardie.

All three were the brave astronauts of their era and were chosen to not land on the moon but be the first to actively drive a car, doing so with their very own paws.

I suppose we ought to call them dog-o-nauts.

You might be wondering whether it was all faked.

I can guess that some might certainly think so, especially those that already believe that the 1969 moon landing was faked, and thus dogs driving a car would presumably be a piece of cake to fake in comparison.

The dog driving feat was not faked.

Well, let's put it this way: the effort was truthful in the sense that the dogs were indeed able to drive a car, albeit with some notable constraints involved.

Let's consider some of the caveats:

Specially Equipped Driving Controls

First, the car was equipped with specialized driving controls to allow the dogs to work the driving actions needed to steer the car, use the gas, shift gears, and apply the brakes of the vehicle.

The front paws of the dog driver were able to reach the steering wheel and gear-stick, while the back paws used extension levers to reach the accelerator and brake pedals. When a dog sat in the driver's seat, they did so on their haunches.

Of course, I don't think any of us would quibble about the use of specialized driving controls. I hope that establishing physical mechanisms to operate the driving controls would seem quite reasonable and not out of sorts per se.

We should willingly concede that having such accouterments is perfectly okay, since it's not the access to the controls that ascertains driving acumen but instead the ability to appropriately use the driving controls that is the core consideration.

By the way, the fact that they operated the gear shift is something of a mind-blowing nature, particularly when you consider that most of today's teenage drivers have never worked a stick shift and have only ever used an automatic transmission.

Dogs surpass teenage drivers in the gear-stick realm, it seems.

Specialized Training On How To Drive

Secondly, as another caveat, the dogs were given about 8 weeks of training on how to drive a car.

I don't believe you can carp about the training time, and you need to realize that teenagers oftentimes receive weeks or even months of driving training prior to being able to drive a car on their own.

When you think about it, an 8-week or roughly two-month time frame to train a dog on nearly any complex task is remarkably short and illustrates how smart these dogs were.

One does wonder how many treats were given out during that training period, but I digress.

Focused On Distinct Driving Behaviors

Thirdly, the dogs learned ten distinct behaviors for purposes of driving.

For example, one behavior consisted of shifting the car into gear. Another behavior involved applying the brakes. And so on.

You might ponder this aspect for a moment.

How many distinct tasks are involved in the physical act of driving a car?

After some reflection, you'll realize that in some ways the driving of a car is extremely simplistic.

You need to steer, turning the wheel either to the left, right, or keep it straight ahead. In addition, you need to be able to use the accelerator, either pressing lightly or strongly, and you need to use the brake, either pressing lightly or strongly. Plus, we'll toss into the mix the need to shift gears.

In short, driving a car does not involve an exhaustive or complicated myriad of actions.

It makes sense that we've inexorably devolved car driving into a small set of simple chores.

Early versions of cars had many convoluted tasks that had to be manually undertaken. Over time, the automakers aimed to make car driving so simple that anyone could do it.

This aided the widespread adoption of cars by the populace as a whole and led to the blossoming of the automotive industry by being able to sell a car to essentially anyone.

Driving On Command

Fourth, and the most crucial of the caveats, the dogs were commanded by a trainer during the driving act.

I hate to say it, but this caveat is the one that regrettably undermines the wonderment and imagery of the dogs driving a car.

Sorry.

A trainer stood outside the car and yelled commands to the dogs, telling them to shift gears or to steer to the right, etc.

Okay, let's all agree that the dogs were actively driving the car, and working the controls of the car, and serving as the captain of the ship in that they alone were responsible for the car as it proceeded along on the outdoor track. They were even wearing seat-belts, for gosh sake.

That's quite amazing!

On the other hand, they were only responding to the commands being uttered toward them.

Thus, the dogs weren't driving the car in the sense that the dogs were presumably not gauging the roadway scenery nor mentally calculating what driving actions to undertake.

It would be somewhat akin to putting a blindfolded human driver into a driver's seat and asking them to drive, with you sitting next to the driver and telling them what actions to take.

Yes, technically, the person would be the driver of the car, though I believe we'd all agree they weren't driving in the purest sense of the meaning of driving.

By and large, driving a car in its fullest definition consists of being able to assess the scene around the vehicle and render life-or-death judgments about what driving actions to take. Those mental judgments are then translated into our physical manipulation of the driving controls, such as opting to hit the gas or slam on the brakes.

One must presume that the dogs were not capable of doing the full driving act and were instead like the blindfolded human driver that is merely reacting to commands given to them.

Does this mean that those dogs weren't driving the car?

I suppose it depends upon how strict you want to be about the definition of driving.

If you are a stickler, you would likely cry foul and assert that the dogs were not driving a car.

If you are someone with a bit more leniency, you probably would concede that the dogs were driving a car, and then under your breath and with a wee bit of a smile mutter that they were determinedly and doggedly driving that car.

Perhaps we shouldn't be overly dogmatic about it.

You might also be wondering whether a dog could really, in fact, drive a car, doing so in the fuller sense of driving, if the dog perchance was given sufficient training to do so.

In other words, would a dog have the mental capacity to grasp the roadway status and be able to convert that into suitable driving actions, upon which then the dog would work the driving controls?

At this juncture in the evolution of dogs, one would generally have to say no, namely that a dog would not be able to drive a car in a generalized way.

That being said, it would potentially be feasible to train a dog to drive a car in a constrained environment whereby the roadway scenery was restricted, and the dog did not need to broadly undertake a wholly unconstrained driving task.

Before I dig more deeply into this topic herein, please do not try placing your beloved dog into the driver's seat of your car and forcing them to drive.

Essentially, I'm imploring you, don't try this at home.

I mention this warning because I don't want people to suddenly get excited about tossing their dog into the driver's seat to see what happens.

Bad idea.

Don't do it.

As mentioned, the three driving dogs were specially trained, and drove only on a closed-off outdoor track, doing so under the strict supervision of their human trainers and with all kinds of safety precautions being undertaken.

The whole matter was accomplished by the Royal New Zealand Society for the Prevention of Cruelty to Animals (SPCA), done as a publicity stunt that aimed to increase the adoption of neglected or forgotten dogs.

It was a heartwarming effort with a decent basis, and please don't extrapolate the matter into any unbecoming and likely dangerous replicative efforts.

Speaking of shifting gears, one might wonder whether the dogs that drove a car might provide other insights to us humans.

Here's today's question: What lessons, if any, can be learned from dogs driving cars that could be useful for the advent of AI-based true self-driving cars?

Let's unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately, namely that in spite of those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
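For readers keeping score of the levels, here is a tiny sketch that encodes the taxonomy as just described. The enum is merely an illustrative convention paraphrasing this article, not an official SAE artifact.

```python
from enum import IntEnum

class DrivingAutomation(IntEnum):
    """Levels as characterized in this article (paraphrased)."""
    LEVEL_2 = 2  # semi-autonomous: ADAS add-ons, human co-shares the driving
    LEVEL_3 = 3  # semi-autonomous: more automation, human must stay attentive
    LEVEL_4 = 4  # true self-driving, but only in narrow, selective conditions
    LEVEL_5 = 5  # true self-driving anywhere; not yet achieved

def human_is_responsible(level: DrivingAutomation) -> bool:
    """At Level 2 or Level 3 the human remains the responsible party."""
    return level <= DrivingAutomation.LEVEL_3

print(human_is_responsible(DrivingAutomation.LEVEL_3))  # True
print(human_is_responsible(DrivingAutomation.LEVEL_4))  # False
```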

Self-Driving Cars And Spiritual-Moral Values

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

If that's the case, it seems like there's no opportunity for dogs to drive cars.

Yes, that's true, namely that if humans aren't driving cars then there seems little need or basis to ask dogs to drive cars.

But that's not what we can learn from the effort to teach dogs to drive a car.

Let's tackle some interesting facets that arose when dogs were tasked with driving a car:

Humans Giving Commands

First, recall that the dogs were responding to commands that were given to them while sitting at the steering wheel.

In a manner of speaking (pun intended), you could suggest that we humans will be giving commands to the AI driving systems that are at the wheel of true self-driving cars.

Using Natural Language Processing (NLP), akin to how you converse with Alexa or Siri, as a passenger in a self-driving car you will instruct the AI about various aspects of the driving.

In theory, though, you won't be telling the AI to hit the gas or pound on the brakes. Presumably, the AI driving system will be adept enough to handle all of the everyday driving aspects involved, and it's not your place to offer commands about doing the driving chore.

Instead, you'll tell the AI where you want to go.

You might divert the journey by suddenly telling the AI that you are hungry and want to swing through a local McDonald's or Taco Bell.

You might explain to the AI that it can drive leisurely and take you through the scenic part of town, since you aren't in a hurry and are a tourist in the town or city.

In some ways, you can impact the driving task, perhaps telling the AI that you are carsick and want it to slow down or not take curves so fast.

There are numerous open questions, as yet unresolved, about the interaction between the human passengers and the AI driving systems (see my detailed discussion at this link here).

For example, if you tell the AI to "follow that car," similar to what happens in movies or when you are trying to chase after someone, should the AI obediently do so, or should it question why you want to follow the other car?

We presumably don't want AI self-driving cars that are stalking others.
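To illustrate the kind of screening such concerns imply, here is a hypothetical sketch of a passenger-command filter. The intent names and policy rules are invented for illustration and are not drawn from any actual AI driving system.

```python
from typing import Optional

# Hypothetical passenger-command filter: everyday requests pass through,
# while ethically fraught ones trigger a clarifying question instead of
# blind obedience. All names and rules here are illustrative assumptions.

ALLOWED = {"set_destination", "drive_leisurely", "slow_down", "stop_for_food"}
NEEDS_REVIEW = {"follow_that_car"}  # could be legitimate, could be stalking

def handle_command(intent: str, argument: Optional[str] = None) -> str:
    if intent in ALLOWED:
        suffix = f" -> {argument}" if argument else ""
        return f"accepted: {intent}{suffix}"
    if intent in NEEDS_REVIEW:
        # The AI questions the request rather than obeying blindly.
        return f"clarify: why do you want to {intent.replace('_', ' ')}?"
    return f"rejected: {intent} is not a supported command"

print(handle_command("stop_for_food", "McDonald's"))
print(handle_command("follow_that_car"))
```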

Drive.ai raises $50M for retrofit kits to bring self-driving to existing fleets – TechCrunch

Self-driving technology startup Drive.ai has raised a $50 million Series B funding round, led by NEA and with participation from GGV and previous investors, including Series A lead Northern Light. The new funding will help the company pursue its evolved business strategy, which now focuses on creating retrofit kits that can be used to add self-driving capabilities to existing commercial and business vehicle fleets.

The Drive.ai approach to self-driving tech is based on development and use of deep learning for all aspects of the platform, which the company says will help it achieve better development pace, scalability and efficiency gains. Many others in the field use a hybrid approach, applying deep learning in certain areas but not others, but Drive.ai believes the true gains are best achieved by using it throughout the autonomous system stack.

The last time I spoke to Drive.ai, which was founded by a team from Stanford's AI lab, they weren't yet talking about retrofit kits, and were instead focused on developing a self-driving car that also had a strong focus on intelligent and intuitive communication with the surrounding world. In an interview about this funding, Drive.ai co-founder and CEO Sameep Tandon explains the shift, while noting that communication is still a core aspect of their focus.

"What we build at Drive.ai, you can think of it as an AI brain, and all those parts that are required to remove the human driver from the vehicle," he said. "So we focus on Level 4 autonomous driving. A huge part of that is once you remove the human driver from the vehicle, how these vehicles will interact with people in the real world, and build their trust and depict their intentions. That's something that we believe is absolutely critical to the safe deployment of autonomous technology."

Drive.ai's retrofit kits employ off-the-shelf hardware, including radar and LiDAR, and the startup focuses on building the autonomous software platform that brings all those aspects together to make the self-driving magic happen. With this funding, the company will focus on launching its first pilots, which it's aiming to start later this year, and on international expansion.

Retrofit options are definitely going to be attractive to any fleet operators who have a large pool of existing vehicles and aren't eager to throw out that investment and buy all-new cars when autonomy becomes the norm. But retrofits are typically costly and difficult, so I asked Tandon just how plug-and-play Drive.ai's kits will actually be.

"The retrofit kits are intended to be for business fleets, so it's not intended to be something a consumer can install; it'll take a little bit of integration," he said. "But it's intended to make it relatively quick to retrofit a large fleet."

Alongside this funding, Drive.ai is also adding two new directors to its board: NEA chairman and head of Asia Carmen Chang, and Coursera co-founder and Google and Baidu AI alum Andrew Ng. Both should help Drive.ai with market expansion plans, particularly thanks to their experience with China, and Ng's AI bona fides are very highly regarded across the industry.

See the original post here:

Drive.ai raises $50M for retrofit kits to bring self-driving to existing fleets - TechCrunch

Augusta Health has saved 282 lives with AI-infused sepsis early warning system – Healthcare IT News

In Virginia, the statewide mortality rate for sepsis was 13.2% in 2016. Sepsis is the body's life-threatening response to infection that can lead to tissue damage and organ failure. In the U.S., 1.5 million people develop sepsis each year, and about 17% of those die. Early detection of sepsis is critical to decrease mortality.

THE PROBLEM

Clinical and IT staff at Augusta Health, an independent, community-owned, not-for-profit hospital in Virginia, knew from studies that, though sepsis treatments are available in a general hospital setting, they are rarely completed in a timely manner.

"Our nurses are highly trained and are skilled at detecting early symptoms of sepsis based on standard indicators, but they are also very busy," said Penny Cooper, a data scientist at Augusta Health. "Aware of how many patients our nurses care for and the many tasks nurses juggle at once, leadership formed a sepsis taskforce with the goal of providing staff with the resources to identify symptoms of sepsis sooner."

PROPOSAL

For sepsis, it's all about early detection. Mortality from sepsis increases by as much as 8% for every hour that treatment is delayed.
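
As a back-of-envelope illustration only (the article does not say whether that 8% figure is additive or compounding per hour), here is what the relative risk looks like if it compounds:

```python
# Back-of-envelope only: assumes the 8%-per-hour increase compounds,
# which the article does not specify.
for hours_delayed in (1, 3, 6):
    relative_risk = 1.08 ** hours_delayed
    print(f"{hours_delayed}h delay -> ~{relative_risk:.2f}x baseline mortality risk")
# 1h -> ~1.08x, 3h -> ~1.26x, 6h -> ~1.59x
```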

"By identifying sepsis early in the process, we have a much better chance of treating the infection before it goes too far," Cooper said. "In addition, early identification of sepsis remains the greatest barrier to compliance with recommended evidence-based bundles."

Penny Cooper, Augusta Health

So Augusta Health decided to develop a system with Vocera Communications that provides an extra set of eyes, automatically reviewing data. By reviewing the data electronically, staff is able to recognize symptoms earlier and provide automated alerts to the bedside nurses.

MARKETPLACE

On the clinical communications technology front, vendors include Avaya, Halo, HipLink Software, Mobile Heartbeat, PatientSafe Solutions, PerfectServe, Spok, Telmediq and Vocera.

MEETING THE CHALLENGE

Clinicians are familiar with the standard SIRS (Systemic Inflammatory Response Syndrome) criteria: temperature >38°C; heart rate >90; respiratory rate >20; abnormal white blood cell count.

"But we wanted to see if we could increase the sensitivity by adding additional variables, so we began with the inpatient population and a retrospective study," Cooper explained.

"We started with standard SIRS criteria but also added Mean Arterial Pressure and Shock Index," she added. "By adding the additional variables MAP and Shock Index as categorical variables to the logistic regression analysis, we were able to increase the overall c-statistic by 0.07. The c-statistic is a measurement of how well your test performs."
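
As a hedged sketch of the kind of comparison Cooper describes, the snippet below fits a logistic regression with and without the extra variables and reports the c-statistic (equivalently, the ROC AUC) for each. The features, data, and coefficients are synthetic inventions for illustration; they will not reproduce the hospital's 0.07 gain exactly.

```python
# Illustrative only: compare a SIRS-only model against SIRS + MAP + Shock
# Index by c-statistic (ROC AUC). Data and coefficients are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X_sirs = rng.normal(size=(n, 4))    # temp, heart rate, resp. rate, WBC
X_extra = rng.normal(size=(n, 2))   # MAP and Shock Index categories
logits = X_sirs @ [0.8, 0.6, 0.5, 0.4] + X_extra @ [0.7, 0.9]
y = rng.random(n) < 1 / (1 + np.exp(-logits))  # synthetic sepsis labels

for name, X in [("SIRS only", X_sirs),
                ("SIRS + MAP + Shock Index", np.hstack([X_sirs, X_extra]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: c-statistic = {auc:.2f}")
```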

Staff run an automated process that collects information from the Rauland Bed System and clinical data from the EHR for all current inpatients.

"The data then is compiled and analyzed, and a score is assigned based on the results of the retrospective study," Cooper noted. "This occurs every hour for every inpatient. Then, for patients with a score greater than the cutoff, an alert is sent to the attending and charge nurse on their Vocera devices."

The sepsis communication system was developed in-house and is an example of interoperability between different healthcare information systems. The process runs in the background, so no one actually interacts with it unless their patient is alerted and requires screening.
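
A simplified sketch of what such an hourly screen-and-alert loop could look like follows; every function here is a hypothetical stand-in for the EHR, Rauland, and Vocera integrations described above, not Augusta Health's actual code, and the cutoff value is invented.

```python
# Hypothetical stand-ins for the integrations described in the article;
# not Augusta Health's actual code.
import random
import time

SCORE_CUTOFF = 0.65  # invented; the real cutoff came from the retrospective study

def fetch_inpatients():
    """Stand-in for pulling current inpatients from the EHR and bed system."""
    return [{"id": "MRN001", "attending": "RN-Smith", "charge_nurse": "RN-Jones"}]

def score_patient(patient):
    """Stand-in for the logistic-regression sepsis score."""
    return random.random()

def send_vocera_alert(recipient, patient_id, score):
    """Stand-in for a push to a Vocera communication device."""
    print(f"ALERT to {recipient}: patient {patient_id}, sepsis score {score:.2f}")

def screen_all_inpatients():
    for patient in fetch_inpatients():
        score = score_patient(patient)
        if score > SCORE_CUTOFF:  # alert both the attending and the charge nurse
            send_vocera_alert(patient["attending"], patient["id"], score)
            send_vocera_alert(patient["charge_nurse"], patient["id"], score)

if __name__ == "__main__":
    while True:
        screen_all_inpatients()
        time.sleep(3600)  # the article says every inpatient is scored hourly
```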

"By using artificial intelligence capabilities, we are able to screen 100% of our inpatient population and deliver results directly to the caregiver wherever they are in the hospital, without any manual intervention," Cooper explained. "Core measure requirements, along with patient impact and historical volume trends, also make the introduction of this tool relevant and timely."

RESULTS

The sepsis mortality rate at Augusta Health now is 4.8%, compared with 13.2% at the state level.

"The work done by our teams at Augusta Health to reduce mortality rates from sepsis has been a collaborative effort," Cooper said.

"We automated a process to screen all of our patients without any manual intervention, but most importantly, we have saved lives. By subtracting the actual mortality from the expected mortality rate, we estimate a total of 282 lives have been saved."

Health Quality Innovators named the hospital the Health Quality Innovator for Virginia in the category of Data-driven Care in 2018. U.S. News & World Report recognized Augusta Health as a Best Hospital in the Shenandoah Valley for 2019-20.

ADVICE FOR OTHERS

"System interoperability is key to ensure that the right data gets to the right clinicians at the right time and without any manual effort," Cooper advised. "While many EHRs may alert the clinician of sepsis within the medical record, this process delivers the alert to the communications device of the bedside nurse."

"With the input from clinicians and quality staff, the process was not overly difficult to accomplish," she added. Staff developed the system using off-the-shelf tools, including SQL Server and SQL Server Integration Services.

"The use of the model within the study facility has resulted in a culture change," she said. "A review at a daily safety huddle prompts proactive rounding by the nursing directors. This process provides additional support that nursing staff may need to get patients to the appropriate level of care. For the immediate future, we plan to continue the use of the model within our facility, re-evaluate it from both an operational and clinical standpoint, and modify as appropriate."

Twitter: @SiwickiHealthIT | Email the writer: bill.siwicki@himssmedia.com | Healthcare IT News is a HIMSS Media publication.

See the article here:

Augusta Health has saved 282 lives with AI-infused sepsis early warning system - Healthcare IT News

Menten AI's combination of buzzword bingo brings AI and quantum computing to drug discovery – TechCrunch

Menten AI has an impressive founding team and a pitch that combines some of the hottest trends in tech to pursue one of the biggest problems in healthcare: new drug discovery. The company is also $4 million richer with a seed investment from firms including Uncork Capital and Khosla Ventures to build out its business.

Menten AI's pitch to investors was the combination of quantum computing and machine learning to discover new drugs that sit between small molecules and large biologics, according to the company's co-founder Hans Melo.

A graduate of the Y Combinator accelerator, which also participated in the round alongside Social Impact Capital*, Menten AI looks to design proteins from scratch. It's a heavier lift than some might expect because, as Melo said in an interview, "it takes a lot of work to make an actual drug."

Menten AI is working with peptides, which are strings of amino acid chains similar to proteins that have the potential to slow aging, reduce inflammation and get rid of pathogens in the body.

"As a drug modality [peptides] are quite new," says Melo. "Until recently it was really hard to design them computationally, and people tried to focus on genetically modifying them."

Peptides have the benefit of getting through membranes and into cells where they can combine with targets that are too large for small molecules, according to Melo.

Most drug targets are not addressable with either small molecules or biologics, according to Melo, which means there's a huge untapped potential market for peptide therapies.

Menten AI is already working on a COVID-19 therapeutic, although the company's young chief executive declined to disclose too many details about it. Another area of interest is in neurological disorders, where the founding team members have some expertise.

Image of peptide molecules. Image Courtesy: D-Wave

While Menten AI's targets are interesting, the approach that the company is taking, using quantum computing to potentially drive down the cost and accelerate the time to market, is equally compelling for investors.

It's also unproven. Right now, there isn't a quantum advantage to using the novel computing technology versus traditional computing, something Melo freely admits.

"We're not claiming a quantum advantage, but we're not claiming a quantum disadvantage," is the way the young entrepreneur puts it. "We have come up with a different way of solving the problem that may scale better. We haven't proven an advantage."

Still, the company is an early indicator of the kinds of services quantum computing could offer, and it's with that in mind that Menten AI partnered with some of the leading independent quantum computing companies, D-Wave and Rigetti Computing, to work on applications of their technology.

The emphasis on quantum computing also differentiates it from larger publicly traded competitors like Schrödinger and Codexis.

So does the pedigree of its founding team, according to Uncork Capital investor Jeff Clavier. "It's really the unique team that they formed," Clavier said of his decision to invest in the early-stage company. "There's Hans the CEO, who is more on the quantum side; there's Tamas [Gorbe] on the bio side; and there's Vikram [Mulligan], who developed the research. It's kind of a unique, fantastic team that came together to work on the opportunity."

Clavier has also acknowledged the possibility that it might not work.

"Can they really produce anything interesting at the end?" he asked. "It's still an early-stage company, and we may fall flat on our face, or they may come up with really new ways to make new peptides."

It's probably not a bad idea to take a bet on Melo, who worked with Mulligan, a researcher from the Flatiron Institute focused on computational biology, to produce some of the early research into the creation of new peptides using D-Wave's quantum computing.

Novel peptide structures created using D-Wave's quantum computers. Image courtesy: D-Wave

While Melo and Mulligan were the initial researchers working on the technology that would become Menten AI, Gorbe was added to the founding team to give the company some exposure to the world of chemistry and enzymatic applications for its new virtual protein manufacturing technology.

The gamble paid off in the form of pilot projects (also undisclosed) that focus on the development of enzymes for agricultural applications and pharmaceuticals.

"At the end of the day what they're doing is they're using advanced computing to figure out what is the optimal placement of those clinical compounds in a way that is less based on those sensitive tests and more bound on those theories," said Clavier.

*This post was updated to add that Social Impact Capital invested in the round. Khosla, Social Impact, and Uncork each invested $1 million into Menten AI.

More here:

Menten AI's combination of buzzword bingo brings AI and quantum computing to drug discovery - TechCrunch

AI likes to do bad things. Here’s how scientists are stopping it from scamming you – SYFY WIRE

The robots aren't taking over yet, but sometimes, they can get a little out of control.

AI apparently has a bias toward making unethical choices. This tends to spike in commercial situations, and nobody wants to get scammed by a bot. Some types of artificial intelligence even choose disproportionately when it comes to things like setting insurance prices for particular customers (yikes). Though there are many potential strategies a program can choose from, it needs to be prevented from going straight to the unethical ones. An international team of scientists has now come up with a formula that explains why this happens and is now working to combat crime by computer brain.

"In an environment in which decisions are increasingly made without human intervention, there is therefore a strong incentive to know under what circumstances AI systems might adopt unethical strategies," the scientists said in a study recently published in Royal Society Open Science.

Even if there aren't that many possible unethical strategies an AI program can pick up, that doesn't lessen the possibility of it picking something shady. Figuring out prices for car insurance can be tricky, since things like past accidents and points on your license have to be factored in. In a world where we sometimes communicate with robots more than with humans, bots can be convenient. The problem is, in situations where money is involved, they can do things like apply price-raising penalties you don't deserve to your insurance policy (of course, anyone would be thrilled if the unlikely opposite happened).

The chance of AI screwing up could mean huge consequences for a company, everything from fines to lawsuits. With thinking robots come robot ethics. You're probably wondering why unethical choices can't just be eliminated completely. That would happen in an ideal sci-fi world, but the scientists believe the best that can be done is limiting the percentage of unethical choices to as few as possible. There is still the problem of the unethical optimization principle.

"If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk," is how the team describes the principle. It isn't that robots are starting to turn evil.
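
The principle is easy to see in a toy simulation: if even a small fraction of available strategies is unethical but carries a slight return edge, a naive return-maximizer picks those strategies far more often than their share of the pool would suggest. All numbers below are invented for illustration.

```python
# Toy illustration of the unethical optimization principle; numbers invented.
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_strategies, frac_unethical, edge = 10_000, 100, 0.05, 0.5
picked_unethical = 0

for _ in range(n_trials):
    returns = rng.normal(0.0, 1.0, n_strategies)
    unethical = rng.random(n_strategies) < frac_unethical
    returns[unethical] += edge  # small advantage from cutting corners
    picked_unethical += bool(unethical[np.argmax(returns)])

print(f"Unethical strategies: {frac_unethical:.0%} of the pool")
print(f"Chosen by the optimizer: {picked_unethical / n_trials:.0%} of trials")
# Typically prints a share several times the 5% base rate.
```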

The AI actually doesn't make unethical choices consciously. We're not at Westworld levels yet, but making a bot less likely to choose wrong will make sure we don't go there.

Excerpt from:

AI likes to do bad things. Here's how scientists are stopping it from scamming you - SYFY WIRE

Google is using AI to create stunning landscape photos using Street View imagery – The Verge

Google's latest artificial intelligence experiment is taking in Street View imagery from Google Maps and transforming it into professional-grade photography through post-processing, all without a human touch. Hui Fang, a software engineer on Google's Machine Perception team, says the project uses machine learning techniques to train a deep neural network to scan thousands of Street View images in California for shots with impressive landscape potential. The software then mimics the workflow of a professional photographer to turn that imagery into an aesthetically pleasing panorama.

Google is training AI systems to perform subjective tasks like photo editing

The research, posted to the pre-print server arXiv earlier this week, is a great example of how AI systems can be trained to perform tasks that aren't binary, with a right or wrong answer, but more subjective, like in the fields of art and photography. Doing this kind of aesthetic training with software can be labor-intensive and time-consuming, as it has traditionally required labeled data sets. That means human beings have to manually pick out which lighting effects or saturation filters, for example, result in a more aesthetically pleasing photograph.

Fang and his team used a different method. They were able to train the neural network quickly and efficiently to identify what most would consider superior photographic elements using what's known as a generative adversarial network. This is a relatively new and promising technique in AI research that pits two neural networks against one another and uses the results to improve the overall system.

In other words, Google had one AI photo editor attempt to fix professional shots that had been randomly tampered with using an automated system that changed lighting and applied filters. Another model then tried to distinguish between the edited shot and the original professional image. The end result is software that understands generalized qualities of good and bad photographs, which allows it to then be trained to edit raw images to improve them.
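
In code, that adversarial setup looks roughly like the sketch below: an "editor" network restores randomly degraded photos while a critic learns to tell restorations from untouched originals. This is a generic, toy-sized GAN-style loop in PyTorch, not Google's actual model or data.

```python
# Generic GAN-style sketch of the setup described above; toy-sized and
# trained only on random tensors. Not Google's actual model.
import torch
import torch.nn as nn

editor = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
critic = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                       nn.Flatten(), nn.Linear(16 * 16 * 16, 1))

opt_e = torch.optim.Adam(editor.parameters(), lr=2e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.rand(8, 3, 32, 32)  # stand-in for professional photos
    # Random "tampering": darken each image by a random factor
    degraded = (real * torch.empty_like(real).uniform_(0.3, 1.0)).clamp(0, 1)

    # Critic: separate untouched photos from the editor's restorations
    fake = editor(degraded).detach()
    loss_c = (bce(critic(real), torch.ones(8, 1)) +
              bce(critic(fake), torch.zeros(8, 1)))
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    # Editor: restore degraded photos well enough to fool the critic
    loss_e = bce(critic(editor(degraded)), torch.ones(8, 1))
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()
```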

To test whether its AI software was actually producing professional-grade images, Fang and his team used a Turing-test-like experiment. They asked professional photographers to grade the photos the network produced on a quality scale, while mixing in shots taken by humans. Around two out of every five photos received a score on par with that of a semi-pro or pro, Fang says.

"The Street View panoramas served as a testing bed for our project," Fang says. "Someday this technique might even help you to take better photos in the real world." The team compiled a gallery of photos its network created out of Street View images, and clicking on any one will pull up the section of Google Maps that it captures. Fang concludes with a neat thought experiment about capturing photos in the real world: "Would you make the same decision if you were there holding the camera at that moment?"

See the article here:

Google is using AI to create stunning landscape photos using Street View imagery - The Verge

Reduce background noise in Microsoft Teams meetings with AI-based noise suppression – Microsoft

Whether it be multiple meetings occurring in a small space, children playing loudly nearby, or construction noise outside of your home office, unwanted background noise can be really distracting in Teams meetings. We are excited to announce that users will have the ability to remove unwelcome background noise during their calls and meetings with our new AI-based noise suppression option.

Users can enable this helpful new feature by adjusting their device settings before their call or meeting and selecting "High" in the "Noise suppression" drop-down (note this feature is currently only supported in the Teams Windows desktop client). See this support article for details about how to turn it on and more here: https://aka.ms/noisesuppression.

Our new noise suppression feature works by analyzing an individual's audio feed and uses specially trained deep neural networks to filter out noise and only retain speech. While traditional noise suppression algorithms can only address simple stationary noise sources, such as a consistent fan noise, our AI-based approach learns the difference between speech and unnecessary noise and is able to suppress various non-stationary noises, such as keyboard typing or food wrapper crunching. With the increased work from home due to the COVID-19 pandemic, noises such as vacuuming, your child's conflicting school lesson, or kitchen noises have become more common but are effectively removed by our new AI-based noise suppression, exemplified in the video below.
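
One common way to implement this kind of speech-preserving suppression (not necessarily Microsoft's exact architecture) is mask-based enhancement: transform the audio into a spectrogram, have a network predict a per-bin speech mask, apply the mask, and resynthesize the waveform. A minimal, untrained sketch of that pattern:

```python
# Minimal mask-based enhancement sketch: STFT -> predict a per-bin mask
# -> apply mask -> inverse STFT. The tiny network is untrained and purely
# illustrative; this is a common pattern, not Microsoft's exact model.
import torch
import torch.nn as nn

N_FFT, HOP = 512, 128
mask_net = nn.Sequential(
    nn.Linear(N_FFT // 2 + 1, 256), nn.ReLU(),
    nn.Linear(256, N_FFT // 2 + 1), nn.Sigmoid(),  # mask in [0, 1] per bin
)

def suppress_noise(waveform: torch.Tensor) -> torch.Tensor:
    window = torch.hann_window(N_FFT)
    spec = torch.stft(waveform, N_FFT, HOP, window=window, return_complex=True)
    mag = spec.abs().transpose(-1, -2)       # (frames, freq_bins)
    mask = mask_net(mag)                     # learned speech-vs-noise weights
    cleaned = spec * mask.transpose(-1, -2)  # keep speech, attenuate noise
    return torch.istft(cleaned, N_FFT, HOP, window=window)

noisy = torch.randn(16000)                   # stand-in for 1 s of 16 kHz audio
print(suppress_noise(noisy).shape)
```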

The AI-based noise suppression relies on machine learning (ML) to learn the difference between clean speech and noise. The key is to train the ML model on a representative dataset to ensure it works in all situations our Teams customers are experiencing. There needs to be enough diversity in the data set in terms of the clean speech, the noise types, and the environments from which our customers are joining online meetings.

To achieve this dataset diversity, we have created a large dataset with approximately 760 hours of clean speech data and 180 hours of noise data. To comply with Microsoft's strict privacy standards, we ensured that no customer data is being collected for this data set. Instead, we either used publicly available data or crowdsourcing to collect specific scenarios. For clean speech we ensured that we had a balance of female and male speech, and we collected data from 10+ languages, which also include tonal languages, to ensure that our model will not change the meaning of a sentence by distorting the tone of the words. For the noise data we included 150 noise types to ensure we cover the diverse scenarios that our customers may run into, from keyboard typing to toilet flushing or snoring. Another important aspect was to include emotions in our clean speech so that expressions like laughter or crying are not suppressed. The characteristics of the environment from which our customers are joining their online Teams meetings have a strong impact on the speech signal as well. To capture that diversity, we trained our model with data from more than 3,000 real room environments and more than 115,000 synthetically created rooms.

Since we use deep learning, it is important to have a powerful model training infrastructure. We use Microsoft Azure to allow our team to develop improved versions of our ML model. Another challenge is that the extraction of the original clean speech from the noise needs to be done in a way that the human ear perceives as natural and pleasant. Since there are no objective metrics that are highly correlated with human perception, we developed a framework that allowed us to send the processed audio samples to crowdsourcing vendors, where human listeners rated their audio quality on a one-to-five-star scale to produce mean opinion scores (MOS). With these human ratings, we were able to develop a new perceptual metric which, together with the subjective human ratings, allowed us to make fast progress on improving the quality of our deep learning models.
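
The MOS itself is a simple average: each listener rates a clip from one to five stars, and the scores are averaged per clip. A toy sketch, with invented ratings:

```python
# Toy MOS computation; the clip names and ratings are invented.
from statistics import mean

ratings = {
    "model_v1_sample": [3, 4, 3, 2, 4, 3],
    "model_v2_sample": [4, 5, 4, 4, 5, 4],
}
for clip, scores in ratings.items():
    print(f"{clip}: MOS = {mean(scores):.2f} from {len(scores)} listeners")
# A higher MOS on the same clips suggests the newer model sounds better.
```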

To advance the research in this field we have also open-sourced our dataset and the perceptual quality crowdsourcing framework. This has been the basis of two competitions we hosted as part of the Interspeech 2020 and ICASSP 2021 conferences as outlined here: https://www.microsoft.com/en-us/research/dns-challenge/home/

Finally, we ensured that our deep learning model could run efficiently on the Teams client in real time. By optimizing for human perception, we were able to achieve a good trade-off between quality and complexity, which ensures that most Windows devices our customers are using can take advantage of our AI-based noise suppression. Our team is currently working on bringing this feature to our Mac and mobile platforms as well.

AI-based noise suppression is an example of how our deep learning technology has a profound impact on our customers' quality of experience.

View original post here:

Reduce background noise in Microsoft Teams meetings with AI-based noise suppression - Microsoft

This know-it-all AI learns by reading the entire web nonstop – MIT Technology Review

This is a problem if we want AIs to be trustworthy. That's why Diffbot takes a different approach. It is building an AI that reads every page on the entire public web, in multiple languages, and extracts as many facts from those pages as it can.

Like GPT-3, Diffbots system learns by vacuuming up vast amounts of human-written text found online. But instead of using that data to train a language model, Diffbot turns what it reads into a series of three-part factoids that relate one thing to another: subject, verb, object.

Pointed at my bio, for example, Diffbot learns that Will Douglas Heaven is a journalist; Will Douglas Heaven works at MIT Technology Review; MIT Technology Review is a media company; and so on. Each of these factoids gets joined up with billions of others in a sprawling, interconnected network of facts. This is known as a knowledge graph.
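
A minimal sketch of what storing and querying such subject-verb-object factoids might look like, using the triples from the example above (the data structure is illustrative, not Diffbot's implementation):

```python
# Minimal knowledge-graph sketch: store subject-verb-object factoids and
# answer simple queries. Illustrative only, not Diffbot's implementation.
from collections import defaultdict

graph = defaultdict(list)

def add_fact(subject, verb, obj):
    graph[subject].append((verb, obj))

add_fact("Will Douglas Heaven", "is", "a journalist")
add_fact("Will Douglas Heaven", "works at", "MIT Technology Review")
add_fact("MIT Technology Review", "is", "a media company")

def facts_about(subject):
    return [f"{subject} {verb} {obj}" for verb, obj in graph[subject]]

print(facts_about("Will Douglas Heaven"))
# ['Will Douglas Heaven is a journalist',
#  'Will Douglas Heaven works at MIT Technology Review']
```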

Knowledge graphs are not new. They have been around for decades, and were a fundamental concept in early AI research. But constructing and maintaining knowledge graphs has typically been done by hand, which is hard. This also stopped Tim Berners-Lee from realizing what he called the semantic web, which would have included information for machines as well as humans, so that bots could book our flights, do our shopping, or give smarter answers to questions than search engines.

A few years ago, Google started using knowledge graphs too. Search for Katy Perry and you will get a box next to the main search results telling you that Katy Perry is an American singer-songwriter with music available on YouTube, Spotify, and Deezer. You can see at a glance that she is married to Orlando Bloom, she's 35 and worth $125 million, and so on. Instead of giving you a list of links to pages about Katy Perry, Google gives you a set of facts about her drawn from its knowledge graph.

But Google only does this for its most popular search terms. Diffbot wants to do it for everything. By fully automating the construction process, Diffbot has been able to build what may be the largest knowledge graph ever.

Alongside Google and Microsoft, it is one of only three US companies that crawl the entire public web. "It definitely makes sense to crawl the web," says Victoria Lin, a research scientist at Salesforce who works on natural-language processing and knowledge representation. "A lot of human effort can otherwise go into making a large knowledge base." Heiko Paulheim at the University of Mannheim in Germany agrees: "Automation is the only way to build large-scale knowledge graphs."

To collect its facts, Diffbot's AI reads the web as a human would, but much faster. Using a super-charged version of the Chrome browser, the AI views the raw pixels of a web page and uses image-recognition algorithms to categorize the page as one of 20 different types, including video, image, article, event, and discussion thread. It then identifies key elements on the page, such as headline, author, product description, or price, and uses NLP to extract facts from any text.

Every three-part factoid gets added to the knowledge graph. Diffbot extracts facts from pages written in any language, which means that it can answer queries about Katy Perry, say, using facts taken from articles in Chinese or Arabic even if they do not contain the term Katy Perry.

Browsing the web like a human lets the AI see the same facts that we see. It also means it has had to learn to navigate the web like us. The AI must scroll down, switch between tabs, and click away pop-ups. "The AI has to play the web like a video game just to experience the pages," says Tung.

Diffbot crawls the web nonstop and rebuilds its knowledge graph every four to five days. According to Tung, the AI adds 100 million to 150 million entities each month as new people pop up online, companies are created, and products are launched. It uses more machine-learning algorithms to fuse new facts with old, creating new connections or overwriting out-of-date ones. Diffbot has to add new hardware to its data center as the knowledge graph grows.

Researchers can access Diffbot's knowledge graph for free. But Diffbot also has around 400 paying customers. The search engine DuckDuckGo uses it to generate its own Google-like boxes. Snapchat uses it to extract highlights from news pages. The popular wedding-planner app Zola uses it to help people make wedding lists, pulling in images and prices. NASDAQ, which provides information about the stock market, uses it for financial research.

Adidas and Nike even use it to search the web for counterfeit shoes. A search engine will return a long list of sites that mention Nike trainers. But Diffbot lets these companies look for sites that are actually selling their shoes, rather than just talking about them.

For now, these companies must interact with Diffbot using code. But Tung plans to add a natural-language interface. Ultimately, he wants to build what he calls a universal factoid question answering system: an AI that could answer almost anything you asked it, with sources to back up its response.

Tung and Lin agree that this kind of AI cannot be built with language models alone. But better yet would be to combine the technologies, using a language model like GPT-3 to craft a human-like front end for a know-it-all bot.

Still, even an AI that has its facts straight is not necessarily smart. "We're not trying to define what intelligence is, or anything like that," says Tung. "We're just trying to build something useful."

See the article here:

This know-it-all AI learns by reading the entire web nonstop - MIT Technology Review

Facial recognition needs auditing and ethics standards to be safe, AI Now bias critic argues – Biometric Update

The artificial intelligence community needs to begin developing the vocabulary to define and clearly explain the harms the technology can cause, in order to rein in abuses with facial biometrics, AI Now Institute Technology Fellow Deb Raji argues in a TWIML AI podcast.

The podcast, on "How External Auditing is Changing the Facial Recognition Landscape with Deb Raji," is hosted by Sam Charrington, who asks about the genesis of the audits Raji and colleagues have performed of biometric facial recognition systems, industry response, and the ethical way forward.

Raji describes her journey through academia and an internship with Clarifai to taking up the cause of algorithmic bias and connecting with Joy Buolamwini after watching her TED Talk. The work Raji did with others in the community gained prominence with Gender Shades, and concepts that emerged from that and similar projects have been built into engineering practices at Google.

Raji characterizes facial recognition as a very immature technology, one that the Gender Shades study exposed as not working.

"It really sort of stemmed from this desire to identify the problem in a consistent way and communicate it in a consistent way," Raji says of the early work delineating the problem of demographic differentials in facial recognition.

Raji won an AI Innovation Award, along with Buolamwini and Timnit Gebru, for their work in 2019.

The problem was hardly understood at all when Raji first began bringing it up, and even now seems to be fully comprehended by few in the community, as Raji says is demonstrated by a recent Twitter argument between Yann LeCun and Gebru. Raji comments that the connection between research efforts like LeCun's and products should be very clear to him. Raji also pans his downplaying of what she calls procedural negligence in not including people of color in testing.

Representation does not necessarily mean that the training dataset demographics mirror the society the model is being deployed in. Raji notes that if 10 percent of the people in a certain area have dark skin, then models used there need to be trained with enough images of people with dark skin to ensure that the model works for that 10 percent, which may require a much higher ratio in the training data.
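
A little invented arithmetic makes the distinction concrete: sampling a training set to mirror a 10 percent population share is not the same as collecting enough examples for the model to work for that group.

```python
# Invented numbers illustrating the point above: mirroring a 10% population
# share may leave far too few training examples for that group.
population_share = 0.10
training_set_size = 5_000
min_examples_per_group = 2_000  # hypothetical floor for acceptable error

proportional = int(training_set_size * population_share)
required_share = min_examples_per_group / training_set_size
print(f"Proportional sampling: {proportional} examples for the group")
print(f"Share needed to hit the floor: {required_share:.0%} of the training set")
# In this invented scenario the group needs 40% of the data, not 10%.
```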

Raji also talks during the podcast about how the results of the follow-up testing show the need for targeted pressure to force companies to address the gaps in their demographic performance. The limits of auditing are also explored in the conversation.

The need to have information specific to implementations is discussed in the context of facial recognition for law enforcement uses; Raji suggests the technology should be taken off the market in the absence of that information.

Raji says that as some facial recognition systems have reduced or practically eliminated demographic disparities and other accuracy issues, the problem of the technology's weaponization has become more pressing. She notes that people guard their fingerprint data much more carefully than their facial images. In addition to misuse by law enforcement, sometimes out of ignorance about the technology and sometimes deliberate, Raji points to the weaponization of the technology in deployments like the Atlantic Plaza Towers in Brooklyn.

The bias issue exposes the complexity of the problem, and the myth that facial recognition is like magic, Raji suggests. While the necessary conversations are held, the technology should not be used, according to Raji. To make it safe, Raji suggests that technical standards like those supplied by NIST need to be supplemented with others that include considerations of ethics, like those produced or discussed by ISO, IEEE, and the WEF.

Though Raji presents the problems she is concerned with as systemic, she acknowledges that some facial recognition applications are benign.

"No-one's threatening your Snapchat filter," Raji states.

View post:

Facial recognition needs auditing and ethics standards to be safe, AI Now bias critic argues - Biometric Update

Watson Won Jeopardy, but Is It Smart Enough to Spin Big Blue’s AI Into Green? – WIRED

View post:

Watson Won Jeopardy, but Is It Smart Enough to Spin Big Blue's AI Into Green? - WIRED