Monthly Archives: August 2021

Letters to the Editor: No, LAPD, it wasn’t ‘Antifa’ that protested anti-vaxxers – Los Angeles Times

Posted: August 22, 2021 at 4:03 pm

To the editor: One must really try not to giggle at the Los Angeles Police Department's use of the term 'Antifa' to describe people who showed up at Los Angeles City Hall on Saturday to protest against a group of anti-vaccine and anti-mask demonstrators.

Antifa is not an organization. It is a concept, even a trend at times, but there are no meetings, no rosters, no dues or fees and no secret handshakes.

This lack of serious intelligence by the LAPD is astonishing. Who gives these people a gun and badge? Santa Claus, the Lone Ranger and the Bogey Man all have more standing in reality than antifa.

The LAPD would have us believe that it is an organization. How embarrassing.

Robin Doyno, Mar Vista

..

To the editor: Can we please call anti-vaxxers what they truly are: virus lovers?

By refusing to get a vaccine or wear a mask, they signal their desire to keep the coronavirus around so it can kill people, destroy the economy, put companies out of business and wreak havoc on everyone's life. They do this under the false flag of 'freedom,' a word with no fixed definition.

This pandemic won't be over until everyone gets in the fight, and I'm not talking about the one on City Hall's steps.

Nancy Zaman, Beverly Hills



Jason Aldean's wife won't shy away from politics amid belief her followers are afraid of cancel culture – Fox News

Posted: at 4:03 pm

Jason Aldean's wife, Brittany, is hoping to be a voice for people who feel they can't share their political opinions out of fear of online retribution.

The wife of the country star has been politically outspoken on her social media many times in the past, often leading to controversy and debate among her followers. However, she revealed recently that her ability to speak out publicly has actually helped a number of her followers who reach out to her privately.

When the 33-year-old mother of two conducted a question and answer segment on her Instagram Stories on Thursday night, she was confronted with the question, "What helped you be more open about your political views on here when most people don't agree?"

That's when she noted that she gets more private support in her DMs for her politically charged posts than her followers may realize, and certainly more than they can see publicly.


Jason Aldean's wife, Brittany, opened up about sharing her political views on social media. (Jeff Kravitz/FilmMagic)

"I personally don't give a damn if people don't agree with me. I think it's important now more than ever to stand for what you believe, even if it 'goes against the grain.' Do your research, and form your own opinion - speak out if you wish. But most importantly, don't bully people who feel differently than you," she added.

While her public-facing social media may be a hotbed of political debate among her fellow Republicans and her Democrat followers, it seems Brittany Aldean gets enough support from DMs to keep her not only engaged politically but fighting for those who feel cancel culture has stripped away their voice.

In fact, her comments echo those she previously made after catching immense backlash from supporters of Joe Biden around the time of the 2020 presidential election. She posted a video at the time showing off her blue sweatshirt with Donald Trump's name on it next to an American flag with a caption that reads "... STILL MY PRESIDENT."

She followed that up with a note explaining that the backlash on her page led many to privately message her their support for Trump. She explained then, as she did last week, that she plans to keep speaking up for people who can't for fear they'll lose their job or status in their community simply for voicing pro-Trump or pro-Republican political views.

"I WILL CONTINUE TO SPEAK. FOR THE PEOPLE THAT MESSAGE ME AND AREN'T ABLE (FOR FEAR OF LOSING BUSINESS OR FRIENDS) IT IS DISGUSTING TO ME THAT FREEDOM OF SPEECH APPLIES TO EVERYONE BUT REPUBLICANS," she wrote.

The follow-up post included a heart sticker that reads "you are not alone."

However, her decision to speak out and voice her political beliefs got her into a bit of disinformation trouble when the Jan. 6 riot at the U.S. Capitol took place. On Jan. 6, supporters of former President Trump gathered in Washington, D.C., in a demonstration that led to riots in which hundreds of people breached the security at the U.S. Capitol building while the Senate was voting to certify then-President-elect Biden's Electoral College votes. Some people died as a result of the riots.


Brittany Aldean is often outspoken on behalf of fellow Republicans on social media. (Getty Images)

Brittany shared her thoughts on the matter at the time with a repost of an image that made the since-debunked claim that two of the men who breached the Capitol building were not protesters for Trump, but members of Antifa. The Associated Press has since deemed claims that any known Antifa members were present at the Capitol Wednesday as false.

According to Rolling Stone, Instagram removed the image, prompting Aldean to take to her Instagram Story to complain about being censored.

"Instagram wanted me to know that it was against their guidelines to post," she said in a video at the time. "It's getting so ridiculous, the filters you put on everyone that's against your narrative. It's unbelievable and it's ridiculous. It's just really sad what this world's coming to."

She further called for unity in her Story from the Sunday after the riots.

"Apparently freedom of speech doesn't apply to everyone and that's the issue I have. I have AMAZING conversations with my liberal friends and we can agree to disagree. It's the people that aren't willing to hear you that chap my a--," she said.

Despite the backlash and controversy over her opinions on the 2020 election and the Jan. 6 riots, Aldean remains undeterred. On Memorial Day of 2021, she took a jab at Vice President Kamala Harris after Harris was criticized for tweeting "Enjoy the long weekend."

Brittany Aldean took to Instagram to share a series of photos of her family's gathering in honor of the significant holiday, showing "stars and stripes" balloons and an American flag waving in the wind.


Almost immediately, Brittany appeared to troll language Harris used just days ahead of Memorial Day in which she told her Twitter followers to "enjoy the long weekend" above a photo of her smiling.

"Our family doesn't take Memorial Day lightly," Brittany began the post. "It's more than a long weekend."

Brittany Aldean believes she's speaking up for her fellow Republicans on social media. (Matt Winkelmeyer/ACMA2019/Getty Images for ACM)

"@jasonaldean and I both come from military families and understand the importance our loved ones and others have sacrificed for us, and our freedom. We fly our flag high EVERY SINGLE DAY. It's the least we can do to show our appreciation," the mother of two continued.


The responses to the star's most recent political post were largely positive as people began to share their thoughts on the significance of Memorial Day, which was likely vindicating for the celebrity who has stated numerous times that her followers find her "against the grain" political takes refreshing.

Fox News' Melissa Roberto contributed to this report.



Yes, the resistance to the Democrats’ communist agenda is growing – Washington Times

Posted: at 4:03 pm

ANALYSIS/OPINION:

Before we get to the more upbeat portion of this column, here's a brief recap of what we're facing.

Democrats are ramming through a multi-trillion-dollar socialist agenda and hope to kill voter ID laws and other state election safeguards to usher in a one-party government, for starters.

Republicans are trying to decide whether to be at least a speed bump to this coup or to bask in the glow of media praise for bipartisanship. So far, 19 GOP Senators have been there for the basking.

The southern border is wide open, with nearly one million illegal aliens crossing in the last eight months and another million expected. Bodies are piling up in the desert, and people infected with COVID-19 are being released by the thousands in McAllen and other Texas cities and then bused and flown to other states.

Deportations of illegals, even criminal sex offenders, have been halted. The whole scheme is designed to replace ornery Americans with a dependent Free Stuff Army of voters so that the Democrats can wreck America faster.

The willful spread of infected illegals helps gin up COVID-19 hysteria. Even for people who had the virus and have protective antibodies, a push for mandatory vaccinations is in full swing, from the military to school districts and corporations. Columnists in major papers are increasingly caustic toward the unvaccinated and are zeroing in on evangelical Christians and other religious objectors.

Gasoline prices are through the roof, even as the Biden Administration begs foreign oil nations to make up for his deliberately sabotaging America's energy independence.

I could go on, but you get the picture. We're in the throes of what amounts to a communist takeover under moderate placeholder Joe Biden, who is to unfinished sentences what Babe Ruth was to home runs.

But the encouraging news is that more and more Americans realize that we cannot wait until 2022 or 2024 to get our country back. The resistance is growing rapidly at the grassroots level.

In Portland, Oregon, a once beautiful and now lawless city, a group of Christians gathered in a public park to hold a worship session. In short order, a group of black-suited and hooded Antifa thugs came and assaulted the Christians while police stood by. The next day, the Christians came back to the park to praise God and stand their ground in much larger numbers. Antifa attacked again but could not rout the faithful.

"Members of Antifa showed up in Portland last night to threaten, harass, bully and intimidate us," organizer Scott Feucht tweeted. "A mom and her baby were tear-gassed. Antifa stood 10 feet from me as we lifted our voices in praise, but we didn't back down. We kept worshipping, and God moved powerfully! One member of Antifa who came to disrupt our service was SAVED, giving his life to Jesus!"

In Loudoun County, Virginia, a second young schoolteacher has exposed the woke school board's behind-the-scenes antipathy toward Christians. Earlier, physical education instructor Tanner Cross was suspended after telling the board that, as a Christian, he couldn't lie to children about their gender. A court reinstated him, but the board is still intent on making an example of him.

This past week, Laura Morris told the board that she was informed "in one of my so-called equity trainings" that "White, Christian, able-bodied females currently have the power in our schools and this has to change."

"Clearly, you've made your point. You no longer value me or many other teachers you've employed in this county. School board, I quit," she said, adding as she choked up, "I quit your policies, I quit your training, and I quit being a cog in a machine that tells me to push highly politicized agendas to our most vulnerable constituents: children."

The board voted the very next night to adopt a sweeping transgender policy, complete with re-education sessions for teachers. Their motto should be: Driving out dedicated teachers while imposing a radical agenda.

Frustrated by the board's tin ear, Fight for Schools.com, which is leading a petition drive to recall six board members, held an alternative board meeting so citizens could air their views.

Among the speakers was Monica Gill, who teaches AP government in Loudoun. She explained why she opposes Critical Race Theory. "I have been teaching for 26 years, and I have never, once, ever been afraid to deal with the issues of race and injustice in our history," she said. "In fact, I welcome the opportunity to teach about them because it allows me to point to America's moral compass, which we have used time and time again to right those wrongs and right the ship."

Thank God for these brave souls who are not taking it anymore. Dividing children by obsessing on race is criminal.

Likewise, openly subverting laws instead of upholding them should be impeachable. Whoever is really running the Biden Administration is more dangerous than any foreign entity. They are consolidating power, stoking dangerous inflation and public debt, destroying our energy independence, and sowing division and hatred between the races.

As Christians, we're being set up to play a role seen before in history: that of the scapegoat, or even a subversive element that must be identified and eliminated. We must keep in mind three things.

1. There is power in numbers; a sleeping giant appears to be awakening.

2. We still have the incomparable United States Constitution's legacy of liberty.

3. And most importantly, Almighty God is greater than any earthly ruler.

From Psalm 2: Why do the heathen rage, and the people imagine a vain thing? The kings of the earth set themselves, and the rulers take counsel together, against the Lord, and against his anointed, saying, Let us break their bands asunder, and cast away their cords from us. He that sitteth in the heavens shall laugh: the Lord shall have them in derision.

Robert Knight is a columnist for The Washington Times. His website is roberthknight.com.




What is AI? Here’s everything you need to know about …

Posted: at 3:58 pm

What is artificial intelligence (AI)? It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a machine that would have previously been considered to require human intelligence.

That's obviously a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

Modern definitions of what it means to create intelligence are more specific. Francois Chollet, an AI researcher at Google and creator of the machine-learning software library Keras, has said intelligence is tied to a system's ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.

"Intelligence is the efficiency with which you acquire new skills at tasks you didn't previously prepare for,"he said.

"Intelligence is not skill itself; it's not what you can do; it's how well and how efficiently you can learn new things."

It's a definition under which modern AI-powered systems, such as virtual assistants, would be characterised as having demonstrated 'narrow AI', the ability to generalise their training when carrying out a limited set of tasks, such as speech recognition or computer vision.

Typically, AI systems demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

At a very high level, artificial intelligence can be split into two broad types:

Narrow AI

Narrow AI is what we see all around us in computers today -- intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, or in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do defined tasks, which is why they are called narrow AI.

General AI

General AI is very different and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets or reasoning about a wide variety of topics based on its accumulated experience.

This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today and AI experts are fiercely divided over how soon it will become a reality.

There are a vast number of emerging applications for narrow AI.

New applications of these learning systems are emerging all the time. Graphics card designer Nvidia recently revealed an AI-based system, Maxine, which allows people to make good-quality video calls almost regardless of the speed of their internet connection. The system reduces the bandwidth needed for such calls by a factor of 10 by not transmitting the full video stream over the internet; instead, it animates a small number of static images of the caller in a manner designed to reproduce the caller's facial expressions and movements in real time and to be indistinguishable from the video.

However, as much untapped potential as these systems have, sometimes ambitions for the technology outstrip reality. A case in point is self-driving cars, which are themselves underpinned by AI-powered systems such as computer vision. Electric car company Tesla is lagging some way behind CEO Elon Musk's original timeline for the car's Autopilot system being upgraded to "full self-driving" from the system's more limited assisted-driving capabilities, with the Full Self-Driving option only recently rolled out to a select group of expert drivers as part of a beta testing program.

A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50% chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075. The group went even further, predicting that so-called 'superintelligence' -- which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" -- was expected some 30 years after the achievement of AGI.

However, recent assessments by AI experts are more cautious. Pioneers in the field of modern AI research such as Geoffrey Hinton, Demis Hassabis and Yann LeCun say society is nowhere near developing AGI. Given the scepticism of leading lights in the field of modern AI and the very different nature of modern narrow AI systems to AGI, there is perhaps little basis to fears that a general artificial intelligence will disrupt society in the near future.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain and believe that AGI is still centuries away.

While modern narrow AI may be limited to performing specific tasks, within their specialisms, these systems are sometimes capable of superhuman performance, in some instances even demonstrating superior creativity, a trait often held up as intrinsically human.

There have been too many breakthroughs to put together a definitive list, but some highlights include:

AlexNet's performance demonstrated the power of learning systems based on neural networks, a model for machine learning that had existed for decades but that was finally realising its potential due to refinements to architecture and leaps in parallel processing power made possible by Moore's Law. The prowess of machine-learning systems at carrying out computer vision also hit the headlines that year, with Google training a system to recognise an internet favorite: pictures of cats.

The next demonstration of the efficacy of machine-learning systems that caught the public's attention was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 possible moves per turn, compared to about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.
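A little arithmetic shows why exhaustive search fails here: with roughly 200 moves per turn in Go against about 20 in chess (the approximate branching factors mentioned above), the game tree explodes far faster. A quick sketch:

```python
# Rough game-tree sizes after d moves, using the approximate
# branching factors from the text: about 200 for Go, 20 for chess.
GO_BRANCHING, CHESS_BRANCHING = 200, 20

def tree_size(branching, depth):
    """Number of positions a brute-force search must consider."""
    return branching ** depth

for depth in (2, 4, 6):
    print(depth, tree_size(GO_BRANCHING, depth), tree_size(CHESS_BRANCHING, depth))
```

Just four moves deep, Go already has 1.6 billion positions to chess's 160,000, which is why AlphaGo learned from expert games instead of searching everything.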

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently, Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself and then learned from them. Google DeepMind CEO Demis Hassabis has also unveiled a new version of AlphaGo Zero that has mastered the games of chess and shogi.

And AI continues to sprint past new milestones: a system trained by OpenAI has defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, followed by Facebook training agents to negotiate and lie.

2020 was the year in which an AI system seemingly gained the ability to write and talk like a human about almost any topic you could think of.

The system in question, known as Generative Pre-trained Transformer 3 or GPT-3 for short, is a neural network trained on billions of English language articles available on the open web.

From soon after it was made available for testing by the not-for-profit organisation OpenAI, the internet was abuzz with GPT-3's ability to generate articles on almost any topic that was fed to it, articles that at first glance were often hard to distinguish from those written by a human. Similarly impressive results followed in other areas, with its ability to convincingly answer questions on a broad range of topics and even pass for a novice JavaScript coder.

But while many GPT-3-generated articles had an air of verisimilitude, further testing found the sentences generated often didn't pass muster, offering up superficially plausible but confused statements, as well as sometimes outright nonsense.

There's still considerable interest in using the model's natural language understanding as the basis of future services. It is available to select developers to build into software via OpenAI's beta API. It will also be incorporated into future services available via Microsoft's Azure cloud platform.

Perhaps the most striking example of AI's potential came late in 2020 when the Google attention-based neural network AlphaFold 2 demonstrated a result some have called worthy of a Nobel Prize for Chemistry.

The system's ability to look at a protein's building blocks, known as amino acids, and derive that protein's 3D structure could profoundly impact the rate at which diseases are understood, and medicines are developed. In the Critical Assessment of protein Structure Prediction contest, AlphaFold 2 determined the 3D structure of a protein with an accuracy rivaling crystallography, the gold standard for convincingly modelling proteins.

Unlike crystallography, which takes months to return results, AlphaFold 2 can model proteins in hours. With the 3D structure of proteins playing such an important role in human biology and disease, such a speed-up has beenheralded as a landmark breakthrough for medical science, not to mention potential applications in other areas where enzymes are used in biotech.

Practically all of the achievements mentioned so far stemmed from machine learning, a subset of AI that accounts for the vast majority of achievements in the field in recent years. When people talk about AI today, they are generally talking about machine learning.

Currently enjoying something of a resurgence, machine learning is, in simple terms, where a computer system learns how to perform a task rather than being programmed how to do so. This description of machine learning dates all the way back to 1959, when it was coined by Arthur Samuel, a pioneer of the field who developed one of the world's first self-learning systems, the Samuel Checkers-playing Program.

To learn, these systems are fed huge amounts of data, which they then use to learn how to carry out a specific task, such as understanding speech or captioning a photograph. The quality and size of this dataset are important for building a system able to carry out its designated task accurately. For example, if you were building a machine-learning system to predict house prices, the training data should include not just the property size but other salient factors, such as the number of bedrooms or the size of the garden.
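As a minimal sketch of that house-price example (the figures, features and prices below are invented for illustration), one can fit a simple linear model to a handful of labelled houses and then predict the price of an unseen one:

```python
import numpy as np

# Hypothetical training data: one row per house, with columns
# [size_m2, bedrooms, garden_m2], and sale prices in thousands.
X = np.array([
    [50.0, 1, 0.0],
    [80.0, 2, 20.0],
    [120.0, 3, 50.0],
    [160.0, 4, 100.0],
])
y = np.array([160.0, 240.0, 345.0, 460.0])

# Add a bias column and fit ordinary least squares.
Xb = np.hstack([np.ones((X.shape[0], 1)), X])
weights, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Predict the price of an unseen 100 m^2, 3-bedroom house
# with a 30 m^2 garden.
new_house = np.array([1.0, 100.0, 3.0, 30.0])
predicted = new_house @ weights
```

With so few examples the model simply memorises the data; real systems need far larger and better-curated datasets, which is exactly the point made above.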

The key to machine learning success is neural networks. These mathematical models are able to tweak internal parameters to change what they output. A neural network is fed datasets that teach it what it should spit out when presented with certain data during training. In concrete terms, the network might be fed greyscale images of the numbers between zero and 9, alongside a string of binary digits -- zeroes and ones -- that indicate which number is shown in each greyscale image. The network would then be trained, adjusting its internal parameters until it classifies the number shown in each image with a high degree of accuracy. This trained neural network could then be used to classify other greyscale images of numbers between zero and 9. Such a network was used in a seminal paper showing the application of neural networks published by Yann LeCun in 1989 and has been used by the US Postal Service to recognise handwritten zip codes.

The structure and functioning of neural networks are very loosely based on the connections between neurons in the brain. Neural networks are made up of interconnected layers of algorithms that feed data into each other. They can be trained to carry out specific tasks by modifying the importance attributed to data as it passes between these layers. During the training of these neural networks, the weights attached to data as it passes between layers will continue to be varied until the output from the neural network is very close to what is desired. At that point, the network will have 'learned' how to carry out a particular task. The desired output could be anything from correctly labelling fruit in an image to predicting when an elevator might fail based on its sensor data.
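The adjust-weights-until-the-output-is-close-to-what-is-desired loop described above can be sketched in plain NumPy on a toy task (XOR, chosen here purely for illustration; no ML library is used):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labelled dataset: XOR, which a single layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)        # hidden layer
    return h, sigmoid(h @ W2 + b2)  # output layer

# Randomly initialised weights for a 2-8-1 network.
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

h, out = forward(X, W1, b1, W2, b2)
initial_error = np.mean((out - y) ** 2)

lr = 1.0
for _ in range(10000):
    h, out = forward(X, W1, b1, W2, b2)
    # Backpropagation: shift each weight slightly in the direction
    # that reduces the difference between output and target.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

h, out = forward(X, W1, b1, W2, b2)
final_error = np.mean((out - y) ** 2)
```

The error after training should be well below the error at initialisation; that reduction, driven by repeated small weight adjustments, is what 'learning' means in this loop.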

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a large number of sizeable layers that are trained using massive amounts of data. These deep neural networks have fuelled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks with different strengths and weaknesses. Recurrent Neural Networks (RNNs) are a type of neural net particularly well suited to Natural Language Processing (NLP) -- understanding the meaning of text -- and speech recognition, while convolutional neural networks have their roots in image recognition and have uses as diverse as recommender systems and NLP. The design of neural networks is also evolving, with researchers refining a more effective form of deep neural network called long short-term memory, or LSTM -- a type of RNN architecture used for tasks such as NLP and stock-market prediction -- allowing it to operate fast enough to be used in on-demand systems like Google Translate.

The structure and training of deep neural networks.

Another area of AI research is evolutionary computation.

It borrows from Darwin's theory of natural selection. It sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.
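A minimal sketch of that mutate-and-recombine loop on a toy problem (the classic 'OneMax' task, used here purely for illustration: evolve a random bitstring toward all ones):

```python
import random

random.seed(42)

TARGET_LEN = 20  # maximise the number of 1s in a 20-bit string

def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit with a small probability (random mutation).
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    # Combine two parents at a random cut point.
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

# Start from a random population and evolve it generation by generation.
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(50)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[:10]  # selection: keep the fittest individuals
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

best = max(population, key=fitness)
```

Because the fittest individuals survive each generation unchanged, the best solution can only improve over time, mirroring the selective pressure the paragraph above describes.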

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimise neural networks is called neuroevolution. It could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing the machine to mimic the behaviour of a human expert in a specific domain. An example of these knowledge-based systems might be an autopilot system flying a plane.
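As a heavily simplified, hypothetical illustration (the rules, names and thresholds below are invented and bear no relation to real avionics), an expert system is essentially an ordered list of condition-action rules evaluated against the current inputs:

```python
# A toy rule-based "expert system": each rule maps observed facts to a
# recommended action, mimicking how a human expert reasons in a narrow
# domain. First matching rule wins; a default applies if none match.

def autopilot_advice(altitude_ft, airspeed_kt, on_glideslope):
    rules = [
        (lambda: airspeed_kt < 120, "increase thrust: airspeed low"),
        (lambda: altitude_ft < 1000 and not on_glideslope, "go around"),
        (lambda: on_glideslope and airspeed_kt >= 120, "continue approach"),
    ]
    for condition, action in rules:
        if condition():
            return action
    return "maintain current settings"
```

Unlike machine learning, nothing here is learned from data: every decision is hand-written by a domain expert, which is both the strength and the limitation of the approach.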

As outlined above, the biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power, during which time the use of clusters of graphics processing units (GPUs) to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google, Microsoft, and Tesla, have moved to using specialised chips tailored to both running, and more recently, training, machine-learning models.

An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are used to train up models for DeepMind and Google Brain, as well as the models that underpin Google Translate and the image recognition in Google Photos, and services that allow the public to build machine-learning models using Google's TensorFlow Research Cloud. The third generation of these chips was unveiled at Google's I/O conference in May 2018, and they have since been packaged into machine-learning powerhouses called pods that can carry out more than one hundred thousand trillion floating-point operations per second (100 petaflops). These ongoing TPU upgrades have allowed Google to improve its services built on top of machine-learning models, for instance, halving the time taken to train models used in Google Translate.

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using many labelled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labelled to indicate whether they contain a dog or written sentences that have footnotes to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example, to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning. Labelling these examples is commonly carried out by online workers employed through platforms like Amazon Mechanical Turk.
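As an illustrative sketch of the idea (with made-up points standing in for the labelled photos described above), a minimal nearest-neighbour classifier shows how labels attached to training examples let a system label new data:

```python
# Toy supervised learning: a 1-nearest-neighbour classifier that
# predicts a label for new data from labelled training examples.
# The points and labels here are invented for illustration only.

def nearest_neighbour_predict(training_data, point):
    """Return the label of the closest labelled example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda ex: sq_dist(ex[0], point))
    return label

# Labelled examples: (features, label), as with the dog/cat photos above.
labelled = [
    ((1.0, 1.2), "dog"),
    ((0.9, 1.0), "dog"),
    ((4.0, 3.8), "cat"),
    ((4.2, 4.1), "cat"),
]

print(nearest_neighbour_predict(labelled, (1.1, 1.1)))  # -> dog
print(nearest_neighbour_predict(labelled, (4.1, 4.0)))  # -> cat
```

Real systems use millions of examples and far richer models, but the principle is the same: the labels in the training data determine the labels applied to new inputs.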

Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size -- Google's Open Images Dataset has about nine million images, while its labelled video repository YouTube-8M links to seven million labelled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through Amazon Mechanical Turk -- who checked, sorted, and labelled almost one billion candidate pictures.

Having access to huge labelled datasets may also prove less important than access to large amounts of computing power in the long run.

In recent years, Generative Adversarial Networks (GANs) have been used in machine-learning systems that only require a small amount of labelled data alongside a large amount of unlabelled data, which, as the name suggests, requires less manual work to prepare.

This approach could allow for the increased use of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.
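GANs themselves are too involved for a short snippet, but the core semi-supervised idea -- a model fitted to a handful of labelled points pseudo-labelling a larger unlabelled pool, then refitting -- can be sketched as follows. This uses a toy nearest-centroid rule and invented data, purely for illustration:

```python
# Sketch of semi-supervised "self-training": a model fitted to a few
# labelled points pseudo-labels a larger unlabelled pool, then refits.
# Toy nearest-centroid rule; all points and labels are made up.

def centroid(points):
    """Component-wise mean of a list of 2-D points."""
    return tuple(sum(xs) / len(xs) for xs in zip(*points))

def self_train(seed_labelled, unlabelled, rounds=3):
    labels = dict(seed_labelled)  # point -> label
    for _ in range(rounds):
        groups = {}
        for p, lab in labels.items():
            groups.setdefault(lab, []).append(p)
        cents = {lab: centroid(ps) for lab, ps in groups.items()}
        for p in unlabelled:
            # Pseudo-label each unlabelled point with the nearest centroid.
            labels[p] = min(cents, key=lambda lab: sum(
                (a - b) ** 2 for a, b in zip(p, cents[lab])))
    return labels

seed = [((0.0, 0.0), "low"), ((10.0, 10.0), "high")]
pool = [(1.0, 1.0), (9.0, 9.0), (2.0, 1.5)]
labels = self_train(seed, pool)
print(labels[(9.0, 9.0)])  # -> high
print(labels[(1.0, 1.0)])  # -> low
```

Only two points here carry human-provided labels; the other three are labelled by the model itself, which is what cuts the manual annotation work.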

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for similarities by which the data can be grouped -- for example, Google News grouping together stories on similar topics each day.
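The fruit-weight example above can be sketched with a minimal one-dimensional k-means: given only weights (in grams, invented for illustration) and no labels at all, the algorithm groups similar values together:

```python
# Toy unsupervised learning: 1-D k-means clustering of fruit weights
# (grams) into two groups. No labels are supplied; the algorithm
# finds the grouping itself. Data is made up for illustration.

def kmeans_1d(values, iterations=20):
    """Cluster scalar values into two groups around two centroids."""
    c0, c1 = min(values), max(values)  # crude initialisation
    for _ in range(iterations):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        # Move each centroid to the mean of its group.
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return sorted(g0), sorted(g1)

light, heavy = kmeans_1d([110, 120, 115, 300, 310, 295])
print(light)  # -> [110, 115, 120]
print(heavy)  # -> [295, 300, 310]
```

The algorithm was never told "cherries versus melons"; it only discovered that the weights fall into two clusters.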

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick. In reinforcement learning, the system attempts to maximise a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and derives various information, such as the distance between objects on the screen.

By also looking at the score achieved in each game, the system builds a model of which action will maximise the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
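The trial-and-error loop described above can be sketched with tabular Q-learning, a much simpler relative of the Deep Q-network. In this toy, made-up environment, an agent on a five-state corridor earns a reward only at the rightmost state and learns from the score alone that moving right pays off:

```python
import random

# Tabular Q-learning on a tiny 1-D corridor: states 0..4, actions
# move left (-1) or right (+1), reward +1 only for reaching state 4.
# A toy illustration of trial-and-error reward maximisation.

def train_q(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != 4:
            # Mostly exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(4, max(0, s + a))
            reward = 1.0 if s2 == 4 else 0.0
            # Standard Q-learning update toward reward + discounted future value.
            q[(s, a)] += alpha * (
                reward + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q

random.seed(0)
q = train_q()
# After training, moving right looks better than left in every non-goal state.
print(all(q[(s, 1)] > q[(s, -1)] for s in range(4)))  # -> True
```

No one tells the agent the rules; the preference for moving right emerges purely from accumulated rewards, just as the Deep Q-network learns where to move the Breakout paddle from the score.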

The approach is also used in robotics research, where reinforcement learning can help teach autonomous robots the optimal way to behave in real-world environments.

Many AI-related technologies are approaching, or have already reached, the 'peak of inflated expectations' in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait.

With AI playing an increasingly major role in modern software and services, each major tech firm is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaFold and AlphaGo systems, that has made the biggest impact on public awareness of AI.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to GPU arrays for training and running machine-learning models, with Google also gearing up to let users use its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google offering a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires no machine-learning expertise from the user.

Cloud-based machine-learning services are constantly evolving. Amazon now offers a host of AWS offerings designed to streamline the process of training up machine-learning models and recently launched Amazon SageMaker Clarify, a tool to help organizations root out biases and imbalances in training data that could lead to skewed predictions by the trained model.

For those firms that don't want to build their own machine-learning models but instead want to consume AI-powered, on-demand services, such as voice, vision, and language recognition, Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile, IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella, having invested $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.

Internally, each tech giant and others such as Facebook use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam -- the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft's Cortana.

Relying heavily on voice recognition and natural-language processing and needing an immense corpus to draw upon to answer queries, a huge amount of tech goes into developing these assistants.

But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space -- Google Assistant with its ability to answer a wide range of queries and Amazon's Alexa with the massive number of 'Skills' that third-party devs have created to add to its capabilities.

Over time, these assistants are gaining abilities that make them more responsive and better able to handle the types of questions people ask in regular conversations. For example, Google Assistant now offers a feature called Continued Conversation, where a user can ask follow-up questions to their initial query, such as 'What's the weather like today?', followed by 'What about tomorrow?' and the system understands the follow-up question also relates to the weather.

These assistants and associated services can also handle far more than just speech, with the latest incarnation of Google Lens able to translate text in images and let you search for clothes or furniture using photos.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with Amazon's Alexa now available for free on Windows 10 PCs. At the same time, Microsoftrevamped Cortana's role in the operating systemto focus more on productivity tasks, such as managing the user's schedule, rather than more consumer-focused features found in other assistants, such as playing music.

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo invest heavily in AI in fields ranging from e-commerce to autonomous driving. As a country, China is pursuing a three-step plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by the end of 2020, and to become the world's leading AI power by 2030.

Baidu has invested in developing self-driving cars powered by its deep-learning algorithm, Baidu AutoBrain. After several years of testing, its Apollo self-driving car has racked up more than three million miles of driving and carried over 100,000 passengers in 27 cities worldwide.

Baidu launched a fleet of 40 Apollo Go Robotaxis in Beijing this year. The company's founder has predicted that self-driving vehicles will be common in China's cities within five years.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to 1 in China's favor.

Baidu's self-driving car, a modified BMW 3 series.

While you could buy a moderately powerful Nvidia GPU for your PC -- somewhere around the Nvidia GeForce RTX 2060 or faster -- and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on-demand.

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, AI is helping robots move into new areas such as self-driving cars and delivery robots, and helping robots learn new skills. At the start of 2020, General Motors and Honda revealed the Cruise Origin, an electric-powered driverless car, and Waymo, the self-driving group inside Google parent Alphabet, recently opened its robotaxi service to the general public in Phoenix, Arizona, offering a service covering a 50-square-mile area in the city.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone's voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people's images, with tools already being created to splice famous faces into adult films convincingly.

Speech and language recognition

Machine-learning systems have helped computers recognise what people are saying with an accuracy of almost 95%. Microsoft's Artificial Intelligence and Research group also reported it had developed a system that transcribes spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99% accuracy, expect speaking to computers to become increasingly common alongside more traditional forms of human-machine interaction.

More:

What is AI? Here's everything you need to know about ...

Posted in Ai | Comments Off on What is AI? Here’s everything you need to know about …

Finnish project aims to boost adoption of AI-based solutions – Healthcare IT News

Posted: at 3:58 pm

A new project will tap into the potential of Finnish SMEs to grow their businesses through identifying and implementing artificial intelligence (AI) based solutions.

The AI Innovation Ecosystem for Competitiveness of SMEs (AI-TIE) project, coordinated by Haaga-Helia University of Applied Sciences, will focus on the health, social care, cleantech and wellbeing sectors.

It aims to help develop AI competencies and support collaborative networking between solution providers, RDI institutions, expert organisations and other key actors.

The Helsinki-Uusimaa Regional Council has awarded European regional development funding and state funding to create a bundle of services directed at SMEs to facilitate the planning, piloting, and adoption of AI-based solutions.

SMEs will be provided with training materials and web content on the business use of AI to help increase staff competency.

They will also be encouraged to develop digital and web-based solutions, in addition to physical products, to ensure business viability in crises, such as the COVID-19 pandemic.

The main partners in the project are Finland's artificial intelligence accelerator FAIA, part of Technology Industries of Finland, and MyData Global, the developer of the internationally renowned MyData model.

Other collaborators include Laurea University of Applied Sciences, the Helsinki Region Chamber of Commerce, West-Uusimaa Chamber of Commerce, East-Uusimaa Development Organisation Posintra, Regional Federation of Finnish Entrepreneurs of Uusimaa, NewCo and Health Capital Helsinki.

WHY IT MATTERS

According to Finland's artificial intelligence accelerator FAIA, Finland needs to focus more on cooperation in the field to strengthen its position at the top of the international AI arena.

The AI-TIE project will help develop a new collaboration arena which enables SMEs, large companies and corporations, higher education institutions, expert organisations and other stakeholders to collaborate and offer their products and services with the objective of increasing sales.

THE LARGER CONTEXT

Earlier this year, the World Health Organisation (WHO) released new guidance on ethics and governance of AI for health, following two years of consultations held by a panel of international experts appointed by WHO.

In the guidance, WHO warns against overestimating the benefits of AI for health at the expense of core investments and strategies to achieve universal health coverage. It also argues that ethics and human rights must be put at the heart of AI's design, deployment, and use if the technology is to improve the delivery of healthcare worldwide.

ON THE RECORD

Dr Anna Nikina-Ruohonen, AI-TIE project manager at Haaga-Helia University of Applied Sciences, said: "Industry-specific AI capabilities are needed, especially in SMEs, and wellbeing, social and health services is one of the main focus areas in AI-TIE."

"Finnish SMEs from this industry are supported in the development of their internal business processes, and product and service innovations, through AI. In the long run this work enables industry-specific AI expertise, sustainability and ecosystem development."

More here:

Finnish project aims to boost adoption of AI-based solutions - Healthcare IT News

Posted in Ai | Comments Off on Finnish project aims to boost adoption of AI-based solutions – Healthcare IT News

AI Pick of the Week (8-21) – VSiN Exclusive News – News – VSiN

Posted: at 3:58 pm

We've been tinkering with the artificial intelligence programs at 1/ST BET, which give us more than 50 data points (such as speed, pace, class, jockey, trainer and pedigree stats) for every race, based on how you like to handicap.

Last Saturday, we lost with Domestic Spending in the Mister D. Stakes at Arlington Park, but our record is still 10-of-22 overall since taking over this feature. Based on a $2 Win bet on the A.I. Pick of the Week, that's $44 in wagers and payoffs totalling $47.70, for a respectable ROI of $2.17 for every $2 wagered.
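For anyone who wants to check the maths, the ROI figure works out directly from the numbers quoted above:

```python
# Verifying the column's ROI arithmetic: 22 picks at a $2 Win bet
# each, with payoffs totalling $47.70.
bets, stake = 22, 2.00
wagered = bets * stake                 # $44.00 in wagers
payoffs = 47.70
roi_per_2 = payoffs / wagered * stake  # return per $2 wagered
print(f"${roi_per_2:.2f} back per $2 wagered")  # -> $2.17 back per $2 wagered
```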

This week, I ran Saturday's plays from me and my handicapping friends in my Tuley's Thoroughbred Takes column at vsin.com/horses through the 1/ST BET programs and came up with our A.I. Pick of the Week:

Saturday, Aug. 21

Saratoga Race No. 10 (6:13 p.m. ET/3:13 p.m. PT)

#6 Malathaat (6-5 ML odds)

Malathaat ranks 1st in 15 of the 52 factors used by 1/ST BET A.I.

This 3-year-old filly also ranks 1st in 7 of the Top 15 Factors at betmix.com, including top win percentage, best average speed in last three races and best average lifetime earnings.

She also ranks in the Top 5 in 5 of the other 8 categories, including best speed at the track and average off-track earnings. And we also get the classic combo of trainer Todd Pletcher and jockey John Velazquez.

See original here:

AI Pick of the Week (8-21) - VSiN Exclusive News - News - VSiN

Posted in Ai | Comments Off on AI Pick of the Week (8-21) – VSiN Exclusive News – News – VSiN

China’s Baidu launches second chip and a ‘robocar’ as it sets up future in AI and autonomous driving – CNBC

Posted: at 3:58 pm

Robin Li (R), CEO of Baidu, sits in the Chinese tech giant's new prototype "robocar", an autonomous vehicle, at the company's annual Baidu World conference on Wednesday, August 18, 2021.

Baidu

GUANGZHOU, China -- Chinese internet giant Baidu unveiled its second-generation artificial intelligence chip, its first "robocar" and a rebranded driverless taxi app, underscoring how these new areas of technology are key to the company's future growth.

The Beijing-headquartered firm, known as China's biggest search engine player, has focused on diversifying its business beyond advertising in the face of rising competition and a difficult advertising market in the last few years.

Robin Li, CEO of Baidu, has tried to convince investors the company's future lies in AI and related areas such as autonomous driving.

On Wednesday, at its annual Baidu World conference, the company launched Kunlun 2, its second-generation AI chip. The semiconductor is designed to help devices process huge amounts of data and boost computing power. Baidu says the chip can be used in areas such as autonomous driving and that it has entered mass production.

Baidu's first-generation Kunlun chip was launched in 2018. Earlier this year, Baidu raised money for its chip unit at a valuation of $2 billion.

Baidu also took the wraps off a "robocar," an autonomous vehicle with doors that open up like wings and a big screen inside for entertainment. It is a prototype and the company gave no word on whether it would be mass-produced.

But the concept car highlights Baidu's ambitions in autonomous driving, which analysts predict could be a multibillion dollar business for the Chinese tech giant.

Baidu has also been running so-called robotaxi services in some cities including Guangzhou and Beijing where users can hail an autonomous taxi via the company's Apollo Go app in a limited area. On Wednesday, Baidu rebranded that app to "Luobo Kuaipao" as it looks to roll out robotaxis on a mass scale.

Wei Dong, vice president of Baidu's intelligent driving group, told CNBC the company is aiming for mass public commercial availability in some cities within two years.

It's unclear how Baidu will price the robotaxi service.

In June, Baidu announced a partnership with state-owned automaker BAIC Group to build 1,000 driverless cars over the next three years and eventually commercialize a robotaxi service across China.

Baidu also announced four new pieces of hardware, including a smart screen and a TV equipped with Xiaodu, the company's AI voice assistant. Xiaodu is another growth initiative for the company.

Link:

China's Baidu launches second chip and a 'robocar' as it sets up future in AI and autonomous driving - CNBC

Posted in Ai | Comments Off on China’s Baidu launches second chip and a ‘robocar’ as it sets up future in AI and autonomous driving – CNBC

Tesla’s AI Day Event Did A Great Job Convincing Me They’re Wasting Everybody’s Time – Jalopnik

Posted: at 3:58 pm

Screenshot: YouTube/Tesla

Tesla's big AI Day event just happened, and I've already told you about the humanoid robot Elon Musk says Tesla will be developing. You'd think that would have been the most eye-roll-inducing thing to come out of the event, but, surprisingly, that's not the case. The part of the presentation that actually made me the most baffled was near the beginning: a straightforward demonstration of Tesla Full Self-Driving. I'll explain.

The part I'm talking about is a repeating loop of a sped-up daytime drive through a city environment using Tesla's FSD, a drive that contains a good amount of complex and varied traffic situations, road markings, maneuvering, pedestrians, other cars -- all the good stuff.

The Tesla performs the driving pretty flawlessly. Here, watch for yourself:

Now, technically, there's a lot to be impressed by here: the car is doing an admirable job of navigating the environment. The more I watched it, though, the more I realized one very important point: this is a colossal waste of time.

Well, that's not entirely fair: it's a waste of time, talent, energy, and money.

I know that sounds harsh, and it's not entirely fair. A lot of this research and development is extremely important for the future of self-driving vehicles, but the current implementation -- and, from what I can tell, the plan moving ahead -- is still focusing on the wrong things.


Here's the root of the issue, and it's not a technical problem. It's the fundamental flaw of all these Level 2, driver-assist, full-attention-required systems: what problem are they actually solving?

That segment of video was kind of maddening to watch, because that's an entirely mundane, unchallenging drive for any remotely decent, sober driver. I watched that car turn the wheel as the person in the driver's seat had their hand right there, wheel spinning through their loose fingers, feet inches from those pedals, while all of this extremely advanced technology was doing something that the driver was not only fully capable of doing on their own, but was in the exact right position and mental state to actually be doing.

Screenshot: YouTube/Tesla

What's being solved here? The demonstration of FSD shown in the video is doing absolutely nothing the human driver couldn't do, and doesn't free the human to do anything else. Nothing's being gained!

It would be like if Tesla designed a humanoid dishwashing robot that worked fundamentally differently than the dishwashing robots many of us have tucked under our kitchen counters.

The Tesla Dishwasher would stand over the sink, like a human, washing dishes with human-like hands, but for safety reasons you would have to stand behind it, your hands lightly holding the robot's hands, like a pair of young lovers in their first apartment.

Screenshot: YouTube/Tesla

Normally, the robot does the job just fine, but there's a chance it could get confused and fling a dish at a wall or person, so for safety you need to be watching it, and have your hands on the robot's at all times.

If you don't, it beeps a warning and then stops, mid-wash.

Would you want a dishwasher like that? You're not really washing the dishes yourself, sure, but you're also not not washing them, either. That's what FSD is.

Every time I saw the Tesla in that video make a gentle turn or come to a slow stop, all I could think was: buddy, just fucking drive your car! You're right there. Just drive!

The effort being expended to make FSD better at doing what it does is fine, but it's misguided. The place that effort needs to be expended for automated driving is in developing systems and procedures that allow the cars to safely get out of the way, without human intervention, when things go wrong.

Level 2 is a dead end. It's useless. Well, maybe not entirely -- I suppose on some long highway trips or in stop-and-go, very slow traffic it can be a useful assist -- but it would all be better if the weak link, the part that causes problems (demanding that a human be ready to take over at any moment), was eliminated.

Tesla -- and everyone else in this space -- should be focusing efforts on the two main areas that could actually be made better by these systems: long, boring highway drives, and stop-and-go traffic. These are the situations where humans are most likely to be bad at paying attention and make foolish mistakes, or be fatigued or distracted.

Screenshot: YouTube/Tesla

The type of driving shown in the FSD video here, daytime short-trip city driving, is likely the least useful application for self-driving.

If we're all collectively serious about wanting automated vehicles, the only sensible next step is to actually make them forgiving of human inattention, because that is the one thing you can guarantee will be a constant factor.

Level 5 drive-everywhere cars are a foolish goal. We don't need them, and the effort it would take to develop them is vast. What's needed are systems around Level 4, focusing on long highway trips and painful traffic-jam situations, where the intervention of a human is never required.

This isn't an easy task. The eventual answer may require infrastructure changes or remote human intervention to pull off properly, and hardcore autonomy/AI fetishists find those solutions unsexy. But who gives a shit what they think?

The solution to eliminating the need for immediate driver handoffs and being able to get a disabled or confused AV out of traffic and danger may also require robust car-to-car communication and cooperation between carmakers, which is also a huge challenge. But it needs to happen before any meaningful acceptance of AVs can happen.

Here's the bottom line: if your AV only really works safely when there is someone in position to be potentially driving the whole time, it's not solving the real problem.

Now, if you want to argue that Tesla and other L2 systems offer a safety advantage (I'm not convinced they necessarily do, but whatever), then I think there's a way to leverage all of this impressive R&D and keep the safety benefits of these L2 systems. How? By doing it the opposite way we do it now.

What I mean is that there should be a role-reversal: if safety is the goal, then the human should be the one driving, with the AI watching, always alert, and ready to take over in an emergency.

In this inverse-L2 model, the car is still doing all the complex AI things it would be doing in a system like FSD, but it will only take over in situations where it sees that the human driver is not responding to a potential problem.

This guardian angel-type approach provides all of the safety advantages of what a good L2 system could provide, and, because its a computer, will always be attentive and ready to take over if needed.

Driver monitoring systems won't be necessary, because the car won't drive unless the human is actually driving. And, if the driver gets distracted or doesn't see a person or car, then the AI steps in to help.

All of this development can still be used! We just need to do it backwards, and treat the system as an advanced safety back-up driver system as opposed to a driver-doesn't-have-to-pay-so-much-attention system.

Andrej Karpathy and Tesla's AI team are incredibly smart and capable people. They've accomplished an incredible amount. Those powerful, pulsating, damp brains need to be directed to solving the problems that actually matter, not making the least-necessary type of automated driving better.

Once the handoff problem is solved, that will eliminate the need for flawed, trick-able driver monitoring systems, which will always be in an arms race with moron drivers who want to pretend they live in a different reality.

It's time to stop polishing the turd that is Level 2 driver-assist systems and actually put real effort into developing systems that stop putting humans in the ridiculous, dangerous space of both driving and not driving.

Until we get this solved, just drive your damn car.

More:

Tesla's AI Day Event Did A Great Job Convincing Me They're Wasting Everybody's Time - Jalopnik

Posted in Ai | Comments Off on Tesla’s AI Day Event Did A Great Job Convincing Me They’re Wasting Everybody’s Time – Jalopnik

Five ways that Open Source Software shapes AI policy – Brookings Institution

Posted: at 3:58 pm

Open-source software (OSS), which is free to access, use, and change without restrictions, plays a central role in the development and use of artificial intelligence (AI). An AI algorithm can be thought of as a set of instructions -- that is, what calculations must be done and in what order; developers then write software which contains these conceptual instructions as actual code. If that software is subsequently published in an open-source manner -- where the underlying code is publicly available for anyone to use and modify -- any data scientist can quickly use that algorithm with little effort. There are thousands of implementations of AI algorithms that make using AI easier in this way, as well as a critical family of emerging tools that enable more ethical AI. Simultaneously, there are a dwindling number of OSS tools in the especially important subfield of deep learning -- leading to the enhanced market influence of the companies that develop that OSS, Facebook and Google. Few AI governance documents focus sufficiently on the role of OSS, which is an unfortunate oversight, despite this quietly affecting nearly every issue in AI policy. From research to ethics, and from competition to innovation, open-source code is playing a central role in AI and deserves more attention from policymakers.

OSS enables and increases AI adoption by reducing the level of mathematical and technical knowledge necessary to use AI. Writing the complex math of algorithms into code is difficult and time-consuming, which means any existing open-source alternative can be a huge benefit for data scientists. OSS benefits from both a collaborative and a competitive environment, in that developers work together to find bugs just as often as they compete to write the best version of an algorithm. This frequently results in more accessible, robust, and high-quality code relative to what an average data scientist -- often more of a data explorer and pragmatic problem-solver than pure mathematician -- might develop. This means that well-written open-source AI code significantly expands the capacity of the average data scientist, letting them use more modern machine-learning algorithms and functionality. Thus, while much attention has been paid to training and retaining AI talent, making AI easier to use -- as OSS code does -- may have a similarly significant impact in enabling economic growth from AI.

Open-source AI tools can also enable the broader and better use of ethical AI. Open-source tools like IBM's AI Fairness 360, Microsoft's Fairlearn, and the University of Chicago's Aequitas ease technical barriers to fighting AI bias. There is also OSS software that makes it easier for data scientists to interrogate their models, such as IBM's AI Explainability 360 or Christoph Molnar's interpretable machine learning tool and book. These tools can help time-constrained data scientists who want to build more responsible AI systems but are under pressure to finish projects and deliver for clients. While more government oversight of AI is certainly necessary, policymakers should also more frequently consider investing in open-source ethical AI software as an alternative lever to improve AI's role in society. The National Science Foundation is already funding research into AI fairness, but grant-making agencies and foundations should consider OSS an integral component of ethical AI, and further fund its development and adoption.
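The kind of check these fairness toolkits automate can be illustrated with a simple demographic-parity calculation. To be clear, this is not code from AI Fairness 360 or Fairlearn, just the underlying metric sketched with invented outcomes:

```python
# Toy bias check: demographic parity difference, i.e. the gap in
# positive-outcome rates between two groups. Hypothetical data only;
# real toolkits like Fairlearn compute this and many richer metrics.

def positive_rate(outcomes):
    """Fraction of outcomes that are favourable (1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_diff(group_a, group_b):
    """Absolute gap in favourable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = model predicted a favourable outcome, 0 = unfavourable
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% favourable
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # 37.5% favourable
print(demographic_parity_diff(group_a, group_b))  # -> 0.375
```

A gap this large would flag the model for closer inspection; the value of the open-source toolkits is that such checks take minutes rather than requiring each team to build them from scratch.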

In 2007, a group of researchers argued that the lack of openly available algorithmic implementations was a major obstacle to scientific progress, in a paper entitled "The Need for Open Source Software in Machine Learning." It's hard to imagine this problem today, as there is now a plethora of OSS AI tools for scientific discovery. As just one example, the open-source AI software Keras is being used to identify subcomponents of mRNA molecules and to build neural interfaces to better help blind people see. OSS also makes research easier to reproduce, enabling scientists to check and confirm one another's results. Even small changes in how an AI algorithm is implemented can lead to very different results; using shared OSS can mitigate this source of uncertainty. This makes it easier for scientists to critically evaluate the results of their colleagues' research, a common challenge in the many disciplines facing an ongoing replication crisis.

While OSS code is far more common today, there are still efforts to raise the percentage of academic papers that publicly release their code, currently around 50 to 70 percent at major machine learning conferences. Policymakers also have a role in supporting OSS code in the sciences, such as by encouraging federally funded AI research projects to publicly release the resulting code. Grant-making agencies might also consider funding the ongoing maintenance of OSS AI tools, which is often a challenge for critical software. The Chan Zuckerberg Initiative, which funds critical OSS projects, writes that OSS is crucial to modern scientific research, yet even the most widely used research software lacks dedicated funding.

OSS has significant ramifications for competition policy. On one hand, the public release of machine learning code broadens and better enables its use. In many industries, this will enable more AI adoption with less AI talent, likely a net good for competition. However, for Google and Facebook, the open sourcing of their deep learning tools (Tensorflow and PyTorch, respectively) may further entrench them in their already fortified positions. Almost all the developers of Tensorflow and PyTorch are employed by Google and Facebook, suggesting that the companies are not relinquishing much control. While these tools are certainly more accessible to the public, the oft-stated goal of democratizing technology through OSS is, in this case, euphemistic.

Tensorflow and PyTorch have become the most common deep learning tools in both industry and academia, leading to great benefits for their parent companies. Google and Facebook benefit more immediately from research conducted with their tools because there is no need to translate academic discoveries into a different language or framework. Further, their dominance creates a pipeline of data scientists and machine learning engineers trained in their systems, and helps position them as the cutting-edge companies to work for. All told, the benefits to Google and Facebook of controlling OSS deep learning are significant and may persist far into the future. This should be accounted for in any discussions of technology sector competition.

OSS AI also has important implications for standards bodies, such as IEEE, ISO/JTC, and CEN-CENELEC, which seek to influence the industry and politics of AI. In other industries, standards bodies often add value by disseminating best practices and enabling interoperable technology. However, in AI, the diversified use of operating systems, programming languages, and tools means that interoperability challenges have already received substantial attention. Further, the AI practitioner community is somewhat informal, with many practices and standards disseminated through Twitter, blog posts, and OSS documentation. The dominance of Tensorflow and PyTorch in the deep learning subfield means that Google and Facebook have outsized influence, which they may be reluctant to cede to consensus-driven standards bodies. So far, OSS developers have not been extensively engaged in the work of the international standards bodies, and this may significantly inhibit those bodies' influence on the AI field.

From research to ethics, and from competition to innovation, open-source code is playing a central role in the developing use of artificial intelligence. This makes the consistent absence of open-source developers from the policy discussions quite notable, since they wield meaningful influence over, and highly specific knowledge of, the direction of AI. Involving more OSS AI developers can help AI policymakers more routinely consider the influence of OSS in the pursuit of the just and equitable development of AI.

The National Science Foundation, Facebook, Google, Microsoft, and IBM are donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and not influenced by any donation.

Read the original here:

Five ways that Open Source Software shapes AI policy - Brookings Institution

Posted in Ai | Comments Off on Five ways that Open Source Software shapes AI policy – Brookings Institution

Tesla AI Day Starts Today. Here’s What to Watch. – Barron’s

Posted: at 3:58 pm


Former defense secretary Donald Rumsfeld said there are "known knowns," things people know; "known unknowns," things people know they don't know; and "unknown unknowns," things people don't realize they don't know. That pretty much sums up autonomous driving technology these days.

It isn't clear how long it will take the auto industry to deliver truly self-driving cars. Thursday evening, however, investors will get an education about what's state of the art when Tesla (ticker: TSLA) hosts its artificial intelligence day.

The event will likely be livestreamed on the company's website beginning around 8 p.m. Eastern time. The company's YouTube channel will likely be one place to watch the event, and other websites will carry the broadcast as well. The company didn't respond to a request for comment about the agenda for the event, but has said it will be available to watch.

Much of what will get talked about won't be a surprise, even if investors don't understand it all. Those are known unknowns.

Tesla should update investors about its driver-assistance feature dubbed Full Self-Driving. What's more, the company will describe the benefit of vertical integration: Tesla makes the hardware (its own computers with its own microchips) as well as its software. Tesla might even give a more definitive timeline for when Level 4 autonomous vehicles will be ready.

Roth Capital analyst Craig Irwin doesn't believe Level 4 technology is on the horizon, though. He tells Barron's the computing power and camera resolution just aren't there yet. "Tesla will work hard to suggest tech leadership in AI for automotive," says Irwin. "Reality will probably be much less exciting than their claims."

Irwin rates Tesla shares Hold. His price target is just $150 a share.

The car industry essentially defines five levels of autonomous driving. Level 1 is nothing more than cruise control. Level 2 systems are available on cars today and combine features such as adaptive cruise and lane-keeping assistance, enabling the car to do a lot on its own. Drivers, however, still need to pay attention 100% of the time with Level 2 systems.

Level 3 systems would allow drivers to stop paying attention part of the time. Level 4 would let them stop paying attention most of the time. And Level 5 means the car does everything, always. "Level 5 autonomy isn't an easy endeavor," says Global X analyst Pedro Palandrani. "There are so many unique cases for technology to tackle, like in bad weather or dirt roads. But Level 4 is enough to change the world," he added. He is more optimistic than Irwin about the timing for Level 4 systems and hopes Tesla provides more timing detail at its event.
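The taxonomy the article walks through (roughly the SAE J3016 driving-automation levels) can be sketched as a small lookup table; the short labels and attention rules below are a hypothetical paraphrase of the article's descriptions, not text from an official standard:

```python
# Hypothetical encoding of the five-level autonomy taxonomy described above,
# mapping each level to a short label and the driver's attention burden.
DRIVING_LEVELS = {
    1: ("cruise control only", "driver attends always"),
    2: ("adaptive cruise + lane keeping", "driver attends always"),
    3: ("conditional automation", "driver attends part of the time"),
    4: ("high automation", "driver rarely attends"),
    5: ("full automation", "no driver attention needed"),
}

def driver_must_always_attend(level):
    """Per the article, only Levels 1-2 require constant driver attention."""
    return level <= 2

print(driver_must_always_attend(2))  # True
print(driver_must_always_attend(4))  # False
```

The `driver_must_always_attend` boundary between Levels 2 and 3 is the commercially and legally significant one: it marks where liability for monitoring the road starts shifting from the human to the system.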

Beyond a technology rundown and Level 4 timing, the company might have some surprises up its sleeve for investors. Palandrani has two ideas.

For starters, Tesla might indicate it's willing to sell its hardware and software to other car companies. That would give Tesla other, unexpected sources of income. Tesla already offers Full Self-Driving as a monthly subscription to owners of its cars. That's new for the car industry and opens up a source of recurring revenue for anyone with the requisite technology. Selling hardware and software to other car companies, however, would be new, and surprising, for investors.

Tesla might also talk about its advancements in robotics. CEO Elon Musk has often talked in the past about the difficulty of making "the machine that makes the machine." Some of Tesla's AI efforts might also be targeted at building, and not just driving, vehicles. "We're just making a crazy amount of machinery internally," said Musk on the company's second-quarter conference call. "This is ... not well understood."

Those are two items that can surprise. Whether they, or other tidbits, will move the stock is something else entirely.

Tesla stock dropped about 7% over Monday and Tuesday, partly because NHTSA disclosed it was looking into accidents involving Tesla's driver-assistance features. Tesla will surely stress the safety benefits of those features on Thursday; whether it can shake off that bit of bad news is harder to tell.

"Thursday becomes a much more important event in light of this week's [NHTSA] probe," says Wedbush analyst Dan Ives. "This week has been another tough week for Tesla [stock] and the Street needs some good news heading into this AI event."

Ives rates Tesla shares Buy and has a $1,000 price target for the stock. Tesla's autonomous driving leadership is part of his bullish take on the shares.

If history is any guide, investors should expect volatility. Tesla stock dropped 10% the day following its battery technology event in September 2020. It took shares about seven trading days to recover, and Tesla stock gained about 86% from the battery event to year-end.

Tesla stock is down about 6% year to date, trailing the comparable gains of about 18% and 15% for the S&P 500 and Dow Jones Industrial Average, respectively. Tesla stock hasn't moved much, in absolute terms, since March; shares were in the high $600s back then. They closed down 3% at $665.71 on Tuesday, but are up 1.3% at $674.19 in premarket trading Wednesday.

Write to allen.root@dowjones.com

Originally posted here:

Tesla AI Day Starts Today. Here's What to Watch. - Barron's

Posted in Ai | Comments Off on Tesla AI Day Starts Today. Here’s What to Watch. – Barron’s