Monthly Archives: January 2020

Rosemary Barton dropped from CBC’s The National, format scrapped – The Post Millennial

Posted: January 27, 2020 at 12:02 am

Many Canadians feel the CBC is biased and doesn't live up to its own standards and practices. Many have taken action: writing to their MPs, filing complaints, and taking to social media.

But now, with the CBC scheduled to appear before the Canadian Radio-television and Telecommunications Commission (CRTC) to have its licence renewed, an unlikely corner of the internet, #Gamergate, aims to take things to the next level with a co-ordinated campaign to file complaints that they're calling #OperationCanadianBaConII.

They take issue with coverage of gamers on the CBC, stretching all the way back to 2014 when the Canadian public broadcaster first promoted the narrative that #Gamergate was about harassing women (although there were undoubtedly misogynist bad actors within the amorphous internet group), and not about conflicts of interest between video game developers and video game journalists.

They've waited until now because the CBC's consultation period has been delayed for several years, following a regime change at the CBC in 2018.

Lead #OperationCanadianBaConII organizer @LunarArchivist hopes that the complaints will prompt an official response from the Canadian government and lead to the CBC revising its Journalistic Standards and Practices.

"CBC had done several hit pieces on #GamerGate, and several supporters, including myself, had filed complaints with the CBC Ombudsman, Esther Enkin, only to have our concerns downplayed and dismissed in her reviews, which were always in favour of the CBC," he said.

"After speaking with my local Member of Parliament, the idea occurred to me to take a page from the handbook of Operation Disrespectful Nod, a #GamerGate e-mail campaign where supporters were encouraged to inform advertisers of the dubious ethical standards of the websites that had employed smear tactics against us."

@LunarArchivist says that there are at least a dozen people across multiple Discord servers involved with the operation. They've got until 8 p.m. EST on February 13 to get their submissions in.

"One hurdle has definitely been trying to convince non-Canadian #GamerGate supporters that they're allowed to submit interventions despite not being from Canada," he says.

"The announcement about its start was rather sudden, and we're still working on establishing a distribution network for the archive of all of CBC's anti-#GamerGate coverage, for use as a reference by those who want to concentrate on that aspect of things."

The #OperationCanadianBaConII crew have been working to transcribe over six hours of audio and video broadcasts and to prepare a list of specific ways in which the CBC breached its own standards of practice, such as the inclusion of the false claim that programmer Eron Gjoni accused game developer Zoe Quinn of sleeping with game journalists for good reviews.

They've also noted when pieces critical of #Gamergate have disappeared from the CBC's website, and documented how three separate CBC radio interviewers conducted the exact same interview with an anti-#Gamergate pop-culture expert.

"We want to raise public awareness of the fact that #GamerGate's situation isn't unique and the CBC tends to use the same tactics on others," @LunarArchivist says.

"First impressions are important, and a bad one can do lasting or permanent damage to your cause or reputation. The longer false information is allowed to marinate in the public consciousness, the more likely it is to be accepted as truth, regardless of the facts. And the likelihood of this increases if the CBC doesn't correct the record within a reasonable amount of time."

This negative impression of gamers as a whole, perpetuated by the CBC, is what really bothers the members of #OperationCanadianBaConII.

@LunarArchivist believes that while CBC employees were allowing their anti-gamer biases to seep into their reporting even before #Gamergate started to trend, the real issue is that the CBC was simply following the leader instead of asking critical questions about the narrative being spread.

"Many CBC journalists just threw due diligence to the wind and ran with the baseless claim, advanced by Anita Sarkeesian and other social justice advocates for years, that gamers were opposed to mainstream feminism and identity politics and were harassing them."

@LunarArchivist hopes that the operation will not only lead to more balanced coverage of gamers, but will also help other Canadians who are upset with the CBC's coverage.

"I'm hoping that not just activists, but regular people will start taking a more active role in taking the CBC to task, especially since they get over a billion taxpayer dollars a year."

See the original post here:

Rosemary Barton dropped from CBC's The National, format scrapped - The Post Millennial

Comments Off on Rosemary Barton dropped from CBC’s The National, format scrapped – The Post Millennial

Behind the Scenes at Rotten Tomatoes – WIRED

Posted: at 12:02 am

My second day at Rotten Tomatoes, I went to lunch with some of the site's editorial staff. These are the front-facing Tomato people, separate from the curators. They interview movie stars. They schmooze at film festivals. They write hot takes for the site. I asked if, as de facto brand ambassadors, they find that people understand Rotten Tomatoes. No, came the reply, they do not. One editor, Jacqueline Coley, said that she tells Uber drivers she's a traveling nurse so they don't start accosting her about scores she can't control. She also hears complaints about the algorithm. Says Coley, incredulous: "We don't have an algorithm!"

Indeed not. This is why review-bombing trolls caused such grief not just to studios but to Rotten Tomatoes itself. When audience scores for The Last Jedi began plummeting to suspiciously low depths a couple of years ago (it's currently at 43 percent, with a Tomatometer score of 91), casual users couldn't know if the criticism was representative of the film-going public or just Gamergate runoff protesting the film's casting inclusivity (or some other niche superfan grievance, for that matter). Absent its reputation for accurate ratings, Rotten Tomatoes is nothing.

To bolster that trust, Rotten Tomatoes fixed an obvious problem: It forbade people from rating movies before they actually came out. It also began verifying the reviews of tomato throwers who could prove they bought their tickets on Fandango. The new verified rating is now the site's default Audience Score. (Rotten Tomatoes says it is working with cinema chains to verify their ticket stubs too, but for now this arrangement obviously benefits Fandango.) Still, there's nothing stopping people from bombing a movie for nefarious purposes after it comes out.

These changes took place in tandem with a parallel overhaul of the site's critics' criteria, designed to make the Tomatometer more representative. Prior to August 2018, Tomatometer-approved critics were almost exclusively staff writers from existing publications, who tended to be whiter, maler, and crustier. Since the site changed its policies, it's added roughly 600 new critics, the majority of whom are freelancers and women. But that also means there are now a stunning 4,500 critics, some of whom inevitably will be terrible. A couple of years ago, an approved critic named Cole Smithey, who writes for Colesmithey.com, bragged about intentionally tanking Lady Bird's then-100 percent rating with a negative review.

It's hard to know how much of a difference high or low scores make at the box office. In late 2018, Morning Consult conducted a national poll and found that one-third of Americans look at Rotten Tomatoes before seeing a movie, and 63 percent of those have been deterred by low scores. Whatever the effect, appearance is everything in Hollywood. Nobody wants a green tomato. Studios hold screenings for critics as close to release dates as possible, to delay splats, while disputing rotten ratings to curators like Giles.

"I've noticed over the last year that Certified Fresh is more important for studios and filmmakers," he says, referring to the little badge movies get if the Tomatometer is 75 percent or higher across a minimum of 40 film reviews. "They know the value we add to their marketing." The AMC movie chain, the largest in the country, displays the Tomatometer on its websites, but only next to movies that are Certified Fresh.
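
As a rough sketch of the badge rule just described, the Python below checks the two numbers the article cites: a Tomatometer of at least 75 percent across a minimum of 40 reviews. Rotten Tomatoes' full Certified Fresh criteria are more involved (for example, different review counts for wide releases), so treat anything beyond those two thresholds as an illustrative assumption.

```python
# Toy model of the Certified Fresh rule as stated in the article:
# Tomatometer >= 75% across at least 40 reviews. Illustrative only.
def tomatometer(fresh: int, rotten: int) -> float:
    """Percentage of reviews counted as fresh."""
    total = fresh + rotten
    return 100 * fresh / total if total else 0.0

def certified_fresh(fresh: int, rotten: int) -> bool:
    return fresh + rotten >= 40 and tomatometer(fresh, rotten) >= 75

print(certified_fresh(fresh=50, rotten=10))  # True: 83% across 60 reviews
print(certified_fresh(fresh=30, rotten=5))   # False: only 35 reviews
```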

In any case, Fandango did not buy Rotten Tomatoes to discourage people from seeing movies. To that point, the site doesn't have its own boss. Instead, it's led by Fandango's president, a fit, ageless-looking Canadian named Paul Yanover. He started out developing software for animators working on Disney's original Beauty and the Beast, and he doesn't seem like a suit, exactly. But he knows how the popcorn gets buttered. "I think we actually see ourselves as a really useful marketing platform for the studios," he told me.

Used properly, Rotten Tomatoes becomes a resource of nearly infinite vastness. Which was kind of the point of the internet in the first place.

Fandango makes money in several ways. It earns a cut of the convenience fee you pay when you buy a ticket on its platform. It also strikes licensing agreements with content providers who want to use the Tomatometer.

Follow this link:

Behind the Scenes at Rotten Tomatoes - WIRED

Comments Off on Behind the Scenes at Rotten Tomatoes – WIRED

3 ways to browse the web anonymously – We Live Security

Posted: at 12:01 am

Are you looking to hide in plain sight? Here's a rundown of three options for becoming invisible online.

As concern about internet privacy grows, more and more people are actively seeking to browse the web anonymously. There are various ways to avoid being identified or tracked on the internet, although "attempt to avoid" might often be more accurate. Online anonymity can feel like a fleeting goal, and a problem as complex as online privacy has no solution that is bulletproof under all circumstances.

Besides rather simple options such as proxy services or virtual private networks (VPNs), there are other services that you can use to hide your surfing habits from your Internet Service Provider (ISP), the government, or the very websites you're visiting. Let's look at the benefits and downsides of three easy-to-use anonymity networks: Tor, I2P, and Freenet.

Tor, occasionally referred to as "Onionland" because of its use of onion routing, with its encapsulation of network traffic in layer upon layer of encryption, is the best-known and most widely used network other than the surface web. The Tor network is made up of entry, transit, and exit nodes through which a user's communication passes until it reaches its destination. The many hops, and the encryption used at each of them, make it almost impossible to track or analyze a communication.
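
To make the "layer upon layer" image concrete, here is a minimal sketch of layered encryption in Python, using the third-party cryptography package's Fernet cipher. It illustrates the principle only; Tor's actual circuit-building protocol is considerably more sophisticated.

```python
# Conceptual onion routing: the sender wraps a message in one encryption
# layer per relay; each relay peels exactly one layer, so only the exit
# node sees the plaintext and no single node knows the whole path.
from cryptography.fernet import Fernet

keys = [Fernet.generate_key() for _ in range(3)]  # entry, transit, exit
relays = [Fernet(k) for k in keys]

message = b"GET https://example.com/"

packet = message
for relay in reversed(relays):   # innermost layer is for the exit node
    packet = relay.encrypt(packet)

for relay in relays:             # each hop removes one layer in turn
    packet = relay.decrypt(packet)

assert packet == message
```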

The Tor network is estimated to have an average of 200,000 users, making it the biggest anonymous network at the moment. In a way, its popularity is a boon for users, as the Tor browser is very easy to use and supports many languages and various platforms, including Linux, Windows, and even Android. In addition, browsing is relatively fast and consumes relatively few resources.

Nevertheless, Tor is still a network of anonymous proxies, which are often overpopulated. It is very useful for traditional browsing, visiting websites, and accessing unindexed content, but it might not be the best option for other kinds of communication. Nor is it a magic solution: as shown over the years, there have been scenarios in which a user's identity can be unmasked. In addition, recent ESET research uncovered cybercriminals distributing unofficial, trojanized copies of the Tor Browser with the intent of stealing from their victims.
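
For developers, pointing ordinary tooling at Tor is straightforward. The sketch below assumes a local Tor client listening on its default SOCKS port (9050) and the requests library installed with SOCKS support (pip install requests[socks]); the check.torproject.org endpoint used here reports whether the request arrived over Tor.

```python
# Route an HTTP request through a locally running Tor client.
import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h also resolves DNS via Tor
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get("https://check.torproject.org/api/ip",
                    proxies=proxies, timeout=60)
print(resp.json())  # e.g. {"IsTor": true, "IP": "<exit-node address>"}
```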

RELATED READING: An introduction to private browsing

The Invisible Internet Project (I2P) is an anonymous, decentralized network that also allows its users and applications to browse anonymously. Unlike the onion routing used by Tor, communication on I2P is likened to garlic, with each message being a clove and a group of them being a bulb. This way, with I2P a number of packets (or messages) are sent instead of just one, and they travel through different nodes. I2P also uses one-way entry and exit tunnels, so that a query and a reply take different routes. Furthermore, within each tunnel there is onion routing similar to Tor's.

Consequently, with I2P it's even more complicated to analyze traffic than with Tor or a traditional VPN, since it not only uses various nodes and tunnels but also sends a number of packets rather than just one.

The main advantage of I2P is that it can be used for all the activities we carry out on the internet, since it's compatible with most apps: browsers, torrent and other P2P (peer-to-peer) tools, mail, chat, games, and many more. In addition, the project's documentation is very clear and comprehensive, allowing you to adapt its API for any application.

However, I2P is not as popular a network as Tor: it doesn't yet have as high a volume of users (and so has fewer players to share the load), meaning that browsing is sometimes slower.

Freenet is the oldest network of the three considered here, having been launched in 2000. It is designed as an unstructured P2P network of non-hierarchical nodes that share information among themselves. Like Tor and I2P, communication travels between different entry, transit, and exit nodes.

Freenet's purpose is to store encrypted documents that can only be accessed if you know the associated key, thereby preventing them from being found and censored. It offers anonymity both to those who post information and to those who download it.

Among its main benefits, Freenet has strong privacy and anonymity controls that allow users to browse websites, search or read forums, and publish files anonymously. Furthermore, being a P2P network, it is the best of the three for publishing and sharing anonymous content. Nevertheless, that same functionality has a downside: every user has to store the content on their own hardware in order to share it, so the network requires a large amount of disk space and resources.

As each network was developed for different use cases and purposes, their features vary. Tor and I2P cannot compete with Freenet's durability, whereas the latter does not support music and video streaming. On the other hand, I2P offers great flexibility and can easily be adapted to any application, but even so, there is no better proxy system than Tor. Arguably, the best approach is to learn how to use all of them, and then choose the one most suitable for each situation.

Follow this link:
3 ways to browse the web anonymously - We Live Security

Posted in Tor Browser | Comments Off on 3 ways to browse the web anonymously – We Live Security

What is a Bitcoin mixer and how does it work? – CryptoTicker

Posted: at 12:01 am

Few people know that Bitcoin isn't as anonymous as most users think. For this reason, Bitcoin mixing services aim to make your coins safer and your transactions more private.

So are you ready to improve your Bitcoin anonymity? Start mixing! Bitcoin mixing services break down your BTC into smaller, different parts. Next, they mix those parts with coins from other addresses, so that third parties will find it extremely difficult to link your wealth to your identity.

As an illustration, think about making a smoothie. Every small piece of fruit you put in the blender is analogous to coins from an original address; once the drink is ready, you could never really identify which fruit produced a specific flavor. Are you in search of an easy-to-understand yet detailed explanation of Bitcoin mixers? You've come to the right place. In simple terms, a Bitcoin mixing service helps you regain privacy and security in the face of Bitcoin's limited anonymity.

It is, of course, a fact that Bitcoin operates on a blockchain, which means that any trader, miner, or other Bitcoin user can monitor your moves and transactions.

However, thanks to Bitcoin mixers, you can sweep your distrust of Bitcoin under the carpet and use your coins with confidence.

In this post, we present an in-depth discussion of the Bitcoin mixing service BitcoinMix.org, paying attention to its appeal and to how it helps you keep your coins safe.

Without further ado, let's get started.

A Bitcoin mixing service mixes your coins via a predefined system or random mixing. The ultimate aim is to create a misleading trail that prevents hackers and third parties from tracing your Bitcoin transactions.

Today, many mixers offer reliable and competitive services. Also known as tumblers, shufflers, or blenders, Bitcoin mixers are capable of hiding your Bitcoin address or web identity to protect you from internet snoopers.

Now, you're probably wondering why these services are in vogue in the present age of digital currencies and crypto mining. As far as Bitcoin is concerned, anonymity is only partial: with Bitcoin and many other cryptocurrencies, what you actually get is pseudonymity.

A blockchain network is a form of public ledger that records the blocks individual miners add. It keeps a log of all your activities as well as your Bitcoin addresses.

While some hold the belief that thieves and criminals prefer to avoid the public domain, news reports regularly show that exchanges fall victim to hacks. This goes to show that the issue of anonymity is vital for every individual who holds Bitcoin and values it.

As a result, Bitcoin mixing services break down your BTC into smaller, different parts and mix them with coins from other addresses, so that third parties will find it extremely difficult to steal from you.

Let's go a little further and discuss people's reasons for using mixing services.

Just as in older times, when people moved their funds to countries with strict bank-secrecy regulations, people now opt for mixing services to keep their coin business private.

This is because personal information can pass to a third party in the course of a payment transaction. With a Bitcoin mixer, however, no criminal can trace a transaction back to your Bitcoin address.

Transaction fees for mixing services typically fall within the reasonable range of 2-5%.
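
To illustrate the arithmetic, here is a hypothetical sketch of what "split, mix, and charge a fee" amounts to numerically. The fee rate, split count, and function below are invented for the example and do not describe BitcoinMix.org or any real service.

```python
# Hypothetical mixing arithmetic: deduct a 2-5% fee, then split the
# remainder into randomly sized parts (the "smaller, different parts"
# described above). Purely illustrative.
import random

def simulate_mix(amount_btc: float, fee_rate: float = 0.03,
                 parts: int = 4) -> list[float]:
    assert 0.02 <= fee_rate <= 0.05, "outside the 2-5% range cited above"
    remainder = amount_btc * (1 - fee_rate)
    cuts = sorted(random.uniform(0, remainder) for _ in range(parts - 1))
    bounds = [0.0, *cuts, remainder]
    return [round(hi - lo, 8) for lo, hi in zip(bounds, bounds[1:])]

print(simulate_mix(1.0))  # four random amounts summing to ~0.97 BTC
```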

The service looks smooth and easy to use. You start by choosing the coin you would like to anonymize (Bitcoin (BTC), Ethereum (ETH), or Litecoin (LTC)), then provide your own new receiving address, before setting a time delay you consider appropriate for the transaction you want mixed. The subsequent steps, such as sending Bitcoin to the mixer address, come with a piece of advice the service deserves credit for: it encourages users to connect through the Tor browser for added security. The process concludes with you receiving your Bitcoin in a private, secure way that makes the transaction effectively impossible to trace.

Bitcoin is not as anonymous as many people in the cryptocurrency community are led to believe; in fact, it can be quite easy to track an address and connect an identity to it. Mixing services, however, allow users to remain anonymous, which is welcomed by a crypto community that always seeks privacy and security, along with, of course, fast and cheap transactions.


Go here to read the rest:
What is a Bitcoin mixer and how does it work? - CryptoTicker

Posted in Tor Browser | Comments Off on What is a Bitcoin mixer and how does it work? – CryptoTicker

FAITH AND VALUES: Where does the buck stop? – Aiken Standard

Posted: January 26, 2020 at 11:58 pm

It seems as if people (including me) always look for ways to avoid responsibility for what they make of their lives. From time to time, I have been tempted to choose from a vast variety of cop-outs, each one promising me freedom from responsibility for what was happening in my life. Because of my lack of spiritual maturity, I bought into one or more of these. I'm convinced that people need to take more responsibility for the messes they make of their lives.

Research findings in genetics over the past few years make it more necessary than ever for people of faith to own up to their own responsibility. It seems as if every month researchers discover a new gene that has a direct bearing on how people behave. Everything from congeniality, criminal impulses, and IQ to sexual preference is attributed to our genes.

Genetic explanations are the rage today. Nature slowly becomes more important than nurture. As the neuroscientific view of life moves to the forefront of the academic world, if we are not careful, society will begin to worship at the feet of biogenetics, making us slaves to our genes. This does not present a good future scenario.

The implications of the genetic sciences for faith should cause concern for several reasons. First, experience teaches us that many people will respond to genetic engineering's success in one of two ways: either we will ignore it, or we will approach it as an enemy of our faith and ignite another religious fight against science. We know all too painfully how neither response prepares us for the future.

Second, it may hasten the day of the posthuman or cyborg. At the very time when high touch and relationships are needed more than ever, genetic engineering threatens to greatly expand the divisions in the faith.

Third, it gives people a wonderful cop-out for their sins. Can't you just hear the cry, "My genes made me do it"? This is my greatest fear for genetic research, because this response is likely to affect us more than the first two concerns. We don't need much of an excuse to avoid taking responsibility for our lives.

Over the next few years, the entire human genetic structure will be fully mapped out. We already know that our present drugs could fundamentally affect about 5,000 genes. Futurists already speculate about when parents-to-be can select the specific genes they prefer in their babies-to-be.

So I feel responsible for communicating one of the greatest lessons I have learned from Jesus: I am responsible for everything that I do. It is never acceptable to make excuses for my actions. Think with me for a moment of all of the excuses people have conjured up over the centuries.

The first excuse that comes to mind is "The devil made me do it." Wow! Who am I to challenge cosmic forces? Naturally, I am prone to go with this one. However, one may say that experience has taught me not to believe in a devil, even though I deal with evil every day. Think that one through! Religion can be one of the easier routes to copping out.

Many people blame their life on kismet: "It's all in the stars." Again, who am I to tempt fate? This one is harder for me because in some sense we are victims of some form of fate. Or are we? Is it possible that we even contribute to our fate? Consider the death of Princess Di. Was it fate or poor judgment?

One of the classic cop-outs I often hear is "I'm just a layperson." This one makes my blood boil. Did Jesus die so that someone could say, "I'm just a layperson"? We all know better, but we still use the excuse. Allowing this cop-out to continue is one of the most immoral actions of our time. Laity are God's gifts to the world. All of us are laity. If the truth be known, little room is left in this world for laity and clergy. Aren't all people of faith called to some form of ministry? Isn't it time we clergy give up our union and replace it with pursuit of the Gospel?

Over the last decade, clergy's favorite cop-out has been "It's the system's fault." Restructure our church or denomination and all will be well. Most established mainline denominations have bought into this one hook, line, and sinker. But all the restructuring in the world will not overcome a lack of passion or commitment to God. All we do is rearrange the deck chairs on the Titanic. Who makes up the system? We do. We need to get our act together.

What about youth's favorite response to their parents: "Everyone else is doing it"? I used that one a few times myself. Or what about many parents' favorite excuse during the 1970s: "I'm OK, you're OK"? Remember that best-seller? Age doesn't seem to matter when it comes to copping out. All of us have played this game, haven't we? Which one is your favorite cop-out?

The powers and principalities with which we wrestle may be our own genes, but Jesus offers us an alternative to making excuses when he tells us that we will do greater things than he did. Oh, I know he didn't know about genetics, but that doesn't matter. He knew the heart of God, and that God doesn't make junk. That is more important than genetics. Sin can alter genetics. Jesus knew that if we take him seriously, we can be more than just the highest species of animal life on this planet. He knew we're destined for more than a cop-out existence.

For most of my life I failed to see the importance of the Fall and especially the concept of total depravity. Based on a lifetime of experience, I am beginning to see another picture. I still believe that humans are basically good to the core, but at the same time I am now convinced that we are also rotten to the core.

The only thing that separates the rottenness from the good is the grace of God. We need to emphasize this grace more. God can make a difference, in spite of our genes. God overcomes the power of our gene pool! Isn't that the ultimate form of redemption?

What, then, is the response of healthy people of faith to genetic engineering? Genetic research will lead to a resurgent interest in the concepts of holiness and discipleship. In an attempt to say "No" to our genes, people of faith will begin to see a new dimension of relevance for faith, offsetting the power of our genes. As a recovering alcoholic takes life one step at a time, people who yearn for a God-centered or ethical life will turn to the church as the nurturer of such a life.

This nurture will go far beyond what we call nurture today. It will not be a form of spiritual hand-holding or spiritual hangnail-fixing. It will be a nurture that helps people truly overcome their sin and triumph over their genes. Perhaps this is part of what's happening today in the renewed emphasis on lay mobilization and spiritual gifts. Churches are challenging the fatalistic attitude of our time instead of copping out once again. It's about time.

Well, what's it going to be? Are you going to join the crowd and cop out, or are you going to hear Jesus say that you will do greater works than these? How responsible are you to the claims and call of God?

Dr. Fred Andrea, retired Pastor of Aiken's First Baptist Church, is serving as Pastor of Clinton United Methodist Church in Salley.

Excerpt from:

FAITH AND VALUES: Where does the buck stop? - Aiken Standard

Posted in Posthuman | Comments Off on FAITH AND VALUES: Where does the buck stop? – Aiken Standard

Artificial Intelligence: What Educators Need to Know …

Posted: at 11:57 pm

Commentary

Photo by Michael Langan

By Oren Etzioni & Carissa Schoenick

Editor's Note: This Commentary is part of a special report exploring game-changing trends and innovations that have the potential to shake up the schoolhouse. Read the full report: 10 Big Ideas in Education.

Artificial intelligence is a rapidly emerging technology that has the potential to change our everyday lives with a scope and speed that humankind has never experienced before. Some well-known technology leaders, such as Tesla architect Elon Musk, consider AI a potential threat to humanity and have pushed for its regulation "before it's too late," an alarmist statement that confuses AI science with science fiction. What is the reality behind these concerns, and how can educators best prepare for a future with artificial intelligence as an inevitable part of our lives?

General, widespread legislative regulation of AI is not going to be the right way to prepare our society for these changes. The AI field is already humming with a wide variety of new research at an international scale, such that blindly constraining AI research in its early days in the United States would only serve to put us behind the global curve in developing the most important technology of the future. It is also worth noting that there are many applications of AI currently under development that have huge potential benefits for humanity in the fields of medicine, security, finance, and personal services; we would risk a high human and economic cost by slowing or stopping research in those areas if we hastily impose premature, overbearing, and poorly understood constraints.

Oren Etzioni & Carissa Schoenick are CEO and senior program manager at the Allen Institute for Artificial Intelligence, respectively.

Based in Seattle, Etzioni is a professor of computer science at the University of Washington; Schoenick was previously a program manager for Amazon Web Services and for the computational knowledge project WolframAlpha.

The most impactful way to shape the future of AI is not going to be through the regulation of research, but rather through understanding and correctly controlling the tangible impacts of AI on our lives. For example, it is our belief that AI should not be weaponized, and that humans should always have the ultimate "off switch." Beyond these obvious limitations, there are three rules we propose for AI that can be meaningfully applied now to mitigate possible future harm.

An AI system:

1) Must always respect the same laws that apply to its creators and operators;

2) Must always disclose that it is not human whenever it interacts with another entity;

3) Should never retain or share confidential information without explicit approval from the source.

These rules are a strong practical starting point, but to successfully navigate the new world AI will bring about in the coming decades, we're going to need to ensure that our children are learning the skills required both to make sense of this new human-machine dynamic and to control it in the right ways. All students today should be taught basic computer literacy and the fundamentals behind how an AI works, as they will need to be comfortable with learning and incorporating rapidly emerging new technologies into their lives and occupations as they are developed.

We will need our future scientists and engineers to be keenly aware that an AI system can only be as good as the data it is given to work with, and that to avoid dangerous bias or incorrect actions, we need to cultivate the right inputs to these systems that fairly cover all possible perspectives and variables. We will need policymakers who can successfully apply the rules suggested above as well as define the new ones we will need as AI continues to proliferate into the various aspects of our lives.

New and different opportunities and values will likely emerge for humans in the economy that AI creates. As AI makes more resources more widely available, we will find less meaning in material wealth and more value in the activities that are uniquely human. This means that occupations with creative and expressive qualities, such as chefs, tailors, organic farmers, musicians, and artists of all types will become more important in an age in which a real human connection is increasingly precious. Roles that directly affect human development and well-being, such as teaching, nursing, and caregiving, will be especially crucial and should be uplifted as excellent options for people whose vocations are otherwise replaced by AI systems. No AI can hope to match a human for true compassion and empathy, qualities that we should be taking extra care to cultivate in our children to prepare them to inherit a world where these characteristics will be more important than ever.

Background

By Benjamin Herold

What will the rise of artificial intelligence mean for K-12 education?

First, AI and related technologies are reshaping the economy. Some jobs are being eliminated, many others are being changed, and entirely new fields of work are opening up. Those changes are likely to have big implications for the job market in 2030, when today's 6th graders are set to hit their prime working years. But the nation's top economists and technologists are sharply divided about whether AI will be a job killer or creator, presenting a big challenge for the educators and policymakers who must prepare today's students to thrive in a very uncertain tomorrow.

Second, artificial intelligence is changing what it means to be an engaged citizen. K-12 education has never been just about preparing young people for jobs; it's also about making sure they're able to weigh arguments and evidence, synthesize information, and take part in the civic lives of their communities and country. But as algorithms, artificial intelligence, and automated decisionmaking systems are being woven into nearly every aspect of our lives, from loan applications to dating to criminal sentencing, new questions and policy debates and ethical quandaries are emerging. Schools are now faced with having to figure out how to teach students to think critically about the role these technologies are playing in our society and how to use them in smart, ethical ways. Plus, in the age of AI, students will likely have to develop a new communication skill: the ability to talk effectively to intelligent machines. Some economists say that skill could be the difference between success and failure in the workplace of the future.

And third, artificial intelligence could play a powerful role in the push to provide more personalized instruction for all students, and in the process change the teaching profession itself. Intelligent tutoring systems are making inroads in the classroom. New educational software and technology platforms use algorithms to recommend content and lessons for individual students, sometimes pushing teachers away from the front of the classroom and into the role of "coach" or "facilitator." And schools are being flooded with data about their students, information that educators and administrators alike are increasingly expected to use to make real-time decisions and adjustments in the course of their day-to-day work.

Some educators see the rising role of AI as a threat to their existence and a danger to student-data privacy. Others take a more positive view, seeing it as having the potential to free them from mundane tasks like lecturing and grading, creating rich opportunities for continuous improvement, and opening the doors for more meaningful trial-and-error learning by students.

Whatever the perspective, there is one thing most everyone seems to agree on: Now is the time for the K-12 field to start wrestling with the promises and perils of AI.

Vol. 37, Issue 16, Pages 28-29


Read the original here:

Artificial Intelligence: What Educators Need to Know ...

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence: What Educators Need to Know …

The Architecture of Artificial Intelligence | Features …

Posted: at 11:57 pm

Behnaz Farahi Breathing Wall II

"Let us consider an augmented architect at work. He sits at a working station that has a visual display screen some three feet on a side; this is his working surface, controlled by a computer with which he can communicate by means of small keyboards and various other devices." - Douglas Engelbart

This vision of the future architect was imagined by engineer and inventor Douglas Engelbart during his research into emerging computer systems at Stanford in 1962. At the dawn of personal computing, he imagined the creative mind overlapping symbiotically with the intelligent machine to co-create designs. This dual mode of production, he envisaged, would hold the potential to generate new realities which could not be realized by either entity operating alone. Today, self-learning systems, otherwise known as artificial intelligence or AI, are changing the way architecture is practiced, as they are our daily lives, whether or not we realize it. If you are reading this on a laptop or tablet, then you are directly engaging with a number of integrated AI systems, now so embedded in the way we use technology that they often go unnoticed.

As an industry, AI is growing at an exponential rate, now understood to be on track to be worth $70bn globally by 2020. This is in part due to constant innovation in the speed of microprocessors, which in turn increases the volume of data that can be gathered and stored. But don't panic: the artificial architect with enhanced Revit proficiency is not coming to steal your job. The human vs. robot debate, while compelling, is not so much the focus here as how AI is augmenting design and how architects are responding to and working with these technological developments. What kind of innovation is artificial intelligence generating in the construction industry?

Assuming you read this as a non-expert, it is likely that much of the AI you have encountered to this point has been weak AI, otherwise known as ANI (Artificial Narrow Intelligence). ANI follows pre-programmed rules so that it appears intelligent but is in effect a simulation of a human-like thought process. With recent innovations, such as Nvidia's microchip announced in April 2016, a shift is now being seen towards what we might understand as deep learning, where a system can, in effect, train and adapt itself. The interest for designers is that AI is therefore starting to apply itself to more creative tasks, such as writing books, making art, web design, or self-generating design solutions, thanks to its increased proficiency in recognizing speech and images. Significant "AI winters," or periods where funding has been hard to source for the industry, have occurred over the last twenty years, but commentators such as philosopher Nick Bostrom now suggest we are on the cusp of an explosion in AI, and that this will not only shape but drive the design industry in the next century. AI therefore has the potential to influence the architectural design process at a series of different construction stages, from site research to the realization and operation of the building.

1. Site and social research

"By already knowing everything about us, our hobbies, likes, dislikes, activities, friends, our yearly income, etc., AI software can calculate population growth, prioritize projects, categorize streets according to usage, and so on, and thus predict a virtual future and automatically draft urban plans that best represent and suit everyone." - Rron Beqiri on the Future Architecture Platform.

Gathering information about a project and its constraints is often the first stage of an architectural design process, traditionally involving traveling to a site, perhaps measuring, sketching, and taking photographs. In the online and connected world, there is already a swarm-like abundance of data for the architect to tap into, already linked and referenced against other sources, allowing the designer to, in effect, simulate the surrounding site without ever having to engage with it physically. This information fabric has been referred to as the "internet of things." BIM tools currently on the market already tap into these data constellations, allowing an architect to evaluate site conditions with minute precision. Software such as EcoDesigner Star or open-source plugins for Google SketchUp allows architects to immediately calculate necessary building and environmental analyses without ever having to leave their office. This phenomenon is already enabling many practices to take on large projects abroad that might have been logistically unachievable just a decade ago.

The information gathered by our devices and stored in the Cloud amounts to much more than the material conditions of the world around us. Globally, we are amassing ever-expanding records of human behavior and interactions in real time. Personal, soft data might, in the most optimistic sense, work towards the socially focused design that has been widely publicized in recent years through its ability to integrate the needs of users. This approach, if only in the first stages of the design process, would impact the twentieth-century ideals of mass production and standardization in design. Could the internet of things create a socially adaptable and responsive architecture? One could speculate that, for example, when the population of children in a city crosses a maximum threshold in relation to the number of schools, a notification might be sent to the district council that it is time to commission a new school. AI could, therefore, in effect write the brief for and commission architects by generating new projects where they are most needed.

Autodesk. Bicycle design generated by Dreamcatcher AI software.

2. Design decision-making

Now that we have located live-updating intelligence for our site, it is time to harness AI to develop a design proposal. Rather than a program, this technology is better understood as an interconnected, self-designing system that can upgrade itself. It is possible to harness a huge amount of computing power and experience by working with these tools, even as an individual, as Pete Baxter, Vice President of Digital Manufacturing at Autodesk, told the Guardian: "now a one-man designer, a graduate designer, can get access to the same amount of computing power as these big multinational companies." The architect must input project parameters, in effect an edited design brief, and the computer system will then suggest a range of solutions which fulfill these criteria. This innovation has the potential to revolutionize not only how architecture is imagined but how it is fundamentally expressed for designers who choose to adopt these new methods.

I spoke with Michael Bergin, a Principal Research Scientist at Autodesk, to get a better understanding of how AI systems are influencing the development of design software for architects. While the work was initially aimed at the automotive and industrial design industries, Dreamcatcher is now beginning to filter into architecture projects. It was used recently to develop The Living's generative design for Autodesk's new office in Toronto and MX3D's steel bridge in Amsterdam. The basic concept is that CAD models of the surrounding site and other data, such as client databases and environmental information, are fed into the processor. Moments later, the system outputs a series of optimized 3D design solutions ready to render. These processes effectively rely on cloud computing to create a multitude of options based on self-learning algorithmic parameters. Lattice-like and fluid forms are often the aesthetic result, perhaps unsurprisingly, as the software imitates structural rules found in nature.

The Dreamcatcher software has been designed to optimize parametric design and to link into and extend existing software designed by Autodesk, such as Revit and Dynamo. Interestingly, Dreamcatcher can make use of a wide and increasing spectrum of design input data, such as formulas, engineering requirements, CAD geometry, and sensor information, and the research team is now experimenting with Dreamcatcher's ability to recognize sketches and text as input data. Bergin imagines the future of design tools as systems that "accept any type of input that a designer can produce [to enable] a collaboration with the computer to iteratively target a high-performing design that meets all the varied needs of the design team." This would mean future architects would be less in the business of drawing and more into specifying requirements of the problem, making them more in sync with their machine counterparts in a project. Bergin suggests architects who adopt AI tools would have the ability to synthesize a broad set of high-level requirements from the design stakeholders, including clients and engineers, and produce design documentation as output, in line with Engelbart's vision of AI augmenting the skills of designers.
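
To ground the workflow described above, here is a toy version of the generate-and-score loop at the heart of generative design: sample many candidates from the input parameters, score each against the brief's objectives, and keep the best. It is not Autodesk Dreamcatcher's actual algorithm; every parameter name and weight below is an invented placeholder.

```python
# Toy generative-design loop: random sampling over two design parameters,
# scored against competing daylight and energy objectives. Illustrative only.
import random
from dataclasses import dataclass

@dataclass
class Candidate:
    floor_depth_m: float
    window_ratio: float
    score: float = 0.0

def evaluate(c: Candidate) -> float:
    daylight = c.window_ratio * (10 / c.floor_depth_m)  # shallow floors daylight better
    energy_penalty = c.window_ratio ** 2                # more glazing, more heat loss
    return daylight - energy_penalty

def generate(n: int = 1000, keep: int = 5) -> list[Candidate]:
    pool = [Candidate(random.uniform(6, 18), random.uniform(0.1, 0.9))
            for _ in range(n)]
    for c in pool:
        c.score = evaluate(c)
    return sorted(pool, key=lambda c: c.score, reverse=True)[:keep]

for best in generate():
    print(best)  # the top-scoring candidate designs
```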

AI is also being used directly in software such as Space Syntax's depthmapX, designed at The Bartlett in London, to analyze the spatial network of a city with the aim of understanding and utilizing social interactions in the design process. Another tool, Unity 3D, built from software developed for game engines, enables designers to analyze their plans, for example by computing the shortest distances to fire exits. This information would then allow the architect to rearrange or generate spaces in plan, or even to organize entire future buildings. Examples of architects who are adopting these methods include Zaha Hadid with the Beijing Tower project (designed antemortem) and MAD Architects in China, among others.

Computational Architecture Digital Grotesque Project

3. Client and user engagement

As so much of the technology built into AI has been developed by the gaming industry, its ability to produce forms of augmented reality has interesting potential to change the perception of and engagement with architectural designs, for both the architects and non-architects involved in a project. Through the use of additional hardware, augmented reality can capture and enhance real-world experience. It would enable people to engage with a design prior to construction, for example, to select the most appealing proposal based on their experiences within its simulation. It is possible that many architecture projects will also remain in this unbuilt zone, in a parallel digital reality, which the majority of future world citizens will simultaneously inhabit.

Augmented reality would, therefore, allow a client to move through and sense different design proposals before they are built. Lights, sounds, even the smells of a building can be simulated, which could reorder the emphasis architects currently give to specific elements of their design. Such a change in representational method has the potential to shift what is possible within the field of architecture, as CAD drafting did at the beginning of this century. Additionally, the feedback generated by augmented reality can feed directly back into the design, allowing models to interact with and adapt to future users. Smart design tools such as Materiable by Tangible Media are beginning to experiment with how AI can engage with and learn from human behavior.

Computational Architecture Digital Grotesque Project

4. Realizing designs and the rise of robot craftsmen

AI systems are already being integrated into the construction industry. Innovative practices such as Computational Architecture are working with robotic craftsmen to explore AI in construction technology and fabrication. Michael Hansmeyer and Benjamin Dillenburger, founders of Computational Architecture, are investigating the new aesthetic language these developments are starting to generate. "Architecture stands at an inflection point," he suggests on their website; "the confluence of advances in both computation and fabrication technologies lets us create an architecture of hitherto unimaginable forms, with an unseen level of detail, producing entirely new spatial sensations."

3D printing technology developed from AI software has the potential to offer twenty-first-century architects a significantly different aesthetic language, perhaps catalyzing a resurgence of detail and ornamentation, now rare due to the decline in traditional crafts. Hansmeyer and Dillenburger's Grotto Prototype for the Super Material exhibition in London was a complex architectural grotto 3D-printed from sandstone, the form of its sand grains arranged by a series of algorithms custom-designed by the practice. The technique allowed forms to be developed which differ significantly from those of traditional stonemasonry. The aim of the project was to show that it is now possible to print building-scale rooms from sandstone, and that 3D printing can also be used for heritage applications, such as repairs to statues.

Robotics are also becoming more common on construction job sites, mostly dealing with human resources and logistics. According to AEM, their applications will soon expand to bricklaying, concrete dispensing, welding, and demolition. Another future use could involve working with BIM to identify missing elements in the snagging process and update the AI in real time. Large-scale projects, for example government-led infrastructure initiatives, might be the first to apply this technology, followed by mid-scale projects in the private sector, such as cultural buildings. The challenges of the construction site will bring AI robotics out of the indoor, sanitized environment of the lab into a less scripted reality. Robert Saunders, a researcher into AI and fabrication at the University of Sydney, told New Atlas that "robots are great at repetitive tasks and working with materials that react reliably; what we're interested in doing is trying to develop robots that are capable of learning how to work with materials that work in non-linear ways, like working with hot wax or expanding foam or, more practically, with low-grade building materials like low-grade timber." Saunders foresees robot stonemasons and other craftsbots working in as yet unforeseen ways, such as developing the architect's skeleton plans, in effect spontaneously generating a building on-site from a sketch.

Ori System by Ori

5. Integrating AI systems

This innovation involves either integrating developing artificial intelligence technologies with existing infrastructure or designing architecture around AI systems. There is a lot of excitement in this field, influenced in part by Mark Zuckerberg's personal project to develop networked AI systems within his home, which he announced in his New Year's Facebook post in 2016. His wish is to develop simple AI systems to run his home and help with his day-to-day work. This technology would have the ability to recognize the voices of members of the household and respond to their requests. Designers are taking on the challenge of designing home-integrated systems, such as the Ori System of responsive furniture, or gadgets such as Eliq for energy monitoring. Other innovations, such as driverless cars that run on an integrated system of self-learning AI, have the potential to shape how our cities are laid out and planned, in the most basic sense by limiting our need for more roads and parking areas.

Behnaz Farahi is a young architect who is employing her research into AI and adaptive surfaces to develop interactive designs, such as her Aurora and Breathing Wall projects. She creates immersive and engaging indoor environments which adapt to and learn from their occupants. Her approach is one of many: different practices with different goals will adopt AI at different stages of their process, creating a multitude of architectural languages.

Researchers and designers working in the field of AI are attempting to understand the potential of computational intelligence to improve, or even upgrade, parts of the design process, with the aim of creating a more functional and user-optimized built environment. It has always been the architect's task to make decisions based on complex, interwoven, and sometimes contradictory sets of information. As AI gradually improves in making useful judgments in real-world situations, it is not hard to imagine these processes overlapping and engaging with each other. While these developments raise questions about ownership, agency, and, of course, privacy in data gathering and use, the upsurge in self-learning technologies is already altering the power and scope of architects in design and construction. As architect and design theorist Christopher Alexander said back in 1964, "We must face the fact that we are on the brink of times when man may be able to magnify his intellectual and inventive capacity, just as in the nineteenth century he used machines to magnify his physical capacity."

In our interview, Bergin gave some insights into how he sees this technology affecting designers in the next twenty years. "The architectural language of projects in the future may be more expressive of the design team's intent," he stated. "Generative design tools will allow teams to evaluate every possible alternative strategy to preserve design intent, instead of compromising on a sub-optimal solution because of limitations in time and/or resources." Bergin believes AI and machine learning will be able to support a dynamic and expanding community of practice for design knowledge. He can also foresee implications for the democratization of design work, suggesting "the expertise embodied by a professional of 30 years may be more readily utilized by a more junior architect." Overall, he believes architectural practice over the next 20 years will likely become far more inclusive with respect to client and occupant needs, and orders of magnitude more efficient when considering environmental impact, energy use, material selection, and client satisfaction.

On the other hand, Pete Baxter suggests architects have little to fear from artificial intelligence: "Yes, you can automate. But what does a design look like that's fully automated and fully rationalized by a computer program? Probably not the most exciting piece of architecture you've ever seen." At the time of writing, many AI algorithms are still relatively uniform and relatively ignorant of context, and it is proving difficult to automate decision-making that would at first glance seem simple for a human. A number of research labs, such as the MIT Media Lab, are working to solve this. However, architectural language and diagramming have been part of programming complex systems and software from the start, and the two fields have had a significant influence on one another. To think architecturally is to imagine and construct new worlds, integrate systems, and organize information, which lends itself to the front line of technical development. As far back as the 1960s, architects were experimenting with computer interfaces to aid their design work, and their thinking has inspired much of the technology we now engage with each day.

Behnaz Farahi Aurora

See the article here:

The Architecture of Artificial Intelligence | Features ...

Posted in Artificial Intelligence | Comments Off on The Architecture of Artificial Intelligence | Features …

What will artificial intelligence bring in 2020?

Posted: at 11:57 pm

As an expert, I'm often asked: what will this year bring? I don't have a crystal ball to look into the future, or an artificial intelligence (AI)-based system for these kinds of predictions, but there are some interesting trends I certainly want to share.

I will not discuss the growth figures of AI use cases, whether those not using AI will limp along behind, or whether the AI bubble will burst and a new AI winter will come. Much progress has been made, but not enough to clear the next hurdle: how to gain knowledge about your business domain with the help of AI.

So, what will AI bring us in the near future? Let me discuss three important topics:

"AI is already starting to transform how organizations do business, manage their customer relationships, and stimulate the ideas and creativity that fuel ground-breaking innovation." (Capgemini)

Three years ago, good use cases for machine learning were hard to find. Now, success stories are everywhere: machine learning, deep learning, neural networks and all the other variants are plentiful. So, whatever happens this year, machine learning is here to stay, and it will become even more successful as more businesses start to use AI in their daily activities.

All these AI algorithms now constitute an integral part of many data-driven tools. For data analysts, using AI is just a click away. But does this imply that AI is used correctly? I'm afraid not, because:

But there's more to business processes than task execution. How can we determine if our AI is really an improvement over human-based actions? This is still an open discussion.

Currently, we see machine learning being used in very narrow applications, to make process steps more efficient or to alleviate tedious jobs. But how AI will contribute a meaningful return on investment remains a big question, in 2020 just as it was last year.

"AI ethics isn't just a feel-good add-on, a want but not a need. AI has been called one of the great human rights challenges of the 21st century." (Khari Johnson)

Last year, discussions about the ethics of AI really took off. Though still mainly academic, the discussion no longer focuses only on the (im)moral consequences of AI, such as discrimination, job loss and inequality. The focus now is on values. Is there such a thing as AI for good? Do we as a society really want to give decisive powers to machines? Are those machines fair and open? And what about checks and balances?

These discussions do not focus on AI alone. They also concern the use of big data. Smart cities, facial recognition, fraud detection: these are all areas where privacy and expedience have to be discussed and assessed. This will require evaluating the ethical side of a project from its very beginning. Will the ethics of AI be a burdensome duty or a real competitive advantage? I don't know yet.

We will see the rise of ethical frameworks. Just like compliance frameworks in accounting, these frameworks will offer ways of assessing the ethical implications of AI. But like any framework, they are no excuse not to think independently and systematically about AI; frameworks don't guarantee a good outcome. Discussion will also arise on how to use these frameworks in a business context.

My recent three-part blog on ethics (part 1, part 2, part 3) describes an approach to implementing ethics for AI in products, services, and businesses.

"Deep learning has instead given us machines with truly impressive abilities but no intelligence. The difference is profound and lies in the absence of a model of reality." (Judea Pearl)

Machine learning, including deep learning and neural networks, is highly successful. These methods are all very good at extracting information from data. Yes, I'm aware of the numerous mistakes machine learning makes, and of how machine learning, mostly image recognition, can be fooled. We must learn from these mistakes by improving the algorithms and learning processes. But AI is far more than machine learning alone. Cognitive computing, symbolic AI and contextual reasoning are also AI. We need to re-evaluate the use of these other AI techniques for our applications.

This year, we'll continue to open the black box of machine learning. Through interpretable machine learning, algorithms will provide insights into how they reached their decisions. But interpretability alone will not tell a business whether those decisions are correct and fair.
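As one concrete example of what such interpretability tooling looks like, the sketch below uses permutation importance from scikit-learn, which shuffles each input feature and measures how much the model's accuracy drops. The dataset and model are placeholders, not a reference to any particular business system:

# Illustrative only: "opening the black box" with permutation importance.
# The dataset and model are placeholders for whatever a business deploys.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much held-out accuracy drops:
# a model-agnostic view of which inputs actually drove the decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda pair: pair[1], reverse=True)[:5]
for name, importance in top:
    print(f"{name:30s} {importance:.4f}")

Note the limitation flagged above: this shows which features drove the predictions, not whether relying on them was correct or fair.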

Machine learning is good at extracting information from data, but it's lousy at extracting knowledge from information. For data to become information, it must be contextualized, categorized, calculated and condensed. Information is key to knowledge, and knowledge is closely linked to doing: it implies know-how and understanding. This raises the decades-old philosophical question of AI: do AI systems really understand what they are doing?
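The data-to-information half of that chain is the mechanical part, and a toy example makes the contrast visible. The figures below are invented; the point is only the contextualize-categorize-calculate-condense pipeline:

# A toy illustration of the data-to-information step: raw events are
# contextualized (joined with reference data), categorized, calculated
# and condensed into an aggregate a person can act on. Figures invented.
import pandas as pd

raw = pd.DataFrame({
    "product": ["A", "B", "A", "C", "B"],
    "units":   [3, 1, 2, 5, 4],
    "price":   [10.0, 25.0, 10.0, 7.5, 25.0],
})
categories = pd.DataFrame({
    "product":  ["A", "B", "C"],
    "category": ["tools", "toys", "tools"],
})

info = (
    raw.merge(categories, on="product")                    # contextualize
       .assign(revenue=lambda d: d["units"] * d["price"])  # calculate
       .groupby("category")["revenue"]                     # categorize
       .sum()                                              # condense
)
print(info)

The missing step, turning that condensed table into know-how about what to do next, is precisely where machine learning still falls short.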

Without visiting John Searle's Chinese Room again, I truly think that the next step in AI can only be taken once we incorporate some level of knowledge or understanding into AI. In order to do that, we'll have to take another step toward human-like AI, for example by using symbolic AI (or classical AI). This is the branch of AI research that concerns itself with attempting to explicitly represent human knowledge in a declarative form (i.e., facts and rules). Combining these older techniques with neural networks in a hybrid form will take AI even further. This means that causation, knowledge representation, and so on are key factors necessary to take AI to the next level, a level that will be even more exciting than the achievements AI has reached this year.
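For readers who haven't met symbolic AI before, here is a minimal sketch of what "knowledge in a declarative form" means in practice: facts and rules are stated explicitly, and a forward-chaining loop derives new facts until nothing changes. The family-relations domain and names are just an illustration:

# A minimal sketch of symbolic (classical) AI: knowledge stored as
# declarative facts and rules, with forward-chaining inference.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def apply_grandparent_rule(known):
    """Rule: if X is a parent of Y and Y is a parent of Z, X is a grandparent of Z."""
    derived = set()
    for rel1, x, y in known:
        for rel2, y2, z in known:
            if rel1 == "parent" and rel2 == "parent" and y == y2:
                derived.add(("grandparent", x, z))
    return derived

# Forward chaining: keep applying rules until no new facts appear.
while True:
    new_facts = apply_grandparent_rule(facts) - facts
    if not new_facts:
        break
    facts |= new_facts

print(sorted(facts))  # includes ('grandparent', 'alice', 'carol')

Unlike a neural network's weights, every derived conclusion here can be traced back to explicit facts and rules, which is exactly the kind of knowledge representation a hybrid, neuro-symbolic system would combine with learned perception.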

For more information on this topic, connect with Reinoud Kaasschieter.

Originally posted here:

What will artificial intelligence bring in 2020?


AI, emerging technologies to replace 69% of managerial …

Posted: at 11:57 pm

By 2024, artificial intelligence (AI) and emerging technologies such as virtual personal assistants and chatbots will replace almost 69 per cent of the manager's workload, predicts research and advisory firm Gartner, Inc.

Such technologies are rapidly making headway into the workplace, Gartner said.

"The role of manager will see a complete overhaul in the next four years," said Helen Poitevin, research vice-president at Gartner, in a statement.

"Currently, managers often need to spend time filling in forms, updating information and approving workflows. By using AI to automate these tasks, they can spend less time managing transactions and can invest more time on learning, performance management and goal setting," she said.

AI and emerging technologies will undeniably change the role of the manager and will allow employees to extend their degree of responsibility and influence, without taking on management tasks, Gartner said.
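Gartner doesn't spell out the mechanics, but much of the workload shift Poitevin describes starts with something as plain as policy-driven triage: routine requests are approved automatically and only exceptions reach the manager. The sketch below is entirely hypothetical (the request types, limits and names are made up), and a production system would layer a learned anomaly model on top of these rules:

# Hypothetical sketch of automating a routine approval workflow; not a
# Gartner or vendor implementation. Routine requests are auto-approved,
# exceptions are escalated to a human manager.
from dataclasses import dataclass

@dataclass
class Request:
    employee: str
    kind: str      # e.g. "expense" (dollars) or "leave" (days)
    amount: float

# Illustrative policy limits a manager would otherwise check by hand.
AUTO_APPROVE_LIMITS = {"expense": 200.0, "leave": 2.0}

def triage(request: Request) -> str:
    limit = AUTO_APPROVE_LIMITS.get(request.kind)
    if limit is not None and request.amount <= limit:
        return "auto-approved"
    return "escalated to manager"

for req in (Request("ana", "expense", 75.0), Request("ben", "leave", 5.0)):
    print(req.employee, req.kind, "->", triage(req))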

Application leaders focused on innovation and AI are now accountable for improving worker experience, developing worker skills and building organisational competency in responsible use of AI, Gartner noted.

"Application leaders will need to support a gradual transition to increased automation of management tasks as this functionality becomes increasingly available across more enterprise applications," said Poitevin.

Nearly 75 per cent of heads of recruiting reported that talent shortages will have a major effect on their organisations, according to Gartner.

Enterprises have been experiencing a critical talent shortage for several years.

Organisations need to consider people with disabilities, an untapped pool of critically skilled talent.

Today, AI and other emerging technologies are making work more accessible for employees with disabilities.

Gartner estimates that organisations actively employing people with disabilities have 89 per cent higher retention rates, a 72 per cent increase in employee productivity and a 29 per cent increase in profitability.

In addition, Gartner said that by 2023, the number of people with disabilities employed will triple, due to AI and emerging technologies reducing barriers to access.

"Some organisations are successfully using AI to make work accessible for those with special needs," said Poitevin.

"Restaurants are piloting AI robotics technology that enables paralysed employees to control robotic waiters remotely. With technologies like braille-readers and virtual reality, organisations are more open to opportunities to employ a diverse workforce," she said.

More here:

AI, emerging technologies to replace 69% of managerial ...


Artificial Intelligence Task Force | Agency of Commerce …

Posted: at 11:57 pm

The Artificial Intelligence Task Force shall investigate the field of artificial intelligence and make recommendations on the responsible growth of Vermont's emerging technology markets, the use of artificial intelligence in State government, and State regulation of the artificial intelligence field.

Additional detail about H.378 / Act 137.

The Task Force is comprised of fourteen (14) members who will meet not more than 15 times and shall submit a Final Report to the Senate Committee on Government Operations and the House Committee on Energy and Technology on or before January 15, 2020.

Read the Final Report here.

Past Meeting Schedule:

Friday, January 10, 2020, 1:30-4:30 PM - Agenda and Meeting Minutes

Friday, December 6, 2019 - Agenda and Meeting Minutes

November 4, 2019 - Agenda and Meeting Minutes

October 17, 2019 - Agenda and Meeting Minutes

Public Meeting held at the Tech Jam, October 17, 2019 - Meeting Minutes

October 10, 2019 - Meeting Minutes

October 1, 2019 - Meeting Minutes

September 23, 2019 - Agenda and Meeting Minutes

August 23, 2019 - Agenda and Meeting Minutes

July 25, 2019 - Meeting Minutes

July 19, 2019 - Agenda

June 14, 2019 - Agenda and Meeting Minutes

May 20, 2019 - Agenda

April 26, 2019 - Agenda and Meeting Minutes

February 22, 2019 - Agenda and Meeting Minutes

January 18, 2019 - Agenda and Meeting Minutes; ORCA Media Recordings: Transportation, Technology, and Manufacturing/Construction

December 14, 2018 - Agenda and Meeting Minutes; Presentation: Artificial Intelligence (AI) - The Hardware Perspective

November 29, 2018 - Agenda and Meeting Minutes

October 12, 2018 - Agenda and Meeting Minutes

September 4, 2018 - Agenda and Meeting Minutes

Read more here:

Artificial Intelligence Task Force | Agency of Commerce ...
