Coca-Cola Wants to Use AI Bots to Create Its Ads – Adweek

BARCELONA, Spain: Coca-Cola is one of the most beloved brands in the world and is known for creating some of the best work in the advertising industry. But can an AI bot replace a creative? Mariano Bosaz, the brand's global senior digital director, wants to find out.

Speaking to Adweek at Mobile World Congress on Tuesday, Bosaz said that he's in Barcelona this week partly to get a better feel for how brands can use artificial intelligence, because he's interested in swapping in robots for the humans who crank out ads.

"Content creation is something that we have been doing for a very long time: we brief creative agencies and then they come up with stories that they audio-visualize, and then we have 30 seconds or maybe longer," Bosaz said. "In content, what I want to start experimenting with is automated narratives."

As part of a recent restructuring to make Coke a digital business, the brand hired its first chief digital marketing officer, David Godsman. That digital transformation includes four focus areas: customer and consumer experience, operations, new businesses and culture. Within the customer and consumer bucket, Coke is interested in using artificial intelligence to improve content, media and commerce, particularly when it makes the creative process more effective.

"If you try to implement a technology that is too old, obviously it's obsolete, but if you're trying to implement technology too early, you're going to have a problem with scale."

– Mariano Bosaz, global senior digital director, Coca-Cola

In theory, Bosaz thinks AI could be used by his team for everything from creating music for ads and writing scripts to posting a spot on social media and buying media. "It doesn't need anyone else to do that but a robot; that's a long-term vision," he said. "I don't know if we can do it 100 percent with robots yet, maybe one day, but bots is the first expression of where that is going."

Bosaz isn't alone in envisioning human-less creative: AI is already being used to create commercial music and jingles, and publishers like the AP are experimenting with using robots to write copy.

He noted that while bots and data may not be able to write an entire script, they are capable of putting together the first five seconds of a commercial or the end of a spot, because Coke ads always have the same closing, where an image of the brand's logo flashes across the screen with a tagline.

In terms of Coca-Cola's interest in AI for media buying, Bosaz said that Coke already buys ads programmatically but is far from putting more than half of its media budget into programmatic.

Coca-Cola is also looking for ways to use programmatic technology to fulfill ecommerce sales through tactics like subscriptions, though Bosaz didn't say exactly what that may entail.

Right now, Coke sells through third-party retailers like Amazon and Tesco and through vending machines, and a small portion of sales comes from direct-to-consumer programs like Share a Coke.


Souped-up vending machines are particularly intriguing in countries like Japan, where mobile adoption and vending machine sales are high. Coke has a Japanese app called Coke On that lets consumers pay for drinks. "Once you have that, then you can use beacons so that you know when people are passing by the machines and you can understand habits of consumption, location and time," Bosaz said.

At the same time, Bosaz said marketers need to keep privacy concerns with the Internet of Things in mind and find the right balance: using consumer data to provide better services that consumers appreciate, without crossing the line.

That includes devices like Amazon Echo as well as Coke's own packaging, bottles and trucks. For example, the brand is testing beacons in retail stores in Belgium that pull in live data as shoppers move around the store.

"You can see that [in] real time, and then you have historical data that [helps] you predict behavior," Bosaz said.

At Mobile World Congress, Bosaz said that he's looking for the next breakout technology that's also big enough to work for a massive brand like Coke.

"Here, I try to understand timing," he said. "You need to have the right timing. If you try to implement a technology that is too old, obviously it's obsolete, but if you're trying to implement technology too early, you're going to have a problem with scale."


The Future of AI Regulation: Part 1 – The National Law Review

INTRODUCTION

Artificial intelligence (AI) systems have raised concerns in the public, some speculative and some based in contemporary experience. Some of these concerns overlap with concerns about the privacy of data, some relate to the effectiveness of AI systems and some relate to the possibility of misapplication of the technology. At the same time, the development of AI technology is seen as a matter of national priority, and fears of losing the AI technology race fuel national efforts to support its development.

The healthcare and life sciences sectors are highly influenced by US government policy; accordingly, these industry sectors should carefully monitor US government policy pronouncements on AI. This special report is the first of two that will review the US government's overarching national policy on AI, as articulated in Executive Order 13,859, Maintaining American Leadership in Artificial Intelligence (the Executive Order), and the related draft Office of Management and Budget memorandum entitled Guidance for Regulation of Artificial Intelligence Applications (the Draft Memo). While these two special reports will provide a high-level review of these documents, they will also highlight certain aspects and other recent developments that may be related to the Executive Order and the Draft Memo.

These concerns are heightened by the relative lack of a specific legal and regulatory environment that creates guardrails for the development and deployment of AI systems. Indeed, the potential use cases of this new technology are startling: self-driving cars, highly accurate medical diagnosis and screenplay writing are all tasks that AI systems have proven themselves capable of performing. The "black box" nature of some of these systems, where there is an inability to fully understand how or why an AI system performs as it does, adds to the anxiety about how they are developed and deployed.

At the same time, many nations view the development of AI technologies as a matter of national concern. Economic and academic competitiveness in the field is growing, and some governments are concerned that commercial enterprise alone will be insufficient to remain competitive in AI. It is not surprising, then, that governments around the world, including the US government, are beginning to address national strategies for the support of AI development while at the same time struggling with the issue of regulation, whether preliminarily, conceptually or directly.

The role of the government in every industry can be significant, even in a market-driven economy like the US. This is particularly true for those industries that are susceptible to innovation through AI technologies and also highly regulated, controlled or supplied by governments, such as healthcare. Accordingly, the healthcare and life science industries should pay particular attention to governmental pronouncements on policy related to AI.

On January 13, 2020, the Office of Management and Budget (OMB) published a request for comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, Guidance for Regulation of Artificial Intelligence Applications (the Draft Memo).1 OMB produced the Draft Memo in accordance with the requirements of Executive Order 13,859, Maintaining American Leadership in Artificial Intelligence (the Executive Order).2 The Executive Order called on OMB, in coordination with the Director of the Office of Science and Technology Policy, the Director of the Domestic Policy Council and the Director of the National Economic Council, to issue a memorandum that will:

(i) Inform the development of regulatory and non-regulatory approaches by such agencies regarding technologies and industrial sectors that are either empowered or enabled by AI, and that advance American innovation while upholding civil liberties, privacy and American values; and

(ii) Consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.3

The Executive Order also required OMB to issue a draft version for public comment to help ensure public trust in the development and implementation of AI applications.4 Public comments on the Draft Memo are due March 13, 2020.5 Although the Draft Memo, like the Executive Order, speaks in general terms, it does provide more focus than the Executive Order in many ways. For example, the Executive Order requires implementing agencies to review their authorities relevant to applications of AI and submit plans to OMB to ensure consistency with the final OMB memorandum.6 The Draft Memo provides additional specificity regarding the information that the agencies must incorporate in their respective plans.7

This special report is the first of two that will review the five guiding principles and six strategic objectives articulated in the Executive Order and the specific provisions of the Draft Memo, highlighting certain aspects and other recent developments that may be related to both documents. These articles will not, however, address national defense matters.

The Executive Order makes very clear that maintaining American leadership in AI is a paramount concern of the administration because of its importance to the economy and national security. In addition, the Executive Order recognizes the important role the Federal Government plays:

[I]n facilitating AI R&D, promoting the trust of the American people in the development and deployment of AI-related technologies, training a workforce capable of using AI in their occupations, and protecting the American AI technology base from attempted acquisition by strategic competitors and adversarial nations.8

The Executive Order identifies objectives that executive departments and agencies should pursue, which primarily address how the federal government can participate in developing the US AI industry. These objectives are as follows:

1. PROMOTE AI R&D INVESTMENT: Promote sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI and related technologies and to rapidly transition those breakthroughs into capabilities that contribute to our economic and national security.9

The first objective has a few interesting components. First, the reference to collaboration includes international partners and allies. This implies that the current administration considers the US AI industry as being both international and also, perhaps, governmental. In particular, the reference to allies implies that foreign governments may be partners in the development of the US AI technology industry, presumably, at least, with respect to national security matters. Second, this objective specifically references investment, implying that the administration anticipates financial investment from the identified collaboration partners, including non-US industry and governments. How agencies achieve this objective will be fascinating to discover, particularly in light of US government restrictions on foreign investment in sensitive US industries and the recently enacted regulations implementing the Foreign Investment Risk Review Modernization Act of 2018.10

Federal policy on investment in AI is the subject of the National Artificial Intelligence Research and Development Strategic Plan (the AI R&D Plan),11 a product of the work of the National Science & Technology Council's Select Committee on Artificial Intelligence. The AI R&D Plan is broadly consistent with the Executive Order, but its objectives and goals pre-date the Executive Order and were not changed after it. Other Federal agencies, including healthcare-related agencies, have also begun actively engaging in efforts to support AI development. The Centers for Medicare and Medicaid Services (CMS), citing the Executive Order, announced an AI Health Outcomes Challenge that will include a financial award to selected participants.12 CMS has selected participating organizations that span a number of industry sectors, including large consulting firms, academic medical centers, universities, health systems, large and small technology companies, and life sciences companies.13 In addition, a recent report on roundtable discussions co-hosted by the Office of the Chief Technology Officer of the Department of Health and Human Services and the Center for Open Data Enterprise (the Code Report) has identified a number of recommendations for Federal investment within its own infrastructure to support R&D efforts within and outside the Federal Government.14

2. OPEN GOVERNMENT DATA:

Enhance access to high-quality and fully traceable Federal data, models, and computing resources to increase the value of such resources for AI R&D, while maintaining safety, security, privacy and confidentiality protections consistent with applicable laws and policies.15

This objective should resonate with those developers who believe the Federal Government holds valuable data for purposes of AI R&D. The Code Report has already identified potentially valuable healthcare-related data within the Federal Government (and elsewhere) and presented a series of recommendations consistent with the Executive Order objectives. In addition, the AI R&D Plan calls for the sharing of public data as well.

3. REDUCE BARRIERS:

Reduce barriers to the use of AI technologies to promote their innovative application while protecting American technology, economic and national security, civil liberties, privacy, and values.16

Reducing barriers to use of AI technologies is an objective that implicates the existing regulatory landscape, as well as the potential regulatory landscape for AI technologies. Clearly, this objective is a call for agencies and departments to carefully balance the impact of regulations on development and deployment against what can only be described as an amorphous set of values. It remains to be seen whether we will see more definition here, although it should be noted that recent legislative efforts and regulations are reflecting certain values. For example, pending legislation in the State of Washington would require facial recognition services to be susceptible to independent tests for accuracy and unfair performance differences across distinct subpopulations, which can be defined by race, skin tone, ethnicity and other factors.17 The law would also require meaningful human review of all facial recognition services that are used to make decisions that produce legal effects on consumers or similarly significant effects on consumers.18

4. TECHNICAL STANDARDS: Ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.19

This objective includes quite a bit, and seems to imply a significant role for the Federal Government in setting the objectives for technical standards for AI. In the summer of 2019, the National Institute of Standards and Technology (NIST) of the US Department of Commerce released a plan for Federal engagement in developing technical standards for AI in response to the Executive Order (the NIST Plan).20 The NIST Plan also clearly articulates the Federal Government's perspective on how standards should be set in the US, including a recognition of the impact of other governments' approaches:

The standards development approaches followed in the United States rely largely on the private sector to develop voluntary consensus standards, with Federal agencies contributing to and using these standards. Typically, the Federal role includes contributing agency requirements to standards projects, providing technical expertise to standards development, incorporating voluntary standards into policies and regulations, and citing standards in agency procurements. This use of voluntary consensus standards that are open to contributions from multiple parties, especially the private sector, is consistent with the US market-driven economy and has been endorsed in Federal statute and policy. Some governments play a more centrally managed role in standards development-related activities, and they use standards to support domestic industrial and innovation policy, sometimes at the expense of a competitive, open marketplace. This merits special attention to ensure that US standards-related priorities and interests, including those related to advancing reliable, robust, and trustworthy AI systems, are not impeded.21

The development of industry standards is already happening, evidenced, for example, by the publication of AI-related standards, including in healthcare, by the Consumer Technology Association.22 Another interesting aspect of this objective is to ensure that the standards reflect Federal priorities related to public trust and confidence in AI systems. An exploration of the issue of public trust is well beyond the scope of this short article, but even the most casual observer of this industry will note the very real lack of confidence in AI systems and fear associated with how they are being or may, in the future, be deployed.23

5. NEXT GENERATION RESEARCHERS: Train the next generation of American AI researchers and users through apprenticeships; skills programs; and education in science, technology, engineering, and mathematics (STEM), with an emphasis on computer science, to ensure that American workers, including Federal workers, are capable of taking full advantage of the opportunities of AI.24

The need for education related to the advances in technology is obvious, and is reflected in both the Code Report and the AI R&D Plan as well. It will be interesting to see how the Federal Government achieves this objective, particularly given the many cross-disciplinary applications available. Already, some are reconsidering medical education in light of the advancement of AI systems.25

6. ACTION PLAN: Develop and implement an action plan, in accordance with the National Security Presidential Memorandum of February 11, 2019 (Protecting the United States Advantage in Artificial Intelligence and Related Critical Technologies) (the NSPM), to protect the advantage of the United States in AI and technology critical to United States economic and national security interests against strategic competitors and foreign adversaries.26

At the same time the White House issued the Executive Order, the Department of Defense launched its AI strategy. This subject is beyond the scope of these articles.

The guiding principles articulated in the Executive Order are, in some instances, little more than restatements of aspects of the objectives. Given the general nature of the objectives, this is not surprising. Nonetheless, some of the guiding principles highlight critical issues.

1. COLLABORATION: The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.27

Collaboration, as we have seen, is a theme that permeates many of the strategic objectives. Collaboration across industry sectors and government can be a challenge, but public-private partnerships have a long history in the United States and elsewhere.

2. DEVELOP TECHNICAL STANDARDS AND REDUCE BARRIERS: The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today's industries.28

Developing and deploying new technology within sensitive sectors, such as healthcare, requires balancing issues of safety against overly burdensome regulation. The Food and Drug Administration (FDA) has been wrestling with this challenge for some time with respect to the treatment of clinical decision support tools covered in the 21st Century Cures Act, as well as digital health more broadly. Recently, the FDA published a discussion paper that offers suggested approaches to the FDA clearance process designed to ensure efficacy while streamlining the review process.29

3. WORKFORCE: The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today's economy and jobs of the future.30

This guiding principle reads more like an objective, and is very closely aligned with the fifth objective of the Executive Order. As noted already, we are seeing the need for cross-disciplinary training in areas where AI systems are likely to have application, and furthering the preparation of our workforce for these systems will be critical.

4. TRUST: The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.31

The need for trust and confidence in AI systems, if we are to take full advantage of the benefits they promise, is universally understood. This is a subject that will be explored in other articles within this series.

5. INTERNATIONALIZATION: The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.32

This guiding principle reflects many longstanding US policy goals of opening markets for US industry participants while protecting their valuable intellectual property. In addition, the protection of vital US industries from foreign ownership or control has been of interest to the US government for many years, and, as noted above, the tools at the government's disposal to protect this interest have been strengthened.

Even taken together, the objectives and guiding principles set forth in the Executive Order provide only a general sense of focus and direction, but it would have been surprising had the Executive Order been more specific. The goals of the Federal Government are broad, cut across multiple government agencies and functions, include the collaboration of industry and foreign interests, and address the government as both regulator and participant in the development of the AI industry. Since the issuance of the Executive Order, Federal agencies have been moving forward and are beginning the process of addressing its goals. Greater specificity is coming.

Regardless, a few themes can certainly be pulled from the Executive Order. First, it is clear that this administration views the Federal Government as an active participant in the development of the US AI industry. While not without some downside risk, this generally bodes well for the industry in terms of investment, workforce training, access to data and other Federal resources and, potentially, a convener of resources.

Second, this administration recognizes the importance of international collaboration, but is also acutely aware of potential dangers and risk. The extent to and ways in which this and future administrations balance the risk and reward of international collaboration in AI is yet to be defined. Third, standards need to be established. This is perhaps the most obvious of the objectives set forth, but it is also the one most fraught. The link between trust and standards, and the degree and type of regulation applied to the AI industry, are all yet to be developed. Here, every agency and organization must contemplate the market, public perception, effective testing criteria, and appropriate role for government and self-regulation.

The final theme, and perhaps the key takeaway, is that we are not there yet. The Executive Order is a call to action for the executive departments and agencies to start the process of coalescing around a central set of general objectives. We are far from seeing what this might look like, although many agencies have been addressing AI issues for years. A key development, and a key next step, will be the finalization of the Draft Memo and the development of executive department and agency work plans.

1 85 Fed. Reg. 1731, 1825 (Jan. 13, 2020). The full text of the Draft Memo is available on the White House website at https://www.whitehouse.gov/wp-content/uploads/2020/01/DraftOMB-Memo-on-R....

2 Exec. Order No. 13,859, Maintaining American Leadership in Artificial Intelligence, 84 Fed. Reg. 3967 (Feb. 11, 2019), available at https://www.whitehouse.gov/presidential-actions/executive-ordermaintaini... (hereinafter Exec. Order).

3 Id. § 6(a).

4 Id. § 6(b).

5 85 Fed. Reg. at 1825.

6 Exec. Order § 6(c).

7 Draft Memo, p. 10.

8 Exec. Order § 1.

9 Exec. Order § 2(a).

10 David J. Levine et al., Final Rules Issued on Reviews of Foreign Investments in the United States – CFIUS (Jan. 23, 2020), available at https://www.mwe.com/insights/final-rules-issued-on-reviews-offoreign-inv....

11 NAT'L SCI. & TECH. COUNCIL, SELECT COMM. ON ARTIFICIAL INTELLIGENCE, THE NATIONAL ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT STRATEGIC PLAN: 2019 UPDATE, available at https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf.

12 CMS Newsroom, CMS Launches Artificial Intelligence Health Outcomes Challenge (Mar. 2019), available at https://www.cms.gov/newsroom/press-releases/cms-launchesartificial-intel....

13 AI Health Outcomes Challenge, available at https://ai.cms.gov/.

14 THE CENTER FOR OPEN DATA ENTERPRISE AND THE OFFICE OF THE CHIEF TECHNOLOGY OFFICER AT THE U.S. DEP'T OF HEALTH & HUM. SERVS., SHARING AND UTILIZING HEALTH DATA FOR AI APPLICATIONS: ROUNDTABLE REPORTS (2019), p. 15, available at https://www.hhs.gov/sites/default/files/sharing-and-utilizing-healthdata... (hereinafter Code Report).

15 Exec. Order § 2(b).

16 Exec. Order § 2(c).

17 Washington Privacy Act, S.B. 6281, § 17(1) (2020) (hereinafter Washington Privacy Act).

18 Id. § 17(7). The notion of human intervention between an AI system and an individual is not limited to this legislation. The notion is widely discussed as a core ethical concern related to AI systems, and has been adopted in some corporate policies (see, e.g., https://www.bosch.com/stories/ethical-guidelines-for-artificialintellige...).

19 Exec. Order § 2(d).

20 NIST, U.S. LEADERSHIP IN AI: A PLAN FOR FEDERAL ENGAGEMENT IN DEVELOPING TECHNICAL STANDARDS AND RELATED TOOLS (Aug. 2019), available at https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf (hereinafter NIST Plan).

21 NIST Plan, p. 9.

22 See https://shop.cta.tech/collections/standards/artificialintelligence.

23 The issues surrounding trust in AI systems will be explored in a future article in this series.

24 Exec. Order § 2(e).

25 See, e.g., Steven A. Wartman & C. Donald Combs, Reimagining Medical Education in the Age of AI, 21 AMA J. ETHICS 146 (Feb. 2019), available at https://journalofethics.amaassn.org/sites/journalofethics.ama-assn.org/f....

26 Exec. Order § 2(f).

27 Id. § 1(a).

28 Exec. Order § 1(b).

29 FOOD & DRUG ADMIN., PROPOSED REGULATORY FRAMEWORK FOR MODIFICATIONS TO ARTIFICIAL INTELLIGENCE/MACHINE LEARNING (AI/ML)-BASED SOFTWARE AS A MEDICAL DEVICE (SaMD), available at fda.gov/files/medical%20devices/published/US-FDA-ArtificialIntelligence-and-Machine-Learning-Discussion-Paper.pdf.

30 Exec. Order § 1(c).

31 Id. § 1(d).

32 Exec. Order § 1(e).


AI technology will soon replace error-prone humans all over the world but here’s why it could set us all free – The Independent

It has often been quoted, albeit humorously, that the ideal of medicine is the elimination of the physician. The emergence and encroachment of artificial intelligence (AI) in the field of medicine, however, lends an inconvenient truth to that witticism. Over the span of a professional life, a pathologist may review 100,000 specimens, a radiologist even more; AI can perform this undertaking in days rather than decades.

Visualise your last trip to an NHS hospital. The experience was either one of romanticism or repudiation: the hustle and bustle in the corridors, or the agonising waiting time in A&E; the empathic human touch, or the dissatisfaction of a rushed consultation; a seamless referral, or delays and cancellations.

Contrary to this, our experience of hospitals in the future will be slick and uniform, the human touch all but erased and cleansed in favour of complete and utter digitalisation. Envisage an almost automated hospital: cleaning droids, self-portered beds, medical robotics. "Fiction of today is the fact of tomorrow" doesn't quite apply in this situation, since all of the above-mentioned AI currently exists in some form or other.


But then, what becomes of the antiquated human doctor in our future world? Well, they can take consolation: their unemployment would be part of a global trend, the creation displacing the creator, mechanisation of the workforce leading to mass unemployment. This analogy of our friend the doctor speaks volumes; medicine is cherished for championing human empathy, and if doctors aren't safe, nobody is. The solution: socialism.

Open revolt against machinery seems a novel concept set in some futuristic dystopian land, though the reality can be found in history: the Luddites of Nottinghamshire, a radical faction of skilled textile workers who protected their employment through machine destruction and riots during the industrial revolution of the early 19th century. The now-satirised term "Luddite" may be more appropriately directed at your father's fumbled attempt at unlocking his iPhone than at a militia.

What lessons are to be learnt from the Luddites? Many. Firstly, the much-fictionalised fight for dominance between man and machine is just that: fictionalised. The real fight is within mankind. The Luddites' fight was always against the manufacturer, not the machine; machine destruction simply acted as the receptacle of dissidence. Secondly, government feeling towards the Luddites is exemplified by the 12,000 British soldiers deployed against them, far exceeding the personnel deployed against Napoleon's forces in the Iberian Peninsula in the same year.

Though it provides clues, the future struggle against AI and its wielders will be tangibly different from the Luddite struggle of the 19th century. This time it's personal; it's about soul. Our higher cognitive faculties will be replaced: the diagnostic expertise of the doctor, the decision-making ability of the manager and (if we're lucky) political matters too.

Boston Dynamics describes itself as 'building dynamic robots and software for human simulation'. It has created robots for DARPA, the US military's research agency

Google has been using similar technology to build self-driving cars, and has been pushing for legislation to allow them on the roads

The DARPA Urban Challenge, set up by the US Department of Defense, challenges driverless cars to navigate a 60-mile course in an urban environment that simulates guerrilla warfare

Deep Blue, a computer created by IBM, won a match against world champion Garry Kasparov in 1997. The computer could evaluate 200 million positions per second, and Kasparov accused it of cheating after the match was finished

Another computer created by IBM, Watson, beat two champions of US TV series Jeopardy at their own game in 2011

Apple's virtual assistant for iPhone, Siri, uses artificial intelligence technology to anticipate users' needs and give cheeky reactions

Xbox's Kinect uses artificial intelligence to predict where players are likely to go, and track their movement more accurately

The monopolising of AI will lead to mass unemployment and mass welfare, reverberating globally. AI efficiency and efficacy will soon replace the error-prone human. It must be the case that AI is to be socialised and the means of production, the AI itself, redistributed: in other words, brought under public ownership. Perhaps co-operative groups of experienced individuals will arise to undertake managerial functions in their previous, now automated, workplaces. Whatever the structure, such an undertaking will require the full intervention of the state, on a moral basis not realised in the Luddite struggle.

Envisaging an economic system of nationalised AI machinery performing laborious as well as lively tasks shan't be feared. This economic model, one of "abundance", provides a platform for the fullest creative expression and artistic flair of mankind. Humans can pursue leisurely passions. Imagine the doctor dedicating superfluous amounts of time to the golf course, the manager pursuing artistic talents. And what of the politician? Well, that's anyone's guess.

An abundance economy is one of sustenance rather than subsistence, initiating an old form of socialism fit for a futuristic age. AI will transform the labour market by destroying it, along with the feudalistic structure inherent to it.

Thought-provoking questions do arise: what is to become of human aspiration? What exactly will it mean to be human in this world of AI?

Ironically, perhaps it will be the machine revolution that gives us the resolution to age-old problems in society.

Visit link:

AI technology will soon replace error-prone humans all over the world but here's why it could set us all free - The Independent

What is Artificial Intelligence (AI)?

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing (NLP), speech recognition and machine vision.

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
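The three skills above can be illustrated with a deliberately tiny sketch (the class, data, and cutoff "rule" here are all invented for illustration; no real AI system is this simple): learning turns data into a rule, reasoning applies the rule, and self-correction adjusts it when a prediction proves wrong.

```python
class ToyLearner:
    """Toy model with the three skills: learn, reason, self-correct."""

    def __init__(self):
        self.threshold = 0.0  # the learned "rule": a simple numeric cutoff

    def learn(self, examples):
        """Learning: turn labelled data (value, is_positive) into a rule."""
        positives = [x for x, label in examples if label]
        negatives = [x for x, label in examples if not label]
        self.threshold = (max(negatives) + min(positives)) / 2

    def reason(self, x):
        """Reasoning: apply the learned rule to a new input."""
        return x > self.threshold

    def self_correct(self, x, actual):
        """Self-correction: nudge the rule toward a misclassified example."""
        if self.reason(x) != actual:
            self.threshold = (self.threshold + x) / 2


learner = ToyLearner()
learner.learn([(1.0, False), (2.0, False), (8.0, True), (9.0, True)])
print(learner.reason(7.0))  # True: 7.0 is above the learned cutoff of 5.0
```

Real systems replace the cutoff with statistical models and the nudge with gradient-based updates, but the division of labour among the three processes is the same.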

Artificial neural networks and deep learning artificial intelligence technologies are quickly evolving, primarily because AI processes large amounts of data much faster and makes predictions more accurately than humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI applications that use machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.

AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete a specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use weak AI.

Strong AI, also known as artificial general intelligence (AGI), describes programming that can replicate the cognitive abilities of the human brain. When presented with an unfamiliar task, a strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a solution autonomously. In theory, a strong AI program should be able to pass both a Turing test and the Chinese room test.

Some industry experts believe the term artificial intelligence is too closely linked to popular culture, and this has caused the general public to have improbable expectations about how AI will change the workplace and life in general. Some researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that most implementations of AI will be weak and simply improve products and services.

The concept of the technological singularity -- a future ruled by an artificial superintelligence that far surpasses the human brain's ability to understand it or how it is shaping our reality -- remains within the realm of science fiction.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.
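A minimal, hypothetical illustration of that point: a model that merely memorises the most frequent outcome per group will faithfully reproduce whatever skew the chosen training data contains (the groups, outcomes, and counts below are invented).

```python
from collections import Counter

def train_majority_model(labelled_rows):
    """'Learn' the most frequent outcome per group; the model can only
    know what the chosen training data showed it."""
    outcomes = {}
    for group, outcome in labelled_rows:
        outcomes.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

# A skewed sample: group "b" was mostly recorded with negative outcomes,
# so the trained model rejects group "b" regardless of individual merit.
training = ([("a", "approve")] * 9 + [("a", "reject")] * 1
            + [("b", "approve")] * 2 + [("b", "reject")] * 8)
model = train_majority_model(training)
print(model)  # {'a': 'approve', 'b': 'reject'}
```

Real machine learning bias is subtler than a frequency table, but the mechanism is the same: the model's behaviour is a mirror of the data a human chose to train it on.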

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and generative adversarial network (GAN) applications.

Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings or providing access to artificial intelligence as a service (AIaaS) platforms. AIaaS allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment.

Popular AI cloud offerings include the following:

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained in a 2016 article that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist. The categories are as follows:

The terms AI and cognitive computing are sometimes used interchangeably, but, generally speaking, the label AI is used in reference to machines that replace human intelligence by simulating how we sense, learn, process and react to information in the environment.

The label cognitive computing is used in reference to products and services that mimic and augment human thought processes.

AI is incorporated into a variety of different types of technology. Here are six examples:

The concept of inanimate objects endowed with intelligence has been around since ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like servants out of gold. Engineers in ancient Egypt built statues of gods animated by priests. Throughout the centuries, thinkers from Aristotle to the 13th-century Spanish theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic of their times to describe human thought processes as symbols, laying the foundation for AI concepts such as general knowledge representation.

The late 19th and first half of the 20th centuries brought forth the foundational work that would give rise to the modern computer. In 1836, Cambridge University mathematician Charles Babbage and Augusta Ada Byron, Countess of Lovelace, invented the first design for a programmable machine. In the 1940s, Princeton mathematician John Von Neumann conceived the architecture for the stored-program computer -- the idea that a computer's program and the data it processes can be kept in the computer's memory. And Warren McCulloch and Walter Pitts laid the foundation for neural networks.

With the advent of modern computers, scientists could test their ideas about machine intelligence. One method for determining whether a computer has intelligence was devised by the British mathematician and World War II code-breaker Alan Turing in 1950. The Turing Test focused on a computer's ability to fool interrogators into believing its responses to their questions were made by a human being.

The modern field of artificial intelligence is widely cited as starting in 1956 during a summer conference at Dartmouth College. Sponsored by the Defense Advanced Research Projects Agency (DARPA), the conference was attended by 10 luminaries in the field, including AI pioneers Marvin Minsky, Oliver Selfridge and John McCarthy, who is credited with coining the term artificial intelligence. Also in attendance were Allen Newell, a computer scientist, and Herbert A. Simon, an economist, political scientist and cognitive psychologist, who presented their groundbreaking Logic Theorist, a computer program capable of proving certain mathematical theorems and referred to as the first AI program.

In the wake of the Dartmouth College conference, leaders in the fledgling field of AI predicted that a man-made intelligence equivalent to the human brain was around the corner, attracting major government and industry support. Indeed, nearly 20 years of well-funded basic research generated significant advances in AI: For example, in the late 1950s, Newell and Simon published the General Problem Solver (GPS) algorithm, which fell short of solving complex problems but laid the foundations for developing more sophisticated cognitive architectures; McCarthy developed Lisp, a language for AI programming that is still used today. In the mid-1960s MIT Professor Joseph Weizenbaum developed ELIZA, an early natural language processing program that laid the foundation for today's chatbots.

But the achievement of artificial general intelligence proved elusive, not imminent, hampered by limitations in computer processing and memory and by the complexity of the problem. Government and corporations backed away from their support of AI research, leading to a fallow period lasting from 1974 to 1980 and known as the first "AI Winter." In the 1980s, research on deep learning techniques and industry's adoption of Edward Feigenbaum's expert systems sparked a new wave of AI enthusiasm, only to be followed by another collapse of government funding and industry support. The second AI winter lasted until the mid-1990s.

Increases in computational power and an explosion of data sparked an AI renaissance in the late 1990s that has continued to present times. The latest focus on AI has given rise to breakthroughs in natural language processing, computer vision, robotics, machine learning, deep learning and more. Moreover, AI is becoming ever more tangible, powering cars, diagnosing disease and cementing its role in popular culture. In 1997, IBM's Deep Blue defeated Russian chess grandmaster Garry Kasparov, becoming the first computer program to beat a world chess champion. Fourteen years later, IBM's Watson captivated the public when it defeated two former champions on the game show Jeopardy!. More recently, the historic defeat of 18-time World Go champion Lee Sedol by Google DeepMind's AlphaGo stunned the Go community and marked a major milestone in the development of intelligent machines.

Artificial intelligence has made its way into a wide variety of markets. Here are eight examples.

AI and machine learning are at the top of the buzzword list security vendors use today to differentiate their offerings. Those terms also represent truly viable technologies. Artificial intelligence and machine learning in cybersecurity products are adding real value for security teams looking for ways to identify attacks, malware and other threats.

Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations.
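The anomaly-detection idea can be sketched in a few lines. This is a toy z-score check on event volumes, not how any particular SIEM product works, and the cutoff and counts below are arbitrary assumptions.

```python
import statistics

def flag_anomalies(baseline_counts, current_counts, z_cutoff=3.0):
    """Flag indices whose event count deviates from the baseline mean
    by more than z_cutoff standard deviations."""
    mean = statistics.mean(baseline_counts)
    stdev = statistics.stdev(baseline_counts)
    return [i for i, count in enumerate(current_counts)
            if stdev and abs(count - mean) / stdev > z_cutoff]

baseline = [100, 98, 102, 101, 99, 100]   # normal hourly event counts
current = [101, 99, 250, 100]             # hour 2 shows a sudden spike
print(flag_anomalies(baseline, current))  # [2]
```

Production systems learn far richer baselines (per user, per host, per time of day), but the principle of modelling "normal" and alerting on deviation is the same.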

As a result, AI security technology both dramatically lowers the number of false positives and gives organizations more time to counteract real threats before damage is done. The maturing technology is playing a big role in helping organizations fight off cyberattacks.

Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, United States Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.

The European Union's General Data Protection Regulation (GDPR) puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In October 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered.

Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of technologies that companies use for different ends, and partly because regulations can come at the cost of AI progress and development. The rapid evolution of AI technologies is another obstacle to forming meaningful regulation of AI. Technology breakthroughs and novel applications can make existing laws instantly obsolete. For example, existing laws regulating the privacy of conversations and recorded conversations do not cover the challenge posed by voice assistants like Amazon's Alexa and Apple's Siri that gather but do not distribute conversation -- except to the companies' technology teams which use it to improve machine learning algorithms. And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals from using the technology with malicious intent.

Calling all AI experts – Technical.ly Philly

Next month, Comcast will host PHLAI, a technical conference for engineers and professionals interested in and working with machine learning and artificial intelligence.

We'll bring together local practitioners in A.I. and machine learning to discuss past experiences and common technological goals aimed at making people's lives better. Attendees will learn about new ideas and best practices from experts in the field and hear about the latest developments in machine learning and artificial intelligence.

As an example, I was fortunate to be part of the team here at Comcast that used A.I. to launch our Xfinity X1 voice remote. That device has changed the way people watch television, and to date we've deployed more than 14 million of them in homes all across our service area, from San Francisco to Philadelphia. And it keeps getting smarter, faster and more accurate every day, all thanks to machine learning.

The PHLAI conference will take place on Tuesday, Aug. 15 at Convene Cira Centre in Philadelphia.

Featured speakers will include:

And as part of the event, we also want to hear from those who are solving their own problems with A.I.

Practitioners can share their stories and submit proposals until July 14.

Attendees can register here (it's free). We hope to see you there!

Jeanine Heck serves as Executive Director in the Technology and Product organization of Comcast Cable. In this role, Heck brings artificial intelligence into XFINITY products. She was the founding product manager for the X1 voice remote, has led the launch of a TV search engine, and managed the company's first TV recommendations engine.

Microsoft’s new AI sucks at coding as much as the typical Stack Overflow user – TNW

Microsoft has made some impressive leaps forward in the world of artificial intelligence (AI), but this might be its biggest yet. Microsoft Research, in conjunction with Cambridge University, has developed an AI that's able to solve programming problems by reusing lines of code cribbed from other programs.

The dream of one day creating an artificial intelligence with the ability to write computer programs has long been a goal of computer scientists. And now, we're one step closer to its actualization.

The AI, which is called DeepCoder, takes an input and an expected output, and then fills in the gaps using pre-created code that it believes will produce the desired output. This approach is called program synthesis.

In short, this is the digital equivalent of searching for your problem on Stack Overflow and then copying and pasting some code you think might work.

But obviously, this is a lot more sophisticated than that. DeepCoder, as pointed out by the New Scientist, is vastly more efficient than a human. It's able to scour and combine code with the speed of a computer, and it uses machine learning to sort the fragments by their probable usefulness.

At the moment, DeepCoder is able to solve problems that take around five lines of code. It's certainly early days, but it's still undeniably promising. Full details about the system, and its strengths and limitations, can be found in the research paper Microsoft published.
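The search-based core of program synthesis can be sketched as follows. This toy enumerator, with an invented fragment library, simply tries combinations of pre-written fragments until one reproduces the input-output example; DeepCoder's contribution is a learned model that predicts which fragments are likely useful so the search tries them first.

```python
from itertools import product

# The "pre-created code": a small library of reusable list fragments.
FRAGMENTS = {
    "reverse": lambda xs: xs[::-1],
    "double": lambda xs: [x * 2 for x in xs],
    "head3": lambda xs: xs[:3],
    "sort": lambda xs: sorted(xs),
}

def synthesize(example_in, example_out, max_len=2):
    """Return the first pipeline of fragments consistent with the
    example, or None if no pipeline up to max_len fragments works."""
    for length in range(1, max_len + 1):
        for names in product(FRAGMENTS, repeat=length):
            value = example_in
            for name in names:
                value = FRAGMENTS[name](value)
            if value == example_out:
                return names
    return None

# Ask for a program mapping [3, 1, 2] to [2, 4, 6]: the search finds a
# two-step pipeline combining doubling and sorting.
print(synthesize([3, 1, 2], [2, 4, 6]))
```

With only four fragments the enumeration is trivial; the combinatorial explosion at realistic library sizes is exactly why ranking fragments by predicted usefulness matters.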

And at least DeepCoder won't ask you to plz send teh codes.

IBM’s AI Will Make Your Hospital Stay More Comfortable – Futurism

In Brief: IBM's Watson, probably the most famous AI system in the world today, is making its way into hospitals to assist with menial tasks, thereby freeing up medical personnel. Watson has already made a big impact on the medical industry, as well as many others, and the AI shows no signs of slowing down.

Dr. Watson, Coming Soon

IBM's Watson has done everything from beating human champions on the quiz show Jeopardy! to diagnosing undetected leukemia in a patient, saving her life. Now, the artificial intelligence (AI) system is poised to make life in a hospital a lot easier for patients and staff alike.

Right now, some medical staff spend almost 10 percent of their working hours answering basic patient questions about physician credentials, lunch, and visiting hours, Bret Greenstein, the vice president of Watson's Internet of Things (IoT) platform, tells CNET.

These staff members also have to tend to very basic needs that don't require medical expertise, such as changing the temperature in rooms or pulling the blinds. If assisted by some kind of AI-powered device, these workers could spend their time more effectively and focus on patient care.

That's where Watson comes in. Philadelphia's Thomas Jefferson University Hospitals have teamed up with IBM and audio company Harman to develop smart speakers for a hospital setting. Once activated by the voice command "Watson," these speakers can respond to a dozen commands, including requests to adjust the blinds, thermostat, and lights, or to play calming sounds.

Watson is no stranger to the healthcare industry. In addition to providing a correct diagnosis for the woman mentioned above, Watson was able to recommend treatment plans at least as well as human oncologists in 99 percent of the cases it analyzed, and it even provided options missed by doctors in 30 percent of those cases.

Watson will soon be working in many dermatologists' offices, too, and while its integration into the medical field hasn't been free of problems, it is still the AI with the broadest access to patient data, the key to better diagnoses and greater predictive power.

Watson has had a notable impact on various other industries, as well.

OnStar Go uses Watson, and it will be making driving simpler in more than 2 million 4G LTE-connected GM vehicles by the end of this year. Watson is also no stranger to retail, having been incorporated into operations at Macy's, Lowe's, Best Buy, and Nestlé cafés in Japan, and the AI is even helping to bring a real-life Robocop to the streets of Dubai.

Watson is branching out into creative work, too, which was previously assumed to be off-limits to AIs. The system successfully edited an entire magazine on its own and has also created a movie trailer.

What the AI will do next is anyone's guess, but it's safe to say that Watson probably has a more exciting and ambitious five-year plan than most humans.

The Role of Artificial Intelligence (AI) in the Global Agriculture Market 2021 – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Global Artificial Intelligence (AI) Market in Agriculture Industry Market 2021-2025" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence (AI) market in the agriculture industry is poised to grow by $458.68 million during 2021-2025, progressing at a CAGR of over 23% during the forecast period.

The market is driven by maximizing profits in farm operations, higher adoption of robots in agriculture, and the development of deep-learning technology. This study also identifies the advances in AI technology as another prime reason driving industry growth during the next few years.

The artificial intelligence (AI) market in agriculture industry analysis includes the application segment and geographic landscape.

The report on artificial intelligence (AI) market in agriculture industry covers the following areas:

The robust vendor analysis is designed to help clients improve their market position, and in line with this, this report provides a detailed analysis of several leading artificial intelligence (AI) market in agriculture industry vendors that include Ag Leader Technology, aWhere Inc., Corteva Inc., Deere & Co., DTN LLC, GAMAYA, International Business Machines Corp., Microsoft Corp., Raven Industries Inc., and Trimble Inc. Also, the artificial intelligence (AI) market in agriculture industry analysis report includes information on upcoming trends and challenges that will influence market growth. This is to help companies strategize and leverage all forthcoming growth opportunities.

Key Topics Covered:

Executive Summary

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation by Application

Customer landscape

Geographic Landscape

Vendor Landscape

Vendor Analysis

For more information about this report visit https://www.researchandmarkets.com/r/58dom9

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

From page building to apps to game design to AI, this programming training package covers it all – Boing Boing

Learning to code can be intimidating. It always takes some time and attention to develop any new skill, but for one with as many approaches as programming, it can be particularly nerve-wracking, especially when you've never dipped into those murky waters before.

Even if you fall into that absolute beginner category, the package of training in The 2020 Comprehensive Programming Collection offers up some solid primers on the basic languages, tools, and methods for building websites, creating apps and basically becoming a one-person digital content machine.

Ask a hiring manager the No. 1 skill they want to see on the resume of an IT job applicant, and it's knowledge of JavaScript. So this collection of nine courses starts with The Complete Beginner's JavaScript Course, introducing you to the basics of this core programming language.

While JavaScript serves as the backbone of web page building, it's also instrumental in creating apps. That training will serve you well in the four bundle courses centered around creating your own working apps. iOS App Development for Beginners and Intro to Java for Android Development will get you familiar with Android Studio and Swift, the two primary platforms for tailoring an app specifically to Android and iOS users.

You'll expand that foundation with Discover React for Web Applications, as you use the React JavaScript library to build cool interactive user experiences, and Develop an AR App for the Retail Industry, where you actually build a working augmented reality app that shows you how virtual furniture will look in your very real-world settings.

Since game development is bound to capture nearly any new programmer's interest, Mobile Game Development for Beginners gets into coding on the Unity game engine as you start building your own mobile games for Android and iOS devices. Then Build a Micro-Strategy Game explores Unity further as you create a game about building and managing a colony on Mars.

Finally, this training concludes with instruction in a pair of avenue-expanding new fields: machine learning and artificial intelligence. In Machine Learning for Beginners with Tensorflow, you'll learn with a practical hands-on project, building a program where computers run through a series of tasks, spot patterns, then react according to your instructions.

Then, Convolutional Neural Networks (CNNs) for Image Classification will have you crafting CNNs of your own, the image recognition technology behind self-driving cars, facial recognition, fingerprint matching and more.
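At the heart of the CNNs mentioned above is the 2D convolution: sliding a small filter over an image so it responds to local patterns such as edges. A bare-bones, pure-Python sketch of that single operation (real CNNs stack many learned filters and run on frameworks like TensorFlow):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most deep
    learning libraries) of a 2D list of numbers by a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge filter: it responds where pixel values change from
# left to right, exactly at the 0-to-1 boundary in this tiny image.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 1],
               [-1, 1]]
print(conv2d(image, edge_kernel))  # [[0, 2, 0], [0, 2, 0]]
```

In a trained CNN the kernel weights are not hand-picked like this edge filter; they are learned from labelled images, which is what lets the same mechanism recognize faces or fingerprints.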

A $1,650 package of training, you can get all nine courses now for only $29.99, less than $4 per course.

Can Artificial Intelligence Help Students Work Better Together? According to Research, the Answer is Yes. – WPI News

Once the AI Partners are integrated in these classrooms, Whitehill and his team will be able to collect data on how students interact with them, and then iteratively make them more intelligent and effective. Initially, the AI Partner might be controlled by a human teacher in a back room (a Wizard-of-Oz-style interaction), but over time it can learn from its human controller what to do and when, thereby becoming more autonomous. Whitehill and his team anticipate that the particular form the Partner takes is also likely to be important.

"Students might find an embodied robot creepy, but they might like interacting with an animated avatar on a touchscreen," he says.

This project represents a shift in how researchers envision AI in the classroom. While earlier work in this field sought to fully automate the teaching process, which Whitehill considers infeasible, this project is about human-AI teaming and the complementary abilities that AI and teachers possess. AI Partners can help magnify teachers' existing strengths by increasing the number of students in the classroom who receive the real-time feedback they need for optimal learning.

Whitehill also says that this research will be greatly informative even during the COVID-19 pandemic, when many school districts across the country are participating in remote learning. In fact, he says testing agent-student interactions over platforms like Zoom has certain advantages over in-person interactions.

"With Zoom, each student and teacher in the classroom is cleanly separated from each other, and all their audiovisual inputs are channeled through a common software interface. This makes it much easier to analyze their speech, gestures, language, and interactions with each other," Whitehill says. "In contrast, in normal, in-person classrooms, the interactions are much messier, since students often sit in all kinds of different positions, might be touching their faces, and work in a noisy environment, which makes it more challenging for the Partner to observe and analyze."

By the end of this research, Whitehill says he hopes to find practical teaching and coaching strategies that AI Partners can execute that work well with students. "It's not clear at all that the way humans teach would work well for a computer, robot, or avatar," he says.

While the computational challenges of the project (signal processing in extremely noisy and cluttered settings, real-time control in an uncertain environment, and human-computer interaction for a novel setting) are formidable, Whitehill says the potential rewards make it worth the effort.

"The exciting thing about this project is that we get to completely rethink the role of AI in the classroom," he says. "My hope is that, through next-generation educational AI, we will be able to stimulate deeper critical thinking and collaboration among students to help them learn better and achieve more."

Jessica Messier


Google’s DeepMind Creates Dataset With 300,000 YouTube Clips to … – The Daily Dot

Even the most advanced artificial intelligence algorithms in the world have trouble recognizing the actions of Homer Simpson.

DeepMind, the Google-owned artificial intelligence lab best known for defeating the world's greatest Go players, created a new dataset of YouTube clips to help AI find and learn patterns so it can better recognize human movement. The massive sample set consists of 300,000 video clips and 400 different actions.

"AI systems are now very good at recognizing objects in images, but still have trouble making sense of videos," a DeepMind spokesperson told IEEE Spectrum. "One of the main reasons for this is that the research community has so far lacked a large, high-quality video dataset."

According to IEEE Spectrum, early testing of the Kinetics Human Action Video Dataset showed mixed results. The deep learning algorithm was up to 80 percent accurate in classifying actions like "playing tennis," "crawling baby," "cutting watermelon," and "bowling." But its accuracy dropped to 20 percent or less when attempting to identify some of the activities and habits associated with Homer: drinking beer, eating doughnuts, and yawning.
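The gap being described here is just per-class accuracy: a classifier can look strong in aggregate while failing badly on particular action classes. A minimal sketch of how such a per-class breakdown might be computed; the labels and predictions below are invented for illustration, not drawn from the actual Kinetics evaluation:

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Compute accuracy separately for each action class.

    y_true, y_pred: parallel lists of class labels (e.g. action names).
    Returns {class: fraction of that class's clips classified correctly}.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, guess in zip(y_true, y_pred):
        total[truth] += 1
        if guess == truth:
            correct[truth] += 1
    return {cls: correct[cls] / total[cls] for cls in total}

# Toy example: "bowling" is classified reliably, "yawning" is not,
# mirroring the kind of gap the article reports.
truth = ["bowling", "bowling", "yawning", "yawning"]
preds = ["bowling", "bowling", "yawning", "drinking beer"]
print(per_class_accuracy(truth, preds))  # {'bowling': 1.0, 'yawning': 0.5}
```

Averaging over classes rather than clips is what surfaces weak classes like these, since rare or hard actions would otherwise be drowned out by the easy ones.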

"Video understanding represents a significant challenge for the research community, and we are in the very early stages with this," a DeepMind spokesperson said in a statement. "Any real-world applications are still a really long way off, but you can see potential in areas such as medicine, for example, aiding the diagnosis of heart problems in echocardiograms."

DeepMind got some help from Amazon's Mechanical Turk, a crowdsourcing service that companies can use to enlist other humans in completing a task. In this case, the task was labeling actions in thousands of 10-second YouTube clips.

After discovering the effectiveness of its dataset, the U.K.-based company ran tests to see if it had any gender imbalance. Past tests showed that the contents of certain datasets resulted in AI that was unsuccessful at recognizing certain ethnic groups. Preliminary results showed this particular set of video clips did not present those problems. In fact, DeepMind found that no single gender dominated within 340 of 400 action classes. The actions that did not pass the test included shaving a beard, cheerleading, and playing basketball.

"We found little evidence that the resulting classifiers demonstrate bias along sensitive axes, such as across gender," researchers at DeepMind wrote in a paper.
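The balance check described above amounts to a per-class tally: count labelled clips per gender within each action class and flag the classes where one gender exceeds some share. The 75 percent threshold and the toy labels below are illustrative choices, not DeepMind's actual methodology:

```python
from collections import Counter, defaultdict

def dominated_classes(clip_labels, threshold=0.75):
    """Return action classes where a single gender accounts for more
    than `threshold` of the labelled clips.

    clip_labels: iterable of (action_class, gender) pairs.
    """
    counts = defaultdict(Counter)
    for action, gender in clip_labels:
        counts[action][gender] += 1
    return [action for action, genders in counts.items()
            if max(genders.values()) / sum(genders.values()) > threshold]

# "cheerleading" is 90% one gender in this toy sample; "bowling" is balanced.
clips = ([("cheerleading", "f")] * 9 + [("cheerleading", "m")]
         + [("bowling", "f")] * 5 + [("bowling", "m")] * 5)
print(dominated_classes(clips))  # ['cheerleading']
```

A flagged class is not automatically biased, but it tells researchers where a downstream classifier is most at risk of learning gender as a shortcut feature.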

The company will now work with outside researchers to grow its dataset and continue to develop AI so it can better recognize what is going on in videos. The research could lead to uses ranging from suggesting relevant YouTube videos to users to diagnosing heart problems.

We have reached out to DeepMind to learn more about why Homer Simpson is causing such problems.

Update June 9, 5pm: A DeepMind spokesperson clarified that the dataset didn't actually include videos of The Simpsons character, just actions he's widely associated with. D'oh! We've updated our article accordingly.

H/T IEEE Spectrum


Facebook wants you to trust AI, and it’s hiring for a Research group to get you to do just that – Thinknum Media

Beginning on August 20, 2019, Facebook ($NASDAQ:FB) began listing job openings for a new "Research" job category at the company that appears to be hiring scientists to research everything from blockchain in user experience, to augmented reality, to the way that humans think about artificial intelligence. So far, the company has listed more than 100 positions for the group, with the apex of hiring in September.

The openings are spread throughout the Facebook organization, with a particular focus on artificial intelligence. Of the 70 job titles, 22 are focused on AI, 17 on UX (user experience), and three on AR/VR.
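Counts like these can be tallied with a simple keyword pass over the listing titles. The heuristic below is a guess at how one might bucket them, not Thinknum's actual method; the category keywords are assumptions:

```python
import re
from collections import Counter

def bucket_titles(titles):
    """Assign each job title to a coarse category by keyword match."""
    def bucket(title):
        # French-language listings spell out "Intelligence Artificielle".
        if (re.search(r"\bAI\b", title)
                or "Artificial Intelligence" in title
                or "Intelligence Artificielle" in title):
            return "AI"
        if re.search(r"\bUX\b", title):
            return "UX"
        if "AR/VR" in title:
            return "AR/VR"
        return "other"
    return Counter(bucket(t) for t in titles)

print(bucket_titles([
    "Research Scientist, AI",
    "Visiting Scientist - Trust in AI (US)",
    "UX Researcher, Ads",
]))  # AI: 2, UX: 1
```

Any real tally would need a hand-checked pass as well, since titles like "Epitaxy Engineering Manager" belong to the AR/VR effort without containing an obvious keyword.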

A "Visiting Scientist - Trust in AI" listing mentions a "Trust-in-AI-Research" team that is looking for a Ph.D. in machine learning and AI to "contribute research that can be applied to Facebook product development". Other AI positions are listed in the table below.

All of the positions below are listed under the Research category (the first three were posted in French; titles translated here):

- Artificial Intelligence Researcher (University Grad)
- Visiting Researcher, Artificial Intelligence (University Grad)
- Research Engineer, Artificial Intelligence (University Grad)
- Postdoctoral Researcher, Artificial Intelligence (PhD)
- Program Manager, AI Programs
- Research Engineer, Artificial Intelligence (University Grad)
- Research Scientist (Conversational AI Group)
- Research Scientist, AI
- Research Scientist, AI (EMEA)
- Research Scientist, AI (Montreal)
- Research Scientist, AI Research
- Visiting Researcher, Artificial Intelligence (PhD)
- Visiting Scientist - AI-Infra (US)
- Visiting Scientist - Facebook AI Applied Research (EMEA)
- Visiting Scientist - Facebook AI Applied Research (US)
- Visiting Scientist - Facebook AI Research (EMEA)
- Visiting Scientist - Facebook AI Research (Montreal)
- Visiting Scientist - Facebook AI Research (US)
- Visiting Scientist - Trust in AI (US)
- Visiting Scientist, AI
- Visiting Scientist, AI (EMEA)
- Visiting Scientist, AI (Montreal)

Another curious hire is for an "Epitaxy Engineering Manager" for the new group's Augmented/Virtual Reality (AR/VR) team. Epitaxy involves the growth of crystals on a substrate, and this listing mentions advancing "LED Epitaxy Materials Systems for AR Displays," an indication that Facebook and its Oculus group are creating new display technology.

Hiring for the new Research group appears spread across several Facebook offices, including its Menlo Park headquarters, New York, Montreal, Redmond, Pittsburgh, San Francisco, Seattle, Boston, Cork, and London.

Facebook's interest in the future of AI isn't surprising, and it already has a deep foothold in Virtual Reality. Understanding how humans perceive and interact with these technologies appears to be the first step in approaching how the company will handle product releases that will use the technology.

Thinknum tracks companies using the information they post online - jobs, social and web traffic, product sales and app ratings - and creates data sets that measure factors like hiring, revenue and foot traffic. Data sets may not be fully comprehensive (they only account for what is available on the web), but they can be used to gauge performance factors like staffing and sales.


Leveraging AI to reduce COVID-19 risk: ‘It’s not enough to rely on test and trace’ – FoodNavigator.com

The UK Government has said that businesses must update their risk assessments to factor in the dangers associated with coronavirus. This means that, in order to remain compliant and avoid any future liability issues, businesses need to take action to mitigate the impact of the virus on their workforce.

"The government has clearly warned that any food business which fails to complete a risk assessment that takes COVID-19 into account could be in breach of health and safety law. Employers therefore need to prioritise managing the risks properly. They need to consider the wider context of the workforce to ensure there are no weaknesses in procedures that may put them and their employees at risk," Will Cooper, CEO and founder at Delfin Health, explained.

Digital health tech companies Delfin Health and DocHQ have created a new tool that leverages artificial intelligence to predict, monitor and test the health and safety of the diverse workforce that operates in the food sector.

"The very nature of food production means there are many different functions and roles within food manufacturing," Cooper observed. "And, because these workers are directly involved in food processing and the handling of food production, employers are required by law to follow specific government guidelines."

Dubbed Klarity, the AI can give food businesses a real-time clinical understanding of the health of their workers across various job functions, from food inspectors, food handlers, packers, managers and cleaners, to maintenance contractors and delivery workers.

Tools like Klarity can both mitigate any potential employer liability risks and provide a long-term solution to a health crisis.

This could become particularly important for essential food businesses if there is a second spike in COVID-19 that results in further lockdowns, either locally or nationally.

"It can help manufacturers stay operational during a potential second lockdown. Due to food being an essential industry, we have already seen them continue to operate during the first wave of COVID-19, albeit in a limited way which is putting their key workers at risk. If these are to remain open, they need to be able to monitor the health and safety of their staff in the most efficient way."

While food businesses have remained largely operational during the various national lockdowns, certain facilities have had to be shuttered due to localised outbreaks. Cases in the meat sector - from Germany and the US to the UK and the Netherlands - have highlighted issues that Cooper believes everyone operating in the food industry would do well to take heed of.

"It's not enough for employers to simply rely on people using the government's test and trace solution, which tests only symptomatic people. There are cases of the virus spreading rapidly throughout food manufacturing units in the US and Germany, and no doubt elsewhere, due to the conditions of the facilities, which typically involve close contact. It's also highly likely these units relied on just testing symptomatic people or sending them home. It requires a systematic process of regular testing."

Cooper does not believe it is possible to simply test the entire workforce due to cost and capacity restraints. Digital AI platform Klarity takes a different approach.

"One of the important roles in COVID-19 transmission in this pandemic, especially at this stage, is being played by asymptomatic individuals. Although theoretically speaking the easiest solution might be to apply systematic testing to the general population, doing that would be technically unfeasible due to lack of resources and skyrocketing expenditure."

"We have developed a tailored solution that guarantees a consistent testing methodology developed to filter infected asymptomatic employees among large taskforce pools. Thus, our solution can meet the requirements of different sectors, reducing the number of tests, decreasing uncertainty in the workplace and potentially mitigating future outbreaks."

How does Klarity work?

Cooper elaborated: "The explainable AI that we have developed asks a series of questions about a person's health history, family health history and current lifestyle. The algorithm within Klarity has been developed using data from one of the largest patient datasets in the world, containing over 6.5 million patient years of both medical and lifestyle data."

"We use this information to predict the mortality risk of a patient if they contract COVID-19. Our technology further combines optional daily symptom checking and live virus and antibody testing methodologies to track asymptomatic patients before they transfer this disease to people around them," he told FoodNavigator.

The various testing methodologies, which include group and randomised testing, allow employers to reduce the amount of testing required and minimise the risk of an outbreak in the currently active workforce, in particular by identifying asymptomatic cases.

The testing process is guided by healthcare professionals who also interpret the results based on World Health Organisation protocols and third-party validation including Polymerase Chain Reaction (PCR), Enzyme-Linked Immunosorbent Assay (ELISA) and, where relevant, rapid antibody tests.

"Our solution allows a reduction in testing yet, through group and randomised testing, identifies the virus quickly."
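The arithmetic behind group testing is worth making concrete. Under the classic two-stage (Dorfman) scheme, which the group testing described here plausibly resembles, samples are pooled, and only members of a positive pool are retested individually. A sketch under idealised assumptions (a perfectly accurate test and independent infections); the numbers are illustrative, not Klarity's:

```python
def dorfman_expected_tests(n_workers, group_size, prevalence):
    """Expected number of tests under two-stage (Dorfman) group testing.

    Pool `group_size` samples per test; if a pool is positive, retest
    each of its members individually. Assumes a perfectly accurate test
    and independent infections -- a simplification of any real
    screening programme.
    """
    n_groups = n_workers / group_size
    # A pool is negative only if every member in it is negative.
    p_pool_positive = 1 - (1 - prevalence) ** group_size
    return n_groups * (1 + group_size * p_pool_positive)

# 1,000 workers, pools of 10, 1% prevalence: roughly 196 tests
# expected, versus 1,000 if everyone were tested individually.
print(round(dorfman_expected_tests(1000, 10, 0.01)))  # 196
```

The saving evaporates as prevalence rises (most pools come back positive), which is why pool size would need retuning as workplace infection rates change.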

Cooper believes that, as well as reducing the risk of outbreaks in a food business, the use of Klarity would also serve to reassure employees that they are safe at work.

Not everyone will feel comfortable sharing details of their medical history and lifestyle choices, particularly with a programme provided through their employer. For this reason, Cooper stressed, data protection is key.

"Data privacy is one of our utmost priorities, as we are processing sensitive and private patient information. Patients have complete control of their data; they can share and revoke consent. Moreover, no data ever leaves the platform. We don't share sensitive personal health data, only the aspects necessary to help employers keep employees safe. Our platform keeps up to date with the ever-changing policies and regulations so that companies don't have to worry about GDPR rules and employee rights," he told us.

"In terms of encryption, we use a highly secure (quantum-resistant), distributed and highly configurable storage mechanism, allowing citizens the ability to source, store and share (by record, or down to individual fields) their data."


The Future of Productivity: AI and Machine Learning – Entrepreneur

The productivity and project management market is booming, and it's continuing to evolve in new and exciting ways. I wanted to know what the future of artificial intelligence in project management would look like, so I reached out to founders, productivity experts and futurists who work in this space every day to ask what their predictions are for the next five and 10 years. Their answers were enlightening.

David Allen, the inventor of Getting Things Done, believes, "Systems will get better at presenting the relevant data to optimize our experience in every situation -- at the right place, at the right time. We need to think of productivity systems as supporting systems for our decision process."

Related: How to Prepare Employees to Work With AI

So, we won't yet be using AI to eliminate our decisions and automate them, but to enhance the ability to make a decision in any situation. I also think this is the next likely step for AI. Most people wouldn't trust a computer to make decisions for them, but they do look for information to help make those decisions.

Allen goes on to say, "Neither these new presentation forms nor trends like A.I. will make a decision for you. That won't work. I see that A.I. can support your decisions but we still will use our heads to make decisions."

When most people think of AI, they think of robots doing manual labor to get things done for us, whether it's Rosie from The Jetsons or a robot on the assembly line at Ford. The issue I have with this is that this view is too limiting. Sure, robots will become intelligent, but so will everything else. The team behind the online video series In a Nutshell released a great explainer video about this.

Right now, an app that will be able to recognize your motivation level and give you tasks that fit that level is not out of the question. Bots that can answer simple service questions and learn from the responses are already around, working with some success and some big failures.

Related: Rethinking Chatbots: They're Not Just for Customers

Mark Mader, CEO at Smartsheet, argues that picturing AI as roving robots misses the point, saying, "Looking further out, there's no doubt that automation -- don't think robots, think removing mundane and unproductive work steps from your day -- will increase. Machine learning will be able to predict what workers are trying to do and make their work easier. How? By automatically gathering the information they need to complete a task, populating forms and sharing them with the appropriate people."

I see a combination of both. Specifically in project management, I see a future where machines will be able to predict a change using real-time data and make changes accordingly. It's the combination of bots and machine learning that holds the key: Think of an assembly line system pushing out barbecue equipment. The system will be able to predict that demand will increase due to an upcoming holiday and automatically tell the bots on the line to increase production. I'm not sure if there will be a human decision between them, but I think as we become more comfortable with the machines' decisions, we'll give them more control of the process.

AI is far from being human, let alone superhuman. As I pointed out, we've released machine learning bots with some pretty terrible results. Machines are not yet good at understanding context or sarcasm, so when we let them learn, they usually miss the mark.

Using machines to help with a decision, however, seems like the only way forward for the moment. As a form of decision support, productivity expert Carl Pullein thinks that machine learning and artificial intelligence "[will move] towards creating productivity tools that can schedule your meetings and tasks for you and to be able to know what needs to be done based on your context, where you are and what needs to be done."

Related: Why Small Business Should Be Paying Attention to Artificial Intelligence

Machine learning-enabled tools like Grammarly are already on the market, but as these decision-making aides become more well-known, they are moving into more complex areas. Think of it as your Facebook timeline algorithm or your spam filter, but for your to-do list.

To get a sense of where we are now, think of systems that are on and learning all of the time. Your computer browser talks to Google and Facebook and any number of companies, where what you're doing, clicking, buying and beyond is stored. Now, with the recent trend of IoT, these things are moving away from our computers and cellphones and becoming part of our everyday lives, gaining access to data along the way. Your smart fridge might be able to tell the local supermarket how much water you drink per week. If everyone in the local area has a smart fridge, that same supermarket would be able to make a better decision about how much water to keep stocked.
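The supermarket scenario is, at bottom, simple aggregation of sensor reports plus a buffer. A toy sketch of that step; the household figures and the 20 percent safety margin are invented for illustration:

```python
def weekly_stock_litres(fridge_reports, safety_margin=1.2):
    """Estimate litres of water to stock next week from per-household
    smart-fridge consumption reports, padded by a safety margin
    (here an arbitrary 20%) to absorb week-to-week variation."""
    return sum(fridge_reports) * safety_margin

# Three households reporting last week's consumption, in litres.
print(weekly_stock_litres([10, 14, 6]))  # 36.0
```

A real system would replace the flat margin with a forecast learned from seasonality and past demand, but the data flow (many small local reports feeding one stocking decision) is the same.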

Productivity psychologist Melissa Gratias sees this system working for us in our workplaces too. "Most apps and programs require the user to purposefully interface with the tool in order to use it. We will see more smart homes, smart cars and voice-activated entry points that allow the tool to be always available to the user, no matter where she is. She won't have to stop what she's doing to, for example, add something to her task list."

Related: Will a Robot Take My Job?

So, not only will the supermarket know what to stock, but your fridge will know what to add to your shopping list for the next week. Automatically, through learned behavior.

All the experts I spoke with agreed that decision support is likely the only way forward in the short term. Machines are being taught the decision preferences of humans, because they can't discern context on their own. So, we need to educate machines on what it's like to think like a human. Companies like Alphabet are already working on it, with projects like DeepMind at the forefront of AI and machine learning technologies. Others, like Elon Musk's OpenAI, are working to make sure that humanity's fears of a malevolent AI will never be realized. As we learn to trust these systems, adoption will quickly follow. And since they're so universal, they will surely touch all industries.

Martin Welker is the founder and CEO of collaboration platform Zenkit. After finishing his studies in computer science at KIT in Germany, he established Axonic, where he created one of the leading AI-driven engines for document analysis...


Researchers Rank Deepfakes as the Biggest Crime Threat Posed by AI – Adweek

While science fiction is often preoccupied with the threat of artificial intelligence successfully imitating human intelligence, researchers say a bigger danger right now is people using the technology to imitate one another.

A recent survey from University College London ranked deepfakes as the most worrying application of machine learning in terms of potential for crime and terrorism. According to 31 AI experts, the video fabrication technique could fuel a variety of crimes, from discrediting a public figure with fake footage to extorting money through video call scams impersonating a victim's loved one, with the cumulative effect leading to a dangerous mistrust of audio and visual evidence on the part of society.

The experts were asked to rank a list of 20 identified threats associated with AI, ranging from driverless car attacks to AI-authored phishing messages and fake news. The criteria for the ranking included overall risk, ease of use, profit potential and how hard the threats are to detect and stop.
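A ranking over several criteria like this is commonly reduced to a weighted sum per threat. The weights and scores below are invented for illustration and do not reproduce the UCL study's data:

```python
def rank_threats(scores, weights):
    """Order threats by a weighted sum over the ranking criteria."""
    def total(name):
        return sum(scores[name][criterion] * weight
                   for criterion, weight in weights.items())
    return sorted(scores, key=total, reverse=True)

# Hypothetical criteria weights and 0-10 expert scores.
weights = {"harm": 0.4, "ease": 0.2, "profit": 0.2, "defeatability": 0.2}
scores = {
    "deepfakes": {"harm": 9, "ease": 8, "profit": 8, "defeatability": 9},
    "phishing": {"harm": 6, "ease": 9, "profit": 7, "defeatability": 6},
    "driverless-car attacks": {"harm": 9, "ease": 2, "profit": 1, "defeatability": 5},
}
print(rank_threats(scores, weights))
# ['deepfakes', 'phishing', 'driverless-car attacks']
```

In this toy scoring, deepfakes top the list for the same reason the article gives: they rate highly on every criterion at once, not just on harm.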

Deepfakes are worrying on all of these counts. They are easily made and increasingly hard to differentiate from real video. Advertised listings are easily found in corners of the dark web, and the prominence of the targets and the variety of possible crimes mean that there could be a lot of money at stake.

While the threat of deepfakes was once confined to celebrities, politicians and other prominent figures with enough visual data to train an AI, more recent systems have proven effective when trained on as little as a couple of photos.

"People now conduct large parts of their lives online and their online activity can make and break reputations," said the report's lead author, UCL researcher Matthew Caldwell, in a statement. "Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity."

Despite the abundance of possible criminal applications of deepfakes, a report last fall found that they are so far primarily used by bad actors to create fake pornography without the subject's consent.

Not all uses for deepfakes are nefarious, however. Agencies like Goodby, Silverstein & Partners and R/GA have used them in experimental ad campaigns, and the underlying generative technology is helping fuel different types of AI creativity and art.


War has never been a reason not to replace a UK Prime Minister – The National

IT is striking to note the spurious argument that because there is war in Ukraine this means that the Prime Minister should remain in office and not be replaced.

This is a man who has broken the law, the first Prime Minister in office to do this, as well as lying to parliament. History highlights that on numerous occasions we have replaced the Prime Minister in wars we have been directly involved in.

For instance, in May 1940 Neville Chamberlain resigned after the failure of British efforts to liberate Norway. In December 1916, at the height of the First World War, Lloyd George replaced Herbert Asquith. More recently, Mrs Thatcher resigned in November 1990, with Iraq invading Kuwait in August of that year, which led to the Gulf War.

READ MORE:Nicola Sturgeon: Douglas Ross using Ukraine to defend Johnson is 'lowest of the low'

Add to this changes to Prime Ministers during the war in Afghanistan, the Second Boer War, the Second Opium War and the Crimean War. Changing a Prime Minister in a time of conflict is clearly not unprecedented.

Those who make the law cannot be seen to be breaking the law and it is scarcely credible that Mr Johnson, who has now lost the final fragments of any moral authority he did have, can carry the confidence of the country and remain in office.

Alex Orr, Edinburgh

I HAVE squirmed repeatedly as Unionists defend Boris Johnson on the grounds that an illegal party is trivial compared with the bloody war in Ukraine.

Firstly, the whole mechanism of government will be available to deal with the ramifications of the war, and the PM's part is comparatively minor; in fact, given Johnson's legendary laziness and lack of preparation it's probably negligible. But secondly, and far more importantly, the Tory drinks parties were not victimless crimes as has been implied.

READ MORE:Douglas Ross denies 'cynically using Ukraine' to defend law-breaking Boris Johnson

An ordinary member of the public, breaking the rules, risks harming and maybe causing the deaths of a few acquaintances. A senior politician breaking the rules risks much more: a public loss of belief, replaced by nihilism, and ultimately a large amount of avoidable mortality and morbidity. It's a crime of reckless endangerment of millions of people.

Derek Ball, Bearsden

WHAT a dreadful shambles that is Westminster. The odious Boris Johnson supported by his dumb and dumber Scottish rep Douglas Dross.

Johnson has lied, misled parliament and broken the ministerial code; he is the first PM in office to break the law, and he is at the root of the cronyism and PPE scandal that has lined the pockets of his millionaire buddies.

READ MORE:Douglas Ross 'destroys credibility' with car crash BBC Scotland interview

The cost-of-living crisis is in the large part due to him and his business vultures who put themselves and their wealth first and enforced Brexit on Scotland.

Johnson's MPs are standing by him, but that does not mean that our SNP MPs need to join the eejits; they should stand up for Scotland and refuse to work with him. Come home and let's start the campaign for our self-determination, our independence. Scotland deserves better than being dominated by a bunch of self-serving numpties.

Jan Ferrie, Ayrshire

WHY are individuals and the media overlooking the fact that Johnson lied to parliament, which, when exposed, calls for the automatic resignation of the perpetrator? Shouldn't the Speaker be taking action to implement parliamentary standards?

Douglas Ross claiming that Johnson's presence is essential while the war is raging in the Ukraine is nonsense. Surely a party with an 80-seat majority could find a better replacement, even one who doesn't tell porkies. Johnson's handling of the run-up to the war did nothing to enhance his attempt at statesmanship.

Mike Underwood, Linlithgow

SO here we have D Ross, who on the basis of what were then merely allegations publicly called for Johnson to resign or be sacked. Now with actual guilt being found and confirmed by Johnson, D Ross says Johnson must stay because of Ukraine. At first the logic of this was beyond me, then I saw his logic!

Clearly when dealing with a pathological lying populist in the Kremlin, the UK needs a lying populist (or two) in Downing Street!

The criminal and Unionist party are a disgrace lets throw them out here in Scotland and lets get Scotland out of the UK, which is run by pathological liars and now criminals!

Rab Doig, via email

THE widespread use of the name BOZO almost sounds like a term of endearment these days. Following Tuesday's confirmation of party fines, may I suggest the name is henceforth changed to BOOZO? At least, being more apt, it can't easily be misconstrued.

Bruce Moglia, Bridge of Weir

DOUGLAS Ross suggesting this is not the time to remove a Prime Minister who has been found out only serves to promote Mr Ross's cowardliness. It is not only the party that is over: political careers are hanging by a thread. Voters only have a short time to wait for the opportunity to give their verdict: the local elections are looming on May 5 and a clear message should be forthcoming.

Catriona C Clark, Falkirk

IF the lying Prime Minister will not resign, surely one of the Queen's final acts should be to summon him to Buckingham Palace and insist that he resign to uphold longstanding protocols. Does this also mean that Ian Blackford can call him a liar in the House of Commons without fear of contradiction?

Steve Cunningham, Aberdeen


Trafficking conviction results in 2-14-year sentence | Serving Minden-Gardnerville and Carson Valley – The Record-Courier

A longtime Johnson Lane resident was sentenced to 2-14 years in prison on Monday after being convicted by a jury of multiple drug felonies.

Robert Vieth Wilson, 65, has indicated on several occasions that it's his intention to appeal his conviction for charges of trafficking, sales and possession of methamphetamine, 4.4 pounds of psilocybin mushrooms and concentrated marijuana.

"I know the truth will come out," he said as his friends and family watched from the gallery.

Wilson was given credit for 833 days time served since his first arrest on Oct. 16, 2019.

Attorney Christopher Day recommended Wilson be sentenced to 19-48 months in prison with credit for time served on the 11 counts the jury voted to convict him on.

Day said that psilocybin mushrooms are being legalized in other states and appear to be following the same path as cannabis.

The defense attorney cited the death of Wilson's wife in 2015 as the cause of his spiral into drug use. Prior to that, he had a clean criminal record. Day argued that Wilson's crimes were essentially victimless.

"He is not a drug trafficker," Day said. "He is a harmless and hapless individual. He is not a criminal mastermind. He won't hurt the community."

Prosecutor A.J. Hames told District Judge Tom Gregory that Wilson had significant amounts of drugs in his home.

"This was a big operation," he said.

Hames said that Wilson had been arrested in 2017 for concentrating marijuana in his home and that he'd entered a no contest plea to misdemeanor introduction of a drug not allowed in interstate commerce in 2018 and placed on two years probation.

His arrest in 2019 violated that probation and he was ordered to serve that sentence.

Hames recommended an aggregate sentence of 20-53 years in prison, which Day argued would amount to a death sentence.

Wilson benefited from the reduction in sentences approved by the 2021 Legislature, which took effect July 1, 2022. Had he been sentenced under the old scheme, he would have faced multiple life terms.

One of the issues at trial was that the information provided by a confidential informant resulted in up to five different buys, something Gregory said resulted in an unusual stacking of charges.

The informant refused to testify at Wilson's trial after he violated the probation he'd won by participating in the buys and was sent to prison. The Record-Courier reported on the informant's case, including his eventual sentencing.

Gregory sentenced Wilson to 1-10 years on the first count of trafficking and 19-48 months for the following five counts related to Wilsons sale of drugs to an informant.

Wilson received a consecutive 1-4-year sentence for drugs found during a search of his home, with three felonies and a gross misdemeanor running concurrent.

Wilson has 21 days from Monday to file an appeal of his conviction.


OC District Attorney's Office launches diversion program for cases that may involve mental health, substance abuse – OCRegister

Orange County has launched a program to connect eligible people arrested for low-level crimes with mental health and substance abuse services before criminal charges are filed in an effort to curb reoffending, the District Attorney's Office announced Wednesday.

The FIRST (Focused Intervention Route to Services and Treatment) Point Diversion Program plans to target specific people with possible behavioral issues, including mental health challenges or substance abuse, who have committed victimless misdemeanor crimes or misdemeanor crimes where the victim is cooperative.

It will be headed by the OC District Attorney's Office in partnership with participating police departments around Orange County, including the Orange County Sheriff's Department, Irvine Police Department and Seal Beach Police Department.

A law enforcement officer from one of the participating agencies will send the crime report to a designated prosecutor. If the prosecutor believes there is enough evidence to file charges, but the person qualifies for the program, the case is referred to an intake counselor at the county's Health Care Agency to assess and develop a treatment plan. This could include services such as substance abuse counseling and mental illness assistance.

The person's progress and success in the program are monitored by their counselor. If successful, the charges are not filed, but if the program is not completed within 10 months, prosecutors will still have the ability to file criminal charges before the one-year statute of limitations expires.

Participants in the program will also be able to consult with the County's Social Services Agency to see if they are eligible for food stamps or Medi-Cal services. Access to social services and any benefits they may receive will continue regardless of whether the program participant successfully completes the program, the OC District Attorney's Office said in a news release.

Law enforcement agencies hope that addressing underlying issues in exchange for not filing criminal charges will spare participants a criminal record that could impede their employment or housing opportunities, and will reduce the number of repeat offenders.


Industry Perspectives Op-Ed: When contractors engage in tax fraud they are stealing from the rest of us – constructconnect.com – Daily Commercial News

Every year, Canadians contribute to their communities by paying taxes.

Public tax dollars pay for critical investments in our schools and hospitals and for a host of other benefits for both the elderly and our children.

The money we pay builds the neighbourhoods we live in and takes care of the people around us. However, every year many construction contractors and builders selfishly and criminally refuse to pay their fair share. Does that seem right to you?

According to the Ontario Construction Secretariat, our governments lose up to $3.1 billion in revenue due to construction contractors not paying their fair share of taxes. They lose an estimated $1.1 billion in general taxes, $656 million in CPP contributions, $18 million in Employer Health Tax, $119 million in employment insurance, $340 million in WSIB payments and $832 million in HST revenues.

For governments at every level, finding the required financial resources is key to our ongoing ability to deliver much-needed health care, education and infrastructure investments. Stamping out tax fraud in construction is a crucial tool in enabling us to collect such financial resources. Just think of the services and investments which could be paid for with the extra $3.1 billion that would be available each and every year if everyone paid their fair share.

Tax fraud within the construction industry takes many forms.

It can be as simple as people paying cash for home renovations or, as is extremely common now, the misclassification of workers as independent contractors.

These practices lower the general contractor's income and payroll tax responsibilities and allow them to avoid tax and CPP/EI/WSIB contributions. This goes on in every trade across the entire industry, including floor covering installers, tile setters, painters and decorators and carpenters. Unfortunately, the underground economy is growing.

It is sometimes assumed that practices like paying cash or misclassifying workers are victimless crimes, but workers vanishing into the grey economy has led, in the worst cases, to human trafficking and a serious disregard for workers' basic health and safety, sometimes resulting in deaths.

We have seen examples of all of these phenomena in recent years and, in fact, one of the biggest human trafficking cases in Canadian history, the Domotor Case, was a result of these practices right here in Ontario.

We at the Carpenters Union are determined to do something about this.

When construction companies engage in tax fraud, such as by not classifying their workers properly to avoid their tax obligations, two severe consequences result: economic inequity and worker vulnerability.

Economic inequity results from the unfair advantage gained by employers who do not play by the rules and improperly avoid taxes. This hurts industry competitors, but it also hurts the general public.

Now, when governments collect taxes to pay for infrastructure or social security, the average citizen has to contribute more to make up for the amounts not contributed by the construction companies.

This means that as spending increases with our economy bouncing back from the COVID-19 pandemic, honest and hardworking taxpayers are hurt the most.

Additionally, by committing tax fraud, contractors are putting the lives of their workers at risk. Paying construction workers off the books or as independent contractors makes employers less accountable and jeopardizes basic safety standards on jobsites.

Workers can often be cheated out of their pay and forced to work long hours, under horrendous conditions, at jobsites that simply are not safe.

This can include not being paid overtime or other required premiums, or not being paid at all in the worst cases. These are just a few examples of what is going on in Ontario right now.

This underground economy is large in size and broad in scope.

However, whether someone is being paid under the table to paint the inside of a house or human trafficking is taking place, both are crimes and should not be tolerated in Ontario's construction industry. Quite simply, all forms of tax fraud are theft, and when contractors engage in tax fraud they are stealing from the rest of us.

While tax fraud is not an issue that can be resolved overnight, there are tangible steps that can be taken to begin eliminating these toxic practices on our jobsites.

The implementation and enforcement of a fair wage policy by both the Ontario and federal governments needs to happen now, and these policies need to be applied universally to all builders.

This is a solution that the City of Toronto adopted over 100 years ago to ensure their workers were not discriminated against by contractors failing to pay fair hourly wages, vacation and holiday pay, along with certain basic benefits.

Toronto's Fair Wage Policy ensures the ethical treatment of workers and holds employers accountable through oversight and enforcement by the city's fair wage office.

If the federal and provincial governments adopted these practices it would greatly reduce the employment vulnerability that many workers face and help eliminate unfair competition between contractors. We need to take a stand against tax fraud and show our support for those who build our province.

The Carpenters District Council of Ontario is leading the way against tax fraud in the construction industry and will be promoting Tax Fraud Days of Action from April 11 to 16, as seen on our website http://www.notaxfraud.com.

With a long economic recovery and large levels of uncertainty ahead, it is important that contractors pay their fair share, to ensure critical infrastructure can continue to be built and our economy makes a strong comeback.

For more information also see http://www.stoptaxfraud.net.

Mike Yorke is president of the Carpenters District Council of Ontario. Send Industry Perspectives comments and column ideas to editor@dailycommercialnews.com.


Two expected to plead guilty in cryptocurrency case – Concord Monitor

As Keene resident and libertarian activist Ian Freeman awaits trial on federal charges related to his bitcoin-exchange business, two of his alleged co-conspirators have signaled they will enter guilty pleas.

Renee and Andrew Spinella, both of Derry, are scheduled for change-of-plea hearings Tuesday in U.S. District Court in Concord. Their shift from not-guilty pleas to pleading guilty would mark the first time any of the six alleged co-conspirators have admitted wrongdoing.

Freeman, Colleen Fordham of Alstead, Aria DiMezzo of Keene and a Keene man who legally changed his name from Richard Paul to Nobody have all pleaded not guilty to all charges.

Prosecutors claim Freeman and his alleged co-conspirators violated federal law by running an unlicensed virtual currency-exchange business that handled more than $10 million in transactions over several years.

According to the government, Freeman and other co-defendants used personal bank accounts and accounts in the names of purported religious entities like the Shire Free Church, the Crypto Church of NH, the Church of the Invisible Hand and the Reformed Satanic Church to conceal the nature of their business while directing customers to falsely report that they were donating to churches or buying rare coins, not purchasing cryptocurrency.

The government arrested the six in March 2021. The FBI conducted several searches in Keene one day that month, including at 73-75 Leverett St. and at two properties on Route 101.

Those properties are linked to the libertarian activist group known locally as Free Keene, which has ties to some of the defendants. The Route 101 searches were at 661 Marlboro Road, at a business called Bitcoin Embassy N.H., and 659 Marlboro Road, which is owned by Shire Free Church Holdings LLC. The FBI also conducted an operation at a local convenience store, with an employee at the time telling The Sentinel agents removed a Bitcoin ATM.

Court records indicate the defendants' trial is scheduled to begin Nov. 1.

All six alleged co-conspirators were charged with conspiracy to operate an unlicensed money-transmitting business, and all but DiMezzo also face a charge of conspiracy to commit wire fraud.

Renee Spinella additionally faces two charges of wire fraud and Andrew Spinella faces a single additional charge of wire fraud. Court documents do not indicate what charge or charges they are expected to plead guilty to.

Freeman also faces charges of operation of an unlicensed money-transmitting business, continuing financial-crimes enterprise, money laundering and six counts of wire fraud. The continuing financial-crimes enterprise charge carries a 10-year mandatory minimum sentence.

"Unfortunately the way the federal government works is they do their best to intimidate people by stacking on as many charges as possible," Freeman said Saturday.

Freeman said he has not been allowed to talk to his co-defendants, but has heard that the prosecution threatened the Spinellas with additional charges to force them into a plea deal. The Sentinel has not been able to confirm this, and court documents do not indicate any additional charges. Renee Spinella was not immediately reachable by phone, and neither Andrew Spinella nor his attorney was immediately reachable Saturday for comment.

Freeman said he does not expect that the Spinellas will cooperate with the government, despite the scheduled guilty plea.

"Nobody here did anything wrong. These are victimless so-called crimes," he said. "I expect they will not be cooperating with the state because we all believe the state is evil."

Assistant U.S. Attorney Georgiana L. MacDonald has previously alleged that hordes of cybercriminals bought virtual currency from Freeman in an effort to avoid detection by banks and government regulators.

The government also claims in court documents that Freeman allowed an undercover agent to exchange around $20,000 in cash for bitcoin after the agent told him he was dealing drugs. Freeman's lawyer, Mark Sisti, has previously told The Sentinel he doesn't know where the government's claim about an undercover agent is coming from. He said he has seen evidence of Freeman refusing to deal with criminals.

DiMezzo, also reached by phone Saturday, said the Spinellas have to do what is best for themselves even if that means entering a plea deal with the government.

"In the libertarian philosophy, as long as they are making the decision that is best for them, the world is best served," DiMezzo said.

While she said she believes a jury will find no evidence of the alleged crimes, "certainly, if they agree to be star witnesses to the prosecutors, that certainly will have an effect on other people's cases."

But a guilty plea is not evidence of guilt, DiMezzo argued, claiming, as Freeman did, that the federal government stacks charges against defendants to bully them into pleas.

"It's hard to accept a guilty plea as an actual confession of guilt in the modern court system," she said. "Whether they're guilty or not, [defendants] accept the deal to make the bigger threat go away."
