How AI is shaping the new life in life sciences and pharmaceutical industry – YourStory

The pharma and life sciences industry is faced with increasing regulatory oversight, decreasing R&D productivity, challenges to growth and profitability, and the impact of artificial intelligence (AI) on the value chain. The regulatory changes led by the far-reaching Patient Protection and Affordable Care Act (PPACA) in the US are forcing the pharma and life sciences industry to change its status quo.

Besides the increasing cost of regulatory compliance, the industry is facing rising R&D costs, even as health outcomes deteriorate and new epidemics emerge. Led by the regulatory changes, customer demographics are also shifting, and growth is being driven by emerging geographies in the APAC and Latin American regions.

Pharmaceutical organisations can leverage AI in a big way to drive insightful decisions across all aspects of their business, from product planning and design to manufacturing and clinical trials, in order to enhance collaboration in the ecosystem, information sharing, process efficiency, and cost optimisation, and to drive competitive advantage.

AI enables data mining, data engineering, and real-time, algorithm-driven decision-making solutions, which help in responding to key business value chain disruptions in the pharmaceutical industry.

Though genomics currently hogs the spotlight, there are plenty of other biotechnology fields wrestling with AI. In fact, when it comes to human microbes (the bacteria, fungi, and viruses that live on or inside us), we are talking about astronomical amounts of data. Scientists with the NIH's Human Microbiome Project have counted more than 100 trillion microbes in the human body.

To determine which microbes are most important to our well-being, researchers at the Harvard School of Public Health used unique computational methods to identify around 350 of the most important organisms in their microbial communities. With the help of DNA sequencing, they sorted through 3.5 terabytes of genomic data and pinpointed genetic name tags (sequences specific to those key bacteria). They could then identify where and how often these markers occurred throughout a healthy population. This gave them the opportunity to catalogue over 100 opportunistic pathogens and understand where in the microbiome these organisms occur normally. As in genomics, there are plenty of startups (Libra Biosciences, Vedanta Biosciences, Seres Health, Onsel) looking to leverage new discoveries.

Perhaps the biggest data challenge for biotechnologists is synthesis. How can scientists integrate large quantities and diverse sets of data (genomic, proteomic, phenotypic, clinical, semantic, social, etc.) into a coherent whole?

Many AI researchers are working to provide plausible answers:

Cambridge Semantics has developed semantic web technologies that help pharmaceutical companies sort and select which businesses to acquire and which drug compounds to license.

Data scientists at the Broad Institute of MIT and Harvard have developed the Integrative Genomics Viewer (IGV), open source software that allows for the interactive exploration of large, integrated genomic datasets.

GNS Healthcare is using proprietary causal Bayesian network modeling and simulation software to analyse diverse sets of data and create predictive models and biomarker signatures.

Genomics and the role of AI in personalising the healthcare experience: numbers-wise, each human genome is made up of 20,000-25,000 genes comprising about three billion base pairs. That's around three gigabytes of data.

Sequencing millions of human genomes would add up to hundreds of petabytes of data.

Analysis of gene interactions multiplies this data even further.

In addition to sequencing, massive amounts of information on structure/function annotations, disease correlations, population variations (the list goes on) are being entered into databanks. Software companies are furiously developing tools and products to analyse this treasure trove.

For example, using Google frameworks as a starting point, the AI team at NextBio has created a platform that allows biotechnologists to search life-science information, share data, and collaborate with other researchers. The computing resources needed to handle genome data will soon exceed those of Twitter and YouTube, says a team of biologists and computer scientists who are worried that their discipline is not geared to cope with the coming genomics flood.

By 2025, between 100 million and 2 billion human genomes could have been sequenced, according to an analysis published in the journal PLoS Biology. The data-storage demands for this alone could run to as much as 240 exabytes (1 exabyte is 10^18 bytes), because the data that must be stored for a single genome is about 30 times larger than the genome itself, to make up for errors incurred during sequencing and preliminary analysis.
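
As a rough, back-of-the-envelope illustration of where such figures come from (a sketch using only the per-genome size and 30x storage overhead quoted above; the genome counts are the 2025 range cited), the arithmetic might look like this:

```python
# Back-of-envelope estimate of genomic data storage needs,
# using the figures quoted above (illustrative only).
GENOME_SIZE_GB = 3        # ~3 billion base pairs, roughly 3 GB per genome
OVERHEAD_FACTOR = 30      # stored data is ~30x the genome size

for genomes in (100e6, 2e9):           # 100 million to 2 billion genomes by 2025
    total_gb = genomes * GENOME_SIZE_GB * OVERHEAD_FACTOR
    total_eb = total_gb / 1e9          # 1 exabyte = 10^9 gigabytes (10^18 bytes)
    print(f"{genomes:.0e} genomes -> ~{total_eb:.0f} exabytes")

# Roughly 9 EB for 100 million genomes and 180 EB for 2 billion:
# the same order of magnitude as the ~240 EB figure cited above.
```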

The extensive data generation in pharma, genomics, and microbiome research serves as a clarion call that these fields are going to pose some severe challenges. Astronomers and high-energy physicists process much of their raw data soon after collection and then discard it, which simplifies later steps such as distribution and analysis. But fields like genomics do not yet have standards for converting raw sequence data into processed data.

The variety of analysis that biologists want to perform in genomics is also uniquely large, the authors write, and current methods for performing these analyses will not necessarily translate well as the volume of such data rises. For instance, comparing two genomes requires comparing two sets of genetic variants. If you have a million genomes, you're talking about a million-squared pairwise comparisons. The algorithms that will deliver this will need to be backed by strong data engineering capabilities.
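
To make that scaling concrete, here is a small illustrative calculation (the genome counts are arbitrary examples chosen for this sketch):

```python
from math import comb

# Distinct pairwise comparisons among n genomes: n choose 2 = n*(n-1)/2.
for n in (1_000, 1_000_000):
    print(f"{n:>9,} genomes -> {comb(n, 2):,} pairwise comparisons")

# A million genomes already implies roughly 5 x 10^11 comparisons, i.e. on the
# order of a million squared, which is why both the algorithms and the
# underlying data engineering have to scale.
```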

There's a massive opportunity for AI to transform the life sciences and pharmaceutical industry. The above-mentioned disruptions in business value chains have already started making inroads, and CXOs in the life sciences industry have realised the virtues of an AI-led innovation and transformation regime. Brace for more AI-driven interventions in the life sciences industry.

(Edited by Evelyn Ratnakumar)

(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)


Leadership in the age of Artificial Intelligence – Analytics Insight

Stationed at the frontier of the accelerating artificial intelligence (AI) landscape, organizations need executives who can make nimble, informed decisions about where and how to employ AI in their business. Driving industry-wide digital transformation, the technology has permeated more organizations and more parts within organizations, including the C-suite. The very fundamentals of leadership need to be rethought, from overall strategy to customer experience, in order to deploy AI appropriately while also considering human capital.

As conventional business leadership gives way to new approaches, opportunities, and threats as a result of broader AI adoption, a new set of AI executives is ready to take on the challenge of driving better innovation and competitiveness. Several C-level executives, in today's dynamic AI culture, are confident enough to steer their organization's leadership team towards adopting significant and innovative AI approaches across the business.

As it stands now, top AI executives are not only evolving at a rapid pace but also revamping their surroundings for better technology implementation. Moreover, their employees and fellow teammates support them with full confidence while promoting the positive aspects of AI. To excel further, C-level executives stress the need to train the leadership team on AI as a top priority.

Despite business leaders' optimism about artificial intelligence and the opportunities it presents, they cannot neglect its potential risks. A number of C-level executives and their leadership teams are hesitant to invest in AI technologies because of security or privacy concerns. However, showcasing the brave and progressive attributes of leadership, while ensuring security through innovation, some prominent executives are experimenting with AI capabilities, and evidently, those are the ones who form the clan of topmost AI executives across the industry.

As claimed by certain market reports, business executives have seen great success with AI across five major industries: retail, transportation, healthcare, financial services, and technology itself. Tracing the success map of such leaders, executives across various other sectors are now adopting AI capabilities more aggressively than before.

In the age of AI, business executives must focus on embedding AI into their strategic plans, which would subsequently enable such frontrunners to develop an enterprise-wide strategy for AI that individual business segments can follow. Moreover, as part of the leadership team, they are also responsible for looking after the financial aspects of the organization; therefore, applying AI to revenue and customer engagement opportunities will help them explore the use of the technology for various revenue-enhancement and client-experience initiatives while tracking their own progress.

AI executives should also focus on employing multiple options for acquiring AI and developing innovative applications in an effort to accelerate the adoption of AI initiatives via access to a wider pool of talent and technology solutions.


All Organizations Developing AI Must Be Regulated, Warns Elon Musk – Analytics Insight

Through the development of artificial intelligence (AI) in the past few years, Tesla's Elon Musk has been expressing serious concerns and warnings regarding its negative effects. Tesla and SpaceX CEO Elon Musk is once again sounding a warning note regarding the development of AI. He tweeted recently that all organizations developing advanced AI should be regulated, including Tesla.

Musk was responding to a new MIT Technology Review profile of OpenAI, an organization founded in 2015 by Musk, along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba, and John Schulman. At first, OpenAI was formed as a non-profit backed by US$1 billion in funding from its pooled initial investors, with the aim of pursuing open research into advanced AI with a focus on ensuring it was pursued in the interest of benefiting society, rather than leaving its development in the hands of a small and narrowly-interested few (i.e., for-profit technology companies).

He also responded to a tweet posted back in July about how OpenAI originally billed itself as a nonprofit, but the company is now seeking to license its closed technology. In response, Musk, who was one of the company's founders but is no longer a part of the company, said that there were reasonable concerns.

Musk exited the company later, reportedly due to disagreements about the company's direction.

Back in April, Musk said during an interview at the World Artificial Intelligence Conference in Shanghai that computers would eventually surpass us in every single way.

"The first thing we should assume is we are very dumb," Musk said. "We can definitely make things smarter than ourselves."

Musk pointed to computer programs that allow computers to beat chess champions as well as technology from Neuralink, his own brain interface company that may eventually be able to help people boost their cognitive abilities in some spheres, as examples.

AI is being criticized by others besides Musk, however. Digital rights groups and the American Civil Liberties Union (ACLU) have called for either a complete ban or more transparency in AI technology such as facial recognition software. Even Google's CEO, Sundar Pichai, has warned of the dangers of AI, calling for more regulation of the technology.

The Tesla and SpaceX CEO has been outspoken about the potential dangers of AI before. During a talk sponsored by South by Southwest in Austin in 2018, Musk talked about the dangers of artificial intelligence.

Moreover, he tweeted in 2014 that it could be more dangerous than nukes, and told an audience at an MIT Aeronautics and Astronautics symposium that year that AI was our biggest existential threat and that humanity needs to be extremely careful. He said, "With artificial intelligence, we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like, yeah, he's sure he can control the demon. Didn't work out."

However, not all his Big Tech contemporaries agree. Facebook's chief AI scientist Yann LeCun described his call for prompt AI regulation as "nuts", while Mark Zuckerberg said his comments on the risks of the tech were "pretty irresponsible". Musk responded by saying the Facebook founder's understanding of the subject is limited.



How A Recent Workshop Of NITI Aayog Gave Boost To India’s AI Ambitions – Analytics India Magazine

NITI Aayog's AI workshop, Artificial Intelligence: The India Imperative, took place recently in New Delhi.

The AI workshop, The India Imperative, took place on 19th December at NITI Aayog Bhawan, where all the relevant stakeholders from across the country were present. Participants ranged from state ministers to representatives from the IT industry and professors from the IITs.

The event highlighted the continuing work of India's leading think tank in the AI field. NITI Aayog has also been publishing approach papers to create execution plans and showcase essential suggestions along with stakeholders.

The CEO of NITI Aayog started the workshop by outlining India's AI aspirations and the importance of AI for all sectors. He also recommended the book AI Superpowers by Kai-Fu Lee, and added that India is expected to reach $15 trillion, which will be more than the US and China together.

In his keynote address, CEO Amitabh Kant kickstarted the deliberations by emphasising the importance of AI for All in realising India's artificial intelligence aspirations. The workshop focused on the five sectors that would benefit the most from AI: healthcare, agriculture, education, infrastructure, and transportation.

Arnab Kumar, the Programme Director, gave a presentation on AI for All and discussed four fundamental themes:

1. Data Rich to Data Intelligent

2. Research & Development

3. AI-specific Computing

4. Large-scale AI adoption

The inaugural session was followed by breakout sessions on the following topics: structured data infrastructure for AI, the research ecosystem for AI, moonshots for India, and adoption, with a focus on healthcare, education, and agriculture.

In the breakout sessions at the workshop, participants discussed a scalable approach to building solutions for a billion citizens by leveraging technologies like AI/ML.

In 2018-2019, the government mandated NITI Aayog to create the National Programme on AI, with the aim of guiding research and development in new AI innovations for India. NITI Aayog came out with the National Strategy for Artificial Intelligence (NSAI) discussion paper in June 2018 to highlight the Indian government's importance and role in boosting AI.

NITI Aayog has taken a three-part approach here: undertaking exploratory proof-of-concept AI projects in different areas of the country, creating a national strategy for a vibrant AI ecosystem in India, and collaborating with experts and stakeholders in the field. The recent workshop was a part of the think tank's engagement with stakeholders, including multiple startups.

One such startup was Silversparro, an AI-powered video analytics firm invited by NITI Aayog for a day-long session on realising India's AI aspirations. The startup presented its views on how AI can help India leapfrog in sectors like manufacturing and heavy industry, and give a boost to SMEs.

"We are heartened by NITI Aayog's focus on making India an AI superpower. We are also proud to be contributing directly by leveraging AI for making Indian manufacturing more productive with our latest offering, Sparrosense AI Supervisor," said Abhinav Kumar Gupta, Founder & CEO at Silversparro.

Several speakers at the workshop, including representatives from state governments such as Telangana, spoke about leadership and vision. In fact, Telangana presented its Year of AI at the workshop.


India has rich publicly available data, and across government departments various processes have been digitised for reporting and analytics, feeding into information systems and visualisation dashboards. According to NITI Aayog, this data is being utilised to track and visualise processes and make iterative enhancements.

The National Data and Analytics Platform (NDAP) is an initiative aimed at aiding India's progress by promoting data-driven discourse and decision-making. NDAP also aims to standardise data across multiple government sources, provide flexible analytics, and make data easily accessible in formats conducive to research, innovation, policy-making, and public consumption. As part of it, multiple datasets have been presented using a standardised schema, with common geographical and temporal identifiers.

However, the data landscape can be further improved by making all public government data smoothly accessible to all stakeholders in a user-friendly manner. Further, data across different government assets, such as the websites of ministries and departments of the central and state governments, should be interlinked to enable analytics and insights.




Here’s How CEO’s Can Harness the Full Potential of AI – Analytics Insight

Artificial intelligence (AI) stands apart as a transformational innovation of our digital age, and its practical application throughout the economy is developing apace.

Artificial intelligence, automation, and complementary technologies already play a critical role in how organizations work. According to a survey, 54% of executives state that AI solutions have increased profitability in their organizations, and that number is sure to grow in the coming years. But contrary to many press reports, increased efficiency does not necessarily bring about lost jobs. As business activities become smarter, with more AI incorporated in them, these tools will be used less to replace people and more to augment them. Innovation will help build human capabilities across a range of jobs, functions, and business units.

Numerous CEOs feel they have to bring AI into their company. There's a fear factor: if you're not on the AI bandwagon, you will lose out to competitors that will be eating your market, since they're using these technologies to make decisions faster and better than you.

They may ask the chief information officer, "What are we doing in AI?" The CIO will then hire, or try to hire, data scientists, whose work serves as a sort of proxy for AI. But data scientists have a particular kind of skill: they understand how to use statistics and machine learning to discover patterns in data. They are not necessarily good at building production-grade systems that can make decisions or adapt themselves.

The CEO needs to define a vision for the ways in which automation and AI will drive the organization's business strategy. A key issue here is the scope and ambition of that vision: how extensively and how fast the company should implement these innovations. Will it be an AI pioneer or just a fast follower in its industry? Similarly, CEOs have to identify specific business issues or challenges and figure out where AI can help. The hype surrounding some emerging uses of AI can be so overpowering that organizations are tempted to chase them, launching pilot after pilot without a clear methodology for scaling up successes or tying initiatives to broader strategic goals.

The greatest friction may come from the high costs associated with employing people. Or the organization may have frictions related to customer experience. Or, if it has a lot of analysts reading lots of reports and then trying to integrate those reports into data, it can get machine learning to do that better.

A key part of ensuring you have the right processes and teams in place is thinking about data in the right ways. You should begin by identifying where you want to create value, then look at what data assets you already have and which ones you need to make that happen. Without the capacity to extract information from different systems, or to ensure that the right people have access to the right information when they need it, AI cannot possibly deliver the targeted benefits.

CEOs likewise need a very clear understanding of the competitive landscape. Most organizations don't just have direct competition; they also have indirect competition from the likes of Google, Facebook, and Alibaba. Many of those large organizations can enter practically any market and shake it up. So organizations should be looking at indirect competitors and evaluating what those contenders could do, given all the data they are already sitting on, because once they figure out how to mobilize that data, they can tear those markets apart.

Further, CEOs also need to harness employees' intrinsic motivation to learn. Just as you develop your own growth mindset, you ought to encourage employees at all levels to do likewise, while communicating clearly throughout the company how integrating certain technologies may affect people's jobs, so they see how they can contribute to the organization's success as well as their own.


Elon Musk says all advanced AI development should be regulated, including at Tesla – TechCrunch

Tesla and SpaceX CEO Elon Musk is once again sounding a warning note regarding the development of artificial intelligence. The executive and founder tweeted on Monday evening that all org[anizations] developing advanced AI should be regulated, including Tesla.

Musk was responding to a new MIT Technology Review profile of OpenAI, an organization founded in 2015 by Musk, along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba and John Schulman. At first, OpenAI was formed as a non-profit backed by $1 billion in funding from its pooled initial investors, with the aim of pursuing open research into advanced AI with a focus on ensuring it was pursued in the interest of benefiting society, rather than leaving its development in the hands of a small and narrowly-interested few (i.e., for-profit technology companies).

At the time of its founding in 2015, Musk posited that the group essentially arrived at the idea for OpenAI as an alternative to sit[ting] on the sidelines or encourag[ing] regulatory oversight. Musk also said in 2017 that he believed that regulation should be put in place to govern the development of AI, preceded first by the formation of some kind of oversight agency that would study and gain insight into the industry before proposing any rules.

In the intervening years, much has changed, including OpenAI. The organization officially formed a for-profit arm owned by a non-profit parent corporation in 2019, and it accepted $1 billion in investment from Microsoft along with the formation of a wide-ranging partnership, seemingly in contravention of its founding principles.

Musk's comments this week in response to the MIT profile indicate that he's quite distant from the organization he helped co-found, both ideologically and in a more practical, functional sense. The SpaceX founder also noted that he must agree that concerns about OpenAI's mission expressed last year at the time of its Microsoft announcement are reasonable, and he said that OpenAI should be more open. Musk also noted that he has no control and only very limited insight into OpenAI, and that his confidence in Dario Amodei, OpenAI's research director, is not high when it comes to ensuring safe development of AI.

While it might indeed be surprising to see Musk include Tesla in a general call for regulation of the development of advanced AI, it is in keeping with his general stance on the development of artificial intelligence. Musk has repeatedly warned of the risks associated with creating AI that is more independent and advanced, even going so far as to call it a "fundamental risk to the existence of human civilization."

He also clarified on Monday that he believes advanced AI development should be regulated both by individual national governments as well as by international governing bodies, like the U.N., in response to a clarifying question from a follower. Time is clearly not doing anything to blunt Musk's beliefs around the potential threat of AI: perhaps this will encourage him to ramp up his efforts with Neuralink to give humans a way to even the playing field.


Choosing Between Rule-Based Bots And AI Bots – Forbes


Until a decade ago, the only option people had to reach out to a company was to call or email its customer service team. Now, companies offer a chat team to provide better round-the-clock customer service. According to a Facebook-commissioned study by Nielsen, 56% of people would prefer to message rather than call customer service, and that's where bots come into play.

Bots are revolutionizing the way companies interact with their customers. A decade ago, bots were considered a passing tech fad. However, that debate has been put to rest now that major companies like Amazon, Microsoft, Facebook, and others have started deploying bots in almost every area of their business. The new debate brewing in the bot community is about the choice between rule-based bots and AI bots. Which one to choose? Which one is better? These are the questions on the minds of business leaders intending to utilize bots in their organizations. Many factors contribute to the efficiency of bots for different applications, and understanding these factors can help businesses make informed decisions when choosing between rule-based and AI bots.

Building and deploying bots is now on most companies' to-do lists, if they're not already deployed. Nevertheless, most are confused about whether they should go with rule-based bots or AI bots. Let's evaluate the pros and cons of each.

Rule-based bots can answer questions based on a predefined set of rules that are embedded into them. The set of rules can vary greatly in complexity. Building such rule-based bots is much simpler than building AI bots, and rule-based bots are generally faster to train: they are built on a conditional if/then basis and take action based on the outcome of those conditional statements. Easy training of rule-based bots simultaneously reduces implementation cost. Rule-based bots are also highly accountable and secure. These bots cannot learn on their own and will provide only the answers that the companies want them to provide; since rule-based bots cannot self-learn, this ensures that they deliver consistent customer service. Rule-based bots can professionally hand over the conversation to a human agent if the customer asks something that is absent from the database. The practice of handing over the conversation to a human agent ensures that no unnecessary information is conveyed to the customer.
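
As a minimal sketch of the conditional if/then pattern described above (the intents, keywords, and responses here are invented for illustration; a real deployment would use a much richer rule set), a rule-based bot might look like this:

```python
# Minimal rule-based bot: each rule maps trigger keywords to a canned response.
# Anything that matches no rule is handed over to a human agent.
RULES = {
    ("order", "status"): "Your order is being processed and will ship within 2 days.",
    ("refund", "return"): "You can request a refund from the Orders page within 30 days.",
    ("hours", "open"): "Our support team is available 24/7.",
}

def respond(message: str) -> str:
    text = message.lower()
    for keywords, reply in RULES.items():
        if any(word in text for word in keywords):  # simple if/then condition
            return reply
    return "Let me connect you to a human agent."    # fallback: human handover

if __name__ == "__main__":
    print(respond("What is the status of my order?"))
    print(respond("I want to book a flight to Paris"))  # outside the rule base -> handover
```

Anything that falls outside the embedded rules drops through to the human handover, which is exactly the behaviour described above.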

A rule-based approach also enables faster implementation of bots. Unlike AI bots, rule-based bots do not need to wait for years to gather data that can be analyzed by algorithms to understand customer problems and provide solutions. Rule-based bots can be easily implemented by embedding known scenarios and their outputs into them. These bots can then be embedded with more data according to new conversational patterns from new customer interactions. Although rule-based bots have many advantages, their limitations cannot be overlooked.

The problem with predefined rule-based bots is that they need to be embedded with rules for performing every task, from small to complex. If anything outside the database comes their way, the rule-based bots hand over the conversation to humans. This means that rule-based bots cannot operate on a standalone basis; they need human intervention at some point.

Another limitation of rule-based bots is personalized communication. Chatbots may have to serve different people speaking different languages. In addition, not only the language but also the way of communicating varies from person to person. For instance, to book a flight to Paris one person may say, "I want to book a flight to Paris," and another may say, "I need a ticket to Paris." Both statements mean the same thing, yet if the rule-based bot is unable to understand that, it will pass the conversation to a human, which may frustrate the customer.

Rule-based bots can be embedded with information from conversational patterns as time passes. Nevertheless, it becomes a challenge for developers to embed every possible scenario into rule-based bots. Although rule-based bots can be quickly implemented, they are hard to maintain after a certain length of time.

AI bots are self-learning bots that are programmed with natural language processing (NLP) and machine learning. It takes a long time to train and build an AI bot initially. However, AI bots can save a lot of time and money in the long run. AI bots work well for companies that have a lot of data, as the bots can self-learn from that data. The self-learning ability of AI bots saves money because, unlike rule-based bots, they do not need to be updated at regular intervals. AI bots can be programmed to understand different languages and can address the personalized communication challenges faced by rule-based bots.
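
By contrast, an AI bot typically learns an intent classifier from labelled example utterances instead of relying on hand-written rules. The following is a minimal sketch using scikit-learn (the training phrases and intent labels are invented; production systems train on large conversation logs and add entity extraction and dialogue management on top):

```python
# Minimal ML-based intent classifier: learns from labelled example utterances,
# so paraphrases like "I need a ticket to Paris" can map to the same intent.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "I want to book a flight to Paris",
    "I need a ticket to Paris",
    "Book me a plane ticket",
    "What is the status of my order",
    "Where is my package",
    "Has my order shipped yet",
]
intents = ["book_flight", "book_flight", "book_flight",
           "order_status", "order_status", "order_status"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_phrases, intents)

print(model.predict(["I'd like to fly to Paris next week"]))  # likely 'book_flight'
print(model.predict(["Can you track my order for me"]))       # likely 'order_status'
```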

With the use of deep learning, AI bots can learn to read the emotions of a customer. These bots can interact with the customers based on their mood. For instance, a China-based startup, Emotibot, is helping to develop chatbots that can detect the current mood of the customer and respond accordingly. With constant learning, AI bots can help provide personalized customer service to enhance customer engagement. Since AI bots can handle customer queries from end-to-end without human interaction required, they can be deployed for round-the-clock customer service.

AI can make chatbots smart, but it cannot make them understand the context of human interactions. For example, humans change their way of communicating depending on whom they are communicating with. If they are communicating with small children, they use simpler words and shorter sentences, and when employees communicate with clients, they use a more formal tone. Since bots cannot understand this human context, they communicate with everyone in the same way, irrespective of age or gender. The self-learning ability of AI bots might seem helpful to businesses, but it can sometimes cause trouble. AI bots do not possess sound decision-making judgement and can thus learn things they are not supposed to. For instance, a chatbot named Tay was manipulated through social engineering on Twitter and started posting offensive tweets, including undesirable phrases like "Hitler was right" in a canned "repeat after me" series of tweets.

These advantages and disadvantages can help companies decide whether to use rule-based bots or AI bots, but only up to a certain extent. There are many other factors that enterprises should consider before implementing chatbots in their companies. Whether the bots will serve B2B or B2C, in what areas the bots will be deployed, and how the bots will be maintained are some factors to be considered. Rule-based bots and AI bots both have their own benefits and disadvantages, and both can be useful in their own ways. Enhanced customer service is king when it comes to the growth of a business. Understanding how different bots will improve their customer service ultimately helps them choose the best-suited bot for their business.


An Indian politician used AI to translate his speech into other languages – The Verge

As social media platforms move to crack down on deepfakes and misinformation in the US elections, an Indian politician has used artificial intelligence techniques to make it look like he said things he didn't say, Vice reports. In one version of a campaign video, Manoj Tiwari speaks in English; in the fabricated version, he speaks in Haryanvi, a dialect of Hindi.

Political communications firm The Ideaz Factory told Vice it was working with Tiwari's Bharatiya Janata Party to create positive campaigns using the same technology used in deepfake videos, and dubbed in an actor's voice to read the script in Haryanvi.

"We used a lip-sync deepfake algorithm and trained it with speeches of Manoj Tiwari to translate audio sounds into basic mouth shapes," Sagar Vishnoi of The Ideaz Factory said, adding that it allowed the candidate to target voters he might not otherwise have been able to reach as directly. (While India has two official languages, Hindi and English, some Indian states have their own languages, and there are hundreds of dialects.)

The faked video reached about 15 million people in India, according to Vice.

Even though deepfake videos are more often used to create nonconsensual pornography, the now-infamous 2018 deepfake video of President Obama raised concerns about how false or misleading videos could be used in the political arena. Last May, faked videos were posted on social media that appeared to show House Speaker Nancy Pelosi slurring her words.

In October, however, California passed a bill making it illegal to share deepfakes of politicians within 60 days of an election. And in January, the US House Ethics Committee informed members that posting deepfakes on social media could be considered a violation of House rules.

Social media companies have announced plans to try to combat the spread of deepfakes on their platforms. Twitter's deceptive media ban takes effect in March. Facebook banned some deepfakes last month, and Reddit updated its policy to ban all impersonation on the platform, which includes deepfakes.

How and when intentional use of altered videos might affect the 2020 US elections is anyone's guess, but as one expert told Vice, even though the Tiwari video was meant to be part of a positive effort, the genie is out of the bottle now.


The messy, secretive reality behind OpenAIs bid to save the world – MIT Technology Review

Every year, OpenAI's employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It's mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.

In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet's DeepMind. It is also a darling in Silicon Valley, counting Elon Musk and legendary investor Sam Altman among its founders.

Above all, it is lionized for its mission. Its goal is to be the first to create AGI: a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.


The implication is that AGI could easily run amok if the technology's development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.

OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to "build value for everyone rather than shareholders." Its charter, a document so sacred that employees' pay is tied to how well they adhere to it, further declares that OpenAI's "primary fiduciary duty is to humanity." Attaining AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing with it and collaborate instead. This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.


But three days at OpenAI's office, and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field, suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.

Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the renowned English mathematician and computer scientist, began a paper with the now-famous provocation "Can machines think?" Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.

"It is one of the most fundamental questions of all intellectual history, right?" says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. "It's like, do we understand the origin of the universe? Do we understand matter?"

The trouble is, AGI has always remained vague. No one can really describe what it might look like or the minimum of what it should do. It's not obvious, for instance, that there is only one kind of general intelligence; human intelligence could just be a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized view, a machine intelligence unhindered by the need for sleep or the inefficiency of human communication could help solve complex challenges like climate change, poverty, and hunger.

But the resounding consensus within the field is that such advanced capabilities would take decades, even centuries, if indeed it's possible to develop them at all. Many also fear that pursuing this goal overzealously could backfire. In the 1970s, and again in the late '80s and early '90s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. "The field felt like a backwater," says Peter Eckersley, until recently director of research at the industry group Partnership on AI, of which OpenAI is a member.


Against this backdrop, OpenAI entered the world with a splash on December 11, 2015. It wasn't the first to openly declare it was pursuing AGI; DeepMind had done so five years earlier and had been acquired by Google in 2014. But OpenAI seemed different. For one thing, the sticker price was shocking: the venture would start with $1 billion from private investors, including Musk, Altman, and PayPal cofounder Peter Thiel.

The star-studded investor list stirred up a media frenzy, as did the impressive list of initial employees: Greg Brockman, who had run technology for the payments company Stripe, would be chief technology officer; Ilya Sutskever, who had studied under AI pioneer Geoffrey Hinton, would be research director; and seven researchers, freshly graduated from top universities or plucked from other companies, would compose the core technical team. (Last February, Musk announced that he was parting ways with the company over disagreements about its direction. A month later, Altman stepped down as president of startup accelerator Y Combinator to become OpenAI's CEO.)

But more than anything, OpenAI's nonprofit status made a statement. "It'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest," the announcement said. "Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world." Though it never made the criticism explicit, the implication was clear: other labs, like DeepMind, could not serve humanity because they were constrained by commercial interests. While they were closed, OpenAI would be open.

In a research landscape that had become increasingly privatized and focused on short-term financial gains, OpenAI was offering a new way to fund progress on the biggest problems. "It was a beacon of hope," says Chip Huyen, a machine learning expert who has closely followed the lab's journey.

At the intersection of 18th and Folsom Streets in San Francisco, OpenAI's office looks like a mysterious warehouse. The historic building has drab gray paneling and tinted windows, with most of the shades pulled down. The letters "PIONEER BUILDING", the remnants of its bygone owner, the Pioneer Truck Factory, wrap around the corner in faded red paint.

Inside, the space is light and airy. The first floor has a few common spaces and two conference rooms. One, a healthy size for larger meetings, is called A Space Odyssey; the other, more of a glorified phone booth, is called Infinite Jest. This is the space I'm restricted to during my visit. I'm forbidden to visit the second and third floors, which house everyone's desks, several robots, and pretty much everything interesting. When it's time for their interviews, people come down to me. An employee trains a watchful eye on me in between meetings.


On the beautiful blue-sky day that I arrive to meet Brockman, he looks nervous and guarded. "We've never given someone so much access before," he says with a tentative smile. He wears casual clothes and, like many at OpenAI, sports a shapeless haircut that seems to reflect an efficient, no-frills mentality.

Brockman, 31, grew up on a hobby farm in North Dakota and had what he describes as a "focused, quiet childhood." He milked cows, gathered eggs, and fell in love with math while studying on his own. In 2008, he entered Harvard intending to double-major in math and computer science, but he quickly grew restless to enter the real world. He dropped out a year later, entered MIT instead, and then dropped out again within a matter of months. The second time, his decision was final. Once he moved to San Francisco, he never looked back.

Brockman takes me to lunch to remove me from the office during an all-company meeting. In the café across the street, he speaks about OpenAI with intensity, sincerity, and wonder, often drawing parallels between its mission and landmark achievements of science history. It's easy to appreciate his charisma as a leader. Recounting memorable passages from the books he's read, he zeroes in on the Valley's favorite narrative, America's race to the moon. ("One story I really love is the story of the janitor," he says, referencing a famous yet probably apocryphal tale. "Kennedy goes up to him and asks him, 'What are you doing?' and he says, 'Oh, I'm helping put a man on the moon!'") There's also the transcontinental railroad ("It was actually the last megaproject done entirely by hand ... a project of immense scale that was totally risky") and Thomas Edison's incandescent lightbulb ("A committee of distinguished experts said 'It's never gonna work,' and one year later he shipped").


Brockman is aware of the gamble OpenAI has taken on, and aware that it evokes cynicism and scrutiny. But with each reference, his message is clear: people can be skeptical all they want. It's the price of daring greatly.

Those who joined OpenAI in the early days remember the energy, excitement, and sense of purpose. The team was small, formed through a tight web of connections, and management stayed loose and informal. Everyone believed in a flat structure where ideas and debate would be welcome from anyone.

Musk played no small part in building a collective mythology. "The way he presented it to me was 'Look, I get it. AGI might be far away, but what if it's not?'" recalls Pieter Abbeel, a professor at UC Berkeley who worked there, along with several of his students, in the first two years. "'What if it's even just a 1% or 0.1% chance that it's happening in the next five to 10 years? Shouldn't we think about it very carefully?' That resonated with me," he says.

But the informality also led to some vagueness of direction. In May 2016, Altman and Brockman received a visit from Dario Amodei, then a Google researcher, who told them no one understood what they were doing. In an account published in the New Yorker, it wasn't clear the team itself knew either. "Our goal right now is to do the best thing there is to do," Brockman said. "It's a little vague."

Nonetheless, Amodei joined the team a few months later. His sister, Daniela Amodei, had previously worked with Brockman, and he already knew many of OpenAI's members. After two years, at Brockman's request, Daniela joined too. "Imagine, we started with nothing," Brockman says. "We just had this ideal that we wanted AGI to go well."


By March of 2017, 15 months in, the leadership realized it was time for more focus. So Brockman and a few other core members began drafting an internal document to lay out a path to AGI. But the process quickly revealed a fatal flaw. As the team studied trends within the field, they realized staying a nonprofit was financially untenable. The computational resources that others in the field were using to achieve breakthrough results were doubling every 3.4 months. It became clear that in order to stay relevant, Brockman says, they would need enough capital to match or exceed this exponential ramp-up. That required a new organizational model that could rapidly amass money, while somehow also staying true to the mission.

Unbeknownst to the public (and most employees), it was with this in mind that OpenAI released its charter in April of 2018. The document re-articulated the lab's core values but subtly shifted the language to reflect the new reality. Alongside its commitment to "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power," it also stressed the need for resources. "We anticipate needing to marshal substantial resources to fulfill our mission," it said, "but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

"We spent a long time internally iterating with employees to get the whole company bought into a set of principles," Brockman says. "Things that had to stay invariant even if we changed our structure."


That structure change happened in March 2019. OpenAI shed its purely nonprofit status by setting up a "capped-profit" arm: a for-profit with a 100-fold limit on investors' returns, albeit overseen by a board that's part of a nonprofit entity. Shortly after, it announced Microsoft's billion-dollar investment (though it didn't reveal that this was split between cash and credits to Azure, Microsoft's cloud computing platform).

Predictably, the move set off a wave of accusations that OpenAI was going back on its mission. In a post on Hacker News soon after the announcement, a user asked how a 100-fold limit would be limiting at all: "Early investors in Google have received a roughly 20x return on their capital," they wrote. "Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google ... but you don't want to unduly concentrate power? How will this work? What exactly is power, if not the concentration of resources?"

The move also rattled many employees, who voiced similar concerns. To assuage internal unrest, the leadership wrote up an FAQ as part of a series of highly protected transition docs. "Can I trust OpenAI?" one question asked. "Yes," began the answer, followed by a paragraph of explanation.

The charter is the backbone of OpenAI. It serves as the springboard for all the lab's strategies and actions. Throughout our lunch, Brockman recites it like scripture, an explanation for every aspect of the company's existence. ("By the way," he clarifies halfway through one recitation, "I guess I know all these lines because I spent a lot of time really poring over them to get them exactly right. It's not like I was reading this before the meeting.")

How will you ensure that humans continue to live meaningful lives as you develop more advanced capabilities? "As we wrote, we think its impact should be to give everyone economic freedom, to let them find new opportunities that aren't imaginable today." How will you structure yourself to evenly distribute AGI? "I think a utility is the best analogy for the vision that we have. But again, it's all subject to the charter." How do you compete to reach AGI first without compromising safety? "I think there is absolutely this important balancing act, and our best shot at that is what's in the charter."


For Brockman, rigid adherence to the document is what makes OpenAI's structure work. Internal alignment is treated as paramount: all full-time employees are required to work out of the same office, with few exceptions. For the policy team, especially Jack Clark, the director, this means a life divided between San Francisco and Washington, DC. Clark doesn't mind; in fact, he agrees with the mentality. It's the in-between moments, like lunchtime with colleagues, he says, that help keep everyone on the same page.

In many ways, this approach is clearly working: the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of effective altruism. They crack jokes using machine-learning terminology to describe their lives: "What is your life a function of?" "What are you optimizing for?" "Everything is basically a minmax function." To be fair, other AI researchers also love doing this, but people familiar with OpenAI agree: more than others in the field, its employees treat AI research not as a job but as an identity. (In November, Brockman married his girlfriend of one year, Anna, in the office against a backdrop of flowers arranged in an OpenAI logo. Sutskever acted as the officiant; a robot hand was the ring bearer.)

But at some point in the middle of last year, the charter became more than just lunchtime conversation fodder. Soon after switching to a capped-profit, the leadership instituted a new pay structure based in part on each employee's absorption of the mission. Alongside columns like "engineering expertise" and "research direction" in a spreadsheet tab titled "Unified Technical Ladder," the last column outlines the culture-related expectations for every level. Level 3: "You understand and internalize the OpenAI charter." Level 5: "You ensure all projects you and your team-mates work on are consistent with the charter." Level 7: "You are responsible for upholding and improving the charter, and holding others in the organization accountable for doing the same."

The first time most people ever heard of OpenAI was on February 14, 2019. That day, the lab announced impressive new research: a model that could generate convincing essays and articles at the push of a button. Feed it a sentence from The Lord of the Rings or the start of a (fake) news story about Miley Cyrus shoplifting, and it would spit out paragraph after paragraph of text in the same vein.

But there was also a catch: the model, called GPT-2, was too dangerous to release, the researchers said. If such powerful technology fell into the wrong hands, it could easily be weaponized to produce disinformation at immense scale.

The backlash among scientists was immediate. OpenAI was pulling a publicity stunt, some said. GPT-2 was not nearly advanced enough to be a threat. And if it was, why announce its existence and then preclude public scrutiny? "It seemed like OpenAI was trying to capitalize off of panic around AI," says Britt Paris, an assistant professor at Rutgers University who studies AI-generated disinformation.


By May, OpenAI had revised its stance and announced plans for a staged release. Over the following months, it successively dribbled out more and more powerful versions of GPT-2. In the interim, it also engaged with several research organizations to scrutinize the algorithm's potential for abuse and develop countermeasures. Finally, it released the full code in November, having found, it said, "no strong evidence of misuse so far."

Amid continued accusations of publicity-seeking, OpenAI insisted that GPT-2 hadn't been a stunt. It was, rather, a carefully thought-out experiment, agreed on after a series of internal discussions and debates. The consensus was that even if it had been slight overkill this time, the action would set a precedent for handling more dangerous research. Besides, the charter had predicted that safety and security concerns would gradually oblige the lab to "reduce our traditional publishing in the future."

This was also the argument that the policy team carefully laid out in its six-month follow-up blog post, which they discussed as I sat in on a meeting. "I think that is definitely part of the success-story framing," said Miles Brundage, a policy research scientist, highlighting something in a Google doc. "The lead of this section should be: We did an ambitious thing, now some people are replicating it, and here are some reasons why it was beneficial."

But OpenAI's media campaign with GPT-2 also followed a well-established pattern that has made the broader AI community leery. Over the years, the lab's big, splashy research announcements have been repeatedly accused of fueling the AI hype cycle. More than once, critics have also accused the lab of talking up its results to the point of mischaracterization. For these reasons, many in the field have tended to keep OpenAI at arm's length.


This hasn't stopped the lab from continuing to pour resources into its public image. As well as research papers, it publishes its results in highly produced company blog posts for which it does everything in-house, from writing to multimedia production to design of the cover images for each release. At one point, it also began developing a documentary on one of its projects to rival a 90-minute movie about DeepMind's AlphaGo. It eventually spun the effort out into an independent production, which Brockman and his wife, Anna, are now partially financing. (I also agreed to appear in the documentary to provide technical explanation and context to OpenAI's achievement. I was not compensated for this.)

And as the blowback has increased, so have internal discussions to address it. Employees have grown frustrated at the constant outside criticism, and the leadership worries it will undermine the lab's influence and ability to hire the best talent. An internal document highlights this problem and an outreach strategy for tackling it: "In order to have government-level policy influence, we need to be viewed as the most trusted source on ML [machine learning] research and AGI," says a line under the "Policy" section. "Widespread support and backing from the research community is not only necessary to gain such a reputation, but will amplify our message." Another, under "Strategy," reads, "Explicitly treat the ML community as a comms stakeholder. Change our tone and external messaging such that we only antagonize them when we intentionally choose to."

There was another reason GPT-2 had triggered such an acute backlash. People felt that OpenAI was once again walking back its earlier promises of openness and transparency. With news of the for-profit transition a month later, the withheld research made people even more suspicious. Could it be that the technology had been kept under wraps in preparation for licensing it in the future?

But little did people know this wasn't the only time OpenAI had chosen to hide its research. In fact, it had kept another effort entirely secret.

There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it's just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm; deep learning, the current dominant technique in AI, won't be enough.

Most researchers fall somewhere between these extremes, but OpenAI has consistently sat almost exclusively on the scale-and-assemble end of the spectrum. Most of its breakthroughs have been the product of sinking dramatically greater computational resources into technical innovations developed in other labs.

Brockman and Sutskever deny that this is their sole strategy, but the lab's tightly guarded research suggests otherwise. A team called Foresight runs experiments to test how far they can push AI capabilities forward by training existing algorithms with increasingly large amounts of data and computing power. For the leadership, the results of these experiments have confirmed its instincts that the lab's all-in, compute-driven strategy is the best approach.

For roughly six months, these results were hidden from the public because OpenAI sees this knowledge as its primary competitive advantage. Employees and interns were explicitly instructed not to reveal them, and those who left signed nondisclosure agreements. It was only in January that the team, without the usual fanfare, quietly posted a paper on one of the primary open-source databases for AI research. People who experienced the intense secrecy around the effort didn't know what to make of this change. Notably, another paper with similar results from different researchers had been posted a few months earlier.

In the beginning, this level of secrecy was never the intention, but it has since become habitual. Over time, the leadership has moved away from its original belief that openness is the best way to build beneficial AGI. Now the importance of keeping quiet is impressed on those who work with or at the lab. This includes never speaking to reporters without the express permission of the communications team. After my initial visits to the office, as I began contacting different employees, I received an email from the head of communications reminding me that all interview requests had to go through her. When I declined, saying that this would undermine the validity of what people told me, she instructed employees to keep her informed of my outreach. A Slack message from Clark, a former journalist, later commended people for keeping a tight lid as a reporter was sniffing around.

In a statement responding to this heightened secrecy, an OpenAI spokesperson referred back to a section of its charter. "We expect that safety and security concerns will reduce our traditional publishing in the future," the section states, "while increasing the importance of sharing safety, policy, and standards research." The spokesperson also added: "Additionally, each of our releases is run through an infohazard process to evaluate these trade-offs and we want to release our results slowly to understand potential risks and impacts before setting loose in the wild."

One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren't allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.

The man driving OpenAI's strategy is Dario Amodei, the ex-Googler who now serves as research director. When I meet him, he strikes me as a more anxious version of Brockman. He has a similar sincerity and sensitivity, but an air of unsettled nervous energy. He looks distant when he talks, his brows furrowed, a hand absentmindedly tugging his curls.

Amodei divides the lab's strategy into two parts. The first part, which dictates how it plans to reach advanced AI capabilities, he likens to an investor's portfolio of bets. Different teams at OpenAI are playing out different bets. The language team, for example, has its money on a theory postulating that AI can develop a significant understanding of the world through mere language learning. The robotics team, in contrast, is advancing an opposing theory that intelligence requires a physical embodiment to develop.

As in an investor's portfolio, not every bet has an equal weight. But for the purposes of scientific rigor, all should be tested before being discarded. Amodei points to GPT-2, with its remarkably realistic auto-generated texts, as an instance of why it's important to keep an open mind. "Pure language is a direction that the field and even some of us were somewhat skeptical of," he says. "But now it's like, 'Wow, this is really promising.'"

Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI's latest top-secret project has supposedly already begun.

The second part of the strategy, Amodei explains, focuses on how to make such ever-advancing AI systems safe. This includes making sure that they reflect human values, can explain the logic behind their decisions, and can learn without harming people in the process. Teams dedicated to each of these safety goals seek to develop methods that can be applied across projects as they mature. Techniques developed by the explainability team, for example, may be used to expose the logic behind GPT-2's sentence constructions or a robot's movements.

Amodei admits this part of the strategy is somewhat haphazard, built less on established theories in the field and more on gut feeling. "At some point we're going to build AGI, and by that time I want to feel good about these systems operating in the world," he says. "Anything where I don't currently feel good, I create and recruit a team to focus on that thing."

For all the publicity-chasing and secrecy, Amodei looks sincere when he says this. The possibility of failure seems to disturb him.

"We're in the awkward position of: we don't know what AGI looks like," he says. "We don't know when it's going to happen." Then, with careful self-awareness, he adds: "The mind of any given person is limited. The best thing I've found is hiring other safety researchers who often have visions which are different than the natural thing I might've thought of. I want that kind of variation and diversity because that's the only way that you catch everything."

The thing is, OpenAI actually has little variation and diversity, a fact hammered home on my third day at the office. During the one lunch I was granted to mingle with employees, I sat down at the most visibly diverse table by a large margin. Less than a minute later, I realized that the people eating there were not, in fact, OpenAI employees. Neuralink, Musk's startup working on computer-brain interfaces, shares the same building and dining room.

According to a lab spokesperson, out of the over 120 employees, 25% are female or nonbinary. "There are also two women on the executive team and the leadership team is 30% women," she said, though she didn't specify who was counted among these teams. (All four C-suite executives, including Brockman and Altman, are white men. Out of over 112 employees I identified on LinkedIn and other sources, the overwhelming number were white or Asian.)

In fairness, this lack of diversity is typical in AI. Last year a report from the New York-based research institute AI Now found that women accounted for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. "There is definitely still a lot of work to be done across academia and industry," OpenAI's spokesperson said. "Diversity and inclusion is something we take seriously and are continually working to improve by working with initiatives like WiML, Girl Geek, and our Scholars program."

Indeed, OpenAI has tried to broaden its talent pool. It began its remote Scholars program for underrepresented minorities in 2018. But only two of the first eight scholars became full-time employees, even though they reported positive experiences. The most common reason for declining to stay: the requirement to live in San Francisco. For Nadja Rhodes, a former scholar who is now the lead machine-learning engineer at a New York-based company, the city just had too little diversity.

But if diversity is a problem for the AI industry in general, it's something more existential for a company whose mission is to spread the technology evenly to everyone. The fact is that it lacks representation from the groups most at risk of being left out.

Nor is it at all clear just how OpenAI plans to distribute the benefits of AGI to all of humanity, as Brockman frequently says in citing its mission. The leadership speaks of this in vague terms and has done little to flesh out the specifics. (In January, the Future of Humanity Institute at Oxford University released a report in collaboration with the lab proposing to distribute benefits by distributing a percentage of profits. But the authors cited "significant unresolved issues" regarding the way in which it would be implemented.) "This is my biggest problem with OpenAI," says a former employee, who spoke on condition of anonymity.

"They are using sophisticated technical practices to try to answer social problems with AI," echoes Britt Paris of Rutgers. "It seems like they don't really have the capabilities to actually understand the social. They just understand that that's a sort of a lucrative place to be positioning themselves right now."

Brockman agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. "How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need," he says. "I don't think that that strategy is likely to succeed."

The first thing to figure out, he says, is what AGI will even look like. Only then will it be time to "make sure that we are understanding the ramifications."

Last summer, in the weeks after the switch to a capped-profit model and the $1 billion injection from Microsoft, the leadership assured employees that these updates wouldn't functionally change OpenAI's approach to research. Microsoft was well aligned with the lab's values, and any commercialization efforts would be far away; the pursuit of fundamental questions would still remain at the core of the work.

For a while, these assurances seemed to hold true, and projects continued as they were. Many employees didn't even know what promises, if any, had been made to Microsoft.

But in recent months, the pressure of commercialization has intensified, and the need to produce money-making research no longer feels like something in the distant future. In sharing his 2020 vision for the lab privately with employees, Altman's message is clear: OpenAI needs to make money in order to do research, not the other way around.

This is a hard but necessary trade-off, the leadership has said, one it had to make for lack of wealthy philanthropic donors. By contrast, Seattle-based AI2, a nonprofit that ambitiously advances fundamental AI research, receives its funds from a self-sustaining (at least for the foreseeable future) pool of money left behind by the late Paul Allen, a billionaire best known for cofounding Microsoft.

But the truth is that OpenAI faces this trade-off not only because it's not rich, but also because it made the strategic choice to try to reach AGI before anyone else. That pressure forces it to make decisions that seem to land farther and farther away from its original intention. It leans into hype in its rush to attract funding and talent, guards its research in the hopes of keeping the upper hand, and chases a computationally heavy strategy, not because it's seen as the only way to AGI, but because it seems like the fastest.

Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there's still time for it to change.

Near the end of my interview with Rhodes, the former remote scholar, I ask her the one thing about OpenAI that I shouldn't omit from this profile. "I guess in my opinion, there's problems," she begins hesitantly. "Some of them come from maybe the environment it faces; some of them come from the type of people that it tends to attract and other people that it leaves out."

"But to me, it feels like they are doing something a little bit right," she says. "I got a sense that the folks there are earnestly trying."

Update: We made some changes to this story after OpenAI asked us to clarify that when Greg Brockman said he didn't think it was possible to bake ethics in from the very beginning when developing AI, he intended it to mean that ethical questions couldn't be solved from the beginning, not that they couldn't be addressed from the beginning. Also, that after dropping out of Harvard he transferred straight to MIT rather than waiting a year. Also, that he was raised not on a farm, but "on a hobby farm." Brockman considers this distinction important.

In addition, we have clarified that while OpenAI did indeed "shed its nonprofit status," a board that is part of a nonprofit entity still oversees it, and that OpenAI publishes its research in the form of company blog posts as well as, not in lieu of, research papers. We've also corrected the date of publication of a paper by outside researchers and the affiliation of Peter Eckersley (former, not current, research director of Partnership on AI, which he recently left).

Continue reading here:

The messy, secretive reality behind OpenAIs bid to save the world - MIT Technology Review

EU’s new AI rules will focus on ethics and transparency – VentureBeat

The European Union is set to release new regulations for artificial intelligence that are expected to focus on transparency and oversight as the region seeks to differentiate its approach from those of the United States and China.

On Wednesday, EU technology chief Margrethe Vestager will unveil a wide-ranging plan designed to bolster the region's competitiveness. While transformative technologies such as AI have been labeled critical to economic survival, Europe is perceived as slipping behind the U.S., where development is being led by tech giants with deep pockets, and China, where the central government is leading the push.

Europe has in recent years sought to emphasize fairness and ethics when it comes to tech policy. Now it's taking that approach a step further by introducing rules about transparency around data-gathering for technologies like AI and facial recognition. These systems would require human oversight and audits, according to a widely leaked draft of the new rules.

In a press briefing in advance of Wednesday's announcement, Vestager noted that companies outside the EU that want to deploy their tech in Europe might need to take steps like retraining facial recognition features using European data sets. The rules will cover such use cases as autonomous vehicles and biometric IDs.

But the proposal features carrots as well as sticks. The EU will propose spending almost $22 billion annually to build new data ecosystems that can serve as the basis for AI development. The plan assumes Europe has a wealth of government and industrial data, and it wants to provide regulatory and financial incentives to pool that data, which would then be available to AI developers who agree to abide by EU regulations.

In an interview with Reuters over the weekend, Thierry Breton, the European commissioner for Internal Market and Services, said the EU wants to amass data gathered in such sectors as manufacturing, transportation, energy, and health care that can be leveraged to develop AI for the public good and to accelerate Europes own startups.

"Europe is the world's top industrial continent," Breton told Reuters. "The United States [has] lost much of [its] industrial know-how in the last phase of globalisation. They have to gradually rebuild it. China has added-value handicaps it is correcting."

Of course, these rules are spooking Silicon Valley companies. Regulations such as GDPR, even if they officially target Europe, tend to have global implications.

To that end, Facebook CEO Mark Zuckerberg visited Brussels today to meet with Vestager and discuss the proposed regulations. In a weekend opinion piece published by the Financial Times, however, Zuckerberg again called for greater regulation of AI and other technologies as a way to help build public trust.

"We need more oversight and accountability," Zuckerberg wrote. "People need to feel that global technology platforms answer to someone, so regulation should hold companies accountable when they make mistakes."

Following the introduction of the proposals on Wednesday, the public will have 12 weeks to comment. The European Commission will then officially propose legislation sometime later this year.

Excerpt from:

EU's new AI rules will focus on ethics and transparency - VentureBeat

As AI startups focus on time-to-market, ethical considerations should be the priority – SmartCompany.com.au

A girl making friends with a robot at Kuromon Market in Osaka. Source: Andy Kelly/Unsplash.

Artificial intelligence (AI) has clearly emerged as one of the most transformational technologies of our age, with AI already prevalent in our everyday lives. Among many fascinating uses, AI has helped explore the universe, tackle complex and chronic diseases, formulate new medicines, and alleviate poverty.

As AI becomes more widespread over the next decade, like many, I believe we will see more innovative and creative uses.

Indeed, 93% of respondents in ISACA's Next Decade of Tech: Envisioning the 2020s study believe the augmented workforce (or people, robots and AI working closely together) will reshape how some or most jobs are performed in the 2020s.

Social robots that assist patients with physical disabilities, manage elderly care and even educate our children are just some of the many uses being explored.

As AI continues to redefine humanity in various ways, ethical consideration is of paramount importance, which, as Australians, we should be addressing in government and business. ISACA's research highlights the double-edged nature of this budding technology.

Only 39% of respondents in Australia believe that enterprises will give ethical considerations around AI and machine learning sufficient attention in the next decade to prevent potentially serious unintended consequences in their deployments. Respondents specifically pinpointed malicious AI attacks involving critical infrastructure, social engineering and autonomous weapons as their primary fears.

These concerns are quite disturbing, although not alarming, given how long warnings about these risks have been sounded.

For instance, in February 2018, prominent researchers and academics published a report about the increasing possibilities that rogue states, criminals, terrorists and other malefactors could soon exploit AI capabilities to cause widespread harm.

And in 2017, the late physicist Stephen Hawking cautioned that the emergence of AI could be "the worst event in the history of our civilization" unless society finds a way to control its development.

To date, no industry standards exist to guide the secure development and maintenance of AI systems.

Further exacerbating this lack of standards is the fact that startup firms still dominate the AI market. An MIT report revealed that, other than a few large players such as IBM and Palantir Technologies, AI remains a market of 2,600 startups. The majority of these startups are primarily focused on rapid time to market, product functionality and a high return on investment. Embedding cyber resilience into their products is not a priority.

Malicious AI programs have surfaced much quicker than many pundits had anticipated. A case in point is the proliferation of deep fakes, ostensibly realistic audio or video files generated by deep learning algorithms or neural networks to perpetrate a range of malevolent acts, such as faking celebrity pornographic videos, revenge porn, fake news, financial fraud, and a wide range of other disinformation tactics.

Several factors underpinned the rise of deep fakes, but a few stand out.

First is the exponential increase of computing power combined with the availability of large image databases. Second, and probably the most vexing, is the absence of coherent efforts to institute global laws to curtail the development of malicious AI programs. Third, social media platforms, which are being exploited to disseminate deep fakes at scale, are struggling to keep up with the rapidly maturing and evasive threat.

Unsurprisingly, deep fake videos published online have doubled in the past nine months to almost 15,000 cases, according to DeepTrace, a Netherlands-based cyber security group.

It's clear that addressing this growing threat will prove complex and expensive, but the task is pressing.

The ACCC Digital Platforms Inquiry report highlighted the risk of consumers being exposed to serious incidents of disinformation. Emphasising the gravity of the risk is certainly a step in the right direction, but more remains to be done.

Currently, there is no consensus globally on whether the development of AI requires its own dedicated regulator or specific statutory regime.

Ironically, the role of the auditor and IT auditor is a function that AI is touted as being able to eliminate. This premise would make for a good Hollywood script: the very thing requiring ethical consideration and regulation becomes the regulator.

Government, enterprises and startups need to be mindful of the key risks that are inherent in AI adoption, conduct appropriate oversight, and develop principles and regulation that articulate the roles that can be partially or fully automated today to secure the future of humanity and business.

Until then, AI companies need to embed protocols and cyber security into their inventions to prevent malicious use.

Read the rest here:

As AI startups focus on time-to-market, ethical considerations should be the priority - SmartCompany.com.au

Future Goals in the AI Race: Explainable AI and Transfer Learning – Modern Diplomacy

Recent years have seen breakthroughs in neural network technology: computers can now beat any living person at the most complex game invented by humankind, as well as imitate human voices and faces (both real and non-existent) in a deceptively realistic manner. Is this a victory for artificial intelligence over human intelligence? And if not, what else do researchers and developers need to achieve to make the winners in the AI race the kings of the world?

Background

Over the last 60 years, artificial intelligence (AI) has been the subject of much discussion among researchers representing different approaches and schools of thought. One of the crucial reasons for this is that there is no unified definition of what constitutes AI, with differences persisting even now. This means that any objective assessment of the current state and prospects of AI, and its crucial areas of research in particular, will be intricately linked with the subjective philosophical views of researchers and the practical experience of developers.

In recent years, the term "general intelligence", meaning the ability to solve cognitive problems in general terms, adapting to the environment through learning, minimizing risks and optimizing the losses in achieving goals, has gained currency among researchers and developers. This led to the concept of artificial general intelligence (AGI), potentially vested not in a human, but in a cybernetic system of sufficient computational power. Many refer to this kind of intelligence as "strong AI", as opposed to "weak AI", which has become a mundane topic in recent years.

As applied AI technology has developed over the last 60 years, we can see how many practical applications (knowledge bases, expert systems, image recognition systems, prediction systems, tracking and control systems for various technological processes) are no longer viewed as examples of AI and have become part of ordinary technology. The bar for what constitutes "AI" rises accordingly, and today it is the hypothetical general intelligence, human-level intelligence or strong AI, that is assumed to be the real thing in most discussions. Technologies that are already being used are broken down into knowledge engineering, data science or specific areas of "narrow AI" that combine elements of different AI approaches with specialized humanities or mathematical disciplines, such as stock market or weather forecasting, speech and text recognition and language processing.

Different schools of research, each working within their own paradigms, also have differing interpretations of the spheres of application, goals, definitions and prospects of AI, and are often dismissive of alternative approaches. However, there has been a kind of synergistic convergence of various approaches in recent years, and researchers and developers are increasingly turning to hybrid models and methodologies, coming up with different combinations.

Since the dawn of AI, two approaches to AI have been the most popular. The first, symbolic, approach assumes that the roots of AI lie in philosophy, logic and mathematics and operate according to logical rules and sign and symbolic systems, interpreted in terms of the conscious human cognitive process. The second approach (biological in nature), referred to as connectionist, neural-network, neuromorphic, associative or subsymbolic, is based on reproducing the physical structures and processes of the human brain identified through neurophysiological research. The two approaches have evolved over 60 years, steadily becoming closer to each other. For instance, logical inference systems based on Boolean algebra have transformed into fuzzy logic or probabilistic programming, reproducing network architectures akin to neural networks that evolved within the neuromorphic approach. On the other hand, methods based on artificial neural networks are very far from reproducing the functions of actual biological neural networks and rely more on mathematical methods from linear algebra and tensor calculus.

Are There Holes in Neural Networks?

In the last decade, it was the connectionist, or subsymbolic, approach that brought about explosive progress in applying machine learning methods to a wide range of tasks. Examples include both traditional statistical methodologies, like logistical regression, and more recent achievements in artificial neural network modelling, like deep learning and reinforcement learning. The most significant breakthrough of the last decade was brought about not so much by new ideas as by the accumulation of a critical mass of tagged datasets, the low cost of storing massive volumes of training samples and, most importantly, the sharp decline of computational costs, including the possibility of using specialized, relatively cheap hardware for neural network modelling. The breakthrough was brought about by a combination of these factors that made it possible to train and configure neural network algorithms to make a quantitative leap, as well as to provide a cost-effective solution to a broad range of applied problems relating to recognition, classification and prediction. The biggest successes here have been brought about by systems based on deep learning networks that build on the idea of the perceptron suggested 60 years ago by Frank Rosenblatt. However, achievements in the use of neural networks also uncovered a range of problems that cannot be solved using existing neural network methods.

First, any classic neural network model, whatever amount of data it is trained on and however precise it is in its predictions, is still a black box that does not provide any explanation of why a given decision was made, let alone disclose the structure and content of the knowledge it has acquired in the course of its training. This rules out the use of neural networks in contexts where explainability is required for legal or security reasons. For example, a decision to refuse a loan or to carry out a dangerous surgical procedure needs to be justified for legal purposes, and in the event that a neural network launches a missile at a civilian plane, the causes of this decision need to be identifiable if we want to correct it and prevent future occurrences.

Second, attempts to understand the nature of modern neural networks have demonstrated their weak ability to generalize. Neural networks remember isolated, often random, details of the samples they were exposed to during training and make decisions based on those details and not on a real general grasp of the object represented in the sample set. For instance, a neural network that was trained to recognize elephants and whales using sets of standard photos will see a stranded whale as an elephant and an elephant splashing around in the surf as a whale. Neural networks are good at remembering situations in similar contexts, but they lack the capacity to understand situations and cannot extrapolate the accumulated knowledge to situations in unusual settings.

Third, neural network models are random, fragmentary and opaque, which allows hackers to find ways of compromising applications based on these models by means of adversarial attacks. For example, a security system trained to identify people in a video stream can be confused when it sees a person in unusually colourful clothing. If this person is shoplifting, the system may not be able to distinguish them from shelves containing equally colourful items. While the brain structures underlying human vision are prone to so-called optical illusions, this problem acquires a more dramatic scale with modern neural networks: there are known cases where replacing an image with noise leads to the recognition of an object that is not there, or replacing one pixel in an image makes the network mistake the object for something else.
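
To make the adversarial-attack idea concrete, here is a minimal, self-contained sketch, not drawn from the article or from any real system, that nudges the input of a toy linear classifier in the direction of the loss gradient, in the spirit of the fast-gradient-sign method; the weights and the input are randomly generated purely for illustration.

```python
# Illustrative sketch only: a gradient-based "adversarial" perturbation
# against a toy linear classifier. Weights and input are made up.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=100)              # fixed, "pretrained" weights
x = rng.normal(size=100)              # an input the model classifies confidently

p_clean = sigmoid(w @ x)
y = 1.0 if p_clean > 0.5 else 0.0     # treat the clean prediction as the true label

# Gradient of the log-loss with respect to the input, then a small signed step
# that pushes every feature slightly in the loss-increasing direction.
grad_wrt_x = (p_clean - y) * w
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_wrt_x)

p_adv = sigmoid(w @ x_adv)
print(f"confidence on clean input:     {p_clean:.3f}")
print(f"confidence on perturbed input: {p_adv:.3f}")
```

Even though each feature moves by only 0.25, the classifier's confidence typically flips to the opposite class, which is the same qualitative failure the one-pixel examples above describe.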

Fourth, the inadequacy of the information capacity and parameters of the neural network to the image of the world it is shown during training and operation can lead to the practical problem of catastrophic forgetting. This is seen when a system that had first been trained to identify situations in a set of contexts and was then fine-tuned to recognize them in a new set of contexts loses the ability to recognize them in the old set. For instance, a neural machine vision system initially trained to recognize pedestrians in an urban environment may be unable to identify dogs and cows in a rural setting, but additional training to recognize cows and dogs can make the model forget how to identify pedestrians, or start confusing them with small roadside trees.
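
The following toy sketch, invented for illustration and not taken from any of the systems discussed, shows the forgetting effect at a very small scale: a single linear model is trained on task A, then fine-tuned only on task B, after which its accuracy on task A typically falls back toward chance.

```python
# Illustrative sketch of "catastrophic forgetting" with a toy linear model.
import numpy as np

rng = np.random.default_rng(1)

def make_task(direction, n=500):
    """Binary task whose label depends on the projection onto `direction`."""
    X = rng.normal(size=(n, 20))
    y = (X @ direction > 0).astype(float)
    return X, y

def sgd(w, X, y, epochs, lr=0.1):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(float) == y).mean())

task_a_dir, task_b_dir = np.eye(20)[0], np.eye(20)[1]   # two unrelated tasks
Xa, ya = make_task(task_a_dir)
Xb, yb = make_task(task_b_dir)

w = sgd(np.zeros(20), Xa, ya, epochs=30)    # learn task A first
acc_a_before = accuracy(w, Xa, ya)
w = sgd(w, Xb, yb, epochs=200)              # then fine-tune on task B only

print("task A accuracy before fine-tuning:", acc_a_before)
print("task A accuracy after fine-tuning: ", accuracy(w, Xa, ya))
print("task B accuracy after fine-tuning: ", accuracy(w, Xb, yb))
```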

Growth Potential?

The expert community sees a number of fundamental problems that need to be solved before a general, or strong, AI is possible. In particular, as demonstrated by the biggest annual AI conference held in Macao, explainable AI and transfer learning are simply necessary in some cases, such as defence, security, healthcare and finance. Many leading researchers also think that mastering these two areas will be the key to creating a general, or strong, AI.

Explainable AI allows for human beings (the user of the AI system) to understand the reasons why a system makes decisions and approve them if they are correct, or rework or fine-tune the system if they are not. This can be achieved by presenting data in an appropriate (explainable) manner or by using methods that allow this knowledge to be extracted with regard to specific precedents or the subject area as a whole. In a broader sense, explainable AI also refers to the capacity of a system to store, or at least present, its knowledge in a human-understandable and human-verifiable form. The latter can be crucial when the cost of an error is too high for it only to be explainable post factum. And here we come to the possibility of extracting knowledge from the system, either to verify it or to feed it into another system.
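
One very simple form of explainability can be shown with a linear model, whose decision decomposes exactly into per-feature contributions a human can inspect. The sketch below is generic and illustrative only; the feature names, weights and applicant values are invented, not taken from any system in the article.

```python
# Illustrative sketch: explaining a linear credit-scoring decision by listing
# each feature's contribution (weight * value). All numbers are invented.
import numpy as np

feature_names = ["income", "debt_ratio", "missed_payments", "years_employed"]
weights = np.array([0.8, -1.5, -2.0, 0.6])   # hypothetical trained weights
bias = 0.2

applicant = np.array([0.4, 0.9, 1.0, 0.1])   # one (standardized) applicant

logit = weights @ applicant + bias
decision = "approve" if logit > 0 else "decline"

contributions = weights * applicant
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"{name:>16}: {c:+.2f}")
print(f"{'bias':>16}: {bias:+.2f}")
print(f"decision: {decision} (logit {logit:+.2f})")
```

For deep networks the same goal requires dedicated attribution or surrogate-model techniques, which is exactly why explainable AI is treated here as an open research area rather than a solved problem.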

Transfer learning is the possibility of transferring knowledge between different AI systems, as well as between man and machine, so that the knowledge possessed by a human expert or accumulated by an individual system can be fed into a different system for use and fine-tuning. Theoretically speaking, this is necessary because the transfer of knowledge is only fundamentally possible when universal laws and rules can be abstracted from the system's individual experience. Practically speaking, it is the prerequisite for making AI applications that will not learn by trial and error or through the use of a training set, but can be initialized with a base of expert-derived knowledge and rules when the cost of an error is too high or when the training sample is too small.
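
A generic sketch of the practical side of this idea follows; the shapes, data and the "pretrained" matrix are assumptions made up for illustration. A feature extractor learned elsewhere is kept frozen, and only a small new head is fitted on a target dataset that would be far too small to train a model from scratch.

```python
# Illustrative sketch of transfer learning: freeze a "pretrained" feature
# extractor and fit only a small new head on a tiny target dataset.
import numpy as np

rng = np.random.default_rng(42)

# Pretend this matrix was learned on a large source task; it stays frozen.
pretrained_extractor = rng.normal(size=(64, 8))

def features(X):
    return np.maximum(X @ pretrained_extractor, 0.0)   # frozen ReLU features

# Tiny target dataset (20 examples), far too small to learn 64-d features.
X_target = rng.normal(size=(20, 64))
y_target = (X_target[:, 0] > 0).astype(float)

head = np.zeros(8)                      # the only parameters we train
F = features(X_target)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(F @ head)))
    head -= 0.1 * F.T @ (p - y_target) / len(y_target)

train_acc = float((((F @ head) > 0) == (y_target > 0.5)).mean())
print("training accuracy of the transferred model's new head:", train_acc)
```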

How to Get the Best of Both Worlds?

There is currently no consensus on how to make an artificial general intelligence that is capable of solving the abovementioned problems or is based on technologies that could solve them.

One of the most promising approaches is probabilistic programming, which is a modern development of symbolic AI. In probabilistic programming, knowledge takes the form of algorithms, and source and target data are represented not by values of variables but by a probabilistic distribution of all possible values. Alexei Potapov, a leading Russian expert on artificial general intelligence, thinks that this area is now in a state that deep learning technology was in about ten years ago, so we can expect breakthroughs in the coming years.
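
To give a flavour of what keeping "a probabilistic distribution of all possible values" means, here is a minimal hand-rolled sketch; it is not a real probabilistic-programming language, and the coin-flip data are made up. The unknown bias of a coin is represented as a full posterior distribution over candidate values rather than as a single estimate.

```python
# Illustrative sketch of the core idea behind probabilistic programming:
# the unknown quantity is a distribution over values, not a single number.
import numpy as np

observed_flips = [1, 1, 0, 1, 1, 1, 0, 1]         # made-up data, 1 = heads

bias_grid = np.linspace(0.001, 0.999, 999)        # candidate values of the bias
prior = np.ones_like(bias_grid) / bias_grid.size  # uniform prior

heads = sum(observed_flips)
tails = len(observed_flips) - heads
likelihood = bias_grid**heads * (1 - bias_grid)**tails

posterior = prior * likelihood
posterior /= posterior.sum()                      # Bayes' rule, normalized

print("posterior mean bias:", float((bias_grid * posterior).sum()))
print("P(bias > 0.5 | data):", float(posterior[bias_grid > 0.5].sum()))
```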

Another promising symbolic area is Evgenii Vityaev's semantic probabilistic modelling, which makes it possible to build explainable predictive models based on information represented as semantic networks, with probabilistic inference based on Pyotr Anokhin's theory of functional systems.

One of the most widely discussed ways to achieve this is through so-called neuro-symbolic integration: an attempt to get the best of both worlds by combining the learning capabilities of subsymbolic deep neural networks (which have already proven their worth) with the explainability of symbolic probabilistic modelling and programming (which hold significant promise). In addition to the technological considerations mentioned above, this area merits close attention from a cognitive psychology standpoint. As viewed by Daniel Kahneman, human thought can be construed as the interaction of two distinct but complementary systems: System 1 thinking is fast, unconscious, intuitive, unexplainable thinking, whereas System 2 thinking is slow, conscious, logical and explainable. System 1 provides for the effective performance of run-of-the-mill tasks and the recognition of familiar situations. In contrast, System 2 processes new information and makes sure we can adapt to new conditions by controlling and adapting the learning process of the first system. Systems of the first kind, as represented by neural networks, are already reaching Gartner's so-called "plateau of productivity" in a variety of applications. But working applications based on systems of the second kind, not to mention hybrid neuro-symbolic systems (which the most prominent industry players have only started to explore), have yet to be created.

This year, Russian researchers, entrepreneurs and government officials who are interested in developing artificial general intelligence have a unique opportunity to attend the first AGI-2020 international conference in St. Petersburg in late June 2020, where they can learn about all the latest developments in the field from the world's leading experts.

From our partner RIAC

View post:

Future Goals in the AI Race: Explainable AI and Transfer Learning - Modern Diplomacy

Don't leave it up to the EU to decide how we regulate AI – City A.M.

The war of words between Britain and the EU has begun ahead of next month's trade talks.

But as Britain sets its own course on everything from immigration to fishing, there is one area where the battle for influence is only just kicking off: the future regulation of artificial intelligence.

As AI becomes a part of our everyday lives, from facial recognition software to the use of black-box algorithms, the need for regulation has become more apparent. But around the world, there is rigorous disagreement about how to do it.

Last Wednesday, the EU set out its approach in a white paper, proposing regulations on AI in line with European values, ethics and rules. It outlined a tough legal regime, including pre-vetting and human oversight, for high-risk AI applications in sectors such as medicine and a voluntary labelling scheme for the rest.

In contrast, across the Atlantic, Donald Trump's White House has so far taken a light-touch approach, publishing 10 principles for public bodies designed to ensure that regulation of AI doesn't needlessly get in the way of innovation.

Britain has still to set out its own approach, and we must not be too late to the party. If we are, we may lose the opportunity to influence the shaping of rules that will impact our own industry for decades to come.

This matters, because AI firms, the growth generators of the future, can choose where to locate and which market to target, and will do so partly based on the regulations which apply there.

Put simply, the regulation of AI is too important for Britain's future prosperity to leave it up to the EU or anyone else.

That doesn't mean a race to the bottom. Regulation is meaningless if it is so lax that it doesn't prevent harm. But if we get it right, Britain will be able to maintain its position as the technology capital of Europe, as well as setting thoughtful standards that guide the rest of the western world.

So what should a British approach to AI regulation look like?

It is tempting for our legislators to simply give legal force to some of the many vague ethical codes currently floating around the industry. But the lack of specificity of these codes means that they would result in heavy-handed blanket regulation, which could have a chilling effect on innovation.

Instead, the aim must be to ensure that AI works effectively and safely, while giving companies space to innovate. With that in mind, we have created four principles which we believe a British approach to AI regulation should be designed around.

The first is that regulations should be context-specific. AI is not one technology, and it cannot be governed as such. Medical algorithms and recommender algorithms, for example, are likely to both be regulated, but to differing extents because of the impact of the outcomes: the consequences of a diagnostic error are far greater than an algorithm pushing an irrelevant product advert into your social media feed.

Our second principle is that regulation must be precise; it should not be left up to tech companies themselves to interpret.

Fortunately, the latest developments in AI research, including some which we are pioneering at Faculty, allow for analysis of an algorithm's performance across a range of important dimensions: accuracy (how good is an AI tool at doing its job?); fairness (does it have implicit biases?); privacy (does it leak people's data?); robustness (does it fail unexpectedly?); and explainability (do we know how it is working?).

Regulators should set out precise thresholds for each of these according to the context in which the AI tool is deployed. For instance, an algorithm which hands out supermarket loyalty points might be measured only on whether it is fair and protects personal data, whereas one making clinical decisions in a hospital would be required to reach better-than-human-average standards in every area.
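
As a purely hypothetical sketch of what such context-specific thresholds could look like in code, the snippet below invents two regimes and a compliance check; the contexts, metric names and numbers are illustrative assumptions, not Faculty's methodology or any regulator's actual rules.

```python
# Hypothetical sketch: the same audited measurements can pass a lenient
# regime (loyalty points) and fail a strict one (clinical decisions).
from dataclasses import dataclass

@dataclass
class Thresholds:
    accuracy: float          # minimum required accuracy
    fairness_gap: float      # maximum allowed performance gap between groups
    privacy_leakage: float   # maximum allowed data-leakage score
    robustness_drop: float   # maximum allowed accuracy drop under perturbation

REQUIREMENTS = {
    "loyalty_points": Thresholds(0.70, 0.05, 0.05, 0.20),
    "clinical_decision": Thresholds(0.95, 0.01, 0.01, 0.02),
}

def compliant(context: str, measured: Thresholds) -> bool:
    req = REQUIREMENTS[context]
    return (measured.accuracy >= req.accuracy
            and measured.fairness_gap <= req.fairness_gap
            and measured.privacy_leakage <= req.privacy_leakage
            and measured.robustness_drop <= req.robustness_drop)

audit = Thresholds(accuracy=0.93, fairness_gap=0.02,
                   privacy_leakage=0.01, robustness_drop=0.03)
print(compliant("loyalty_points", audit))     # True: meets the lenient regime
print(compliant("clinical_decision", audit))  # False: below the strict regime
```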

The third principle is that regulators must balance transparency with trust. For example, they might publish one set of standards for supermarket loyalty programmes, and another for radiology algorithms. Each would be subject to different licensing regimes: a light-touch one for supermarkets, and a much tougher inspection regime for hospitals.

Finally, regulators will need to equip themselves with the skills and know-how needed to design and manage this regime. That means having data scientists and engineers who can look under the bonnet of an AI tool, as well as ethicists and economists. They will also need the powers to investigate any algorithm's performance.

These four principles offer the basis for a regulatory regime precise enough to be meaningful, nuanced enough to permit innovation, and robust enough to retain public trust.

We believe they offer a pragmatic guide for the UK to chart its own path and lead the debate about the future of the AI industry.

Read more:

Don't leave it up to the EU to decide how we regulate AI - City A.M.

Robosen Robotics Showcases T9 at Toy Fair New York – The World’s Most Advanced and Programmable Robot – Salamanca Press

NEW YORK, Feb. 22, 2020 /PRNewswire/ -- Toy Fair NY, Hall 1E, Booth # 4514 -- Robosen Robotics (Shenzhen) Co. Ltd, a leading innovator in the field of AI and robotics, today showcased T9, the world's most advanced programmable robot that automatically converts from a robot to a vehicle in a stunningly smooth and seamless movement, at Toy Fair New York. T9 is the first robot available in the consumer market that features all of the following functions: automatic convertible movement from vehicle to robot, bipedal walking ability in robot form, race function in vehicle form, programmable/code development, and robot control and commands by voice or via app. T9 retails for $499 USD and is available on Amazon and robosen.us.

T9 is made with the latest robotic technology available, with 23 proprietary chips and 22 proprietary servo motors (one for each artificial joint) that make it one of the most agile and flexible robots ever created, allowing it to perform high-speed, upright bipedal walking while also automatically converting from robot to vehicle form.

Robosen Robotics' visionary craftsmanship and cutting-edge technology in artificial joint driving algorithms and digital electric drive technology provide T9's artificial intelligence (AI): easy-to-remember voice commands, complex animations completed with precision control, captivating dance performances and innovative stunts.

These animations are created and customized with three intuitive and easy-to-use programming platforms (Manual, Visual and 3D Graphics*) and T9's massive storage has enough memory to store tens of thousands of them. So, whether the user is a beginner, intermediate, or an advanced coder, T9's advanced robotics and AI will provide endless entertainment and opportunity to teach logical-based skills. Robosen Robotics also offers free online tutorials which makes learning to code fast and fun.

T9 is controlled by voice as well as via the T9 app (iOS and Android). With just a touch of a button, T9 can perform the latest customized dance animation, race around in vehicle mode, change back and forth from robot to vehicle form and more. Additionally, users can collaborate, create and connect with a global community of robo-centric fans through the Robosen Hub. They'll be able to upload and download popular user created animations, share programming tips and participate in fun events and competitions.

FEATURES/SPECS:
- Dimensions: Robot Form 265 × 163 × 340 mm; Vehicle Form 287 × 198 × 149 mm
- Control Method: Mobile app, voice control
- Weight: 1.48 kg
- External Ports: DC charging port, Micro USB port
- Material: Aluminum alloy frame, ABS+PC shell
- Battery Capacity: 2000 mAh lithium battery pack
- Servo Motors: 22 (Chest 2 / Hands 4 × 2 / Legs 5 × 2 / Drive Wheels 2)
- Adapter: Input 100V-240V ~ 50/60Hz 0.6A, Output DC 12V 2A
- Wireless Connection: Bluetooth 4.2 BLE
- Certifications: FCC Certification

*A summary of each of the three programming platforms:

About Robosen Robotics: Robosen Robotics (Shenzhen) Co. Ltd is a leading innovator in the field of AI and robotics, leading the way in digital drive technology, artificial joint driving algorithms, force feedback technology, digital electric drive technology and artificial intelligence and programming. For more information, please visit https://robosen.us/

Read the original here:

Robosen Robotics Showcases T9 at Toy Fair New York - The World's Most Advanced and Programmable Robot - Salamanca Press

Top 10 Women in Robotics Industry – Analytics Insight

From driving rovers on Mars to improving farm automation, women have been everywhere. These women cover all parts of the robotics industry: research, product and approach. They are authors and pioneers, they are investigators and activists. They are founders and professors emeritus. There is a role model here for everybody! What's more, there is no reason ever not to have a woman speaking on a panel on robotics and AI.

Robotics is the way of the future, and women are leading the way on some of the most helpful innovations! For little girls, strong role models are vital! From Ada Lovelace, the world's first computer programmer, to women engaged in robotics today, this list of female pioneers is sure to inspire children to think about robotics as a future career.

While working at Otherlab, Danielle Applestone developed the Other Machine, a desktop CNC machine and machine control software suitable for students, funded by DARPA. The organization is now known as Bantam Tools and was acquired by Bre Pettis. Currently, Applestone is CEO and Co-Founder of Daughters of Rosie, determined to solve the labor shortage in the U.S. manufacturing industry by getting more women into stable manufacturing jobs with purpose, growth potential, and benefits.

Crystal Chao is Chief Scientist at Huawei and the Global Lead of Robotics Projects, overseeing a team that works in Silicon Valley, Boston, Shenzhen, Beijing, and Tokyo. She has worked with all aspects of the robotics programming stack in her previous career, including a stint at X, Google's moonshot factory. In 2012, Chao won the Outstanding Doctoral Consortium Paper Award at ICMI for her PhD at Georgia Tech, where she developed an architecture for social human-robot interaction (HRI) called CADENCE: Control Architecture for the Dynamics of Natural Embodied Coordination and Engagement, enabling a robot to collaborate fluently with people using dialogue and manipulation.

Squishy robots are quickly deployable mobile sensing robots for disaster rescue, remote monitoring and space exploration, created from the research at the BEST Lab or Berkeley Emergent Space Tensegrities Lab. Prof. Alice Agogino is the Roscoe and Elizabeth Hughes Professor of Mechanical Engineering, Product Design Concentration Founder and Head Advisor, MEng Program at the University of California, Berkeley, and has a long history of combining research, entrepreneurship and inclusion in engineering. Agogino won the AAAS Lifetime Mentor Award in 2012 and the Presidential Award for Excellence in Science, Mathematics and Engineering Mentoring in 2018.

Emily Cross is a cognitive neuroscientist and artist. As the Director of the Social Brain in Action Laboratory (www.soba-lab.com), she investigates how our brains and behaviors are shaped by different types of experience throughout our lifespans and across cultures. She is currently the Principal Investigator on the European Research Council Starting Grant entitled Social Robots, which runs from 2016-2021.

Dr. Susanne Bieller is General Secretary of the International Federation of Robotics (IFR), a non-profit organization representing more than 50 makers of industrial robots and national robot associations from more than twenty nations. Prior to that, Dr Bieller was project manager of the European robotics association EUnited Robotics. After finishing her PhD in Chemistry, she began her professional career at the European Commission in Brussels, and then managed the flat-panel display group at the German Engineering Federation (VDMA) in Frankfurt.

If robots can operate in the deepest parts of the ocean, why shouldn't they be able to contribute at home? That question has driven Cynthia Breazeal to pioneer social robots that communicate with people. She created the world's first social robot, Kismet, and founded Jibo, the world's first family robot. She also directs the Personal Robots Group at MIT's Media Lab.

Heather Justice has the dream job title of Mars Exploration Rover Driver and is a Software Engineer at NASA JPL. As a 16-year-old watching the first rover land on Mars, she said: "I saw just how far robotics could take us and I was inspired to pursue my interests in computer science and engineering." Justice graduated from Harvey Mudd College with a B.S. in computer science in 2009 and an M.S. from the Robotics Institute at Carnegie Mellon University in 2011, having also interned at three different NASA centers and worked in a variety of research areas including computer vision, mobile robot path planning, and spacecraft flight rule validation.

Ayorkor Korsah grew up in Ghana and studied in the United States, earning her Ph.D. in Robotics from Carnegie Mellon University. Now back in Ghana, she is a professor of computer science and robotics at Ashesi University. In 2012, she co-founded the African Robotics Network, a community that shares robotics resources.

Madeline Gannon is a multidisciplinary designer inventing better ways to communicate with machines. Her recent work taming giant industrial robots centers on opening new frontiers in human-robot relations. Her interactive installation, Mimus, was awarded a 2017 Ars Electronica STARTS Prize Honorable Mention. She was also named a 2017/2018 World Economic Forum Cultural Leader. She holds a PhD in Computational Design from Carnegie Mellon University, where she studied human-centered interfaces for autonomous fabrication machines. She also holds a Master's in Architecture from Florida International University.

Kanako Harada is Program Manager of the ImPACT program Bionic Humanoids Propelling New Industrial Revolution of the Cabinet Office, Japan. She is also Associate Professor in the departments of Bioengineering and Mechanical Engineering, School of Engineering, the University of Tokyo, Japan. She obtained her M.Sc. in Engineering from the University of Tokyo in 2001, and her Ph.D. in Engineering from Waseda University in 2007. She worked for Hitachi Ltd., the Japan Association for the Advancement of Medical Equipment, and Scuola Superiore Sant'Anna, Italy, before joining the University of Tokyo. Her research interests include surgical robots and surgical skill assessment.

Read more from the original source:

Top 10 Women in Robotics Industry - Analytics Insight

New ‘cobot’ robots kill some jobs, create others – Automotive News Canada

Technology is often blamed for replacing humans in the job market, but when Shelley Fellows looks at a collaborative robot, a cobot, she sees the result of highly paid, highly skilled labour.

"I see the mechanical designer who designed the tooling at the end of that robot arm," said Fellows, vice-president of communications at Windsor, Ont.-based AIS Technology Group, which specializes in automation technology.

"I see the workers who fabricated that tooling. I see the electrical designer and the engineers who designed the electrical system and the circuitry. I see the programmers who programmed the controls. I see the vision system designer and the programmer for the vision system.

"I see all of those highly skilled people; and without them, you wouldn't see that robot on the factory floor," said Fellows, who also chairs Automate Canada, an industry association devoted to growing Canada's automation sector.

While robotic technology kills certain jobs, automating the more monotonous tasks typically leads to more interesting, better paid positions, said Linamar Corp. CEO Linda Hasenfratz.

Between 2012 and 2019, the Guelph, Ont.-based parts supplier increased employment in Canada by almost 40 per cent, but the payroll was up 60 per cent. Most of the increase in employment occurred in jobs such as engineer and programmer, Hasenfratz said.

"I think that is an interesting evolution, and it is a win-win all around, but that does have implications for our education and training system.

"We have an increased need for people in engineering, technology, math, the trades.

"We need to make sure we are graduating people with more skills."

The cobots also help ease a chronic labour shortage plaguing the parts industry, Hasenfratz said.

"We have got huge shortages and need for people in all of these areas."

By automating tasks that are more repetitive, the industry can shift its workforce into the higher-value jobs, Hasenfratz said.

Fellows said an opportunity also exists to boost automation manufacturing in Canada. Currently, one-third of Canadian manufacturers source their automation outside the country.

"We can be supplying our Canadian manufacturers with a lot more of our robotics, controls and other automation. To me, it would be a shame if our manufacturers automate but are sourcing most of their technology outside of our country."

William Melek, director of the University of Waterloo's Ontario robotics research centre, RoboHub, said making this happen will require a collaborative ecosystem of industry, researchers, policymakers and advisers working together to address everything from workforce training to safety policies for working around cobots, as well as encouraging their development and evolution.

"We can't be working in isolation," he said.

Read the original here:

New 'cobot' robots kill some jobs, create others - Automotive News Canada

What happens when robotics industry vets have kids? – ZDNet

What happens when a group of robotics industry veterans have kids? They start a company specializing in educational robots for kids, of course.

Matatalab is fast carving a path for itself in the world of children's STEAM education products and content. Matatalab's kids' programming robots use image recognition technology to develop children's cognitive abilities and computational thinking through a variety of programming games. The robots focus on physical programming, with no need for a screen or literacy to learn to code.

Perhaps to the horror of some educators, who believe there's too much tech in education, but to the delight of a growing customer base, the company's entry-level learning tech is targeted at kids as young as three. So far, Matatalab has pursued an ambitious market strategy, bringing its products out in over 40 countries via a combination of traditional brick-and-mortar stores as well as Amazon.

There are a number of robotics companies vying for space in the lucrative ed-tech market. STEAM education is increasingly emphasized in budgets, and many schools now offer coding instruction as part of the standard curriculum as early as kindergarten. Analysts predict the educational robotics market will be worth $1.7 billion by 2023. In some Chinese schools, students begin as early as preschool, which is where Matatalab is focusing.

The company was founded in 2017 by four Shenzhen-based robotics industry veterans who all had kids entering pre-kindergarten. Studies show that brains begin to develop logic at around three and four years old, so the team wanted to make a coding education product that catered specifically to this age group at an early stage of development. Their mission was to give kids around the world the greatest advantage for learning to code as they grow.

Like kid-aimed robots from companies like Cubetto, SAM Labs, and Wonder Workshop, the result is an interesting blend of interactive technologies that purport to teach kids to code. The game-based Matatalab Coding Set, meant for kids as young as four, contains coding blocks, a command board, maps, and challenge booklets. It's a screen-free experience, as well as a word-free experience, relying instead on graphical symbols that differentiate the various coding blocks.
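The coding-block workflow described above lends itself to a tiny interpreter sketch. The Python snippet below is a purely hypothetical illustration of the "screen-free, physical programming" idea, not Matatalab's actual firmware or API: the block names, the Robot class and the run_blocks helper are invented for this example, standing in for the symbols a camera would recognise on the command board.

```python
# Hypothetical sketch of block-based "physical programming":
# image recognition would produce a list of block tokens, and a small
# interpreter then drives the robot. Everything here is illustrative only.

from dataclasses import dataclass


@dataclass
class Robot:
    x: int = 0
    y: int = 0
    heading: int = 0  # degrees: 0 = north, 90 = east, 180 = south, 270 = west

    def forward(self) -> None:
        dx, dy = {0: (0, 1), 90: (1, 0), 180: (0, -1), 270: (-1, 0)}[self.heading]
        self.x += dx
        self.y += dy

    def turn_left(self) -> None:
        self.heading = (self.heading - 90) % 360

    def turn_right(self) -> None:
        self.heading = (self.heading + 90) % 360


def run_blocks(robot: Robot, blocks: list) -> None:
    """Execute a sequence of recognised coding blocks on the robot."""
    actions = {"FORWARD": robot.forward, "LEFT": robot.turn_left, "RIGHT": robot.turn_right}
    for block in blocks:
        actions[block]()


if __name__ == "__main__":
    bot = Robot()
    # Imagine these tokens came from image recognition of the command board.
    run_blocks(bot, ["FORWARD", "RIGHT", "FORWARD", "FORWARD", "LEFT"])
    print(bot)  # Robot(x=2, y=1, heading=0)
```

The design point the toy makes tangible is that the "program" is just an ordered sequence of symbols; the child composes it physically, and the interpreter (here a dictionary dispatch) does the rest.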

The company has won several awards, including the Red Dot Design Award and the IDEA Award. Whether it can eat market share from better-known competitors like Wonder Workshop remains to be seen. Price is a perennial Achilles' heel in this space, with parents hesitant to shell out for toys they aren't sure their kids will continue to use.

Matatalab's kits range from a $125 "lite" model to a nearly $300 "pro" model.

View post:

What happens when robotics industry vets have kids? - ZDNet

Zymergen installs ‘dozens’ of miniature industrial robots from Mecademic – Robotics and Automation News

Zymergen, a science and materials innovation company based in California, has integrated dozens of Meca500 robots to automate experiments in its life sciences facility.

The Meca500 is a miniature industrial robot manufactured by startup company Mecademic. It has found a great number of applications in labs and industries such as watchmaking and medical technology.

Some predict that Mecademic has huge potential with its robot, particularly in watchmaking and medtech, especially as there is currently no other robot like it.

At the moment, Stäubli is the leading supplier of robots for the watchmaking and medtech industries, according to a report by Robotics and Automation News.

Zymergen says flexibility, reliability, speed, and ease of integration were a few of the reasons why the Meca500 maximized throughput and lowered costs for Zymergen's lab automation processes.


See more here:

Zymergen installs 'dozens' of miniature industrial robots from Mecademic - Robotics and Automation News

China buys Danish robots to fight coronavirus – Robotics and Automation News

As a new and powerful weapon against the spread of the coronavirus, Danish disinfection robots from UVD Robots are now being deployed in Chinese hospitals.

Self-driving Danish disinfection robots are now shipping to a number of hospitals in China to help fight the coronavirus, also called COVID-19.

This follows an agreement signed by Sunay Healthcare Supply with the Danish company UVD Robots.

The first robots shipped this week, and in the following weeks many more will be flown out to be deployed in the fight against the coronavirus.

With ultraviolet light, the Danish robot can disinfect and kill viruses and bacteria autonomously, effectively limiting the spread of coronaviruses without exposing hospital staff to the risk of infection.

Through Sunay Healthcare Supply's partners in China, the robots will be deployed in all Chinese provinces.

"With this agreement, more than 2,000 hospitals will now have the opportunity to ensure effective disinfection, protecting both their patients and staff," says Su Yan, CEO of Sunay Healthcare Supply, a medical equipment supplier to the Chinese market.

UVD Robots, whose machines are now sold in more than 40 countries, is already delivering its self-driving disinfection robots to hospitals in other parts of Asia, in addition to healthcare markets in Europe and the United States.

The invention increases the safety of staff, patients and their relatives by reducing the risk of contact with bacteria, viruses and other harmful microorganisms.

The concentrated UV-C light emitted by the robots as they drive has a germicidal effect that removes virtually all airborne viruses and bacteria, as well as those on the surfaces of a room. These results led to the UVD robot winning the IERA Award, the robotics industry's equivalent of an Oscar, in 2019.
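For readers curious about the arithmetic behind germicidal UV-C claims in general, the short sketch below applies the standard dose relationship (dose = irradiance multiplied by exposure time). It is a generic illustration only, not UVD Robots' method or specification, and every number in it is a hypothetical placeholder.

```python
# Minimal sketch of the UV-C dose relationship commonly cited for germicidal
# disinfection: dose (mJ/cm^2) = irradiance (mW/cm^2) * exposure time (s).
# All figures below are hypothetical and NOT UVD Robots' specifications.


def uv_dose_mj_per_cm2(irradiance_mw_per_cm2: float, exposure_s: float) -> float:
    """Return the UV-C dose delivered at a point, in mJ/cm^2."""
    return irradiance_mw_per_cm2 * exposure_s


def exposure_needed_s(target_dose_mj_per_cm2: float, irradiance_mw_per_cm2: float) -> float:
    """Return the exposure time needed to reach a target dose, in seconds."""
    return target_dose_mj_per_cm2 / irradiance_mw_per_cm2


if __name__ == "__main__":
    # Hypothetical example: a surface receiving 0.2 mW/cm^2 accumulates a
    # 60 mJ/cm^2 dose after 300 seconds (five minutes) of exposure.
    print(uv_dose_mj_per_cm2(0.2, 300))   # -> 60.0
    print(exposure_needed_s(60, 0.2))     # -> 300.0
```

The practical consequence, and the reason a mobile robot is useful at all, is that dose falls off with distance and dwell time, so the robot has to position itself and pause long enough near each surface to accumulate a sufficient dose.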

Technology found superior in the market

Before entering into the agreement with UVD Robots, Sunay Healthcare Supply did its due diligence and screened the market for the best technologies to fight the coronavirus.

"We found the UVD robot to be superior compared to other technologies and are pleased to have entered, in a very short amount of time, into a reseller agreement with exclusive rights to supply the UVD robots in China," says Su Yan, emphasizing how both parties worked intensively to get deliveries of robots to the Chinese hospitals.

Per Juul Nielsen, CEO of UVD Robots, is pleased to be helping combat the spread of the virus in China through the company's solution. "In a severe crisis like this, where world health is threatened, our innovative technology really proves its worth," he says.

Developed by a large group of collaborators from the hospital and robotics industries, UVD Robots is a portfolio company of Blue Ocean Robotics, which develops a wide range of service robots.

The development of the UVD robot started in 2014, when a group of Danish hospitals demanded a far more effective way of reducing infection rates in hospitals.

The fruitful collaboration between bacteriologists, virologists and hospital staff on one side, and robot developers, designers, engineers, investors and business people from Blue Ocean Robotics on the other, led to an early market introduction in 2018.

Claus Risager, CEO of Blue Ocean Robotics and chairman of UVD Robots, calls it a tremendous satisfaction for employees, management and the circle of owners to witness the deployment of the UVD Robot.

"We are now helping solve one of the biggest problems of our time, preventing the spread of bacteria and viruses with a robot that saves lives in hospitals every day."


More here:

China buys Danish robots to fight coronavirus - Robotics and Automation News

Book Review: The Globotics Upheaval: Globalization, Robotics and the Future of Work by Richard Baldwin – USAPP American Politics and Policy (blog)

In The Globotics Upheaval: Globalization, Robotics and the Future of Work, Richard Baldwin provides a new analysis of how automation and globalisation could together shape our societies in the years to come. Drawing on numerous examples to keep readers engaged from cover to cover, this book is a tour de force, writes Wannaphong Durongkaveroj, discussing the past, present and future of globalisation and automation and their implications for the way we work.

The Globotics Upheaval: Globalization, Robotics and the Future of Work. Richard Baldwin. Oxford University Press. 2019.


There is little wonder that the rise of artificial intelligence (AI) has sparked ongoing debates about the future of work. In The Globotics Upheaval: Globalization, Robotics and the Future of Work, Richard Baldwin, author of The Great Convergence, provides a meticulous and succinct analysis of how a dynamic duo of economic change, automation and globalisation, can shape our societies in the years to come.

Baldwin starts by defining the term "globotics", a combination of globalisation and robotics. These are not old wine in a new bottle. Globalisation is no longer simply a trade in goods and services across boundaries. It is telemigration: a widespread new form of work that allows workers to sit in one nation and work in offices in another. Simply put, forget about the crowded office; workers can now deliver services remotely. In addition, a new phase of automation is not just about vast machines and industrial robots that replace blue-collar workers in factories. It concerns white-collar robots: software that performs functions that previously only humans could. An example is Amelia, an AI-based digital assistant introduced at the Swedish bank SEB. The first key implication of Baldwin's argument is that this transformation has happened very quickly: it took just years, rather than a century, for this dynamic duo to emerge, spread throughout the economy and change our lives. The second is that it creates upheaval throughout society.

To depict the massive changes brought about by globalisation and automation, Baldwin proposes a four-step progression: transformation, upheaval, backlash and resolution. First, advances in digital technology have transformed the nature of jobs. Thanks to collaborative platforms such as Skype for Business, Slack and Trello, remote work is possible. This mostly affects jobs that do not require a physical presence: for instance, those in management, business and finance. Moreover, the preponderance of AI-trained robots also disrupts jobs that are automatable. Most of these jobs are in the service sector, the sector in which most people work. Baldwin points out that these changes will not eliminate all jobs, but they will certainly lower the headcount in many service-sector occupations (183). At the same time, this is not a doomsday prediction, as the duo also helps create some jobs, especially for workers with specific skills where the average human still scores higher than AI.

Baldwin asserts that this unprecedented change can lead to a so-called globotics upheaval. This happens when people are forced to find new jobs, and society could wind up in economic, social and political turmoil. Baldwin uses the ubiquity of the iPhone to explain how globotics invades our society: iPhones are everywhere, and we cannot imagine living without them. Remote workers residing in other countries may accept lower wages and may not receive other benefits such as insurance and health care. This creates fierce competition, borne by domestic labour markets. People may view this practice of using remote workers, or telemigrants, as unfair competition (200), triggering discontent.

Baldwin describes how the globotics upheaval could turn into a violent globotics backlash: a fight between millions of service-sector and professional workers and globots (212). Baldwin argues that a failure on the part of mainstream politicians to stop the disruption of communities, the loss of good jobs and the undermining of hope has already resulted, in part, in the twin convulsions of 2016: Donald Trump winning the US presidential election and the UK referendum vote to leave the EU. Protest can be another example of how workers react when their livelihoods and communities are threatened.

Baldwin ends the book with resolution. While it is true that robots are good at many tasks, it is equally true that they are useless in some cases. Some jobs are difficult to automate (e.g. education and technical work) and some cannot be carried out from far away (e.g. hotels and restaurants, transportation and construction). Baldwin argues that future jobs will rely heavily on skills that globots don't have (261). These will require face-to-face interactions that stress humanity's abilities over those of AI robots; such jobs will be newly created in the future. Overall, Baldwin is optimistic about the transformation. Guided by history, he believes that this will make for a better society.

This book is another tour de force from Baldwin. He discusses the past, present and future of globalisation and automation and their implications for the future of work. With the book offering numerous examples, it is easy for readers to stay with Baldwin from cover to cover.

I do agree with Baldwin's argument that the globotics transformation can have a profound impact on the future of work. However, while the evidence has been observed in advanced economies, the book does not address the implications for the Global South in any detail. This is a significant limitation in a book aimed at extending our understanding of the future of work. Developing countries have relied on manufacturing for decades to absorb the flood of labour released from agriculture, and the result has been swift poverty reduction unmatched in human history. As industrialisation fundamentally transformed the West in the nineteenth century and East Asia in the twentieth, and is now transforming Africa, it is important to know how the duo of automation and globalisation can affect development paths in the Global South, given their levels of economic development and human capital. Whether the vulnerable services sector can provide more and better jobs than manufacturing remains an unsolved issue.

Moreover, while job creation is always good, the economy also needs better jobs. Take vulnerable jobs: those without formal working arrangements, lacking decent working conditions, adequate social security and labour rights. Telemigrants tend to be particularly prone to this vulnerability. Additionally, the focus on the effects of globalisation and automation should not be limited to the creation of new jobs or the loss of old ones. What matters is the quality of the job. As observed by Winnie Byanyima:

"It is the quality of jobs that matter. When you talk about low levels of unemployment, you are counting the wrong things. You are not counting dignity of people. You are counting exploited people."

It would have been beneficial had the book shone some light on this vital issue.

In addition, more analysis of the mechanisms by which the resultant upheaval could flare into violent protest would complement the chapter on backlash, one of the key parts of Baldwin's four-step globotics transformation. It is true that rising populism is a reaction to current economic and political situations. Yet the book does not acknowledge other possible ways in which people express their dissent, such as through social media platforms like Facebook or Twitter. Furthermore, the book does not systematically picture how governments engage with and deal with protesters. Not all demonstrations in the street will turn violent, and countries with different levels of democracy and regime repressiveness seem to handle national uprisings differently. Think of the recent protests in Hong Kong and Chile.

Lastly, Baldwin argues throughout the book that the future of jobs depends on how quickly new jobs can be created. But another illuminating framework is how firms use their profits. As pointed out by Mariana Mazzucato, the future of work looks grim when new profit is not used to reinvest in and expand the business but rather to maximise shareholder value through financial instruments. This has happened as finance has come to occupy the core of capitalism, over the same period in which we have seen the rise of globotics. No doubt the changing practices of firms can complement Baldwin's story.

As one of the world's leading thinkers on globalisation, Baldwin offers more than simply a prediction of the future in this book. It belongs on the reading list of all of us who live in this ever-changing world.


Note: This article gives the views of the author, and not the position of USAPP American Politics and Policy, nor of the London School of Economics.


Wannaphong Durongkaveroj is a PhD candidate at the Arndt-Corden Department of Economics, Crawford School of Public Policy, College of Asia and the Pacific at the Australian National University, Australia. His research focuses on poverty, inequality and trade.

Original post:

Book Review: The Globotics Upheaval: Globalization, Robotics and the Future of Work by Richard Baldwin - USAPP American Politics and Policy (blog)