Humans and AI will work together in almost every job, Parc CEO … – Recode

Artificial intelligence is poised to continue advancing until it is everywhere, and before it gets there, Tolga Kurtoglu wants to make sure it's trustworthy.

Kurtoglu is the CEO of Parc, the iconic Silicon Valley research and development firm previously known as Xerox Parc. Although it's best known for its pioneering work in the early days of computing, developing technologies such as the mouse, object-oriented programming and the graphical user interface, Parc continues to help companies and government agencies envision the future of work.

"A really interesting project that we're working on is about how to bring together these AI agents, or computational agents, and humans together, in a way that they form sort of collaborative teams, to go after tasks," Kurtoglu said on the latest episode of Recode Decode, hosted by Kara Swisher. "And robotics is a great domain for exploring some of the ideas there."

Whereas today you might be comfortable asking Apple's Siri for the weather or telling Amazon's Alexa to add an item to your to-do list, Kurtoglu envisions a future where interacting with a virtual agent is a two-way street. You might still give it commands and ask it questions, but it would also talk back to you in a truly conversational way.

"What we're talking about here is more of a symbiotic team between an AI agent and a human," he said. "They solve the problems together; it's not that one of them tells the other what to do. They go back and forth. They can formulate the problem, they can build on each other's ideas. It's really important because we're seeing significant advancements and penetration of AI technologies in almost all industries."

You can listen to Recode Decode on Apple Podcasts, Google Play Music, Spotify, TuneIn, Stitcher and SoundCloud.

Kurtoglu believes that both in our personal lives and in the office, every individual will be surrounded by virtual helpers that can process data and make recommendations. But before artificial intelligence reaches that level of omnipresence, it will need to get a lot better at explaining itself.

"At some point, there is going to be a huge issue with people really taking the answers that the computers are suggesting to them without questioning them, he said. So this notion of trust between the AI agents and humans is at the heart of the technology were working on. Were trying to build trustable AI systems.

So, imagine an AI system that explains itself, he added. If youre using an AI to do medical diagnostics and it comes up with a seemingly unintuitive answer, then the doctor might want to know, Why? Why did you come up with that answer as opposed to something else? And today, these systems are pretty much black boxes: You put in the input, it just spits out what the answer is.

So, rather than just spitting out an answer, Kurtoglu says virtual agents will explain what assumptions they made and how they used those assumptions to reach a conclusion: Here are the paths Ive considered, here are the paths I've ruled out and heres why.
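
As a loose illustration of the kind of self-explaining output Kurtoglu describes (a sketch built on scikit-learn's decision-tree utilities, not anything from Parc), a model can be asked both for the rules it has learned and for the exact path it followed on a single prediction:

```python
# A minimal sketch of a model that can report the reasoning behind a prediction.
# This is an illustrative stand-in, not Parc's technology.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Global view: every path the model can take, with the thresholds it uses.
print(export_text(model, feature_names=list(data.feature_names)))

# Local view: the exact path followed for one sample, i.e. "here is the path
# I considered and why I ended up at this answer."
sample = data.data[:1]
path = model.decision_path(sample)
print("Prediction:", data.target_names[model.predict(sample)[0]])
print("Decision nodes visited:", path.indices.tolist())
```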

New AI tech to bridge the culture gap in organisations: IT experts – BusinessLine

Digital transformation (DX) is set to bridge the culture gap, with DX requiring a new level of collaboration between business leaders, employees, and IT staff, according to IT experts.

"In 2020, a cultural shift and collaborative mentality will become just as important as the technology itself," said Don Schuerman, CTO and VP of product marketing at Pegasystems.

"Organisations will look at the DX culture and ramp up efforts to ensure that DX is optimised for success. Expect traditional organisational boundaries between IT and business lines to start breaking down, and new roles like citizen developer and AI ethicist that blend IT and business backgrounds to grow," he added.

Mankiran Chowhan, Managing Director, Indian Subcontinent, SAP Concur, noted that as we move towards the fourth industrial revolution, workers looking to save time will kick demand for AI into overdrive, and in 2020, workplace changes related to AI will become a noticeable trend.

A recent PwC report revealed that 67 per cent of respondents would prefer AI assistance over humans as office assistants. Band-aid transformation is also expected to lose out to deeper DX efforts. "Offering consumers a slick interface or a cool app only scratches the surface of a true digital transformation," said Pegasystems' Schuerman. He added that next year is bound to witness visible failures of organisations and projects that do not take their transformation efforts below the surface.

AI is also expected to move out of the lab. "Rubber will truly meet the road, with DX tech, which has been in a constant state of being in the labs, moving out," explained Schuerman.

While societal tension around AI will continue, Chowhan said that workers' openness to automation will incrementally drive change. For example, millennials, who now represent the majority of workers, are instinctively comfortable using AI. "As consumers, they are more likely to approve of AI-provided customer support, automated product recommendations, and even want AI to enhance their experience watching sports," he said.

AI and emotional intelligence are expected to converge. Customers are individuals with similar needs: to feel important, heard and respected. As a result, empathetic AI is increasingly applied in advertising, customer service, and to measure how engaged a customer is in their journey.

A report from Accenture showed that AI has the potential to add $957 billion, or 15 per cent of India's current gross value added, to the country's economy in 2035. Chowhan said that in 2020, this trend will kick into gear, with more technology companies infusing empathy into their AI.

"As companies use empathetic AI to bring more of the benefits of advanced technology to life, they will instill more trust, create better user experiences, and deliver higher productivity," said the SAP Concur official.

Machine learning (ML) is also expected to move from a novelty to a routine function. "In 2020, ML will be less of a novelty, as it proliferates under the hood of technology services everywhere, especially behind everyday workflows," said Chowhan. Apart from that, data is expected to move from an analytical to a decision-making tool.

"In 2020, the shift to leveraging data for real-time decision-making will accelerate for a number of business functions," he added, noting that in the coming years, more organisations will start to realise the potential of their data to intelligently guide business decisions and leverage it to reach even greater levels of success.

Dave Russell, Vice-President of Enterprise Strategy at Veeam Software, noted that all applications will become mission-critical: the number of applications that businesses classify as mission-critical will rise during 2020, paving the way to a landscape in which every app is considered a high priority, as businesses become completely reliant on their digital infrastructure.

A Veeam Cloud Data Management report showed IT decision-makers saying their business can tolerate two hours of downtime for mission-critical apps.

"Application downtime costs organisations $20.1 million globally in lost revenue and productivity annually," he said.

Artificial Intelligence in Fintech – Global Market Growth, Trends and Forecasts to 2025 – Assessment of the Impact of COVID-19 on the Industry -…

DUBLIN--(BUSINESS WIRE)--The "AI in Fintech Market - Growth, Trends, Forecasts (2020-2025)" report has been added to ResearchAndMarkets.com's offering.

The global AI in Fintech market was estimated at USD 6.67 billion in 2019 and is expected to reach USD 22.6 billion by 2025. The market is also expected to witness a CAGR of 23.37% over the forecast period (2020-2025).
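
For readers who want to sanity-check the growth arithmetic, the sketch below applies the standard CAGR formula to the figures quoted above. Note that the report's 23.37% is calculated over its 2020-2025 forecast period from its own base-year estimate, so starting from the 2019 figure gives a slightly different number:

```python
# Compound annual growth rate (CAGR) implied by the quoted market sizes.
start_value = 6.67   # USD billion, 2019 estimate
end_value = 22.6     # USD billion, 2025 forecast
years = 6            # 2019 -> 2025

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR from the 2019 base: {cagr:.2%}")  # roughly 22.6% per year
```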

Artificial intelligence improves results by applying methods derived from aspects of human intelligence, but at beyond-human scale. The computational arms race of the past few years has revolutionized fintech companies. Further, data and the near-endless amounts of information available are pushing AI to unprecedented levels, where smart contracts will merely continue the market trend.

Key Highlights

Major Market Trends

Quantitative and Asset Management to Witness Significant Growth

North America Accounts for the Significant Market Share

Competitive Landscape

The AI in fintech market is moving towards fragmentation owing to the presence of many global players. Further, various acquisitions and collaborations among large companies, focused on innovation, are expected to take place shortly. Some of the major players in the market are IBM Corporation, Intel Corporation and Microsoft Corporation, among others.

Some recent developments in the market are:

Key Topics Covered

1 INTRODUCTION

1.1 Study Deliverables

1.2 Scope of the Study

1.3 Study Assumptions

2 RESEARCH METHODOLOGY

3 EXECUTIVE SUMMARY

4 MARKET DYNAMICS

4.1 Market Overview

4.2 Industry Attractiveness - Porter's Five Force Analysis

4.2.1 Bargaining Power of Suppliers

4.2.2 Bargaining Power of Buyers/Consumers

4.2.3 Threat of New Entrants

4.2.4 Threat of Substitute Products

4.2.5 Intensity of Competitive Rivalry

4.3 Emerging Use-cases for AI in Financial Technology

4.4 Technology Snapshot

4.5 Introduction to Market Dynamics

4.6 Market Drivers

4.6.1 Increasing Demand for Process Automation Among Financial Organizations

4.6.2 Increasing Availability of Data Sources

4.7 Market Restraints

4.7.1 Need for Skilled Workforce

4.8 Assessment of Impact of COVID-19 on the Industry

5 MARKET SEGMENTATION

5.1 Offering

5.1.1 Solutions

5.1.2 Services

5.2 Deployment

5.2.1 Cloud

5.2.2 On-premise

5.3 Application

5.3.1 Chatbots

5.3.2 Credit Scoring

5.3.3 Quantitative and Asset Management

5.3.4 Fraud Detection

5.3.5 Other Applications

5.4 Geography

5.4.1 North America

5.4.2 Europe

5.4.3 Asia-Pacific

5.4.4 Rest of the World

6 COMPETITIVE LANDSCAPE

6.1 Company Profiles

6.1.1 IBM Corporation

6.1.2 Intel Corporation

6.1.3 ComplyAdvantage.com

6.1.4 Narrative Science

6.1.5 Amazon Web Services Inc.

6.1.6 IPsoft Inc.

6.1.7 Next IT Corporation

6.1.8 Microsoft Corporation

6.1.9 Onfido

6.1.10 Ripple Labs Inc.

6.1.11 Active.ai

6.1.12 TIBCO Software (Alpine Data Labs)

6.1.13 Trifacta Software Inc.

6.1.14 Data Minr Inc.

6.1.15 Zeitgold GmbH

7 INVESTMENT ANALYSIS

8 MARKET OPPORTUNITIES AND FUTURE TRENDS

For more information about this report visit https://www.researchandmarkets.com/r/y1fj00

The 10 most innovative artificial intelligence companies of 2020 – Fast Company

Artificial intelligence has reached the inflection point where it's less of a trend than a core ingredient across virtually every aspect of computing. These companies are applying the technology to everything from treating strokes to detecting water leaks to understanding fast-food orders. And some of them are designing the AI-ready chips that will unleash even more algorithmic innovations in the years to come.

For enabling the next generation of AI applications with its Intelligence Processing Unit AI chip

As just about every aspect of computing is being transformed by machine learning and other forms of AI, companies can throw intense algorithms at existing CPUs and GPUs. Or they can embrace Graphcore's Intelligence Processing Unit, a next-generation processor designed for AI from the ground up. Capable of reducing the necessary number crunching for tasks such as algorithmic trading from hours to minutes, the Bristol, England, startup's IPUs are now shipping in Dell servers and as an on-demand Microsoft Azure cloud service.

For tutoring clients like Chase to fluency in marketing-speak

Ever tempted to click on the exciting discount offered to you in a marketing email? That might be the work of Persado, which uses AI and data science to generate the marketing language that might work best on you. The company's algorithms learn what a brand hopes to convey to potential customers and suggest the most effective approach, and it works. In 2019, Persado signed contracts with large corporations like JPMorgan Chase, which signed a five-year deal to use the company's AI across all its marketing. In the last three years, Persado claims that it has doubled its annual recurring revenue.

For becoming a maven in discerning customer intent via messaging apps

We may be a long way from AI being able to replace a friendly and knowledgeable customer-service representative. But LivePerson's Conversational AI is helping companies get more out of their human reps. The machine-learning-infused service routes incoming queries to the best agent, learning as it goes so that it grows more accurate over time. It works over everything from text messaging to WhatsApp to Alexa. With Conversational AI and LivePerson's chat-based support, the company's clients have seen a twofold increase in agent efficiency and a 20% boost in sales conversions compared to voice interactions.

For catalyzing care after a patients stroke

When a stroke victim arrives at the ER, it can sometimes be hours before they receive treatment. Viz.ai makes an artificial intelligence program that analyzes the patient's CT scan, then organizes all the clinicians and facilities needed to provide treatment. This sets up workflows that happen simultaneously, instead of one at a time, which collapses how long it takes for someone to receive treatment and improves outcomes. Viz.ai says that its hospital customer base grew more than 1,600% in 2019.

For transforming sketches into finished images with its GauGAN technology

GauGAN, named after post-Impressionist painter Paul Gauguin, is a deep-learning model that acts like an AI paintbrush, rapidly converting text descriptions, doodles, or basic sketches into photorealistic, professional-quality images. Nvidia says art directors and concept artists from top film studios and video-game companies are already using GauGAN to prototype ideas and make rapid changes to digital scenery. Computer scientists might also use the tool to create virtual worlds used to train self-driving cars, the company says. The demo video has more than 1.6 million views on YouTube.

For bringing savvy to measuring the value of TV advertising and sponsorship

Conventional wisdom has it that precise targeting and measuring of advertising is the province of digital platforms, not older forms of media. But Hive's AI brings digital-like precision to linear TV. Its algorithms ingest video and identify its subject matter, allowing marketers to associate their ads with relevant content, such as running a car commercial after a chase scene. Hive's Mensio platform, offered in partnership with Bain, melds the company's AI-generated metadata with info from 20 million households to give advertisers new insights into the audiences their messages target.

For moving processing power to the smallest devices, with its low-power chips that handle voice interactions

Semiconductor company Syntiant builds low-power processors designed to run artificial intelligence algorithms. Because the company's chips are so small, they're ideal for bringing more sophisticated algorithms to consumer tech devices, particularly when it comes to voice assistants. Two of Syntiant's processors can now be used with Amazon's Alexa Voice Service, which enables developers to more easily add the popular voice assistant to their own hardware devices without needing to access the cloud. In 2019, Syntiant raised $30 million from the likes of Amazon, Microsoft, Motorola, and Intel Capital.

For plugging leaks that waste water

Wint builds software that can help stop water leaks. That might not sound like a big problem, but in commercial buildings, Wint says that more than 25% of water is wasted, often due to undiscovered leaks. That's why the company launched a machine-learning-based tool that can identify leaks and waste by looking for water use anomalies. Then, managers for construction sites and commercial facilities are able to shut off the water before pipes burst. In 2019, the company's attention to water leaks helped it grow its revenue by 400%, and it has attracted attention from Fortune 100 companies, one of which reports that Wint has reduced its water consumption by 24%.

For serving restaurants an intelligent order taker across app, phone, and drive-through

If you've ever ordered food at a drive-through restaurant and discovered that the items you got weren't the ones you asked for, you know that the whole affair is prone to human error. Launched in 2019, Interactions' Guest Experience Platform (GXP) uses AI to accurately field such orders, along with ones made via phone and text. The technology is designed to unflinchingly handle complex custom orders, and yes, it can ask you if you want fries with that. Interactions has already handled 3 million orders for clients you've almost certainly ordered lunch from recently.

For giving birth to Kai (born from the same Stanford research as Siri), who has become a finance whiz

Kasisto makes digital assistants that know a lot about personal finance and know how to talk to human beings. Its technology, called KAI, is the AI brains behind virtual assistants offered by banks and other financial institutions to help their customers get their business done and make better decisions. Kasisto was incubated at the Stanford Research Institute, and KAI branched from the same code base and research that birthed Apple's Siri assistant. Kasisto says nearly 18 million banking customers now have access to KAI through mobile, web, or voice channels.

AI could help to reduce diabetic screening backlog in wake of COVID-19 – AOP

Scientists highlight that machine learning could safely halve the number of images that need to be assessed by humans

The study, which was published in the British Journal of Ophthalmology, used the AI technology EyeArt to analyse 120,000 images from 30,000 patient scans in the English Diabetic Eye Screening Programme.

The technology had 95.7% accuracy for detecting eye damage that would require specialist referral, and 100% accuracy for moderate to severe retinopathy.

The researchers, from St George's, University of London, Moorfields Eye Hospital, UCL, Homerton University Hospital, Gloucestershire Hospitals and Guy's and St Thomas' NHS Foundation Trusts, highlight that the introduction of the technology to the diabetic screening programme could save £10 million per year in England alone.

Professor Alicja Rudnicka, from St George's, University of London, said that using machine learning technology could safely halve the number of images that need to be assessed by humans.

"If this technology is rolled out on a national level, it could immediately reduce the backlog of cases created due to the coronavirus pandemic, potentially saving unnecessary vision loss in the diabetic population," she emphasised.

Moorfields Eye Hospital consultant ophthalmologist Adnan Tufai highlighted that most AI technology is tested by developers or companies, but this research was an independent study involving images from real-world patients.

"The technology is incredibly fast, does not miss a single case of severe diabetic retinopathy and could contribute to healthcare system recovery post-COVID," he said.

No Jitter Roll: AI Routing in the Contact Center, Voice Analytics | No Jitter – No Jitter

This week we share announcements around intelligent contact center routing, voice analytics tools, a secure access service edge (SASE) and Google Cloud integration, a personalized videoconferencing kit, and CPaaS funding.

Nice Aims to Hyper-Personalize Customer Engagement

For each interaction, Enlighten AI Routing evaluates data from Enlighten AI and other datasets to get a holistic view of the customer and determine the most influential data for that engagement, Nice said in its press release. Likewise, Enlighten AI Routing assesses agent-related data, such as recent training successes, active listening skills, and empathy, to optimize agent assignments, Nice said.

Enlighten AI, which uses machine learning to self-learn and improve datasets with each interaction, then can provide agents with real-time interaction guidance, Nice said. "[Agents] can see the impact of their actions on the customer center and are given advice on how to adjust their tone, speed, and other key behaviors such as demonstrating ownership to improve it," Barry Cooper, president of the Nice Workforce and Customer Experience Division, said when introducing the product at Interactions.

TCN Launches Voice Analytics for Contact Center

TCN is initially offering Voice Analytics as a free 60-day trial.

Versa Networks, Google Cloud Team on Integration

Konftel Personalizes Meeting Experience

The Konftel Personal Video Kit, available now, is priced at $279.

IntelePeer Gets Funding Boost

Ryan Daily, No Jitter associate editor, contributed to this article.

AI creates fictional scenes out of real-life photos – Engadget

Researcher Qifeng Chen of Stanford and Intel fed his AI system 5,000 photos from German streets. Then, with some human help, it can build slightly blurry made-up scenes. The image at the top of this article is an example of the network's output.

To create an image, a human needs to tell the AI system what goes where: put a car here, put a building there, place a tree right there. It's paint-by-numbers, and the system generates a wholly unique scene based on that input.

Chen's AI isn't quite good enough to create photorealistic scenes just yet. It doesn't know enough to fill in all those tiny pixels. It's not going to replace the high-end special effects houses that spend months building a world. But it could, in the near future, be used to create video game and VR worlds where not everything needs to look perfect.

Intel plans on showing off the tech at the International Conference on Computer Vision in October.

How to Stop Sharing Sensitive Content with AWS AI Services – Computer Business Review

You can use API, CLI, or Console

AWS has released a new tool that allows customers of its AI services to more easily stop sharing their datasets with Amazon for product improvement purposes: something that is currently a default opt-in for many AWS AI services.

Until this week, AWS users had to actively raise a support ticket to opt out of content sharing. (The default opt-in can see AWS take customers' AI workload datasets and store them for its own product development purposes, including outside of the region that end-users had explicitly selected for their own use.)

AWS AI services affected include facial recognition service Amazon Rekognition, voice recording transcription service Amazon Transcribe, natural language processing service Amazon Comprehend and more.

(AWS users can otherwise choose where data and workloads reside; something that is vital for many for compliance and data sovereignty reasons).

Opting in to sharing is still the default setting for customers: something that appears to have surprised many, as Computer Business Review reported this week.

The company has, however, now updated its opt-out options to make it easier for customers to set opting out as a group-wide policy.

Users can do this in the console, by API or command line.

Users will need permission to run organizations:CreatePolicy

Console:

Command Line Interface (CLI) and API
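
The CLI snippets and console screenshots from the original post are not reproduced here. As a rough sketch of the mechanism (an assumption about how AWS Organizations' AI services opt-out policy type is applied via boto3; consult AWS's guide referenced at the end of this article for the authoritative steps), an administrator could set an organisation-wide opt-out roughly as follows:

```python
# Hedged sketch: assumes credentials for the organisation's management account
# and permission to run organizations:CreatePolicy. The policy JSON shape is my
# reading of the AI services opt-out policy syntax; verify against AWS's docs.
import json
import boto3

org = boto3.client("organizations")

opt_out_policy = {
    "services": {
        "default": {
            "opt_out_policy": {"@@assign": "optOut"}
        }
    }
}

response = org.create_policy(
    Name="ai-services-opt-out",
    Description="Opt all accounts out of AI services content sharing",
    Type="AISERVICES_OPT_OUT_POLICY",
    Content=json.dumps(opt_out_policy),
)
policy_id = response["Policy"]["PolicySummary"]["Id"]

# Attach the policy to the organisation root so it applies to every account.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy_id, TargetId=root_id)
```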

Editor's note: AWS has been keen to emphasise a difference between "content" and "data" following our initial report, asking us to correct our claim that AI customer data was being shared by default with Amazon, including sometimes outside selected geographical regions. It is, arguably, a curious distinction. The company appears to want to emphasise that the opt-in is only for AI datasets, which it calls content.

(As one tech CEO puts it to us: "Only a lawyer that never touched a computer might feel smart enough to venture into content, not data wonderland.")

AWS's own new opt-out page initially disputed that characterisation.

It read: "AWS artificial intelligence (AI) services collect and store data as part of operating and supporting the continuous improvement life cycle of each service."

"As an AWS customer, you can choose to opt out of this process to ensure that your data is not persisted within AWS AI service data stores." [Our italics]

AWS has since changed the wording on this page to the more anodyne "You can choose to opt out of having your content stored or used for service improvements" and asked us to reflect this. For AWS's full new guide to creating, updating, and deleting AI services opt-out policies, meanwhile, see here.

AI model developed to identify individual birds without tagging – The Guardian

For even the most sharp-eyed of ornithologists, one great tit can look much like another.

But now researchers have built the first artificial intelligence tool capable of identifying individual small birds.

Computers have been trained to recognise dozens of individual birds, which could potentially save scientists arduous hours in the field with binoculars, as well as the need to catch birds to fit coloured rings to their legs.

"We show that computers can consistently recognise dozens of individual birds, even though we cannot ourselves tell these individuals apart," said André Ferreira, a PhD student at the Centre for Functional and Evolutionary Ecology (CEFE-CNRS) in France. "In doing so, our study provides the means of overcoming one of the greatest limitations in the study of wild birds: reliably recognising individuals."

Ferreira began exploring the potential of artificial intelligence while in South Africa, where he studied the co-operative behaviour of the sociable weaver, a bird which works with others to build the world's largest nest.

He was keen to understand the contribution of each individual to building the nest, but found it hard to identify individual birds from direct observation because they were often hiding in trees or building parts of the nest out of sight. The AI model was developed to recognise individuals simply from a photograph of their backs while they were busy nest-building.

Together with researchers at the Max Planck Institute of Animal Behaviour, in Germany, Ferreira then demonstrated that the technology can be applied to two of the most commonly-studied species in Europe: wild great tits and captive zebra finches.

For AI models to accurately identify individuals they must be trained with thousands of labelled images. This is easy for companies such as Facebook with access to millions of pictures of people voluntarily tagged by users, but acquiring labelled photographs of birds or animals is more difficult.

The researchers, also from institutes in Portugal and South Africa, overcame this challenge by building feeders with camera traps and sensors. Most birds in the study populations already carried a tag similar to microchips which are implanted in cats and dogs. Antennae on the bird feeders read the identity of the bird from these tags and triggered the cameras.

The AI models were trained with these images and then tested with new images of individuals in different contexts. Here they displayed an accuracy of more than 90% for wild great tits and sociable weavers, and 87% for the captive zebra finches, according to the study in the British Ecological Society journal Methods in Ecology and Evolution.
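
As a rough sketch of the general pipeline described here (an illustrative stand-in, not the study's published code), one could train a small convolutional classifier on photos labelled automatically via the tagged feeders and then evaluate it on new images of the same individuals:

```python
# Minimal sketch: assumes a folder per bird, e.g. data/train/<bird_id>/*.jpg
# and data/test/<bird_id>/*.jpg, with labels provided by the RFID-triggered feeders.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory("data/train", image_size=(224, 224))
test_ds = tf.keras.utils.image_dataset_from_directory("data/test", image_size=(224, 224))
num_birds = len(train_ds.class_names)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_birds),  # one output per known individual
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
print(model.evaluate(test_ds))  # [loss, accuracy] on new images of known birds
```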

While some larger individual animals can be recognised by the human eye because of distinctive patterns (a leopard's spots, or a pine marten's chest markings, for example), AI models have previously been used to identify individual primates, pigs and elephants. But the potential of the models had not been explored in smaller creatures outside the laboratory, such as birds.

According to Ferreira, the lead author of the study, the use of cameras and AI computer models could save on expensive fieldwork, spare animals from procedures such as catching and marking them, and enable much longer-term studies.

"It is not very expensive to put a remote camera on a study population for eight years, but for somebody to stay there and do the fieldwork for that long is not always possible," he said. "It removes the need for the human to be a data collector, so researchers can spend more time thinking about the questions instead of collecting the data."

The AI model is currently only able to re-identify individuals it has been shown before. But the researchers are now trying to build more powerful AI models that can identify larger study groups of birds, and distinguish individuals even if they have never seen them before. This would enable new individuals without tags to be recorded by the cameras and recognised by the computers.

How AI will help freelancers – VentureBeat

When it comes to artificial intelligence (AI), there's a common question: Will AI technology render certain jobs obsolete? The common prediction is yes, many jobs will be lost as a result. But what do the numbers say? Even more importantly, what does logical thought suggest?

You don't need a special degree to understand the textbook relationship between automation and jobs. The basic theory is that for every job that's automated by technology, there's one less job available to a human who once performed the same task. In other words, if a machine can make a milkshake with the push of a button, then there's no need for the person who previously mixed the shake by hand. If a robot can put a car door on a vehicle in a manufacturing plant, then there's no need for the workers who previously placed the doors on by hand. You get the idea.

But does this theory really hold up on a reliable basis, or is it merely a theory that works in textbooks and PowerPoint presentations? One study or report doesn't discount a theory, but a quick glance at some recent numbers paints a different picture that requires a careful look at this pressing issue.

Upwork, an online platform that connects freelancers with clients, recently published data from its website that shows AI was the second-fastest-growing in-demand skill over the first quarter of 2017.

"With artificial intelligence (AI) at the forefront of the conversation around what the future of work holds, it's no surprise it is the fastest-growing tech skill and the second fastest-growing skill overall," Upwork explained in a statement. "As AI continues to shape our world and influence nearly every major industry and country, companies are thinking about how to incorporate it into their business strategies, and professionals are developing skills to capitalize upon this accelerating tech trend. While some speculate that AI may be taking jobs away, others argue it's creating opportunity, which is evidenced by demand for freelancers with this skill."

This latter opinion might be a contrarian view, but in this case, the data supports it. At this point, there really isn't a whole lot of data that says AI is killing jobs. Consider that as you form your opinions.

From a logical point of view, you have to consider the fact that the professional services industry isn't going anywhere. Sure, there might be automated website builders and accounting software, but the demand for freelancers isn't going to suddenly disappear. Growth in AI isn't going to replace web developers, accountants, lawyers, and consultants. If anything, it's going to assist them and make them more efficient and profitable.

"Everything we love about civilization is a product of intelligence," says Max Tegmark, president of the Future of Life Institute. "So amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before, as long as we manage to keep the technology beneficial."

When you look at freelancers, in particular, you can already see how AI is having a positive impact in the form of powerful tools and resources that can complement and expand existing skillsets. Here are a couple of tools:

We've only just begun. These technologies will look totally rudimentary when we look back in a few years and recap the growth of AI. However, for now, they serve as an example of what the future holds for the freelance labor market.

The future of work, particularly as it deals with expected growth in AI, is anybody's guess. Ask AI expert Will Lee about his expectations and he'll say there are two possible futures:

The first future Lee sees is one where AI has led to high unemployment and people are forced to freelance in order to get by. The only problem is it will be difficult for freelancers to differentiate themselves from the crowd because they'll be offering the same exact services as everyone else.

In this first possibility, people struggle to recognize their value and the uneducated freelance labor force is swallowed up by superior automated technology. But then there's a second possibility, where AI technology actually fuels growth in the freelance economy and humans and machines harmoniously work together.

"In the second possibility, we've built a sustainable freelance market based on each individual's special skills and passions," Lee says. "Each freelancer is able to find work and make a living due to their ability to differentiate themselves from others in the freelance market."

Experts in the AI field, folks like Will Lee who dedicate their working lives to understanding the impact technology will have on labor, don't know how this is going to unfold. It could be disastrous, or it could be highly beneficial. Making rash statements about how AI is going to collapse the freelance economy is unwise. You don't know how things will unfold and it's better to remain optimistic that everything will work out for the greater good.

One thing is certainly clear: Technology is changing and the ramifications of this evolution will be felt in every capacity of business. We'll know a lot more in three to five years, so hold on and enjoy the ride.

Larry Alton is a contributing writer at VentureBeat covering artificial intelligence.

Google to launch AI Research Lab in Bengaluru – Economic Times

Google is launching an Artificial Intelligence (AI) Research Lab in Bengaluru in order to create products not just for India but for the rest of the world, the Mountain View-headquartered Internet giant said during its flagship Google for India event on Thursday. The Lab will be led by Manish Gupta, a SEM (Society for Experimental Mechanics) Fellow.

The slew of new initiatives in India also includes a tie-up with state-run BSNL for expanding Wi-Fi hotspots in villages in Gujarat, Bihar and Maharashtra. This comes after the company launched a project to connect 500 railway stations in the country; it has since claimed to have connected close to 5,000 venues across four continents. Google also announced a phone line, in partnership with Vodafone Idea, that lets users get their queries answered even with a 2G phone, in English and Hindi.

"We want to adapt our products for Indians instead of asking Indians to adapt to Google technology," said Caesar Sengupta, Vice President, Next Billion Users and Payments, Google, adding that when the company creates products for India, it creates for the world.

Ravi Shankar Prasad, union minister for electronics and IT, said that during his meeting with Google chief Sundar Pichai in Mountain View last week, he asked him to make India a launch pad for products. "I told him that what becomes successful in India will become successful globally. I am very happy that they are doing that," added Prasad.

Google is also expanding its Google Pay payment app, which already has 67 million users and has processed transactions worth $110 billion so far, to include a new app for business. Google is also bringing its existing tokenisation technology to India, which will make payments available to debit and credit card holders through a system of tokenised cards: users pay for things using a digital token on the phone rather than the actual card number.

It can be used to pay merchants that accept Bharat QR codes, and those that have NFC, by just tapping the card. Google said that it will make payments much more secure, apart from making online payments much more seamless by doing away with the need to enter card details. It will be rolled out over the next few weeks for Visa cards in partnership with banks such as HDFC Bank, Axis Bank and SBI Cards, among others. The feature will be rolled out for Mastercard and RuPay cards in the coming months.

Google is also launching a Spot Platform, which enables merchants to create exclusive shop fronts within the Google Pay app using simple JavaScript, saving them from building expensive websites or applications. Applications such as UrbanClap, Goibibo, MakeMyTrip, RedBus, Eat.Fit and Oven Story are already on board the Spot Platform. Google will also extend the platform to entry-level neighbourhood job searches through the Google Pay app. "This will mainly target the unorganised segment in areas such as retail and hospitality," added Sengupta. Google is partnering with NSDC for this initiative, along with firms such as Swiggy, and it is first being rolled out in Delhi-NCR before a nationwide rollout.

Toby Wals’ 2062: The World that AI made – The Tribune

Joseph Jude

What makes us, humans, remarkable? Is it our physical strength? Or is it our intelligence? Perhaps our desire to live in communities? Some of our cousins in the animal kingdom have all those qualities. So what differentiates us?

Our early ancestors were limited in their ability to learn life-skills for two reasons: they had to be physically around those who knew the skills, and they had to learn only through signs. When we started to speak, learning became easier and faster. When we invented writing, ideas spread wider. Hence language is the differentiating feature, Prof Toby Walsh argues in this book.

Even with speech and writing, learning suffers in a vital way. The speaker translates thought to speech; likewise, the listener turns what is heard back into thought. Information is lost in these multiple translations. Digital companies like Tesla and Apple have eliminated such loss of information: they can transmit what one device learns to millions of other devices as code. Every device immediately knows what every other device learns, without the need for translation.

Professor Walsh expands this idea to imagine the next species in our homo family: Homo digitalis. He says homo digitalis will be both biological and digital, living both in our brains and in the broader digital space.

In exploring this parallel living more deeply, he debunks the possibility of the singularity, the anticipated time when we will have an intelligent machine that can cognitively improve itself. Only futurists and philosophers believe the singularity is inevitable. AI researchers, on the other hand, think we will have machines that reach human-level intelligence, surpassing our intelligence in several narrow domains. Experts believe we will have such machines by around 2062, which explains the title of the book. The book focuses on the impact such machines will have on our lives. Interwoven into this focus are the steps we can take now to shape a better future.

Professor Walsh deals with ramifications of AI-augmented humans on work, privacy, politics, equality, and wars. He paints neither a dystopian nor a utopian future. As an academician, he carefully constructs his arguments on research, data, and trends.

When we ponder artificial intelligence, we conjure up an intellect superior to humans. What might come as a surprise is just how artificial machine intelligence will be. Take flight, for example: we fly, but not like birds. A machine might simulate a storm, but it won't get wet. So values and fairness determined by machines will be unnatural. This should motivate us to take steps to shape a better future, one that isn't unnatural.

The book excels in painting a realistic picture of an AI-based future. However, it falters on the steps we can take to avoid a disastrous fate. The author promotes government regulation as a primary tool to control AI. But did UN-backed regulations prevent monstrous state actors from acquiring chemical weapons and using them on their citizens? As and when lethal autonomous weapons are manufactured, how difficult would it be for determined non-state actors to obtain them? Another mistake is treating AI as a singular technology. Governments should regulate AI not at the input side (collecting, storing, and processing data), but where it is used: loan processing, police departments, weapons, and so on.

There is no question that technology as powerful as AI should be regulated. But regulation alone wont work. We need a comprehensive approach that involves academia, citizen activists, steering groups and task forces. The Internet is one example around us.

One data point in the book should hit Indians the hardest; I wish it becomes the wake-up call for policymakers, entrepreneurs, and concerned citizens: Tianjin, a city in China, outspends India on AI. The best time to invest in AI was yesterday; the next best time is now.

Storytelling & Diversity: The AI Edge In LA – Forbes

LA is known as the land of storytellers, but when it comes to its own story, the entertainment business is still front and center. In fact, LA has been at the core of a flourishing AI scene for decades. From the 1920s through today, elite mathematicians and engineers have been putting their knowledge to work for a multitude of industries such as health, aerospace, and media, with relatively little visibility in the tech limelight.

Now, these industries are poised to bring together a convergence of knowledge across cutting-edge technologies, and LA may finally have its day in the spotlight as a focal point for cross-disciplinary innovation.

LA's history in technology has its roots in the aerospace world: because of the region's perfect weather and vast open spaces, it became an ideal setting for the aerospace industry to plant its roots in the early 1900s. Companies like Douglas Aircraft and JPL were able to find multi-acre properties to test rockets and build large airfields.

The engineering know-how and nature of aviation work fueled the manufacturing sector in Southern California during WWII, and the region eventually became the birthplace of the internet as we know it, when UCLA, funded by the Department of Defense, sent the first message via ARPANET in the same year we first landed a man on the moon.

Through busts and booms, engineering talent was both attracted to the area and nurtured at many well-known and respected educational institutions such as Caltech, USC, and UCLA, helping to augment the labor pool as well as becoming important sources of R&D.

This engineering talent continued to extend its branches out into other industries, such as health and wellness, which are natural extensions for a population already obsessed with youth, fitness and body perfection.

Today, LA sits as a unifying center for life sciences, entertainment, media, and aerospace with frontier technologies such as AI pushing innovation across these core industries and providing a platform for new discoveries, cures, and social interactions.

Dave Whelan, Chief Executive Officer of BioscienceLA, believes diversity is LA's secret weapon when it comes to its potential to become the global epicenter for AI innovation. He notes LA's widely diverse diaspora, which makes it a perfect place to train AI.

"The entire world's global population resides in LA. If you look at AI for healthcare, you have the raw materials in patient and health data that provide the widest range of possibilities. Combine that with the mix of the creative workforce, diversity of economies, and SoCal mindset, all together making LA a prime center for innovation that has yet to rightly take its place in the sun when compared to the attention that Silicon Valley receives."

The AI opportunity to save lives is particularly meaningful, especially in today's pandemic times. How do we apply AI in a way that can help with early detection, identify clusters, sequence DNA, or source the right treatments? Many aspects of life sciences are computational, and mathematical biologists have been entrenched in LA for some time, providing services such as computational epidemiology, a multidisciplinary field that leverages computer science and mathematics to understand the spread of diseases and other public health issues.

Brian Dolan, CEO and founder of VerdantAI, who has his roots in statistical genetics and biomathematics, has seen the converging evolution of the tech scene in LA and is actively committed to building out the AI ecosystem. His startup studio is focused on launching new AI companies into market and partnering with large enterprises to help them turn their data into products.

"It's not hard to argue that now is the time to focus on the big problems, like COVID and climate change. We need curious, dedicated, intelligent people to take these things on, and Los Angeles certainly offers that kind of talent. Our innovation diversity goes beyond demographics and into industries, geographies, and even ecologies. No other city can really offer that."

Brian's previous company, Deep 6 AI, applies artificial intelligence to the clinical trial process by finding patients for medical trials and getting life-saving cures to people more quickly. Today, Brian and his team at Verdant are incubating technologies to optimize carbon-neutral supply chain networks, leveraging advanced medical NLP technology to read medical texts to create precision digital health experiences, and working on a mental health solution aimed at addiction and recovery.

Building a thriving ecosystem takes time and imagination. AI is both a disruptive force and a major opportunity, but dispelling the myths around AI is important in order to map out its impact and full potential.

Ronni Kimm, founder of Collective Future, uses future visioning to help bring outside perspectives into organizations. Future visioning is important for accelerating innovation, as it provides the ability to respond to, and proactively be part of, the stories of change. Her design and innovation studio helps bring strategic transformation to companies from a top-down and bottom-up perspective.

"Health sciences and life sciences have some of the most interesting challenges in the world, but there are not enough stories to help people understand how powerful approaches such as predictive analytics in health science can dramatically impact successful organ transplants or predict at-risk patient complications," says Ronni. "I see storytelling as one of the most important aspects of accelerating technology; creating more stories around these incredible innovations is where LA can excel in building resilient ecosystems and bringing more of these technologies to market."

Today LA sits at the center of multiple industries, where talent pools cross-pollinate and inspire new ideas. Its diverse and colorful population offers data not readily available in other geographies, making it ideal for big data applications that leverage AI. Its educational institutions feed and train new labor pools and its proximity to creative fields inspires new ways to leverage technology in traditional industries.

Ideas such as bringing the spatial web to life, holograms to offer new methods of care, and digital twins to create cross reality environments are just some of the ideas coming to life in LA.

As technology continues to advance, be sure to be on the lookout for more stories about the rise and influence of AI across these massive industries.

The imaging AI field is exploding, but it carries unique challenges – Healthcare IT News

The use of machine learning and artificial intelligence to analyze medical imaging data has grown significantly in recent years, with 60 new products approved by the U.S. Food and Drug Administration in 2020 alone.

But AI scaling, particularly in the medical field, can face unique challenges, said Elad Benjamin, general manager of Radiology and AI Informatics at Philips, during an Amazon Web Services presentation this week.

"The AI field is experiencing an amazing resurgence of applications and tools and products and companies," said Benjamin.

The question many companies are seeking to answer is: "How do we use machine learning and deep learning to analyze medical imaging data and identify relevant clinical findings that should be highlighted to radiologists or other imaging providers to provide them with the decision support tools they need?" Benjamin asked.

He outlined three main business models being pursued:

Benjamin described common challenges and bottlenecks in the process of developing and marketing AI tools, noting that some were specifically hard to tackle in healthcare.

Gathering data at scale is one hurdle, he noted, and diversity of information is critical and sometimes difficult to achieve.

And labeling data, for instance, is the most expensive and time-consuming process, and requires a professional's perspective (as opposed to other industries, where a layperson could label an image as a "car" or a "street" without too much trouble).

Receiving feedback and monitoring are critical too.

"You need to understand how your AI tools are behaving in the real world," Benjamin said. "Are there certain subpopulations where they are less effective? Are they slowly reducing in their quality because of a new scanner or a different patient population that has suddenly come into the fold?"

Benjamin said Philips, with the help of AWS tools such as HealthLake, SageMaker and Comprehend, is tackling these bottlenecks.

"Without solving these challenges it is difficult to scale AI in the healthcare domain," he said.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.

AI cameras may be used to detect social distancing as US is reopening – Business Insider – Business Insider

As businesses across the United States have gradually begun to reopen, a growing number of companies are investing in camera technology powered by artificial intelligence to help enforce social distancing measures by detecting when people may be standing too close together.

"[If] I want to manage the distance between consumers standing in a line, a manager can't be in all places at once," Leslie Hand, vice president of retail insights for the International Data Corporation, told Business Insider. "Having a digital helper that's advising you when folks are perhaps in need of some advice is useful."

Businesses throughout the country have started operating again under restrictions, such as enforcing social distancing measures, requiring customers to wear masks, and reducing capacity. New York City, which was the epicenter of the virus' outbreak in the US, is set to enter Phase II of its reopening plan on Monday.

The White House's employer guidelines for all phases of reopening include developing policies informed by best practices, particularly social distancing. And some experts believe smart cameras can help retailers and other companies detect whether such protocols are being followed.

"There's some technology coming out on the horizon that will be able to be incorporated into the nuts and bolts that you already have in your store," Barrie Scardina, head of Americas retail for commercial real estate services firm Cushman & Wakefield, said to Business Insider.

Some companies have already begun experimenting with such technologies. Amazon said on June 16 that it developed a camera system that's being implemented in some warehouses to detect whether workers are following social distancing guidelines. The company's so-called "Distance Assistant" consists of a camera, a 50-inch monitor, and a local computing device, which uses depth sensors to calculate distances between employees.

When a person walks by the camera, the monitor would show whether that person is standing six feet apart from nearby colleagues by overlaying a green or red circle around the person. Green would indicate the person is properly socially distanced, while red would suggest the people on camera may be too close together. Amazon is open-sourcing the technology so that other companies can implement it as well.
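
The core check behind such a system can be illustrated with a small sketch (an assumption about the general approach, not Amazon's released code): given estimated floor positions for the people in a frame, flag any pair standing closer than six feet.

```python
# Toy sketch of the distance check; positions would come from the depth sensor.
from itertools import combinations
import math

SIX_FEET_METRES = 1.83  # six feet expressed in metres

def flag_violations(positions):
    """positions: dict of person_id -> (x, y) floor coordinates in metres.
    Returns the ids of people standing too close to someone else."""
    too_close = set()
    for (id_a, pos_a), (id_b, pos_b) in combinations(positions.items(), 2):
        if math.dist(pos_a, pos_b) < SIX_FEET_METRES:
            too_close.update({id_a, id_b})
    return too_close

# Example frame: "c" and "d" would be circled in red, "a" and "b" in green.
frame = {"a": (0.0, 0.0), "b": (3.0, 0.0), "c": (5.0, 0.0), "d": (5.5, 1.0)}
print(flag_violations(frame))  # {'c', 'd'}
```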

Motorola Solutions also announced new analytics technology in May that enables its Avigilon security cameras to detect whether people are social distancing and wearing masks. The system uses AI to collect footage and statistical patterns that can be used to provide notifications to organizations about when guidelines around wearing face masks or honoring social distancing measures are being breached.

Pepper Construction, a Chicago-based construction company, has also begun using software from a company called SmartVid.io to keep an eye on where workers may be grouping, as Reuters reported in late April.

Scardina offered some examples illustrating how smart cameras can help retailers enforce social distancing. Workers can use such technologies to see where customers are clustering so that they can make decisions about how to arrange furniture and fixtures within the store. If a table needs to be moved further away from another display because customers don't have space to stand six feet apart, AI camera technology can help retailers spot this.

As far as how widespread that technology will become in stores, Scardina says it will depend on factors such as a retailer's budget and the size of the shop.

While more companies may be investing in either developing or implementing new camera technologies, there will inevitably be challenges that arise when putting them into practice, says Pieter J. den Hamer, senior director of artificial intelligence for Gartner Research.

Not only could implementing such tech raise privacy concerns, but there are also practical limitations. A camera may not know if two people standing close together belong to the same household, for example.

All 50 states have reopened at some capacity, putting an end to stay-at-home orders that had been in effect since March to curb the coronavirus' spread, and some states are now seeing a spike in cases. The New York Times recently reported that at least 14 states have experienced positive cases that have outpaced the average number of administered tests.

The coronavirus has killed at least 117,000 people in the US and infected more than 2.1 million as of June 18, according to the Times, and experts predict there will be a second wave. But President Trump has said the country won't be closing again.

"It's a very, very complex debate full of dilemmas," den Hamer said. "Should we prioritize opening up the economy, or should we prioritize the protection of our privacy?"

Creativity and AI: The Next Step – Scientific American

In 1997 IBM's Deep Blue famously defeated chess Grand Master Garry Kasparov after a titanic battle. It had actually lost to him the previous year, though he conceded that it seemed to possess a weird kind of intelligence. To play Kasparov, Deep Blue had been pre-programmed with intricate software, including an extensive playbook with moves for openings, middle game and endgame.

Twenty years later, in 2017, Google unleashed AlphaGo Zero which, unlike Deep Blue, was entirely self-taught. It was given only the basic rules of the far more difficult game of Go, without any sample games to study, and worked out all its strategies from scratch by playing millions of times against itself. This freed it to think in its own way.

These are the two main sorts of AI around at present. Symbolic machines like Deep Blue are programmed to reason as humans do, working through a series of logical steps to solve specific problems. An example is a medical diagnosis system in which a machine deduces a patient's illness from data by working through a decision tree of possibilities.
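
As a toy illustration of that rule-following style, the short Python sketch below hand-codes a miniature diagnostic decision tree. The conditions and "diagnoses" are invented, but the structure shows how a symbolic system works through logical steps to reach an answer.

def diagnose(patient):
    # A hand-written decision tree: each branch is an explicit, pre-programmed rule.
    if patient["fever"]:
        if patient["cough"]:
            return "suspected respiratory infection"
        return "fever of unknown origin; order blood work"
    if patient["joint_pain"]:
        return "possible arthritis; refer to a specialist"
    return "no rule matched; refer to a physician"

print(diagnose({"fever": True, "cough": True, "joint_pain": False}))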

Artificial neural networks like AlphaGo Zero are loosely inspired by the wiring of the neurons in the human brain and need far less human input. Their forte is learning, which they do by analyzing huge amounts of input data or rules, such as the rules of chess or Go. They have had notable success in recognizing faces and patterns in data, and they also power driverless cars. The big problem is that scientists don't yet know why they work as they do.

But it's the art, literature and music that the two systems create that really points up the difference between them. Symbolic machines can create highly interesting work, having been fed enormous amounts of material and programmed to do so. Far more exciting are artificial neural networks, which actually teach themselves and which can therefore be said to be more truly creative.

Symbolic AI produces art that is recognizable to the human eye as art, but it's art that has been pre-programmed. There are no surprises. Harold Cohen's AARON algorithm produces rather beautiful paintings using templates that have been programmed into it. Similarly, Simon Colton at Goldsmiths College in the University of London programs The Painting Fool to create a likeness of a sitter in a particular style. But neither of these ever leaps beyond its program.

Artificial neural networks are far more experimental and unpredictable. The work springs from the machine itself without any human intervention. Alexander Mordvintsev set the ball rolling with his Deep Dream, whose nightmare images, spawned from convolutional neural networks (ConvNets), seem almost to spring from the machine's unconscious. Then there's Ian Goodfellow's GAN (Generative Adversarial Network), with the machine acting as the judge of its own creations, and Ahmed Elgammal's CAN (Creative Adversarial Network), which creates styles of art never seen before. All of these generate far more challenging and difficult works: the machine's idea of art, not ours. Rather than being a tool, the machine participates in the creation.
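
For readers who want to see the adversarial idea in code, here is a minimal, assumed sketch (Python with PyTorch) of a generator and a discriminator that judges its output. It learns to imitate a simple one-dimensional distribution rather than images, but the training loop has the same shape as the GANs described above.

import torch
from torch import nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples from N(3, 0.5)
    fake = generator(torch.randn(64, 8))    # the generator's attempts

    # Discriminator: learn to tell real samples from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to fool the discriminator, the machine judging its own creations.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)).detach().flatten())  # samples should drift toward ~3.0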

In AI-created music the contrast is even starker. On the one hand, we have François Pachet's Flow Machines, loaded with software to produce sumptuous original melodies, including a well-reviewed album. On the other, researchers at Google use artificial neural networks to produce music unaided. But at the moment their music tends to lose momentum after only a minute or so.

AI-created literature illustrates best of all the difference in what can be created by the two types of machines. Symbolic machines are loaded with software and rules for using it and trained to generate material of a specific sort, such as Reuters news reports and weather reports. A symbolic machine equipped with a database of puns and jokes generates more of the same, giving us, for example, a corpus of machine-generated knock-knock jokes. But as with art their literary products are in line with what we would expect.

Artificial neural networks have no such restrictions. Ross Goodwin, now at Google, trained an artificial neural network on a corpus of scripts from science fiction films, then instructed it to create sequences of words. The result was the fairly gnomic screenplay for his film Sunspring. With such a lack of constraints, artificial neural networks tend to produce work that seems obscure, or should we say experimental? This sort of machine ventures into territory beyond our present understanding of language and can open our minds to a realm often designated as nonsense. NYU's Allison Parrish, a composer of computer poetry, explores the line between sense and nonsense. Thus, artificial neural networks can spark human ingenuity. They can introduce us to new ideas and boost our own creativity.
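
Goodwin's system was a recurrent neural network; as a far simpler stand-in, the sketch below builds a word-level Markov chain from a tiny invented corpus and samples new sequences from it. It is only meant to convey the basic "learn the statistics of a corpus, then generate" idea.

import random
from collections import defaultdict

corpus = ("the ship drifts beyond the light . the light answers in silence . "
          "the silence drifts beyond the ship").split()

# Record which words follow which in the training text.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def generate(start="the", length=12, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate())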

Proponents of symbolic machines argue that the human brain, too, is loaded with software, accumulated from the moment we are born, which means that symbolic machines can also lay claim to emulating the brain's structure. Symbolic machines, however, are programmed to reason from the start.

Conversely, proponents of artificial neural networks argue that, like children, machines need first to learn before they can reason. Artificial neural networks learn from the data they've been trained on but are inflexible in that they can only work from the data that they have.

To put it simply, artificial neural networks are built to learn and symbolic machines to reason, but with the proper software they can each do a little of the other. An artificial neural network powering a driverless car, for example, needs to have the data for every possible contingency programmed into it so that when it sees a bright light in front of it, it can recognize whether it's a bright sky or a white vehicle, in order to avoid a fatal accident.

What is needed is to develop a machine that includes the best features of both symbolic machines and artificial neural networks. Some computer scientists are currently moving in that direction, looking for options that offer a broader and more flexible intelligence than neural networks by combining them with the key features of symbolic machines.

At DeepMind in London, scientists are developing a new sort of artificial neural network that can learn to form relationships in raw input data and represent them in logical form as a decision tree, as in a symbolic machine. In other words, they're trying to build in flexible reasoning. In a purely symbolic machine all this would have to be programmed in by hand, whereas the hybrid artificial neural network does it by itself.
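
The DeepMind work is far more sophisticated, but the general idea of learning with a network and then reading the result back in symbolic form can be sketched, under loose assumptions, with off-the-shelf tools: train a small scikit-learn neural network, then fit a shallow decision tree to its predictions and print the resulting rules.

from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Learn the task with a small neural network.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)

# Fit a shallow tree to the network's own predictions, so the tree becomes a
# readable, symbolic description of what the network learned.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net.predict(X))

print(export_text(tree, feature_names=["x1", "x2"]))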

In this way, combining the two systems could lead to more intelligent solutions and also to forms of art, literature and music that are more accessible to human audiences while also being experimental, challenging, unpredictable and fun.

Continue reading here:

Creativity and AI: The Next Step - Scientific American

Why a $30 million CryptoPunks auction fell apart at the last minute – The Verge

In the Sotheby's salesroom one evening in late February, fluorescent lights beamed down on the assembled crowd. A sea of spectators is not unusual for Sotheby's (the 278-year-old auction house typically hosts more than 600 sales per year), but this sale was different. It was the auction house's first-ever evening sale dedicated solely to NFTs.

Sotheby's described the event, titled "Punk It!," as "a truly historic sale for an undeniably historic NFT project." It consisted of a single lot: 104 CryptoPunks sold as an all-or-nothing bundle. Sotheby's estimated the bundle would go for between $20 million and $30 million, on par with sales of paintings by David Hockney or Jean-Michel Basquiat.

To drum up interest, the auction house had thrown a series of events aimed at attracting prospective punk-buyers. There was a pre-auction dinner for VIP Punk holders and an afterparty with DJ Seedphrase, known for the enormous CryptoPunk headpiece he wears while playing sets. The campaign worked: the crowd on the day of the auction included Nicole Muniz, the CEO of Yuga Labs, as well as NFT influencer Andrew Wang and Nifty Gateway co-founders Duncan and Griffin Cock Foster.

Eli Tan, a writer at crypto news outlet CoinDesk, remembers a party atmosphere. "The actual sale seemed like kind of a secondary thing," he explains.

Then, things got weird. The indicated start time of the sale, 7PM, came and went. Five minutes passed, then 20. Finally, a voice on the intercom announced that the lot had been withdrawn. Gasps could be heard in the salesroom. After weeks of preparation, the sale was canceled, and no one was sure why.

Sotheby's says the lot was pulled after discussions with the seller, but there's been little other explanation, including whether the decision came from the auction house or the seller. Artnet reported that Sotheby's pulled the lot due to lack of interest, while the seller tweeted simply that they had decided to hodl.

It's not uncommon for lots to be pulled ahead of sales, although it's typically the result of legal concerns or fear of a flop, as The New York Times noted after the failed auction. But for anyone dealing with auction houses or NFTs, it was hard not to speculate on the mysterious no-show. Kenny Schachter, an art world provocateur and NFT collector who attended the sale, believes the seller pulled the lot after being informed by the auction house that it was unlikely to sell for the low estimate. Schachter even heard rumors of a legitimate and significant offer that the seller declined in advance of the failed auction, one that cleared $10 million but still fell short of the lower end of the estimate.

It should have been a high point for CryptoPunks as a collection. Just a few months earlier, a bundle of 101 BAYC tokens had sold for $24 million, and CryptoPunks fans were primed for a similar victory. Instead, punk-holders left the auction feeling burned, and for good reason. "They were pretty devastated," Tan says. "They were showing me their Punks and they were like, this is the end, probably."

They weren't the only ones watching. Less than three weeks after the auction, Yuga Labs would acquire CryptoPunks, effectively ending the project's run as an independent NFT juggernaut. Tan suspects the Yuga Labs team, including CEO Muniz, could have been at the sale to scope out the Punks market.

But despite the risks, there's a real value to putting NFTs up for auction, and the anonymous seller seems to have come away from the auction just fine. A few weeks after the sale, it was reported that the seller took out an $8 million loan against the Punks with the help of NFTfi and MetaStreet.

Stephen Young, co-founder of NFTfi, a platform that allows NFT collectors to use their NFTs as collateral on loans, explains that selling at a traditional auction house is a way to give a collection an institutional stamp of approval, a precursor to this kind of loan. According to Young, if an NFT from a collection has sold once at Sotheby's or Christie's, it's enough to inflate the price and legitimize the entire collection.

"That's the only reason they do it," Young said of NFT collectors selling at the big houses. "You pay the 20% that gives you that [stamp of approval], but it's made all of your other CryptoPunks worth 20 percent more, so it's more than worth it."
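
Young's arithmetic is easy to sanity-check with invented numbers, as in the back-of-the-envelope Python sketch below; the figures are hypothetical and not drawn from any actual sale.

sale_price = 1_000_000            # assumed hammer price for one Punk
fee = 0.20 * sale_price           # the roughly 20% paid to the auction house
other_holdings = 10 * 1_000_000   # assumed value of the seller's remaining Punks
uplift = 0.20 * other_holdings    # the legitimising bump Young describes

print(f"fee paid: ${fee:,.0f}; uplift on the rest of the collection: ${uplift:,.0f}")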

Before Christie's $69 million sale of Beeple's opus put NFTs on the mainstream art world's radar in March 2021, Sotheby's and Christie's were known, at least to those outside the art world, as places to buy expensive rare objects from the collections of the well-off (and often, the recently deceased). Now, the auction houses are selling NFTs of Pepe the Frog with the same pomp and circumstance.

But it didn't happen overnight. Both Sotheby's and Christie's have been forced to modernize to keep up with an increasingly young and international collector base. Both houses have expanded into selling sneakers and pop culture memorabilia. In the process, they've elevated Nike SBs and T. rex skeletons into rarefied cultural artifacts. It follows that the houses' foray into crypto may look like an attempt to elevate NFTs to the status of Monet and Rembrandt, but it's actually much simpler: there's a market.

"They would like a chunk of [the NFT] market, of course," says Schachter. "They would like a chunk of selling dirty underwear, if there was a market for it. They don't care."

According to Tim Schneider, art business editor at Artnet News, who has covered NFTs since before the CryptoKitties days, businesses like Christie's and Sotheby's have an interest in converting anyone with cash to spend into a power bidder.

Some corners of the crypto world clearly have cash to spend, and NFT sales last year allegedly brought in a significant number of new bidders (according to Sotheby's end-of-year report in 2021, about 80 percent of NFT bidders were new to the auction house). A high-level official at the auction house confirmed that part of their long-term goal is to make it easier for crypto-native collectors to transact, as well as to establish NFTs as a new collecting category for the traditional art world.

And in brief flashes, it seems like Sotheby's strategy may be working: crypto billionaire Justin Sun spent more than $100 million on art last fall, including $78 million on a record-breaking Giacometti sculpture at Sotheby's. But that may be one of the only examples we've seen so far, at least publicly, where a crypto bro has shown a noted interest in fine art.

"For all of the lip service being paid to the notion of cross-collecting and people who started off in NFTs getting interested in other traditional artworks," Schneider explains, "we're not seeing a tremendous amount of organic integration between NFTs and the art establishment." After we spoke, Schneider reported in Artnet that Sotheby's has pulled back on the crypto-art crossover: the auction house notably did not accept cryptocurrency as payment for any lots during its most recent slate of evening sales, as it had done in 2021.

As embarrassing as the failed auction was in the short term, Schachter still thinks Sotheby's and the other houses got back more than they put in. "The auction houses are not going to weep," he says. "One deal goes down, and then they just move on to the next."

See original here:

Why a $30 million CryptoPunks auction fell apart at the last minute - The Verge

Nanorobotics – Wikipedia

"Nanobots" redirects here. For the They Might Be Giants album, see Nanobots (album).


Nanoid robotics, or for short, nanorobotics or nanobotics, is an emerging technology field creating machines or robots whose components are at or near the scale of a nanometer (10⁻⁹ meters).[1][2][3] More specifically, nanorobotics (as opposed to microrobotics) refers to the nanotechnology engineering discipline of designing and building nanorobots, with devices ranging in size from 0.1 to 10 micrometres and constructed of nanoscale or molecular components.[4][5] The terms nanobot, nanoid, nanite, nanomachine and nanomite have also been used to describe such devices currently under research and development.[6][7]

Nanomachines are largely in the research and development phase,[8] but some primitive molecular machines and nanomotors have been tested. An example is a sensor having a switch approximately 1.5 nanometers across, able to count specific molecules in a chemical sample. The first useful applications of nanomachines may be in nanomedicine. For example,[9] biological machines could be used to identify and destroy cancer cells.[10][11] Another potential application is the detection of toxic chemicals, and the measurement of their concentrations, in the environment. Rice University has demonstrated a single-molecule car developed by a chemical process and including buckminsterfullerenes (buckyballs) for wheels. It is actuated by controlling the environmental temperature and by positioning a scanning tunneling microscope tip.

Another definition[whose?] is a robot that allows precise interactions with nanoscale objects, or can manipulate with nanoscale resolution. Such devices are more related to microscopy or scanning probe microscopy than to the description of nanorobots as molecular machines. Using the microscopy definition, even a large apparatus such as an atomic force microscope can be considered a nanorobotic instrument when configured to perform nanomanipulation. From this viewpoint, macroscale robots or microrobots that can move with nanoscale precision can also be considered nanorobots.

According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micro-machines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the surgeon". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[12]

Since nano-robots would be microscopic in size, it would probably be necessary[according to whom?] for very large numbers of them to work together to perform microscopic and macroscopic tasks. These nano-robot swarms, both those unable to replicate (as in utility fog) and those able to replicate unconstrained in the natural environment (as in grey goo and synthetic biology), are found in many science fiction stories, such as the Borg nano-probes in Star Trek and The Outer Limits episode "The New Breed".

Some proponents of nano-robotics, in reaction to the grey goo scenarios that they earlier helped to propagate, hold the view that nano-robots able to replicate outside of a restricted factory environment do not form a necessary part of a purported productive nanotechnology, and that the process of self-replication, were it ever to be developed, could be made inherently safe. They further assert that their current plans for developing and using molecular manufacturing do not in fact include free-foraging replicators.[13][14]

A detailed theoretical discussion of nanorobotics, including specific design issues such as sensing, power, communication, navigation, manipulation, locomotion, and onboard computation, has been presented in the medical context of nanomedicine by Robert Freitas.[15][16] Some of these discussions[which?] remain at the level of unbuildable generality and do not approach the level of detailed engineering.

A document with a proposal on nanobiotech development using open design technology methods, as in open-source hardware and open-source software, has been addressed to the United Nations General Assembly.[17] According to the document sent to the United Nations, in the same way that open source has in recent years accelerated the development of computer systems, a similar approach should benefit society at large and accelerate nanorobotics development. The use of nanobiotechnology should be established as a human heritage for coming generations, and developed as an open technology based on ethical practices for peaceful purposes. Open technology is stated as a fundamental key to such an aim.

In the same way that technology research and development drove the space race and nuclear arms race, a race for nanorobots is occurring.[18][19][20][21][22] There is plenty of ground allowing nanorobots to be included among the emerging technologies.[23] Some of the reasons are that large corporations such as General Electric, Hewlett-Packard, Synopsys, Northrop Grumman and Siemens have recently been working on the development and research of nanorobots;[24][25][26][27][28] surgeons are getting involved and starting to propose ways to apply nanorobots to common medical procedures;[29] universities and research institutes were granted funds by government agencies exceeding $2 billion towards research into developing nanodevices for medicine;[30][31] and bankers are also strategically investing with the intent to acquire, beforehand, rights and royalties on future nanorobot commercialisation.[32] Some aspects of nanorobot litigation and related issues linked to monopoly have already arisen.[33][34][35] A large number of patents have been granted recently on nanorobots, mostly for patent agents, companies specializing solely in building patent portfolios, and lawyers. After a long series of patents and, eventually, litigation (see, for example, the invention of radio or the war of the currents), emerging fields of technology tend to become monopolies, normally dominated by large corporations.[36]

Manufacturing nanomachines assembled from molecular components is a very challenging task. Because of the level of difficulty, many engineers and scientists continue working cooperatively across multidisciplinary approaches to achieve breakthroughs in this new area of development. Thus, the importance of the following distinct techniques currently applied towards manufacturing nanorobots is quite understandable:

The joint use of nanoelectronics, photolithography, and new biomaterials provides a possible approach to manufacturing nanorobots for common medical uses, such as surgical instrumentation, diagnosis, and drug delivery.[37][38][39] This method of manufacturing at the nanotechnology scale has been in use in the electronics industry since 2008.[40] So practical nanorobots should be integrated as nanoelectronic devices, which will allow tele-operation and advanced capabilities for medical instrumentation.[41][42]

A nucleic acid robot (nubot) is an organic molecular machine at the nanoscale.[43] DNA structure can provide a means to assemble 2D and 3D nanomechanical devices. DNA-based machines can be activated using small molecules, proteins and other molecules of DNA.[44][45][46] Biological circuit gates based on DNA materials have been engineered as molecular machines to allow in-vitro drug delivery for targeted health problems.[47] Such material-based systems function most like smart-biomaterial drug delivery systems,[48] while not allowing precise in vivo teleoperation of such engineered prototypes.

Several reports have demonstrated the attachment of synthetic molecular motors to surfaces.[49][50] These primitive nanomachines have been shown to undergo machine-like motions when confined to the surface of a macroscopic material. The surface anchored motors could potentially be used to move and position nanoscale materials on a surface in the manner of a conveyor belt.

Nanofactory Collaboration,[51] founded by Robert Freitas and Ralph Merkle in 2000 and involving 23 researchers from 10 organizations and 4 countries, focuses on developing a practical research agenda[52] specifically aimed at developing positionally-controlled diamond mechanosynthesis and a diamondoid nanofactory that would have the capability of building diamondoid medical nanorobots.

The emerging field of bio-hybrid systems combines biological and synthetic structural elements for biomedical or robotic applications. The constituent elements of bio-nanoelectromechanical systems (BioNEMS) are of nanoscale size, for example DNA, proteins or nanostructured mechanical parts. Thiol-ene e-beam resists allow the direct writing of nanoscale features, followed by the functionalization of the natively reactive resist surface with biomolecules.[53] Other approaches use a biodegradable material attached to magnetic particles that allow them to be guided around the body.[54]

This approach proposes the use of biological microorganisms, such as the bacteria Escherichia coli[55] and Salmonella typhimurium.[56] The model thus uses a flagellum for propulsion. Electromagnetic fields normally control the motion of this kind of biologically integrated device.[57] Chemists at the University of Nebraska have created a humidity gauge by fusing a bacterium to a silicon computer chip.[58]

Retroviruses can be retrained to attach to cells and replace DNA. They go through a process called reverse transcription to deliver genetic packaging in a vector.[59] Usually, these devices use the Pol and Gag genes of the virus for the capsid and delivery system. This process is called retroviral gene therapy, and it has the ability to re-engineer cellular DNA through the use of viral vectors.[60] This approach has appeared in the form of retroviral, adenoviral, and lentiviral gene delivery systems.[61][62] These gene therapy vectors have been used in cats to send genes into the genetically modified organism (GMO), causing it to display the trait.[63]

3D printing is the process by which a three-dimensional structure is built through the various processes of additive manufacturing. Nanoscale 3D printing involves many of the same processes, incorporated at a much smaller scale. To print a structure in the 5-400 µm scale, the precision of the 3D printing machine needs to be improved greatly. A two-step process of 3D printing, using a 3D printer and laser-etched plates, was incorporated as an improvement technique.[64] To be more precise at the nanoscale, the 3D printing process uses a laser etching machine, which etches the details needed for the segments of nanorobots into each plate. The plate is then transferred to the 3D printer, which fills the etched regions with the desired nanoparticle. The 3D printing process is repeated until the nanorobot is built from the bottom up. This 3D printing process has many benefits. First, it increases the overall accuracy of the printing process.[citation needed] Second, it has the potential to create functional segments of a nanorobot.[64] The 3D printer uses a liquid resin, which is hardened at precisely the correct spots by a focused laser beam. The focal point of the laser beam is guided through the resin by movable mirrors and leaves behind a hardened line of solid polymer, just a few hundred nanometers wide. This fine resolution enables the creation of intricately structured sculptures as tiny as a grain of sand. The process uses photoactive resins, which are hardened by the laser at an extremely small scale to create the structure, and it is quick by nanoscale 3D printing standards. Ultra-small features can also be made with the 3D micro-fabrication technique used in multiphoton photopolymerisation. This approach uses a focused laser to trace the desired 3D object into a block of gel. Due to the nonlinear nature of photoexcitation, the gel is cured to a solid only in the places where the laser was focused, and the remaining gel is then washed away. Feature sizes of under 100 nm are easily produced, as well as complex structures with moving and interlocked parts.[65]

There are a number of challenges and problems that should be addressed when designing and building nanoscale machines with movable parts. The most obvious one is the need to develop very fine tools and manipulation techniques capable of assembling individual nanostructures with high precision into an operational device. A less evident challenge relates to the peculiarities of adhesion and friction at the nanoscale. It is impossible to take an existing design of a macroscopic device with movable parts and simply reduce it to the nanoscale. Such an approach will not work due to the high surface energy of nanostructures, which means that all contacting parts will stick together, following the energy minimization principle. The adhesion and static friction between parts can easily exceed the strength of the materials, so the parts will break before they start to move relative to each other. This leads to the need to design movable structures with minimal contact area.[66]

Potential uses for nanorobotics in medicine include early diagnosis and targeted drug-delivery for cancer,[67][68][69] biomedical instrumentation,[70] surgery,[71][72] pharmacokinetics,[10] monitoring of diabetes,[73][74][75] and health care.

In such plans, future medical nanotechnology is expected to employ nanorobots injected into the patient to perform work at a cellular level. Such nanorobots intended for use in medicine should be non-replicating, as replication would needlessly increase device complexity, reduce reliability, and interfere with the medical mission.

Nanotechnology provides a wide range of new technologies for developing customized means to optimize the delivery of pharmaceutical drugs. Today, harmful side effects of treatments such as chemotherapy are commonly a result of drug delivery methods that don't pinpoint their intended target cells accurately.[76] Researchers at Harvard and MIT, however, have been able to attach special RNA strands, measuring nearly 10nm in diameter, to nanoparticles, filling them with a chemotherapy drug. These RNA strands are attracted to cancer cells. When the nanoparticle encounters a cancer cell, it adheres to it, and releases the drug into the cancer cell.[77] This directed method of drug delivery has great potential for treating cancer patients while avoiding negative effects (commonly associated with improper drug delivery).[76][78] The first demonstration of nanomotors operating in living organisms was carried out in 2014 at University of California, San Diego.[79] MRI-guided nanocapsules are one potential precursor to nanorobots.[80]

Another useful application of nanorobots is assisting in the repair of tissue cells alongside white blood cells.[81] Recruiting inflammatory cells or white blood cells (which include neutrophil granulocytes, lymphocytes, monocytes, and mast cells) to the affected area is the first response of tissues to injury.[82] Because of their small size, nanorobots could attach themselves to the surface of recruited white cells, to squeeze their way out through the walls of blood vessels and arrive at the injury site, where they can assist in the tissue repair process. Certain substances could possibly be used to accelerate the recovery.

The science behind this mechanism is quite complex. Passage of cells across the blood endothelium, a process known as transmigration, is a mechanism involving engagement of cell surface receptors to adhesion molecules, active force exertion and dilation of the vessel walls and physical deformation of the migrating cells. By attaching themselves to migrating inflammatory cells, the robots can in effect "hitch a ride" across the blood vessels, bypassing the need for a complex transmigration mechanism of their own.[81]

As of 2016, in the United States, the Food and Drug Administration (FDA) regulates nanotechnology on the basis of size.[83]

Nanocomposite particles that are controlled remotely by an electromagnetic field have also been developed.[84] This series of nanorobots, now listed in Guinness World Records,[84] can be used to interact with biological cells.[85] Scientists suggest that this technology can be used for the treatment of cancer.[86]

The Nanites are characters on the TV show Mystery Science Theater 3000. They're self-replicating, bio-engineered organisms that work on the ship and reside in the SOL's computer systems. They made their first appearance in Season 8. Nanites are also used in a number of episodes of the Netflix series "Travelers". They can be programmed and injected into injured people to perform repairs; they first appear in season 1.

Nanites also feature in the Rise of Iron 2016 expansion for Destiny, in which SIVA, a self-replicating nanotechnology, is used as a weapon.

Nanites (referred to more often as nanomachines) are often referenced in Konami's "Metal Gear" series, where they are used to enhance and regulate abilities and body functions.

In the Star Trek franchise's TV shows, nanites serve as an important plot device. Starting with "Evolution" in the third season of The Next Generation, Borg nanoprobes perform the function of maintaining the Borg cybernetic systems, as well as repairing damage to the organic parts of a Borg. They generate new technology inside a Borg when needed, as well as protecting them from many forms of disease.

Nanites play a role in the video game Deus Ex, being the basis of the nano-augmentation technology which gives augmented people superhuman abilities.

Nanites are also mentioned in the Arc of a Scythe book series by Neal Shusterman and are used to heal all nonfatal injuries, regulate bodily functions, and considerably lessen pain.

Nanites are also an integral part of Stargate SG-1 and Stargate Atlantis, where grey goo scenarios are portrayed.

See original here:

Nanorobotics - Wikipedia