Microsoft’s new AI app describes the world for the visually impaired, now available on iPhone – GeekWire

The new Seeing AI app from Microsoft narrates the world for the visually impaired. (Microsoft Photos, via App Store)

Microsoft released a new artificial intelligence app for iPhone this morning that can read text from signs and documents aloud, describe people and their emotions, identify currency values, and narrate the activity taking place in front of the user, among other futuristic features.

The app, called Seeing AI, is designed for the visually impaired but also serves as a showcase for Microsoft's artificial intelligence capabilities. The initial release on iPhone continues Microsoft's approach, under CEO Satya Nadella, of working with a variety of platforms beyond its own Windows operating system.

Seeing AI was first previewed at a Microsoft conference last year, but wasn't available publicly until now.

The news was part of an event this morning in London where the company made a series of AI announcements, including a new AI research and incubation hub inside Microsoft Research, a new Ethical Design Guide for AI, and an initiative called AI for Earth to encourage the use of artificial intelligence for environmental solutions.

Microsoft is competing against a variety of tech companies seeking to make a mark in artificial intelligence, including Amazon, Google, Facebook, Salesforce and others. The company last year formed a new 5,000-person engineering and research team, called the Microsoft AI and Research Group, led by veteran technology exec Harry Shum, to further its AI initiatives.

See original here:

Microsoft's new AI app describes the world for the visually impaired, now available on iPhone - GeekWire

Mars Rover’s AI is really good at selecting rocks to analyze – Engadget

"Time is precious on Mars," said lead system engineer Raymond Francis in a statement. "AEGIS allows us to make use of time that otherwise wasn't available because we were waiting for someone on Earth to make a decision."

The AEGIS software operates in two different ways: autonomous target selection and autonomous pointing refinement. Basically, these two systems allow the rover to select targets and refine its own laser targeting to analyze samples chosen according to the parameters scientists have selected beforehand. The software has performed at a very high level, exceeding 93 percent accuracy when choosing the correct materials to analyze. According to the paper, the AEGIS autonomous system has "substantially reduced lost time on the mission" and increased the speed of data collection. Before AEGIS was implemented last year, the rover carried out blind targeting, just in case it hit something worthwhile. "Half the time it would just hit soil -- which was also useful, but rock measurements are much more interesting to our scientists," Francis said.
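
To make the targeting logic concrete, here is a minimal, hypothetical sketch of parameter-driven target selection in Python. It is not JPL's actual AEGIS code; the fields, weights, and thresholds are invented purely to illustrate ranking candidate rocks against criteria scientists set in advance.

```python
# Hypothetical sketch of parameter-driven target selection (not actual AEGIS code).
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    brightness: float   # normalized 0..1, from the onboard camera image
    size_cm: float      # apparent size of the rock
    distance_m: float   # range from the rover to the target

def score(t: Target, prefs: dict) -> float:
    """Weighted score against parameters uploaded by scientists in advance."""
    if t.size_cm < prefs["min_size_cm"] or t.distance_m > prefs["max_distance_m"]:
        return float("-inf")  # ineligible targets are never selected
    return prefs["w_brightness"] * t.brightness - prefs["w_distance"] * t.distance_m

prefs = {"min_size_cm": 5, "max_distance_m": 7, "w_brightness": 1.0, "w_distance": 0.05}
candidates = [Target("rock_a", 0.8, 12, 4.2), Target("rock_b", 0.6, 3, 2.0)]
best = max(candidates, key=lambda t: score(t, prefs))
print(best.name)  # rock_a: rock_b is filtered out for being too small
```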

The system has been so useful that NASA is including AEGIS in its upcoming Mars 2020 mission, with an updated version of the imaging system called SuperCam. This new device will have more advanced analysis capabilities to study the crystal structure of rocks.

Excerpt from:

Mars Rover's AI is really good at selecting rocks to analyze - Engadget

How AI And Machine Learning Are Helping Drive The GE Digital Transformation – Forbes


This is the story of how GE has accomplished this digital transformation by leveraging AI and machine learning fueled by the power of Big Data. Undertaking the Digital Transformation. The GE transformation is an effort that is still in progress, but ...

See original here:

How AI And Machine Learning Are Helping Drive The GE Digital Transformation - Forbes

Top MIT AI Scientist to Elon Musk: Please Simmer Down – Inc.com

Science fiction futures generally come in two flavors -- utopian and dystopian. Will tech kill routine drudgery and elevate humanity à la Star Trek or The Jetsons? Or will innovation be turned against us in some 1984-style nightmare? Or, worse yet, will the robots themselves turn against us (as in the highly entertaining Robopocalypse)?

This isn't just a question for fans of futuristic fiction. Currently two of our smartest minds -- Elon Musk and Mark Zuckerberg -- are in a war of words over whether artificial intelligence is more likely to improve our lives or destroy them.

Musk is the pessimist of the two, warning that proactive regulation is needed to keep doomsday scenarios featuring smarter-than-human A.I.s from becoming a reality. Zuckerberg imagines a rosier future, arguing that premature regulation of A.I. will hold back helpful tech progress.

Each has accused the other of ignorance. Who's right in this battle of the tech titans?

If you're looking for a referee, you could do a lot worse than roboticist Rodney Brooks. He is the founding director of MIT's Computer Science and Artificial Intelligence Lab, and the co-founder of iRobot and Rethink Robotics. In short, he's one of the top minds in the field. So what does he think of the whole Zuckerberg vs. Musk smackdown?

In a wide-ranging interview with TechCrunch, Brooks came down pretty firmly on the side of optimists like Zuckerberg:

There are quite a few people out there who've said that A.I. is an existential threat: Stephen Hawking, Astronomer Royal Martin Rees, who has written a book about it, and they share a common thread, in that they don't work in A.I. themselves. For those who do work in A.I., we know how hard it is to get anything to actually work through product level.

Here's the reason that people -- including Elon -- make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning. [But they shouldn't.] When people saw DeepMind's AlphaGo beat the Korean champion and then beat the Chinese Go champion, they thought, 'Oh my god, this machine is so smart, it can do just about anything!' But I was at DeepMind in London about three weeks ago and [they admitted that things could easily have gone very wrong].

Brooks also argues against Musk's idea of early regulation of A.I., saying it's unclear exactly what should be prohibited at this stage. In fact, the only form of A.I. he would like to see regulated is self-driving cars -- such as those being developed by Musk's Tesla -- which Brooks claims present imminent and very real practical problems. (For example, should a 14-year-old be able to override and "drive" an obviously malfunctioning self-driving car?)

Are you more excited or worried about the future of artificial intelligence?

Originally posted here:

Top MIT AI Scientist to Elon Musk: Please Simmer Down - Inc.com

The real success of AI will only come with treating workers well – ZDNet

AI will change the workplace, but exactly how is up to us.

Just as cloud computing used to be, so now artificial intelligence is the term that pretty much every tech start-up is required to mention if it's going to be considered a contender for the Next Big Thing. If you don't include machine learning and neural networks in your pitch, prepare to be ignored.

But there is an increasingly visible downside to the AI hype: not just the billionaires arguing about the far future of humanity, but the more pressing worry that AI may destroy more jobs than it creates.

Plenty of industries and roles are potentially at risk as a result of the rise of automation and AI, but the tech industry is in an odd position, in that it is both creating AI tools and is one of the industries that could see job losses from automation.

Some may, of course, consider it poetic justice that IT workers will be among the first to feel the impact of AI in the workplace.

And the IT industry has, in recent years, hardly had a great track record when it comes to thinking about the future of either staff or their skills.

The fashion for outsourcing and off-shoring in the last decade or so may have saved companies money in the short term but has also been blamed by many for doing away with the entry-level jobs that provided a gateway into the industry. The rise of cloud computing has also reduced the number of some application maintenance roles.

The rise of AI could have a similar impact. One of the most complained-about features of outsourcing is staff being required to train their replacements. And one of the big secrets of AI is just how much training by humans it requires to be able to do its job. But of course, once that training is done, the need for those trainers goes away again.

None of this is necessarily a bad thing; every new technology, from the wheel to the internal combustion engine, has created new and better jobs; the trick is to make sure that the self-driving car continues that trend.

A recent report by PWC on the future of work noted: "Automation and Artificial Intelligence will affect every level of the business and its people. It's too important an issue to leave to IT (or HR) alone. A depth of understanding and keen insight into the changing technology landscape is a must."

That's true, but there's also an opportunity here for the technology industry. IT understands how AI works and needs to show other industries how it can be incorporated without simply destroying jobs. By their own behaviour, IT companies and IT departments need to show that using AI and automation isn't necessarily bad for jobs and skills: to show that harnessing this new technology can still create more jobs than it destroys. That means using AI for more than cutting costs and being willing to help workers adjust to the need for new and different skills. If the IT industry can't balance the adoption of AI with the creation of decent jobs, then what hope is there for any other industry?

AI will certainly create huge benefits, but there are big challenges to consider: how to retrain people after the rise of automation; whether the working week or even the traditional career are still meaningful concepts; and what that means for how we organize society.

We won't resolve all those questions for decades yet, but how companies behave today will have a bearing on those answers down the line. The real success of AI will not be down to technology; it will be down to how we treat the people affected by its arrival.

ZDNet's Monday Morning Opener

The Monday Morning Opener is our opening salvo for the week in tech. Since we run a global site, this editorial publishes on Monday at 8:00am AEST in Sydney, Australia, which is 6:00pm Eastern Time on Sunday in the US. It is written by a member of ZDNet's global editorial board, which is comprised of our lead editors across Asia, Australia, Europe, and the US.


Read more here:

The real success of AI will only come with treating workers well - ZDNet

Introducing The AI & Machine Learning Imperative – MIT Sloan


The AI & Machine Learning Imperative offers new insights from leading academics and practitioners in data science and artificial intelligence. The Executive Guide, published as a series over three weeks, explores how managers and companies can overcome challenges and identify opportunities by assembling the right talent, stepping up their own leadership, and reshaping organizational strategy.

Leading organizations recognize the potential for artificial intelligence and machine learning to transform work and society. The technologies offer companies strategic new opportunities and integrate into a range of business processes -- customer service, operations, prediction, and decision-making -- in scalable, adaptable ways.

As with other major waves of technology, AI requires organizations and managers to shed old ways of thinking and grow with new skills and capabilities. The AI & Machine Learning Imperative, an Executive Guide from MIT SMR, offers new insights from leading academics and practitioners in data science and AI. The guide explores how managers and companies can overcome challenges and identify opportunities across three key pillars: talent, leadership, and organizational strategy.


The series launches Aug. 3, and summaries of the upcoming articles are included below. Sign up to be reminded when new articles launch in the series, and in the meantime, explore our recent library of AI and machine learning articles.

In order to achieve the ultimate strategic goals of AI investment, organizations must broaden their sights beyond creating augmented intelligence tools for limited tasks. To prepare for the next phase of artificial intelligence, leaders must prioritize assembling the right talent pipeline and technology infrastructure.

Recent technical advances in AI and machine learning offer genuine productivity returns to organizations. Nevertheless, finding and enabling talented individuals to succeed in engineering these kinds of systems can be a daunting challenge. Leading a successful AI-enabled workforce requires key hiring, training, and risk management considerations.

AI is no regular technology, so AI strategy needs to be approached differently than regular technology strategy. A purposeful approach is built on three foundations: a robust and reliable technology infrastructure, a specific focus on new business models, and a thoughtful approach to ethics. Available Aug. 10.

CFOs who take ownership of AI technology position themselves to lead an organization of the future. While AI is likely to impact business practices dramatically in the future across the C-suite, it's already having an impact today, and the time for CFOs to step up to AI leadership is now. Available Aug. 12.

To remain relevant and resilient, companies and leaders must strive to build business models in a way that ensures three key components are working together: AI that enables and powers a centralized data lake of enterprise data, a marketplace of sellers and partners that make individualized offers based on the intelligence of the data collected and powered by AI, and a SaaS platform that is essential for users. Available Aug. 17.

Acquiring the right AI technology and producing results, while critical, aren't enough. To gain value from AI, organizations need to focus on managing the gaps in skills and processes that impact people and teams within the organization. Available Aug. 19.


Ally MacDonald (@allymacdonald) is a senior editor at MIT Sloan Management Review.

View original post here:

Introducing The AI & Machine Learning Imperative - MIT Sloan

Graphs as a foundational technology stack: Analytics, AI, and hardware – VentureBeat


How would you feel if you saw demand for your favorite topic -- which also happens to be your line of business -- grow 1,000% in just two years' time? Vindicated, overjoyed, and a bit overstretched in trying to keep up with demand, probably.

Although Emil Eifrem never used those exact words when we discussed the past, present, and future of graphs, that's a reasonable projection to make. Eifrem is chief executive officer and cofounder of Neo4j, a graph database company that claims to have popularized the term "graph database" and to be the leader in the graph database category.

Eifrem and Neo4j's story and insights are interesting because through them we can trace what is shaping up to be a foundational technology stack for the 2020s and beyond: graphs.

Eifrem cofounded Neo4j in 2007 after he stumbled upon the applicability of graphs in applications with highly interconnected data. His initiation came by working as a software architect on an enterprise content management solution. Trying to model and apply connections between items, actors, and groups using a relational database ended up taking half of the team's time. That was when Eifrem realized that they were trying to fit a square peg in a round hole. He thought there's got to be a better way, and set out to make it happen.

When we spoke for the first time in 2017, Eifrem had been singing the "graphs are foundational, graphs are everywhere" tune for a while. He still is, but things are different today.

What was then an early adopter game has snowballed to the mainstream today, and it's still growing. "Graph Relates Everything" is how Gartner put it when including graphs in its top 10 data and analytics technology trends for 2021. At Gartner's recent Data & Analytics Summit 2021, graph was also front and center.

Interest is expanding as graph data takes on a role in master data management, tracking laundered money, connecting Facebook friends, and powering the search page ranker in a dominant search engine. Panama Papers researchers, NASA engineers, and Fortune 500 leaders: They all use graphs.

According to Eifrem, Gartner analysts are seeing explosive growth in demand for graph. Back in 2018, about 5% of Gartner's inquiries on AI and machine learning were about graphs. In 2019, that jumped to 20%. From 2020 until today, 50% of inquiries are about graphs.

AI and machine learning are in extremely high demand, and graph is among the hottest topics in this domain. But the concept dates back to the 18th century, when Leonhard Euler laid the foundation of graph theory.

Euler was a Swiss scientist and engineer whose solution to the Seven Bridges of Königsberg problem essentially invented graph theory. What Euler did was to model the bridges and the paths connecting them as nodes and edges in a graph.

That formed the basis for many graph algorithms that can tackle real-world problems. Google's PageRank is probably the best-known graph algorithm, helping score web page authority. Other graph algorithms are applied to use cases including recommendations, fraud detection, network analysis, and natural language processing, constituting the domain of graph analytics.
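
As a concrete illustration, here is a textbook power-iteration sketch of PageRank over a toy three-page graph. It shows the core idea -- a page's authority is the authority flowing in from the pages that link to it -- and is a teaching sketch, not Google's production ranker.

```python
# Textbook PageRank via power iteration on a tiny directed link graph.
def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            for m in outs:
                new[m] += damping * rank[n] / len(outs)  # n shares its rank with pages it links to
        rank = new
    return rank

# a links to b and c; b links to c; c links back to a.
print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))
# a and c end up with the most rank; b, with a single inbound link, gets less.
```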

Graph databases also serve a variety of use cases, both operational and analytical. A key advantage they have over other databases is that they can model highly interconnected domains intuitively and execute queries over them quickly. That's pretty important in an increasingly interconnected world, Eifrem argues:

When we first went to market, supply chain was not a use case for us. The average manufacturing company would have a supply chain two to three levels deep. You can store that in a relational database; it's doable with a few hops [or degrees of separation]. Fast-forward to today, and any company that ships stuff taps into this global fine-grained mesh, spanning continent to continent.

All of a sudden, a ship blocks the Suez Canal, and then you have to figure out how that affects your business. The only way you can do that is by digitizing it, and then you can reason about it and do cascading effects. In 2021, you're no longer talking about two to three hops. You're talking about supply chains that are 20, 30 levels deep. That requires using a graph database -- it's an example of this wind behind our back.
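
To see why depth matters, here is a sketch of what such a traversal could look like with the Neo4j Python driver. The connection details, node labels, and SUPPLIES relationship are hypothetical, invented for illustration; the point is that one variable-length pattern replaces the dozens of self-joins a relational schema would need.

```python
# Hypothetical schema: (:Supplier)-[:SUPPLIES]->(:Supplier) ... ->(:Manufacturer).
# One variable-length pattern walks up to 30 hops of the supply mesh.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH path = (blocked:Supplier {name: $name})-[:SUPPLIES*1..30]->(m:Manufacturer)
RETURN DISTINCT m.name AS affected, length(path) AS hops
ORDER BY hops
"""

with driver.session() as session:
    for record in session.run(query, name="Suez-routed supplier"):
        print(record["affected"], record["hops"])
driver.close()
```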

The graph database category is actually a fragmented one. Although they did not always go by that name, graph databases have existed for a long time. An early branch of graph databases is RDF databases, based on Semantic Web technology and dating back about 20 years.

Crawling and categorizing content on the web is a very hard problem to solve without semantics and metadata. This is why Google adopted the technology in 2010, by acquiring MetaWeb.

What we get by connecting data, and adding semantics to information, is an interconnected network that is more than the sum of its parts. This graph-shaped amalgamation of data points, relationships, metadata, and meaning is what we call a knowledge graph. Google introduced the term in 2012, and it's now used far and wide.

Knowledge graph use cases are booming. Reaching peak attention in Gartner's hype cycle for AI in 2020, applications are trickling down from the Googles and Facebooks of the world to mid-market companies and beyond. Typical use cases include data integration and virtualization, data mesh, catalogs, metadata, and knowledge management, as well as discovery and exploration.

But there's another use of graphs that is blossoming: graph data science and machine learning. "We have connected data, and we want to store it in a graph, so graph data science and graph analytics is the natural next step," said Alicia Frame, Neo4j graph data science director.

"Once you've got your data in the database, you can start looking for what you know is there, so that's your knowledge graph use case," Frame said. "I can start writing queries to find what I know is in there, to find the patterns that I'm looking for. That's where data scientists get started -- I've got connected data, I want to store it in the right shape."

"But then the natural progression from there is I can't possibly write every query under the sun. I don't know what I don't know. I don't necessarily know what I'm looking for, and I can't manually sift through billions of nodes. So, you want to start applying machine learning to find patterns, anomalies, and trends."

As Frame pointed out, graph machine learning is a booming subdomain of AI, with cutting-edge research and applications. Graph neural networks operate on graph structures, as opposed to other types of neural networks that operate on vectors. What this means in practice is that they can leverage additional information: the relationships between data points, not just the data points themselves.
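
A bare-bones sketch of the message-passing step at the heart of most graph neural networks helps show the difference: each node's features are updated by aggregating its neighbors' features, so the wiring of the graph itself shapes the computation. This toy layer (random weights, no training loop) illustrates the idea and is not any particular library's API.

```python
# One message-passing layer: aggregate neighbor features, then transform.
import numpy as np

A = np.array([[0, 1, 1],           # adjacency matrix of a 3-node graph:
              [1, 0, 0],           # node 0 is connected to nodes 1 and 2
              [1, 0, 0]], dtype=float)
X = np.random.rand(3, 4)           # node feature vectors (3 nodes, 4 features each)
W = np.random.rand(4, 4)           # weight matrix (learned during training)

A_hat = A + np.eye(3)              # add self-loops so each node keeps its own signal
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
H = np.maximum(0, D_inv @ A_hat @ X @ W)   # mean over neighbors, project, ReLU
print(H.shape)  # (3, 4): updated features that now encode neighborhood structure
```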

Neo4j was among the first graph databases to expand its offering to data scientists, and Eifrem went as far as to predict that by 2030, every machine learning model will use relationships as a signal. Google started doing this a few years ago, and it's proven that relationships are strong predictors of behavior.

What will naturally happen, Eifrem went on to add, is that machine learning models that use relationships via graphs will outcompete those that don't. And organizations that use better models will outcompete everyone else -- a case of Adam Smith's invisible hand.

This confluence of graph analytics, graph databases, graph data science, machine learning, and knowledge graphs is what makes graph a foundational technology. It's what's driving use cases and adoption across the board, as well as the evolution from databases to platforms that Neo4j also exemplifies. Taking a decade-long view, Eifrem noted, there are four pillars on which this transition is based.

The first pillar is the move to the cloud. Though it's probably never going to be a cloud-only world, we are quickly going from on-premises first to cloud-first to database-as-a-service (DBaaS). Neo4j was among the first graph databases to feature a DBaaS offering, being in the cohort of open source vendors Google partnered with in 2019. "It's going well, and AWS and Azure are next in line," Eifrem said. Other vendors are pursuing similar strategies.

The second pillar is the emphasis on developers. This is another well established trend in the industry, and it goes hand-in-hand with open source and cloud. It all comes down to removing friction in trying out and adopting software. Having a version of the software that is free to use means adoption can happen in a bottom-up way, with open source having the added benefit of community. DBaaS means going from test cases to production can happen organically.

The third pillar is graph data science. As Frame noted, graph really fills the fundamental requirement of representing data in a faithful way. The real world isn't rows and columns -- it's connected concepts, and it's really complex. There's this extended network topology that data scientists want to reason about, and graph can capture this complexity. So it's all about removing friction, and the rest will follow.

The fourth pillar is the evolution of the graph model itself. The commercial depth of adoption today, although rapidly growing, is not on par with the benefits that graph can bring in terms of performance and scalability, as well as intuitiveness, flexibility, and agility, Eifrem said. User experience for developers and data scientists alike needs to improve even further, and then graph can be the No. 1 choice for new applications going forward.

There are actually many steps being taken in that direction. Some of them may come in the form of acronyms such as GraphQL and GQL. They may seem cryptic, but they're actually a big deal. GraphQL is a way for front-end and back-end developer teams to meet in the middle, unifying access to databases. GQL is a cross-industry effort to standardize graph query languages, the first one the ISO has adopted in the 30-plus years since SQL was formally standardized.

But there's more: the graph effect actually goes beyond software. In another booming category, AI chips, graph plays an increasingly important role. This is a topic in its own right, but it's worth noting how, from ambitious upstarts like Blaize, GraphCore and NeuReality to incumbents like Intel, there is emphasis on leveraging graph structure and properties in hardware, too.

For Eifrem, this is a fascinating line of innovation, but like SSDs before it, one that Neo4j will not rush to support until it sees mainstream adoption in datacenters. This may happen sooner rather than later, but Eifrem sees the end game as a generational change in databases.

After a long period of stagnation in terms of database innovation, NoSQL opened the gates around a decade ago. Today we have NewSQL and time-series databases. What's going to happen over the next three to five years, Eifrem predicts, is that a few generational database companies are going to be crowned. There may be two, or five, or seven more per category, but not 20, so we're due for consolidation.

Whether you subscribe to that view, and which vendors you place your bets on, are open for discussion. What seems like a safe bet, however, is the emergence of graph as a foundational technology stack for the 2020s and beyond.

Read more here:

Graphs as a foundational technology stack: Analytics, AI, and hardware - VentureBeat

Artificial intelligence: A round-up of products with AI as a central function – IBC365

Initially focused on automating repetitive tasks, artificial intelligence (AI) and machine learning (ML) are opening the door to many other useful applications. IBC365 takes a look at ten recent product launches and service evolutions, some of which have focused on solving the challenges brought about by the coronavirus health crisis.

1. AI-based ad insertion technology
UK-based Mirriad began using AI technology to digitally insert advertisements and products into movies and TV shows after they have been filmed. As reported by IBC365, Mirriad can digitally embed a branded bottle on a table, a new advertisement on an existing billboard, or a commercial running on the TV in the background. The company's platform uses AI to identify placement opportunities, and then employs visual effects technology to insert real-world objects that weren't in the original shoot, or overlays existing brand imagery with new product shots. The idea has already generated attention among broadcasters and advertisers. Indeed, producers have been looking for ways to generate additional revenue from their back catalogues after production stalled during the COVID-19 pandemic.

2. AI in video codecs for optimising video flows
IBC365 also reported on how future advances in optimising video streaming workflows will be made in software that is automated by AI. Companies such as Haivision, Harmonic, InterDigital, iSize Technologies, and V-Nova are working on different ways of applying AI and ML techniques within video codecs.

3. Using AI/ML to convert horizontal to vertical formats for smartphones
French news channel BFMTV launched live vertical technology that automatically converts the horizontal frames of standard television streams to a vertical format that is better suited to smartphones. Altice-owned BFMTV collaborated with French start-up Wildmoka to develop the product, which allows the traditional horizontal format of television to be automatically rezoned into a mobile-friendly vertical format using AI and ML techniques. Wildmoka calls its product Auto ReZone, and designed it to provide a mobile-first, vertical viewing experience for news. The product extracts content from live streams or recorded videos, and automatically reconstitutes it into a mobile-first video format (typically 9:16 or 1:1). The tool uses AI and ML to detect all the zones of interest in each 16:9 frame; select a vertical layout/template suitable to fit the various zones of interest detected; extract each zone from the horizontal frame and adjust it individually to fit the zone sizes of the target vertical layout; and re-compose the extracted zones and graphical elements into the overall final vertical frame.
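
As a toy illustration of the general idea -- not Wildmoka's Auto ReZone pipeline -- the following sketch detects one zone of interest (a face) in a 16:9 frame and cuts a 9:16 crop around it. The file names are placeholders, and OpenCV's stock Haar cascade stands in for a production detector.

```python
# Toy reframing sketch: find one zone of interest (a face) and cut a 9:16 crop.
import cv2

frame = cv2.imread("frame_16x9.jpg")              # placeholder: one horizontal frame
h, w = frame.shape[:2]
crop_w = int(h * 9 / 16)                           # width of a 9:16 window at full height

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

# Center the crop on the first detected face, falling back to the frame center.
cx = faces[0][0] + faces[0][2] // 2 if len(faces) else w // 2
left = min(max(cx - crop_w // 2, 0), w - crop_w)
cv2.imwrite("frame_9x16.jpg", frame[:, left:left + crop_w])
```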

4. AI in full RDK video-based product
SoftAtHome launched a new video-based product based on the Reference Design Kit (RDK) open source software for the video industry, which uses AI techniques for user-friendly navigation and personal data security. The product integrates multicast, DVB, live DASH streaming, a universal search aggregator, new premium video streaming apps, voice controls, and SoftAtHome's white-label ImpressioTV user interface. The company uses AI algorithms to help optimise the user experience and propose personalised content while keeping data private. In addition, AI-based voice control assets have been integrated into the RDK product to make navigation on TV screens more user-friendly.

5. Using AI and ML to prevent customer churn
Qligent introduced Foresight as a cloud-based service that uses AI, ML, and big data to mitigate content distribution issues, prevent churn, and protect service provider revenue. Foresight is designed to help broadcasters, MVPDs and OTT service providers understand and correlate factors that contribute to higher audience engagement by providing real-time data analytics based on system performance and user behaviour. The aim is to stop so-called silent sufferers from cancelling their subscriptions by predicting and preventing customer churn. AI and ML provide automated data collection, while deep learning technology mines data from hundreds or thousands of layers of data. Big data technology then correlates and aggregates the data for real-time, cloud-based quality assurance, helping service providers to quickly address distribution issues.
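
A minimal sketch of the kind of model behind churn prediction, assuming a table of per-subscriber engagement metrics; the feature names and figures below are invented, and Foresight's actual models are proprietary.

```python
# Toy churn model over invented per-subscriber engagement features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: weekly viewing hours, buffering events, support tickets.
X = np.array([[12, 1, 0], [2, 9, 3], [8, 2, 1], [1, 14, 2], [10, 0, 0]])
y = np.array([0, 1, 0, 1, 0])  # 1 = subscriber later cancelled

model = LogisticRegression().fit(X, y)

# Score a current "silent sufferer": low engagement, frequent buffering.
risk = model.predict_proba([[3, 11, 0]])[0, 1]
if risk > 0.5:
    print(f"churn risk {risk:.0%}: trigger proactive outreach")
```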

6. AI for managing the connected home
Amdocs launched doxi HomeOS, an AI-based cloud-native home operating system (OS) designed to enable service providers to move beyond basic connectivity services in the connected home. doxi HomeOS provides AI-based insights, simple voice commands and touch-free care capabilities to resolve customer support needs. The OS also offers enhanced cybersecurity monitoring capabilities and parental control over the growing number and usage of connected devices and apps in the home. Furthermore, doxi HomeOS offers consumers the ability to self-manage connectivity and WiFi settings as well as automated, AI-based notifications related to usage patterns and media and gaming consumption. Gil Rosen, general manager of amdocs:next, said doxi HomeOS is relevant for all broadband providers, ranging from incumbents looking to differentiate and grow services to CSPs rolling out 5G fixed wireless access to enhance home broadband connectivity.

7. AI in live transcription services
Epiphan Video launched LiveScrypt as a live transcription service. LiveScrypt is a cloud and AI-based speech-to-text transcription service that enables audiences to engage with live events as they happen regardless of any hearing impediment, native language, or distraction. LiveScrypt is said to transcribe with at least 85% to 90% accuracy. It also adds punctuation and on-the-fly corrections based on the confidence of words in context. North American Industry Classification System (NAICS) codes for standardised industry-related terms are also supported for greater accuracy.

8. AI in robotic cameras
Telemetrics introduced AI techniques as well as motion tracking and servo-mechanical excellence as standard on its latest robotic camera products and systems. For example, the OmniGlide robotic roving platform was improved with new ML algorithms, allowing its shot recall settings to intelligently find the best path within the space in which it is operating. This Path Planning means it can figure out the safest way between point A and point B, even when there's an obstruction (like a news desk) in between. This is accomplished in tandem with the Telemetrics RCCP-2A robotics and camera control panel running STS software.

9. AI for video-on-demand (VoD)
SPI/FilmBox launched an AI-based content streaming service called FilmBox Plus. The multi-platform service merges linear and on-demand experiences through AI-supported linear channels and video-on-demand (VoD) content. SPI said the new service is an evolution of FilmBox Live, which has been active for over a decade. It is expected that FilmBox Plus will launch globally by the end of 2020, replacing FilmBox Live.

10. Corporate broadcasting using AI-based vPilot
Mobile Viewpoint partnered with BuckDesign to provide broadcast studio services, with a specific emphasis on companies looking to enhance their corporate communications and marketing initiatives. BuckDesign is using vPilot technology with AI automation from Mobile Viewpoint as part of its inhouse broadcast studio, which can be rented by companies wishing to deploy their own professional TV broadcast studio. vPilot is an automated studio system that controls multiple cameras without the need for an onsite director or camera operators. BuckDesign, in conjunction with vPilot, built its own studio in Alkmaar in North Holland that is available to corporates wishing to undertake a production without investing in a complete studio. For companies that wish to implement their own studio, BuckDesign can provide the full set-up from room design to implementation of the technology.

See original here:

Artificial intelligence: A round-up of products with AI as a central function - IBC365

When The Computer Is YOU: The Rise Of Proxy AI – Forbes

In 1945, Vannevar Bush published As We May Think. In it he proposed a Memex, a massive microfilm storage device whose users could navigate between documents and concepts throughout the database. His concept directly inspired the invention of ...

Read more here:

When The Computer Is YOU: The Rise Of Proxy AI - Forbes

Why creating an AI department is not a good idea – Policy Options

When I give public talks and training on AI in government, I am often asked why Canada has no Department of Artificial Intelligence to govern AI. Some jurisdictions, like the United Arab Emirates, have a minister for AI, and a few others, like the UK, have a small government office focused on AI; but to my knowledge there are no jurisdictions with a department of AI.

The call for a department of AI is a well-meaning response to the growing importance of AI and highlights the gap that exists between technology adoption in government and the much higher proficiency of the private sector with this technology. A dramatic state of affairs should be addressed through decisive action and, so the logic goes, that decisive action should be nothing short of updating the machinery of government to create a new department or agency directly focused on the governance of AI. While the sentiment is understandable, the idea of creating a department of AI is riddled with misunderstandings and needs to be handled with care.

Understanding AI

First, few applications called AI have much in common with one another from a technical standpoint, at least not in the way that the term AI is generally used today. For instance, the type of AI used in security cameras to identify suspects is a very different technology than the AI that is used to support advanced translation services. There is a large and common overestimation of AI's shared characteristics across applications. In reality, when we talk about AI, we are talking about many different things.

What's more, a great many of the new applications we call AI are fairly linear, albeit impressive, advances from a pre-existing technology or process, most of which already had a governance framework. In the world of policy, the permitted applications of AI with regard to traffic cameras have much more to do with the existing rules governing traffic cameras than with, say, an AI application that can play chess.

In that sense, AI is less analogous to a field like astronomy and more analogous to something like electricity: both AI and electricity are found in a wide range of fields and would be poorly served by an oversight body created on the basis of all potential applications. Imagine creating a department of electricity to be responsible for regulating every instance and application in which electricity is found, or a department of computers to oversee all conceivable applications of computing power. Not only would such an organization be unwieldy, but it would be unlikely to have clear goals and purposes.

Making new departments

From an administrative and organizational effectiveness standpoint, it's not clear that AI capacities should be housed together in a single department. The computer science and data science at the heart of AI are functionally very different from the operation of things like postal services or seaport governance, where activities must be clustered together to achieve economies of scale in the use of a particular piece of machinery or infrastructure.

In contrast to the technologies of the industrial age, which are highly location dependent and benefit immensely from clustering, software and data science applications do not depend on location, or at least not nearly to the same degree. Even if significant benefits of co-location did exist for AI uses in the federal government, being housed in the same departmental structure is no guarantee that staff would even be housed in the same building; most departments are split across multiple sites, and even across multiple provinces. It's hard to see how a new department of AI would ever resemble an army of AI professionals saluting from the same cubicle farm.

Concentrating all the government of Canada's AI capacities in a single institution would also come with negative side effects. With the transfer of those capacities to a centralized institutional vessel, other departments would be left with very little or no AI capacity of their own. AI would be monopolized by the single department of AI, which would then support the AI projects of other departments as needed.

There is a precedent for such an arrangement: the strategy behind the creation in 2011 of Shared Services Canada, which sought to put all government IT services under one roof. For a variety of reasons this change in machinery has been held responsible for harming the quality of government technological services, and the Trudeau government assessed SSC as being in need of renewal only nine years after its founding. It's hard to imagine that a duplication of this approach for AI would fare much better.

Alternative institutional structures

Instead of a megadepartment of AI that houses all potential applications, it is possible to imagine taking a broader approach, permitting applications and governance to be decentralized but with a complementary concentration of expertise and resources that can be lent out to other departments as necessary. But this modified vision of new AI machinery is unlikely to be uniquely different from existing institutions like Statistics Canada, which already has renowned strengths in the data, statistics and modelling techniques that are central to AI. Statistics Canada needs to be brought up to speed on AI, but that can be done without creating a new department.

Perhaps a department of AI could better be viewed as a much more focused federal R&D instrument, responsible for the sorts of specialized, leading-edge AI research that might benefit from the concentration of facilities for supercomputers and the like. An institution with this sole mandate does not exist at the federal level and could even make some sense at a technical level; however, the idea is a complete non-starter due to jurisdictional issues. While the National Research Council does operate in this space for the federal government, most public R&D functions and funding are passed off to Canada's 260 or so post-secondary institutions, which are nominally independent and overseen by provincial governments.

Bringing all of these AI activities under one (figurative) roof would require a huge bureaucratic street fight, to strip powers and functions away from their hundreds of existing owners. With such a high cost in political and administrative capital in order to unlock a dubious and unknown benefit, it's implausible that such a change would ever be attempted. Certainly, there may be no solitary department addressing AI, but it's not obvious that having one would be an improvement.

If it ain't broke

Under the existing arrangement, overarching AI policy for the government of Canada is the purview of central agencies and Innovation, Science and Economic Development, with political leadership provided by the new cross-departmental minister of state for digital government. Meanwhile, responsibility for delivery and applications is housed in the line departments, which use AI applications and are closest to their effects. This set-up has been successful so far in practice; Canada ranks among the leading countries in the world for AI preparedness. Even the minister of digital government is not assigned sole responsibility for AI.

The idea of a departmental stovepipe for the new and exciting field of AI is tantalizing, but the state can meaningfully commit to doing more about AI in other ways. Creating a new department would be an incredibly expensive and disruptive undertaking, adding new layers of administration and technical complications. Machinery of government changes are in fact an immense distraction from the everyday business of government; very little AI governance will get done while staff are being moved and new mandates debated.

Due to its onerous nature, changing the machinery of government is seldom regarded as a way of acting either quickly or decisively. Petronius Arbiter, a Roman administrator writing on governance, is generally credited with the observation: "We tend to meet any new situation by reorganizing, and what a wonderful method it can be for creating the illusion of progress while actually producing confusion, inefficiency, and demoralization." While AI is growing in importance and needs better governance to match, a machinery change should take place only if there are clear problems with the existing institutional structure and clarity that the change will represent an obvious improvement. It's not clear that either of these would be true in the case of AI.

Photo: Shutterstock, by Gorodenkoff


Continued here:

Why creating an AI department is not a good idea - Policy Options

Can Rats (AI Rats, That Is) Shed Light on How Neural Networks Work? – HPCwire

Rats have long been highly valued model organisms, helping researchers better understand biology and pursue drug development. Now, researchers from Harvard and DeepMind say AI versions of rats can help humans better understand how AI neural networks learn and develop, and how their counterparts in real life work. An interesting account of their work appears on IEEE Spectrum today.

Here's a brief excerpt from the article, written by Edd Gent:

[A]uthors of a new paper due to be presented this week at the International Conference on Learning Representations have created a biologically accurate 3D model of a rat that can be controlled by a neural network in a simulated environment. They also showed that they could use neuroscience techniques for analyzing biological brain activity to understand how the neural net controlled the rat's movements.

The platform could be the neuroscience equivalent of a wind tunnel, says Jesse Marshall, co-author and postdoctoral researcher at Harvard, by letting researchers test different neural networks with varying degrees of biological realism to see how well they tackle complex challenges.

"Typical experiments in neuroscience probe the brains of animals performing single behaviors, like lever tapping, while most robots are tailor-made to solve specific tasks, like home vacuuming," he says. "This paper is the start of our effort to understand how flexibility arises and is implemented in the brain, and use the insights we gain to design artificial agents with similar capabilities."

It's a fascinating idea. The researchers built the AI rat model (muscles, joints, vision, movement, etc.) based on observing real rats, and then trained a neural network to guide the rat through four tasks: jumping over a series of gaps, foraging in a maze, trying to escape a hilly environment, and performing precisely timed pairs of taps on a ball.

As the rats improved at the tasks, the researchers were able to watch the controlling neural networks develop. It's early work, and the researchers agree that because they built the model, much of what they learned was expected. One interesting insight, though, was that the neural activity seemed to occur over longer timescales than would be expected if it was directly controlling muscle forces and limb movements, according to Diego Aldarondo, a co-author and graduate student at Harvard.

He is quoted in the article: "This implies that the network represents behaviors at an abstract scale of running, jumping, spinning, and other intuitive behavioral categories," he says, "a cognitive model that has previously been proposed to exist in animals." This kind of work, say the researchers, will help us understand both how artificial neural networks evolve and how biological neural networks work.

Link to the IEEE Spectrum article by Edd Gent: https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/ai-powered-rat-valuable-new-tool-neuroscience

Link to the paper: https://openreview.net/forum?id=SyxrxR4KPS

See original here:

Can Rats (AI Rats, That Is) Shed Light on How Neural Networks Work? - HPCwire

Message to ministers: AI can transform the way we live right now – The Guardian

Artificial intelligence can't solve the Southern rail dispute, but it can help make services run more smoothly. Photograph: Yui Mok/PA

Artificial intelligence (AI) is likely to prove the most transformative technology of the 21st century. Those of us who work in the field, whether in the public or private sector, are at a frontier that is advancing at an ever-accelerating rate. Yet my work on tech policy at the Government Digital Service and the Home Office often left me in despair. At a time when the possibilities created by AI are multiplying rapidly, the government isn't really at the races.

The Government's Digital Strategy, published yesterday, and the government's Transformation Strategy, published a couple of weeks ago, are a case in point. It is fantastic that some more money is going into AI and robotics research in our universities, but treating AI as one for the future misses the opportunities of today.

At our own business, ASI, we work with organisations that are achieving radical improvements in efficiency from relatively simple applications of AI. A payments company that increases fraud detection by 93%. An airline that uses machine learning to predict demand for staff in real time, allowing them to cut the number of standby staff required by 33%. A train manufacturer that uses a predictive maintenance model to reduce the number of inspections an engineer needs to perform to find a fault in need of repair from 10,000 to two.

The opportunities are here and now. But the projects that could improve our public services and deliver value for money to the taxpayer were nowhere to be seen in the digital strategy. And government remains embarrassingly short of examples it can point to. In fact, at a conference on government data last week, the chief executive of the Civil Service resorted to praising a list of public toilets released as open data. We can do better than this.

The stakes are high. Even after seven years of austerity, the public sector spends more than 40% of GDP. Yet the services that we rely on are under ever greater pressure. The only way the government can continue to meet the expectations that people have of the NHS, transport or prisons is to find ways to radically improve efficiency.

The good news is that it is easy to imagine ways in which these services could benefit from AI with relatively little investment. It is encouraging that the justice secretary, Liz Truss, has made digital technology so central to her prisons and courts bill. Machine learning could play a big part in this. For example, Harvard researchers found that cell-sharing configurations can reduce reoffending rates by about 15% for drugs and theft offences in French prisons. It stands to reason that choices of cellmates matter, but even very experienced prison officers find it difficult to balance the bewildering array of factors that need to be taken into account. In contrast to humans, machine learning thrives in finding the patterns that matter in this kind of complexity. This could be done right now.

We've all read about supercomputers that are able to read a million medical journals an hour and spot tumours more accurately than experienced doctors. But there are significant wins to be had from the much more prosaic matter of allocating resources in hospitals more efficiently. Hospitals are complex organisations dealing with unpredictable demands. Machine learning can help them run more smoothly. Recent trials modelled how long particular consultations and operations were likely to take, and booked theatre resources accordingly. This hugely increased the utilisation rates of these valuable resources, and reduced the number of over-runs caused by the fixed-time slots.
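
A sketch of the kind of model such a trial might use, with invented features and numbers; estimating each case's likely duration lets a scheduler book theatre time against predictions rather than fixed-length slots.

```python
# Toy theatre-scheduling model: predict operation length from invented features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Columns: procedure code, patient age, surgeon's median past duration (minutes).
X = np.array([[1, 64, 95], [1, 41, 95], [2, 70, 140], [2, 55, 140], [3, 33, 45]])
y = np.array([102, 88, 151, 133, 41])  # observed durations in minutes

model = GradientBoostingRegressor().fit(X, y)

predicted = model.predict([[2, 62, 140]])[0]
slot = predicted * 1.1  # book predicted time plus a 10% buffer, not a fixed slot
print(f"book {slot:.0f} minutes of theatre time")
```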

Transport is another area that could hugely benefit. AI can't solve the Southern rail dispute, but it can help make services run more smoothly. A recent project by ASI built an adaptive scheduling system for a bus operator that modelled the complex ways in which traffic flows through a city. In just a few weeks this was able to make buses 38% more likely to show up at the right time. Cue happier passengers, less crowded buses, and big savings for the bus company.

These are just three easy examples that could be implemented today. There are dozens of others across the entire public sector. But to help kickstart this kind of revolution it is vital that ministers, civil servants and frontline professionals become more familiar with what is possible. To achieve this, the government should create a £20m fund for officials to bid into for projects that could demonstrate the value of AI.

Another thing the government could do to move the needle is to provide much better access to the data that is used to train these predictive models. Data.gov.uk has become a dumping ground for nugatory and obscure data sets. Why not require each public body to publish details of the top 20 data sets it uses for its own operations? That might help to ferment a proper debate about the new applications that the public might benefit from.

In the next two decades, AI will transform the way we live and work. There is no reason whatsoever why the government shouldn't be doing this too, but it is not. Adopting this technology is the most plausible way of delivering the public services people expect while making the savings we need.

Read this article:

Message to ministers: AI can transform the way we live right now - The Guardian

Rules urgently needed to oversee police use of data and AI – report – The Guardian

National guidance is urgently needed to oversee the police's use of data-driven technology amid concerns that it could lead to discrimination, a report has said.

The study, published by the Royal United Services Institute (Rusi) on Sunday, said guidelines were required to ensure that the use of data analytics, artificial intelligence (AI) and computer algorithms developed legally and ethically.

Forces' expanding use of digital technology to tackle crime was in part driven by funding cuts, the report said.

Officers are battling against information overload as the volume of data around their work grows, while there is also a perceived need to take a preventative rather than reactive stance to policing.

Such pressures have led forces to develop tools to forecast demand in control centres, triage investigations according to their solvability and to assess the risks posed by known offenders.

Examples of the latter include Hampshire police's domestic violence risk-forecasting model, Durham police's Harm Assessment Risk Tool (Hart) and West Midlands police's draft integrated offender management model.

The report, commissioned by the Centre for Data Ethics and Innovation (CDEI), said that while technology could help improve police effectiveness and efficiency, it was held back by the lack of a robust empirical evidence base, poor data quality and insufficient skills and expertise.

While not directly focused on biometric, live facial recognition or digital forensic technologies, the report explored general issues of data protection and human rights underlying all types of police technology.

"It could be argued that the use of such tools would not be necessary if the police force had the resources needed to deploy a non-technological solution to the problem at hand, which may be less intrusive in terms of its use of personal data," the report said.

It advised that an integrated impact assessment was needed to help justify the need for each new police analytics project. Initiatives were often not underpinned by enough evidence as to their claimed benefits, scientific validity or cost-effectiveness, the report said.

The report's authors noted criticism of predictive policing tools being racially biased, but said there was a lack of sufficient evidence to assess whether this occurred in England and Wales and whether it resulted in unlawful discrimination.

They said studies claiming to demonstrate racial bias were mostly based on analysis conducted in the US, and that it was unclear whether such concerns would transfer to a UK context.

"However, there is a legitimate concern that the use of algorithms may replicate or amplify the disparities inherent in police-recorded data, potentially leading to discriminatory outcomes," the report said.

"For this reason, ongoing tracking of discrimination risk is needed at all stages of a police data analytics project, from problem formulation and tool design to testing and operational deployment."

Roger Taylor, chairman of the CDEI, said: "There are significant opportunities to create better, safer and fairer services for society through AI, and we see this potential in policing. But new national guidelines, as suggested by Rusi, are crucial to ensure police forces have the confidence to innovate legally and ethically."

The report called on the National Police Chiefs' Council (NPCC) to work with the Home Office and College of Policing to develop the technology guidelines.

The NPCC lead for information management, Ian Dyson, said it would work with the government and regulators to consider the report's recommendations. He added: "Data-driven technology can help us to keep the public safe. Police chiefs recognise the need for guidelines to ensure legal and ethical development of new technologies and to build confidence in their ongoing use."

Read more:

Rules urgently needed to oversee police use of data and AI report - The Guardian

The cybersecurity battle of the future: AI vs. AI – ITProPortal

Artificial intelligence and machine learning continue to gain a foothold in our everyday lives. Whether for complex tasks like computer vision and natural language processing, or for something as basic as an online chatbot, their popularity shows no signs of slowing. Companies have also started to explore deep learning, an advanced subset of machine learning. By applying deep neural networks, deep learning takes inspiration from how the human brain works. Unlike classical machine learning, deep learning can train directly on raw data, requiring little to no human intervention.
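
To make that distinction concrete, here is a minimal, illustrative sketch of a deep network training directly on raw pixel data; the synthetic inputs, network shape and hyperparameters are assumptions chosen for brevity, not a production pipeline.

```python
# A tiny network learning straight from raw pixels, with no hand-engineered
# features. Synthetic data keeps the sketch self-contained and runnable.
import torch
import torch.nn as nn

raw_images = torch.rand(64, 1, 28, 28)       # raw pixel data, no feature engineering
labels = torch.randint(0, 10, (64,))         # ten arbitrary classes

model = nn.Sequential(                       # a small convolutional network
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 10),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(5):                        # a few training steps on raw inputs
    optimizer.zero_grad()
    loss = loss_fn(model(raw_images), labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")
```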

Recent research from analyst firm Gartner noted that the number of companies implementing AI technology has increased by around 270 per cent over the past four years. The return on investment is unmistakable, as so many industries have started to implement the technology. However, even with this significant progress, the nature of AI means the same, once-helpful technology could fall into the wrong hands and be used to inflict damage on a company or the end user.

This ongoing battle, pitting AI for good against AI for malicious purposes, may not be playing out in front of our eyes yet, but it's not far off. Thankfully, implementing malicious AI at any scale is still somewhat cost-prohibitive and requires tools and skills not readily available on the market. But knowing that it could become reality one day means that companies should start preparing early for what lies ahead.

Here's a look at what that could look like, and what companies can do now to quell the storm.

When malware uses AI algorithms as an integral part of its business logic, it learns from its situation and gets smarter at evading detection. But unlike typical malware, which is a single program running on a server, for example, AI-based malware can shift and change its behavior quickly, adjusting its evasion techniques as needed when it senses something is wrong or detects a threat to its own systems. It's a capability that most companies simply aren't prepared for yet.

One example of situational awareness in AI-based malware came from BlackHat 2018. Created by IBM Security, DeepLocker is an encrypted ransomware that can autonomously decide which computer to attack based on a facial recognition algorithm. And as researchers noted, it's designed to be stealthy.

The highly targeted malware hides itself in unsuspecting applications, evading detection by most antivirus scanning programs until it has identified its target victim. Once the target is identified through several indicators, including facial feature recognition, audio, location or system-level features, the AI algorithm unlocks the malware and launches the attack. According to researchers, IBM created it to demonstrate how they could combine open-source AI tools with straightforward evasion techniques to build a targeted, evasive and highly effective malware.

The amplified efficiency of AI means that once a system is trained and deployed, malicious AI can attack a far greater number of devices and networks more quickly and cheaply than a malevolent human actor.

And while the researchers also noted that they haven't seen something like DeepLocker in the wild yet, the technology they used to create it is readily available, as were the malware techniques. Only time will tell whether something like it will emerge, that is, if it hasn't already.

Companies can guard against malware like this by fighting fire with fire, using cybersecurity solutions based on deep learning, the most advanced form of AI. It's not enough to just get a firewall or a basic anti-virus system; companies need to implement systems that can detect AI-based malware and take the necessary steps to prevent harm, and then go one step further to achieve longer-term detection and pre-emptively stop continued damage, a necessary task for a future that includes AI-based malware.

Another harmful scenario is when malicious AI-based algorithms are used to hinder the functionality of benign AI algorithms, employing the same algorithms and techniques found in traditional machine learning.

Rather than provide any helpful functionality, the malware is used to breach the useful algorithm and manipulate it, either taking over its functionality or turning it to malicious purposes.

One example comes from several researchers studying adversarial machine learning. They investigated how self-driving cars process street signs and whether the technology could be manipulated. While most self-driving cars can read street signs and act accordingly, the researchers were able to trick the technology into reading a stop sign as a speed limit sign, a simple change that the vehicle's onboard technology couldn't detect as harmful. Stepping back to look at the implications, it means that the technology available today in self-driving cars could be exploited to cause collisions, with possibly deadly outcomes.
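
For illustration, here is a hedged sketch of the fast gradient sign method (FGSM), one common adversarial-learning technique; the toy model, random inputs and epsilon value are stand-ins, not the actual systems or attack the researchers used.

```python
# FGSM: nudge each input pixel in the direction that increases the loss,
# producing an image that can look unchanged to a human but flip the
# classifier's prediction. Untrained toy model, so results are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # toy sign classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)            # stand-in "stop sign"
true_label = torch.tensor([0])                                  # class 0 = stop sign

loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()                                  # gradient of loss w.r.t. pixels

epsilon = 0.05                                   # small perturbation budget
adversarial = image + epsilon * image.grad.sign()
adversarial = adversarial.clamp(0, 1)            # keep pixels in a valid range

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```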

Adversarial learning can also be applied to subvert and confuse computer vision algorithms, NLP (Natural Language Processing) systems and malware classifiers, tricking the technology into thinking it is seeing something else. The process typically injects malicious data into benign data streams with the intent to overwhelm or block legitimate data. An example is a Distributed Denial of Service (DDoS) attack, in which a server is purposely overwhelmed with data and internet traffic, disrupting its normal traffic or service and effectively bringing it down.

To block the harmful effects of this technology, companies need a system that understands when an algorithm is benign and working properly versus one that has been tampered with. It's not only about protecting systems and the overall functionality of the tech; it could also mean protecting lives, as seen in the stop sign example. This is where advanced AI becomes necessary, with analysis capabilities that enable it to understand and identify when something is amiss.

This type of attack is seen when malware runs on the victim's endpoint but AI-based algorithms are used on the server side to facilitate the attack. A command-and-control server, which an attacker uses to send and receive information from systems compromised by malware, can control any number of functions.

Consider, for example, malware that steals data and information and uploads it to a command-and-control server. Once complete, an additional algorithm identifies relevant details, e.g. credit card numbers, passwords and the like, which it then passes on to the server and ultimately to the attacker on the other end. Through the use of AI, the malware can be executed en masse, without requiring any human intervention, and be disseminated on a large scale to encompass thousands of victims.

One recent example Deep Instinct researchers uncovered was ServHelper. A new variant of the ServHelper malware uses an Excel 4.0 macro dropper, a legacy mechanism still supported by Microsoft Office, and an executable payload signed with a valid digital signature. ServHelper can receive several types of commands from its command-and-control server, including downloading a file, entering sleep mode, or even a self-kill function that removes the malware from the infected machine. This is a classic example of hacker groups using increasingly sophisticated methods, such as certificates, to propagate malware and launch cyberattacks.

As with the other scenarios, it's not enough to just put up a firewall and hope for the best. Companies need to think holistically and protect all of an organisation's endpoints and devices, from Windows machines and servers to other platforms such as Mac, Android and iOS. An AI-based solution can help by constantly learning what is or isn't malicious, helping its human counterparts to act once it has identified, and ideally stopped, the harmful malware from spreading and hurting systems further.

Companies are just beginning to grasp that AI and machine learning can do more than improve customer-facing technology: they can also be used to create stronger defences against a future of AI-enabled attacks. While malware that uses AI might still be a few years away, companies can prepare themselves now against the attacks of the future.

By using these technologies to spot trends and patterns in behavior now, companies can better prepare themselves against a future that employs AI against them. One way to ensure the technological advantage over any potential AI-based threat is a deep learning-based approach, which fights malicious AI with friendly AI.

Unlike other forms of anti-virus, which remain stagnant once implemented, deep learning is highly scalable. This is especially important because AI-based malware can grow and change constantly. Deep learning can scale to hundreds of millions of training samples, which means that as the training dataset gets larger, the deep learning neural network continuously improves its ability to detect anomalies, no matter what the future brings. It's truly fighting AI with AI.

Nadav Maman, CTO and co-founder, Deep Instinct

See original here:

The cybersecurity battle of the future: AI vs. AI - ITProPortal

Banking on AI: The time is ripe for Indian banks to embrace artificial intelligence – The Financial Express

By Balakrishna DR

Globally, the financial services industry has proved to be an enthusiastic adopter of Artificial Intelligence (AI) driven by the availability of data and investment appetite. Creative implementation of AI by start-ups and fintechs has helped further this trend. From personalisation to customer service, fraud detection and prevention to compliance, and risk monitoring to intelligent contract documents, AI has helped banks gain better control and predictability.

Today, customers expect faster, more personal and meaningful services and interactions with their banks, and have little tolerance for generic unsolicited messages. Therefore, banks must leverage AI to balance the need for privacy and security with personalisation and engagement. That said, the Indian banking sector has some amount of catching up to do.

While Indian banks have explored the use of AI, it has primarily been used to improve customer experience by adding chatbots as an additional interface for customers, like SIA by State Bank of India, Eva by HDFC and iPal by ICICI. State-owned banks have been slow to leverage AI, largely because AI implementation requires banks to operate outside of the traditional privacy framework. India still does not have a robust data protection and privacy policy. The Reserve Bank of India (RBI) needs to take a commanding and dynamic role in framing regulations on emerging technologies and data privacy while ensuring the business interests of the banks.

Banks must simultaneously adopt new business models, integrate AI into their strategic plans and explore the use of AI for analytics and improved customer experience. However, reliance on legacy systems, a lack of data science talent and cost constraints have impeded seamless adoption of AI. Banks must focus on three key aspects:

Fraud detection: AI plays a vital role in fraud detection, given the heightened threat of cyberattacks. As per the 2019 RBI annual report, losses due to banking frauds have risen by a whopping 73.8% despite the Government's efforts to curb them. What is more alarming is that banks took an average of 22 months between the occurrence of a fraud and its detection, as per RBI data. Considering RBI's zero-liability safety net in the event of cyber frauds, it is imperative that banks adopt best-fit practices and technology levers to mitigate these risks. With the adoption of real-time payments, there has also been rapid innovation in the digital fraud landscape.

Set against this backdrop, banks must deploy context-sensitive AI solutions to enable advanced and adaptive real-time monitoring of their payment networks. These AI solutions additionally leverage relevant data points to assess transaction risk, perform true identity-matching, and identify complex typologies and patterns; a minimal sketch of such transaction scoring appears after these three aspects.

Digitisation of processes: The tremendous proliferation of mobile devices and the internet can be leveraged to enable a superior user experience and analytics-based functionalities that give consumers insight into their spending patterns and provide recommendations on investments and risk profiles. For instance, digitising the KYC process to eliminate the need for physical document submission and verification is something that traditional banks still do not offer. This can be simplified by utilising AI-based computer vision technology to verify documents, Optical/Intelligent Character Recognition (OCR/ICR) technologies to digitise scanned documents, and Natural Language Processing (NLP) to make sense of them.

Decision making: AI is a great fit in areas where decisions are based on available structured and unstructured data. For example, it can help predict potential loan defaulters and offer loss-mitigation strategies that will work for them. It can help determine the best time to approach a customer to sell a new product. AI-based smart environments can collate data from multiple sources and drive inferences that enable SMEs to take decisions. AI can also improve straight-through processing, using Intelligent Automation to automate repetitive processes that need decision making.
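
As promised above, here is a hedged, minimal sketch of one way real-time transaction scoring could work: an isolation forest trained on historical transaction features flags outliers for review. The feature set, values and contamination rate are illustrative assumptions, not any bank's production model.

```python
# Score incoming transactions against learned "normal" behaviour; the most
# isolated points get the lowest scores and are flagged for investigation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: amount, hour of day, merchant risk score (hypothetical features)
history = rng.normal(loc=[50, 14, 0.2], scale=[20, 4, 0.1], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

incoming = np.array([[48.0, 15.0, 0.18],     # looks like normal behaviour
                     [9500.0, 3.0, 0.95]])   # large amount, odd hour, risky merchant
scores = detector.decision_function(incoming)  # lower score = more anomalous
flags = detector.predict(incoming)             # -1 marks a suspected fraud

for row, score, flag in zip(incoming, scores, flags):
    print(row, f"score={score:.3f}", "FLAG" if flag == -1 else "ok")
```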

Given the magnitude of the challenge, it might make sense for banks to come together to establish a consortium for knowledge sharing on AI. This would also help Indias numerous regional and cooperative banks that are behind on the technology curve. A consortium could help uplift these small banks and enable them to be integrated seamlessly into a broader nationwide secure banking network. Whichever way it happens, AI in Indian banking is only set to grow.

The author is Senior VP, Service Offering Head Energy, Communications, Services and AI & Automation Services, Infosys

Here is the original post:

Banking on AI: The time is ripe for Indian banks to embrace artificial intelligence - The Financial Express

UK aims to boost solar by predicting cloud movements with A.I. – CNBC

LONDON – The U.K. is planning to use artificial intelligence software to try to better predict when cloud movements will affect solar power generation.

National Grid Electricity System Operator, or ESO, which moves electricity around the country, has signed a deal with non-profit Open Climate Fix to create an AI-powered tracking system that matches cloud movements with the exact locations of solar panels.

The grid operator said that the software, which is set to be used in the national control room, could help it to forecast cloud movements in minutes and hours instead of days.

Open Climate Fix's "nowcasting" technology has the potential to improve solar forecasting accuracy by up to 50%, a spokesperson for National Grid ESO told CNBC.

The project, which commenced in August, is set to last 18 months and is being funded by U.K. energy regulator Ofgem with £500,000 ($683,100).

National Grid ESO is responsible for maintaining the balance of supply and demand for the U.K. electricity grid down to the second.

This is challenging with fossil fuels and nuclear power, but the unpredictable nature of solar and wind makes the task even more complex.

To help address the issue, London-headquartered Open Climate Fix says it has trained a machine-learning model to read satellite images and understand how and where clouds are moving in relation to solar panels on the ground.
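
As a speculative sketch of the general idea (not Open Climate Fix's actual model), a small convolutional network could take a short stack of recent satellite frames centred on a solar site and regress the site's output minutes ahead. All shapes, horizons and data below are assumptions for illustration.

```python
# Nowcasting sketch: recent satellite frames in, near-term solar output out.
import torch
import torch.nn as nn

frames = torch.rand(16, 4, 64, 64)   # 16 samples, 4 recent frames as channels
future_output = torch.rand(16, 1)    # normalised generation ~30 minutes ahead

nowcaster = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),  # output bounded in [0, 1] like the target
)

optimizer = torch.optim.Adam(nowcaster.parameters(), lr=1e-3)
for step in range(5):                # a few illustrative training steps
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(nowcaster(frames), future_output)
    loss.backward()
    optimizer.step()
print("final training loss:", loss.item())
```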

"Accurate forecasts for weather-dependent generation like solar and wind are vital for us in operating a low carbon electricity system," said Carolina Tortora, head of innovation strategy and digital transformation at National Grid ESO, in a statement last week.

"The more confidence we have in our forecasts, the less we'll have to cover for uncertainty by keeping traditional, more controllable fossil fuel plants ticking over," she added.

Co-founded by former DeepMind employee Jack Kelly in 2018, Open Climate Fix was backed by Google's philanthropic arm, Google.org, with 500,000 in April.

At one point, DeepMind wanted to use its own AI technology to optimize National Grid. However, last March, it emerged that talks had broken down between DeepMind and National Grid.

While DeepMind denies it has shifted its focus from climate change to other areas of science, several key climate change researchers that were part of the company's energy unit have left the company over the last two years, and it has made few climate change-related announcements.

See more here:

UK aims to boost solar by predicting cloud movements with A.I. - CNBC

Google CEO says company will review disputed dismissal of top AI researcher | TheHill – The Hill

Google will review the process that led to the dismissal of a top artificial intelligence (AI) researcher who said she was fired after voicing concerns about the handling of a report on AI bias, though the company's head of research said Google merely accepted the researcher's resignation.

CEO Sundar Pichai sent a memo to employees on Wednesday that addressed the departure of AI ethics researcher Timnit Gebru, apologizing for the doubts it led to among the Google community and pledging to have an internal review of the process, according to a copy of the memo first reported by Axios.

"I've heard the reaction to Dr. Gebru's departure loud and clear: it seeded doubts and led some in our community to question their place at Google. I want to say how sorry I am for that, and I accept the responsibility of working to restore your trust," Pichai reportedly wrote.

"We will begin a review of what happened to identify all the points where we can learn, considering everything from de-escalation strategies to new processes we can put in place," he added.

A Google spokesperson confirmed the memo is authentic but said the company has "nothing further to share."

Gebru, however, dismissed the efficacy of Pichai's memo in addressing the root of the issues.

"I see no plans for accountability and there was further gaslighting in the statement," she tweeted after the memo was reported.

Gebru, a co-leader of Google's Ethical AI team, last week said on Twitter that she was fired from Google after an email she sent to a group of company researchers called Brain Women and Allies. In the email she reportedly commented on a report others were working on about minority hiring, referencing her own experience with pushback on an AI ethics paper she submitted for a conference.

Google requested Gebru retract an AI ethics paper she had co-written with six others, including four fellow employees, that was submitted for an industry conference next year, or at least remove the names of Google employees, she told Bloomberg News.

The paper detailed ethical considerations of large language models that are used in AI research, including in products such as Google's search engine, according to a blog post by a group protesting Gebru's dismissal.

Gebru said she asked for an explanation of the request to retract the paper, and said that without more discussion of the paper and the way it was handled, she would plan to resign after a transition period.

In the email to the Brain Women and Allies group, Gebru wrote, "Stop writing your documents because it doesn't make a difference," according to Bloomberg.

"There is no way more documents or more conversations will achieve anything," she reportedly wrote.

The next day she said she was fired by email.

Google senior vice president and head of research Jeff Dean last week pushed back on Gebru's claims that she was fired. He shared an email sent to the Google Research team that said the company accepted her decision to resign after her email detailed a number of conditions she had "in order for her to continue working at Google."

"Timnit wrote that if we didn't meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google," Dean wrote, according to the copy of the email he shared.

"I understand the concern over Timnit's resignation from Google. She's done a great deal to move the field forward with her research. I wanted to share the email I sent to Google Research and some thoughts on our research process." https://t.co/djUGdYwNMb

Thousands of Google employees, however, rallied behind Gebru, as did academics, industry and civil society supporters who signed a petition protesting her dismissal and calling on Google to be transparent about her departure.

Google Walkout for Real Change's petition underscored its plea for transparency on Gebru's dismissal by noting that she was one of very few Black women working as research scientists at the company.

Google Walkout for Real Change on Monday wrote an additional blog post disputing Dean's claims that Gebru resigned, backing the researcher's account of the situation.

The controversy over Gebru's dismissal adds to concerns over Google's treatment of its employees.

The National Labor Relations Board earlier this month filed a complaint against the tech giant alleging it illegally spied on and then fired two employees for organizing.

Google defended the firings, arguing the two workers had violated company policy.

Read the rest here:

Google CEO says company will review disputed dismissal of top AI researcher | TheHill - The Hill

Legaltech 2017: Announcements, AI, And The Future Of Law – Above the Law

I spent most of last week in the Midtown Hilton in New York City attending Legaltech 2017, or Legalweek: The Experience, or some variation of the two. For the most part, it pretty much had the same feel as every other Legaltech I've attended. But I agree with my fellow Above the Law tech columnist, Bob Ambrogi, that ALM deserves kudos for trying to change the focus a bit. It may take a year or two of experimentation to get it right, but at least they're trying.

This year, one of the topics that popped up over and over throughout the conference was artificial intelligence and its potential impact on the practice of law. In part, the AI focus was attributable to the keynote speaker on the opening day of the conference, Andrew McAfee, author of The Second Machine Age (affiliate link). His talk focused on ways that AI would disrupt business as usual in the years to come. His predictions were premised in part on his assertion that key technologies had improved greatly in recent years and that, as a result, we're in the midst of a convergence of these technologies such that AI is finally coming of age.

I was particularly excited about this keynote since I'd started reading McAfee's book in mid-December after Klaus Schauser, the CTO of AppFolio, MyCase's parent company, recommended it to me. As McAfee explains in his book, it's abundantly clear that AI is already having an incredible impact on other industries.

But what about the legal industry? I started mulling over this issue last September after attending ILTA in D.C. and writing about a few different legal software platforms grounded in AI concepts. Because I find this topic so interesting, I decided to hone in on it during my interviews at Legaltech as well, which I livestreamed via Periscope.

First I met with Mark Noel, managing director of professional services at Catalyst Repository Systems. After he shared the news of Catalyst's latest release, Insight Enterprise, a platform for corporate general counsel designed to centralize and streamline discovery processes, we turned to AI and his thoughts on how it will affect the legal industry over the next year. He believes that AI will eventually manage the more tedious parts of practicing law, allowing lawyers to focus on the analytical aspects that tend to be more interesting: "Some of the types of tasks lawyers are best at, I don't see AI taking over anytime soon. A lot of what lawyers work with is justice, fairness, and equity, which are more abstract. The ultimate goal of legal practice the human practitioner is going to have to do, but the grunt work and repeatable stuff, like discovery, which is becoming more onerous because of growing data volumes, those are the kinds of things these tools can take over for us." You can watch the full interview here.

Next I spoke with AJ Shankar, the founder of Everlaw, an ediscovery platform that recently rolled out an integrated litigation case management tool as well, which I wrote about here. According to AJ, AI is undergoing a renaissance across many different industries, but when it comes to the legal space, it's a different story: "AI is not ready to make the tough judgments that lawyers make, but it is ready to augment human processes. AI will become a very important assistant for you. It will work hand in hand with humans, who will then provide the valuable context." You can watch the full interview here.

I also met with Jack Grow, the president of LawToolBox, which provides calendaring and docketing software, and he talked to me about their latest integration with DocuSign. Then we moved on to AI, and Jack suggested that in the short term, the focus would be on aggregating the data needed to build useful AI platforms for the legal industry: "Over the next year software vendors will figure out how to collect better data that can be consumed for analysis later on, so it can be put into an algorithm to make better use of it. They'll be building the foundation and infrastructure so that they can later take advantage of artificial intelligence." You can watch the full interview here.

And last but certainly not least, I spoke with Jeremiah Kelman, the president of Everchron, a company that I've covered previously, which provides a collaborative case management platform for litigators. Jeremiah predicts that AI will provide very targeted and specific improvements for lawyers: "Replacement of lawyers sounds interesting, but it's more about leveraging the information you have and the data that is out there, and using it to provide insights and give direction to lawyers as they do their tasks and speed up what they do. From research, ediscovery, case management, and things across the spectrum, we'll see it in targeted areas, and you'll get the most impact from leveraging and improving within the existing framework." You can watch the full interview here.

Nicole Black is a Rochester, New York attorney and the Legal Technology Evangelist at MyCase, web-based law practice management software. She's been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, co-authors Social Media for Lawyers: the Next Frontier, and co-authors Criminal Law in New York. She's easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter @nikiblack and she can be reached at niki.black@mycase.com.

View original post here:

Legaltech 2017: Announcements, AI, And The Future Of Law - Above the Law

Surge in Remote Working Leads iManage to Launch Virtual AI University for Companies that Want to Harness the Power of the RAVN AI Engine -…

CHICAGO, April 09, 2020 (GLOBE NEWSWIRE) -- iManage, the company dedicated to transforming how professionals work, today announced that it has rolled out a virtual Artificial Intelligence University (AIU), as an adjunct to its customer on-site model. With the virtual offering, legal and financial services professionals can actively participate in project-driven, best-practice remote AI workshops that use their own, real-world data to address specific business issues even amidst the disruption caused by the COVID-19 outbreak.

AIU helps clients to quickly and efficiently learn to apply machine learning and rules-based modeling to classify, find, extract and analyze data within contracts and other legal documents for further action, often automating time-consuming manual processes. In addition to delivering increases in speed and accuracy of data search results, AI frees practitioners to focus on other high-value work. Driven both by the need of organizations to reduce operational costs and to adapt to fundamental shifts toward remote work practices, virtual AIU is playing an important role in helping iManage clients continue to work and collaborate productively. The curriculum empowers end users with all the skills they need to quickly ramp up the efficiency and breadth of their AI projects using the iManage RAVN AI engine.

"Participating in AIU was a huge win for us. We immediately saw the impact AI would have in surfacing information we need and allowing us to action it to save time, money and frustration," said Nikki Shaver, Managing Director, Innovation and Knowledge, Paul Hastings. "The workshop gave us deep insight into how to train the algorithm effectively for the best possible effect. And, very quickly, more opportunities came to light as to how AI could augment our business in the longer term," continued Shaver.

"AI is a transformation technology that's continuing to gain momentum in the legal, financial and professional services sectors. But many firms don't yet have the internal knowledge or training to deliver on its promise. iManage is committed to helping firms establish AI Centers of Excellence, not just sell them a kit and walk away," said Nick Thomson, General Manager, iManage RAVN. "We've found the best way to ensure client success is to educate and build up experience inside the firm about how AI works and how to apply it to a broad spectrum of business problems."

Deep Training Delivers Powerful Results

iManage AIU's targeted, hands-on training starts with the fundamentals but delves much deeper, enabling organizations to put the flexibility and speed of the technology to work across myriad scenarios. RAVN easily helps facilitate actions like due diligence, compliance reviews or contract repapering, as well as more sophisticated modeling that taps customized rule development to address more unique use cases.

The advanced combination of machine learning and rules-based extraction capabilities in RAVN make it the most trainable platform on the market. Users can teach the software what to look for, where to find it and then how to analyze it using the RAVN AI engine.
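
As a hedged illustration of the general pattern described above (not RAVN's actual engine), a rules-based extractor can pull structured fields such as dates while a trained classifier labels clause types; the regex, training examples and labels below are toy assumptions.

```python
# Combine a hand-written extraction rule with a small learned classifier.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Rule-based extraction: a simple pattern for dates like "12 March 2020".
DATE_RULE = re.compile(r"\b\d{1,2} (?:January|February|March|April|May|June|July|"
                       r"August|September|October|November|December) \d{4}\b")

# Machine-learned classification: TF-IDF features plus logistic regression.
train_clauses = [
    "This agreement shall terminate upon thirty days written notice.",
    "Either party may terminate this contract for material breach.",
    "Each party shall keep the other's information strictly confidential.",
    "Confidential information must not be disclosed to third parties.",
]
train_labels = ["termination", "termination", "confidentiality", "confidentiality"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_clauses, train_labels)

clause = ("This agreement, dated 12 March 2020, shall terminate "
          "if notice is given in writing.")
print("dates found:", DATE_RULE.findall(clause))        # rule-based result
print("clause type:", classifier.predict([clause])[0])  # learned result
```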

Armed with the tools and training to put AI to work across their data stores and documents, AIU graduates can help their organizations unlock critical knowledge and insights in a repeatable way across the enterprise.

Interactive Curriculum Builds Strong Skillsets

The personalized, interactive course is delivered over three half-day sessions, via video conferencing, to a small team of customer stakeholders. Such teams may include data scientists, knowledge managers, lawyers, partners, contract specialists, and trained legal staff. AIU is also available to firms that are considering integrating the RAVN engine and would like to see AI in action as they assess the potential impact of the solution on their businesses.

Expert iManage AI instructors, with deep technology and legal expertise, work with clients in advance to help identify use cases for the virtual AIU. The iManage team fully explores client use cases prior to the training to facilitate the most effective approach to extraction techniques for client projects.

The daily curriculum includes demonstrations with user data and individual and group exercises to evaluate and deepen user skills. Virtual breakout rooms for project drill down and feedback mechanisms, such as polls and surveys, help solidify learning and make the sessions more interactive. Recordings and transcripts allow customers to revisit AIU sessions at any time.

For more information on iManage virtual AIU or on-site training read our AI blog post or contact us at AIU@imanage.com.

Follow iManage via: Twitter: https://twitter.com/imanageinc LinkedIn: https://www.linkedin.com/company/imanage

About iManage

iManage transforms how professionals in legal, accounting and financial services get work done by combining artificial intelligence, security and risk mitigation with market-leading document and email management. iManage automates routine cognitive tasks, provides powerful insights and streamlines how professionals work, while maintaining the highest level of security and governance over critical client and corporate data. Over one million professionals at over 3,500 organizations in over 65 countries, including more than 2,500 law firms and 1,200 corporate legal departments and professional services firms, rely on iManage to deliver great client work securely.

Press Contact: Anastasia Bullinger, iManage, +1.312.868.8411, press@imanage.com

See the original post here:

Surge in Remote Working Leads iManage to Launch Virtual AI University for Companies that Want to Harness the Power of the RAVN AI Engine -...

True AI cannot be developed until the ‘brain code’ has been cracked: Starmind – ZDNet

Marc Vontobel, CTO & Pascal Kaufmann, CEO, Starmind

Artificial intelligence is stuck today because companies are likening the human brain to a computer, according to Swiss neuroscientist and co-founder of Starmind Pascal Kaufmann. However, the brain does not process information, retrieve knowledge, or store memories like a computer does.

When companies claim to be using AI to power "the next generation" of their products, what they are unknowingly referring to is the intersection of big data, analytics, and automation, Kaufmann told ZDNet.

"Today, so called AI is often just the human intelligence of programmers condensed into source code," said Kaufmann, who worked on cyborgs previously at DARPA.

"We shouldn't need 300 million pictures of cats to be able to say whether something is a cat, cow, or dog. Intelligence is not related to big data; it's related to small data. If you can look at a cat, extract the principles of a cat like children do, then forever understand what a cat is, that's intelligence."

He even said that it's not "true AI" that led to AlphaGo -- a creation of Google subsidiary DeepMind -- mastering what is revered as the world's most demanding strategy game, Go.

The technology behind AlphaGo was able to look at 10 to 20 potential future moves and lay out the highest statistics for success, Kaufmann said, and so the test was one of rule-based strategy rather than artificial intelligence.

The ability for a machine to strategise outside the context of a rule-based game would reflect true AI, according to Kaufmann, who believes that AI will cheat without being programmed not to do so.

Additionally, the ability to automate human behaviour or labour is not necessarily a reflection of machines getting smarter, Kaufmann insisted.

"Take a pump, for example. Instead of collecting water from the river, you can just use a pump. But that is not artificial intelligence; it is the automation of manual work ... Human-level AI would be able to apply insights to new situations," Kaufmann added.

While Facebook's plans to build a brain-computer interface and Elon Musk's plans to merge the human brain with AI have left people wondering how close we are to developing true AI, Kaufmann believes the "brain code" needs to be cracked before we can really advance the field. He said this can only be achieved through neuroscientific research.

Earlier this year, founder of DeepMind Demis Hassabis communicated a similar sentiment in a paper, saying the fields of AI and neuroscience need to be reconnected, and that it's only by understanding natural intelligence that we can develop the artificial kind.

"Many companies are investing their resources in building faster computers ... we need to focus more on [figuring out] the principles of the brain, understand how it works ... rather than just copy/paste information," Kaufmann said.

Kaufmann admitted he doesn't have all the answers, but finds it "interesting" that high-profile entrepreneurs such as Musk and Mark Zuckerberg, neither of whom has an AI or neuroscience background, have such strong and opposing views on AI.

Musk and Zuckerberg slung mud at each other in July, with the former warning of "evil AI" destroying humankind if not properly monitored and regulated, while the latter spoke optimistically about AI contributing to the greater good, such as diagnosing diseases before they become fatal.

"One is an AI alarmist and the other makes AI look charming ... AI, like any other technology, can be used for good or used for bad," said Kaufmann, who believes AI needs to be assessed objectively.

In the interim, Kaufmann believes systems need to be designed so that humans and machines can work together, not against each other. For example, Kaufmann envisions a future where humans wear smart lenses -- comparable to the Google Glass -- that act as "the third half of the brain" and pull up relevant information based on conversations they are having.

"Humans don't need to learn stuff like which Roman killed the other Roman ... humans just need to be able to ask the right questions," he said.

"The key difference between human and machine is the ability to ask questions. Machines are more for solutions."

Kaufmann admitted, however, that humans don't know how to ask the right questions a lot of the time, because we are taught to remember facts in school, and those who remember the most facts are the ones who receive the best grades.

He believes humans need to be educated to ask the right questions, adding that the question is 50 percent of the solution. The right questions will not only allow humans to understand the principles of the brain and develop true AI, but will also keep us relevant even when AI systems proliferate, according to Kaufmann.

If we want to slow down job loss, AI systems need to be designed so that humans are at the centre of it, Kaufmann said.

"While many companies want to fully automate human work, we at Starmind want to build a symbiosis between humans and machines. We want to enhance human intelligence. If humans don't embrace the latest technology, they will become irrelevant," he added.

The company claims its self-learning system autonomously connects and maps the internal know-how of large groups of people, allowing employees to tap into their organisation's knowledge base or "corporate brain" when they have queries.

Starmind platform

Starmind is integrated into existing communication channels -- such as Skype for Business or a corporate browser -- eliminating the need to change employee behaviour, Kaufmann said.

Questions typed in the question window are answered instantly if an expert's answer is already stored in Starmind, and new questions are automatically routed to the right expert within the organisation, based on skills, availability patterns, and willingness to share know-how. All answers enhance the corporate knowledge base.
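
As a speculative sketch of how question routing like this could work (not Starmind's actual algorithm), each expert's skills can form a text profile, and a new question can be routed to the most similar profile by cosine similarity; the expert names, skills and question are invented for illustration.

```python
# Route a question to the expert whose skill profile it most resembles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

experts = {
    "Ana":  "python machine learning data pipelines etl",
    "Ben":  "sap invoicing erp finance reporting",
    "Caro": "network security firewalls vpn incident response",
}

vectorizer = TfidfVectorizer()
profile_matrix = vectorizer.fit_transform(experts.values())

question = "how do I set up a vpn through the corporate firewall?"
scores = cosine_similarity(vectorizer.transform([question]), profile_matrix)[0]

best = max(zip(experts, scores), key=lambda pair: pair[1])
print(f"route to {best[0]} (similarity {best[1]:.2f})")
```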

"Our vision is if you connect thousands of human brains in a smart way, you can outsmart any machine," Kaufmann said.

On how this is different to asking a search engine a question, Kaufmann said Google is basically "a big data machine" and mines answers to questions that have already been asked, but is not able to answer brand new questions.

"The future of Starmind is we actually anticipate questions before they're even asked because we know so much about the employee. For example, we can say if you are a new hire and you consume a certain piece of content, there will be a 90 percent probability that you will ask the following three questions within the next three minutes and so here are the solutions."

Starmind is currently being used across more than 40 countries by organisations such as Accenture, Bayer, Nestlé, and Telefonica Deutschland.

While Kaufmann thinks it is important at this point in time to enhance human intelligence rather than replicate it artificially, he does believe AI will eventually substitute humans in the workplace. But unlike the grim picture painted by critics, he doesn't think it's a bad thing.

"Why do humans need to work at all? I look forward to all my leisure time. I do not need to work in order to feel like a human," Kaufmann said.

When asked how people would make money and sustain themselves, Kaufmann said society does not need to be ruled by money.

"In many science fiction scenarios, they do not have money. When you look at the ant colonies or other animals, they do not have cash," Kaufmann said.

Additionally, if humans had continuous access to intelligent machines, Kaufmann said "the acceleration of human development will pick up" and "it will give rise to new species".

"AI is the ultimate tool for human advancement," he firmly stated.

Link:

True AI cannot be developed until the 'brain code' has been cracked: Starmind - ZDNet