
Category Archives: Ai

A spotlight on the EU's AI legislation: Realizing the full potential of AI – ITProPortal

Posted: September 1, 2021 at 12:24 am

The EU's proposed AI legislation, published in April, sparked debate on the true impact the new AI rules would have on businesses. Overall, the legislation seems to have the potential to benefit society as a whole, but it could ultimately hinder companies and how they use AI in the long term.

However, O'Reilly's recent AI in the Enterprise research discovered that, while more businesses are continuing to use AI or are considering implementing it in the near future, only 52 percent of these companies are checking for issues of fairness or bias within their AI systems.

One of the major roadblocks to AI's advancement has been a lack of trust in the technology. This is especially true in the public sector, where AI-assisted choices may have a significant influence on people's lives. The EU's AI legislation aims to correct this by assisting organizations in navigating ethical AI usage. This will help to establish trust over time, allowing businesses to ultimately realize AI's full potential. Businesses must now change how they use and implement AI to ensure that they always fall on the right side of the line.

The AI train has been rapidly gaining momentum in recent years, both in terms of business usage and results. We're now seeing the technology being used for cancer detection, climate change analysis, traffic control and marketing. Globally, a quarter (26 percent) of businesses have reached the mature stage of AI usage, meaning they have revenue-yielding AI products in production. In the UK, this figure is even higher, with 36 percent classifying their AI usage as mature.

Looking at the industry breakdown, retail came out on top, with 40 percent claiming that their usage of AI was mature. This was closely followed by financial services (38 percent) and telecommunications (37 percent). Comparatively, education (10 percent) and government (16 percent) were the least mature in their usage of AI.

The stats suggest that, while AI adoption in the private sector is snowballing, the public sector is struggling to keep up. The question is: why?

There is likely more than one reason why the public sector is struggling in its uptake of AI. Budgetary concerns could certainly be a key issue, but perhaps not enough to account for such a large difference between the public and private sectors. The other glaring issue is public trust.

The general public already had their guard up against the use of AI in the public sector. Their worst fears were then proven correct in 2020, when A-Level and GCSE grades were predicted using an AI algorithm that faced accusations of bias. This led to the results being scrapped and replaced by predicted grades given by teachers. It's examples like these that damage public trust in AI.

In terms of checking AI models for bias, the UK is ahead of the global standard. Across the globe, just 52 percent of companies are checking their algorithms for bias. Meanwhile, in the UK, this figure rises to 56 percent. However, when it comes to decisions that impact people's lives and their futures, a little better than half isn't enough. This goes for both the public and the private sector. Private sector companies, such as banks, also have the power to make decisions that can impact people's lives.

The EUs AI legislation, which focuses heavily on AI ethics, should force companies to confront these shortcomings and be the starting point for organizations to build public trust and, in time, release the handbrake which is holding AI back. A more educated approach to AI will be key to achieving this.

It's clear that not enough businesses are checking for bias in their AI models. However, research suggests that this isn't necessarily negligence but, instead, a lack of training and skills. Globally, the biggest bottlenecks to AI adoption are a lack of skilled people (19 percent) and data quality (18 percent). In the UK, a quarter (25 percent) labeled a lack of data/data quality as a major hindrance and 14 percent said the same about skills within the organization.

This skills gap is already having a huge impact on the adoption of AI and, with the introduction of the EU's AI legislation, will have an even greater impact if businesses do not act soon. Half of UK businesses admitted that only about 50 percent of their AI projects are actually completed. Meanwhile, as we've seen, those that are completed run the risk of being biased. Moving forward, neither of these outcomes will be profitable for companies.

To close this skills gap, businesses must ensure that they are providing adequate training for the employees who work with AI. This means equipping them with the necessary knowledge to develop and train an algorithm that is both highly functional and ethical. Feeding the algorithm high-quality, unbiased data is the first step, but employees must also be trained to consistently check the algorithm for bias or inconsistencies and make the necessary changes.

With the introduction of the new AI laws, some employees may be nervous about making a mistake. Businesses can take this fear away by empowering their employees to learn in the flow of work. This means allowing them to ask questions and receive quick answers, based on the most up-to-date guidance, which they can apply to their work. The learning platforms to enable this exist, and it's now time for employers to start leaning on them. Otherwise, they could be among the first organizations to feel the sting of the new AI legislation.

Businesses and organizations may be tempted to interpret the new AI regulation as a restriction on their technological ambitions. Instead, it should be viewed as guidance that will help them make the most of AI. Companies can roll out AI initiatives without fear of public backlash if they stay inside the confines of the new legislation. This will then enable them to test new AI technologies with greater confidence in the long term. However, to build this trust, businesses need to keep getting it right when it comes to AI. This means no more instances of AI bias or technologies that push the boundaries of privacy. Regular education and training are the only way to achieve this level of continued excellence.

Rachel Roumeliotis, Vice President of Data and AI, O'Reilly

Read the rest here:

A spotlight on the EU's AI legislation: Realizing the full potential of AI - ITProPortal


FogHorn and Lightning Edge AI Platform Recognized as Overall Leader by ABI Research – Business Wire

Posted: at 12:24 am

SUNNYVALE, Calif.--(BUSINESS WIRE)--FogHorn, a leading developer of Edge AI software for industrial and commercial Internet of Things (IoT) solutions, today announced its ranking as the Overall Leader, Top Innovator and Top Implementer in ABI Research's competitive vendor assessment on IoT Edge Analytics: Hardware-Agnostic SaaS and PaaS.

ABI Research assessed vendors based on their deployment of advanced edge analytics and artificial intelligence (AI) technologies to customers in various industries, go-to-market strategies, scalability and efficiency. The competitive ranking offers an unbiased assessment of edge-cloud software-as-a-service (SaaS) and platform-as-a-service (PaaS) technologies enabling the Internet of Things (IoT) for enterprises, covering vendors that are providing hardware-agnostic machine learning (ML) and AI.

"Leveraging edge intelligence enables enterprises to achieve operational efficiency, reduce costs and enhance workplace and asset monitoring," said Chris Penrose, Chief Operating Officer at FogHorn. "We're honored to be recognized by ABI Research for enabling our customers to reach these goals with our Lightning Edge AI Platform and Solutions. This competitive assessment showcases the value we're driving for our customers and highlights a variety of edge AI use cases that ultimately enhance their decision-making and ROI with data-driven insights."

FogHorn established itself as the leader due to its performance in the industrial vertical and its wide range of clients and strategic partnerships. Additionally, ABI Research noted FogHorn's ability to serve multiple IoT use cases, and its sophisticated capabilities for predictive analytics and ML, as key considerations in its evaluation. As highlighted by ABI Research, FogHorn received higher implementation scores because of its influence and adoption rate of video analytics in the IoT domain.

FogHorn was also ranked as a vendor successfully monetizing market opportunities resulting from the COVID-19 pandemic. In June 2020, FogHorn announced its Health and Safety monitoring solution, which enables enterprises to address employee wellbeing, help prevent exposure to COVID-19 and monitor workplace safety through personal protective equipment detection and hazard monitoring. This solution, delivered as a ready-to-use package combining ML with video analytics, diversified FogHorn's product portfolio relative to other edge AI vendors, according to ABI Research.

FogHorn's Lightning Edge AI Platform was the first edge-native AI solution built for secure, on-site intelligence. Its edge processing capabilities are ideal for low-latency use cases, enabling real-time data processing that harnesses ML and AI within a minimal compute footprint. In addition to its recognition as an Overall Leader, FogHorn earned a top mark for predictive and ML modeling, as well as for measurement against ABI Research's unique innovation criteria.

Download a copy of ABI Research's competitive assessment ranking on IoT Edge Analytics: Hardware-Agnostic SaaS/PaaS from the FogHorn website here.

About FogHorn

FogHorn is a leading developer of edge AI software for industrial and commercial IoT application solutions. FogHorn's software platform brings the power of advanced analytics and machine learning to the on-premises edge environment, enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance, and operational intelligence use cases. FogHorn's technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, retail, as well as smart grid, smart city, smart building, and connected vehicle applications.

See the original post here:

FogHorn and Lightning Edge AI Platform Recognized as Overall Leader by ABI Research - Business Wire


Patent Protection On AI Inventions – Intellectual Property – United States – Mondaq News Alerts

Posted: at 12:24 am

31 August 2021

Sheppard Mullin Richter & Hampton


In recent years, AI patent activity has exponentially increased. The figure below shows the volume of public AI patent applications categorized by AI component in the U.S. from 1990-2018. The eight AI components in FIG. 1 are defined in an article published in 2020 by the USPTO. Most of the AI components have experienced explosive growth in the past decade, especially in the areas of planning/control and knowledge processing (e.g., using big data in automated systems).

Figure 1. AI patent activities by year

AI technology is complex and includes different parts across different fields. Inventors and patent attorneys often face the challenge of effectively protecting new AI technology development. The rule of thumb is to focus the patent protection on what the inventors improve over the conventional technology. However, inventors often need to improve various aspects of an existing AI system to make it fit and work for their applications. In the following sections, we will discuss an illustrative list of subject areas that may offer patentable AI inventions.

The training phase of an AI system includes most of the exciting technical aspects of machine learning algorithms exploring the latent patterns embedded in the training data. A typical training process includes preparing training data, transforming the training data to facilitate the training process, feeding the training data to a machine learning model, fitting (training) the machine learning model based on the training data, testing the trained machine learning model, and so on. Different AI models or machine learning models may have different training processes, such as supervised training based on labeled training data, unsupervised training that infers a function to describe a hidden structure from unlabeled training data, semi-supervised training based on partially-labeled training data, reinforcement learning (RL), etc. Many steps in this training phase may yield patent-protectable ideas.
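As a rough illustration of the training workflow just described (prepare, transform, fit, test), here is a minimal supervised-learning sketch in Python; the dataset, scaler and model are generic stand-ins chosen for brevity, not specific to any patented system.

```python
# Minimal sketch of the typical supervised training workflow described
# above: prepare data, transform it, fit a model, then test it.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Prepare labeled training data (supervised learning)
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Transform the data to facilitate training
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Fit (train) the model, then test the trained model on held-out data
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```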

The application phase of an AI system includes applying the trained models to make predictions, inferences, classifications, etc. This phase generally covers the real application of the AI system. It can provide easier infringement detectability and thus valuable patent protection for the AI system. In this digital era, AI systems can be applied to almost every aspect of our life. For example, an AI patent can claim or describe how the AI system helps the user to make better decisions or perform previously impossible tasks. These applications may be deemed practical applications that are powerful in overcoming potential "abstract idea" rejections during the prosecution of the AI patent.

On the other hand, simply claiming an AI system as a magical black box that generates accurate predictions based on input data will likely trigger rejections during prosecution, such as patentable subject matter rejections (e.g., a simple application of the black box may be categorized as human activities). There are various ways to reduce the chances of getting such rejections. For example, adding a brief description of the training process or the machine learning model structure helps overcome 35 U.S.C. § 101 rejections.

Another flavor of AI patents is related to accelerators, hardware pieces with built-in software logic that accelerate the training and/or inferencing process. These AI patents may be claimed from either a software perspective or a hardware perspective. Some examples include specially designed hardware to improve training efficiency by working with GPU/TPU/NPU/xPU (e.g., by reducing data migrations among different components/units), memory layout changes to improve the computational efficiency of computing-intensive steps, arrangement of processing units for easy data sharing and efficient parallel training (e.g., segmenting tensors to evenly distribute workloads to processors), and an architecture that fully exploits the sparsity of tensors to improve computation efficiency.

State-of-the-art AI systems are far from perfect. Robustness, safety, reliability, and data privacy are just some of the most noticeable pain points in training and deploying AI systems. For example, an AI model trained on a first domain may have near-perfect accuracy for inferencing in the first domain, but generate disastrous inferences when deployed in a second domain, even though the domains share some similarities. Therefore, how to train an AI model efficiently and adaptively so that it is robust when deployed in all domains of interest is both challenging and intriguing.

As another example, AI systems trained on training data may be easily fooled by adversarial attacks. For instance, a second deep neural network may be designed to compete against the first one to identify its weaknesses. The safety and reliability of such AI systems will be critical in the coming years and may be important patentable subject matter.

As yet another example, training data may in many cases include sensitive data (e.g., customer data), and directly using such training data may result in serious data privacy breaches. This problem becomes more alarming when a plurality of entities collectively train a model using their own training data. Accordingly, researchers and engineers have been exploring differential privacy protection and federated learning to address these issues.
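As a toy illustration of the differential-privacy idea mentioned above, the sketch below adds calibrated Laplace noise to an aggregate statistic before release; the bounds, epsilon value and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_mean(values, lower, upper, epsilon):
    """Release a differentially private mean via the Laplace mechanism."""
    values = np.clip(values, lower, upper)           # bound each record's influence
    sensitivity = (upper - lower) / len(values)      # max effect of one record
    noise = rng.laplace(0.0, sensitivity / epsilon)  # noise calibrated to sensitivity
    return values.mean() + noise

# Invented sensitive customer data; the released mean masks any individual
spend = np.array([120.0, 340.0, 80.0, 560.0, 210.0])
print(private_mean(spend, lower=0.0, upper=1000.0, epsilon=0.5))
```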

The content of this article is intended to provide a generalguide to the subject matter. Specialist advice should be soughtabout your specific circumstances.


Original post:

Patent Protection On AI Inventions - Intellectual Property - United States - Mondaq News Alerts


The past, present, and future of AI in financial services – Finextra

Posted: at 12:24 am

As the use cases for AI in financial services continue to grow and deliver value for organizations and customers alike, I'd like to provide some insight on where I think the technology is delivering most value at the moment, and also where I think we are headed. Firstly, though, a little on how far we've come.

Many people don't realize that AI has been around since the 1950s. Models like linear regression, support vector machines, and many more have been used for decades. The application of traditional and novel algorithmic design choices continues to unlock real value in financial services.

Deep learning has also been around for a long time, but use cases only gained traction in the mid-2000s as datasets and computational power expanded enough to showcase its true potential. As financial services use cases evolved, deep learning became a key tool to solving problems we otherwise could not accomplish with more rudimentary machine learning models.

Today, we are seeing a lot of investment in neural network-based models totaling billions of parameters, trained on multi-billion point datasets. Compared to first-generation models, these models are computationally expensive and highly complex. The ability to train models of this size, with increasing ease, shows how far the technology has come.

Competition, collaboration and customer experience

Computational hardware and the advent of increasingly powerful GPUs are expanding the boundary for larger neural networks trained on massive datasets. All of this advancement is paying off for banks, consumers and the fintech ecosystem at large. Creative solutions from third parties, fintechs, and challenger banks are solving tough problems in the financial services sector, which is pushing incumbent banks to challenge the challengers and harness the power of AI in the products and services they offer to their customers. One thing incumbents have over their agile rivals, however, is troves of data, which is the lifeblood of AI. Knowing how to harness this data is perhaps the key challenge incumbents face, which is naturally leading to increased collaboration with fintechs and third-party data specialists.

Customers are greatly benefitting from the current competitive environment on both the corporate and retail banking sides. Personal finance is a great example of the latter, as AI now allows consumers to have a personal assistant for their own finances, democratizing access to advisory services.

While chatbots have existed for a number of years, it is only recent advancements that have enabled more compelling use cases. From traditional rule-based bots to research into deep learning-based generative bots, we have seen tremendous advancement in chatbot quality. Neural network-based chatbots, for example, can provide an easy interface for users to get spending advice, understand their balance and spending, and get insight into transaction details.

With the advent of multi-billion parameter knowledge models, fine-tuned on personal finance data, the performance and usability of chatbots are better than ever, with most capable of delivering detailed account insights. The move towards digital channels brought about by the pandemic has also created a wealth of data on which models can be trained, further improving personal finance products and services.
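As a rough sketch of the generative chatbots described here, the snippet below uses the Hugging Face transformers conversational pipeline as it existed around this time; the default dialogue model and the example question are illustrative, not a fine-tuned personal-finance assistant.

```python
from transformers import pipeline, Conversation  # transformers, circa 2021

# Illustrative generative chatbot; a real personal-finance assistant
# would be fine-tuned on domain data rather than using the default model.
chatbot = pipeline("conversational")  # downloads a general dialogue model

conversation = Conversation("Can you help me understand my spending?")
conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
```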

These use cases have tangible and immediate benefits for both banks and consumers. Customers no longer have to spend time waiting in line to speak to customer service representatives when they know exactly the information they need and how to access it via automated channels. Meanwhile, banks can improve customer experience and reduce the overheads tied to their customer support cost-centers.

Looking to the future

Over the next few years, I believe there will be an increase in data-sharing between banks and fintechs. The data banks hold is sensitive and highly safeguarded, but improvements in federated learning and synthetic data generation methods will allow partnerships between those developing models, and those holding the data, to flourish.

The next area is natural language processing (NLP). As mentioned above, there have been huge advances in massive, billion-parameter neural network architectures trained on multi-billion point datasets. From these, there are countless possibilities for transfer learning and knowledge distillation on more specific tasks. One only needs to look at the incredible use cases enabled by GPT-3 to understand the potential of such models.

Another area where I believe data and AI will create opportunities is in providing access to credit through leveraging alternative data. Using such data, banks can begin to provide services to the credit invisible, unlocking financial support for those without traditional credit histories, providing a fairer environment for consumers to gain access to capital.

In a similar vein, an area that I believe will see significant investment is algorithmic fairness and the push for elimination of algorithmic bias in predictive and decision-making models. Increasingly, banks will be required to understand and explain how and why decision-making models arrive at certain outcomes, particularly if they negatively affect certain groups or individuals.

AI can often be maligned by those who believe every advancement will move us closer to the decimation of the job market, or even a more outlandish science fiction scenario. But AI is increasingly being used for good across industries, from healthcare and environmental modeling to financial services. Data is the most effective asset we have in the fight against inefficiency, inequality and injustice, and AI is the means by which we will unlock its true potential.

Read the rest here:

The past, present, and future of AI in financial services - Finextra


Illinois Tech Research Wants AI to ID Online Extremists – Government Technology

Posted: at 12:23 am

Before Patrick Crusius killed 23 people in a Walmart in El Paso, Texas, in 2019, he posted a manifesto of white nationalist and anti-immigrant rhetoric on 8chan, an Internet message board. Before John Earnest shot up a synagogue outside San Diego, he posted an anti-Semitic open letter on 8chan and tried to livestream the attack on Facebook. Before Brenton Tarrant killed 51 people in Christchurch, New Zealand, he shared a 74-page manifesto about the white genocide conspiracy theory on Twitter and 8chan before livestreaming one of his crimes on Facebook.

Typical of domestic terrorists and violent extremists today, each of these culprits was active on social media, leaving online records of words and thoughts related to their crimes. Now building upon military tactics to locate terror threats online, researchers at the Illinois Institute of Technology think machine learning and artificial intelligence could turn these social media posts into breadcrumb trails for governments and investigators to identify anonymous accounts.

In a paper co-authored by assistant professors from Illinois Tech and the University of Nebraska, graduate students Andreas Vassilakos and Jose Luis Castanon Remy combined Maltego software, an application in the digital forensics platform Kali Linux, with a process used by the military called open source intelligence (OSINT). With Maltego, they compiled various social media posts on Twitter, 4chan and Reddit and did a link analysis to find the same entity appearing in more than one place, for instance connecting a Twitter feed to a name in online court documents.
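As a toy illustration of the link-analysis idea (not the researchers' actual Maltego pipeline), the sketch below builds a small graph of accounts and identifiers and flags identifiers shared across sources; all handles and names are invented.

```python
import networkx as nx

# Toy OSINT link analysis: nodes are accounts or identifiers, and edges
# record where each pairing was observed. All handles are invented.
G = nx.Graph()
G.add_edge("twitter:@example_user", "email:jdoe@example.com", source="twitter bio")
G.add_edge("reddit:u/example_user", "email:jdoe@example.com", source="reddit post")
G.add_edge("court_doc:John Doe", "email:jdoe@example.com", source="public record")

# An identifier appearing in more than one place links otherwise
# anonymous accounts to each other (and here to a named court record).
for node in G.nodes:
    if G.degree(node) > 1:
        print(node, "->", sorted(G.neighbors(node)))
```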

One problem with the manual process: it's highly time-consuming, and there are already too few people doing those jobs. There are 464,420 job openings nationwide in the cybersecurity field, public and private sector combined, according to cyberseek.org. Dawson said his research team is in the process of coding the AI and machine learning to automate some of the work of scraping and link analysis, and he mentioned domestic terrorism and gang activity as possible use cases.

"What we're trying to do is find a way to make this fully open source and available to anyone who wants to do it, namely state and federal government, and have it automated. If we have a domestic terrorism event, let's create an intelligence profile of this event. This profile can be created from tweets and stuff like that, so we've created the process, and now we're continuing to use AI and machine learning to further automate this," he said. "You could take this technology and identify who these people are and go to these entities even before something happens."

"In order to mitigate the problem, you have to automate a lot of these tasks," he said.

Andrew Westrope is managing editor of the Center for Digital Education. Before that, he was a staff writer for Government Technology, and previously was a reporter and editor at community newspapers. He has a bachelor's degree in physiology from Michigan State University and lives in Northern California.

Follow this link:

Illinois Tech Research Wants AI to ID Online Extremists - Government Technology


AI identifies heart failure patients best suited to beta-blocker treatment – Health Europa

Posted: at 12:23 am

Researchers from the University of Birmingham have used a series of Artificial Intelligence (AI) techniques to identify the heart failure patients most likely to benefit from treatment with beta-blockers.

The findings from the study have been published in The Lancet.

Beta-blockers work predominantly by slowing down the heart, which they do by blocking the action of hormones like adrenaline. Although they are commonly used to treat conditions such as angina, heart failure, and atrial fibrillation (AF), beta-blockers are not suitable for everyone. For example, beta-blockers are not recommended for patients with low blood pressure, metabolic acidosis, or lung disease.

Aiming to integrate AI techniques to improve the care of cardiovascular patients, researchers looked at data involving 15,669 patients with heart failure and reduced left ventricular ejection fraction (low function of the heart's main pumping chamber), 12,823 of whom were in normal heart rhythm and 2,837 of whom had atrial fibrillation, a heart rhythm condition commonly associated with heart failure that leads to worse outcomes. The research was led by the cardAIc group, a multi-disciplinary team of clinical and data scientists at the University of Birmingham and the University Hospitals Birmingham NHS Foundation Trust.

Using AI techniques to deeply investigate the clinical trial data, the team found that this approach could determine different underlying health conditions for each patient, as well as the interactions of these conditions, to isolate response to beta-blocker therapy. This worked in patients with normal heart rhythm, where doctors would normally expect beta-blockers to reduce the risk of death, as well as in patients with AF, where previous work has found a lack of effectiveness. In normal heart rhythm, a cluster of patients (who had a combination of older age, less severe symptoms, and lower heart rate than average) was identified with reduced benefit from beta-blockers. Conversely, in patients with AF, the research found a cluster of younger patients with lower rates of prior heart attack but similar heart function to the average AF patient who had a substantial reduction in death with beta-blockers (from 15% to 9%).

The study used data collated and harmonised by the Beta-blockers in Heart Failure Collaborative Group, a global consortium dedicated to enhancing treatment for patients with heart failure. The research used individual patient data from nine landmark trials in heart failure that randomly assigned patients to either beta-blockers or a placebo. The average age of study participants was 65 years, and 24% were women. The AI-based approach combined neural network-based variational autoencoders and hierarchical clustering within an objective framework, and with detailed assessment of robustness and validation across all the trials.
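To make the two-stage approach concrete, here is a schematic sketch (not the cardAIc group's code): embed patient features into a low-dimensional space, then apply hierarchical clustering. PCA stands in for the study's variational autoencoder, and the patient data is randomly generated.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# Schematic sketch: embed patient features, then hierarchically cluster
# to find phenotypes. PCA stands in for the variational autoencoder.
rng = np.random.default_rng(0)
patients = rng.normal(size=(200, 12))  # placeholder clinical features

embedded = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(patients))
labels = AgglomerativeClustering(n_clusters=4).fit_predict(embedded)

# Treatment response can then be analysed per cluster rather than overall.
for k in range(4):
    print(f"cluster {k}: {np.sum(labels == k)} patients")
```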

The researchers say that these AI approaches could go further than this research into a specific treatment, with the potential to be applied to a range of other cardiovascular conditions and more.

Corresponding author Georgios Gkoutos, Professor of Clinical Bioinformatics at the University of Birmingham, Associate Director of Health Data Research Midlands, and co-lead for the cardAIc group, said: "Although tested in our research in trials of beta-blockers, these novel AI approaches have clear potential across the spectrum of therapies in heart failure, and across other cardiovascular and non-cardiovascular conditions."

Corresponding author Dipak Kotecha, Professor and Consultant in Cardiology at the University of Birmingham, international lead for the Beta-blockers in Heart Failure Collaborative Group, and co-lead for the cardAIc group, added: "Development of these new AI approaches is vital to improving the care we can give to our patients; in the future this could lead to personalised treatment for each individual patient, taking account of their particular health circumstances to improve their well-being."

First author Dr Andreas Karwath, Rutherford Research Fellow at the University of Birmingham and member of the cardAIc group, added: "We hope these important research findings will be used to shape healthcare policy and improve treatment and outcomes for patients with heart failure."


Originally posted here:

AI identifies heart failure patients best suited to beta-blocker treatment - Health Europa


How The United States Army Is Leveraging AI: Interview With Kristin Saling, Chief Analytics Officer & Acting Dir., Army People Analytics – Forbes

Posted: at 12:23 am

The modern warfighter needs to rely on various technologies and increasingly advanced systems to help provide advantages over capable adversaries and competitors. The US Department of Defense (DoD) understands this all too well and must therefore integrate Artificial Intelligence and Machine Learning more effectively across their operations to maintain advantages.

Kristin Saling, Chief Analytics Officer & Acting Dir., Army People Analytics

To remain competitive, the US Army has created the Army Talent Management Task Force to address the current and future needs of the warfighter. In particular, its Data and Artificial Intelligence (AI) Team shapes the creation and implementation of a holistic Officer/NCO/Civilian Talent Management System. This system has transformed the Army's efforts to acquire, develop, employ, and retain human capital through a hyper-enabled, data-rich environment and enables the Army to dominate across the spectrum of conflict as part of the Joint Force. LTC Kristin Saling is an integral part of getting the Army AI ready and shared her insights with us for this article. She will also be presenting at an upcoming AI in Government event, where she will discuss where the US Army currently stands on its data collection and AI efforts, some of the challenges they face, and a roadmap for where the DoD and Army are headed.

What are some innovative ways you're leveraging data and AI to benefit the Army Talent Management Task Force?

LTC Kristin Saling: We are leveraging AI in a number of different ways. But one of the things we're doing that most people don't think about is leveraging AI in order to leverage AI, and by that I mean we're using optical character recognition and natural language processing to read tons and tons of paper documents and process their contents into data we can use to fuel our algorithms. We're also reading in and batching tons of occupational survey information to develop robust job competency models we can use to make recommendations in our marketplace.

On the other end, we're leveraging machine learning models to predict attrition and performance for targeted retention incentives. We have partnered with the Institute for Defense Analysis to field the Retention Prediction Model Army (RPM-A), which generates an individual prediction vector for retention for every single Active Army member. We're developing the Performance Prediction Model Army (PPM-A) as a companion model to use a number of different factors, from performance to skills crosswalked with market demand, to identify the individuals the Army most wants to keep. These models, used in tandem and informed by a number of retention incentive randomized controlled trials, will provide a powerful toolkit for Army leaders to provide the most likely to succeed incentive menus to the personnel likely to attrition that the Army most wants to keep.
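As a minimal sketch of what an individual attrition-prediction model of this kind might look like (not RPM-A itself), the snippet below trains a classifier and emits per-person retention risk scores; the features and labels are invented placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Invented placeholder data: not Army data, just the shape of the problem.
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(1, 30, n),   # years of service
    rng.integers(18, 55, n),  # age
    rng.random(n),            # performance score (0-1)
])
y = rng.integers(0, 2, n)     # 1 = member left the service

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# Per-person probabilities, analogous to an "individual prediction vector"
print(model.predict_proba(X_te[:5])[:, 1])
```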

How are you leveraging automation at all to help on your journey to AI?

LTC Kristin Saling: We are looking at ways to employ Robotic Process Automation throughout the people enterprise. RPA is an unsung hero when it comes to personnel processes and talent management, especially in a distributed environment. We can automate a huge portion of task tracking, onboarding, leave scheduling, and so forth, but I'm particularly looking at it in terms of data management. We're migrating a huge portion of our personnel data from 187 different disparate systems into a smaller number of data warehouses and enterprise systems, and this is the perfect opportunity to use RPA to ensure that we have data compatibility and model-ready datasets.

How do you identify which problem area(s) to start with for your automation and cognitive technology projects?

LTC Kristin Saling: We do a lot of process mapping and data mapping before we start digging into a project. We need to understand all the different parts of the system that our changes are going to affect. And we revisit this frequently as we develop an automation solution. Sometimes the way we're developing the solution renders different parts of the system obsolete and we need to make sure we're bypassing them appropriately with the right data architecture. Sometimes there are some additional things we need to build because of where the information generated by the new automation needs to be fed. It's just important for us to remember that nothing we build truly stands alone, that it's all part of a larger system.

What are some of the unique opportunities the public sector has when it comes to data and AI?

LTC Kristin Saling: The biggest opportunities I think we have (in the Army at least) are that we have extremely unique and interesting problem sets and applications, and we also have an extremely large and innovative workforce. While we have a number of challenges, we also have a lot of really talented people joining our workforce who were drawn here by the variety of applications we have to solve and some of the unique data sets we have to work with.

What are some use cases you can share where you successfully have applied AI?

LTC Kristin Saling: Successfully applying AI is a tricky question. We've created successful AI models, but applying them becomes extremely difficult when you consider the ethics of taking actions on the information we're generating. The first I can cite is the STARRS program (Studies to Assess Readiness and Resilience in Service members). It's an AI model in development that identifies personnel at the highest risk for harmful behaviors, particularly suicide. Taking that information and applying it in an ethical way that enables commanders and experts to enact successful interventions is extremely difficult. We have a team of scientific experts working on this problem.

Can you share some of the challenges when it comes to AI and ML in the public sector?

LTC Kristin Saling: The availability of good data is a challenge. We have a lot of data, but not all of it is good data. We also have a lot of restrictions on our ability to use data, from the Privacy Act of 1974, the Paperwork Reduction Act, and all of the policies and directives derived from those. Without an appropriate System of Record Notice (SORN) that states how the data was collected and how it is to be used, we can't collect data, and that SORN significantly limits how that data can be used. The best AI models can't make better decisions on bad data than we can; they can just make bad decisions faster. We really have to get at our data problem.

How do analytics, automation, and AI work together at your agency?

LTC Kristin Saling: We see all of these things as solutions in our data and analytics toolkit to improve processes. Everything starts with good data first and foremost, and automation, when inserted in the right places in the process, helps us get to good data. We treat AI as the top end of the progression of analytics: descriptive analytics help us see ourselves, diagnostic analytics help us see what has happened over time and potentially why, predictive analytics help us see what is likely to happen, and prescriptive analytics recommend a course of action using the prediction. If you add one more step in decision autonomy, enabling the machine to make the decision instead of just recommending a course of action, you have narrow artificial intelligence. We've been most successful when we've looked at our data, our analytics, our people, our decision processes, and the environment these operate in as a total system, rather than when we've tried to develop solutions piecemeal.

How are you navigating privacy, trust, and security concerns around the use of AI?

LTC Kristin Saling: Our privacy office, human research protection program, and cyber protection programs do a lot to mitigate some concerns about the use of AI. However, there are still a lot of concerns about the ethical use of AI. To a large portion of the population, it's a black box entity or black magic at best, Skynet in the making at worst. The best way for us to combat this is education. We're sending many of our leaders to executive courses on analytics and artificial intelligence, and developing a holistic training program for the Army on data and analytics literacy. I firmly believe when our leaders better understand how artificial intelligence works and walk through appropriate use cases, they will be able to make better decisions about how to ethically employ AI, better trust how we employ it, and ensure that we are preserving privacy and data/cyber security.

What are you doing to develop an AI ready workforce?

LTC Kristin Saling: Our Army AI Integration Center (AI2C - formerly the Army AI Task Force) has established an education program called AI Scholars, where about 40 students a year, both military and civilian, will take part in graduate degree programs at Carnegie Mellon and eventually at other institutions in advanced data science and data engineering, followed by a tour at the AI2C applying their skills to developing AI solutions. Our HQDA G-8 has sponsored over 50 Army leaders through executive courses in AI at Carnegie Mellon, and ASA(ALT) has sponsored still more through executive courses at the University of Virginia. Our FA49 Operations Research and Systems Analysis career specialty and FA26 Network Science and Data Engineering career specialty have sponsored officers through graduate level AI programs. Through all of this education and its application to a host of innovative problem sets, the Army has created a significant AI workforce and is continually working to improve how we employ this workforce.

What AI technologies are you most looking forward to in the coming years?

LTC Kristin Saling: I'm a complexity scientist by background, and I'm fascinated by the applications of this field in autonomous systems, particularly swarms, and the host of things we'll be able to do with these applications. That's my futurist side speaking. My practical side is just looking forward to simple automation being widely adopted. If we can just modernize our Army leave system from its current antiquated process, I will count that as a success.

LTC Kristin Saling has a lot to say on this subject. If you'd like to engage with her directly, she will be presenting at an upcoming AI in Government event where she will discuss where the US Army currently stands on its data collection and AI efforts, some of the challenges they face, and a roadmap for where the DoD and Army are headed.

Read the rest here:

How The United States Army Is Leveraging AI: Interview With Kristin Saling, Chief Analytics Officer & Acting Dir., Army People Analytics - Forbes


AI Startup Begins Offering Artificial Intelligence Consulting Services To Help Companies Hurt By The Pandemic Recover – The Free Press Tampa

Posted: at 12:23 am

"Our Artificial Intelligence Consulting Services Can Help Businesses Break Through The Pandemic's Barriers."

AI Exosphere, an AI Startup, begins offering artificial intelligence consulting services to help companies hurt by the pandemic recover.

Sal Peer, CEO of AI Exosphere.

ORLANDO, FLORIDA, USA, August 31, 2021 /EINPresswire.com/ AI Exosphere, an AI startup, has begun offering artificial intelligence consulting services to help companies hurt by the pandemic recover. The company was formed by Sal Peer and Alex Athey, a pair of dedicated professionals with a vision to free the entrepreneur, resolve enterprise-level problems, and empower the everyday Joe.

Our team is dedicated to increasing inclusion, accessibility, and scalability through AI innovations. We are experienced in helping small business owners automate time-consuming tasks through AI and machine learning practices. Therefore, we feel confident we can help save our small businesses.

From NLP solutions to full server deployment of cloud-based applications, our team is ready to help our customers build the future.

Our team's expertise lies in Python full stack server development. Applying our expertise in supervised, unsupervised, and reinforcement machine learning, neural networks, and deep learning, we build intelligent systems that make the best decisions with little to no human help.

We build real-time speech recognition and conversational AI applications that drive user experience and increase engagement. Some of our current projects feature bleeding-edge AI operations powered by GPT-3 and GPT-J.

We also offer server development. Cloud and server development is essential for artificial intelligence operations. Our trained IT professionals can help design and deploy customizable server environments for any AI task your business could need.

"My goal with our consulting service division is to increase access to bleeding-edge tools for our small business community and find AI automation that can address the current situation," said Sal Peer, CEO of AI Exosphere.

We know for sure that artificial intelligence has many possibilities to transform your business. The use cases below are just a few examples of how our AI consulting and development services drive business efficiency and improve the bottom line.

- Automation Development
- Facial Recognition
- Image Data Labeling
- Activity Recognition

We tailor our artificial intelligence solutions to our customers' particular needs using our knowledge of industry-specific business processes and challenges. So whether it's automating back-office operations, boosting customer experience, improving security, or launching a genuinely innovative software product, our AI developers are up for the challenge.

About AI Exosphere

At AI Exosphere, our focus is on Project Hail (HailyAI), an AI voice business assistant who can take complex digital actions and act in a sales and customer support role.

Sal Peer
AI Exosphere LLC
+1 888-578-2485
Twitter | LinkedIn


Go here to read the rest:

AI Startup Begins Offering Artificial Intelligence Consulting Services To Help Companies Hurt By The Pandemic Recover - The Free Press Tampa


What is AI? Here’s everything you need to know about …

Posted: August 22, 2021 at 3:58 pm

What is artificial intelligence (AI)? It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a machine that would have previously been considered to require human intelligence.

That's obviously a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

Modern definitions of what it means to create intelligence are more specific. Francois Chollet, an AI researcher at Google and creator of the machine-learning software library Keras, has said intelligence is tied to a system's ability to adapt and improvise in a new environment, to generalise its knowledge and apply it to unfamiliar scenarios.

"Intelligence is the efficiency with which you acquire new skills at tasks you didn't previously prepare for,"he said.

"Intelligence is not skill itself; it's not what you can do; it's how well and how efficiently you can learn new things."

It's a definition under which modern AI-powered systems, such as virtual assistants, would be characterised as having demonstrated 'narrow AI', the ability to generalise their training when carrying out a limited set of tasks, such as speech recognition or computer vision.

Typically, AI systems demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem-solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

At a very high level, artificial intelligence can be split into two broad types:

Narrow AI

Narrow AI is what we see all around us in computers today -- intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, or in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do defined tasks, which is why they are called narrow AI.

General AI

General AI is very different and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets or reasoning about a wide variety of topics based on its accumulated experience.

This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today, and AI experts are fiercely divided over how soon it will become a reality.

There are a vast number of emerging applications for narrow AI.

New applications of these learning systems are emerging all the time. Graphics card designer Nvidia recently revealed Maxine, an AI-based system that allows people to make good quality video calls, almost regardless of the speed of their internet connection. The system reduces the bandwidth needed for such calls by a factor of 10 by not transmitting the full video stream over the internet, and instead animating a small number of static images of the caller in a manner designed to reproduce the caller's facial expressions and movements in real time and to be indistinguishable from the video.

However, as much untapped potential as these systems have, sometimes ambitions for the technology outstrips reality. A case in point is self-driving cars, which themselves are underpinned by AI-powered systems such as computer vision. Electric car company Tesla is lagging some way behind CEO Elon Musk's original timeline for the car's Autopilot system being upgraded to "full self-driving" from the system's more limited assisted-driving capabilities, with the Full Self-Driving option only recently rolled out to a select group of expert drivers as part of a beta testing program.

A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50% chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075. The group went even further, predicting that so-called 'superintelligence' -- which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" -- was expected some 30 years after the achievement of AGI.

However, recent assessments by AI experts are more cautious. Pioneers in the field of modern AI research such as Geoffrey Hinton, Demis Hassabis and Yann LeCun say society is nowhere near developing AGI. Given the scepticism of leading lights in the field of modern AI and the very different nature of modern narrow AI systems to AGI, there is perhaps little basis to fears that a general artificial intelligence will disrupt society in the near future.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain and believe that AGI is still centuries away.

While modern narrow AI may be limited to performing specific tasks, within their specialisms, these systems are sometimes capable of superhuman performance, in some instances even demonstrating superior creativity, a trait often held up as intrinsically human.

There have been too many breakthroughs to put together a definitive list, but some highlights stand out.

One was AlexNet's 2012 victory in the ImageNet image-recognition challenge. AlexNet's performance demonstrated the power of learning systems based on neural networks, a model for machine learning that had existed for decades but that was finally realising its potential due to refinements to architecture and leaps in parallel processing power made possible by Moore's Law. The prowess of machine-learning systems at carrying out computer vision also hit the headlines that year, with Google training a system to recognise an internet favorite: pictures of cats.

The next demonstration of the efficacy of machine-learning systems that caught the public's attention was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 possible moves per turn, compared with about 20 in chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently, Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself and then learned from them. Google DeepMind CEO Demis Hassabis has also unveiled a new version of AlphaGo Zero that has mastered the games of chess and shogi.

And AI continues to sprint past new milestones: a system trained by OpenAI has defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, followed by Facebook training agents to negotiate and lie.

2020 was the year in which an AI system seemingly gained the ability to write and talk like a human about almost any topic you could think of.

The system in question, known as Generative Pre-trained Transformer 3 or GPT-3 for short, is a neural network trained on billions of English language articles available on the open web.

From soon after it was made available for testing by the not-for-profit organisation OpenAI, the internet was abuzz with GPT-3's ability to generate articles on almost any topic that was fed to it, articles that at first glance were often hard to distinguish from those written by a human. Similarly impressive results followed in other areas, with its ability to convincingly answer questions on a broad range of topics and even pass for a novice JavaScript coder.

But while many GPT-3 generated articles had an air of verisimilitude, further testing found the sentences generated often didn't pass muster, offering up superficially plausible but confused statements, as well as sometimes outright nonsense.

There's still considerable interest in using the model's natural language understanding as the basis of future services. It is available to select developers to build into software via OpenAI's beta API. It will also be incorporated into future services available via Microsoft's Azure cloud platform.
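For developers with beta access, a call to the GPT-3 API at the time looked roughly like the sketch below, using OpenAI's Python client of that era; the engine name, prompt and parameters are illustrative.

```python
import openai  # OpenAI's Python client, circa the GPT-3 beta

openai.api_key = "YOUR_API_KEY"  # placeholder; beta access was required

# Illustrative completion request; engine and parameters are examples only.
response = openai.Completion.create(
    engine="davinci",
    prompt="Explain narrow AI in one sentence:",
    max_tokens=40,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```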

Perhaps the most striking example of AI's potential came late in 2020 when the Google attention-based neural network AlphaFold 2 demonstrated a result some have called worthy of a Nobel Prize for Chemistry.

The system's ability to look at a protein's building blocks, known as amino acids, and derive that protein's 3D structure could profoundly impact the rate at which diseases are understood, and medicines are developed. In the Critical Assessment of protein Structure Prediction contest, AlphaFold 2 determined the 3D structure of a protein with an accuracy rivaling crystallography, the gold standard for convincingly modelling proteins.

Unlike crystallography, which takes months to return results, AlphaFold 2 can model proteins in hours. With the 3D structure of proteins playing such an important role in human biology and disease, such a speed-up has been heralded as a landmark breakthrough for medical science, not to mention potential applications in other areas where enzymes are used in biotech.

Practically all of the achievements mentioned so far stemmed from machine learning, a subset of AI that accounts for the vast majority of achievements in the field in recent years. When people talk about AI today, they are generally talking about machine learning.

Currently enjoying something of a resurgence, in simple terms, machine learning is where a computer system learns how to perform a task rather than being programmed how to do so. This description of machine learning dates all the way back to 1959 when it was coined by Arthur Samuel, a pioneer of the field who developed one of the world's first self-learning systems, the Samuel Checkers-playing Program.

To learn, these systems are fed huge amounts of data, which they then use to learn how to carry out a specific task, such as understanding speech or captioning a photograph. The quality and size of this dataset are important for building a system able to carry out its designated task accurately. For example, if you were building a machine-learning system to predict house prices, the training data should include not just the property size but other salient factors, such as the number of bedrooms or the size of the garden.
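
To make the house-price example concrete, here is a minimal sketch using scikit-learn's LinearRegression; the feature values and prices are invented purely for illustration.

```python
# A minimal sketch of the house-price example above, using scikit-learn.
# All feature values and prices here are invented for illustration.
from sklearn.linear_model import LinearRegression

# Each row: [floor area in m^2, number of bedrooms, garden size in m^2]
X = [
    [70, 2, 10],
    [95, 3, 25],
    [120, 4, 40],
    [60, 1, 0],
]
y = [250_000, 320_000, 410_000, 190_000]  # sale prices

model = LinearRegression().fit(X, y)
print(model.predict([[85, 3, 15]]))  # estimate a price for an unseen house
```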

The key to machine learning success is neural networks. These mathematical models are able to tweak internal parameters to change what they output. A neural network is fed datasets that teach it what it should spit out when presented with certain data during training. In concrete terms, the network might be fed greyscale images of the numbers between zero and 9, alongside a string of binary digits -- zeroes and ones -- that indicate which number is shown in each greyscale image. The network would then be trained, adjusting its internal parameters until it classifies the number shown in each image with a high degree of accuracy. This trained neural network could then be used to classify other greyscale images of numbers between zero and 9. Such a network was used in a seminal paper showing the application of neural networks published by Yann LeCun in 1989 and has been used by the US Postal Service to recognise handwritten zip codes.

The structure and functioning of neural networks are very loosely based on the connections between neurons in the brain. Neural networks are made up of interconnected layers of algorithms that feed data into each other. They can be trained to carry out specific tasks by modifying the importance attributed to data as it passes between these layers. During the training of these neural networks, the weights attached to data as it passes between layers will continue to be varied until the output from the neural network is very close to what is desired. At that point, the network will have 'learned' how to carry out a particular task. The desired output could be anything from correctly labelling fruit in an image to predicting when an elevator might fail based on its sensor data.
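
As a rough illustration of this training process, the sketch below uses Keras to build and train a small digit classifier on the standard MNIST dataset of greyscale digits; it is a modern stand-in rather than the 1989 network described above, and the layer sizes are arbitrary.

```python
# A minimal sketch of a digit-classification network, trained on the
# MNIST dataset of greyscale images of the digits 0-9.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # greyscale image -> vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training repeatedly nudges the layer weights until the outputs match
# the labels -- the 'learning' described above.
model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))
```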

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a large number of sizeable layers that are trained using massive amounts of data. These deep neural networks have fuelled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks with different strengths and weaknesses. Recurrent Neural Networks (RNNs) are a type of neural net particularly well suited to Natural Language Processing (NLP) -- understanding the meaning of text -- and speech recognition, while convolutional neural networks have their roots in image recognition and have uses as diverse as recommender systems and NLP. The design of neural networks is also evolving, with researchers refining a more effective form of deep neural network called long short-term memory, or LSTM -- a type of RNN architecture used for tasks such as NLP and stock market prediction -- allowing it to operate fast enough to be used in on-demand systems like Google Translate.
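
As a hedged sketch of how an LSTM is applied to text, the snippet below trains a small sentiment classifier on the IMDB review dataset bundled with Keras; the hyperparameters are illustrative only.

```python
# A minimal LSTM applied to text: classifying IMDB movie reviews as
# positive or negative. Hyperparameters are arbitrary illustrations.
import tensorflow as tf

vocab_size, max_len = 10_000, 200
(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = tf.keras.preprocessing.sequence.pad_sequences(x_train, maxlen=max_len)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 32),       # word IDs -> vectors
    tf.keras.layers.LSTM(32),                        # the recurrent memory cell
    tf.keras.layers.Dense(1, activation="sigmoid"),  # positive/negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128)
```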

The structure and training of deep neural networks.

Another area of AI research is evolutionary computation.

It borrows from Darwin's theory of natural selection. It sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution. It could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.
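
The toy genetic algorithm below illustrates the mutate-recombine-select loop described above on a deliberately trivial problem, matching a target bit string; it is not neuroevolution itself, just the underlying mechanism.

```python
# A toy genetic algorithm: candidate solutions mutate and recombine
# across generations. The 'problem' -- matching a target bit string --
# is invented purely for illustration.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
POP, GENS, MUT = 20, 100, 0.1

def fitness(c): return sum(a == b for a, b in zip(c, TARGET))
def mutate(c): return [1 - g if random.random() < MUT else g for g in c]
def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]  # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(max(population, key=fitness))  # best evolved candidate
```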

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing the machine to mimic the behaviour of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.
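
A rule-based system can be sketched in a few lines; the 'autopilot' below is a toy with invented rules and thresholds, shown only to make the contrast with learned systems clear.

```python
# A toy rule-based 'expert system': hand-written rules map inputs to a
# decision, with no learning involved. Rules and thresholds are invented.
def autopilot_advice(altitude_ft, airspeed_kt, pitch_deg):
    if airspeed_kt < 120:
        return "increase thrust"  # rule: stay above stall speed
    if altitude_ft < 1000 and pitch_deg < 0:
        return "pitch up"         # rule: arrest descent near the ground
    if pitch_deg > 15:
        return "pitch down"       # rule: avoid an excessive climb
    return "hold course"

print(autopilot_advice(altitude_ft=800, airspeed_kt=140, pitch_deg=-2))
```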

As outlined above, the biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power, during which time the use of clusters of graphics processing units (GPUs) to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google, Microsoft, and Tesla, have moved to using specialised chips tailored to both running and, more recently, training machine-learning models.

An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are used to train models for DeepMind and Google Brain, as well as the models that underpin Google Translate and the image recognition in Google Photos, and services that allow the public to build machine-learning models using Google's TensorFlow Research Cloud. The third generation of these chips was unveiled at Google's I/O conference in May 2018 and has since been packaged into machine-learning powerhouses called pods that can carry out more than one hundred thousand trillion floating-point operations per second (100 petaflops). These ongoing TPU upgrades have allowed Google to improve its services built on top of machine-learning models, for instance halving the time taken to train models used in Google Translate.

As mentioned, machine learning is a subset of AI and is generally split into two main categories, supervised and unsupervised learning, with reinforcement learning as a notable third approach covered below.

Supervised learning

A common technique for teaching AI systems is by training them using many labelled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labelled to indicate whether they contain a dog or written sentences that have footnotes to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example, to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning. Labelling these examples is commonly carried out by online workers employed through platforms like Amazon Mechanical Turk.
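
A minimal supervised-learning sketch: scikit-learn's bundled iris dataset already comes with human-provided labels, so a classifier can be trained on some labelled examples and then scored on ones it has never seen.

```python
# Supervised learning in miniature: train on labelled examples,
# then label data the system has never seen.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # measurements plus human-provided labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))   # accuracy on unseen examples
```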

Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size -- Google's Open Images Dataset has about nine million images, while its labelled video repository YouTube-8M links to seven million labelled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50 000 people -- most of whom were recruited through Amazon Mechanical Turk -- who checked, sorted, and labelled almost one billion candidate pictures.

Having access to huge labelled datasets may also prove less important than access to large amounts of computing power in the long run.

In recent years, Generative Adversarial Networks (GANs) have been used in machine-learning systems that only require a small amount of labelled data alongside a large amount of unlabelled data, which, as the name suggests, requires less manual work to prepare.

This approach could allow for the increased use of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

In contrast, unsupervised learning uses a different approach, where algorithms try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for similarities in the data by which it can be grouped -- for example, Google News grouping together stories on similar topics each day.
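
A minimal unsupervised-learning sketch of the fruit example above, using scikit-learn's k-means: the algorithm is given only weights (invented here, in grams) and no labels, yet still groups similar items together.

```python
# Unsupervised learning in miniature: k-means groups items by
# similarity without labels. The weights (grams) are invented.
from sklearn.cluster import KMeans

weights = [[150], [160], [155], [1200], [1150], [30], [35], [28]]
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(weights)
print(kmeans.labels_)  # which cluster each fruit was assigned to
```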

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick. In reinforcement learning, the system attempts to maximise a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on the screen.

By also looking at the score achieved in each game, the system builds a model of which action will maximise the score in different circumstances, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
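
The toy Q-learning loop below captures this trial-and-error process on a deliberately tiny problem, a five-cell corridor with a reward at one end; all parameters are invented for illustration.

```python
# A toy tabular Q-learning loop: the agent learns, by trial and error,
# which action maximises reward in a five-cell corridor.
import random

N_STATES, GOAL = 5, 4   # states 0..4; the reward waits at state 4
ACTIONS = [-1, +1]      # move left or right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s = 0
    while s != GOAL:
        # Mostly pick the best-known action, occasionally explore
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Update the estimate of this action's value from the reward received
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # learned action values: 'right' should dominate in every state
```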

The approach is also used in robotics research, where reinforcement learning can help teach autonomous robots the optimal way to behave in real-world environments.

Many AI-related technologies are approaching, or have already reached, the "peak of inflated expectations" in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait.

With AI playing an increasingly major role in modern software and services, each major tech firm is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaFold and AlphaGo systems, that has made the biggest impact on public awareness of AI.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to GPU arrays for training and running machine-learning models, with Google also gearing up to let users use its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google offering a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise.

Cloud-based, machine-learning services are constantly evolving. Amazon now offers a host of AWS offerings designed to streamline the process of training machine-learning models and recently launched Amazon SageMaker Clarify, a tool to help organizations root out biases and imbalances in training data that could lead to skewed predictions by the trained model.

For those firms that don't want to build their own machine-learning models but instead want to consume AI-powered, on-demand services, such as voice, vision, and language recognition, Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile, IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella; it has also invested $2bn in buying The Weather Channel to unlock a trove of data to augment its AI services.

Internally, each tech giant and others such as Facebook use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam -- the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft Cortana.

A huge amount of tech goes into developing these assistants, which rely heavily on voice recognition and natural-language processing and need an immense corpus to draw upon when answering queries.

But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space -- Google Assistant with its ability to answer a wide range of queries and Amazon's Alexa with the massive number of 'Skills' that third-party devs have created to add to its capabilities.

Over time, these assistants are gaining abilities that make them more responsive and better able to handle the types of questions people ask in regular conversations. For example, Google Assistant now offers a feature called Continued Conversation, where a user can ask follow-up questions to their initial query, such as 'What's the weather like today?', followed by 'What about tomorrow?' and the system understands the follow-up question also relates to the weather.

These assistants and associated services can also handle far more than just speech, with the latest incarnation of the Google Lens able to translate text found in images and allow you to search for clothes or furniture using photos.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with Amazon's Alexa now available for free on Windows 10 PCs. At the same time, Microsoft revamped Cortana's role in the operating system to focus more on productivity tasks, such as managing the user's schedule, rather than the more consumer-focused features found in other assistants, such as playing music.

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo invest heavily in AI in fields ranging from e-commerce to autonomous driving. As a country, China is pursuing a three-step plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by the end of 2020, and to become the world's leading AI power by 2030.

Baidu has invested in developing self-driving cars, powered by its deep-learning algorithm, Baidu AutoBrain. After several years of testing, its Apollo self-driving car has racked up more than three million miles of driving and carried over 100 000 passengers in 27 cities worldwide.

Baidu launched a fleet of 40 Apollo Go Robotaxis in Beijing this year. The company's founder has predicted that self-driving vehicles will be common in China's cities within five years.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to 1 in China's favor.

Baidu's self-driving car, a modified BMW 3 series.

While you could buy a moderately powerful Nvidia GPU for your PC -- somewhere around the Nvidia GeForce RTX 2060 or faster -- and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on-demand.
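
As one hedged example of such an on-demand service, the snippet below calls Google's Cloud Vision API for label detection; it assumes a Google Cloud account with credentials already configured, and 'photo.jpg' is a placeholder filename.

```python
# One example of an on-demand cloud AI service: label detection with
# Google's Cloud Vision client library (requires GCP credentials).
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("photo.jpg", "rb") as f:          # placeholder image file
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)   # e.g. 'Dog', 0.97
```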

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, AI is helping robots move into new areas such as self-driving cars and delivery robots, and helping robots learn new skills. At the start of 2020, General Motors and Honda revealed the Cruise Origin, an electric-powered driverless car, and Waymo, the self-driving group inside Google parent Alphabet, recently opened its robotaxi service to the general public in Phoenix, Arizona, offering a service covering a 50-square-mile area of the city.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone's voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people's images, with tools already being created to splice famous faces into adult films convincingly.

Speech and language recognition

Machine-learning systems have helped computers recognise what people are saying with an accuracy of almost 95%. Microsoft's Artificial Intelligence and Research group also reported it had developed a system that transcribes spoken English as accurately as human transcribers.
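
For a taste of speech recognition from Python, the sketch below uses the open-source SpeechRecognition package, which wraps several engines including Google's free web speech API; 'speech.wav' is a placeholder audio file.

```python
# A minimal speech-to-text sketch using the SpeechRecognition package
# ('pip install SpeechRecognition'); 'speech.wav' is a placeholder file.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("speech.wav") as source:
    audio = recognizer.record(source)  # read the whole file

# Sends the audio to Google's free web speech API for transcription
print(recognizer.recognize_google(audio))
```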

With researchers pursuing a goal of 99% accuracy, expect speaking to computers to become increasingly common alongside more traditional forms of human-machine interaction.

More:

What is AI? Here's everything you need to know about ...

Posted in Ai | Comments Off on What is AI? Here's everything you need to know about …

AI Pick of the Week (8-21) - VSiN Exclusive News - News - VSiN

Posted: at 3:58 pm

We've been tinkering with the Artificial Intelligence programs at 1/ST BET, which gives us more than 50 data points (such as speed, pace, class, jockey, trainer and pedigree stats) for every race based on how you like to handicap.

Last Saturday, we lost with Domestic Spending in the Mister D. Stakes at Arlington Park, but our record is still 10-of-22 overall since taking over this feature. Based on a $2 Win bet on the A.I. Pick of the Week, that's $44 in wagers and payoffs totalling $47.70 for a respectable ROI of $2.17 for every $2 wagered.

This week, I ran Saturday's plays from me and my handicapping friends in my Tuley's Thoroughbred Takes column at vsin.com/horses through the 1/ST BET programs and came up with our A.I. Pick of the Week:

Saturday, Aug. 21

Saratoga Race No. 10 (6:13 p.m. ET/3:13 p.m. PT)

#6 Malathaat (6-5 ML odds)

Malathaat ranks 1st in 15 of the 52 factors used by 1/ST BET A.I.

This 3-year-old filly also ranks 1st in 7 of the Top 15 Factors at betmix.com, including top win percentage, best average speed in last three races and best average lifetime earnings.

She also ranks in the Top 5 in 5 of the other 8 categories, including best speed at the track and average off-track earnings. And we also get the classic combo of trainer Todd Pletcher and jockey John Velazquez.

See original here:

AI Pick of the Week (8-21) - VSiN Exclusive News - News - VSiN

Posted in Ai | Comments Off on AI Pick of the Week (8-21) - VSiN Exclusive News - News - VSiN
