
Category Archives: Ai

OMNIQ’s Q Shield AI-Based Vehicle Recognition Technology Selected in Georgia to Crack Down on Crime and Enforce Uninsured and Registration Violations…

Posted: February 6, 2021 at 8:45 am

SALT LAKE CITY, Feb. 05, 2021 (GLOBE NEWSWIRE) -- OMNIQ Corp. (OTCQB: OMQS) ("OMNIQ" or "the Company"), a provider of Supply Chain and Artificial Intelligence (AI)-based solutions, today announced that the Company has been selected by a city in the state of Georgia to deploy its Q Shield vehicle recognition systems (VRS) technology to identify any vehicle driving through the city that is uninsured or in violation of its registration requirements. Q Shield addresses a problem in Georgia that is endemic across the United States: approximately 36 million uninsured vehicles traverse the nation's roads every day, and states are losing millions of dollars to unregistered vehicles on the road.

Q Shield, OMNIQ's AI-based machine vision VRS solution, uses patented neural network algorithms that imitate the human brain for pattern recognition and decision-making. More than 17,000 OMNIQ AI-based machine vision sensors are installed worldwide, including approximately 7,000 in the U.S. The solution was selected for its superior accuracy and patented features, such as identification of vehicle make and color, driven by a sophisticated algorithm and machine learning that depends largely on data accumulated from the thousands of sensors already deployed.

OMNIQ's battle-proven AI-based machine vision systems are installed in over 30 airports in the U.S., including JFK, LaGuardia, LAX, Miami and many others, as well as in sensitive areas worldwide, for Safe City/security purposes.

"When a vehicle that does not have the required liability coverage or is in violation of vehicle registration requirements passes Q Shield's sensors deployed throughout the city, the Q Shield system triggers, in real time, a 'notice of violation' that is mailed to the vehicle's registered owner," said Sandy Mayer, VP of Sales and Marketing at OMNIQ Vision.

"For this phase of the program, Q Shield, OMNIQ's VRS solution, will be installed at several key intersections throughout the city to efficiently and accurately capture vehicle data, including license plate number, color, make, and model. Q Shield's technology will also be used to provide local law enforcement with timely alerts for any vehicle on a federal, state, or local law enforcement wanted list, in addition to enforcing the traffic violations above," said Sandy Mayer.
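The release describes this flow only at a high level; purely as an illustration of the kind of pipeline it sketches (plate read, database check, notice or alert), here is a hypothetical example. None of the functions, data sources, or plate numbers below are OMNIQ's; they are invented placeholders.

```python
from dataclasses import dataclass

@dataclass
class PlateRead:
    plate: str
    make: str
    color: str
    intersection: str

# Hypothetical lookup sets standing in for state insurance/registration
# databases and federal/state/local wanted lists (not OMNIQ data sources).
UNINSURED = {"ABC1234"}
UNREGISTERED = {"XYZ9999"}
WANTED = {"WNT0001"}

def process_read(read):
    """Return the actions a city system might trigger for one plate read."""
    actions = []
    if read.plate in UNINSURED or read.plate in UNREGISTERED:
        actions.append(f"mail notice of violation to registered owner of {read.plate}")
    if read.plate in WANTED:
        actions.append(f"alert law enforcement: {read.plate} seen at {read.intersection}")
    return actions

print(process_read(PlateRead("ABC1234", "sedan", "blue", "Main St & 5th Ave")))
```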

"We are excited to provide our machine vision VRS technology to benefit the citizens of the city and assist the city's local Police Department," said Shai Lustgarten, CEO of OMNIQ.

"Despite their usefulness in helping police solve crimes, automatic license plate and vehicle recognition (VRS) solutions are often beyond the reach of many smaller municipalities. The cost of such a needed and efficient solution, especially in today's environment, can often exceed budgetary limits. We are proud that our Q Shield product, already deployed around the world as a major player in terror prevention for governments, is now available and affordable to protect all citizens anywhere," said Shai Lustgarten.

"Municipalities are now able to join our program. Thanks to the revenues generated through Q Shield's offering and a unique pricing model introduced by OMNIQ, we are delighted to be able to overcome the hurdles that today prevent municipalities from giving their citizens the security, safety, and services they deserve," said Mr. Lustgarten.

About OMNIQ Corp.

OMNIQ Corp. (OTCQB: OMQS) provides computerized and machine vision image processing solutions that use patented and proprietary AI technology to deliver data collection, real-time surveillance and monitoring for supply chain management, homeland security, public safety, traffic & parking management, and access control applications. The technology and services provided by the Company help clients move people, assets, and data safely and securely through airports, warehouses, schools, national borders, and many other applications and environments.

OMNIQ's customers include government agencies and leading Fortune 500 companies from several sectors, including manufacturing, retail, distribution, food and beverage, transportation and logistics, healthcare, and oil, gas, and chemicals. Since 2014, annual revenues have grown to more than $50 million from clients in the USA and abroad.

The Company currently addresses several billion-dollar markets, including the Global Safe City market, forecast to grow to $29 billion by 2022, and the Ticketless Safe Parking market, forecast to grow to $5.2 billion by 2023. For more information, visit http://www.omniq.com.

Information about Forward-Looking Statements

Safe Harbor Statement under the Private Securities Litigation Reform Act of 1995. Statements in this press release relating to plans, strategies, economic performance and trends, projections of results of specific activities or investments, and other statements that are not descriptions of historical facts may be forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995, Section 27A of the Securities Act of 1933 and Section 21E of the Securities Exchange Act of 1934.

This release contains forward-looking statements that include information relating to future events and future financial and operating performance. The words "anticipate," "may," "would," "will," "expect," "estimate," "can," "believe," "potential" and similar expressions and variations thereof are intended to identify forward-looking statements. Forward-looking statements should not be read as a guarantee of future performance or results, and will not necessarily be accurate indications of the times at, or by, which that performance or those results will be achieved. Forward-looking statements are based on information available at the time they are made and/or management's good faith belief as of that time with respect to future events, and are subject to risks and uncertainties that could cause actual performance or results to differ materially from those expressed in or suggested by the forward-looking statements. Important factors that could cause these differences include, but are not limited to: fluctuations in demand for the Company's products particularly during the current health crisis, the introduction of new products, the Company's ability to maintain customer and strategic business relationships, the impact of competitive products and pricing, growth in targeted markets, the adequacy of the Company's liquidity and financial strength to support its growth, the Company's ability to manage credit and debt structures from vendors, debt holders and secured lenders, the Company's ability to successfully integrate its acquisitions, and other information that may be detailed from time to time in OMNIQ Corp.'s filings with the United States Securities and Exchange Commission. Examples of such forward-looking statements in this release include, among others, statements regarding revenue growth, driving sales, operational and financial initiatives, cost reduction and profitability, and simplification of operations. For a more detailed description of the risk factors and uncertainties affecting OMNIQ Corp., please refer to the Company's recent Securities and Exchange Commission filings, which are available at https://www.sec.gov. OMNIQ Corp. undertakes no obligation to publicly update or revise any forward-looking statements, whether as a result of new information, future events or otherwise, unless otherwise required by law.

Investor Contact: 888-309-9994, IR@omniq.com

James Carbonara, Hayden IR, (646) 755-7412, james@haydenir.com

Brett Maas, Hayden IR, (646) 536-7331, brett@haydenir.com

Read the original here:

OMNIQ's Q Shield AI-Based Vehicle Recognition Technology Selected in Georgia to Crack Down on Crime and Enforce Uninsured and Registration Violations...


Artificial Intelligence Is Great For Day-To-Day Stuff, But It Cant Build A Business – Forbes

Posted: December 29, 2020 at 12:22 am

AI can't build a business the way humans can.

Let's face it: artificial intelligence is probably one of the best management tools to come along since the first commercial computers rolled out decades ago. It can show and predict when customers are ready to buy, it can tighten up supply chains, and it can help prioritize the work of teams. The management potential of AI algorithms is unlimited. Still, when it comes to actual leadership, AI falls flat on its face.

"AI will be able to do almost any managerial task in the future. That is because of the way we define management as being focused on the idea of creating stability, order, consistency, and predictability by means of using metrics, e.g., KPIs," says David De Cremer, founder and director of the Centre on AI Technology for Humankind at the National University of Singapore Business School and author of Leadership by Algorithm: Who Leads and Who Follows in the AI Era?

Still, AI is beset by bias and overhyped promises. Most organizations reported some failures among their AI projects, with a quarter of them reporting up to a 50% failure rate, a survey by IDC showed. Lack of skilled staff and unrealistic expectations were identified as the top reasons for failure.

Still, there are zones where AI has the potential to excel in a big way, such as at repetitive tasks, where it could replace many managerial functions. That's as far as AI's potential reaches: inspired, visionary business leadership is still very much a human skill, De Cremer points out in a recent interview published at Knowledge@Wharton. AI algorithms can never replace the qualities of human leadership. The perception that AI may be in charge is understandable: we are moving into a society where people are being told by algorithms what their taste is, and, without questioning it too much, most people comply easily. Given these circumstances, it does not seem to be a wild fantasy anymore that AI may be able to take a leadership position.

But the idea that AI will take the lead is actually a fantasy, he points out. AI will never have a soul and it cannot replace human leadership qualities that let people be creative and have different perspectives, he says. Leadership is required to guide the development and applications of AI in ways that best serve the needs of humans.

For starters, because AI is so far-reaching, the actual technology is only part of the equation. "The integration between social sciences, humanity, and artificial intelligence was not getting as much attention as it should," says De Cremer. AI is particularly good at repetitive, routine tasks and thinking systematically and consistently. This already implies that the tasks and the jobs that are most likely to be taken over by AI are the hard skills, and not so much the soft skills.

These soft skills include the ability of business leaders to understand how, where, and why to use algorithms and automation for more efficient decision-making, he says. Many business leaders have problems making business cases for why they should use AI. They are struggling to make sense of what AI can bring to their companies.

This means figuring out how AI will serve in a co-working role in teams. "This consists of deciding where in the loop of the business process you automate, where it is possible to take humans out of the loop, and where you definitely keep humans in the loop," says De Cremer. Leaders need to design a work culture where people do not feel that they are being supervised by a machine or treated like robots. Leaders build cultures, and in doing this they communicate and represent the values and norms the company uses to decide how work needs to be done to create business value.

AI will never be able to figure out how to build a productive and forward-looking work culture.

(Disclosure: In my role as an independent consultant, I have performed project work over the past year for IDC, mentioned in this post.)

Read more:

Artificial Intelligence Is Great For Day-To-Day Stuff, But It Cant Build A Business - Forbes


5 questions on the future of the Pentagon’s top AI office – NavyTimes.com

Posted: at 12:22 am

WASHINGTON - This year, the Pentagon's top artificial intelligence office kicked off its first joint war fighting initiative, realigned the organization to meet new department needs and got a new director.

If the fiscal 2021 defense policy bill, known as the National Defense Authorization Act, is signed into law, the department's Joint Artificial Intelligence Center will gain acquisition authority and a board of advisers, and will report to the deputy secretary of defense, elevating the JAIC in the Pentagon's hierarchy. In the upcoming year, the JAIC will continue to reach out to components across the department to accelerate the adoption of artificial intelligence across the enterprise.

In a recent Removing Stovepipes webinar with C4ISRNET, Greg Allen, the JAIC chief of strategy and communications, discussed the future of the AI hub and the ongoing progress on joint war fighting activities.

This transcript has been edited for clarity and brevity.

C4ISRNET: This year the office pivoted from JAIC 1.0 to JAIC 2.0. Can you talk about that change and what it means for the department?

Allen: I think this is fundamentally good news because that transformation from JAIC 1.0 to JAIC 2.0 is centered around the progress that the department has made. The JAIC was established to be the focal point of the DoD AI strategy and the engine of the DoD's AI transformation. Well, a lot of progress has been made in the past two years. And so the types of things that the Department of Defense needs from the Joint AI Center are also changing. When we got started in 2018, there was only a handful of projects that were trying to use AI in sort of any kind of operational sense (Project Maven being the most well-known example), and the rest of the AI efforts across the DoD were primarily at the basic research stage without a clear path into operational use.

The great news is that now there are a lot of exciting projects across the department, in each of the armed services and in U.S. [Special Operations Command]. The starting point for a lot of organizations has already been reached, and so now they're at a greater degree of maturity. Now, of course, there are plenty, really an endless number, of organizations throughout the Department of Defense who are just getting started or have not even gotten started. But there are now examples of programs being run well in each of the services, and that's great news.

C4ISRNET: If this years National Defense Authorization Act is signed into law, the JAIC will have acquisition authority. What does that mean for the JAIC?


Allen: First and foremost, this helps us move faster. If you have your own acquisition authority, then you are probably your own top priority. Of course, the contracting offices that we leverage, some of them are terrific partners, but they also have a larger customer base than just the Joint AI Center. And so it's not always possible to be at the top of the priority list, for, you know, the kinds of challenges you can imagine them having.

The second thing it helps us do relates to one of the functions the JAIC is hoping to perform on behalf of the department: getting contract vehicles out there that are relevant to AI efforts across the Department of Defense. A great example of this is a contract vehicle that we're hoping to put together related to testing and evaluation. Some parts of testing and evaluation are unique and specific to every type of AI project. But many aspects of them are common across different types of efforts, whether that's a computer vision effort, a natural language processing effort, or some other type of AI-enabled capability.

The nice thing is that you can get these testing and evaluation functions specified in contract performance work statements. Then, [because of] what the JAIC has learned by executing its projects over the past two years, we can actually codify that in contract vehicles that reflect our contracting best practices: preserving government ownership rights where appropriate, and making sure that the contractor is incentivized by certain types of financial incentives and not by others. We can take what we've learned in terms of best practices for AI contracting and plug that into contract vehicles that other organizations throughout the DoD can benefit from, so that they don't have to start from scratch when they're doing their own contracting efforts. They can benefit from what we've already learned.

C4ISRNET: Under the defense policy bill, the JAIC will report directly to the deputy secretary of defense and receive a board of advisers. It seems like the JAIC is starting to get the resources it needs to be successful. Is 2021 viewed as a big year internally?

Allen: Yeah, I would say every year has felt like, OK, it couldn't possibly be the case that next year would be a bigger deal than what we went through the previous year. Because we've grown from, as I said, in 2018 really just an idea and a handful of people to a really large organization with an enormous mandate and an enormous breadth of activities underway. But the three points that you made are rooted in the current National Defense Authorization Act ... if that becomes law, that is obviously an enormous increase, perhaps not in responsibility, because our responsibilities have basically remained the same, but an enormous increase in the suite of tools and authorities that we might have in order to help transform the department.

That would obviously, I think, prove myself wrong yet again. And yes ... 2021 probably would be the most instrumental year in the organization's history.

C4ISRNET: One of the most exciting JAIC efforts that started in 2020 was the joint war fighting initiative. What progress was made this year and where will that go moving forward?

Allen: The great news is that in May 2020, we awarded the prime contract vehicle for our joint war fighting initiative. And that effort, which has a ceiling value of $800 million, allows us a great deal of flexibility in executing the portfolio of projects under the joint war fighting initiative. ... There are other efforts going on, beyond just automating and making ISR more autonomous. There are other efforts going on around Joint All-Domain Command and Control. The JAIC is actually heavily involved in the AI-enabled workstreams related to the JADC2 effort that you've all heard so much about. We have a great suite of partners across the Department of Defense in making that happen. And I would say that the joint war fighting mission initiative is really underway in development and has some exciting capabilities already in the testing stages.

C4ISRNET: With the pivot to JAIC 2.0 and the new authorities you're going to have in the upcoming year, how should we gauge the success of the JAIC now? What are some of the metrics we should be looking for?

Allen: Well, I think the metrics for individual programs might look a little bit different from the metrics for the organization as a whole. I'll start by talking about some of the metrics for the programs. In our predictive maintenance effort, one of the things that we try to look at is aircraft uptime. Because if aircraft are saddled with spending half of their operational lifetime in maintenance depots, well, that's quite unfortunate. If you can reduce that time, it's the sort of functional equivalent of buying new aircraft. So in that case, a key metric for that program might be: Does this actually drive additional uptime for the aircraft?

In other parts of our programs, which are dedicated to the financial performance of the Department of Defense and the financial management of the DoD, it might really be sort of dollars saved, or time and productivity gains by the employee.

But for the JAIC as a whole, our goal is to drive the transformation across the entire Department of Defense, which is, of course, an extraordinary mandate. And so what we're trying to understand is how many policy barriers did we lower for the DoD that made it possible for them to accelerate their AI adoption faster. Or, in the case of the Joint Common Foundation, which is designed not to lower policy barriers but to lower technical barriers, it might be something like user adoption: How many folks are actually using the JCF, and how many programs have their primary development activities occurring on the JCF?
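To make Allen's earlier point about the aircraft-uptime metric concrete, here is a back-of-the-envelope illustration with invented numbers; they are not from the interview or the JAIC.

```python
# Hypothetical numbers: a fleet whose average daily availability improves
# from 50% to 60% thanks to better maintenance scheduling.
fleet_size = 100
uptime_before = 0.50
uptime_after = 0.60

# The gain is the "functional equivalent of buying new aircraft":
extra_available = fleet_size * (uptime_after - uptime_before)
print(f"Effectively {extra_available:.0f} more aircraft available on an average day")
```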

View original post here:

5 questions on the future of the Pentagon's top AI office - NavyTimes.com


Neural's AI predictions for 2021 – The Next Web

Posted: at 12:22 am

It's that time of year again! We're continuing our long-running tradition of publishing a list of predictions from AI experts who know what's happening on the ground, in the research labs, and at the boardroom tables.

Without further ado, let's dive in and see what the pros think will happen in the wake of 2020.

Dr. Arash Rahnama, Head of Applied AI Research at Modzy:

Just as advances in AI systems are racing forward, so too are opportunities and abilities for adversaries to trick AI models into making wrong predictions. Deep neural networks are vulnerable to subtle adversarial perturbations applied to their inputs (adversarial AI) that are imperceptible to the human eye. These attacks pose a great risk to the successful deployment of AI models in mission-critical environments. At the rate we're going, there will be a major AI security incident in 2021 unless organizations begin to adopt proactive adversarial defenses into their AI security posture.
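As an illustration of the kind of "subtle adversarial perturbation" Rahnama describes, here is a minimal sketch of the well-known fast gradient sign method (FGSM) in PyTorch. The model and data are placeholders, and this is not Modzy's technique; it simply shows how a tiny, targeted change to an input can flip a model's prediction.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Fast gradient sign method: nudge each input by +/- epsilon in the
    direction that increases the model's loss on the true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # small, often imperceptible shift
    return x_adv.clamp(0.0, 1.0).detach()

# Usage (placeholders): `model` is any torch image classifier, `x` a batch of
# images scaled to [0, 1], `y` the integer class labels.
# x_adv = fgsm_perturb(model, x, y)
# print((model(x).argmax(1) != model(x_adv).argmax(1)).float().mean())
```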

2021 will be the year of explainability. As organizations integrate AI, explainability will become a major part of ML pipelines to establish trust for users. Understanding how machine learning reasons against real-world data helps build trust between people and models. Without understanding outputs and decision processes, there will never be true confidence in AI-enabled decision-making. Explainability will be critical in moving forward into the next phase of AI adoption.

The combination of explainability and new training approaches initially designed to deal with adversarial attacks will lead to a revolution in the field. Explainability can help us understand what data influenced a model's prediction and how to understand bias, information which can then be used to train robust models that are more trusted, reliable, and hardened against attacks. This tactical knowledge of how a model operates will help create better model quality and security as a whole. AI scientists will redefine model performance to encompass not only prediction accuracy but also issues such as lack of bias, robustness, and strong generalizability to unpredicted environmental changes.
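One simple, model-agnostic way to ask "what data influenced a model's prediction" is permutation importance. The sketch below uses scikit-learn with a stand-in dataset and model; it illustrates the general idea only and is not Modzy's tooling.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; any fitted estimator works the same way.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# features whose shuffling hurts most are the ones the model leans on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```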

Dr. Kim Duffy, Life Science Product Manager at Vicon.

Forming predictions for artificial intelligence (AI) and machine learning (ML) is particularly difficult to do while only looking one year into the future. For example, in clinical gait analysis, which looks at a patient's lower limb movement to identify underlying problems that result in difficulties walking and running, methodologies like AI and ML are very much in their infancy. This is something Vicon highlights in our recent life sciences report, "A deeper understanding of human movement." To utilize these methodologies and see true benefits and advancements for clinical gait will take several years. Effective AI and ML require a massive amount of data to effectively train for trend and pattern identification using the appropriate algorithms.

For 2021, however, we may see more clinicians, biomechanists, and researchers adopting these approaches during data analysis. Over the last few years, we have seen more literature presenting AI and ML work in gait. I believe this will continue into 2021, with more collaborations occurring between clinical and research groups to develop machine learning algorithms that facilitate automatic interpretations of gait data. Ultimately, these algorithms may help propose interventions in the clinical space sooner.

It is unlikely we will see the true benefits and effects of machine learning in 2021. Instead, we'll see more adoption and consideration of this approach when processing gait data. For example, the presidents of Gait and Posture's affiliate society provided a perspective on the clinical impact of instrumented motion analysis in their latest issue, where they emphasized the need to use methods like ML on big data in order to create better evidence of the efficiency of instrumented gait analysis. This would also provide better understanding and less subjectivity in clinical decision-making based on instrumented gait analysis. We're also seeing more credible endorsements of AI/ML, such as from the Gait and Clinical Movement Analysis Society, which will also encourage further adoption by the clinical community moving forward.

Joe Petro, CTO of Nuance Communications:

In 2021, we will continue to see AI come down from the hype cycle, and the promise, claims, and aspirations of AI solutions will increasingly need to be backed up by demonstrable progress and measurable outcomes. As a result, we will see organizations shift to focus more on specific problem solving and creating solutions that deliver real outcomes that translate into tangible ROI, not gimmicks or building technology for technology's sake. Those companies that have a deep understanding of the complexities and challenges their customers are looking to solve will maintain the advantage in the field, and this will affect not only how technology companies invest their R&D dollars, but also how technologists approach their career paths and educational pursuits.

With AI permeating nearly every aspect of technology, there will be an increased focus on ethics and deeply understanding the implications of AI in producing unintentional consequential bias. Consumers will become more aware of their digital footprint, and how their personal data is being leveraged across systems, industries, and the brands they interact with, which means companies partnering with AI vendors will increase the rigor and scrutiny around how their customers' data is being used, and whether or not it is being monetized by third parties.

Dr. Max Versace, CEO and Co-Founder, Neurala:

We'll see AI deployed in the form of inexpensive and lightweight hardware. It's no secret that 2020 was a tumultuous year, and the economic outlook is such that capital-intensive, complex solutions will be sidestepped for lighter-weight, perhaps software-only, less expensive solutions. This will allow manufacturers to realize ROI in the short term without massive up-front investments. It will also give them the flexibility needed to respond to fluctuations in the supply chain and customer demands, something that we've seen play out on a larger scale throughout the pandemic.

Humans will turn their attention to why AI makes the decisions it makes. When we think about the explainability of AI, it has often been talked about in the context of bias and other ethical challenges. But as AI comes of age, gets more precise and reliable, and finds more applications in real-world scenarios, we'll see people start to question the "why?" The reason? Trust: humans are reluctant to give power to automatic systems they do not fully understand. For instance, in manufacturing settings, AI will need to not only be accurate, but also explain why a product was classified as normal or defective, so that human operators can develop confidence and trust in the system and let it do its job.

Another year, another set of predictions. You can see how our experts did last year by clicking here. You can see how our experts did this year by building a time machine and traveling to the future. Happy Holidays!

Published December 28, 2020 07:00 UTC

View original post here:

Neurals AI predictions for 2021 - The Next Web


Chatroulette Is On the Rise AgainWith Help From AI – WIRED

Posted: at 12:22 am

A decade ago, Chatroulette was an internet supernova, exploding in popularity before collapsing beneath a torrent of male nudity that repelled users. Now, the app, which randomly pairs strangers for video chats, is getting a second chance, thanks in part to a pandemic that has restricted in-person social contact, but also thanks to advances in artificial intelligence that help filter the most objectionable images.

User traffic has nearly tripled since the start of the year, to 4 million monthly unique visitors, the most since early 2016, according to Google Analytics. Founder and chairman Andrey Ternovskiy says the platform offers a refreshing antidote of diversity and serendipity to familiar social echo chambers. On Chatroulette, strangers meet anonymously and don't have to give away their data or wade through ads.

One sign of how thoroughly Chatroulette has cleaned up its act: an embryonic corporate conference business. Bits & Pretzels, a German conference about startups, hosted a three-day event on Chatroulette in September, including a Founders Roulette session that matched participants. "Without nudes though, but full of surprising conversations," the conference heralded. Another change: Women now are 34 percent of users, up from 11 percent two years ago.

The AI that's helped keep visitors free of unwanted nudity or masturbation has been a good investment, says Ternovskiy. It may also offer lessons for much larger social networks struggling to moderate content that can veer into falsehoods or toxicity. But Ternovskiy still dreams of a platform that creates happy human connections, and cautions that technology can't deliver that alone. "I doubt the machine will be ever able to predict: Is this content desirable for my user base?" he says.

A 17-year-old Ternovskiy coded and created Chatroulette in November 2009 from his Moscow bedroom as a way to kill boredom. Three months later, the site attracted 1.2 million daily visitors. Then came the exodus. Ternovskiy dabbled in some ill-fated partnerships with Sean Parker and others to try to keep Chatroulette relevant. In 2014, he launched a premium offering that paired users based on desired demographics, which generated some revenue. He invested some of that money in cryptocurrency ventures that brought additional gains. Chatroulette today is based in Zug, Switzerland, a crypto hub.

In 2019, Ternovskiy decided to give Chatroulette one more spin, as a more respectable business, led by a professional team, with less adult chaos. The company was incorporated in Switzerland. Ternovskiy hired Andrew Done, an Australian with expertise in machine learning, as CTO. Earlier this year, Done became CEO. He was joined by a senior product researcher with a PhD in psychology, a community manager, a talent acquisition manager, and more engineers. Then Covid-19 hit, and traffic boomed.

The new team tapped the surge in traffic to conduct user research and test ways to moderate content, including AI tools from Amazon and Microsoft. It created a filtered channel, now known as Random Chat, designed to exclude nudity, alongside an Unmoderated channel. By demarcating the two channels, Chatroulette hoped to make the filtered feed feel safer and attract users interested in human connection. The unfiltered channel remains popular, but usage is shrinking, and Ternovskiy plans to eliminate it by the middle of 2021.

In June, Chatroulette brought in San Francisco-based Hive, an AI specialist, for a test on detecting nudity. Hive's software also moderates content on Reddit. Executives were quickly impressed with Hive's accuracy, especially in not flagging innocent users and actions. At the same time, Chatroulette tested moderation tools from Amazon Rekognition and Microsoft Azure; it had previously tried Google Cloud's Vision AI.
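For a sense of what calling one of these off-the-shelf moderation services looks like, here is a minimal sketch against Amazon Rekognition's image moderation API via boto3. The frame-capture step and file name are placeholders, and this is not Chatroulette's implementation.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def flag_frame(jpeg_bytes, min_confidence=80.0):
    """Return the moderation labels Rekognition detects in a single frame."""
    response = rekognition.detect_moderation_labels(
        Image={"Bytes": jpeg_bytes},
        MinConfidence=min_confidence,
    )
    return [label["Name"] for label in response["ModerationLabels"]]

# Usage (placeholder path): check one captured frame from a video chat.
# with open("frame.jpg", "rb") as f:
#     labels = flag_frame(f.read())
# if labels:
#     print("Flagged:", labels)
```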

"Hive is at a level of accuracy that makes it practical to use this technology at scale, which was not previously possible," Done says. He says Hive is so accurate that using humans in the moderation loop hurts the system's performance. That is, humans introduce more errors than they remove.

See more here:

Chatroulette Is On the Rise AgainWith Help From AI - WIRED


DeepMind's big losses, and the questions around running an AI lab – VentureBeat

Posted: at 12:22 am

Last week, on the heels of DeepMind's breakthrough in using AI to predict protein folding, came news that the U.K.-based AI company is still costing its parent company Alphabet hundreds of millions of dollars in losses each year.

A tech company losing money is nothing new. The tech industry is replete with examples of companies that burned through investor money long before becoming profitable. But DeepMind is not a normal company seeking to grab a share of a specific market. It is an AI research lab that has had to repurpose itself into a semi-commercial outfit to ensure its survival.

And while its owner, which is also Google's parent company, is currently happy footing the bill for DeepMind's expensive AI research, there is no guarantee that it will continue to do so forever.

According to its annual report filed with the U.K.'s Companies House register, DeepMind has more than doubled its revenue, raking in £266 million in 2019, up from £103 million in 2018. But the company's expenses continue to grow as well, increasing from £568 million in 2018 to £717 million in 2019. The company's overall losses grew from £470 million in 2018 to £477 million in 2019.

Above: DeepMind's AlphaFold project used AI to help advance the complicated challenge of protein folding

At first glance, this isn't bad news. Compared to previous years, DeepMind's revenue growth is accelerating while its losses are plateauing.

But the report contains a few more significant facts. The document mentions "Turnover research and development remuneration from other group undertakings." This means DeepMind's main customer is its owner. Alphabet is paying DeepMind to apply its AI research and talent to Google's services and infrastructure. In the past, Google has used DeepMind's services for tasks such as managing the power grids of its data centers and improving its voice assistant's AI.

Above: DeepMind's revenue and losses from 2016 to 2019

What this also means is that there isn't yet a market for DeepMind's AI, and if there is, it will only be available through Google.

The document also mentions that the growth of costs mainly relates to a rise in technical infrastructure, staff costs, and other related charges.

This is an important point. DeepMind's technical infrastructure runs mainly on Google's huge cloud services and its special AI processors, the Tensor Processing Unit (TPU). DeepMind's main area of research is deep reinforcement learning, which requires access to very expensive compute resources. The company's projects in 2019 included work on an AI system that played StarCraft 2 and another that played Quake 3, both of which cost millions of dollars in training.

A spokesperson for DeepMind told the media that the costs mentioned in the document also included work on AlphaFold, the company's celebrated protein-folding AI, another very expensive project.

There are no public details to indicate how much Google charges DeepMind for access to its cloud AI services, but Google is most likely renting its TPUs at a discount. This means that without Google's support and backing, the company's expenses would have been much higher.

Staff cost is another important issue. While participation in machine learning courses has increased in the past few years, scientists who can engage in the kind of cutting-edge AI research DeepMind is involved in are very scarce. And by some accounts, top AI talent commands seven-digit salaries.

The growing interest in deep learning and its applicability to commercial settings has created an arms race between tech companies to acquire top AI talent. Most of the industry's top AI scientists and pioneers are working either full- or half-time at large companies like Google, Facebook, Amazon, and Microsoft. The fierce competition for top AI talent has had two consequences. First, as in every other field where supply doesn't meet demand, it has resulted in a steep incline in the salaries of AI scientists. Second, it has driven many AI scientists from academic institutions that can't afford stellar salaries to wealthy tech companies that can. Some scientists continue to stay in academia for the sake of continuing scientific research, but they are too few and far between.

And without the backing of a large tech company like Google, research labs like DeepMind can't afford to hire new researchers for their projects.

So while DeepMind shows signs of slowly turning around its losses, its growth has made it even more dependent on Google's financial resources and large cloud infrastructure.

Above: DeepMind developed an AI system called AlphaStar that can beat the best players at the real-time strategy game StarCraft 2

According to DeepMind's annual report, Google Ireland Holdings Unlimited, one of the investment branches of Alphabet, waived the repayment of intercompany loans and all accrued interest, amounting to £1.1 billion.

DeepMind has also received written assurances from Google that it will continue to provide adequate financial support to the AI firm for a period of at least 12 months.

For the time being, Google seems to be satisfied with the progress DeepMind has made, which is also reflected in remarks made by Google and Alphabet executives.

In July's quarterly earnings call with investors and analysts, Alphabet CEO Sundar Pichai said, "I'm very happy with the pace at which our R&D on AI is progressing. And for me, it's important that we are state-of-the-art as a company and we are leading. And to me, I'm excited at the pace at which our engineering and R&D teams are working both across Google and DeepMind."

But the corporate world and scientific research move at different paces.

Scientific research is measured in decades. Much of the AI technology used today in commercial applications has been in the making since the 1970s and 1980s. Likewise, a lot of the cutting-edge research and techniques presented at AI conferences today will probably not find their way into the mass market in the coming years. DeepMind's ultimate goal, developing artificial general intelligence (AGI), is by the most optimistic estimates at least decades away.

On the other hand, the patience of shareholders and investors is measured in months and years. Companies that can't turn a profit in years, or at least show hopeful signs of growth, fall afoul of investors. DeepMind currently has neither. It doesn't have measurable growth, because its only client is Google itself. And it's not clear when, if ever, any of its technology will be ready for commercialization.

Above: Google CEO Sundar Pichai is satisfied with the pace of AI research and development at DeepMind

And here's where DeepMind's dilemma lies. At heart, it is a research lab that wants to push the limits of science and make sure advances in AI are beneficial to all humans. Its owner's goal, however, is to build products that solve specific problems and turn profits. The two goals are diametrically opposed, pulling DeepMind in different directions: maintaining its scientific nature or transforming into a product-making AI company. The company has already had trouble finding a balance between scientific research and product development in the past.

And DeepMind is not alone. OpenAI, DeepMind's implicit rival, has been facing a similar identity crisis, transforming from an AI research lab to a Microsoft-backed for-profit company that rents its deep learning models.

Therefore, while DeepMind doesn't need to worry about its unprofitable research yet, as it becomes more enmeshed in the corporate dynamics of its owner, it should think deeply about its future and the future of scientific AI research.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics. This post was originally published here.

Read more:

DeepMinds big losses, and the questions around running an AI lab - VentureBeat


AI Weekly: The trends that shaped 2020 – VentureBeat

Posted: at 12:22 am

A few days ago, I published a story about books that I read throughout the year to improve and inform my job covering artificial intelligence and adjacent industries. In all, the multi-part review contains nine books published in 2020 that explore subjects like business strategy, policy, and geopolitics, as well as the human rights consequences associated with AI deployments.

Too Smart, for example, looks at the smart city and smart home and their role in technopolitics and power. Monopolies Suck examines how big businesses fleece the average person. And in a year filled with calls to dismantle the social hierarchy of white supremacy, the exploration of Afrofuturism and Black joy detailed in Black Futures and Distributed Blackness were very welcome to me.

That process, reviewing books I read throughout the year, put me in a reflective mood, and so in this final AI Weekly of 2020, we take a look back at the kinds of stories VentureBeat saw recur throughout 2020. Given so much news in a year full of unprecedented history, it seems like a good idea.

2020 kicked off with AI startups bringing in more funding than any previous year, according to CB Insights. Companies built on data, like Palantir and Snowflake, went public, while a collection of AI startup acquisitions helped Big Tech businesses concentrate their power.

The year began with COVID-19 spreading around the world and ended with algorithms deciding who gets the vaccine after a year of watching Black and brown people die at disproportionate rates. Questions continue to be asked about who gets the vaccine and when.

At the beginning of 2020, the world learned the story of Clearview AI, a company that scraped billions of photos from the internet to make its facial recognition software and has extensive ties to far-right and white supremacist groups. Despite public outrage, being forced out of Canada, and an alleged biometric law violation, at the end of the year, news emerged that Clearview AI landed a Department of Defense contract.

In another harrowing case, on December 28, reports emerged of a Black man in New Jersey who was incorrectly identified using Clearview AI facial recognition and arrested. According to NJ.com, he spent a year fighting charges that could have carried a penalty of up to 20 years of prison time. This incident comes to light less than six months after the first reported case of false arrest due to use of facial recognition was reported in Detroit; Robert Williams, the innocent man in that incident, was also Black.

The year ends with additional policy reverberations for facial recognition. Boston and Portland passed citywide bans, and New York signed a bill into law placing a moratorium on facial recognition use in schools; meanwhile, a statewide ban stalled in Massachusetts. One of my favorite anecdotes from books I read this year was from Too Smart, which said that when you consider what smart cities look like, don't think of futuristic metropolises sold in PR campaigns; think of New Orleans and the predictive policing that perpetuates historic bias.

Palantir and many other companies sold policing tools to New Orleans over the years, but 2020 ends with the New Orleans City Council passing a ban on predictive policing and facial recognition. The Gulf Coast news outlet The Lens reports that the legislation is watered down from its original version, but considering that two years ago the city council didn't know police were using Palantir, it's a story worth remembering.

"It's here," New Orleans councilmember Jason Williams told The Lens. "The technology is here before there's laws guiding the technology. And I think it's a very dangerous position for communities to be in."

In early 2020, Ruha Benjamin warned the AI community it needs to take steps to include historical and social context or risk becoming party to gross human rights violations like IBM, which provided technology used to document the Holocaust. Earlier this month, news reports emerged that Alibaba and Huawei are developing facial recognition for tracking and detaining members of Muslim minority groups in China. With more than one million people detained today, it's a phenomenon that often draws comparisons with Nazi concentration camps and the Jewish genocide of World War II. IBM agreed to stop selling facial recognition software in June.

There were also two reports in 2020 named The State of AI. One, from Air Street Capital, found evidence of brain drain from academia to industry. Another, from McKinsey, found businesses were increasing their use of AI, but that few business managers who took part in a survey are meeting 10 major measurements of risk mitigation, a trend that carries consequences beyond those typically placed on marginalized communities. It can also leave businesses vulnerable, a situation that appeared to undergo little change this year compared to the same survey administered a year earlier.

This is of course an incomplete collection of trends. I've got no grand statement or takeaway to offer here that ties all these together, but given these trends and that we are currently living in the deadliest month in the deadliest year in American history, it only seems right that people end the year by sticking up for humanity. In the ML community, that means confronting issues of inequality and the potential to automate oppression within its own ranks, and not allowing events of bias or human rights violations to become normalized.

The need to continue to emphasize this is underscored by a few important recent events. Amid the fallout from Google firing Timnit Gebru and other events that seriously call into question the objectivity of AI research produced with corporate funding, an AI research survey completed in part by Google AI researchers called for a major culture change. Following high-profile instances of AI bias revealed in computer vision and natural language models, coauthors of the recent survey say the machine learning community needs to shift away from using massive, poorly curated datasets and toward treating data as more than just numbers, with respect for human privacy and property rights.

This wasn't a year anyone saw coming, full of challenges new and old. Happy New Year to everyone reading this. Let's ensure that as we confront the challenges of 2021, we keep our humanity intact and fight to defend the humanity of others so that we can together ensure AI is a technology that serves everyone, not just a handful of engineers and Big Tech companies.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

Read the rest here:

AI Weekly: The trends that shaped 2020 - VentureBeat


Artificial Intelligence (AI): Is It All Just Costly Hype? – Dice Insights

Posted: at 12:22 am

Earlier this year, two partners at prominent venture-capital firm Andreessen Horowitz published an interesting blog post about artificial intelligence (A.I.). Specifically, is A.I. (and by extension, machine learning) capable of powering a sustainable business? Or is the tech industry infatuated with a technology that's just a lot of empty hype?

It's a worthy question as we close out 2020, considering how much money and resources companies are pouring into all things A.I.-related (often despite budget cutbacks related to the COVID-19 pandemic). Martin Casado and Matt Bornstein, the partners in question, conclude that A.I. is indeed viable, but that A.I.-centric businesses can't operate like traditional software firms.

Specifically, A.I. companies have lower gross margins (due to the need for lots of expensive and talented humans, as well as infrastructure expenses), scaling challenges (due to edge cases), and weaker defensive moats (because of more A.I. tools and apps becoming commoditized, among other issues).

"Training a single A.I. model can cost hundreds of thousands of dollars (or more) in compute resources," they wrote. "While it's tempting to treat this as a one-time cost, retraining is increasingly recognized as an ongoing cost, since the data that feeds AI models tends to change over time (a phenomenon known as data drift)."
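To make "data drift" concrete, one common way to monitor it is to compare the distribution of each feature in newly arriving data against the training data, for example with a two-sample Kolmogorov-Smirnov test. The sketch below uses synthetic placeholder data and an arbitrary significance threshold; it is illustrative only, not the approach recommended by the firms discussed here.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train, live, alpha=0.01):
    """Return indices of feature columns whose live distribution differs
    significantly from the training distribution (two-sample KS test)."""
    drifted = []
    for col in range(train.shape[1]):
        stat, p_value = ks_2samp(train[:, col], live[:, col])
        if p_value < alpha:
            drifted.append(col)
    return drifted

# Synthetic placeholder data: feature 1 drifts upward, feature 0 does not.
rng = np.random.default_rng(0)
train = rng.normal(size=(5000, 2))
live = np.column_stack([rng.normal(size=2000), rng.normal(loc=0.5, size=2000)])
print(drifted_features(train, live))  # expected: [1]
```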

If the A.I. model is training on something storage-intensive like video, things get even worse. Add on top of that the cost of humans to design and wrangle the models, and you can see how any hoped-for profits from an A.I. project could quickly evaporate.

The entire Andreessen Horowitz posting is worth reading, especially if you're debating whether to jump aboard an artificial intelligence startup. Amidst all the discussions of cloud-infrastructure costs and model complexity, though, one thing stands out: the overwhelming presence of human beings within A.I. systems that are supposedly becoming more and more automated.

It's not just a question of employing people who can build and continually maintain models. "For many tasks, especially those requiring greater cognitive reasoning, humans are often plugged into A.I. systems in real time," the posting added. "Social media companies, for example, employ thousands of human reviewers to augment A.I.-based moderation systems. Many autonomous vehicle systems include remote human operators, and most A.I.-based medical devices interface with physicians as joint decision makers."

And there's no end in sight to intervention: "Many problems, like self-driving cars, are too complex to be fully automated with current-generation A.I. techniques. Issues of safety, fairness, and trust also demand meaningful human oversight, a fact likely to be enshrined in A.I. regulations currently under development in the US, EU, and elsewhere."

We've seen these sorts of issues cropping up already among companies with artificial intelligence products. A few years ago, for example, Google rolled out Duplex, its automated voice assistant, which it predicted would revolutionize the process of making reservations and dealing with customer service. However, journalists quickly demonstrated there were relatively straightforward ways to stump Duplex. As of mid-2019, 25 percent of Google Duplex calls were supposedly made by human operators as opposed to an A.I.

Now consider all the A.I.-centric (or A.I.-hopeful, for those still trying to develop an application) businesses that don't have Google's talent or resources. The dream of building an artificial intelligence model that's fully capable of performing its assigned task without any sort of human intervention? Well, that's likely years away.

Andreessen Horowitz isn't the first firm to warn about this issue. In 2019, Arvind Krishna, IBM's senior vice president of cloud and cognitive software, warned that A.I. initiatives could implode once companies realize how much effort is truly necessary to prep the related data. "You run out of patience along the way, because you spend your first year just collecting and cleansing the data," he told the audience at The Wall Street Journal's Future of Everything Festival, according to the newspaper.

In a 2018 blog posting, A.I. researcher Filip Piekniewski listed all the ways in which the artificial intelligence hype wasn't matching with reality, including a lack of progress in Google's DeepMind. Two years later, it's clear that A.I. is still grinding forward as a discipline, consuming lots of cash and talent as companies hope for incremental advances.

But at least artificial intelligence researchers are still making lots of cash. And, despite these challenges, keep in mind that automation is still a long-term risk to many professions.

Ultimately, A.I. and machine learning technologies that help companies handle customer personalization and communication, data analytics and processing, and a host of other applications will continue to grow, even if it takes longer than expected to achieve seamless automation. An IDC report found three-quarters of commercial enterprise applications could lean on A.I. by next year alone, while an Analytics Insight report projects more than 20 million available jobs in artificial intelligence by 2023.

Whether you're a manager or a software developer, in other words, prepare for A.I. (even weaker A.I.) to change how you work. Make sure to review the 10 jobs that could be radically impacted by these technologies sooner than you think.


More here:

Artificial Intelligence (AI): Is It All Just Costly Hype? - Dice Insights

Posted in Ai | Comments Off on Artificial Intelligence (AI): Is It All Just Costly Hype? – Dice Insights

The Increasing Use Of AI In The Pharmaceutical Industry – Forbes

Posted: at 12:22 am

The pharmaceutical industry has long relied on cutting-edge technologies to help deliver safe, reliable drugs to market. The recent pandemic has made it more important than ever for pharmaceutical companies to get drugs and vaccines to market quickly.

Subroto Mukherjee, Head of Innovation and Emerging Technology, Americas, at GlaxoSmithKline Consumer Healthcare

Artificial intelligence and machine learning have been playing a critical role in the pharmaceutical industry and the consumer healthcare business. These technologies have proven critical across augmented intelligence applications such as disease identification and diagnosis, identifying patients for clinical trials, drug manufacturing, and predictive forecasting. On a recent episode of the AI Today podcast, Subroto Mukherjee, Head of Innovation and Emerging Technology, Americas, at GlaxoSmithKline Consumer Healthcare, discussed how AI and ML are being applied in the pharmaceutical industry and some unique use cases for the technology. In this follow-up interview, he shares his insights in more detail.

How is AI currently being applied in the pharmaceutical industry?

Subroto Mukherjee: AI and ML have been critical in the pharmaceutical industry and the consumer healthcare business. They are playing an important role during this pandemic, driven by COVID and the race to discover effective vaccines. The top-level uses in the pharma and consumer healthcare arena are as follows:

Apart from these healthcare conditions, we see AI and ML used in many digital transformation areas for pharma and healthcare companies, such as martech, adtech, supply chain, sales, and customer service.

What are some unique use cases for AI and ML technology in the pharmaceutical industry?

Subroto Mukherjee: As per an article in the Guardian, the artificial intelligence group DeepMind has cracked a serious scientific problem that has stumped researchers for half a century. With AlphaFold, its AI program, the company and research laboratory showed it could predict how proteins fold into 3D shapes. The advantage of this discovery is that it will help researchers uncover the mechanisms that drive some diseases and pave the way for designer medicines, more nutritious crops, and "green enzymes" that can break down plastic pollution.

Another unique use case, my favorite and one I am involved in enabling for the GSK consumer R&D team, is AI in sensory science. AI and ML are ramping up the prediction of sensory parameters in foods, beverages, agriculture, and medicine. This could lead to hyper-personalized food, beverage, and medicine products customized for different demographics and ethnicities; we extensively use sensory properties beyond taste, such as smell, appearance, and texture, which influence what we select to eat or drink.

Can you share use cases where AI was successfully applied at GlaxoSmithKline?

Subroto Mukherjee: Let me share some use cases in our consumer healthcare line of business.

Predictive Forecasting: We have popular seasonal brands in the allergy and cold-and-flu category. The business use case is a predictive model that forecasts how the upcoming allergy or cold-and-flu season will shape up in different regions, and when the peaks and troughs are expected. This information lets us inform consumers on our brand.com websites, improve our national and regional media delivery, and advise retailers on seasonal activation timing (distribution, stock-up, display, and secondary support).
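
As a rough illustration of what such a regional seasonal forecast can look like, here is a minimal sketch in Python using Holt-Winters exponential smoothing on a made-up weekly sales series; the data, the 52-week seasonality, and the model choice are illustrative assumptions, not GSK's actual pipeline.

    # A minimal sketch of seasonal demand forecasting on hypothetical data.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    # Hypothetical: three years of weekly unit sales with a recurring seasonal peak.
    weeks = pd.date_range("2018-01-07", periods=156, freq="W")
    seasonal = 1000 + 600 * np.sin(2 * np.pi * (np.arange(156) % 52) / 52)
    noise = np.random.default_rng(0).normal(0, 50, 156)
    sales = pd.Series(seasonal + noise, index=weeks)

    # Holt-Winters captures level, trend, and the 52-week seasonal cycle.
    model = ExponentialSmoothing(
        sales, trend="add", seasonal="add", seasonal_periods=52
    ).fit()
    forecast = model.forecast(26)        # next two quarters of weekly demand
    peak_week = forecast.idxmax()        # predicted peak, used for activation timing
    print(f"Predicted peak week: {peak_week:%Y-%m-%d}, units: {forecast.max():.0f}")

In practice, one such model would presumably be fitted per region and per brand, with the predicted peak dates feeding media planning and retailer activation calendars.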

Sensory Models: Humans react differently to taste, size, texture, and color, and sensory AI models help in a holistic way with understanding, predicting, and optimizing consumer preference. We feed multiple parameters, such as taste, texture, and color, into ML models to understand the relationship between the consumer and the desired product experience. Our brands offer gummies, tablets, and liquids for our over-the-counter products, and these models are beneficial there.
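
To make the idea concrete, below is a minimal sketch of a sensory preference model trained on a hypothetical panel dataset; the attribute names, scores, and random-forest choice are assumptions for illustration and do not reflect GSK's proprietary models.

    # A minimal sketch: predict consumer liking from sensory attributes.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Hypothetical panel data: sensory attributes plus an overall liking score (1-9).
    panel = pd.DataFrame({
        "format": ["gummy", "tablet", "liquid", "gummy", "liquid"],
        "sweetness": [6.5, 2.0, 4.0, 7.0, 3.5],
        "texture": [5.0, 8.0, 1.0, 4.5, 1.5],
        "color_intensity": [7.0, 2.0, 5.0, 6.0, 4.0],
        "liking": [7.8, 5.1, 6.0, 8.2, 6.4],
    })

    features = panel.drop(columns="liking")
    model = Pipeline([
        ("encode", ColumnTransformer(
            [("format", OneHotEncoder(), ["format"])], remainder="passthrough")),
        ("regress", RandomForestRegressor(n_estimators=200, random_state=0)),
    ])
    model.fit(features, panel["liking"])

    # Score a candidate formulation before committing to a full consumer study.
    candidate = pd.DataFrame([{"format": "gummy", "sweetness": 6.0,
                               "texture": 4.0, "color_intensity": 6.5}])
    print("Predicted liking:", model.predict(candidate)[0])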

AI in Eye-Tracking: We do studies with our consumers and retailers in our shopper science lab and monitor how they look at our products while shopping online or in stores. With their consent, consumers and retail teams in our labs wear eye-tracking glasses and look at the products on the shelf or online. During this process, images are captured and analyzed using AI. The analysis includes Area of Interest (AOI) metrics, such as time to first fixation and time spent, as well as gaze plots, heatmaps, and video replays. This supports better product placement, improves our artwork and labeling, and helps us understand consumer behavior.
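
The AOI metrics mentioned here can be computed directly from a fixation log exported by the eye tracker; the sketch below uses a hypothetical log and invented AOI names, since the lab's actual analysis tooling is vendor-specific and not described in the interview.

    # A minimal sketch of AOI metrics from a hypothetical fixation log.
    import pandas as pd

    # Each row is one fixation: onset (ms from trial start), duration (ms), AOI hit.
    fixations = pd.DataFrame({
        "start_ms": [120, 480, 900, 1450, 2100, 2600],
        "duration_ms": [250, 300, 180, 400, 220, 350],
        "aoi": ["shelf", "our_pack", "competitor", "our_pack",
                "price_label", "our_pack"],
    })

    metrics = fixations.groupby("aoi").agg(
        time_to_first_fixation_ms=("start_ms", "min"),  # how quickly the AOI drew attention
        time_spent_ms=("duration_ms", "sum"),           # total dwell time in the AOI
        fixation_count=("duration_ms", "size"),
    )
    print(metrics.sort_values("time_to_first_fixation_ms"))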

What are some of the challenges to AI adoption at larger organizations?

Subroto Mukherjee: Key challenges to AI adoption at larger organizations are as follows:

What are some of the challenges around data privacy, security, ethics, and transparency that organizations such as GSK are dealing with?

Subroto Mukherjee: Data privacy and security are of the highest importance for our organization. We constantly ensure that all data privacy and security laws are followed and that appropriate training is provided across our different portfolios and adhered to by our partners and complementary workers. Data classification (PII, CSI, sensitive) and keeping our systems and processes aligned with the requirements of the GDPR and the California Privacy Rights Act are some of the challenges we constantly face.

For AI ethics and transparency, we make sure MLOps processes are in place: machine learning (ML) model scoring is established, monitoring and drift detection are running, and the feedback loop is transparently followed. We embed a diverse ML team with diverse experience and test the models constantly to bring transparency and remove bias from the machine learning models.
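
As one example of what the drift monitoring in such an MLOps loop can look like, the sketch below applies a two-sample Kolmogorov-Smirnov test to a single model input; the feature values and alert threshold are illustrative assumptions rather than GSK's actual configuration.

    # A minimal sketch of input drift detection for a deployed model.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # feature at training time
    live_feature = rng.normal(loc=0.4, scale=1.1, size=5000)      # same feature in production

    statistic, p_value = ks_2samp(training_feature, live_feature)
    if p_value < 0.01:
        # In a real pipeline this would raise an alert and feed the retraining loop.
        print(f"Drift detected (KS statistic = {statistic:.3f}); flag model for review.")
    else:
        print("No significant drift detected.")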

The global pandemic has really shaken up the pharma industry. How are you seeing AI and machine learning being put to use in the fight against the pandemic?

Subroto Mukherjee: Concerning the pandemic, the biggest use of AI and machine learning, from my understanding, is to tease out COVID's biological secrets, to identify, among millions of candidates, the few molecules that will help end COVID, and to reduce the time to market for drugs across discovery, development, clinical trials, and final FDA approval. Look at the speed and agility of the current vaccine: it took 300 days from identifying the coronavirus genome to the first vaccine study, a process that has previously taken an average of eight to ten years.

Medical Mining: Let me focus on one specific initiative, the US White House "Call to Action" to analyze and transform COVID-19 data into clinical knowledge. The White House is partnering with the AI research community to understand the novel coronavirus by mining medical literature. Natural language processing is one of the fastest-growing practices in this area and is helping with this initiative. Medical imaging companies using AI and ML have claimed record-level accuracy in detecting COVID-induced pneumonia from CT scans, despite concerns from some stakeholders about the quality of training data.
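
As a toy illustration of what mining medical literature with NLP involves, the sketch below ranks a few invented abstracts against a clinical question using TF-IDF similarity; the real initiative works over vastly larger corpora with far more sophisticated language models.

    # A minimal sketch of literature search over a handful of made-up abstracts.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    abstracts = [
        "Spike protein binding to ACE2 receptors in human airway cells.",
        "Supply chain disruption of personal protective equipment during 2020.",
        "Antiviral small molecules inhibiting coronavirus protease activity.",
    ]
    query = ["molecules that inhibit coronavirus activity"]

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(abstracts)
    query_vector = vectorizer.transform(query)

    # Rank abstracts by similarity to the clinical question.
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    best = int(scores.argmax())
    print(f"Most relevant abstract (score {scores[best]:.2f}): {abstracts[best]}")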

Another important impact of COVID-19 is on the supply chain. All companies, including ours, are facing the impact of COVID on supply chain and manufacturing. Whether it is the supply of raw materials or the distribution of finished goods, AI helps pre-empt the associated risks. Companies are scrambling to respond to rapidly shifting consumer demand, limited supply of some products, and new workplace rules. AI and ML are used in planning and forecasting, in bots for automation and collaboration, and in many other key areas of the value chain.

How do large organizations approach change management for transformative technologies such as AI?

Subroto Mukherjee: We are implementing an agile transformation across the business to create an effective and simple change management structure. Our technology organization, business team, and leadership team have undergone agile training. The change management discipline has been re-oriented with a clear hierarchy of approvals (key decision-makers) for onboarding new AI technology solutions. For these transformative technologies, we define clear business objectives and value across now, next, and later horizons.

What do you see as critical needs for workforce development around AI?

Subroto Mukherjee: We need reskilling and education among the workforce, not only in technical aspects but also in AI's business value. AI for Good, or AI ethics, is another key aspect that employees and the business community need to understand. Workers should not be afraid of AI, but rather embrace it and understand its benefits. In terms of workforce, organizations need to scale up slowly, with monitored results and a pool of data scientists who know the business, data engineers, and subject matter experts.

How is the global regulatory environment impacting the pharma industry's adoption of AI?

Subroto Mukherjee: It is necessary to meet compliance and regulatory requirements, as regulators need to safeguard consumers, and that does impact the timelines for rolling out new AI solutions. But organizations should collaborate with regulators to streamline this process for the benefit of all. Both regulators and pharma companies can embrace AI and other digital transformation initiatives to drive the economy, cost efficiency, and value-driven effectiveness of regulatory operations.

What AI technologies are you most looking forward to in the coming years?

Subroto Mukherjee: I am looking forward to the advancement and extended use of natural language processing, robotics, speech, and computer vision in the coming years.

See the original post here:

The Increasing Use Of AI In The Pharmaceutical Industry - Forbes

Posted in Ai | Comments Off on The Increasing Use Of AI In The Pharmaceutical Industry – Forbes

AI developers need to avoid creating tech-Frankensteins – The Financial Express

Posted: at 12:22 am


While examples of human bias creeping into artificial intelligence (AI) algorithms may be a dime a dozen, little is being done by companies to factor ethical considerations into AI decisions. Rather, most companies tend to carry out corrections post facto, with much apologising, of course. However, a reputed conference is trying to change the rules of the game. As per Nature, the Neural Information Processing Systems (NeurIPS) conference that took place earlier this month incorporated an ethics board to screen papers and assess whether the proposed technology would have any potential side-effects or whether the research could have unintended uses. The aim of the board is to encourage participants to go beyond their remit and develop technology that considers its larger ramifications for society. This year, of the 9,467 submissions, 290 were flagged, and four were even rejected.

Although many would argue that screening papers on ethical aspects would take focus away from research, what also needs to be considered is that the world is increasingly battling cases of technology misuse. For instance, deepfake technology is being used by miscreants to spread fake news. And although companies like IBM have withdrawn their facial recognition offerings for policing, many police departments and city administrations still use such technology, despite reports of bias and faulty recognition. Ethical review would force researchers to consider aspects of privacy and the incorrect use of technology. Given how deeply technology can penetrate everyday life, developers need to avoid creating tech-Frankensteins, and NeurIPS shows the way.



See the original post:

AI developers need to avoid creating tech-Frankensteins - The Financial Express

Posted in Ai | Comments Off on AI developers need to avoid creating tech-Frankensteins – The Financial Express
