
Category Archives: Artificial Intelligence

Reply: Automation and Artificial Intelligence Are the Strategic Keys for an Effective Defense Against Growing Threats in the Digital World – Business…

Posted: May 31, 2022 at 2:31 am

TURIN, Italy--(BUSINESS WIRE)--Today, cybersecurity represents an essential priority in the implementation of new technologies, especially given the crucial role that they have come to play in our private and professional lives. Smart Homes, Connected Cars, Delivery Robots: this evolution will not stop and so, in tandem, it will be necessary to develop automated and AI-based solutions to combat the growing number of security threats. The risks from these attacks are attributable to several factors, such as increasingly complex and widespread digital networks and a growing sensitivity to data privacy issues. These are the themes that emerge from the new Cybersecurity Automation research conducted by Reply, thanks to the proprietary SONAR platform and the support of PAC (Teknowlogy Group) in measuring the markets and projecting their growth.

In particular, the research estimates the principal market trends in security system automation, based on analysis of studies of the sector combined with evidence from Reply's own customers. The data compares two different clusters of countries: the Europe-5 (Italy, Germany, France, the Netherlands, Belgium) and the Big-5 (USA, UK, Brazil, China, India) in order to understand how new AI solutions are implemented in the constantly evolving landscape of cybersecurity.

As cyberattacks like hacking, phishing, ransomware and malware have become more frequent and sophisticated, resulting in trillions of euros in damages for businesses both in terms of profit and brand reputation, the adoption of hyperautomation techniques has demonstrated how artificial intelligence and machine learning represent possible solutions. Furthermore, these technologies will need to be applied at every stage of protection, from software to infrastructure, and from devices to cloud computing.

Of the 300 billion euros in investments that the global cybersecurity market will make in the next five years, a large part will be directed toward automating security measures in order to improve detection and response times to threats in four different segments: Application security, Endpoint security, Data security and protection, and Internet of Things security.

Application Security. Developers who first introduced the concept of security by design, an adaptive approach to technology design security, are now focusing on an even closer collaboration with the operations and security teams, termed DevSecOps. This newer model emphasizes the integration of security measures throughout the entire application development lifecycle. Automating testing at every step is crucial for decreasing the number of vulnerabilities in an application, and many testing and analysis tools are further integrating AI to increase their accuracy or capabilities. Investments in application security automation in the Europe-5 market are expected to see enormous growth, around seven times the current value, reaching 669 million euros by 2026. A similar growth is forecast in the Big-5 market, with investments rising to 3.5 billion euros.
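The automated testing the article describes is typically wired into the development pipeline as a gate that fails the build when risky patterns are found. As a purely illustrative sketch (the rules and names below are hypothetical, not from any real tool), such a check can be as simple as matching source lines against a rule set:

```python
import re

# Hypothetical rules for a minimal static security gate: each maps a
# human-readable finding to a regex matched against source lines.
RULES = {
    "use of eval()": re.compile(r"\beval\s*\("),
    "hardcoded secret": re.compile(r"(password|api_key)\s*=\s*['\"]"),
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for every rule violation."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for finding, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, finding))
    return findings

snippet = 'api_key = "s3cr3t"\nresult = eval(user_input)\n'
for lineno, finding in scan_source(snippet):
    print(f"line {lineno}: {finding}")
```

Real DevSecOps pipelines layer far more sophisticated analysis (and, as the article notes, increasingly AI-assisted tools) on top of this kind of pattern, but the principle of running the check automatically on every change is the same.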

Endpoint security. Endpoints, such as desktops, laptops, smartphones and servers, are sensitive elements and therefore possible sources of entry for cyberattacks if not adequately protected. In recent years, the average number of endpoints within a company has significantly increased, so identifying and adopting efficient and comprehensive protection tools is essential for survival. Endpoint detection and response (EDR) and Extended detection and response (XDR) are both tools created to accelerate the response time to emerging security threats, delegating repetitive and monotonous tasks to software that can manage them more efficiently. Investments in these tools are expected to increase in both the Europe-5 and Big-5 markets over the next few years, reaching 757 million euros and 3.65 billion euros respectively. There are also a multitude of other tools and systems dedicated to incident management that can be integrated at the enterprise level. For example, in Security Orchestration Automation and Response (SOAR) solutions, AI can be introduced in key areas such as threat management or incident response.
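The delegation of repetitive response tasks that EDR/XDR and SOAR tools perform can be pictured as a playbook that routes each alert by a model-assigned threat score. The sketch below is illustrative only: the threshold, action names, and data shapes are assumptions, not the API of any real product.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    endpoint: str
    score: float  # model-assigned threat score in [0, 1]

def triage(alerts: list[Alert], quarantine_above: float = 0.8) -> dict[str, list[str]]:
    """Route each alert: high-score endpoints get an automated response,
    the rest are queued for a human analyst."""
    actions = {"quarantine": [], "review": []}
    for alert in alerts:
        if alert.score >= quarantine_above:
            actions["quarantine"].append(alert.endpoint)  # automated response
        else:
            actions["review"].append(alert.endpoint)      # human-in-the-loop
    return actions

print(triage([Alert("laptop-17", 0.93), Alert("srv-02", 0.41)]))
```

The value proposition in the article is exactly this split: software handles the monotonous, high-volume decisions while analysts focus on the ambiguous ones.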

Data security and protection. Data security threats, also called data breaches, can cause significant damage to a business, resulting in risky legal complications or devaluating brand reputation. Ensuring that data is well-preserved and well-stored is an increasingly important challenge. It is easy to imagine how many different security threats can come from poor data manipulation, cyberattacks, untrustworthy employees, or even just from inexperienced technology users. Artificial intelligence is a tool for simplifying these data security procedures, from discovery to classification to remediation. Security automation is expected to reduce the cost of a data breach by playing an important role in various phases of a cyberattack, such as in data loss prevention tools (DLP), encryption, and tokenization. In an effort to better protect system security and data privacy, companies in the Europe-5 cluster are expected to invest 915 million euros in data security automation by 2026. The Big-5 market will quadruple its value, reaching 4.4 billion euros in the same timeframe.
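Of the techniques listed above, tokenization is the easiest to illustrate: a sensitive value is swapped for a random token, and the mapping is kept in a separate vault so the token is worthless if leaked. The class below is a minimal sketch under that assumption; the names are illustrative and real tokenization products differ (vault storage, format-preserving tokens, access control).

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: random tokens stand in for
    sensitive values, with the mapping held in a separate store."""

    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # unguessable stand-in
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # e.g. a card number
assert vault.detokenize(token) == "4111-1111-1111-1111"
```

Unlike encryption, the token has no mathematical relationship to the original value, which is why the article lists tokenization alongside DLP and encryption as a distinct control.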

Internet of Things security. The interconnected nature of IoT allows for every device in a network to be a potential weak point, meaning even a single vulnerability could be enough to shut down an entire infrastructure. By 2026, it is estimated that there will be 80 billion IoT devices on earth. The impressive range of abilities offered by IoT devices for different industries, though enabling smart factories, smart logistics, or smart speakers, prevents the creation of a standardized solution for IoT cybersecurity. As IoT networks reach fields ranging from healthcare to automotive, the risks only multiply. Therefore, IoT security is one of the most difficult challenges: the boundary between IT and OT (Operational Technology) must be overcome in order for IoT to unleash its full business value. As such, it is estimated that the IoT security automation market will exceed the 1-billion-euro mark in the Europe-5 cluster by 2026. In the Big-5 market, investments will reach a whopping 4.6 billion euros.

Filippo Rizzante, Reply's CTO, has stated: "The significant growth that we are witnessing in the cybersecurity sector is not driven by trend, but by necessity. Every day, cyberattacks hit public and private services, government and healthcare systems, causing enormous damage and costs; therefore, it is more urgent than ever to reconsider security strategies and reach new levels of maturity through automation, remembering that though artificial intelligence has increased the threat of the hacker, it is through taking advantage of AI's opportunities that cyberattacks can be prevented and countered."

The complete research is downloadable here. This new research is part of the Reply Market Research series, which includes the reports From Cloud to Edge, Industrial IoT: a reality check and Hybrid Work.

Reply

Reply [EXM, STAR: REY, ISIN: IT0005282865] is specialized in the design and implementation of solutions based on new communication channels and digital media. Reply is a network of highly focused companies supporting key European industrial groups operating in the telecom and media, industry and services, banking, insurance and public administration sectors in the definition and development of business models enabled for the new paradigms of AI, cloud computing, digital media and the Internet of Things. Reply services include: Consulting, System Integration and Digital Services. http://www.reply.com

View original post here:

Reply: Automation and Artificial Intelligence Are the Strategic Keys for an Effective Defense Against Growing Threats in the Digital World - Business...

Posted in Artificial Intelligence | Comments Off on Reply: Automation and Artificial Intelligence Are the Strategic Keys for an Effective Defense Against Growing Threats in the Digital World – Business…

How to leverage the artificial intelligence solar system – ComputerWeekly.com

Posted: at 2:31 am

Artificial intelligence (AI) is on the priority list for every executive who uses technology to enable their business. And today, every business is a technology business. Despite the excitement around AI and investments in its capabilities, only about a third of companies say they've adopted leading operational practices for AI, but an increasing percentage are working toward that goal.

While AI is often seen as the golden ticket to take business operations into the 21st century, and it can do so, the technology must be approached specifically and strategically, not as an all-in-one solution.

In the universe of technology, one can picture a solar system of interdependent capabilities. At the core, cloud technology serves as the sun: a central power source fuelling and enabling other technologies. Underlying cloud platforms, such as Amazon Web Services or Google Cloud, provide the basis for other capabilities to flourish in the technology universe.

Rotating around cloud platforms, there are various AI planets in orbit that build off of cloud infrastructure to deliver solutions such as automation, machine learning, robotic process automation, and more. Many business leaders are eager to enter the orbit of artificial intelligence solutions, but must first start by building the necessary foundation for successful AI implementations.

Once the centre of the AI solar system is in place, to effectively unlock the power of AI, it's important that business leaders understand what it is they are trying to solve. And while many suppliers have powerful offerings, AI is not one-size-fits-all in its approach or implementation. It takes several capabilities and applications to drive true end-to-end AI outcomes.

This ecosystem strategy can ultimately offer flexibility and stability for IT decision-makers looking to harness business data and drive meaningful results for their organisations. Key to demonstrating the importance of AI ecosystems is discussing current barriers a company is trying to overcome and what specific AI capabilities will solve for them.

Today, business leaders are looking to define the function of artificial intelligence in their organisations and how they can effectively implement AI given their current technology stacks.

For example, a banking executive may look to automate some of their company's digital banking capabilities. To get there, the institution must consider how they are currently housing their data, how that data will be processed and then refined for usage, and finally how the data can provide insight to their workforce and which insights will be most valuable to them.

In this case, an organisation may have to consider combining the technology and environment they have in place with new technology and capabilities to achieve their desired outcome of a new automated banking tool. The allure of a one-stop shop for AI needs may sway businesses to heavily invest in one provider, which can put up roadblocks on the journey to a meaningful, AI-powered solution.

Part of the trouble with seeing one supplier as a silver-bullet solution is that businesses may invest too heavily in a provider that won't help them move the needle on all of their specific AI goals. Given the hefty budgets businesses are developing for their IT departments, it's critical to ensure that investments go towards the appropriate solution(s), and that more money towards a nebulous, blanket AI may not always equate to unlocking business success.

IT decision-makers must have a clear understanding of their company's technology solar system before implementing a new AI tool Anthony Ciarlo and Frank Farrell, Deloitte

Moreover, the overarching cloud environment in which an AI solution is deployed can make or break its success. This means IT decision-makers must have a clear understanding of their company's technology solar system before implementing a new AI tool. When AI-related requests for proposal come across our desks, our first goal is to work through the specific needs of the client's organisation and assess whether the resources they are putting behind the AI solutions will get them where they want to be.

End to end, it is difficult for any one supplier to meet all of the AI needs of an organisation. Some are leaders in automation, while others are leaders in data analytics or machine learning; understanding these different strengths enables Deloitte to provide meaningful, tailored assessments as to what investments should be made.

As a systems integrator, once the Deloitte team has holistic insight into an organisation's pain points, it can provide confident recommendations as to where money should be invested and how companies can see the greatest return on investment in their technology budgets. The Deloitte team delivers confidence in integrating and navigating the solar system to provide the desired outcomes its clients and their clients need.

The ecosystem approach to AI solutions marks an important shift in how systems integrators should be approaching their client solutions. In years to come, it's likely that there will be increased collaboration across market providers, resulting in more streamlined, transparent AI implementation processes.

The key driver for this shift is continued conversations with business and technology leaders who understand that AI is not an isolated entity, but rather serves as a key component within a solar system of interconnected platforms and tools that can offer individualised solutions for the most pressing business challenges.

Anthony Ciarlo is strategy and analytics alliances leader and Frank Farrell is principal for cloud analytics and AI ecosystems at Deloitte.

Read the original here:

How to leverage the artificial intelligence solar system - ComputerWeekly.com


Regulating Artificial Intelligence in judiciary and the myth of judicial exceptionalism – The Leaflet

Posted: at 2:31 am

With the continued adoption of artificial intelligence in courts of law, can efficiency and effectiveness trump the concerns of legitimacy and justice?


Academics and researchers gathered recently to discuss the findings of a new report on algorithms and their possibilities in the judicial system. Prepared and presented by DAKSH, a research centre that works on access to justice and judicial reforms, the report has been described as a superlative introduction to the various problems that ail our courts and how the use of algorithms and allied technologies complicates them.

Artificial Intelligence (AI) systems have seen increased use in the Indian justice system, with the introduction of the Supreme Court Vidhik Anuvaad Software (SUVAS), used to translate judgments from English into other Indian languages, and the Supreme Court Portal for Assistance in Courts Efficiency (SUPACE), which would help the judges conduct legal research.

However, such systems are shrouded in secrecy as their rules, regulations, internal policy and functioning have not been properly documented or made available publicly. As such systems directly impact the efficiency and accessibility of the justice system in India, a framework that promotes accountability and transparency is warranted.

The new report examines the various domains of the judicial process where AI has been or can potentially be deployed, including predictive tools, risk assessment, dispute resolution, file management, and language recognition. It elaborates on the various ethical principles of regulating AI in the judicial space, and enumerates the challenges to regulation as observed in foreign jurisdictions. It also suggests several institutional mechanisms that would aid in regulating AI and making it a force for good.

The event was attended by the founder of Aapti Institute, Dr Sarayu Natarajan; associate professor in the Department of Humanities and Social Sciences at IIT-Delhi, Prof Naveen Thayyil; and the executive director at the Centre for Communication Governance at NLU-Delhi, Jhalak Kakkar. The event was moderated by senior research fellow at DAKSH, Sandhya PR.

The discussion traversed the domains of algorithmic accountability and the ethics of deploying such tools in a judicial system that seldom stays on an even keel.

Dr Natarajan praised the report for its comprehensive overview of the subject. She stated that the use of algorithms in availing judicial remedies should be understood with respect to various social categories such as caste, religion and economic backgrounds, which impact access. AI runs a chance of further alienating or marginalizing such social categories as far as access to justice is concerned.

Prof Thayyil said he believes AI will shape the course of the Indian judiciary over the coming two or three decades. This impact would escalate as the judicial system increases its use of such technologies in various facets of its functioning. As such, regulation of these technologies is crucial.

At present, no clear guidelines are available as to the control and effective management of AI and other tools in the justice system, and professionals would have to refer to the experience of other countries to adopt best practices.

To evaluate the desirability and degree of control that would be required, one needs to examine the impact of such technology in the real world. This concerns issues such as effectiveness, avoiding bias, ethical considerations, access issues, etc.

To measure such an impact, Prof Thayyil preferred the lens of regulatory ethics. He discussed his strong faith in the parameter of legitimacy, that is usually ignored during impact assessment of such tools, in favour of the more popular parameters of effectiveness and efficiency. He also stated that an ethics-based scrutiny of such systems would have to go beyond the procedure of such tools, and into the norms and values that inform them.

Notably, all panellists were cautious about stating which specific parts of the judicial process would be best optimised or were most likely to be experimented with, in the use of such technologies. They explained this by referring to the variety of parameters and access issues that have to be considered before deploying them.

The lack of public consensus and widespread distrust in AI would have to be addressed through public consultations and reviews from industry experts.

Sandhya referred to the lack of explainability in many of the tasks touted to be accomplished. Explainability refers to features of AI systems which affect the capacity of humans to understand and trust the results of an AI system. Legitimacy, as such, is deeply impacted. Dr Natarajan expressed concern about the impact of technological intervention on the worst off among us.

Kakkar pointed out another challenge that complicates the deployment of such technologies, which is that they are usually developed by private parties and then enforced by the State. This makes it difficult to ensure accountability and transparency of the technologies.

AI systems are supposed to learn from the data fed to them, and this could perpetuate the discriminatory tendencies and practices already present within the judicial system. The need for transparency, she emphasized, was crucial, and this could be refined and adopted by subjecting the question to public scrutiny and expert audits.

Prof Thayyil resonated with the views presented and commented on the perception that the use of technology increases efficiency. Contrary to that, the reality may be that by reducing access and introducing bias, the efficiency may, in fact, decrease, he suggested.

There is also the argument of such technologization becoming the norm in the near future, which would make a return to a non-AI system difficult. The lack of transparency and accountability of such systems was addressed by Sandhya by referring to them as black boxes.

Developing policies on AI tools in India would have to go to the basics of an open justice framework, to make such technologies more coherent with the ends of justice being contemplated. Such a framework would necessitate the disclosure of the functioning and guidelines on the working of such technology and also subject them to effective control.

A cautious approach to such questions was reiterated, with Kakkar stating that designing policies and managing data as means to regulation were inherently complex problems. The Indian experiment with regulation has been, so far, mixed, she suggested.

Since regulators function under legislation, the crucial question would be if it was too early or too late for a country like ours to have regulatory mechanisms for AI, in general.

If it is too early to have such a framework, the legislation would not be able to capture the nuances of the system that are yet to find use in the Indian justice system, but may eventually do so. If it is too late, there is a chance that such regulation may be ineffective, as the AI system will have been irreversibly embedded in the way the judiciary functions.

The possibility and desirability of such a regulatory mechanism, and framing policies on the same, would depend on the goals sought to be achieved. For example, a target of enhanced security would necessitate an autonomous regulator with regulatory capacity to question both public institutions, which deploy such tools, and private institutions, which build them.

Kakkar reiterated India's lack of a substantive data protection law, in which case the critical question is what framework would be used to protect the fundamental and human rights of people whose data is being used by such systems. There are also data gaps, as marginalized communities are generally neglected in building such technologies.

There is also the aforementioned possibility of the perpetuation of bias if such a regulatory mechanism is attempted by the courts themselves, in the absence of regulatory legislation. Kakkar also agreed with Prof Thayyil's anxiety about path dependencies, which suggests that the future course of AI depends on its deployment and percolation at present, and function creep, which suggests data may be used for ends other than those demonstrated.

These issues may entrench these systems, expanding the scope of possibly harmful practices.

Dr Natarajan believed that if such a regulatory function were left to the courts, the myth of judicial exceptionalism would have to have sufficient heft to pass muster. To regular observers of the courts of law, it is obvious that such exceptionalism is hardly the norm, she observed.

As such, the judiciary cannot be solely trusted with such a regulatory task. Suggesting that it might be a little early to have regulatory legislation for such technologies, Dr Natarajan affirmed her belief in the need for some basic regulatory mechanisms. These would, among other things, examine the background of the developers of such technologies and prevent bias.

The panellists talked about regulation of similar tools in other domains, and the need to distil a regulatory principle for AI that was more or less uniform across varied fields.

Best practices from different domains, such as healthcare, would have to be adapted because the ends of the two fields differ. This is because, while accuracy is the goal aimed to be achieved through such tech in healthcare, it is not the end but only a means to one in the case of law.

Similarly, adopting practices from other countries would have to take into account the resource settings of various jurisdictions, and a low resource country like ours would have to make certain adjustments before adopting practices from high resource jurisdictions such as China or Germany, it was felt.

Read this article:

Regulating Artificial Intelligence in judiciary and the myth of judicial exceptionalism - The Leaflet


Global Space Industry Report 2022: The Future of AI-Enabled Space Services – PR Newswire

Posted: at 2:31 am

DUBLIN, May 30, 2022 /PRNewswire/ -- The "Global Artificial Intelligence in Space Growth Opportunities" report has been added to ResearchAndMarkets.com's offering.

The multiple NewSpace start-ups entering the space industry as downstream services providers have created a fragmented market with increasing competition. Services providers are evolving their capabilities, including AI, to differentiate themselves.

AI-enabled space services will become an industry-wide trend, particularly in the downstream and satellite operations areas. The competition is slowly developing in the market and will increase in the next 5 years.

If you are an AI developer, or are interested in understanding how ICT capabilities such as AI can serve the space industry, this study will help you get started with your research.

The study provides an assessment of the state of artificial intelligence (AI) deployment in the global space industry. The analysis covers key segments of the space industry where AI deployment could add value and explores the potential impact of the growing NewSpace economy. The research lists important satellite constellations and discusses their influence on the need for suitable AI capabilities.

Key Issues Addressed:

Key Topics Covered:

1. Strategic Imperatives

2. Growth Opportunity Analysis

3. Growth Opportunity Universe - AI in Space

For more information about this report visit https://www.researchandmarkets.com/r/lcory

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

Media Contact:

Research and Markets
Laura Wood, Senior Manager
[emailprotected]

For E.S.T. Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1907 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

See original here:

Global Space Industry Report 2022: The Future of AI-Enabled Space Services - PR Newswire


Growth Opportunities for Artificial Intelligence in the Global Space Industry: Customized AI Solutions for NewSpace Missions, Deep Space Missions and…

Posted: at 2:31 am


Dublin, May 27, 2022 (GLOBE NEWSWIRE) -- The "Global Artificial Intelligence in Space Growth Opportunities" report has been added to ResearchAndMarkets.com's offering.

The multiple NewSpace start-ups entering the space industry as downstream services providers have created a fragmented market with increasing competition. Services providers are evolving their capabilities, including AI, to differentiate themselves.

AI-enabled space services will become an industry-wide trend, particularly in the downstream and satellite operations areas. The competition is slowly developing in the market and will increase in the next 5 years.

If you are an AI developer, or are interested in understanding how ICT capabilities such as AI can serve the space industry, this study will help you get started with your research.

The study provides an assessment of the state of artificial intelligence (AI) deployment in the global space industry. The analysis covers key segments of the space industry where AI deployment could add value and explores the potential impact of the growing NewSpace economy. The research lists important satellite constellations and discusses their influence on the need for suitable AI capabilities.

Key Issues Addressed:

What are the key satellite constellations slated for launch up to 2040?

What are the drivers and restraints that will impact deployment of AI in the space industry?

Which segments of the space industry will gain value from AI capabilities?

What are the growth opportunities in the space industry for ICT market participants that specialize in AI?

Key Topics Covered:

1. Strategic Imperatives

Why is it Increasingly Difficult to Grow?

The Impact of the Top 3 Strategic Imperatives on Artificial Intelligence (AI) in the Space Industry

Growth Opportunities Fuel the Growth Pipeline Engine

2. Growth Opportunity Analysis

Growth Drivers

Growth Restraints

Satellite Constellations

AI in Automated Constellation Operations

AI in Space Situational Awareness Capabilities

AI in Satellite Data Processing

AI in Deep Space Missions


3. Growth Opportunity Universe - AI in Space

Growth Opportunity 1: Customized AI Solutions for NewSpace Missions

Growth Opportunity 2: Customized AI Solutions for Deep Space Missions

Growth Opportunity 3: Customized AI Solutions for Downstream Services

For more information about this report visit https://www.researchandmarkets.com/r/j5aw7t

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

More:

Growth Opportunities for Artificial Intelligence in the Global Space Industry: Customized AI Solutions for NewSpace Missions, Deep Space Missions and...


Three Key Artificial Intelligence Adoption Pitfalls to Avoid in 2022 and Beyond – EnterpriseTalk

Posted: May 27, 2022 at 2:19 am

The avenue to adopting Artificial Intelligence (AI) isn't always straightforward. Business leaders must collect the appropriate data, identify the right technologies for their firm, and teach their employees how to construct and enhance AI models. Even if leaders have identified the ideal AI for their company and have properly onboarded it, there's still a chance they won't receive what they want or need from it.

Artificial intelligence (AI) has found its route into practically every field, and its popularity is only growing. AI can enhance productivity and deliver valuable insights to corporate executives when used effectively. However, many leaders are confused about how to employ technology effectively, and a misguided AI program might do more damage than good.

Best practices must be followed to ensure that AI benefits rather than destroys the organization. Here are three pitfalls to avoid when using AI to achieve business objectives.

Not having the proper team size

Most firms are aware that AI solutions are robust, but many overlook the complexity they entail. AI implementations need an adequately sized crew to keep the algorithms in top form. As a result, many corporations choose to outsource AI development projects or extend their AI development teams using on-demand staffing services.

Also Read: Three Potent Ways Artificial Intelligence Can Assist With Pricing

Failure to retain AI effectiveness

To be a successful solution over time, AI will require ongoing involvement. For example, if AI fails or corporate objectives shift, AI procedures must also shift. If nothing is done, or proper intervention is not implemented, AI advice may obstruct or contradict corporate goals.

Take, for example, AI-based pricing systems. If the AI system is not set up to adapt to market changes, its efficacy will suffer. To put it another way, the AI system must adjust to the current market as the source data changes.

The performance of the sales staff is one approach to assessing AI efficacy. Effective sales teams want to follow price suggestions that help them meet their objectives. Therefore, they should be willing to have their performance evaluated based on how well they use AI that delivers value. Profit margin and revenue are two common pricing-related KPIs. KPI tracking may also reveal which sales teams or team members use AI. If the recommendations are not helping them meet their KPIs, it's time to step in.

To reduce the strain on AI users, interventions should be scalable and repeatable through highly automated procedures. The intervention should consist of two parts: examining the AI system's inputs and confirming that its output is as intended. Each of these activities should be performed on a regular basis throughout the year.
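The KPI-driven intervention described above can be sketched as a simple automated check. This is an illustrative sketch only: the team names, targets, and figures are invented for the example, not drawn from the article.

```python
# Hedged sketch: flag sales teams whose AI-driven pricing is not
# meeting targets. All data below is hypothetical.
teams = {
    # team: (uses_ai, revenue, revenue_target, margin, margin_target)
    "north": (True, 1.2e6, 1.0e6, 0.22, 0.20),
    "south": (True, 0.8e6, 1.0e6, 0.15, 0.20),
    "west":  (False, 0.9e6, 1.0e6, 0.18, 0.20),
}

def needs_intervention(uses_ai, revenue, rev_target, margin, m_target):
    # Step in only where AI is in use but a pricing KPI is missed.
    return uses_ai and (revenue < rev_target or margin < m_target)

flagged = [name for name, vals in teams.items() if needs_intervention(*vals)]
print(flagged)  # ['south']
```

In this toy run, only "south" is flagged: it uses AI yet misses both KPIs, while "west" misses its targets without using AI and so falls outside the intervention.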


Ignoring the architectural fit

Despite the urge to get started with AI, it can be challenging to reap the benefits that organizations want if they lack the proper data infrastructure, leading to a slew of errors.

Before contemplating AI, a business must be able to acquire, store, and process data in order to get value from it. If they don't, firms risk relying on immature analytics, making teams more vulnerable to a variety of mistakes.




MIT Engineers Use Artificial Intelligence To Capture the Complexity of Breaking Waves – SciTechDaily

Posted: at 2:19 am

Using machine learning along with data from wave tank experiments, MIT engineers have found a way to model how waves break. "With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors," says Themis Sapsis. Credit: iStockphoto

The new model's predictions should help researchers improve ocean climate simulations and hone the design of offshore structures.

Waves break once they swell to a critical height, before cresting and crashing into a shower of droplets and bubbles. These waves can be as big as a surfer's point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex for scientists to predict.

Now, MIT engineers have found a new method for modeling how waves break. The researchers tweaked equations that have previously been used to predict wave behavior using machine learning and data from wave-tank tests. Engineers frequently use such equations to help them design robust offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

The researchers discovered that the modified model predicted how and when waves would break more accurately. The model, for example, assessed a wave's steepness shortly before breaking, as well as its energy and frequency after breaking, more accurately than traditional wave equations.

Their results, published recently in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.

"Wave breaking is what puts air into the ocean," says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. "It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction."

The study's co-authors include lead author and MIT postdoc Debbie Eeltink; Hubert Branger and Christopher Luneau of Aix-Marseille University; Amin Chabchoub of Kyoto University; Jerome Kasparian of the University of Geneva; and T.S. van den Bremer of Delft University of Technology.

To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to try and characterize waves with actual measurements. The first approach is computationally expensive and difficult to simulate even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.

The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations considered the standard description of wave behavior, then aimed to improve it by training the model on data of breaking waves from actual experiments.

"We had a simple model that doesn't capture wave breaking, and then we had the truth, meaning experiments that involve wave breaking," Eeltink explains. "Then we wanted to use machine learning to learn the difference between the two."

The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the waters height as waves propagated down the tank.

"It takes a lot of time to run these experiments," Eeltink says. "Between each experiment, you have to wait for the water to completely calm down before you launch the next experiment; otherwise, they influence each other."

In all, the team ran about 250 experiments, the data from which they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves in experiments with the predicted waves in the simple model, and based on any differences between the two, the algorithm tunes the model to fit reality.

After training the algorithm on their experimental data, the team introduced the model to entirely new data: in this case, measurements from two independent experiments, each run in a separate wave tank with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave's steepness.
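The workflow the researchers describe — learn the difference between a simple model and experiments, then test on held-out data — can be sketched in miniature. Everything below is a toy stand-in: the "physics", the synthetic data, and the random-feature network are illustrative assumptions, not the MIT team's equations, tank measurements, or actual neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_model(s):
    # Toy stand-in for the standard wave equations: a crude linear estimate.
    return 0.5 * s

def experiment(s):
    # Synthetic "truth" with a nonlinear effect the simple model misses.
    return 0.5 * s + 0.3 * s**2

# "Tank" data: inputs (e.g., wave steepness) and the model/experiment gap.
x = rng.uniform(0.0, 1.0, size=(200, 1))
residual = experiment(x) - simple_model(x)

# Tiny random-feature network: fixed random tanh features plus a
# least-squares output layer, standing in for the trained neural net
# that "learns the difference between the two".
W1 = rng.normal(0.0, 2.0, (1, 32))
b1 = rng.normal(0.0, 1.0, 32)
H = np.tanh(x @ W1 + b1)
w2, *_ = np.linalg.lstsq(H, residual, rcond=None)

def corrected_model(s):
    # Simple model plus the learned data-driven correction.
    return simple_model(s) + np.tanh(s @ W1 + b1) @ w2

# "Independent tank" test on held-out inputs.
x_test = rng.uniform(0.0, 1.0, size=(50, 1))
err_simple = np.abs(simple_model(x_test) - experiment(x_test)).mean()
err_corrected = np.abs(corrected_model(x_test) - experiment(x_test)).mean()
print(err_corrected < err_simple)
```

On held-out inputs the corrected model sits much closer to the synthetic "experiments" than the uncorrected one, mirroring the evaluation on independent wave tanks described above.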

The new model also captured an essential property of breaking waves known as the downshift, in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.

"When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, the speed at which the waves are approaching is wrong," Eeltink says.

The team's updated wave model is available as open-source code that others could potentially use, for instance in climate simulations of the ocean's potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.

"The number one purpose of this model is to predict what a wave will do," Sapsis says. "If you don't model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors."

Reference: "Nonlinear wave evolution with data-driven breaking" by D. Eeltink, H. Branger, C. Luneau, Y. He, A. Chabchoub, J. Kasparian, T. S. van den Bremer and T. P. Sapsis, 29 April 2022, Nature Communications. DOI: 10.1038/s41467-022-30025-z

This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.



Using Artificial Intelligence to Predict Life-Threatening Bacterial Disease in Dogs – University of California, Davis

Posted: at 2:19 am

Leptospirosis, a disease that dogs can get from drinking water contaminated with Leptospira bacteria, can cause kidney failure, liver disease and severe bleeding into the lungs. Early detection of the disease is crucial and may mean the difference between life and death.

Veterinarians and researchers at the University of California, Davis, School of Veterinary Medicine have discovered a technique to predict leptospirosis in dogs through the use of artificial intelligence. After many months of testing various models, the team has developed one that outperformed traditional testing methods and provided accurate early detection of the disease. The groundbreaking discovery was published in the Journal of Veterinary Diagnostic Investigation.

"Traditional testing for Leptospira lacks sensitivity early in the disease process," said lead author Krystle Reagan, a board-certified internal medicine specialist and assistant professor focusing on infectious diseases. "Detection also can take more than two weeks because of the need to demonstrate a rise in the level of antibodies in a blood sample. Our AI model eliminates those two roadblocks to a swift and accurate diagnosis."

The research involved historical data of patients at the UC Davis Veterinary Medical Teaching Hospital that had been tested for leptospirosis. Routinely collected blood work from these 413 dogs was used to train an AI prediction model. Over the next year, the hospital treated an additional 53 dogs with suspected leptospirosis. The model correctly identified all nine dogs that were positive for leptospirosis (100% sensitivity). The model also correctly identified approximately 90% of the 44 dogs that were ultimately leptospirosis negative.
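The reported performance can be reproduced from the article's counts. Note that the 40/44 split of correctly identified negatives is an assumption consistent with "approximately 90%"; the exact figure is not given.

```python
# Confusion-matrix counts from the article (the 40 true negatives among
# the 44 leptospirosis-negative dogs are assumed from "approximately 90%").
tp, fn = 9, 0    # all nine positive dogs correctly identified
tn, fp = 40, 4   # assumed: ~90% of the 44 negative dogs correctly identified

sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate

print(sensitivity)             # 1.0
print(round(specificity, 3))   # 0.909
```

Catching all nine positives with zero false negatives is exactly the "100% sensitivity" the article reports; the specificity figure matches the "approximately 90%" of negatives under the assumed split.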

The goal for the model is for it to become an online resource for veterinarians to enter patient data and receive a timely prediction.

"AI-based clinical decision making is going to be the future for many aspects of veterinary medicine," said School of Veterinary Medicine Dean Mark Stetter. "I am thrilled to see UC Davis veterinarians and scientists leading that charge. We are committed to putting resources behind AI ventures and look forward to partnering with researchers, philanthropists, and industry to advance this science."

Leptospirosis is a life-threatening zoonotic disease, meaning it can transfer from animals to humans. As the disease is also difficult to diagnose in people, Reagan hopes the technology behind this groundbreaking detection model has translational ability into human medicine.

"My hope is that this technology will be able to recognize cases of leptospirosis in near real time, giving clinicians and owners important information about the disease process and prognosis," said Reagan. "As we move forward, we hope to apply AI methods to improve our ability to quickly diagnose other types of infections."

Reagan is a founding member of the school's Artificial Intelligence in Veterinary Medicine Interest Group, comprising veterinarians promoting the use of AI in the profession. This research was done in collaboration with members of the UC Davis Center for Data Science and Artificial Intelligence Research, led by professor of mathematics Thomas Strohmer. He and his students were involved in building the algorithm. The center strives to bring together world-renowned experts from many fields of study with top data science and AI researchers to advance data science foundations, methods, and applications.

Reagan's group is actively pursuing AI for predicting the outcome of other types of infections, including a prediction model for antimicrobial-resistant infections, a growing problem in veterinary and human medicine. Previously, the group developed an AI algorithm to predict Addison's disease with an accuracy rate greater than 99%.

Other authors include Shaofeng Deng, Junda Sheng, Jamie Sebastian, Zhe Wang, Sara N. Huebner, Louise A. Wenke, Sarah R. Michalak and Jane E. Sykes. Funding support comes from the National Science Foundation.



Artificial Intelligence Is Revolutionizing The IGaming Industry In India In 2022 – Inventiva

Posted: at 2:19 am



Artificial intelligence is becoming more prevalent in many areas of daily life, including the online gaming industry. To give gamers a better experience, both land-based and online gaming have progressed and adopted cutting-edge technologies. Thanks to the advent of artificial intelligence in online casinos, players now have a safer and more realistic way to enjoy the same games they would find in a conventional casino.

To offer the latest games and services, the online gaming industry makes use of curated spaces and smart algorithms. When you visit a website, such as a betting site, algorithms use the information you provide to predict what you want.

AI, essentially a computer system that replicates human intelligence when making decisions, is at the heart of many algorithms. We examine the impact of artificial intelligence on online casinos and how it may provide players with a better and safer way to play games from major game developers from the comfort of their own homes.

The use of artificial intelligence (AI) increases the online safety of players.

As the number of players playing for real money on a PC or mobile device grows, security measures are needed to provide a secure environment. AI has shown to be a great way for operators to protect player privacy while processing payments in the most secure way possible. To provide a safe atmosphere for placing bets, the best websites employ strong AI technology.

SSL encryption is one example of a complementary security technology. In online gambling settings, it is one of the most crucial cybersecurity safeguards: a security feature that helps protect sensitive data during transaction processing. It prevents account hacking and fraud by keeping information out of the hands of third parties.

Bettors want to know that their funds and account information are always safe. Websites may give amazing levels of security with the use of contemporary technology, avoiding the chance of banking information or credit card data being leaked to hackers or criminals.

Members want a personalized experience while playing online games, and AI can help with that. AI gathers information from players to determine which games they play the most, how much they wager, and even how frequently they visit a site. These details are then used to create projections. When you log in to your account, operators can then customise your online gaming experience by proposing specific games.

Artificial intelligence also improves the ability of websites to detect cheaters and fraudsters. When AI software is used to capture behavioural patterns of members, the data may be used to determine whether somebody is cheating when playing games. While AI has a favourable impact on combating cheating, the technology does have a drawback: gamblers can also employ AI systems to get around detection measures in place at a casino.

The capacity to discern patterns that identify players who are cheating or attempting to influence game results has consequences for online casinos. Those found to be engaging in unfair play may have their accounts suspended pending an investigation. While cheating is all but impossible in games supervised by a random number generator, it is possible in table games or live casino games that are not.

Artificial intelligence is paving the way for a more secure and enjoyable online gaming experience. This technology is certain to transform the way we play in the future, with enhanced experiences, tailored suggestions, greater security measures, and the ability to aid in the prevention of gambling problems. The casino industry is always changing, and artificial intelligence will influence both how we gamble online and how casino games are created.




States And Localities Begin To Focus On Use Of Artificial Intelligence – New Technology – United States – Mondaq

Posted: at 2:19 am

As artificial intelligence (AI) becomes increasingly embedded into products, services, and business decisions, state and local lawmakers have been considering and passing a range of laws addressing AI. These vary from laws that promote AI to more regulatory approaches that impose obligations on AI in specific areas. In a development that parallels the evolution of privacy laws, states and localities have moved ahead with initiatives on their own. However, unlike in privacy, where a set of legislative approaches has been debated for years, approaches to dealing with AI have been far more varied and scattershot. This kind of patchwork approach, if it continues, may create issues with managing regulatory compliance for many uses of AI across jurisdictions.

States and Localities Are Beginning to Move Forward with a Piecemeal Approach to AI

In 2021, five jurisdictions (Alabama, Colorado, Illinois, Mississippi, and New York City) enacted legislation specifically directed at the use of AI. Their approaches varied, from creating bodies to study the impact of AI to regulating the use of AI in contexts where governments have been concerned about increased risk of harm to individuals.

Some of these laws have focused on promoting AI. For instance, Alabama's law establishes a council to review and advise the Governor, the legislature, and other interested parties on the use and development of advanced technology and AI in the state. The Mississippi law implements a mandatory K-12 curriculum that includes instruction in AI.

Conversely, some laws are more regulatory and skeptical of AI. For example, Illinois has adopted two AI laws: one that develops a task force to study the impact of emerging technologies, including AI, on the future of work, and another that mandates notice, consent, and reporting obligations for employers that use AI in hiring. Under existing Illinois law, an employer that asks applicants to record video interviews and uses an AI analysis must: (1) notify the applicant that AI may be used to analyze the applicant's video interview and consider the applicant's fitness for the position; (2) provide each applicant with information explaining how the AI works and what general types of characteristics the AI uses to evaluate applicants; and (3) obtain consent from the applicant. The law also limits the sharing of the videos and extends to applicants a right to delete the videos. A 2021 amendment imposes reporting requirements on an employer that relies solely upon an AI analysis of a video interview to determine whether an applicant will be selected for an in-person interview. The state Department of Commerce and Economic Opportunity is required to annually analyze certain demographic data reported and report to the Governor and General Assembly whether the data discloses a racial bias in the use of AI.

Colorado's law takes a sectoral approach, prohibiting insurers from using any information sources, as well as any algorithms or predictive models, in a way that produces unfair discrimination. Unfair discrimination includes "the use of one or more external consumer data and information sources, as well as algorithms or predictive models using external consumer data and information sources, that have a correlation to race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression, and that use results in a disproportionately negative outcome for such classification or classifications, which negative outcome exceeds the reasonable correlation to the underlying insurance practice, including losses and costs for underwriting." This law comes in addition to Colorado's comprehensive privacy law, the Colorado Privacy Act, set to go into effect on July 1, 2023, which provides consumers with a right to opt out of the processing of their personal data for purposes of targeted advertising, the sale of personal data, or automated profiling in furtherance of decisions that produce legal or similarly significant effects.

In late 2021, New York City notably enacted a specific algorithmic accountability law, becoming the first jurisdiction in the United States to require algorithms used by employers in hiring or promotion to be audited for bias. New York City's law bars AI hiring systems that do not pass annual audits checking for race- or gender-based discrimination. The bill would require the developers of such AI tools to disclose the job qualifications and characteristics that will be used by the tool and would provide employment candidates the option of choosing an alternative process for employers to review their application. The law imposes fines on employers or employment agencies of up to $1,500 per violation.

California's Privacy Regulations May Also Target AI

The California Privacy Protection Agency (CPPA), the new agency charged with rulemaking and enforcement authority over the California Privacy Rights Act (CPRA), is expected to issue regulations governing AI by 2023. The statute specifically addresses a consumer's right to understand and opt out of automated decision-making technologies such as AI and machine learning. In particular, the agency is charged with "[i]ssuing regulations governing access and opt-out rights with respect to businesses' use of automated decisionmaking technology, including profiling and requiring businesses' response to access requests to include meaningful information about the logic involved in those decisionmaking processes, as well as a description of the likely outcome of the process with respect to the consumer."

In September 2021, the CPPA released an Invitation for Preliminary Comments on Proposed Rulemaking (Invitation) and accepted comments through November 8, 2021. The Invitation to comment issued by the CPPA asked four questions regarding interpretation of the agency's automated decision-making rulemaking authority.

While the statute calls for final rules to be adopted by July 2022, at a February 17 CPPA board meeting, Executive Director Ashkan Soltani announced that draft regulations will be delayed. As we've previously discussed, this effort in California to regulate certain automated decision-making processes may open the door to greater regulation of AI and should be watched closely.

Even as the federal government looks more closely at AI, some states and localities appear to be poised to jump ahead. Indeed, many other states continue to debate AI proposals in 2022. Companies developing and deploying AI should continue to monitor this area as the regulatory landscape develops.

© 2022 Wiley Rein LLP

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.


