
Category Archives: Artificial Intelligence

New Draft Rules on the Use of Artificial Intelligence – Lexology

Posted: May 18, 2021 at 4:23 am

On 21 April 2021, the European Commission published draft regulations (AI Regulations) governing the use of artificial intelligence (AI). The European Parliament and the member states have not yet adopted these proposed AI Regulations.

The proposed AI Regulations:

In more detail

The European Commission's proposed AI Regulations are the first attempt the world has seen at creating a uniform legal framework governing the use, development and marketing of AI. They will likely have a resounding impact on all businesses that use AI for years to come.

Scope

The AI Regulations will apply to the following:

Timing

The AI Regulations will become effective 20 days after publication in the Official Journal. They will then need to be implemented within 24 months, with some provisions going into effect sooner. The long implementation period increases the risk that some provisions will become irrelevant or moot because of technological developments.

Risk-based approach

In its 2020 white paper, the European Commission proposed splitting the AI ecosystem into two general categories: high risk or low risk. The European Commission's new graded system is more nuanced and likely to ensure a more targeted approach, since the level of compliance requirements matches the risk level of a specific use case.

The new AI Regulations follow a risk-based approach and differentiate between the following: (i) prohibited AI systems whose use is considered unacceptable and that contravene Union values (e.g., by violating fundamental rights); (ii) uses of AI that create a high risk; (iii) uses that create a limited risk (e.g., where there is a risk of manipulation, for instance via the use of chatbots); and (iv) uses of AI that create minimal risk.

Under the requirements of the new AI Regulations, the greater the potential of algorithmic systems to cause harm, the more far-reaching the intervention. Limited risk uses of AI face minimal transparency requirements and minimal risk uses can be developed and used without additional legal obligations. However, makers of "limited" or "minimal" risk AI systems will be encouraged to adopt non-legally binding codes of conduct. The "high risk" uses will be subject to specific regulatory requirements before and after launching into the market (e.g., ensuring the quality of data sets used to train AI systems, applying a level of human oversight, creating records to enable compliance checks and providing relevant information to users). Some obligations may also apply to distributors, importers, users or any other third parties, thus affecting the entire AI supply chain.

Enforcement

Member states will be responsible for enforcing these regulations. Penalties for noncompliance are up to 6% of global annual turnover or EUR 30 million, whichever is greater.
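The "whichever is greater" formula is easy to misread, so here is a quick sketch of how the EUR 30 million floor interacts with the 6% cap (the function name and the turnover figures are hypothetical, not drawn from the article):

```python
def max_penalty_eur(global_annual_turnover_eur: int) -> int:
    """Maximum fine under the draft AI Regulations: the greater of
    6% of global annual turnover or EUR 30 million."""
    return max(global_annual_turnover_eur * 6 // 100, 30_000_000)

# A EUR 1 billion company: 6% (EUR 60 million) exceeds the floor.
print(max_penalty_eur(1_000_000_000))  # 60000000
# A EUR 100 million company: 6% is only EUR 6 million, so the floor applies.
print(max_penalty_eur(100_000_000))    # 30000000
```

In other words, the EUR 30 million figure is a floor, not a cap: for large companies the turnover-based penalty dominates.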

Criticism of exemptions for law enforcement

Overly broad exemptions for law enforcement's use of remote biometric surveillance have been the target of criticism. There are also concerns that the AI Regulations do not go far enough to address the risk of possible discrimination by AI systems.

Although the AI Regulations detail prohibited AI practices, some find there are too many problematic exceptions and caveats. For example, the AI Regulations create an exception for narrowly defined law enforcement purposes such as searching for a missing child or a wanted individual or preventing a terror attack. In response, some EU lawmakers and digital rights groups want the carve-out removed due to fears authorities may use it to justify the widespread future use of the technology, which can be intrusive and inaccurate.

Support

The AI Regulations include measures supporting innovation such as setting up regulatory sandboxes. These facilitate the development, testing and validation of innovative AI systems for a limited time before their placement on the market, under the direct supervision and guidance of the competent authorities to ensure compliance.

Database

According to the AI Regulations, the European Commission will be responsible for setting up and maintaining a database for high-risk AI practices (Article 60). The database will contain data on all stand-alone AI systems considered high-risk. To ensure transparency, all information processed in the database will be accessible to the public.

It remains to be seen whether the European Commission will extend this database to low-risk practices to increase transparency and enhance the possibility of supervision for practices that are not high-risk initially but may become so at a later stage.

Employment-specific observations

1. High-risk

AI practices that involve employment, worker management and access to self-employment are considered high-risk. These high-risk systems specifically include the following AI systems:

2. Biases

According to the AI Regulations, the training, validation and testing of data sets must be subject to appropriate data governance and management practices, including in relation to possible biases. Providers of continuously learning high-risk AI systems must ensure that possibly biased outputs are equipped with proper mitigation measures if they will be used as input in future operations (feedback loops).

However, the AI Regulations are unclear on how AI systems will be tested for possible biases, specifically whether the benchmark will be equality in opportunity or equality in outcomes. Companies should consider how these systems might affect individuals with disabilities and individuals at the intersection of multiple social groups.
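The Regulations do not spell out a testing mechanism, so the following is purely an illustrative sketch (the function, the 1.25 disparity threshold, and the data are all hypothetical): one simple guard on a feedback loop is to compare positive-outcome rates across groups before model outputs are recycled as training data.

```python
def safe_for_feedback(outputs, max_ratio=1.25):
    """Illustrative feedback-loop guard: allow predictions back into
    the training set only if positive rates across groups stay within
    a chosen disparity ratio. `outputs` is a list of
    (group, predicted_positive) pairs; the 1.25 threshold is arbitrary."""
    totals, positives = {}, {}
    for group, positive in outputs:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(positive)
    rates = [positives[g] / totals[g] for g in totals]
    lo, hi = min(rates), max(rates)
    # Block the loop when any group's positive rate is disproportionately high.
    return lo > 0 and hi / lo <= max_ratio

balanced = [("A", True), ("A", False), ("B", True), ("B", False)]
skewed = [("A", True), ("A", True), ("B", True), ("B", False)]
print(safe_for_feedback(balanced))  # True: both groups at 50%
print(safe_for_feedback(skewed))    # False: group A's rate is double group B's
```

Whether such a check should target equality of opportunity or equality of outcomes is exactly the open question the Regulations leave unanswered.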

3. Processing of special categories of personal data to mitigate biases is permissible

The AI Regulations carve out an exception allowing AI providers to process special categories of personal data if it is strictly necessary to ensure bias monitoring, detection and correction. However, AI providers processing this personal data are still subject to appropriate safeguards for the fundamental rights and freedoms of natural persons (e.g., technical limitations on the reuse and use of state-of-the-art security and privacy-preserving measures such as pseudonymization or encryption). It remains to be seen whether individuals will sufficiently trust these systems to provide them with their sensitive personal data.

4. Human oversight

High-risk AI systems must be capable of human oversight. Individuals tasked with oversight must:

As indicated in our recent Trust Continuum report, this will require substantial involvement from the human decision-maker (in practice, often an individual from HR), which proves to be challenging for most companies.

Beyond the EU

We have already noted the potential for extra-territorial effect of the AI Regulations. But many AI systems will not be caught if they have no EU nexus. The EU is, as it often is, at the vanguard of governmental intervention into protection of human rights - it is the first to lay down a clear marker of expectations around use of AI. But the issue is under review in countries across the world. In the UK, the Information Commissioner has published Guidance on AI and data protection. In the US, a number of states have introduced draft legislation governing use of AI. None have quite the grand plan feel of the EU's AI Regulations, but there will certainly be more to follow.

Read the original:

New Draft Rules on the Use of Artificial Intelligence - Lexology


BBVA AI Factory, among the world’s best financial innovation labs, according to Global Finance – BBVA

Posted: at 4:23 am

The AI Factory is the global development center where BBVA builds its artificial intelligence capabilities. Its mission is to help create data products adapted to the needs of an increasingly digital population and to position BBVA as a leading player in the new world's banking scene.

Optimizing remote agents' work to improve customer service, or providing BBVA teams with key knowledge to detect fraud: these are just some of the areas that have benefited from the work with data that began at BBVA more than a decade ago and gave way to the creation of the Artificial Intelligence Factory in 2019. Today the team has 50 professionals from different disciplines: data scientists, engineers, software developers, data architects, and business translators, that is, professionals who serve as a bridge between analytical capabilities and business needs. In addition, the number of people working under the BBVA AI Factory project umbrella is close to 200, including professionals from the BBVA Group.

BBVA AI Factory is one of the financial sector's largest global bets on seizing the opportunities of the data age for everyone. "This recognition is a boost to the company's strategic approach, which seeks to maximize the value we generate in the Group, aligning our objectives with BBVA's strategic priorities," says Francisco Maturana, BBVA AI Factory's CEO. "For this it is important to adopt a product company mindset, creating reusable and multipurpose solutions and adapting our operating model to the bank's needs."

The AI Factory is also one of the teams involved in the improvements to BBVA's app functionalities, in order to achieve a much more personalized experience for clients based on artificial intelligence capabilities.

"The financial innovation labs on Global Finance's annual list are where tomorrow's solutions are being incubated," said Joseph Giarraputo, Global Finance's editorial director.

Global Finance's Innovator Awards list of the world's best financial innovation labs aims to reward the world's most innovative financial institutions, as well as the most innovative products and services.

Read the original:

BBVA AI Factory, among the world's best financial innovation labs, according to Global Finance - BBVA


New artificial intelligence regulations have important implications for the workplace – Workplace Insight

Posted: at 4:23 am

The European Commission recently announced its proposal for the regulation of artificial intelligence, looking to ban unacceptable uses of artificial intelligence. Up until now, the challenges for businesses getting AI wrong were bad press, reputation damage, loss of trust and market share, and most importantly for sensitive applications, harm to individuals. But with these new rules, two new consequences are arising: plain interdiction of certain AI systems, and GDPR-like fines.

While for now this is only proposed for the EU, the definitions and principles set out may have wider-reaching implications, not only for how AI is perceived but also for how businesses should handle and work with AI. The new regulation sets four levels of risk: unacceptable, high, low and minimum, with HR AI systems sitting in the high-risk category.

The use of AI for hiring and firing has already stirred up some controversy, with Uber and Uber Eats among the latest companies to have made headlines for AI unfairly dismissing employees. It is precisely due to the far-reaching impact of some HR AI applications that they have been categorised as high risk. After all, a key purpose of the proposal is to ensure that fundamental human rights are upheld.

Yet, despite the bumps in the road and the focus on the concerns, it needs to be remembered that AI is in fact the best means of helping remove discrimination and bias, if the AI is ethical. Continue to replicate the same traditional approaches and processes as found in existing data, and we'll definitely repeat the same discriminations, even unconsciously. Incorporate ethical and regulatory considerations into the development of AI systems, and I'm convinced we will make a great step forward. We need to remember that the challenges lie in how AI is developed and used, not the actual technology itself. This is precisely the issue the EU proposal is looking to address.

AI, let alone ethical AI, is still not fully understood, and there is an important piece of education that needs to be undertaken. From the data engineers and data scientists to those in HR using the technology, the purpose of the AI and how and why it is being used must be understood to ensure it is being used as intended. HR also needs a level of comprehension of the algorithm itself to identify whether those intentions are not being followed.

Defining the very notion of what is ethical is not that simple, but regulations like the one proposed by the EU, codes of conduct, data charters and certifications will help us move towards generally shared concepts of what is and isn't acceptable, helping to create ethical frameworks for the application of AI and, ultimately, greater trust.

These are no minor challenges, but the HR field has a unique opportunity to lead the effort and prove that ethical AI is possible, for the greater good of organisations and individuals.

Original post:

New artificial intelligence regulations have important implications for the workplace - Workplace Insight


The future of artificial intelligence and its impact on the economy – Brookings Institution

Posted: at 4:23 am

Advances in artificial intelligence are likely to herald an unprecedented period of rapid innovation and technological change, fundamentally altering current industries and economies. What is different from previous periods of technological progress is the speed at which these developments are happening and the extent to which they will shape markets around the world. How will they affect prosperity and inequality? How can AI be deployed for the greater good and to improve economic outcomes?

On Thursday, May 20, Sanjay Patnaik, director of the Center on Regulation and Markets (CRM) at Brookings, will sit down with Katya Klinova, the head of AI, labor, and the economy at the Partnership on AI, to explore these questions and many others. Klinova focuses on studying the mechanisms for steering AI progress towards greater equality of opportunity and improving working conditions along the AI supply chain. She previously worked at the UN Executive Office of the Secretary-General (SG) on preparing the launch of the SG's Strategy for New Technology, and at Google in a variety of managerial roles in the Chrome, Play, Developer Relations, and Search departments, where she was responsible for launching and driving the worldwide adoption of Google's early AI-enabled services.

Viewers can submit questions for the speakers by emailing events@brookings.edu or via Twitter using #AIGovernance.

This event is part of CRM's Reimagining Modern-day Markets and Regulations series, which focuses on analyzing rapidly changing modern-day markets and on how to regulate them most effectively.

Link:

The future of artificial intelligence and its impact on the economy - Brookings Institution


Famed Artificial Intelligence-Based ETF Has Loaded Up $1.4M Tesla Shares On Dip This Month – Benzinga

Posted: at 4:23 am

The Qraft AI-Enhanced US Large Cap Momentum ETF (NYSE:AMOM), an exchange-traded fund driven by artificial intelligence, bought about $1.4 million worth of Tesla Inc. (NASDAQ:TSLA) shares on the dip earlier this month.

What Happened: The ETF now has Tesla as its third-largest stock holding, behind social media giant Facebook Inc. (NASDAQ:FB) and e-commerce giant Amazon.com Inc. (NASDAQ:AMZN). Tesla now accounts for more than 5% of the fund's portfolio.

According to MarketWatch, the fund has a history of accurately predicting the price moves of Tesla shares.

The fund had previously sold its entire Tesla holding before the start of February this year, when the electric vehicle maker's shares were near their all-time high of $900.40.

See Also: Tesla, Nio Significantly Cut From Baillie Gifford Portfolio, Here's What The Firm Bought Instead In Q1

Why It Matters: Tesla's stock has shed more than a third of its peak value and emerged as a strong "buy the dip" candidate. It has dropped 16.4% year-to-date.

Tesla and other automakers are grappling with semiconductor shortages. Of late, the Palo Alto-based company is also facing rough weather in China, a market that contributes nearly 30% of the electric vehicle maker's global sales and is its second-largest market after the U.S.

AMOM, a product of South Korea-based fintech group Qraft, tracks 50 large-cap U.S. stocks and reweighs its holdings each month. The fund uses AI technology to automatically search for patterns that have the potential to produce excess returns and construct actively managed portfolios.
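Qraft's actual model is proprietary, so the following is only a generic sketch of the "reweigh each month by momentum" idea (the tickers, prices, and scoring rule are all hypothetical): score each stock by trailing return, keep the top scorers, and weight them in proportion to their scores.

```python
def momentum_weights(prices_by_ticker, top_n=2):
    """Toy monthly reweight: score each stock by trailing return
    (last price / first price - 1), keep the top_n scorers, and
    weight them in proportion to their scores."""
    scores = {t: p[-1] / p[0] - 1 for t, p in prices_by_ticker.items()}
    top = sorted(scores, key=scores.get, reverse=True)[:top_n]
    total = sum(scores[t] for t in top)
    return {t: scores[t] / total for t in top}

prices = {                   # hypothetical 3-month price histories
    "AAA": [100, 110, 130],  # +30%
    "BBB": [50, 52, 55],     # +10%
    "CCC": [80, 76, 72],     # -10%
    "DDD": [20, 24, 24],     # +20%
}
print(momentum_weights(prices))  # AAA and DDD, weighted roughly 0.6 and 0.4
```

A real fund layers risk limits, transaction costs, and (in AMOM's case) learned pattern detection on top; the sketch only shows the bare rebalancing mechanic.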

See also:How to Invest in Tesla Stock

AMOM has delivered year-to-date returns of 3.7%, compared to its benchmark, the Invesco S&P 500 Momentum ETF (NYSE:SPMO), which has returned just 0.3% so far this year.

Price Action: Tesla shares closed almost 3.2% higher on Friday at $589.74.


See the rest here:

Famed Artificial Intelligence-Based ETF Has Loaded Up $1.4M Tesla Shares On Dip This Month - Benzinga


Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) Standards Association Has Received Substantial Proposals in Response to Its…

Posted: at 4:23 am

PR.com, May 17, 2021

Geneva, Switzerland, May 17, 2021 --(PR.com)-- At its 8th General Assembly, the international, unaffiliated Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) standards association has received substantial proposals in response to its Call for Technologies on AI-based Company Performance Prediction Use Case. Meanwhile the development of its foundational AI Framework standard is steadily progressing and the technical review of responses to the Context-based Audio Enhancement (MPAI-CAE) and Multimodal Conversation (MPAI-MMC) Calls for Technologies has been completed.

The goal of the AI Framework (https://mpai.community/standards/mpai-aif/) standard, nicknamed MPAI-AIF, is to enable creation and automation of mixed Machine Learning (ML) - Artificial Intelligence (AI) - Data Processing (DP) inference workflows, implemented as software, hardware, or mixed software and hardware. A major MPAI-AIF feature is enhanced explainability of MPAI standard applications.

Development of two new standards has started after completion of the technical review of responses to the Calls for Technologies. Context-based Audio Enhancement (MPAI-CAE, https://mpai.community/standards/mpai-cae/) covers four instances: adding a desired emotion to speech without emotion, preserving old audio tapes, improving the audioconference experience, and removing unwanted sounds while keeping the relevant ones for a user walking in the street. Multimodal Conversation (MPAI-MMC, https://mpai.community/standards/mpai-mmc/) covers three instances: audio-visual conversation with a machine impersonated by a synthesised voice and an animated face, requests for information about a displayed object, and translation of a sentence using a synthetic voice that preserves the speech features of the human speaker.

The substantial proposals received in response to the MPAI-CUI Call for Technologies (https://mpai.community/standards/mpai-cui/#CfT) have allowed work to start on a fourth standard, AI-based Company Performance Prediction, part of the Compression and Understanding of Industrial Data standard. The standard will enable prediction of performance, e.g., organisational adequacy or default probability, by extracting information from the governance, financial and risk data of a given company.

The MPAI website provides information about other AI-based standards being developed: AI-Enhanced Video Coding (MPAI-EVC, https://mpai.community/standards/mpai-evc/) will improve the performance of existing video codecs using AI; Server-based Predictive Multiplayer Gaming (MPAI-SPG, https://mpai.community/standards/mpai-spg/) will compensate for the loss of data and detect false data in online multiplayer gaming; and Integrative Genomic/Sensor Analysis (MPAI-GSA, https://mpai.community/standards/mpai-gsa/) will compress and understand data from combined genomic and other experiments produced by related devices/sensors.

MPAI develops data coding standards for applications that have AI as the core enabling technology. Any legal entity that supports the MPAI mission may join MPAI (https://mpai.community/how-to-join/join/) if it is able to contribute to the development of standards for the efficient use of data.

Visit the MPAI website (https://mpai.community/) and contact the MPAI secretariat (secretariat@mpai.community) for specific information.

Contact Information: MPAI, Leonardo Chiariglione, 0039 011 9350461, contact via email, http://mpai.community

Read the full story here: https://www.pr.com/press-release/836616

Press Release Distributed by PR.com

The rest is here:

Moving Picture, Audio and Data Coding by Artificial Intelligence (MPAI) Standards Association Has Received Substantial Proposals in Response to Its...


How Artificial Intelligence (AI) Is Helping Musicians Unlock Their Creativity – Forbes

Posted: at 4:23 am

Wondering who that hot new collaborator is on your favorite artist's new album?

It might just be artificial intelligence.

Progress in AI music is accelerating rapidly, thanks to researchers and musicians at major tech conferences and universities who want to integrate widespread AI into the music world.


Many artists feel we're about to enter a golden age of creativity, powered by artificial intelligence, that can push music in new directions.

Let's look at some of the newest ways artificial intelligence is transforming the music industry from top to bottom.

For 30 years, musician and composer David Cope has been working on Experiments in Musical Intelligence (EMI). EMI originally began in 1982 as an effort to help Cope overcome "composer's block," and now his algorithms have produced a large number of original compositions in a variety of genres and styles.

AIVA uses AI and deep learning algorithms to help mainstream users compose their own soundtrack music and scores. It's the perfect tool for content creators on YouTube, Twitch, TikTok, and Instagram who need a steady supply of music but don't have an endless budget for royalties.

Grammy-nominated producer Alex da Kid used IBM Watson to analyze five years of hit songs, as well as cultural data from films, social media, and online articles, to figure out a theme for an AI-generated song that fans would enjoy. The final song, "Not Easy," reached number four on the iTunes Hot Tracks chart within 48 hours of its release.

Composers Drew Silverstein, Sam Estes, and Michael Hobe were working on music for big-budget movies like The Dark Knight when they started getting requests for simple background music for television and video games. They worked together to create Amper, a consumer-friendly online tool that helps non-musicians and online content creators make royalty-free music using their own parameters in a few seconds.

One thing is clear: Since the start of the pandemic, fans miss going to concerts.

To fill the void, Authentic Artists has introduced a large collection of AI-powered virtual artists who can deliver new music experiences.

Authentic Artists' animated virtual musicians generate all-original compositions to play on screen, and also respond to audience feedback by increasing or decreasing the tempo or intensity, or even fast-forwarding to the next song in the set.

Audio-on-demand streams on services like Spotify totaled 534 billion in the United States alone, according to Buzz Angle Music's 2018 report.

So how do promising new artists get discovered, with all that competition?

Artificial intelligence helps the music industry with A&R (artist and repertoire) discovery by combing through music and trying to identify the next breakout star.

Warner Music Group acquired a tech start-up last year that uses an algorithm to review social, streaming, and touring data to find promising talent. In 2018, Apple also acquired Asaii, a start-up that specializes in music analytics, to help them boost their A&R.

AI technology is transforming the music industry in a myriad of ways, but creatives shouldn't be worried about losing their jobs and being replaced by computers. We're still a long way from artificial intelligence being able to create hit songs on its own.

But as tools develop and the music industry learns how to use AI as a supplement to human creativity, our world will continue to sound sweeter and sweeter every year.

Original post:

How Artificial Intelligence (AI) Is Helping Musicians Unlock Their Creativity - Forbes


CDW Tech Talk Explains How to Get Ahead with Automation and Security – BizTech Magazine

Posted: at 4:23 am

Using your data effectively can help your organization stand out from the competition. However, doing so requires an IT infrastructure capable of the digital transformation needed to meet your business objectives. And no matter where your data is stored, it must remain secure to be effective.

The use of automation and zero-trust security strategies will be the focus of the next CDW Tech Talk series webcast.

Allen Whipple, distributor business development channel consultant, and Rony Adaimy, category manager, at Hewlett Packard Enterprise will join the conversation to highlight the value of zero-trust security strategies to protect data and defend against cybercrime. They'll also delve into the advantages offered by the use of artificial intelligence and machine learning.

Corey Carrico, senior field marketing manager at CDW, will also join us to talk about using zero trust to stay ahead of cybercriminals, as well as your competitors.

REGISTER: To watch Tuesday's session live at 1 p.m. Central time, register for the CDW Tech Talk series below.

The CDW Tech Talk series is a weekly webcast that covers a wide variety of IT topics demonstrating how businesses can gain a competitive edge, reimagine the future of work and manage evolving infrastructures.

Most recently, we took a closer look at what you need to build your organization's optimal infrastructure.

Other recent topics of discussion include building resilient workspaces, wireless technology, worker flexibility and employee workflows. Register for the series here, and follow BizTech's full coverage of the event here.


Read more:

CDW Tech Talk Explains How to Get Ahead with Automation and Security - BizTech Magazine


A Citizen's Guide To Artificial Intelligence: A Nice Focus On The Societal Impact Of AI – Forbes

Posted: April 13, 2021 at 6:28 am


A Citizen's Guide to Artificial Intelligence, by a cast of thousands (John Zerilli, John Danaher, James Maclaurin, Colin Gavaghan, Alistair Knott, Joy Liddicoat, and Merel Noorman), is a nice high-level view of some of the issues surrounding the adoption of artificial intelligence (AI). The author bios describe them all as lawyers and philosophers except for Noorman, and with that crowd it's no surprise the book is much better at discussing the higher-level impacts than AI itself. Luckily, there's a whole lot more of the former than there is of the latter. The real issue is they're better at explaining things than at coming to logical conclusions. We'll get to that, but it's still a useful read.

The issue with the authors' understanding of AI shows up early, when they first give a nice explanation of false positives and false negatives, but then write, "It's hard to measure the performance of unsupervised learning systems because they don't have a specific task." As this column has repeatedly mentioned, a key use of unsupervised learning is the task of detecting anomalous behavior, especially when anomalies are sparse. The difference between supervised and unsupervised learning is in knowing what you're looking for:

Supervised learning: Hey, here's attack XYZ!

Unsupervised learning: Hey, here's this weird thing that might be an attack!
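That distinction can be made concrete with a toy intrusion-detection sketch (the traffic numbers and both detectors are hypothetical and deliberately simplistic): the supervised detector learns a threshold from labeled attacks, while the unsupervised one only knows what "typical" looks like and flags outliers.

```python
from statistics import mean, stdev

# Supervised: labels tell us exactly what attack XYZ looks like.
labeled = [(120, "normal"), (115, "normal"), (980, "attack"),
           (130, "normal"), (1010, "attack")]  # (requests/sec, label)

def supervised_detector(labeled_data):
    """Learn a threshold halfway between the mean normal and mean attack rate."""
    normal = [x for x, y in labeled_data if y == "normal"]
    attack = [x for x, y in labeled_data if y == "attack"]
    threshold = (mean(normal) + mean(attack)) / 2
    return lambda x: "attack" if x > threshold else "normal"

def unsupervised_detector(unlabeled_data, z=3.0):
    """No labels: flag anything more than z standard deviations
    from the mean as anomalous - it *might* be an attack."""
    m, s = mean(unlabeled_data), stdev(unlabeled_data)
    return lambda x: "anomaly" if abs(x - m) > z * s else "typical"

classify = supervised_detector(labeled)
flag = unsupervised_detector([120, 115, 130, 118, 125])
print(classify(950))  # attack
print(flag(980))      # anomaly
print(flag(125))      # typical
```

Note that the unsupervised detector can only say "this is weird," which is exactly why it works when anomalies are sparse: it never needed an example of attack XYZ.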

So skim chapter one to get to the good stuff. Chapter two is about transparency, and Figure 2.1 is a nice little graphic about the types of transparency they are describing. What I really like is that accessibility is in the top tier. It doesn't matter if the designers and owners of a system claim to be responsible and also inspect the results to check accuracy; if the information isn't accessible to all parties involved in and impacted by the AI system, there's a problem.

The one issue I have with the transparency chapter is in the section on human explanatory standards. The authors seem to be claiming that since we're hard to understand, why should we expect better from AI systems? They state, "A crucial premise of this chapter has been that standards of transparency should be applied consistently, regardless of whether we're dealing with humans or machines." Yes, a silly premise. We didn't create ourselves. We're building AI systems for the same reasons we've built other things: to do things more easily or more accurately than we can do them ourselves. Since we're building the system, we should expect to be able to require more transparency to be built into it.

The next three chapters are on bias, responsibility and liability, and control. They are good overviews of those issues. The control chapter is intriguing because it's not just about us controlling the systems; it also covers issues about giving up control to systems.

Privacy is a critical issue, and chapter six is nice coverage of that. The most interesting section is on inferred data. We talk about inference engines making inferences on the data, but the extension of that to privacy is to say there might be ethical limits to what engines should be allowed to infer. There's the old case of a system knowing a young woman was pregnant and sending pregnancy sales pitches to her home before she had told her parents, but there are far worse situations. Consider societies that are intolerant of sexual orientation, which can be inferred from other data. A government could use that to persecute people. There's a wide spectrum between those examples, and the chapter does a nice job of getting people to think about the issue.

The next chapter covers autonomy and makes some very good points. One is that humans have always challenged each other's autonomy, but AI and the lack of laws and regulations make it far easier for governments and a few companies to remove our autonomy in much more opaque ways than have previously been available.

Algorithms in government and employment are given a good introduction in the next chapters, but with a lot of the same information seen elsewhere. The most interesting part of the back portion of the book comes in chapter ten, about oversight and regulation. There is a suggestion that, given the complexity of AI, there is logic to creating a new oversight agency for the national government: as the authors put it, an FDA for AI. Think of it in business terms: it's a center of excellence in AI, able to formulate national policy for business and citizens, while also helping other agencies adapt the general policies to their specific oversight areas. That makes excellent sense.

No book is perfect, but I'm pleasantly surprised that a book with so many authors attached flows as well as it does. Then I remember they are all academics, used to research papers with multiple authors. Of course, with that many academics, the risk is always that a book will sound like a research paper. Fortunately, they seem to have escaped that problem. A Citizen's Guide is a good read to help people understand key issues as AI makes the major impact on society that it will. More people need to realize that quickly and get governments to focus on protecting people.

Link:

A Citizen's Guide To Artificial Intelligence: A Nice Focus On The Societal Impact Of AI - Forbes


Proteins, artificial intelligence, and future of pandemic responses – Dailyuw

Posted: at 6:27 am

The Institute for Protein Design (IPD) at the UW announced March 31 a $5 million grant from Microsoft to collaborate on applying artificial intelligence to protein design.

Microsoft's chief scientific officer Eric Horvitz and the IPD's director David Baker, in an article with GeekWire, said they believe this collaboration will lead to major strides in medicine and technology and accelerate the scientific response to future pandemics.

The IPD designs proteins (molecules that carry out a wide range of functions, from defending against pathogens to harnessing energy during photosynthesis) from scratch, with the goal of making a whole new world of synthetic proteins to address modern challenges, according to the institute's website.

Researchers at the IPD have developed promising anti-viral and ultra-potent vaccine candidates against SARS-CoV-2, the virus that causes COVID-19, which are currently in human clinical trials.

And in protein design, form follows function.

"We use 3D protein structures on the computer to design the protein sequences," Brian Coventry, a research scientist in the Baker Lab at the IPD, said. "When we order the protein sequence, its function in real life should exactly mirror that on the computer."

But that does not always happen.

The problem with this method, which is based on the first principles of both physics and chemistry, is that it produces an abundance of possible proteins which must be tested, the majority of which do not have the exact desired form, Coventry said.

Coventry recently worked on a team that developed a SARS-CoV-2 antiviral medication candidate, and he stressed that for antivirals, it is important that the designed protein be precisely atomically correct.

In the context of a pandemic, the fast development of highly accurate therapeutic synthetic proteins is desirable. This is where deep learning, a subset of artificial intelligence modeled after the brain's neural networks, comes into play.

"There is a lot of room for improvement," Minkyung Baek, a postdoctoral scholar in the Baker Lab at the IPD, said about the first-principles-based method of protein design. Baek believes that deep learning methods can be used to quickly discriminate between possible proteins and optimize design to produce proteins that are more stable and bind more tightly to targets.

Deep learning models are given a training data set (in this case, experimental results of the structures of designed proteins) and can then learn from real-world data. They use that information to predict and design protein structures, Baek said.
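The workflow Baek describes (fit a model to known input-to-structure pairs, then predict for unseen inputs) can be sketched with a deliberately tiny stand-in. Everything below is invented for illustration: a single numeric "sequence feature" and "structure score" replace real protein data, and a one-line model fit by gradient descent replaces a deep neural network. It is not the IPD's actual pipeline.

```python
# Toy illustration of supervised learning: fit a model to known
# (feature -> structure score) training pairs, then predict the
# score for an input the model has never seen.

def train(data, lr=0.1, epochs=2000):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in data:
            err = (w * x + b) - y          # prediction error on one example
            grad_w += 2 * err * x / n      # accumulate MSE gradients
            grad_b += 2 * err / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, x):
    return w * x + b

# Invented training pairs: (sequence-derived feature, measured structure score)
training_set = [(0.1, 0.25), (0.4, 0.85), (0.7, 1.45), (0.9, 1.85)]
w, b = train(training_set)
print(round(predict(w, b, 0.5), 2))  # approaches 1.05 as training converges
```

A real structure-prediction network learns millions of parameters from thousands of experimentally solved structures, but the principle is the same: adjust parameters to shrink the gap between predictions and real-world data, then apply the trained model to new inputs.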

Microsoft has given the IPD access to their cloud computing service Azure, which will enable them to train and test deep learning models about 10 times faster, according to Baek.

Baek hopes that this will speed up the development of effective deep learning models, which will be helpful not only for designing proteins that match existing biological proteins, but also for discovering the structure of naturally occurring proteins.

There are many real-world situations where the structure of the target is not precisely known. In these situations, researchers must predict the shape of the metaphorical lock and design the key simultaneously.

Being able to better predict the structure of a protein when given its genetic code is important, with Baek using the variants of the COVID-19 virus as an example.

"Using our deep learning base, we can predict the protein structure of the variant, and starting from there we may get some clue [about] why that variant may have been more severe or easy to spread," Baek said.

But these deep learning models have some limitations. They are limited by the available training data set, are not always generalizable to multiple situations, and do not explain the reasoning behind their decisions, Coventry said.

Despite these factors, Coventry and Baek are both optimistic about the potential for deep learning to improve the protein design process.

"At the end of the day, I'd like to see a 100% success rate, you know," Coventry said. "Someday I'm sure it's possible."

Reach reporter Nuria Alina Chandra at news@dailyuw.com. Twitter: @AlinaChandra


Read more:

Proteins, artificial intelligence, and future of pandemic responses - Dailyuw

