Monthly Archives: February 2022

The sinister return of eugenics – The New Statesman

Posted: February 15, 2022 at 5:25 am

In July 1912, 800 delegates met at the Hotel Cecil on the Strand in London for the First International Eugenics Congress. Some of the foremost figures of the day, including the former and future British prime ministers Arthur Balfour and Winston Churchill, were there. The delegates represented a wide spectrum of opinion: not only right-wing racists but also liberals and socialists believed eugenic policies should be used to raise what they regarded as the low quality of sections of the population.

The Liberal founder of the welfare state, William Beveridge, wrote in 1906 that men who through general defects are unemployable should suffer complete and permanent loss of all citizen rights including not only the franchise but civil freedom and fatherhood. In Marriage and Morals (1929), Bertrand Russell, while criticising American states that had implemented involuntary sterilisation too broadly, defended enforcing it on people who were mentally defective. In 1931 an editorial in this magazine endorsed the legitimate claims of eugenics, stating they were opposed only by those who cling to individualistic views of parenthood and family economics.

The timing of the 1912 congress may be significant. In May 1912 a private member's Feeble-Minded Control Bill was presented to the House of Commons. The bill aimed to implement the findings of a royal commission, published in the British Medical Journal in 1908, which recommended that lunatics or persons of unsound mind, idiots, imbeciles, feeble-minded or otherwise should be afforded by the state such special protection as may be suited to their needs. The recommended measures included segregating hundreds of thousands of people in asylums and making marrying any of them a criminal offence. Curiously, the commission specified the number of people requiring this protection as being exactly 271,607.

The bill failed, partly as a result of intensive lobbying by the writer and Catholic apologist GK Chesterton of the Liberal MP Josiah Wedgwood. Despite continuing agitation by eugenicists, no law enabling involuntary sterilisation was ever passed in Britain. In 1913, however, parliament passed the Mental Deficiency Act, which meant a defective could be isolated in an institution under the authority of a Board of Control. The act remained in force until 1959.

Adam Rutherford, who reports these facts, writes that though wildly popular across political divides, "plenty of people vocally and publicly opposed the principles and the enactment of eugenics policies in the UK and abroad". This may be so, but very few of the active opponents of eugenics were progressive thinkers. During the high tide of eugenic ideas between the start of the 20th century and the 1930s, no leading secular intellectual produced anything comparable to Chesterton's Eugenics and Other Evils (1922), a powerful and witty polemic in which he argued for the worth of every human being.


By no means all Christians shared Chesterton's stance. As Rutherford points out, the dean of St Paul's Cathedral and professor of divinity at Cambridge, the Reverend WR Inge (1860-1954), wrote in favour of eugenic birth control, suggesting that the urban proletariat may cripple our civilisation, as it destroyed that of ancient Rome.

While Christians were divided on eugenics, progressive thinkers were at one in supporting it. The only prominent counter-example Rutherford cites is HG Wells, whom he calls a long-standing opponent of eugenics. Given the statements welcoming the extinction of non-white peoples in Wells's 1901 book Anticipations, this seems an oversimplified description.

Awkwardly for today's secular progressives, opposition to eugenics during its heyday in the West came almost exclusively from religious sources, particularly the Catholic Church. Eugenic ideas were disseminated everywhere, but few Catholic countries applied them. The only involuntary sterilisation legislation in Latin America was enacted in the state of Veracruz in Mexico in 1932. In Catholic Europe, Spain, Portugal and Italy passed no eugenic laws. By contrast, Norway and Sweden legalised eugenic sterilisation in 1934 and 1935, with Sweden requiring the consent of those sterilised only in 1976. In the US, more than 70,000 people were forcibly sterilised during the 20th century, with sterilisation without the inmates' consent being reported in female prisons in California up to 2014.

For the secular intelligentsia in the first three decades of the last century, eugenics (the deliberate crafting of a society by biological design, as Rutherford defines it) was a necessary part of any programme of human betterment. This was how eugenics was understood by the Victorian polymath Francis Galton (1822-1911), who invented the term, a conjunction of the Greek words for "good" and "offspring". Controlled breeding, aimed at raising the quality of the human beings who were born, was the path to the human good.

This was not a new idea. Selective mating was an integral part of the ugly utopia envisioned by Plato in The Republic. Galton's innovation was to link eugenics with the classification of human beings into racial categories, which developed in the 18th century as part of the Enlightenment. In his book Hereditary Genius (1869), he wrote: "The idea of investigating the subject of hereditary genius occurred to me during the course of a purely ethnological inquiry, into the mental peculiarities of different races."

Since the Second World War, the idea of progress has been spelled out in terms of greater personal autonomy and social equality. The occasion of this shift was the revelation of what eugenics entailed in Nazi Germany and the countries it occupied.

The discovery that six million European Jews were murdered in the Holocaust, along with hundreds of thousands of people with physical disabilities, mental illnesses or other characteristics such as simply being gay that supposedly made their lives unworthy of living, was a rupture in history. Ideas and policies that had been regarded by an entire generation of thinkers as guides to improving the species were seen to be moral abominations. Eugenics had enabled an unparalleled crime. An earlier generation's understanding of progress was not just revised. It was rejected, and something more like its opposite accepted.

This reversal should be unsettling for progressive thinkers today. How can they be sure that their current understanding will not also be found wanting? Rutherford, who shares much of the prevailing progressive consensus, seems untroubled by this possibility. As he notes on several occasions, he writes chiefly as a scientist. He has little background in moral philosophy, and at times this shows.


The strength of Rutherford's book is in his demonstration that eugenicists pursue an illusion of control. Edwardian and Nazi schemes for weeding out the human attributes they judged undesirable were unworkable. Even eye pigmentation is complex and not fully understood. A primitive model of monogenetic determinism lies behind the current revival in eugenic ideas. Advances in gene editing are welcomed by some and greeted with horror by others for making possible the manufacture of designer babies. There has been loose talk of increasing the IQ of future generations, but there is nothing in current knowledge that suggests such a policy to be practicable.

"Eugenics is a busted flush," Rutherford writes, a pseudoscience that cannot deliver on its promise. His scientific demolition of the eugenic project is brilliantly illuminating and compelling. His book will be indispensable for anyone who wants to assess the wild claims and counter-claims surrounding new genetic technologies. It is less successful as a study of the profound ethical questions they open up.

The principal purpose of eugenics in the 19th and early-20th centuries was to legitimise European colonial power, but eugenic ideology always had other functions. As Rutherford observes, the evils of Western societies were depicted as resulting from the inferiority of those they oppressed. Poverty was a consequence of stupidity and fecklessness, not a lack of education and opportunity. But it is the most radical ambition of eugenics, to re-engineer the human species, or privileged sections of it, that is likely to be most dangerous in future. Rather than exploring this threatening prospect, which has the backing of powerful tech corporations that are researching anti-ageing therapies and technological remedies for mortality, Rutherford focuses on soft targets: fringe figures and organisations attempting to revive discredited theories of scientific racism.

There is a direct line connecting early 20th-century eugenics with 21st-century transhumanism. The link is clearest in the eugenicist and scientific humanist Julian Huxley (1887-1975). In 1924 Huxley wrote a series of articles for the Spectator, in which he stated that "the negro mind is as different from the white mind as the negro from the white body". By the mid-Thirties, Huxley had decided that racial theories were pseudoscience and was a committed anti-fascist.

He had not abandoned eugenics. In a lecture entitled "Eugenics in an Evolutionary Perspective", delivered in 1962, Huxley reasserted the value of eugenic ideas and policies. Earlier, in 1951, in a lecture that appeared as a chapter in his book New Bottles for New Wine (1957), he had coined the term "transhumanism" to describe the idea of humanity attempting to overcome its limitations and to arrive at fuller fruition.

Huxley is a pivotal figure because he links eugenics with its successor ideology. Rutherford devotes only a sentence to him, noting that he advised his friend Wells on the 1932 film adaptation of The Island of Dr Moreau. But Huxley merits more extensive and deeper examination, for he illustrates a fundamental difficulty in both eugenics and transhumanism. Who decides what counts as a better kind of human being, and on what basis is the evaluation made?

Rutherford says little on foundational issues in ethics, and what he does say is muddled. He cites the US Declaration of Independence for its affirmation of the inalienable rights to life, liberty and the pursuit of happiness. Authorised by God and enshrined in natural law, these rights are asserted to be self-evident. Rightly, Rutherford dismisses this assertion: "They are of course fictions, noble lies." Yet Rutherford relies on something very like inalienable rights when he considers the moral dilemmas surrounding advances in genetics.

Discussing terminating a pregnancy in light of a pre-natal diagnosis, he writes that it is "an absolute personal choice" and should be "an unstigmatised right for women and parents". Like Rutherford, I believe women's choices should be paramount. But if rights are fictions, how can these choices be considered absolute entitlements? Different societies will configure these fictive rights in different ways. One that enforced a dominant conception of collective welfare might restrict abortion for some women and enforce it on others, as appears to be the case in China.


Rutherford goes on to contend that utilitarian arguments preclude the crimes of eugenics, such as killing people with disabilities. But a utilitarian calculus cannot give disabled people a right to life. In his book Practical Ethics (1979), the Australian utilitarian Peter Singer maintained that selective infanticide of severely disabled infants need not be morally wrong. Using the utilitarian metric, happiness could be maximised in a world without these human beings. Against utilitarian arguments of this kind, Rutherford writes: "If we truly wanted to reduce the sum total of human suffering then we should eradicate the powerful, for wars are fought by people but started by leaders."

This may be rhetorically appealing, but it is thoroughly confused. The suggestion that suffering could be minimised by eradicating the powerful is nonsense. As Rutherford must surely realise, the powerful are not a discrete human group that can be eliminated from society.

The fundamental ethical objection to eugenics is that it licenses some people to decide whether the lives of others are worth living. Part of an intellectual dynasty that included the Victorian uber-Darwinian TH Huxley and the novelist Aldous, Julian Huxley never doubted that an improved human species would match his own high-level brainpower. But not everyone thinks intellect is the most valuable human attribute. General de Gaulle's daughter Anne had Down's syndrome; the famously undemonstrative soldier and Resistance leader referred to her as "my joy", and when she died at the age of 20 he wept. The capacity to give and receive love may be more central to the good life than self-admiring cleverness.

This is where transhumanism comes in. It is not normally racist, and typically involves no collective coercion, only the voluntary actions of people seeking self-enhancement. But like eugenicists, transhumanists understand human betterment to be the production of superior people like themselves. True, the scientific knowledge and technology required to create these people are not yet available; but as Rutherford acknowledges, someday they may be.

The likely upshot of transhumanism in practice (a world divided between a rich, smart, beautified few whose lifespans can be indefinitely extended, and a mass of unlovely, disposable, dying deplorables) seems to me a vision of hell. But it may well be what is in store for us, if the current progressive consensus turns out to be as transient as the one that preceded it.

John Gray's most recent book is Feline Philosophy: Cats and the Meaning of Life (Penguin)

Control: The Dark History and Troubling Present of Eugenics by Adam Rutherford. Weidenfeld & Nicolson, 288pp, £12.99


This article appears in the 09 Feb 2022 issue of the New Statesman, Sunak's Game



WisDems: Nicholson and Kleefisch want to turn back the clock on health care – WisPolitics.com

Posted: at 5:25 am

MADISON, Wis. As Republican legislators this week continue to push an identical version of the unpopular and divisive Texas-style abortion ban, Kevin Nicholson and Rebecca Kleefisch have both doubled down on their promise to turn back the clock on health care in Wisconsin by slashing health care services, banning abortion care with no exceptions, and cutting cancer screenings for thousands of Wisconsinites.

Nicholson, who confirmed yesterday that he would sign the unpopular and divisive Texas-style bill if he somehow became governor, believes that no one should be able to access abortion care, with zero exceptions. During his last failed campaign, Nicholson received a perfect 100 percent rating from Pro-Life Wisconsin, which confirmed to the Associated Press that Nicholson promised to support all of their demands, including banning abortion in all cases, with no exceptions for rape, incest, or when a mother's life is in jeopardy.

Nicholson has long wanted to defund Planned Parenthood, which tens of thousands of Wisconsinites rely on for basic care like cancer screenings, testing, and access to contraception. He's even previously tweeted that Planned Parenthood is "not a healthcare provider; it's a eugenics shop."

As lieutenant governor, Kleefisch worked to pass five bills limiting critical reproductive health care access and even agreed that survivors of rape should "turn lemons into lemonade" instead of being empowered to make their own health care decisions.

"Kevin Nicholson and Rebecca Kleefisch want to insert themselves into some of the most personal decisions that Wisconsinites make," said Democratic Party of Wisconsin Communications Director Iris Riis. "Whether it's banning abortion care with no exceptions or cutting cancer screenings, the Republicans running for governor have staked out the most extreme and divisive positions on health care."

Learn more about the Republicans running for governor and their radical agendas at AntiChoiceKevin.com and RadicalRebecca.com.



NASA’s Hubble Space Telescope and its Asteroid Detection Ability Will Be Hampered by Starlink Gen2 – iTech Post

Posted: at 5:23 am

SpaceX has recently submitted an application to the Federal Communications Commission (FCC) to deploy 30,000 Starlink Gen2 satellites.

However, NASA sent a letter to the FCC encouraging the agency to require further study and careful deployment of these satellites.

NASA added that the Hubble Space Telescope could be affected by the deployment of this plethora of satellites, since eight percent of Hubble images are already impacted by satellites captured during exposures.

NASA noted that the license Starlink seeks approval for would place 10,000 satellites in or above the orbital range of Hubble.

In that case, the number of Hubble's degraded images would roughly double.
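The projection behind those two figures is simple back-of-envelope arithmetic. As a minimal sketch (the eight percent figure comes from the article; the assumption that affected exposures scale linearly with the satellite population above Hubble is mine):

```python
# Rough check of the doubling claim: ~8% of Hubble exposures currently
# contain satellite trails. If Starlink Gen2 roughly doubles the number
# of satellites at or above Hubble's orbit, a simple linear model
# doubles the affected fraction as well.
current_fraction = 0.08    # share of exposures with satellite trails (article's figure)
population_scaling = 2.0   # assumed: satellite count above Hubble doubles

projected_fraction = current_fraction * population_scaling
print(f"Projected affected exposures: {projected_fraction:.0%}")
# prints: Projected affected exposures: 16%
```

Linearity is the weakest assumption here: trail counts also depend on orbit altitude, exposure time, and pointing direction, so 16% is an order-of-magnitude estimate, not a prediction.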

NASA also estimates that a Starlink satellite would appear in every single asteroid image captured by the Hubble telescope.

The agency does not take this lightly, as it would make it harder to detect asteroids that could pose a threat to the planet, to satellites, and to NASA's space missions. It could also leave numerous images unusable.

As reported by Ars Technica, NASA wants "additional information including spacecraft and laser specifications including deployed dimensions, communications plan, ground segment expansion, orbital spacing, and deployment schedule."

"This will inform a thorough analysis of risks and impacts to NASA's missions and enable a mitigation strategy," the report adds.


The letter NASA sent to the FCC does not urge the agency to reject the application of SpaceX's Starlink Gen2. Rather, it pushes for meticulous oversight of the project to guarantee safe spaceflight in future missions and a long-term sustainable space environment.

It has been reported that NASA has formally expressed its concern that the significant increase in space satellites could cause collisions with crewed spacecraft missions.

Space traffic might further endanger space exploration due to a possible crowded orbit.

According to Space.com, due to the five-fold increase of satellites in space, NASA expressed its concern about whether or not SpaceX's automated collision avoidance system would be capable of handling an enormous amount of traffic.

Potential conjunctions between satellites and other spacecraft will likely affect both crewed and uncrewed missions, since there will be more objects in close proximity.

In response to this concern, SpaceX claims that there is zero risk of Starlink satellites colliding with other spacecraft in orbit.

As reported previously here on iTechPost, NASA told the FCC that "the assumption of zero risk from a system-level standpoint lacks statistical substantiation".

They added that with "the potential for multiple constellations with thousands and tens of thousands of spacecraft, it is not recommended to assume propulsion systems, ground detection systems, and software are 100% reliable, or that manual operations (if any) are 100% error-free."

PC Mag also reported that SpaceX's Starlink Gen2 satellites could launch as soon as next month. This leaves SpaceX hoping the FCC accepts its proposal to deploy 30,000 satellites.




The Top 10 Movies to Help You Envision Artificial Intelligence – Inc.

Posted: at 5:23 am

Artificial intelligence has been with us for decades -- just throw on a movie if you don't believe it.

Even though A.I. may feel like a newer phenomenon, the groundwork for these technologies is older than you'd think. The English mathematician Alan Turing, considered by some the father of modern computer science, started questioning machine intelligence in 1950. Those questions resulted in the Turing Test, which gauges a machine's capacity to give the impression of "thinking" like a human.

The concept of A.I. can feel nebulous, but it doesn't fall under just one umbrella. From smart assistants and robotics to self-driving cars, A.I. manifests in different forms, some clearer than others. Spoiler alert! Here are 10 movies in chronological order that can help you visualize A.I.:

1. Metropolis (1927)

German director Fritz Lang's classic Metropolis showcases one of the earliest depictions of A.I. in film, with the robot, Maria, transformed into the likeness of a woman. The movie takes place in an industrial city called Metropolis that is strikingly divided by class, where Robot Maria wreaks havoc across the city.

2. 2001: A Space Odyssey (1968)

Stanley Kubrick's 2001 is notable for its early depiction of A.I. and is yet another cautionary tale in which technology takes a turn for the worse. A handful of scientists are aboard a spacecraft headed to Jupiter where a supercomputer, HAL (IBM to the cynical), runs most of the spaceship's operations. After HAL makes a mistake and tries to attribute it to human error, the supercomputer fights back when those aboard the ship attempt to disconnect it.

3. Blade Runner (1982) and Blade Runner 2049 (2017)

The original Blade Runner (1982) featured Harrison Ford hunting down "replicants," or humanoids powered by A.I., which are almost indistinguishable from humans. In Blade Runner 2049 (2017), Ryan Gosling's character, Officer K, lives with an A.I. hologram, Joi. So at least we're getting along better with our bots.

4. The Terminator (1984)

The Terminator's plot focuses on a man-made artificial intelligence network referred to as Skynet -- despite Skynet being created for military purposes, the system ends up plotting to kill mankind. Arnold Schwarzenegger launched his acting career out of his role as the Terminator, a time-traveling cyborg killer that masquerades as a human. The film probes the question -- and consequences -- of what happens when robots start thinking for themselves.

5. The Matrix Series (1999-2021)

Keanu Reeves stars in this cult classic as Thomas Anderson/Neo, a computer programmer by day and hacker by night who uncovers the truth behind the simulation known as "the Matrix." The simulated reality is a product of artificially intelligent programs that enslaved the human race. Human beings are kept asleep in "pods," where they unwittingly participate in the simulated reality of the Matrix while their bodies are used to harvest energy.

6. I, Robot (2004)

This sci-fi flick starring Will Smith takes place in 2035, in a society where robots with human-like features serve humankind. An artificially intelligent supercomputer, dubbed VIKI (which stands for Virtual Interactive Kinetic Intelligence), is one to watch, especially once a programming bug goes awry. The defect in VIKI's programming leads the supercomputer to believe that the robots must take charge in order to protect mankind from itself.

7. WALL-E (2008)

Disney Pixar's WALL-E follows a robot of the same name whose main role is to compact garbage on a trash-ridden Earth. But after spending centuries alone, WALL-E evolves into a sentient piece of machinery who turns out to be very lonely. The movie takes place in 2805 and follows WALL-E and another robot, named Eve, whose job is to analyze whether a planet is habitable for humans.

8. Tron: Legacy (2010)

The Tron universe is filled to the brim with A.I., given that it takes place in a virtual world known as "the Grid." The movie's protagonist, Sam, finds himself accidentally uploaded to the Grid, where he embarks on an adventure that leads him face-to-face with algorithms and computer programs. The Grid is protected by programs such as Tron, but corrupt A.I. programs surface throughout the virtual network as well.

9. Her (2013)

Joaquin Phoenix plays Theodore Twombly, a professional letter writer going through a divorce. To help himself cope, Theodore picks up a new operating system with advanced A.I. features. He selects a female voice for the OS, naming the device Samantha (voiced by Scarlett Johansson), but it proves to have smart capabilities of its own. Or is it, her own? Theodore spends a lot of time talking with Samantha, eventually falling in love. The film traces their budding relationship and confronts the notion of sentience and A.I.

10. Ex Machina (2014)

After winning a contest at his workplace, programmer Caleb Smith meets his company's CEO, Nathan Bateman. Nathan reveals to Caleb that he's created a robot with artificial intelligence capabilities. Caleb's task? Assess if the feminine humanoid robot, Ava, is able to show signs of intelligent human-like behavior: in other words, pass the Turing Test. Ava has a human-like face and physique, but her "limbs" are composed of metal and electrical wiring. It's later revealed that other characters aren't exactly human, either.



Tying Artificial intelligence and web scraping together [Q&A] – BetaNews

Posted: at 5:23 am

Artificial intelligence (AI) and machine learning (ML) seem to have piqued the interest of automated data collection providers. While web scraping has been around for some time, AI/ML implementations have appeared in the line of sight of providers only recently.

Aleksandras Šulenko, Product Owner at Oxylabs.io, who has been working with these solutions for several years, shares his insights on the importance of artificial intelligence, machine learning, and web scraping.

BN: How has the implementation of AI/ML solutions changed the way you approach development?

AS: AI/ML has an interesting work-payoff ratio. Good models can sometimes take months to write and develop. Until then, you don't really have anything. Dedicated scrapers or parsers, on the other hand, can take up to a day or two. When you have an ML model, however, maintaining it takes a lot less time for the amount of work it covers.

So, there's always a choice. You can build dedicated scrapers and parsers, which will take significant amounts of time and effort to maintain once they start stacking up. The other choice is to have "nothing" for a significant amount of time, but a brilliant solution later on, which will save you tons of time and effort.

There's some theoretical point where developing custom solutions is no longer worth it. Unfortunately, there's no mathematical formula to arrive at the correct answer. You have to make a decision when all the repetitive tasks are just too much of a drain on resources.
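The "dedicated parser" side of that trade-off can be sketched in a few lines. This is an illustrative example only, not Oxylabs code: the sample HTML and the class names ("product-title", "product-price") are invented, and a real site's markup would differ. The point is that the parser hard-codes one site's structure, which is what makes it quick to write and tedious to maintain:

```python
from html.parser import HTMLParser

# A minimal dedicated parser: it only knows one (hypothetical) site's
# markup. If the target site renames its CSS classes, this breaks.
class ProductParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self._field = None  # which field the next text node belongs to
        self.data = {}

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class") or ""
        if "product-title" in classes:
            self._field = "title"
        elif "product-price" in classes:
            self._field = "price"

    def handle_data(self, text):
        if self._field:
            self.data[self._field] = text.strip()
            self._field = None

sample = ('<div><h1 class="product-title">Blue Widget</h1>'
          '<span class="product-price">$19.99</span></div>')
parser = ProductParser()
parser.feed(sample)
print(parser.data)  # {'title': 'Blue Widget', 'price': '$19.99'}
```

Note the failure mode: if the site renames "product-title", this parser returns an empty dict with no error. That silent breakage, multiplied across many target sites, is the maintenance burden described above, and it is what an ML model that generalizes across page layouts is meant to avoid.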

BN: Have these solutions had a visible impact on the deliverability and overall viability of the project?

AS: Getting started with machine learning is tough, though. It's still, comparatively speaking, a niche specialization. In other words, you won't find many developers that dabble in ML, and knowing how hard it can be to find one for any discipline, it's definitely a tough river to cross.

Yet, if the business approach to scraping is based on a long-term vision, ML will definitely come in handy sometime down the road. Every good vision has scaling in it and with scaling comes repetitive tasks. These are best handled with machine learning.

Our Adaptive Parser, an achievement we're proud of, is a great example. It was once almost unthinkable that a machine learning model could be of such high benefit. Now the solution can deliver parsed results from a multitude of e-commerce product pages, irrespective of the differences between them or any changes that happen over time. Such a solution is completely irreplaceable.

BN: In a previous interview, you've mentioned the importance of making web scraping solutions more user-friendly. Is there any particular reason you would recommend moving development towards no-code implementations?

AS: Even companies that have large IT departments may have issues with integration. Developers are almost always busy, and taking time out of their schedules for integration purposes is tough. Most end-users of the data, after all, aren't tech-savvy.

Additionally, the departments that need scraping the most, such as marketing and data analytics, might not have enough sway in deciding the roadmaps of developers. As such, even relatively small hurdles can become impactful enough. Scrapers should now be developed with a non-tech user in mind.

There should be plenty of visuals that allow for a simplified construction of workflows, with a dashboard that's used to deliver information clearly. Scraping is becoming something done by everyone.

BN: What do you think lies in the future of scraping? Will websites become increasingly protective of their data, or will they eventually forego most anti-scraping sentiment?

AS: There are two answers I can give. One is "more of the same". Surely a boring one, but it's inevitable. Delving deeper into the scaling and proliferation of web scraping isn't as fun as the next question -- the legal context.

Currently, it seems as if our position in the industry isn't perfectly settled. Case law forms the basis of how we think about and approach web scraping. Yet it all might change on a whim. We're closely monitoring developments due to the inherent fragility of the situation.

There's a possibility that companies will realize the value of their data and start selling it on third-party marketplaces. It would reduce the value of web scraping as a whole, as you could simply acquire what you need for a small price. Most businesses, after all, need the data and the insights, not web scraping. It's a means to an end.

There's a lot of potential in the grand vision of Web 3.0 -- the initiative to make the whole Web interconnected and machine-readable. If this vision came to life, the whole data gathering landscape would be vastly transformed: the Web would become much easier to explore and organize, parsing would become a thing of the past, and webmasters would get used to the idea of their data being consumed by non-human actors.

Finally, I think user-friendliness will be the focus in the future, and I don't mean just the no-code part of scraping. A large part of getting data is exploration: finding where and how it's stored and getting to it. Customers will often formulate an abstract request, and developers will follow up with methods to acquire what is needed.

In the future, I expect, the exploration phase will be much simpler. Maybe we'll be able to take those abstract requests and turn them into something actionable through an interface. In the end, web scraping is breaking out of its shell of being something code-ridden and hard to understand, and evolving into a daily activity for everyone.


See more here:

Tying Artificial intelligence and web scraping together [Q&A] - BetaNews


Inside the EU’s rocky path to regulate artificial intelligence – International Association of Privacy Professionals

Posted: at 5:23 am

In April last year, the European Commission published its ambitious proposal to regulate artificial intelligence. The regulation was meant to be the first of its kind, but progress has been slow so far due to the file's technical, political and juridical complexity.

Meanwhile, the EU lost its first-mover advantage as other jurisdictions like China and Brazil have managed to pass their legislation first. As the proposal is entering a crucial year, it is high time to take stock of the state of play, the ongoing policy discussions, notably around data, and potential implications for businesses.

For the European Parliament, delays have been mainly due to more than six months of political disputes between lawmakers over who was to take the lead in the file. The result was a co-lead between the centrists and the center-left, sidelining the conservative European People's Party.

Members of European Parliament are now trying to make up for lost time. The first draft of the report is planned for April, with discussions on amendments throughout the summer. The intention is to reach a compromise by September and hold the final vote in November.

The timeline seems particularly ambitious since co-leads involve double the number of people, inevitably slowing down the process. The question will be to what extent the co-rapporteurs will remain aligned on the critical political issues as the center-right will try to lure the liberals into more business-friendly rules.

Meanwhile, the EU Council has made some progress on the file, though that progress has been limited by its highly technical nature. It is telling that even national governments, which have significantly more resources than MEPs, struggle to understand the new rules' full implications.

Slovenia, which led the diplomatic talks for the second half of 2021, aimed to develop a compromise for 15 articles, but only covered the first seven. With the beginning of the French presidency in January, the file is expected to move faster as Paris aims to provide a full compromise by April.

As the policy discussions made some progress in the EU Council, several sticking points emerged. The very definition of AI systems is problematic, as European governments distinguish them from traditional software programs or statistical methods.

The diplomats also added a new category for "general purpose" AI, such as synthetic data packages or language models. However, there is still no clear understanding of whether the responsibility should be attributed upstream, to the producer, or downstream, to the provider.

The use of real-time biometric recognition systems has largely monopolized the public debate, as the commission's proposal stops short of a total ban, carving out some crucial exceptions, notably terrorist attacks and kidnappings. In October, lawmakers adopted a resolution pushing for a complete ban, echoing the argument made by civil society that these exceptions create a dangerous slippery slope.

By contrast, facial recognition technologies are increasingly common in Europe. A majority of member states wants to keep or even expand the exceptions to border control, with Germany so far relatively isolated in calling for a total ban.

"The European Commission did propose a set of criteria for updating the list of high-risk applications. However, it did not provide a justification for the existing list, which might mean that any update might be extremely difficult to justify," Lilian Edwards, a professor at Newcastle University, said.

Put differently, since the reasoning behind the lists of prohibited or high-risk AI uses is largely value-based, they are likely to remain heatedly debated points through the whole legislative process.

For instance, the Future of Life Institute has been arguing for a broader definition of manipulation, which might profoundly impact the advertising sector and the way online platforms currently operate.

A dividing line that is likely to emerge systematically in the debate is the tension between the innovation needs of the industry, as some member states already stressed, and ensuring consumer protection in the broadest sense, including the use of personal data.

This underlying tension is best illustrated by the ongoing discussions on the report of the parliamentary committee on Artificial Intelligence in a Digital Age, which are progressing in parallel to the AI Act.

In his initial draft, conservative MEP Axel Voss attacked the General Data Protection Regulation, presenting AI as part of a technological race in which Europe risks becoming China's "economic colony" if it does not relax its privacy rules.

The report faced backlash from center-left policymakers, who saw it as an attempt to water down the EU's hard-fought data protection law. For progressive MEPs, data-hungry algorithms fed with vast amounts of personal data might not be desirable, and they drew a parallel with their activism in trying to curb personalized advertising.

"Which algorithms do we train with vast amounts of personal data? Likely those that automatically classify, profile or identify people based on their personal details often with huge consequences and risks of discrimination or even manipulation. Do we really want to be using those, let alone 'leading' their development?" MEP Kim van Sparrentak said.

However, the need to find a balance with data protection has also been underlined by Bojana Bellamy, president of the Centre for Information Policy Leadership, who notes how some fundamental principles of the GDPR would be in contradiction with the AI regulation.

In particular, a core principle of the GDPR is data minimization: only the personal data strictly needed for completing a specific task should be processed, and it should not be retained for longer than necessary. Conversely, the more data AI-powered tools receive, the more robust and accurate they become, leading (at least in theory) to fairer and less biased outcomes.

For Bellamy, this tension is due to the lack of a holistic strategy in the EU's hectic digital agenda, and she argues that policymakers should follow a more result-oriented approach to what they are trying to achieve. These contradicting notions might fall on industry practitioners, who might be asked to deliver a fair and unbiased system while also minimizing the amount of personal data collected.

The draft AI law includes a series of obligations for system providers, namely the organizations that make AI applications available on the market or put them into service. These obligations will need to be operationalized: what it means to have a "fair" system, how far "transparency" should go and how "robustness" is defined.

In other words, providers will have to put a system in place to manage risks and ensure compliance with support from their suppliers. For instance, a supplier of training data would need to detail how the data was selected and obtained, how it was categorized and the methodology used to ensure representativeness.

In this regard, the AI Act explicitly refers to harmonized standards that industry practitioners must develop to exchange information to make the process cost-efficient. For example, the Global Digital Foundation, a digital policy network, is already working on an industry coalition to create a relevant framework and toolset to share information consistently across the value chain.

In this context, European businesses fear that if the EU's privacy rules are not effectively incorporated in the international standards, they could be put at a competitive disadvantage. The European Tech Alliance, a coalition of EU-born heavyweights such as Spotify and Zalando, voiced concerns that the initial proposal did not include an assessment of training datasets collected in third countries, which might have been gathered via practices at odds with the GDPR.

Adopting industry standards creates a presumption of conformity, minimizing the risk and costs of compliance. These incentives are so strong that harmonized standards tend to become universally adopted by industry practitioners, as the cost of departing from them becomes prohibitive. Academics have defined standardization as the "real rulemaking" of the AI regulation.

"The regulatory approach of the AI Act, i.e. standards compliance, is not a guarantee of low barriers for the SMEs. On the contrary, standards compliance is often perceived by SMEs as a costly exercise due to expensive conformity assessment that needs to be carried out by third parties," Sebastiano Toffaletti, secretary-general of the European DIGITAL SME Alliance, said.

By contrast, European businesses that are not strictly "digital" but that could embed AI-powered tools into their daily operations see the AI Act as a way to bring legal clarity and ensure consumer trust.

"The key question is to understand how can we build a sense of trust as a business and how can we translate it to our customers," Nozha Boujemaa, global vice president for digital ethics and responsible AI at IKEA, said.


View original post here:

Inside the EU's rocky path to regulate artificial intelligence - International Association of Privacy Professionals


Learning to improve chemical reactions with artificial intelligence – EurekAlert

Posted: at 5:23 am

[Image: INL researchers perform experiments using the Temporal Analysis of Products (TAP) reactor system. Credit: Idaho National Laboratory]

If you follow the directions in a cake recipe, you expect to end up with a nice fluffy cake. In Idaho Falls, though, the elevation can affect these results. When baked goods don't turn out as expected, the troubleshooting begins. This happens in chemistry, too. Chemists must be able to account for how subtle changes or additions may affect the outcome, for better or worse.

Chemists make their version of recipes, known as reactions, to create specific materials. These materials are essential ingredients in an array of products found in healthcare, farming, vehicles and other everyday products from diapers to diesel. When chemists develop new materials, they rely on information from previous experiments and predictions based on prior knowledge of how different starting materials interact with others and behave under specific conditions. There are a lot of assumptions, guesswork and experimentation in designing reactions using traditional methods. New computational methods like machine learning can help scientists better understand complex processes like chemical reactions. While it can be challenging for humans to pick out patterns hidden within the data from many different experiments, computers excel at this task.

Machine learning is an advanced computational tool where programmers give computers lots of data and minimal instructions about how to interpret it. Instead of incorporating human bias into the analysis, the computer is only instructed to pull out what it finds to be important from the data. This could be an image of a cat (if the input is all the photos on the internet) or information about how a chemical reaction proceeds through a series of steps, as is the case for a set of machine learning experiments that are ongoing at Idaho National Laboratory.

At the lab, researchers working with the innovative Temporal Analysis of Products (TAP) reactor system are trying to improve understanding of chemical reactions by studying the role of catalysts, which are components that can be added to a mixture of chemicals to alter the reaction process. Often catalysts speed up the reaction, but they can do other things, too. In baking and brewing, enzymes act as catalysts to speed up fermentation and break down sugars in wheat (glucose) into alcohol and carbon dioxide, which creates the bubbles that make bread rise and beer foam.

In the laboratory, perfecting a new catalyst can be expensive, time-consuming and even dangerous. According to INL researcher Ross Kunz, "Understanding how and why a specific catalyst behaves in a reaction is the holy grail of reaction chemistry." To help find it, scientists are combining machine learning with a wealth of new sensor data from the TAP reactor system.

The TAP reactor system uses an array of microsensors to examine the different components of a reaction in real time. For the simplest catalytic reaction, the system captures 33 unique measurements in each of the 5,000 timepoints that make up the experiment. Assembling the timepoints into a single data set provides 165,000 measurements for one experiment on a very simple catalyst. Scientists then use the data to predict what is happening in the reaction at a specific time and how different reaction steps work together in a larger chemical reaction network. Traditional analysis methods can barely scratch the surface of such a large quantity of data for a simple catalyst, let alone the many more measurements that are produced by a complex one.
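As a rough, hypothetical sketch of what "assembling the timepoints into a single data set" can look like: the counts below are chosen so that 5,000 timepoints yield the 165,000 values quoted above, and the readings are random stand-ins rather than real TAP sensor data.

```python
import random

# Hypothetical TAP-style experiment: 5,000 timepoints, each carrying
# 33 distinct measurements (33 x 5,000 = 165,000 values in total).
N_TIMEPOINTS = 5000
N_MEASUREMENTS = 33

random.seed(0)

# One row per individual measurement: (timepoint, measurement index, value).
# The Gaussian values are placeholders for real microsensor readings.
dataset = [
    (t, m, random.gauss(0.0, 1.0))
    for t in range(N_TIMEPOINTS)
    for m in range(N_MEASUREMENTS)
]

print(len(dataset))  # 165000 measurements for one experiment
```

A flat table like this, one row per measurement, is the kind of structure downstream analysis or model training can consume directly.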

Machine learning methods can take the TAP data analysis further. Using a type of machine learning called explainable artificial intelligence, or AI, the team can educate the computer about known properties of the reaction's starting materials and the physics that govern these types of reactions, a process called training. The computer can apply this training, and the patterns that it detects in the experimental data, to better describe the conditions in a reaction across time. The team hopes that the explainable AI method will produce a description of the reaction that can be used to accurately model the processes that occur during the TAP experiment.
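A minimal illustration of what "training with known physics" can mean, using a toy first-order reaction rather than anything from the INL study: the model is constrained to the known rate law C(t) = C0 * exp(-k * t), so fitting reduces to finding the rate constant k that best explains the (synthetic) data.

```python
import math

# Toy physics-constrained fit, not INL's actual method: recover a
# first-order rate constant k from synthetic concentration data by
# forcing the model to obey C(t) = C0 * exp(-k * t).
C0 = 1.0
K_TRUE = 0.7
times = [i * 0.025 for i in range(200)]

# Synthetic "observations": the true curve plus a small deterministic wobble.
observed = [C0 * math.exp(-K_TRUE * t) + 0.001 * math.sin(31.0 * t) for t in times]

def loss(k):
    # Mean squared error between the physics model and the observations.
    return sum((C0 * math.exp(-k * t) - c) ** 2 for t, c in zip(times, observed)) / len(times)

# A simple grid search stands in for a real training loop.
candidates = [0.1 + 0.001 * i for i in range(1900)]
k_est = min(candidates, key=loss)
print(round(k_est, 2))  # recovers a value close to K_TRUE
```

Because the candidate curves all obey the assumed rate law, the fitted parameter has a physical meaning a scientist can inspect, which is the essence of the explainability argument above.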

In most AI experiments, a computer is given almost no training on the physics and simply detects patterns in the data based upon what it can identify, similar to how a baby might react to seeing something completely new. By contrast, the value of explainable AI lies in the fact that humans can understand the assumptions and information that lead to the computer's conclusions. This human-level understanding can make it easier for scientists to verify predictions and detect flaws and biases in the reaction description produced by explainable AI.

Implementing explainable AI is not as simple or straightforward as it might sound. With support from the Department of Energy's Advanced Manufacturing Office, the INL team has spent two years preparing the TAP data for machine learning, developing and implementing the machine learning program, and validating the results for a common catalyst in a simple reaction that occurs in the car you drive every day. This reaction, the transformation of carbon monoxide into carbon dioxide, occurs in a car's catalytic converter and relies on platinum as the catalyst. Since this reaction is well studied, researchers can check how well the results of the explainable AI experiments match known observations.

In April 2021, the INL team published their results validating the explainable AI method with the platinum catalyst in the article "Data driven reaction mechanism estimation via transient kinetics and machine learning" in Chemical Engineering Journal. Now that the team has validated the approach, they are examining TAP data from more complex industrial catalysts used in the manufacture of small molecules like ethylene, propylene and ammonia. They are also working with collaborators at the Georgia Institute of Technology to apply the mathematical models that result from the machine learning experiments to computer simulations called digital twins. This type of simulation allows the scientists to predict what will happen if they change an aspect of the reaction. When a digital twin is based on a very accurate model of a reaction, researchers can be confident in its predictions.

By giving the digital twin the task of simulating a modification to a reaction or a new type of catalyst, researchers can avoid doing physical experiments for modifications that are likely to lead to poor results or unsafe conditions. Instead, the digital twin simulation can save time and money by testing thousands of conditions, while researchers test only a handful of the most promising conditions in the physical laboratory.

Plus, this machine learning approach can produce newer and more accurate models for each new catalyst and reaction condition tested with the TAP reactor system. In turn, applying these models to digital twin simulations gives researchers the predictive power to pick the best catalysts and conditions to test next in the TAP reactor. As a result, each round of testing, model development and simulation produces a greater understanding of how a reaction works and how to improve it.

"These tools are not only the foundation of a new paradigm in catalyst science but also pave the way for radical new approaches in chemical manufacturing," said Rebecca Fushimi, who leads the project team.

About Idaho National Laboratory

Battelle Energy Alliance manages INL for the U.S. Department of Energy's Office of Nuclear Energy. INL is the nation's center for nuclear energy research and development, and it also performs research in each of DOE's strategic goal areas: energy, national security, science and the environment. For more information, visit www.inl.gov. Follow us on social media: Twitter, Facebook, Instagram and LinkedIn.

Chemical Engineering Journal

Data driven reaction mechanism estimation via transient kinetics and machine learning

18-Apr-2021

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

More here:

Learning to improve chemical reactions with artificial intelligence - EurekAlert


Life and health insurers to use advanced artificial intelligence to reduce benefits fraud – Canada NewsWire

Posted: at 5:23 am

TORONTO, Feb. 14, 2022 /CNW/ - The Canadian Life and Health Insurance Association (CLHIA) is pleased to announce the launch of an industry initiative to pool claims data and use advanced artificial intelligence tools to enhance the detection and investigation of benefits fraud.

Every insurer in Canada has its own internal analytics to detect fraud within its book of business. This new initiative, led by the CLHIA and its technology provider Shift Technology, will deploy advanced AI to analyze industry-wide anonymized claims data. By identifying patterns across millions of records, the program will enhance the effectiveness of benefits fraud investigations across the industry.
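To make the idea concrete, here is a deliberately simplified sketch of why pooling matters: a claim pattern that looks unremarkable inside one insurer's book can stand out once books are combined. Every record, provider name and threshold below is invented, and real systems use far more sophisticated models than a z-score.

```python
from collections import defaultdict
from statistics import mean, stdev

# Invented toy records: (insurer, provider_id, claim amount).
claims = [
    ("A", "clinic-1", 120), ("A", "clinic-1", 130), ("A", "clinic-2", 115),
    ("B", "clinic-1", 125), ("B", "clinic-3", 110), ("B", "clinic-1", 135),
    ("C", "clinic-1", 900),  # outlier only visible against the pooled history
    ("C", "clinic-4", 120),
]

# Pool anonymized claims by provider across all insurers.
pooled = defaultdict(list)
for _insurer, provider, amount in claims:
    pooled[provider].append(amount)

# Flag claims that deviate strongly from the provider's pooled history.
flagged = []
for provider, amounts in pooled.items():
    if len(amounts) < 3:  # too little history to judge
        continue
    mu, sigma = mean(amounts), stdev(amounts)
    for amount in amounts:
        if sigma > 0 and (amount - mu) / sigma > 1.5:
            flagged.append((provider, amount))

print(flagged)  # the $900 claim stands out only in the pooled view
```

No single insurer here sees enough of clinic-1's claims to establish a baseline; the pooled history is what makes the outlier visible, which is the stated rationale for the industry-wide program.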

We expect that the initiative will expand in scope over the coming years to include even more industry data.

"Fraudsters are taking increasingly sophisticated steps to avoid detection," said Stephen Frank, CLHIA's President and CEO. "This technology will give insurers the edge they need to identify patterns and connect the dots across a huge pool of claims data over time, leading to more investigations and prosecutions."

"The capability for individual insurers to identify potential fraud has already proven incredibly beneficial," explained Jeremy Jawish, CEO and co-founder of Shift Technology. "Through the work Shift Technology is doing with the CLHIA, we are expanding that benefit across all member organizations, and providing a valuable fraud fighting solution to the industry at large."

Insurers paid out nearly $27 billion in supplementary health claims in 2020. Employers and insurers lose what is estimated to be millions of dollars each year to fraudulent group health benefits claims. The costs of fraud are felt by insurers, employers and employees and put the sustainability of group benefits plans at risk.

About CLHIA

The CLHIA is a voluntary association whose member companies account for 99 per cent of Canada's life and health insurance business. These insurers provide financial security products including life insurance, annuities (including RRSPs, RRIFs and pensions) and supplementary health insurance to over 29 million Canadians. They hold over $1 trillion in assets in Canada and employ more than 158,000 Canadians. For more information, visit http://www.clhia.ca.

About Shift Technology

Shift Technology delivers the only AI-native decision automation and optimization solutions built specifically for the global insurance industry. Addressing several critical processes across the insurance policy lifecycle, the Shift Insurance Suite helps insurers achieve faster, more accurate claims and policy resolutions. Shift has analyzed billions of insurance transactions to date and was presented with Frost & Sullivan's 2020 Global Claims Solutions for Insurance Market Leadership Award. For more information, visit http://www.shift-technology.com.

SOURCE Canadian Life and Health Insurance Association Inc.

For further information: Kevin Dorse, Assistant Vice President, Strategic Communications and Public Affairs, CLHIA, (613) 691-6001, [emailprotected]; Rob Morton, Corporate Communications, Shift Technology, 617-416-9216, [emailprotected]

See the original post here:

Life and health insurers to use advanced artificial intelligence to reduce benefits fraud - Canada NewsWire


Toronto tech institute tracking long COVID with artificial intelligence, social media – The Globe and Mail

Posted: at 5:23 am

The Vector Institute has teamed up with Telus Corp., Deloitte and Roche Canada to help health care professionals learn more about the symptoms of long COVID.Nathan Denette/The Canadian Press

A Toronto tech institute is using artificial intelligence and social media to track and determine which long-COVID symptoms are most prevalent.

The Vector Institute, an artificial intelligence organization based at the MaRS tech hub in Toronto, has teamed up with telecommunications company Telus Corp., consulting firm Deloitte and diagnostics and pharmaceuticals business Roche Canada to help health care professionals learn more about the symptoms that people with a long-lasting form of COVID experience.

They built an artificial intelligence framework that used machine learning to locate and process 460,000 Twitter posts from people with long COVID, defined by the Canadian government as people who show symptoms of COVID-19 for weeks or months after their initial recovery.


The framework parsed the tweets to determine which were first-person accounts of long COVID and then tallied up the symptoms described. It found that fatigue, pain, brain fog, anxiety and headaches were the most common symptoms and that many people with long COVID experienced several symptoms at once.
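The tallying step can be pictured as follows. The posts and symptom lexicon here are invented, and the actual framework relied on trained machine-learning classifiers rather than simple keyword matching.

```python
from collections import Counter

# Invented symptom lexicon and first-person posts, for illustration only.
symptoms = ["fatigue", "pain", "brain fog", "anxiety", "headache"]

posts = [
    "month three of long covid and the fatigue and brain fog are relentless",
    "my headache never fully goes away, plus constant fatigue",
    "dealing with joint pain and anxiety since my infection",
]

# Tally how many posts mention each symptom.
tally = Counter()
for post in posts:
    for symptom in symptoms:
        if symptom in post:
            tally[symptom] += 1

print(tally.most_common())  # fatigue leads in this tiny sample
```

Scaled to hundreds of thousands of posts, and with a classifier deciding which posts are genuine first-person accounts, the same counting logic yields the symptom-frequency results described above.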

Replicating that research without AI would have required an enormous number of staff hours: people would have had to manually locate hundreds of thousands of social media posts, sift out those that were not first-person accounts or not about long COVID, and count the symptoms.

"AI is very good at taking large sets of large amounts of data to find patterns," said Cameron Schuler, Vector's chief commercialization officer and vice-president of industry innovation. "It's for stuff that is way too big for any human to actually be able to hold this in their brain."

The framework speeds up the research process around a virus that is quickly evolving and still associated with so many unknowns.

So far, long COVID isn't well understood. There's no uniform way to diagnose it, nor a single treatment to ease or cure it. Information is key to giving patients better outcomes and ensuring hospitals aren't overwhelmed in the coming years.

A survey conducted in May, 2021, of 1,048 Canadians with long COVID, also known as post-COVID syndrome, found more than 100 symptoms or difficulties with everyday activities.


About 80 per cent of adults surveyed by Viral Neuro Exploration, COVID Long Haulers Support Group Canada and Neurological Health Charities Canada reported one or more symptoms between four and 12 weeks after they were first infected.

Sixty per cent reported one or more symptoms in the long term. For about 10 per cent, the symptoms were so severe that they were unable to return to work.

Researchers and those behind the technology are hopeful it will quickly contribute to the world's fight against long COVID, but are already imagining ways they can advance the framework even further or apply it to other situations.

"This is a novel kind of tool," said Dr. Angela Cheung, a senior physician scientist at the University Health Network, who is running two large studies on long COVID.

"I'm not aware of anyone else having done this, and so I think it really may be quite useful going forward in health research."

Researchers say preliminary uses of the framework show it can help uncover patterns related to symptom frequencies, co-occurrence and distribution over time.

It could also be applied to other health events, such as emerging infections, rare diseases or the effects of booster shots on infection.


This content appears as provided to The Globe by the originating wire service. It has not been edited by Globe staff.

Follow this link:

Toronto tech institute tracking long COVID with artificial intelligence, social media - The Globe and Mail


AION Labs, Powered by BioMed X, Launches Third Global Call for Application: Artificial Intelligence for Design and Optimization of Antibodies for…

Posted: at 5:23 am

REHOVOT, Israel and HEIDELBERG, Germany, Feb. 13, 2022 /PRNewswire/ -- AION Labs, a first-of-its-kind innovation lab spearheading the adoption of AI technologies and computational science to solve therapeutic challenges, and German independent research institute BioMed X announced today the launch of the third global call for application, which aims to identify biomedical scientists and inventors to form a new startup at AION Labs' headquarters in Rehovot, Israel.

The chosen AION Labs startup team will be sponsored by several industry-leading partners and supported by the Israel Innovation Authority (IIA) and Digital Israel office. The sponsors of this call for application are AstraZeneca, Israel Biotech Fund, Merck, Pfizer and Teva Pharmaceuticals, with close support from Amazon Web Services (AWS).

Antibody treatments continue to be the standard of care for several disease areas and have emerged as cornerstone therapies during the current pandemic. However, despite being primary treatment modalities for over two decades, the cycle times for the discovery and optimization of therapeutic antibodies can still span several years. In order to achieve developable antibody therapeutics exhibiting target-specific binding, stability and scalability, several biophysical parameters need to be streamlined. The use of artificial intelligence (AI) has the potential to broaden the explored sequence space, accelerate the selection of fully optimized antibodies, and shorten overall lead discovery times by successfully predicting relevant parameters.

AION Labs is inviting computational biologists, bioinformatics and cheminformatics scientists, AI researchers, and antibody or protein engineers at academic and industry research labs worldwide to propose the development of a next-generation computational platform to optimize antibodies for targeted therapies with enhanced properties, including developability or manufacturability, stability, aggregation, immunogenicity, pharmacokinetics and tissue distribution. The ultimate solution is an AI platform that receives sequences of binders and generates novel variants with optimized IgG sequences and improved biophysical and targeting properties. The goal of the AI algorithm is to make an existing antibody a better drug while reducing design iterations, shortening optimization cycle times and lowering attrition rates. The AION Labs pharma partners involved in this project will provide a wealth of data for model training, as well as their expertise in setting specifications and evaluating the outcome. Original ideas that go far beyond the current state of the art are encouraged.
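A toy generate-and-score loop of the kind such a platform might wrap, with everything invented for illustration: the seed fragment is not a real antibody sequence, and the scoring function stands in for learned models of developability, stability and the other properties listed above.

```python
# Toy generate-and-score loop; sequence, alphabet and objective are all
# invented stand-ins for learned antibody property predictors.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
SEED = "QVQLVQSG"  # illustrative fragment, not a real antibody sequence

def score(sequence):
    # Stand-in objective: similarity to a hypothetical "optimized" target.
    target = "QVQLVESG"
    return sum(a == b for a, b in zip(sequence, target))

def point_mutants(sequence):
    # Enumerate every single-residue substitution of the input sequence.
    for i, residue in enumerate(sequence):
        for aa in AMINO_ACIDS:
            if aa != residue:
                yield sequence[:i] + aa + sequence[i + 1:]

# Keep the variant the objective ranks highest.
best = max(point_mutants(SEED), key=score)
print(best, score(best))
```

A real platform would replace both pieces: a generative model proposing variants instead of exhaustive point mutation, and trained predictors of the biophysical parameters instead of a toy similarity score, but the propose-then-score structure is the same.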

"AION Labs is eager to tackle yet another pharmaceutical R&D challenge," said Dr. Yair Benita, CTO of AION Labs. "We're anticipating another strong round of applications, and look forward to working together with the chosen startup to develop a cutting-edge solution to substantially improve the design and optimization of antibodies for targeted therapies."

As part of the online application procedure, interested candidates are requested to submit a competitive project proposal. After a preliminary short-listing round, candidates will be invited to a five-day innovation boot camp in Rehovot. With the support of experienced mentors from the pharma, tech and VC industries, the winning team of scientists will be trained and guided during a fully-funded incubation period of up to four years towards becoming an independent startup.

Further details about this call for application can be found on the AION Labs website: http://www.aionlabs.com. Interested candidates are invited to apply via the BioMed X Career Space at https://career.bio.mx/call/2022-AIL-C03 before April 10, 2022.

Sign up here to join us for an informative webinar to learn more about AION Labs and this challenge on March 10, 2022 at 11 AM EST: https://us02web.zoom.us/webinar/register/WN_Qu788xk8SfycAm9vaTCpZg

About AION Labs

AION Labs is a first-of-its-kind alliance of AstraZeneca, Merck, Pfizer, Teva, the Israel Biotech Fund and Amazon Web Services (AWS) that have come together with one clear mission: to create and adopt groundbreaking new AI technologies that will transform the process of drug discovery and development in order to contribute to the health and well-being of all people world-wide.

AION Labs is a unique venture hub where brilliant innovators and scientist-founders convene from around the world to solve the biggest R&D challenges guided by years of accumulated know-how, data and experience in pharma. The lab leverages its partners' wealth of knowledge and a new multidisciplinary mindset with the ingenuity, agility and innovative power of Israel's start-up ecosystem, to develop strong companies with clear long-term strategies, that will pave the way to the future of healthcare. AION Labs cultivates innovation from within; its unique venture creation process bridges the gap between outstanding academic research in the field of AI and the biggest R&D needs in the discovery and development of new medicines for the benefit of patients.

For more information, visit aionlabs.com

About BioMed X

BioMed X is an independent research institute located on the campus of the University of Heidelberg in Germany, with a world-wide network of partner locations. Together with our partners, we identify big biomedical research challenges and provide creative solutions by combining global crowdsourcing with local incubation of the world's brightest early-career research talents. Each of the highly diverse research teams at BioMed X has access to state-of-the-art research infrastructure and is continuously guided by experienced mentors from academia and industry. At BioMed X, we combine the best of two worlds, academia and industry, and enable breakthrough innovation by making biomedical research more efficient, more agile, and more fun.

For more information, visit bio.mx

Media Contact: Lior Feigin, FINN Partners for AION Labs, [emailprotected], @LiorFeigin, +1 929 588 2016, +972 54 282 4503

Logo - https://mma.prnewswire.com/media/1708278/AION_Labs_Logo.jpg

SOURCE AION Labs

Read the rest here:

AION Labs, Powered by BioMed X, Launches Third Global Call for Application: Artificial Intelligence for Design and Optimization of Antibodies for...
