Where to Watch and Stream Max Winslow and The House of Secrets Free Online – EpicStream

Cast: Sydne Mikelle, Tanner Buchanan, Jason Genao, Emery Kelly, Jade Chynoweth

Genres: Family, Science Fiction, Thriller

Director: Sean Olson

Release Date: May 29, 2020

Five teenagers compete to win a mansion owned by entrepreneur and scientist Atticus Virtue. To win, the teens must face off against a supercomputer named HAVEN, which controls the mansion.

Unfortunately, Max Winslow and The House of Secrets is not on Netflix. Still, you can't go too wrong with what remains the most popular streaming service. For $9.99 per month (Basic), $15.99 (Standard), or $19.99 (Premium), you can enjoy a huge volume of TV shows, documentaries, kids' content, and more.

It's not on Hulu, either! But prices for this streaming service currently start at $6.99 per month, or $69.99 for the whole year. The ad-free version is $12.99 per month, Hulu + Live TV is $64.99 per month, and the ad-free Hulu + Live TV is $70.99 per month.

Sorry, Max Winslow and The House of Secrets is not streaming on Disney Plus. With Disney+, you can choose from a wide range of shows from Marvel, Star Wars, Disney, Pixar, ESPN, and National Geographic for $7.99 monthly or $79.99 annually.

You won't find Max Winslow and The House of Secrets on HBO Max. But if you're still interested in the service, it's $14.99 per month, which gives you full access to the entire vault, and is also ad-free, or $9.99 per month with ads. However, the annual versions for both are cheaper, with the ad-free plan at $150 and the ad-supported plan at $100.

As of now, Max Winslow and The House of Secrets is not available to watch for free on Amazon Prime Video. You can still buy or rent other movies through their service.

Max Winslow and The House of Secrets is not available to watch on Peacock at the time of writing. Peacock offers a premium subscription costing $4.99 a month or $49.99 per year. As its name suggests, the platform also has a free tier, though the content available there is limited.

Max Winslow and The House of Secrets is not on Paramount Plus. Paramount Plus has two subscription options: the ad-supported Paramount+ Essential plan costs $4.99 per month, and the ad-free premium plan costs $9.99 per month.

No dice. Max Winslow and The House of Secrets isn't in the Apple TV+ library at this time. You can watch plenty of other top-rated shows and movies like Mythic Quest, Ted Lasso, and Wolfwalkers for a monthly cost of $4.99.

Nope. Max Winslow and The House of Secrets is not currently available to watch for free on Virgin TV Go. There are plenty of other shows and movies on the platform which may interest you!

Starz Play Amazon Channel ($8.99); Starz Roku Premium Channel


Supercomputer predicts the likely result for Grimsby Town v Northampton Town, Gillingham v Rochdale, Hartlepool United v AFC Wimbledon and every other…

It went to plan for Northampton Town as they put last season's pain behind them with a fine 3-2 win over Colchester United.

They head to newcomers Grimsby Town this weekend looking to make it two wins from two, with the supercomputer giving them a 47 per cent chance of doing the business.

Title favourites Salford City make the trip to Swindon in what should be a cracking fixture.

Relegation favourites Hartlepool United need a reaction against AFC Wimbledon, after being battered by Walsall last weekend.

Here's how the supercomputer sees every League Two match going.

Get all the latest Cobblers news here.

Home win: 36% | Draw: 29% | Away win: 35%

Photo: Pete Norton

Home win: 64% | Draw: 24% | Away win: 12%

Photo: Pete Norton

Home win: 32% | Draw: 27% | Away win: 41%

Photo: JPCO Sport

Home win: 49% | Draw: 25% | Away win: 26%

Photo: Gareth Copley


Supercomputer predicts the likely result for Mansfield Town v Tranmere Rovers, Colchester United v Carlisle United, Crewe Alexandra v Harrogate Town…

Mansfield and Tranmere will be looking to get up and running this weekend after suffering defeats on matchday one. It will be the Stags who claim the points, though, according to the supercomputer.

Title favourites Salford City make the trip to Swindon in what should be a cracking fixture.

Promotion-chasing Northampton head to Grimsby Town, while relegation favourites Hartlepool United need a reaction against AFC Wimbledon, after being battered by Walsall last weekend.

Here's how the supercomputer sees every League Two match going.

Get all the latest Stags news here.

Home win: 36% | Draw: 29% | Away win: 35%

Photo: Pete Norton

Home win: 64% | Draw: 24% | Away win: 12%

Photo: Pete Norton

Home win: 32% | Draw: 27% | Away win: 41%

Photo: JPCO Sport

Home win: 49% | Draw: 25% | Away win: 26%

Photo: Gareth Copley


The power of visual influence – EurekAlert

image: The new approach determines a user's real-time reaction to an image or scene based on their eye movement, particularly saccades, the super-quick movements of the eye that jerk between points before fixating on an image or object. The researchers will demonstrate their new work, titled "Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency," at SIGGRAPH 2022, held Aug. 8-11 in Vancouver, BC, Canada.

Credit: ACM SIGGRAPH

What motivates or drives the human eye to fixate on a target and how, then, is that visual image perceived? What is the lag time between our visual acuity and our reaction to the observation? In the burgeoning field of immersive virtual reality (VR) and augmented reality (AR), connecting those dots, in real time, between eye movement, visual targets, and decision-making is the driving force behind a new computational model developed by a team of computer scientists at New York University, Princeton University, and NVIDIA.

The new approach determines a user's real-time reaction to an image or scene based on their eye movement, particularly saccades, the super-quick movements of the eye that jerk between points before fixating on an image or object. Saccades allow for frequent shifts of attention to better understand one's surroundings and to localize objects of interest. Understanding the mechanism and behavior of saccades is vital to understanding human performance in visual environments, representing an exciting area of research in computer graphics.

The researchers will demonstrate their new work, titled "Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency," at SIGGRAPH 2022, held Aug. 8-11 in Vancouver, BC, Canada. The annual conference, which will be in-person and virtual this year, spotlights the world's leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

"There has recently been extensive research to measure the visual qualities perceived by humans, especially for VR/AR displays," says the paper's senior author Qi Sun, PhD, assistant professor of computer science and engineering at New York University Tandon School of Engineering.

"But we have yet to explore how the displayed content can influence our behaviors, even noticeably, and how we could possibly use those displays to push the boundaries of our performance that are otherwise not possible."

Inspired by how the human brain transmits data and makes decisions, the researchers implement a neurologically inspired probabilistic model that mimics the accumulation of cognitive confidence that leads to a human decision and action. They conducted a psychophysical experiment with parameterized stimuli to observe and measure the correlation between image characteristics and the time it takes to process them in order to trigger a saccade, and whether and how that correlation differs from that of visual acuity.
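The release does not reproduce the paper's learned model, but the "accumulation of cognitive confidence" it describes is commonly formalized as an evidence-accumulation (drift-diffusion) process. The Python sketch below is purely illustrative, with invented parameter values: noisy evidence builds toward a threshold, and a stronger stimulus (higher drift rate) yields faster, less variable simulated saccade latencies.

```python
import random

def simulate_saccade_latency(drift, threshold=1.0, noise=0.1,
                             dt=0.001, non_decision=0.05):
    """Accumulate noisy evidence until it crosses a decision threshold.

    Returns a simulated reaction time in seconds: the accumulation time
    plus a fixed non-decision (sensory/motor) delay. All parameters are
    illustrative, not the values from the SIGGRAPH paper.
    """
    evidence, t = 0.0, 0.0
    while evidence < threshold:
        # Noisy evidence increment; noise scales with sqrt(dt).
        evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return t + non_decision

# Easier-to-see targets -> stronger drift -> shorter mean latency.
random.seed(0)
fast = [simulate_saccade_latency(drift=8.0) for _ in range(200)]
slow = [simulate_saccade_latency(drift=3.0) for _ in range(200)]
print(sum(fast) / len(fast), sum(slow) / len(slow))
```

Running the sketch shows the qualitative effect the paper measures: image features that speed up evidence accumulation shorten saccade latency.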

They validated the model using data from more than 10,000 trials of user experiments with an eye-tracked VR display, to understand and formulate the correlation between visual content and the speed of decision-making based on reaction to the image. The results show that the new model's predictions accurately represent real-world human behavior.

The proposed model may serve as a metric for predicting and altering users' eye-image response times in interactive computer graphics applications, and may also help improve the design of VR experiences and player performance in esports. In other sectors, such as healthcare and automotive, the new model could help estimate a physician's or a driver's ability to rapidly respond and react to emergencies. In esports, it can be applied to measure competition fairness between players or to better understand how to maximize one's performance where reaction times come down to milliseconds.

In future work, the team plans to explore the potential of cross-modal effects such as visual-audio cues that jointly affect our cognition in scenarios such as driving. They are also interested in expanding the work to better understand and represent the accuracy of human actions influenced by visual content.

The paper's authors, Budmonde Duinkharjav (NYU), Praneeth Chakravarthula (Princeton), Rachel Brown (NVIDIA), Anjul Patney (NVIDIA), and Qi Sun (NYU), are set to demonstrate their new method Aug. 11 at SIGGRAPH as part of the program "Roundtable Session: Perception."

About ACM SIGGRAPH

ACM SIGGRAPH is an international community of researchers, artists, developers, filmmakers, scientists and business professionals with a shared interest in computer graphics and interactive techniques. A special interest group of the Association for Computing Machinery (ACM), the world's first and largest computing society, our mission is to nurture, champion and connect like-minded researchers and practitioners to catalyze innovation in computer graphics and interactive techniques.



Tapping HPC and AI for Global Health and Wellness – HPCwire

Here's a look at how HPC, AI, and other technologies are being used by organizations throughout the world to enhance healthcare research, drug development, public health, and patient outcomes.

The ability to gather, process, and analyze data from genomics, bioinformatics, microscopy, medical imaging, and other areas in the life sciences has been supercharged by HPC systems and artificial intelligence (AI) algorithms. Researchers can sequence vast quantities of DNA data faster than ever before with supercomputer resources and use AI to identify patterns and make predictions. They can now use these available and affordable technologies to study genes and proteins, predict health events, automate imaging analysis, and generate ideas for improving healthcare delivery.

The COVID-19 pandemic has provided a test case for the ability of HPC to accelerate genomic sequencing as scientists around the world seek to track, understand, and combat the SARS-CoV-2 virus. In England, researchers at the Wellcome Sanger Institute have tracked the spread and mutations of the virus by sequencing over 300,000 coronavirus genomes. The institute's HPC cluster has 38,000 cores of compute, 23.5 petabytes of file systems, and a 30-petabyte virtualized storage repository, all supported by a 60 Gbps network backbone. It's complemented with an OpenStack private cloud offering more compute and storage resources.

With HPC architectures and the use of machine learning and AI constantly evolving, the institute has worked with companies like Dell Technologies to build their HPC environment. Genomic sequencing data is stored for computational analysis on Dell PowerScale scale-out storage. Researchers use the data to determine the relatedness of different viruses and help identify chains of transmission, super-spreader events, and fast-growing variants.

A similar collaboration for genomic research between the Texas Advanced Computing Center (TACC) at the University of Texas and Dell Technologies spawned the Lonestar6 supercomputer, which can perform almost three quadrillion mathematical operations per second. It is being used by faculty members from throughout the University of Texas system and at other universities for COVID-19 drug discovery and genomic research.

In another pandemic-related role for HPC, the staff at the Ohio Supercomputer Center at Ohio State University designed the COVID-19 Analytics and Targeted Surveillance System (CATS) to help school administrators decide whether it was safe to bring students back to classrooms or if fully remote or hybrid learning should be used instead. Supported by an HPC system with Dell PowerEdge servers and Intel Xeon processors, CATS serves 21 school districts and 238,000 students and tracks data like school nurse visits, student and teacher absences, and other metrics to watch for outbreaks and inform decision-making. Sixteen different dashboards are used daily by thousands of people to provide the rationale for decisions like closing or opening schools or specific buildings on campuses.

In yet another use of HPC for COVID-19 research, Swansea University in Wales has built an open platform for mathematical modeling of disease transmission. It provides comparisons of multiple models to help researchers determine demographic, socioeconomic, and clinical risk factors for COVID-19 infection, morbidity, and mortality, among other uses. Supercomputer resources at two hubs, built by Dell Technologies and Atos, contain more than 13,000 cores, tens of terabytes of memory, and hundreds of terabytes of high-performance storage, all interconnected by low-latency, high-bandwidth networking.

The Cineca Consortium is a national supercomputing facility in Italy that supports public and industry research institutions with HPC resources. Among Cineca's 4,000 projects is the Human Brain Project, which, in conjunction with 90 European research institutes, aspires to build the world's most detailed model of the brain. A dedicated supercomputer has been built for the project with HPC technology from Dell and Intel.

To date, researchers participating in the Human Brain Project have used HPC to explore the brain mechanisms behind cognition, learning, and plasticity. Their research has led to more than 1,400 journal articles, a new treatment for spinal cord injuries, a brain prosthesis for the blind, and better modeling and understanding of epilepsy and autism.

A collaboration among researchers at the Washington University School of Medicine, the Memorial Sloan Kettering Cancer Center, and Temple University, the Folding@home distributed computing project uses HPC to simulate how proteins impact a variety of diseases. Visualizing protein dynamics on a molecular level requires enormous computational power, so the project's founders came up with an original HPC solution: harnessing the unused processing power of volunteers' PCs around the world. Each volunteer downloads an application that runs small parts of much larger simulations. On the backend servers, algorithms put the separate parts together to create composite simulations.
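As a rough illustration of that split-and-recombine architecture (not Folding@home's actual client or protocol; every function name here is hypothetical), a long simulation can be cut into independent work units, processed by volunteers in any order, and stitched back together on the server:

```python
def split_into_work_units(trajectory_steps, unit_size):
    """Divide a long simulation into independent (start, end) work units."""
    return [(start, min(start + unit_size, trajectory_steps))
            for start in range(0, trajectory_steps, unit_size)]

def run_work_unit(unit):
    """Stand-in for a volunteer machine simulating one slice.
    Here each 'simulation step' is just the squared step index."""
    start, end = unit
    return unit, [i * i for i in range(start, end)]

def recombine(results):
    """Server-side: stitch completed units back into one trajectory,
    regardless of the order in which volunteers returned them."""
    ordered = sorted(results, key=lambda r: r[0][0])
    combined = []
    for _, chunk in ordered:
        combined.extend(chunk)
    return combined

units = split_into_work_units(trajectory_steps=10, unit_size=4)
# Volunteers finish out of order; reversed() models that.
results = [run_work_unit(u) for u in reversed(units)]
print(recombine(results))  # -> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The key design property is that each unit is self-contained, so slow or lost volunteer machines only delay their own slice rather than the whole simulation.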

Today this distributed HPC network has the equivalent of 2.4 exaflops of computational power, making it the first exascale computer. Teams from Dell Technologies and VMware are among the legions of volunteers, and the Folding@home client software resides on a VMware vSphere appliance. In December 2020, Folding@home was awarded the HPCwire Readers' Choice Award for Best Use of HPC in Response to Societal Plights for its simulations of SARS-CoV-2 proteins.

For more on Dell Technologies for healthcare, life sciences, and HPC, please visit DellTechnologies.com/healthcare and DellTechnologies.com/hpc.


Supercomputer predicts every League One result this weekend with Sheffield Wednesday, Barnsley, Derby County, Ipswich Town, Lincoln City and Wycombe…

Lifelong fan David Clowes is now in charge, and senior operators recruited in the likes of David McGoldrick, James Chester, Tom Barkhuizen, Conor Hourihane and James Collins give their promotion push an immediate element of seriousness and respect.

Whether Derby, in their first season at this level since 1985-86, return to the Championship will probably have as much to do with the impact of two talented young midfielders in Jason Knight and Max Bird and whether they keep them.

Another relegated side in Peterborough will also be conscious of keeping the family silver, with teenage defender Ronnie Edwards being courted by Manchester City. Sammie Szmodics, Harrison Burrows and Jack Taylor have also been linked with moves to higher-division clubs.

Further forward, Posh have a natural scorer at this level in ex-Rotherham player Jonson Clarke-Harris. If he fires and key players are retained, they should have a strong season led by a successful manager at this level in ex-Hull and Doncaster boss Grant McCann.

After the sale of Harry Darling and Scott Twine, MK Dons will do well to emulate their play-off feats of 2021-22. At the other end of Buckinghamshire, Wycombe's hopes will probably depend on getting another good year from elder statesmen Sam Vokes and Garath McCleary.

Ipswich, who have signed ex-Leeds defender Leif Davis for a significant fee for a League One club and former Millers forward Freddie Ladapo, should be firmly in the picture. The signing of Marcus Harness, for around £600,000, is a further indicator of their ambition.

The ambitions of the above clubs means that Sheffield Wednesday and Barnsley will have their work cut out this season as they seek promotion.

Ahead of the opening weekend's third-tier action, data experts at FiveThirtyEight have crunched the numbers to give the probable outcome of every match...

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 62%. Away win - 18%. Draw - 20%.

Photo: Getty Images

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 41%. Away win - 33%. Draw - 26%.

Photo: Getty Images

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 56%. Away win - 19%. Draw - 24%.

Photo: Getty Images

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 35%. Away win - 41%. Draw - 24%.

Photo: Getty Images


National Science Foundation and European Awards support students in Sweden and the U.S. – WSU Insider – WSU News

Students from Washington State University and Sweden's Linköping University will participate in a pioneering exchange and research program in engineering and scientific computing, emphasizing the computing-based design philosophy that is supporting the international development of Boeing's and Saab's new T-7A Red Hawk training aircraft.

The aircraft is an all-new advanced pilot training system designed for the U.S. Air Force, which will train the next generation of pilots for decades to come. As Boeing and Sweden's Saab have long-standing ties to WSU and LiU, respectively, they will also support this program, which provides students with an unparalleled opportunity to learn how challenging designs are advanced through the international cooperation of multinational corporations.

WSU was awarded $300,000 by the National Science Foundation to support WSU students in Sweden. The WSU-LiU team also received matching funding from the European Erasmus+ program and the Swedish Foundation for International Cooperation in Research and Higher Education to support the LiU students at WSU.

"One of the objectives of this program is to graduate profession-ready students who are internationally educated and ready for leadership in a globalized society," said Joseph Iannelli, a professor of mechanical engineering in WSU's School of Engineering and Applied Sciences, who is leading the program.

Jan Nordström, a distinguished professor of computational mathematics, and Andrew Winters, a WSU alumnus and assistant professor in computational mathematics, will supervise the students' research projects at LiU.

"LiU's multi-disciplinary strategy with Boeing and Saab projects will expand students' preparation for international high-tech environments," said Nordström.

"This project will also prepare students for employment opportunities with corporations that employ scientific computing and operate in the U.S. and Sweden," said Iannelli. "Students will benefit from studying in Sweden and the U.S. while gaining familiarity with the cultures of both countries."

Boeing and Saab will enrich this program. They will advise on aerospace-related scientific computing projects and mentor students, who will be offered opportunities for company site visits and internships. Students will also learn how computing-based design lowers development costs, increases the first-time quality of prototypes, and decreases the time it takes to bring complex systems, such as aircraft, to market.

"Boeing is proud to support the education of up-and-coming engineers through this unique exchange and research program," said Craig Bomben, Boeing vice president of flight operations and enterprise chief pilot. "This partnership will prepare students for the engineering field and help them fulfill their career ambitions."

WSU and LiU have been developing their international partnership for several years, following Iannelli's 2018 outreach to LiU. Since then, the two universities have signed a memorandum of understanding and a reciprocal student exchange agreement.

A comprehensive, internationally ranked peer university, LiU emphasizes multidisciplinary research and manages Sweden's National Supercomputer Centre (NSC). By pooling their teams and financial resources, WSU and LiU can advance education and research at the international level more effectively, Iannelli said.

The three-year project will involve 42 diverse students: 21 from WSU and 21 from LiU. Each of the participating WSU students will receive a $12,000 fellowship. The project synergistically integrates two study-abroad semesters with a research experience and matched student cohorts. At LiU, the WSU students will collaborate with an equal number of LiU students, who will then complete an exchange semester at WSU. At LiU the Swedish students will assist the WSU students with the local culture, and vice versa at WSU.

In Sweden, the WSU and LiU students will learn how physical systems function through computer-based simulations that rely on mathematical algorithms. The WSU students will also take English-taught courses at LiU and transfer their academic credits toward their WSU degree requirements. The program is expected to begin in January.


Supercomputer predicts every League Two result this weekend with Bradford City, Doncaster Rovers, Swindon Town, Mansfield Town, Tranmere Rovers and…

Last season's beaten play-off finalists Mansfield Town, managed by a shrewd operator in the shape of ex-Sheffield United chief Nigel Clough, lock horns with Salford City.

The Stags' recruitment has been pretty quiet in truth, but they have retained last season's squad, who have considerable experience at lower-division level in the likes of Stephen McLaughlin, Jordan Bowery, Ollie Hawkins, John-Joe O'Toole, George Lapslie, George Maris and Rhys Oates.

Elliot Watt rejected fresh terms at Bradford to join Salford, who have also signed ex-Barnsley duo Stevie Mallan and Elliot Simoes and striker Callum Hendry, who hit 12 goals for St Johnstone in 21-22.

Not too far away from Salford, Stockport County, back in the EFL after an 11-year absence, have major momentum, and their arrivals have also caught the eye. They have brought in former Northampton Town defender Fraser Horsfall, with the Huddersfield-born player and PFA League Two Team of the Year inclusion rejecting higher-level interest to join.

Kyle Wootton, who netted 22 times in all competitions for Notts County last term, has also joined, and another of the leading National League players of last season has signed in Torquay midfielder Connor Lemonheigh-Evans, whose goals total extended into double figures last term.

Crawley Town, backed by American-based crypto consortium WAGMI United LLC, could be ones to watch. They have brought in Newport County striker Dom Telford, who fired 26 goals for the Exiles last term.

Ahead of the opening weekend's fourth-tier action, data experts at FiveThirtyEight have crunched the numbers to give the probable outcome of every match...

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 52%. Away win - 22%. Draw - 26%.

Photo: Getty Images

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 43%. Away win - 28%. Draw - 29%.

Photo: Getty Images

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 26%. Away win - 51%. Draw - 23%.

Photo: Getty Images

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 53%. Away win - 22%. Draw - 26%.

Photo: Getty Images


Bristol City's predicted finish in worrying verdict as Cardiff City, QPR and others also rated – BristolWorld

The Robins failed to pick up points after defeat to Hull City at the weekend and face Sunderland at home in their next EFL Championship fixture.

Bristol City kicked off their 2022/23 EFL Championship campaign with a 2-1 defeat to Hull City at the MKM Stadium on Saturday.

The Robins are tipped for another difficult season in English football's second tier, and defeat on the opening day of the season came despite Andreas Weimann's 30th-minute opener giving them the lead against the Tigers.

The result has done little to see them climb the predicted final table of the FiveThirtyEight supercomputer, which uses forecasts and Soccer Power Index (SPI) ratings to show each team's percentage chance of winning the title, reaching the play-offs and being relegated.

The statistics also provide a predicted points tally that each team will finish on when the regular season wraps up in May 2023.
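FiveThirtyEight's actual SPI methodology is considerably more elaborate, but the basic step of turning team strength ratings into home/draw/away percentages can be sketched with a standard independent-Poisson scoreline model. In the Python sketch below the expected-goals figures are invented for illustration, not taken from SPI:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k goals given an expected-goals rate lam."""
    return lam ** k * exp(-lam) / factorial(k)

def match_probabilities(home_xg, away_xg, max_goals=10):
    """Turn expected-goals ratings into home/draw/away probabilities
    by summing over a grid of Poisson-distributed scorelines."""
    home = draw = away = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = poisson_pmf(h, home_xg) * poisson_pmf(a, away_xg)
            if h > a:
                home += p
            elif h == a:
                draw += p
            else:
                away += p
    return home, draw, away

# Hypothetical ratings: the home side creates slightly more chances.
h, d, a = match_probabilities(home_xg=1.4, away_xg=1.1)
print(f"Home {h:.0%}  Draw {d:.0%}  Away {a:.0%}")
```

Summing the three outcomes recovers (almost exactly) 100%, which is why the percentage triples quoted throughout these prediction pieces always add up.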

The supercomputer predicts that Nigel Pearson's side will end the season on 56 points, but what would that mean for their league position, and what are their chances of outperforming their current prediction?

Here is how the supercomputer predicts the 2022/23 EFL Championship table will look at the end of the season as of Monday, August 1 - after the weekend's opening fixtures but before Watford take on Sheffield United in the Monday night game:

Win Championship: 23%, Promoted: 47%, Play-offs: 34%, Relegated: <1%

Win Championship: 19%, Promoted: 44%, Play-offs: 34%, Relegated: <1%

Win Championship: 13%, Promoted: 34%, Play-offs: 34%, Relegated: 1%

Win Championship: 12%, Promoted: 33%, Play-offs: 34%, Relegated: 1%


Artificial Intelligence Regulation Updates: China, EU, and U.S. – The National Law Review

Wednesday, August 3, 2022

Artificial Intelligence (AI) systems are poised to drastically alter the way businesses and governments operate on a global scale, with significant changes already under way. This technology has manifested itself in multiple forms including natural language processing, machine learning, and autonomous systems, but with the proper inputs can be leveraged to make predictions, recommendations, and even decisions.

Accordingly, enterprises are increasingly embracing this dynamic technology. A 2022 global study by IBM found that 77% of companies are either currently using AI or exploring AI for future use, creating value by increasing productivity through automation, improved decision-making, and enhanced customer experience. Further, according to a 2021 PwC study, the COVID-19 pandemic increased the pace of AI adoption for 52% of companies as they sought to mitigate the crisis's impact on workforce planning, supply chain resilience, and demand projection.

For these many businesses investing significant resources into AI, it is critical to understand the current and proposed legal frameworks regulating this novel technology. Specifically for businesses operating globally, the task of ensuring that their AI technology complies with applicable regulations will be complicated by the differing standards that are emerging from China, the European Union (EU), and the U.S.

China has taken the lead in moving AI regulations past the proposal stage. In March 2022, China passed a regulation governing companies' use of algorithms in online recommendation systems, requiring that such services be moral, ethical, accountable, and transparent, and that they "disseminate positive energy." The regulation mandates that companies notify users when an AI algorithm is playing a role in determining which information to display to them and give users the option to opt out of being targeted. Additionally, the regulation prohibits algorithms that use personal data to offer different prices to consumers. We expect these themes to recur in AI regulations throughout the world as they develop.

Meanwhile in the EU, the European Commission has published an overarching regulatory framework proposal, titled the Artificial Intelligence Act, which would have a much broader scope than China's enacted regulation. The proposal focuses on the risks created by AI, with applications sorted into categories of minimal risk, limited risk, high risk, or unacceptable risk. Depending on an application's designated risk level, there will be corresponding government action or obligations. So far, the proposed obligations focus on enhancing the security, transparency, and accountability of AI applications through human oversight and ongoing monitoring. Specifically, companies will be required to register stand-alone high-risk AI systems, such as remote biometric identification systems, in an EU database. If the proposed regulation is passed, the earliest date for compliance would be the second half of 2024, with potential fines for noncompliance ranging from 2% to 6% of a company's annual revenue.

Additionally, the previously enacted EU General Data Protection Regulation (GDPR) already carries implications for AI technology. Article 22 prohibits decisions based on solely automated processes that produce legal consequences or similar effects for individuals, unless the program gains the user's explicit consent or meets other requirements.

In the United States, there has been a fragmented approach to AI regulation thus far, with states enacting their own patchwork of AI laws. Many of the enacted regulations focus on establishing commissions to determine how state agencies can utilize AI technology and to study AI's potential impacts on the workforce and consumers. Common pending state initiatives go a step further and would regulate the accountability and transparency of AI systems when they process and make decisions based on consumer data.

On a national level, the U.S. Congress enacted the National AI Initiative Act in January 2021, creating the National AI Initiative, which provides "an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies . . . ." The Act created new offices and task forces aimed at implementing a national AI strategy, implicating a multitude of U.S. administrative agencies, including the Federal Trade Commission (FTC), Department of Defense, Department of Agriculture, Department of Education, and Department of Health and Human Services.

Pending national legislation includes the Algorithmic Accountability Act of 2022, which was introduced in both houses of Congress in February 2022. In response to reports that AI systems can lead to biased and discriminatory outcomes, the proposed Act would direct the FTC to create regulations mandating that covered entities, including businesses meeting certain criteria, perform impact assessments when using automated decision-making processes, specifically including those derived from AI or machine learning.

While the FTC has not promulgated AI-specific regulations, this technology is on the agency's radar. In April 2021, the FTC issued a memo apprising companies that using AI that produces discriminatory outcomes equates to a violation of Section 5 of the FTC Act, which prohibits unfair or deceptive practices. And the FTC may soon take this warning a step further: in June 2022, the agency indicated that it will submit an Advance Notice of Proposed Rulemaking "to ensure that algorithmic decision-making does not result in harmful discrimination," with the public comment period ending in August 2022. The FTC also recently issued a report to Congress discussing how AI may be used to combat online harms, ranging from scams to deep fakes and opioid sales, but advised against over-reliance on these tools, citing the technology's susceptibility to producing inaccurate, biased, and discriminatory outcomes.

Companies should carefully discern whether other, non-AI-specific regulations could subject them to potential liability for their use of AI technology. For example, the U.S. Equal Employment Opportunity Commission (EEOC) put forth guidance in May 2022 warning companies that their use of algorithmic decision-making tools to assess job applicants and employees could violate the Americans with Disabilities Act by, in part, intentionally or unintentionally screening out individuals with disabilities. Further analysis of the EEOC's guidance can be found here.

Many other U.S. agencies and offices are beginning to delve into the fray of AI. In November 2021, the White House Office of Science and Technology Policy solicited engagement from stakeholders across industries in an effort to develop a "Bill of Rights for an Automated Society." Such a Bill of Rights could cover topics like AI's role in the criminal justice system, equal opportunities, consumer rights, and the healthcare system. Additionally, the National Institute of Standards and Technology (NIST), which falls under the U.S. Department of Commerce, is engaging with stakeholders to develop a voluntary risk management framework for trustworthy AI systems. The output of this project may be analogous to the EU's proposed regulatory framework, but in a voluntary format.

The overall theme of enacted and pending AI regulations globally is maintaining the accountability, transparency, and fairness of AI. For companies leveraging AI technology, ensuring that their systems remain compliant with the various regulations intended to achieve these goals could be difficult and costly. Two aspects of AI's decision-making process make oversight particularly demanding:

Opaqueness: users can control data inputs and view outputs, but are often unable to explain how, and with which data points, the system made a decision.

Frequent adaptation: processes evolve over time as the system learns.

Therefore, it is important for regulators to avoid overburdening businesses, so that stakeholders may still leverage AI technology's great benefits in a cost-effective manner. The U.S. has the opportunity to observe the outcomes of the current regulatory action from China and the EU to determine whether their approaches strike a favorable balance. However, the U.S. should consider accelerating its promulgation of similar laws so that it can play a role in setting the global tone for AI regulatory standards.

Thank you to co-author Lara Coole, a summer associate in Foley & Lardner's Jacksonville office, for her contributions to this post.


Artificial Intelligence Regulation Updates: China, EU, and U.S. - The National Law Review

Artificial intelligence isn't that intelligent | The Strategist

Late last month, Australia's leading scientists, researchers and businesspeople came together for the inaugural Australian Defence Science, Technology and Research Summit (ADSTAR), hosted by the Defence Department's Science and Technology Group. In a demonstration of Australia's commitment to partnerships that would make our non-allied adversaries flinch, Chief Defence Scientist Tanya Monro was joined by representatives from each of the Five Eyes partners, as well as Japan, Singapore and South Korea. Two streams focusing on artificial intelligence were dedicated to research and applications in the defence context.

"At the end of the day, isn't hacking an AI a bit like social engineering?"

A friend who works in cybersecurity asked me this. In the world of information security, social engineering is the game of manipulating people into divulging information that can be used in a cyberattack or scam. Cyber experts may therefore be excused for assuming that AI might display some human-like level of intelligence that makes it difficult to hack.

Unfortunately, it's not. It's actually very easy.

The man who coined the term "artificial intelligence" in the 1950s, cybernetics researcher John McCarthy, also said that once we know how it works, it isn't called AI anymore. This explains why AI means different things to different people. It also explains why trust in and assurance of AI is so challenging.

AI is not some all-powerful capability that, despite how much it can mimic humans, also thinks like humans. Most implementations, specifically machine-learning models, are just very complicated applications of the statistical methods we're familiar with from high school. That doesn't make them smart, merely complex and opaque. This leads to problems in AI safety and security.

Bias in AI has long been known to cause problems. For example, AI-driven recruitment systems in tech companies have been shown to filter out applications from women, and re-offence prediction systems in US prisons exhibit consistent biases against black inmates. Fortunately, bias and fairness concerns in AI are now well known and actively investigated by researchers, practitioners and policymakers.

AI security is different, however. While AI safety deals with the impact of the decisions an AI might make, AI security looks at the inherent characteristics of a model and whether it could be exploited. AI systems are vulnerable to attackers and adversaries just as cyber systems are.

A known challenge is adversarial machine learning, where adversarial perturbations added to an image cause a model to predictably misclassify it.

When researchers added adversarial noise imperceptible to humans to an image of a panda, the model predicted it was a gibbon.

In another study, a 3D-printed turtle had adversarial perturbations embedded in its surface so that an object-detection model believed it to be a rifle. This was true even when the object was rotated.
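The panda-to-gibbon result comes from this family of attacks. A minimal sketch of the underlying idea, the fast gradient sign method (FGSM), is below. It uses a made-up logistic "classifier" rather than a real image model; the weights, input, and epsilon are all illustrative, not taken from any of the studies above.

```python
import numpy as np

# A toy logistic-regression "classifier" with made-up weights, standing in
# for the image models attacked in the studies above.
rng = np.random.default_rng(0)
w = rng.normal(size=64)      # hypothetical model weights
x = rng.normal(size=64)      # a benign input (e.g. flattened image features)

def predict(x, w):
    """Probability the model assigns to the target class."""
    return 1.0 / (1.0 + np.exp(-w @ x))

# Fast Gradient Sign Method: nudge every feature by +/- eps in the direction
# that increases the loss for the true label (here, label 0). grad is the
# gradient of the loss -log(1 - predict(x, w)) with respect to x.
eps = 0.05
grad = predict(x, w) * w
x_adv = x + eps * np.sign(grad)

print(predict(x, w), predict(x_adv, w))   # the adversarial score is strictly higher
```

The perturbation is bounded by eps per feature, which is why adversarial images can look identical to the original while still flipping the model's prediction.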

I can't help but notice disturbing similarities between the rapid adoption of, and misplaced trust in, the internet in the latter half of the last century and the unfettered adoption of AI now.

It was a sobering moment when, in 2018, the then US director of national intelligence, Daniel Coats, called out cyber as the greatest strategic threat to the US.

Many nations, including Australia, the US and the UK, are publishing AI strategies that address these concerns, and there's still time to apply the lessons learned from cyber to AI. These include: investing in AI safety and security at the same pace as AI adoption; developing commercial solutions for AI security, assurance and audit; legislating AI safety and security requirements, as is done for cyber; and building greater understanding of AI and its limitations, as well as the technologies, like machine learning, that underpin it.

Cybersecurity incidents have also driven home the necessity for the public and private sectors to work together not just to define standards, but to reach them together. This is essential both domestically and internationally.

Autonomous drone swarms, undetectable insect-sized robots and targeted surveillance based on facial recognition are all technologies that exist. While Australia and our allies adhere to ethical standards for AI use, our adversaries may not.

Speaking on resilience at ADSTAR, Chief Scientist Cathy Foley discussed how pre-empting and planning for setbacks is far more strategic than simply ensuring you can get back up after one. That couldn't be more true when it comes to AI, especially given Defence's unique risk profile and the current geostrategic environment.

I read recently that Ukraine is using AI-enabled drones to target and strike Russians. Notwithstanding the ethical issues this poses, the article I read was written in Polish and translated into English for me by Google's language translation AI. Artificial intelligence is already pervasive in our lives. Now we need to be able to trust it.


Researchers use artificial intelligence to create a treasure map of undiscovered ant species – EurekAlert

Image: Map detailing ant diversity centers in Africa, Madagascar and Mediterranean regions.

Credit: Kass et al., 2022, Science Advances

E. O. Wilson once referred to invertebrates as "the little things that run the world," without whom the human species "[wouldn't] last more than a few months." Although small, invertebrates have an outsized influence on their environments, pollinating plants, breaking down organic matter and speeding up nutrient cycling. And what they lack in stature, they make up for in diversity. With more than one million known species, insects alone vastly outnumber all other invertebrates and vertebrates combined.

Despite their importance and ubiquity, some of the most basic information about invertebrates, such as where they're most diverse and how many of them there are, still remains a mystery. This is especially problematic for conservation scientists trying to stave off global insect declines; you can't conserve something if you don't know where to look for it.

In a new study published this Wednesday in the journal Science Advances, researchers used ants as a proxy to help close major knowledge gaps and hopefully begin reversing these declines. Working for more than a decade, researchers from institutions around the world stitched together nearly one-and-a-half million location records from research publications, online databases, museums and scientific field work. They used those records to help produce the largest global map of insect diversity ever created, which they hope will be used to direct future conservation efforts.

"This is a massive undertaking for a group known to be a critical ecosystem engineer," said co-author Robert Guralnick, curator of biodiversity informatics at the Florida Museum of Natural History. "It represents an enormous effort not only among all the co-authors but the many naturalists who have contributed knowledge about distributions of ants across the globe."

Creating a map large enough to account for the entirety of ant biodiversity presented several logistical challenges. All currently known ant species were included, numbering more than 14,000, and each one varied dramatically in the amount of data available.

The majority of the records used contained a description of the location where an ant was collected or spotted but did not always have the precise coordinates needed for mapping. Inferring the extent of an ants range from incomplete records required some clever data wrangling.

Co-author Kenneth Dudley, a research technician with the Okinawa Institute of Science and Technology, built a computational workflow to estimate the coordinates from the available data, which also checked the data for errors. This allowed the researchers to make different range estimates for each species of ant depending on how much data was available. For species with less data, they constructed shapes surrounding the data points. For species with more data, the researchers predicted the distribution of each species using statistical models that they tuned to reduce as much noise as possible.

The researchers brought these estimates together to form a global map, divided into a grid of 20 km by 20 km squares, that showed an estimate of the number of ant species per square (called the species richness). They also created a map that showed the number of ant species with very small ranges per square (called the species rarity). In general, species with small ranges are particularly vulnerable to environmental changes.
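The species-richness step described above can be sketched in a few lines: bin occurrence records into 20 km grid squares and count the distinct species in each square. The records, species names, and projected coordinates below are made up for illustration; the real study aggregated nearly 1.5 million records.

```python
from collections import defaultdict

# Hypothetical occurrence records: (species, x_metres, y_metres) in a
# projected coordinate system. Names and positions are illustrative only.
records = [
    ("Camponotus sp.",   5_000, 12_000),
    ("Pheidole sp.",     7_500, 18_000),
    ("Camponotus sp.",   6_000, 15_000),
    ("Odontomachus sp.", 45_000, 3_000),
]

CELL = 20_000  # 20 km x 20 km squares, as in the paper

# Collect the set of species seen in each grid square.
species_per_cell = defaultdict(set)
for name, x, y in records:
    cell = (int(x // CELL), int(y // CELL))   # which square the record falls in
    species_per_cell[cell].add(name)

# Species richness = number of distinct species observed per square.
richness = {cell: len(spp) for cell, spp in species_per_cell.items()}
print(richness)   # {(0, 0): 2, (2, 0): 1}
```

Rarity maps follow the same pattern, except each cell counts only species whose total range falls below some area threshold.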

However, there was another problem to overcome: sampling bias.

"Some areas of the world that we expected to be centers of diversity were not showing up on our map, but ants in these regions were not well studied," explained co-first author Jamie Kass, a postdoctoral fellow at the Okinawa Institute of Science and Technology. "Other areas were extremely well sampled, for example parts of the USA and Europe, and this difference in sampling can impact our estimates of global diversity."

So the researchers utilized machine learning to predict how their diversity estimates would change if all areas of the world were sampled equally, and in doing so identified areas where they estimate many unknown, unsampled species exist.

"This gives us a kind of treasure map, which can guide us to where we should explore next and look for new species with restricted ranges," said senior author Evan Economo, a professor at the Okinawa Institute of Science and Technology.

When the researchers compared the rarity and richness of ant distributions to the comparatively well-studied amphibians, birds, mammals and reptiles, they found that ants were about as different from these vertebrate groups as the vertebrate groups were from each other.

This was unexpected given that ants are evolutionarily highly distant from vertebrates, and it suggests that priority areas for vertebrate diversity may also have a high diversity of invertebrate species. The authors caution, however, that ant biodiversity patterns have unique features. For example, the Mediterranean and East Asia show up as diversity centers for ants more than the vertebrates.

Finally, the researchers looked at how well protected these areas of high ant diversity are. They found that it was a low percentage: only 15% of the top 10% of ant rarity centers had some sort of legal protection, such as a national park or reserve, which is less than existing protection for vertebrates.

"Clearly, we have a lot of work to do to protect these critical areas," Economo concluded.

The global distribution of known and undiscovered ant biodiversity

3-Aug-2022



Global Artificial Intelligence in Healthcare Diagnosis Market Research Report 2022: Rising Adoption of Healthcare Artificial Intelligence in Research…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence in Healthcare Diagnosis Market Research Report by Technology, Component, Application, End User, Region - Global Forecast to 2026 - Cumulative Impact of COVID-19" report has been added to ResearchAndMarkets.com's offering.

The Global Artificial Intelligence in Healthcare Diagnosis Market size was estimated at USD 2,318.98 million in 2020, USD 2,725.72 million in 2021, and is projected to grow at a Compound Annual Growth Rate (CAGR) of 17.81% to reach USD 6,202.67 million by 2026.
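The release's growth figures are internally consistent: compounding the 2020 estimate forward six years at the stated rate lands on the 2026 forecast. A quick check of the implied compound annual growth rate, using only the numbers quoted above:

```python
# CAGR implied by the press release's own figures (USD millions):
# USD 2,318.98M in 2020 growing to USD 6,202.67M by 2026, i.e. over 6 years.
start, end, years = 2318.98, 6202.67, 6

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%}")   # ~17.82%, matching the reported 17.81% CAGR
```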

Market Segmentation:

This research report categorizes the Artificial Intelligence in Healthcare Diagnosis to forecast the revenues and analyze the trends in each of the following sub-markets:

Competitive Strategic Window:

The Competitive Strategic Window analyses the competitive landscape in terms of markets, applications, and geographies to help the vendor define an alignment or fit between their capabilities and opportunities for future growth prospects. It describes the optimal or favorable fit for the vendors to adopt successive merger and acquisition strategies, geography expansion, research & development, and new product introduction strategies to execute further business expansion and growth during a forecast period.

FPNV Positioning Matrix:

The FPNV Positioning Matrix evaluates and categorizes the vendors in the Artificial Intelligence in Healthcare Diagnosis Market based on Business Strategy (Business Growth, Industry Coverage, Financial Viability, and Channel Support) and Product Satisfaction (Value for Money, Ease of Use, Product Features, and Customer Support) that aids businesses in better decision making and understanding the competitive landscape.

Market Share Analysis:

The Market Share Analysis offers an analysis of vendors considering their contribution to the overall market. It provides an idea of each vendor's revenue generation in the overall market compared to other vendors in the space, and insights into how vendors are performing in terms of revenue generation and customer base compared to others. Knowing market share offers an idea of the size and competitiveness of the vendors for the base year. It reveals the market characteristics in terms of accumulation, fragmentation, dominance, and amalgamation traits.

Market Dynamics

Drivers

Restraints

Opportunities

Challenges

Key Topics Covered:

1. Preface

2. Research Methodology

3. Executive Summary

4. Market Overview

5. Market Insights

6. Artificial Intelligence in Healthcare Diagnosis Market, by Technology

7. Artificial Intelligence in Healthcare Diagnosis Market, by Component

8. Artificial Intelligence in Healthcare Diagnosis Market, by Application

9. Artificial Intelligence in Healthcare Diagnosis Market, by End User

10. Americas Artificial Intelligence in Healthcare Diagnosis Market

11. Asia-Pacific Artificial Intelligence in Healthcare Diagnosis Market

12. Europe, Middle East & Africa Artificial Intelligence in Healthcare Diagnosis Market

13. Competitive Landscape

14. Company Usability Profiles

15. Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/vgkht7


Can artificial intelligence really help us talk to the animals? – The Guardian

A dolphin handler makes the signal for "together" with her hands, followed by "create". The two trained dolphins disappear underwater, exchange sounds and then emerge, flip on to their backs and lift their tails. They have devised a new trick of their own and performed it in tandem, just as requested. "It doesn't prove that there's language," says Aza Raskin. "But it certainly makes a lot of sense that, if they had access to a rich, symbolic way of communicating, that would make this task much easier."

Raskin is the co-founder and president of Earth Species Project (ESP), a California non-profit group with a bold ambition: to decode non-human communication using a form of artificial intelligence (AI) called machine learning, and make all the knowhow publicly available, thereby deepening our connection with other living species and helping to protect them. A 1970 album of whale song galvanised the movement that led to commercial whaling being banned. What could a Google Translate for the animal kingdom spawn?

The organisation, founded in 2017 with the help of major donors such as LinkedIn co-founder Reid Hoffman, published its first scientific paper last December. The goal is to unlock communication within our lifetimes. "The end we are working towards is: can we decode animal communication, discover non-human language?" says Raskin. "Along the way, and equally important, is that we are developing technology that supports biologists and conservation now."

Understanding animal vocalisations has long been the subject of human fascination and study. Various primates give alarm calls that differ according to predator; dolphins address one another with signature whistles; and some songbirds can take elements of their calls and rearrange them to communicate different messages. But most experts stop short of calling it a language, as no animal communication meets all the criteria.

Until recently, decoding has mostly relied on painstaking observation. But interest has burgeoned in applying machine learning to deal with the huge amounts of data that can now be collected by modern animal-borne sensors. "People are starting to use it," says Elodie Briefer, an associate professor at the University of Copenhagen who studies vocal communication in mammals and birds. "But we don't really understand yet how much we can do."

Briefer co-developed an algorithm that analyses pig grunts to tell whether the animal is experiencing a positive or negative emotion. Another, called DeepSqueak, judges whether rodents are in a stressed state based on their ultrasonic calls. A further initiative, Project CETI (which stands for the Cetacean Translation Initiative), plans to use machine learning to translate the communication of sperm whales.

Yet ESP says its approach is different, because it is not focused on decoding the communication of one species, but all of them. While Raskin acknowledges there will be a higher likelihood of rich, symbolic communication among social animals, for example primates, whales and dolphins, the goal is to develop tools that could be applied to the entire animal kingdom. "We're species agnostic," says Raskin. "The tools we develop can work across all of biology, from worms to whales."

The motivating intuition for ESP, says Raskin, is work that has shown that machine learning can be used to translate between different, sometimes distant human languages without the need for any prior knowledge.

This process starts with the development of an algorithm to represent words in a physical space. In this many-dimensional geometric representation, the distance and direction between points (words) describes how they meaningfully relate to each other (their semantic relationship). For example, "king" has a relationship to "man" with the same distance and direction that "woman" has to "queen". (The mapping is not done by knowing what the words mean but by looking, for example, at how often they occur near each other.)

It was later noticed that these shapes are similar for different languages. And then, in 2017, two groups of researchers working independently found a technique that made it possible to achieve translation by aligning the shapes. To get from English to Urdu, align their shapes and find the point in Urdu closest to the word's point in English. "You can translate most words decently well," says Raskin.
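The alignment step can be sketched with orthogonal Procrustes, one standard technique for this kind of embedding alignment (not necessarily the one either research group used). The toy "embeddings" below are random 3-D vectors, and the second "language" is constructed as an exact rotation of the first, so recovery is exact; real systems work in hundreds of dimensions with noisy, inexact correspondences.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "English" embeddings: five words as made-up 3-D vectors (real models
# use hundreds of dimensions learned from co-occurrence statistics).
en_words = ["king", "queen", "man", "woman", "apple"]
en = rng.normal(size=(5, 3))

# Toy "Urdu" embeddings: constructed here as an exact rotation of the English
# space, mimicking the finding that different languages share a common geometry.
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # a random orthogonal matrix
ur = en @ R_true
ur_words = en_words  # stand-ins; real pipelines anchor on a small seed lexicon

# Orthogonal Procrustes: the rotation best aligning the two point clouds is
# U @ Vt, from the SVD of the cross-covariance matrix.
U, _, Vt = np.linalg.svd(en.T @ ur)
R = U @ Vt

# "Translate" queen: map its English vector into the Urdu space, then take
# the nearest Urdu word.
mapped = en[en_words.index("queen")] @ R
nearest = int(np.argmin(np.linalg.norm(ur - mapped, axis=1)))
print(ur_words[nearest])   # queen
```

In practice the shapes never align perfectly, which is why translation works for "most words decently well" rather than all of them.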

ESP's aspiration is to create these kinds of representations of animal communication, working on both individual species and many species at once, and then explore questions such as whether there is overlap with the universal human shape. "We don't know how animals experience the world," says Raskin, "but there are emotions, for example grief and joy, that it seems some share with us and may well communicate about with others in their species. I don't know which will be the more incredible: the parts where the shapes overlap and we can directly communicate or translate, or the parts where we can't."

He adds that animals don't only communicate vocally. Bees, for example, let others know of a flower's location via a "waggle dance". There will be a need to translate across different modes of communication too.

The goal is "like going to the moon", acknowledges Raskin, but the idea also isn't to get there all at once. Rather, ESP's roadmap involves solving a series of smaller problems necessary for the bigger picture to be realised. This should see the development of general tools that can help researchers trying to apply AI to unlock the secrets of species under study.

For example, ESP recently published a paper (and shared its code) on the so-called "cocktail party problem" in animal communication, in which it is difficult to discern which individual in a group of the same animals is vocalising in a noisy social environment.

"To our knowledge, no one has done this end-to-end detangling [of animal sound] before," says Raskin. The AI-based model developed by ESP, which was tried on dolphin signature whistles, macaque coo calls and bat vocalisations, worked best when the calls came from individuals the model had been trained on; but with larger datasets it was able to disentangle mixtures of calls from animals not in the training cohort.

Another project involves using AI to generate novel animal calls, with humpback whales as a test species. The novel calls, made by splitting vocalisations into micro-phonemes (distinct units of sound lasting a hundredth of a second) and using a language model to "speak" something whale-like, can then be played back to the animals to see how they respond. "If the AI can identify what makes a random change versus a semantically meaningful one, it brings us closer to meaningful communication," explains Raskin. "It is having the AI speak the language, even though we don't know what it means yet."

A further project aims to develop an algorithm that ascertains how many call types a species has at its command by applying self-supervised machine learning, which does not require any labelling of data by human experts to learn patterns. In an early test case, it will mine audio recordings made by a team led by Christian Rutz, a professor of biology at the University of St Andrews, to produce an inventory of the vocal repertoire of the Hawaiian crow a species that, Rutz discovered, has the ability to make and use tools for foraging and is believed to have a significantly more complex set of vocalisations than other crow species.
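One plausible shape for such a repertoire-size estimate (a stand-in, not ESP's actual method) is to cluster acoustic features of recorded calls and ask how many clusters the data supports. A toy sketch with made-up two-dimensional "call features" and a crude elbow rule:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical acoustic features (e.g. duration, peak frequency) for 80
# recorded calls: two genuinely distinct call types, unknown to the algorithm.
calls = np.vstack([
    rng.normal([0.1, 2.0], 0.05, size=(40, 2)),   # call type A
    rng.normal([0.8, 6.0], 0.05, size=(40, 2)),   # call type B
])

def kmeans_inertia(X, k, iters=25):
    """Plain k-means with deterministic farthest-point initialisation;
    returns the total within-cluster squared distance (inertia)."""
    centres = [X[0]]
    for _ in range(k - 1):
        dists = np.min([((X - c) ** 2).sum(1) for c in centres], axis=0)
        centres.append(X[np.argmax(dists)])   # next centre: farthest point
    centres = np.array(centres)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(2), axis=1)
        centres = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centres[j] for j in range(k)])
    return ((X - centres[labels]) ** 2).sum()

# Crude elbow rule: the repertoire size is the smallest k that leaves almost
# none of the total variance unexplained.
total = ((calls - calls.mean(0)) ** 2).sum()
repertoire = next(k for k in range(1, 6)
                  if kmeans_inertia(calls, k) < 0.05 * total)
print("estimated call types:", repertoire)   # 2
```

The real self-supervised methods work on raw audio and learn the features as well as the clusters, but the underlying question, how many natural groupings the vocalisations fall into, is the same.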

Rutz is particularly excited about the project's conservation value. The Hawaiian crow is critically endangered and only exists in captivity, where it is being bred for reintroduction to the wild. It is hoped that, by taking recordings made at different times, it will be possible to track whether the species's call repertoire is being eroded in captivity; specific alarm calls may have been lost, for example, which could have consequences for its reintroduction. That loss might be addressed with intervention. "It could produce a step change in our ability to help these birds come back from the brink," says Rutz, adding that detecting and classifying the calls manually would be labour intensive and error prone.

Meanwhile, another project seeks to automatically understand the functional meanings of vocalisations. It is being pursued with the laboratory of Ari Friedlaender, a professor of ocean sciences at the University of California, Santa Cruz. The lab studies how wild marine mammals, which are difficult to observe directly, behave underwater, and runs one of the world's largest tagging programmes. Small electronic biologging devices attached to the animals capture their location, type of motion and even what they see (the devices can incorporate video cameras). The lab also has data from strategically placed sound recorders in the ocean.

ESP aims to first apply self-supervised machine learning to the tag data to automatically gauge what an animal is doing (for example, whether it is feeding, resting, travelling or socialising) and then add the audio data to see whether functional meaning can be given to calls tied to that behaviour. (Playback experiments could then be used to validate any findings, along with calls that have been decoded previously.) This technique will be applied to humpback whale data initially; the lab has tagged several animals in the same group, so it is possible to see how signals are given and received. Friedlaender says he was "hitting the ceiling" in terms of what currently available tools could tease out of the data. "Our hope is that the work ESP can do will provide new insights," he says.

But not everyone is as gung ho about the power of AI to achieve such grand aims. Robert Seyfarth is a professor emeritus of psychology at the University of Pennsylvania who has studied social behaviour and vocal communication in primates in their natural habitat for more than 40 years. While he believes machine learning can be useful for some problems, such as identifying an animal's vocal repertoire, there are other areas, including the discovery of the meaning and function of vocalisations, where he is sceptical it will add much.

The problem, he explains, is that while many animals can have sophisticated, complex societies, they have a much smaller repertoire of sounds than humans. The result is that the exact same sound can be used to mean different things in different contexts, and it is only by studying the context (who the individual calling is, how they are related to others, where they fall in the hierarchy, who they have interacted with) that meaning can hope to be established. "I just think these AI methods are insufficient," says Seyfarth. "You've got to go out there and watch the animals."

There is also doubt about the concept that the shape of animal communication will overlap in a meaningful way with human communication. Applying computer-based analyses to human language, with which we are so intimately familiar, is one thing, says Seyfarth. But it can be quite different doing it to other species. "It is an exciting idea, but it is a big stretch," says Kevin Coffey, a neuroscientist at the University of Washington who co-created the DeepSqueak algorithm.

Raskin acknowledges that AI alone may not be enough to unlock communication with other species. But he refers to research that has shown many species communicate in ways more complex than humans have ever imagined. The stumbling blocks have been our ability to gather sufficient data and analyse it at scale, and our own limited perception. "These are the tools that let us take off the human glasses and understand entire communication systems," he says.


Elon Musk and Silicon Valley's Overreliance on Artificial Intelligence – The Wire

When the richest man in the world is being sued by one of the most popular social media companies, it's news. But while most of the conversation about Elon Musk's attempt to cancel his $44 billion contract to buy Twitter is focusing on the legal, social, and business components, we need to keep an eye on how the discussion relates to one of the tech industry's most buzzy products: artificial intelligence.

The lawsuit shines a light on one of the most essential issues for the industry to tackle: what can and can't AI do, and what should and shouldn't AI do? The Twitter v. Musk contretemps reveals a lot about the thinking about AI in tech and startup land and raises issues about how we understand the deployment of the technology in areas ranging from credit checks to policing.

At the core of Musk's claim for why he should be allowed out of his contract with Twitter is an allegation that the platform has done a poor job of identifying and removing spam accounts. Twitter has consistently claimed in quarterly filings that less than 5% of its active accounts are spam; Musk thinks it's much higher than that. From a legal standpoint, it probably doesn't really matter if Twitter's spam estimate is off by a few percent, and Twitter has been clear that its estimate is subjective and that others could come to different estimates with the same data. That's presumably why Musk's legal team lost in a hearing on July 19 when they asked for more time to perform detailed discovery on Twitter's spam-fighting efforts, suggesting that likely isn't the question on which the trial will turn.

Regardless of the legal merits, it's important to scrutinise the statistical and technical thinking from Musk and his allies. Musk's position is best summarised in his filing from July 15, which states: "In a May 6 meeting with Twitter executives, Musk was flabbergasted to learn just how meager Twitter's process was. Namely: Human reviewers randomly sampled 100 accounts per day (less than 0.00005% of daily users) and applied unidentified standards to somehow conclude every quarter for nearly three years that fewer than 5% of Twitter users were false or spam." The filing goes on to express the flabbergastedness of Musk by adding, "That's it. No automation, no AI, no machine learning."

Perhaps the most prominent endorsement of Musk's argument here came from venture capitalist David Sacks, who quoted it while declaring, "Twitter is toast." But there's an irony in Musk's complaint here: If Twitter were using machine learning for the audit as he seems to think it should, and only labeling spam that was similar to old spam, it would actually produce a lower, less-accurate estimate than it has now.

There are three components to Musk's assertion that deserve examination: his basic statistical claim about what a representative sample looks like, his claim that the spam-level auditing process should be automated or use AI or machine learning, and an implicit claim about what AI can actually do.

On the statistical question, this is something any professional anywhere near the machine learning space should be able to answer (as can many high school students). Twitter uses a daily sampling of accounts to scrutinise a total of 9,000 accounts per quarter (averaging about 100 per calendar day) to arrive at its under-5% spam estimate. Though that sample of 9,000 users per quarter is, as Musk notes, a very small portion of the 229 million active users the company reported in early 2022, a statistics professor (or student) would tell you that that's very much not the point. Statistical significance isn't determined by what percentage of the population is sampled but simply by the actual size of the sample in question. As Facebook whistleblower Sophie Zhang put it, you can make the comparison to soup: "It doesn't matter if you have a small or giant pot of soup; if it's evenly mixed you just need a spoonful to taste-test."

The whole point of statistical sampling is that you can learn most of what you need to know about the variety of a larger population by studying a much smaller but decently sized portion of it. Whether the person drawing the sample is a scientist studying bacteria, or a factory quality inspector checking canned vegetables, or a pollster asking about political preferences, the question isn't "what percentage of the overall whole am I checking?" but rather "how much should I expect my sample to look like the overall population for the characteristics I'm studying?" If you had to crack open a large percentage of your cans of tomatoes to check for their quality, you'd have a hard time making a profit, so you want to check the fewest possible to get within a reasonable range of confidence in your findings.

Also read: Why Understanding This '60s Sci-Fi Novel Is Key to Understanding Elon Musk

While this thinking does go against the grain of certain impulses (there's a reason why many people make this mistake), there is also a way to make this approach to sampling more intuitive. Think of the goal in setting sample size as getting a reasonable answer to the question: If I draw another sample of the same size, how different would I expect it to be? A classic approach to explaining this problem is to imagine you've bought a great mass of marbles that is supposed to come in a specific ratio: 95% purple marbles and 5% yellow marbles. You want to do a quality inspection to ensure the delivery is good, so you load them into one of those bingo game hoppers, turn the crank, and start counting the marbles you draw in each color. Let's say your first sample of 20 marbles has 19 purple and one yellow; should you be confident that you got the right mix from your vendor? You can probably intuitively understand that the next 20 random marbles you draw could end up being very different, with zero yellows or seven. But what if you draw 1,000 marbles, around the same as the typical political poll? What if you draw 9,000 marbles? The more marbles you draw, the more you'd expect the next drawing to look similar, because it's harder to hide random fluctuations in larger samples.
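That intuition is easy to check with a quick simulation (a hypothetical sketch, not part of the article's argument): draw repeated random samples from a population that is 5% yellow, and see how much the observed yellow share swings at each sample size.

```python
import random

random.seed(42)

def sample_yellow_fraction(n, p=0.05):
    """Draw n marbles, each yellow with probability p; return the yellow share."""
    return sum(random.random() < p for _ in range(n)) / n

for n in (20, 1000, 9000):
    # Repeat the draw 200 times and record the range of observed fractions.
    fractions = [sample_yellow_fraction(n) for _ in range(200)]
    print(f"n={n}: observed yellow share ranged from "
          f"{min(fractions):.1%} to {max(fractions):.1%}")
```

With samples of 20 the observed share bounces between roughly 0% and 20%; at 9,000 it stays within about a percentage point of the true 5%, which is exactly the point of the marble example.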

There are online calculators that let you run the numbers yourself. If you only draw 20 marbles and get one yellow, you can have 95% confidence that the yellows would be between 0.13% and 24.9% of the total: not very exact. If you draw 1,000 marbles and get 50 yellows, you can have 95% confidence that yellows would be between 3.7% and 6.5% of the total; closer, but perhaps not something you'd sign your name to in a quarterly filing. At 9,000 marbles with 450 yellow, you can have 95% confidence the yellows are between 4.56% and 5.47%; you're now accurate to within a range of less than half a percent, and at that point Twitter's lawyers presumably told them they'd done enough for their public disclosure.
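Those figures can be reproduced approximately in a few lines. The sketch below uses the Wilson score interval; the article's calculator most likely uses the exact Clopper-Pearson method, so the bounds differ slightly in the last decimal place.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    spread = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - spread, center + spread

# The three sample sizes from the marble example, each with 5% yellows observed.
for yellows, n in [(1, 20), (50, 1000), (450, 9000)]:
    lo, hi = wilson_interval(yellows, n)
    print(f"{yellows}/{n}: 95% CI {lo:.2%} to {hi:.2%}")
```

At n = 9,000 the interval is roughly 4.6% to 5.5%, matching the under-half-a-percent precision described above, while at n = 20 it spans more than twenty percentage points.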


This reality, that statistical sampling works to tell us about large populations based on much smaller samples, underpins every area where statistics is used, from checking the quality of the concrete used to make the building you're currently sitting in, to ensuring the reliable flow of internet traffic to the screen you're reading this on.

It's also what drives all current approaches to artificial intelligence today. Specialists in the field almost never use the term "artificial intelligence" to describe their work, preferring "machine learning." But another common way to describe the entire field as it currently stands is "applied statistics." Machine learning today isn't really computers thinking in anything like the way we assume humans do (to the degree we even understand how humans think, which isn't a great degree); it's mostly pattern-matching and -identification, based on statistical optimisation. If you feed a convolutional neural network thousands of images of dogs and cats and then ask the resulting model to determine if the next image is of a dog or a cat, it'll probably do a good job, but you can't ask it to explain what makes a cat different from a dog on any broader level; it's just recognising the patterns in pictures, using a layering of statistical formulas.

Stack up statistical formulas in specific ways, and you can build a machine learning algorithm that, fed enough pictures, will gradually build up a statistical representation of edges, shapes, and larger forms until it recognises a cat, based on the similarity to thousands of other images of cats it was fed. There's also a way in which statistical sampling plays a role: You don't need pictures of all the dogs and cats, just enough to get a representative sample, and then your algorithm can infer what it needs to about all the other pictures of dogs and cats in the world. And the same goes for every other machine learning effort, whether it's an attempt to predict someone's salary using everything else you know about them, with a boosted random forests algorithm, or to break down a list of customers into distinct groups with a clustering algorithm such as k-means.
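The "pattern-matching, not understanding" point can be made concrete with a toy nearest-centroid classifier (a made-up sketch with invented feature names, not any real model): it labels a new example purely by its distance to the average of past examples, and has nothing to say about what a cat actually is.

```python
import math

def centroid(points):
    """Average of a list of feature vectors."""
    dims = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dims)]

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Toy training data: hypothetical (ear_pointiness, snout_length) features.
training = {
    "cat": [(0.9, 0.2), (0.8, 0.3), (0.95, 0.25)],
    "dog": [(0.3, 0.8), (0.2, 0.9), (0.4, 0.7)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(classify((0.85, 0.3), centroids))  # "cat": it resembles the cat examples
print(classify((0.25, 0.85), centroids))  # "dog": it resembles the dog examples
```

Everything the classifier "knows" is the statistical shape of its training sample; ask it why cats differ from dogs and there is simply no answer to extract.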

You don't absolutely have to understand statistics as well as a student who's recently taken a class in order to understand machine learning, but it helps. Which is why the statistical illiteracy paraded by Musk and his acolytes here is at least somewhat surprising.

But more important, in order to have any basis for overseeing the creation of a machine-learning product, or to have a rationale for investing in a machine-learning company, it's hard to see how one could be successful without a decent grounding in the rudiments of machine learning, and in where and how it is best applied to solve a problem. And yet, Musk's team here is suggesting that it lacks exactly that knowledge.

Once you understand that all machine learning today is essentially pattern-matching, it becomes clear why you wouldn't rely on it to conduct an audit such as the one Twitter performs to check for the proportion of spam accounts. "They're hand-validating so that they ensure it's high-quality data," explained security professional Leigh Honeywell, who's been a leader at firms like Slack and Heroku, in an interview. She added that any data you pull from your machine learning efforts will by necessity be not as validated as those efforts. If you only rely on patterns of spam you've already identified in the past and already engineered into your spam-detection tools in order to find out how much spam there is on your platform, you'll only recognise old spam patterns, and fail to uncover new ones.
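Honeywell's point, that a system built from previously identified spam only finds spam resembling it, can be sketched with a deliberately crude keyword filter (entirely hypothetical phrases and data; real spam tooling is far more sophisticated):

```python
# A filter "trained" on previously identified spam phrases.
KNOWN_SPAM_PHRASES = {"free crypto", "click here", "hot singles"}

def looks_like_known_spam(text):
    """Flag text only if it matches a previously engineered spam pattern."""
    t = text.lower()
    return any(phrase in t for phrase in KNOWN_SPAM_PHRASES)

accounts = [
    "Click here for FREE CRYPTO!!!",   # old-style spam: caught
    "hot singles in your area",        # old-style spam: caught
    "limited NFT drop, DM to claim",   # novel spam pattern: missed
    "just posted my vacation photos",  # legitimate
]

flagged = [a for a in accounts if looks_like_known_spam(a)]
print(f"flagged {len(flagged)} of 3 spam accounts")  # undercounts the novel spam
```

An audit built on such a filter would systematically report less spam than actually exists, which is why hand validation of a random sample is the more defensible estimate.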

Also read: India Versus Twitter Versus Elon Musk Versus Society

Where Twitter should be using automation and machine learning to identify and remove spam is outside of this audit function, which the company seems to do. It wouldn't otherwise be possible to suspend half a million accounts every day and lock millions of accounts each week, as CEO Parag Agrawal claims. In conversations I've had with cybersecurity workers in the field, it's quite clear that a large amount of automation is used at Twitter (though machine learning specifically is actually relatively rare in the field, because the results often aren't as good as other methods, marketing claims by allegedly AI-based security firms to the contrary).

At least in public claims related to this lawsuit, prominent Silicon Valley figures are suggesting they have a different understanding of what machine learning can do, and when it is and isn't useful. This disconnect between how many nontechnical leaders in that world talk about AI and what it actually is has significant implications for how we will ultimately come to understand and use the technology.

The general disconnect between the actual work of machine learning and how it's touted by many company and industry leaders is something data scientists often chalk up to marketing. It's very common to hear data scientists in conversation among themselves declare that "AI" is just a marketing term. It's also quite common to see companies using no machine learning at all describe their work as AI to investors and customers, who rarely know the difference or even seem to care.

This is a basic reality in the world of tech. In my own experience talking with investors who make investments in AI technology, it's often quite clear that they know almost nothing about these basic aspects of how machine learning works. I've even spoken to CEOs of rather large companies that rely at their core on novel machine learning efforts to drive their product, who also clearly have no understanding of how the work actually gets done.

Not knowing or caring how machine learning works, what it can or can't do, and where its application can be problematic could lead society to significant peril. If we don't understand the way machine learning actually works (most often by identifying a pattern in some dataset and applying that pattern to new data), we can be led deep down a path in which machine learning wrongly claims, for example, to measure someone's face for trustworthiness (when this is entirely based on surveys in which people reveal their own prejudices), or that crime can be predicted (when many hyperlocal crime numbers are highly correlated with more police officers being present in a given area, who then make more arrests there), based almost entirely on a set of biased data or wrong-headed claims.

If we're going to properly manage the influence of machine learning on our society, on our systems and organisations and our government, we need to make sure these distinctions are clear. It starts with establishing a basic level of statistical literacy, and moves on to recognising that machine learning isn't magic, and that it isn't, in any traditional sense of the word, intelligent: that it works by pattern-matching to data, that the data has various biases, and that the overall project can produce many misleading and/or damaging outcomes.

It's an understanding one might have expected, or at least hoped, to find among some of those investing most of their life, effort, and money into machine-learning-related projects. If even people that deeply invested aren't making those efforts to sort fact from fiction, it's a poor omen for the rest of us, and for the regulators and other officials who might be charged with keeping them in check.

This article was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.

Read more here:

Elon Musk and Silicon Valley's Overreliance on Artificial Intelligence - The Wire

High Five: Artificial Intelligence-Generated Campaigns and Experiments | LBBOnline – Little Black Book – LBBonline

I can't stop playing with Midjourney. It may signal the end of human creativity or the start of an exciting new era, but here's me, like a monkey at a typewriter, chucking random words into the algorithm for an instant hit of this-shouldn't-be-as-good-as-it-is art.

For those who don't know, Midjourney is one of a number of image-generating AI algorithms that can turn written prompts into otherworldly pictures. It, along with OpenAI's DALL-E 2, has been having something of a moment in the last month as people get their hands on them and try to push them to their limits. Craiyon - formerly DALL-E mini - is an older, less refined and very much wobblier platform to try too. It's worth having a go just to get a feel for what these algorithms can and can't do - though be warned, the dopamine hit of seeing some silly words turn into something strange, beautiful, terrifying or cool within seconds is quite addictive. A confused dragon playing chess. A happy apple. A rat transcends and perceives the oneness of the universe, pulsing with life. Yes Sir, I can boogie.

Within the LBB editorial team, we've been having lots of discussions about the implications of these art-generating algorithms. What are the legal and IP ramifications for those artists whose works are mined and drawn into the data set (on my Midjourney server, Klimt and HR Giger seem to be the most popular artists to replicate - but what of more contemporary artists?). Will the industry use this to find unexpected new looks that go beyond the human creative habits and rules - or will we see content pulled directly from the algorithm? How long will it take for the algorithms to iron out the wonky weirdness that can sometimes take the human face way beyond the uncanny valley to a nightmarish, distorted abyss? What are the keys to writing prompts when you are after something very specific? Why does the algorithm seem to struggle when two different objects are requested in the same image?

Unlike other technologies that have shaken up the advertising industry, these image-generating algorithms are relatively accessible and easy to use (DALL-E 2's waitlist aside). The results are almost instant - and the possibilities, for now, seem limitless. We've already seen a couple of brands have a go with campaigns that are definitely playing on the novelty and PR angle of this new technology - and also a few really intriguing art projects too...

Agency: Rethink

The highest-profile commercial campaign of the bunch is Rethink's new Heinz campaign. It's a follow-up to a previous campaign, in which humans were asked to draw a bottle of ketchup and ended up all drawing a bottle of Heinz. This time around, the team asked DALL-E 2 - and the algorithm, like its human predecessors, couldn't help but create images that looked like Heinz-branded bottles (albeit with a funky AI spin). In this case, the AI is used to reinforce and revisit the original idea - but how long will it take before we're using AIs to generate ideas for boards or pitch images?

Agency: 10 Days

Animation: Jeremy Higgins

This artsy animated short by art director and designer Jeremy Higgins is a delight and shows how a sequence of similar AI-generated images can serve as frames in a film. The flickering effect ironically gives the animation a very handmade stop-motion style, reminding me of films that use individual oil paintings as frames. It's a really vivid encapsulation of what it feels like to be sucked into a Midjourney rabbit hole too... I also have to tip my hat to Stefan Sagmeister, who shared this film on his Instagram account.

For the latest issue of Cosmopolitan, creative Karen X Cheng used DALL-E 2 to create a dramatic and imposing cover, using the prompt: 'a strong female president astronaut warrior walking on the planet Mars, digital art synthwave'. There's a deep dive into the creative process, which also examines some of the potential ramifications of the technology, on the Cosmopolitan website that's well worth a read.

Studio: T&DA

Here's a cheeky sixth entry to High Five. This execution is part of a wider summer platform for BT Sport, centred around belief - in this case, football pundit Robbie Savage is served up a DALL-E 2 image of striker Aleksandar Mitrović lifting the golden boot. Fulham has just been promoted to the Premier League - but though Robbie can see it, he can't quite believe it.

Read more from the original source:

High Five: Artificial Intelligence-Generated Campaigns and Experiments | LBBOnline - Little Black Book - LBBonline

Artificial Intelligence In Insurtech Market Is Expected to Boom- Cognizant, Next IT Corp, Kasisto – Digital Journal

New Jersey, N.J., Aug 04, 2022 - The Artificial Intelligence In Insurtech Market research report provides all the information related to the industry. It gives an outlook on the market by providing authentic data to clients, which helps them make essential decisions. It gives an overview of the market, including its definition, applications, developments, and manufacturing technology. This Artificial Intelligence In Insurtech market research report tracks all the recent developments and innovations in the market. It provides data regarding the obstacles to establishing a business and offers guidance for overcoming upcoming challenges and obstacles.

Artificial Intelligence (AI) can help insurers assess risk, detect fraud, and reduce human error in the claim process. As a result, insurers are better equipped to sell the most appropriate plans to their customers. Customers benefit from the improved claims handling and processing provided by Artificial Intelligence.

Increased investment by insurance companies in artificial intelligence and machine learning, as well as increased preference for personalized insurance services, is conducive to the growth of the global artificial intelligence in insurance market. In addition, the increase in cooperation between insurance companies and companies dealing in AI and machine learning solutions positively influences the development of AI in the insurance market.

Get the PDF Sample Copy (Including FULL TOC, Graphs, and Tables) of this report @:

https://www.a2zmarketresearch.com/sample-request/670659

Competitive landscape:

This Artificial Intelligence In Insurtech research report throws light on the major market players thriving in the market; it tracks their business strategies, financial status, and upcoming products.

Some of the top companies influencing this market include: Cognizant, Next IT Corp, Kasisto, Cape Analytics Inc., Microsoft, Google, Salesforce, Amazon Web Services, Lemonade, Lexalytics, H2O.ai

Market Scenario:

Firstly, this Artificial Intelligence In Insurtech research report introduces the market by providing an overview that includes definition, applications, product launches, developments, challenges, and regions. The market is forecast to show strong development, driven by consumption in various markets. An analysis of the current market designs and other basic characteristics is provided in the Artificial Intelligence In Insurtech report.

Regional Coverage:

The region-wise coverage of the market is mentioned in the report, mainly focusing on the regions:

Segmentation Analysis of the market

The market is segmented on the basis of type, product, end users, raw materials, etc. The segmentation helps to deliver a precise explanation of the market.

Market Segmentation: By Type

Service, Product,

Market Segmentation: By Application

Automotive, Healthcare, Information Technology, Others,

For Any Query or Customization: https://a2zmarketresearch.com/ask-for-customization/670659

An assessment of the market attractiveness with regard to the competition that new players and products are likely to present to older ones has been provided in the publication. The research report also mentions the innovations, new developments, marketing strategies, branding techniques, and products of the key participants present in the global Artificial Intelligence In Insurtech market. To present a clear vision of the market the competitive landscape has been thoroughly analyzed utilizing the value chain analysis. The opportunities and threats present in the future for the key market players have also been emphasized in the publication.

This report aims to provide:

Table of Contents

Global Artificial Intelligence In Insurtech Market Research Report 2022-2029

Chapter 1 Artificial Intelligence In Insurtech Market Overview

Chapter 2 Global Economic Impact on Industry

Chapter 3 Global Market Competition by Manufacturers

Chapter 4 Global Production, Revenue (Value) by Region

Chapter 5 Global Supply (Production), Consumption, Export, Import by Regions

Chapter 6 Global Production, Revenue (Value), Price Trend by Type

Chapter 7 Global Market Analysis by Application

Chapter 8 Manufacturing Cost Analysis

Chapter 9 Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter 10 Marketing Strategy Analysis, Distributors/Traders

Chapter 11 Market Effect Factors Analysis

Chapter 12 Global Artificial Intelligence In Insurtech Market Forecast

Buy Exclusive Report @: https://www.a2zmarketresearch.com/checkout

Contact Us:

Roger Smith

1887 WHITNEY MESA DR HENDERSON, NV 89014

[emailprotected]

+1 775 237 4157

Link:

Artificial Intelligence In Insurtech Market Is Expected to Boom- Cognizant, Next IT Corp, Kasisto - Digital Journal

Artificial Intelligence is the Future of the Banking Industry Are You Prepared for It? – International Banker

By Pritham Shetty, Consulting Director, Propel Technology Group Inc

Our world is moving at a fast pace. Though banks originally built their foundations to be run solely by humans, the time has come for artificial intelligence in the banking industry. In 2020, the global AI banking market was valued at $3.88 billion, and it is projected to reach $64.03 billion by the end of the decade, with a compound annual growth rate of 32.6%. However, when it comes to implementing even the best strategies, the application of artificial intelligence in the banking industry is susceptible to weak core tech and poor data backbones.

By my count, there were 20,000 new banking regulatory requirements created in 2015 alone. Chances are your business won't find a one-size-fits-all solution for dealing with this. The next-best option is to be nimble. You need to be able to break down the business process into small chunks. By doing so, you can come up with digital strategies that work with new and existing regulations.

AI can take you a long way in this process, but you must know how to harness its power. Take originating home loans, for instance. This can be an important, sometimes tedious, process for the loan seeker and the bank. With an AI solution, loan origination can happen more quickly and be more beneficial to both parties.

As the world of banking moves toward AI, it is integral to note that the crucial working element for AI is data. The trick to using that data is to understand how to leverage it best for your business value. Data with no direction won't lead to progress, nor will it lead to the proper deployment of your AI. That is one of the top reasons it is so challenging to implement AI in banks: there has to be a plan.

Even if you come up with a poor strategy, those mistakes can be course-corrected over time. It takes some time and effort, but it is doable. If you home in on how customer information can be used, you can utilize AI for banking services in a way that is scalable and actionable. Once you understand how to use the data you collect, you can develop technical solutions that work with each other, identify specific needs, and build data pipelines that will lead you down the road to AI.

How is artificial intelligence changing the banking sector?

Due to the increasingly digital world, customers have more access to their banking information than ever. Of course, this can lead to other problems. Because there is so much access to data, there are also prime opportunities for fraudulent activities, and this is one example of how AI is changing the banking sector. With AI, you can train systems to learn, understand, and recognize when these activities happen. In fact, there was a 5% decrease in record exposure from 2020 to 2021.

AI also safeguards against data theft or abuse. Not only can AI recognize breaches from outside sources, but it can also recognize internal threats. Once an AI system is trained, it can identify these problems and even offer solutions to them. For instance, a customer support call center can have traffic directed by AI to assuage an influx of calls during high-volume fluctuations.

Another great example of this is the development of conversational AI platforms. The ubiquity of social media and other online platforms can be used to tailor customer experiences directly led by AI. By using the data gathered from all sources, AI can greatly improve the customer experience overall.

For example, a loan might take anywhere from seven to 45 days to be granted. But with AI, the process can be expedited not only for the customer, but also for the bank. By using AI in a situation such as this, your bank can assess the risk it is taking on by servicing loans. It can also make the process faster by performing underwriting, document scanning, and other manual processes previously associated with data collection. On top of all that, AI can gather and analyze data about your customers' behaviors throughout their banking lives.

In the past, so much of this work was done solely by people. Although automation has certainly helped speed up and simplify tasks, it is used for tedium and doesn't have the complexity of AI. AI saves time and money by freeing up your employees for other processes, and it provides valuable insights to your customers. And customers can budget better and have a clearer idea of where their money is going.

Even the most traditional banks will want to adopt AI to save time and money and allow employees more opportunities to have positive one-on-one relationships with customers. Look no further than fintech companies such as Credijusto, Nubank, and Monzo that have digitized traditional banking services through the power of cutting-edge tech.

Are you ready to put AI to work for your business?

Today, it's not a question of how AI is impacting financial services. Now, it's about how to implement it. That all starts with you. You must ask the right questions: What are your goals for implementing AI? Do you want to improve your internal processes? Simply provide a better customer service experience? If so, how should you implement AI for your banking services? Start with these strategies:

By making realistic short-term goals, you set yourself up for future success. These are the solutions that will be the building blocks for the type of AI everyone will aspire to use.

You want to ensure that you know how you currently use data and how you plan on using it in the future. Again, this sets your organization up for success in the long run. If you don't have the right practices now, you certainly won't going forward.

As you implement AI into your banking practices, you should know exactly how you generate data. Then, you must understand how you interpret it. What is the best use for it? After that, you can make decisions that will be scalable, useful, and seamless.

Technology has not only made the world around us move faster, but also made it better in so many ways. Traditional institutions such as banks might be slow to adopt, but we've already seen how artificial intelligence is changing the banking sector. By taking the proper steps, you could be moving right along with it into the future.

See the article here:

Artificial Intelligence is the Future of the Banking Industry Are You Prepared for It? - International Banker

Some Idiot Asked The Dall.E mini Artificial Intelligence Program What The Last Selfies Of Humans Will Look Like And Good News, We’re Definitely Headed…

Metro - A TikToker asked Dall.E mini, the popular Artificial Intelligence (AI) image generator, what the last selfies on earth would look like, and the results are chilling.

In a series of videos titled "Asking an Ai to show the last selfie ever taken in the apocalypse," a TikTok account called @robotoverloards shared the disturbing images.

Each image shows a person taking a selfie set against an apocalyptic background featuring scenes of a nuclear wasteland and catastrophic weather, along with cities burning and even zombies.

Dall.E mini, now renamed to Craiyon, is an AI model that can draw images from any text prompt.

The image generator uses artificial intelligence to make photos based on the text you put in.

The image generator is connected to an artificial intelligence that has, for some time, been scraping the web for images to learn what things are. Often it will draw this from the captions attached to the pictures.

What's up everybody? I'm back with my weekly "old man screaming at the clouds" rant about how artificial intelligence is going to wipe our species clean off the planet and it's blatantly telling us this and we continue to ignore it.

Look at this shit.

Does this look like a good time to anybody that's not "metal as fuck"?

No. Absolutely not.

What's worse than the disfiguration in all these beauties' selfies is the devastation in the landscapes behind them.

That shit looks like straight-up nuclear winter to my virgin eyes.

Call it Skynet, Boston Dynamics, Dall.E mini, whatever the fuck you want. Bottom line is it's robot scum, and our man Stephen Hawking told us years ago, and Elon Musk is telling us now, that A.I. is going to be the end-all be-all of Homo sapiens. That's us. And that's a fucking wrap.

p.s. - the only thing that could make a nuclear/zombie apocalypse worse is this song playing on repeat in your head

The rest is here:

Some Idiot Asked The Dall.E mini Artificial Intelligence Program What The Last Selfies Of Humans Will Look Like And Good News, We're Definitely Headed...

PennWest Edinboro hosts astronomy camp for students with visual impairments – Edinboro University

Dr. David Hurd, director of Edinboro's planetarium, works with Emma Pabrazinsky, of Johnstown, Pa., and YiLi Smedley, from Titusville, Pa., during the Summer Astronomy Camp for Blind and Visually Impaired High School Students.

The last total solar eclipse that was clearly visible from North America occurred in August 2017. The next major eclipse will pass over Edinboro on April 8, 2024, as the community lies within the path of totality where the moon covers the sun completely.

As astronomy enthusiasts prepare for the upcoming celestial phenomenon, Dr. David Hurd, director of PennWest Edinboro's planetarium, is expanding his outreach to students with visual impairments, significantly widening the audience for solar eclipses.

"Being able to expand viewership of the eclipse, and any astronomical event for that matter, is what motivates me to continue teaching astronomy," said Hurd, who has been teaching science and astronomy-related courses at Edinboro for 30 years. "It is those magical moments when I see the lightbulb come on for my students who may have never experienced astronomy visually but are starting to connect the dots in their mind's eye."

This summer, Hurd is hosting groups from Erie, Crawford, Venango and Allegheny counties for an on-campus astronomy camp for students with visual impairments. During the camps, participants use tangible items to re-create the moon phases and a selection of tactile books to research eclipses and characteristics of the moon, our solar system and stars.

Using Hurd's multisensory textbooks, students experience the total solar eclipse through their fingertips and ears. In 2017, Hurd led the production of "Getting a Feel for Eclipses," the official NASA braille guide to the Great American Eclipse, which occurred that August.

Dr. Hurd and Pittsburgh high school student Angelina Angelcyk discover the world of eclipses with Hurd's tactile guide.

Students with visual impairments can scan the QR code on the cover of the book to access audio tracks with narration. Inside the book, students can find Braille images and text to experience eclipses and learn about solar and lunar patterns. During the camp, students explored seven books and many different hands-on activities that highlight astronomical concepts.

Each book was written and produced by Hurd and his colleague, Dr. Cass Runyon of the College of Charleston, and was supported through Joe Minafra, who works with the Solar System Exploration Research Virtual Institute, an arm of NASA.

Edinboro sophomore Lexi Pollock, a biology major, and Edinboro graduate Ken Quinn, who has a visual impairment, assisted Hurd with the summer camp.

"I have worked with students with many different impairments, but I particularly like working with the blind and visually impaired community, as they have a unique way of understanding and making sense of our world and universe," Hurd said. "And why should our enjoyment of the heavens be limited to just sight? Aren't we all haptic learners who like to touch and understand? That's often why we make models."

Hurd is no stranger to bridging gaps between teaching science and reaching students with disabilities and visual impairments.

Partnering with Runyon, Hurd has produced many products for NASA that have helped bridge the gap between the research community and special needs students. Highlights of their work include the Tactile Guide to the Solar System, which is part of the Lunar Nautics toolkit, through NASA's Central Operations of Resources for Educators (CORE).

Recently, Hurd served as co-principal investigator on a major Department of Education grant that educated science teachers on how to better address the needs of students with visual impairments, and he is currently working with San Jose State University on a National Science Foundation grant.

The Summer Astronomy Camp for Blind and Visually Impaired High School Students was funded through the Pennsylvania Space Grant Consortium.
