Supercomputer predicts every League One result this weekend with Sheffield Wednesday, Barnsley, Derby County, Ipswich Town, Lincoln City and Wycombe…

Lifelong fan David Clowes is now in charge and senior operators recruited in the likes of David McGoldrick, James Chester, Tom Barkhuizen, Conor Hourihane and James Collins give their promotion pitch an immediate element of seriousness and respect.

Whether Derby, in their first season at this level since 1985-86, return to the Championship will probably hinge on the impact of two talented young midfielders, Jason Knight and Max Bird, and on whether they can keep them.

Another relegated side in Peterborough will also be conscious of keeping the family silver, with teenage defender Ronnie Edwards being courted by Manchester City. Sammie Szmodics, Harrison Burrows and Jack Taylor have also been linked with moves to higher-division clubs.

Further forward, Posh have a natural scorer at this level in ex-Rotherham player Jonson Clarke-Harris. If he fires and key players are retained, they should have a strong season led by a successful manager at this level in ex-Hull and Doncaster boss Grant McCann.

After the sale of Harry Darling and Scott Twine, MK Dons will do well to emulate their play-off feats of 2021-22. At the other end of Buckinghamshire, Wycombe's hopes will probably depend on getting another good year from elder statesmen Sam Vokes and Garath McCleary.

Ipswich, who have signed ex-Leeds defender Leif Davis for a significant fee for a League One club and former Millers forward Freddie Ladapo, should be firmly in the picture. The signing of Marcus Harness, for around £600,000, is a further indicator of their ambition.

The ambitions of the above clubs mean that Sheffield Wednesday and Barnsley will have their work cut out this season as they seek promotion.

Ahead of the opening weekend's third-tier action, data experts at FiveThirtyEight have crunched the numbers to give the probable outcome of every match...

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 62%. Away win - 18%. Draw - 20%.

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 41%. Away win - 33%. Draw - 26%.

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 56%. Away win - 19%. Draw - 24%.

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 35%. Away win - 41%. Draw - 24%.

National Science Foundation and European Awards support students in Sweden and the U.S. – WSU Insider, WSU News

Students from Washington State University and Sweden's Linköping University will participate in a pioneering exchange and research program in engineering and scientific computing, emphasizing the computing-based design philosophy that is supporting the international development of Boeing's and Saab's new T-7A Red Hawk training aircraft.

The aircraft is an all-new advanced pilot training system designed for the U.S. Air Force, which will train the next generation of pilots for decades to come. As Boeing and Sweden's Saab have long-standing ties to WSU and LiU, respectively, they will also support this program, which provides students with an unparalleled opportunity to learn how challenging designs are advanced through the international cooperation of multinational corporations.

WSU was awarded $300,000 by the National Science Foundation to support WSU students in Sweden. The WSU-LiU team also received matching funding from the European Erasmus+ program and the Swedish Foundation for International Cooperation in Research and Higher Education to support the LiU students at WSU.

"One of the objectives of this program is to graduate profession-ready students who are internationally educated and ready for leadership in a globalized society," said Joseph Iannelli, a professor of mechanical engineering in WSU's School of Engineering and Applied Sciences, who is leading the program.

Jan Nordström, a distinguished professor of computational mathematics, and Andrew Winters, a WSU alumnus and assistant professor in computational mathematics, will supervise the students' research projects at LiU.

"LiU's multi-disciplinary strategy with Boeing and Saab projects will expand students' preparation for international high-tech environments," said Nordström.

"This project will also prepare students for employment opportunities with corporations that employ scientific computing and operate in the U.S. and Sweden," said Iannelli. "Students will benefit from studying in Sweden and the U.S. while gaining familiarity with the cultures of both countries."

Boeing and Saab will enrich this program. They will advise on aerospace-related scientific computing projects and mentor students, who will be offered opportunities for company site visits and internships. Students will also learn how computing-based designs lower development costs, increase first-time quality of prototypes, and decrease the time to bring complex systems, such as aircraft, to market.

"Boeing is proud to support the education of up-and-coming engineers through this unique exchange and research program," said Craig Bomben, Boeing vice president of flight operations and enterprise chief pilot. "This partnership will prepare students for the engineering field and help them fulfill their career ambitions."

WSU and LiU have been developing their international partnership for several years, after Iannelli's 2018 outreach to LiU. Thereafter, the two universities signed a memorandum of understanding and a reciprocal student exchange agreement.

A comprehensive, internationally ranked peer university, LiU emphasizes multidisciplinary research and manages Sweden's National Supercomputer Centre (NSC). By pooling their teams and financial resources, WSU and LiU can advance education and research at the international level more effectively, Iannelli said.

The three-year project will involve 42 diverse students: 21 from WSU and 21 from LiU. Each of the participating WSU students will receive a $12,000 fellowship. The project synergistically integrates two study-abroad semesters with a research experience and matched student cohorts. At LiU, the WSU students will collaborate with an equal number of LiU students, who will then complete an exchange semester at WSU. The Swedish students will help the WSU students navigate the local culture at LiU, and vice versa at WSU.

In Sweden, the WSU and LiU students will learn how physical systems function through computer-based simulations that rely on mathematical algorithms. The WSU students will also take English-taught courses at LiU and transfer their academic credits towards their WSU degree requirements. The program is expected to begin in January.

Supercomputer predicts every League Two result this weekend with Bradford City, Doncaster Rovers, Swindon Town, Mansfield Town, Tranmere Rovers and…

Last season's beaten play-off finalists Mansfield Town, managed by a shrewd operator in the shape of ex-Sheffield United chief Nigel Clough, lock horns with Salford City.

The Stags' recruitment has been pretty quiet in truth, but they have retained last season's cast, who have considerable experience at lower-division level, in the likes of Stephen McLaughlin, Jordan Bowery, Ollie Hawkins, John-Joe O'Toole, George Lapslie, George Maris and Rhys Oates.

Elliot Watt rejected fresh terms at Bradford to join Salford, who have also signed ex-Barnsley duo Stevie Mallan and Elliot Simoes and striker Callum Hendry, who hit 12 goals for St Johnstone in 2021-22.

Not too far away from Salford, Stockport County, back in the EFL after an 11-year absence, have major momentum and their arrivals have also caught the eye. They have brought in former Northampton Town defender Fraser Horsfall, with the Huddersfield-born player and PFA League Two Team of the Year inclusion rejecting higher-level interest to join.

Kyle Wootton, who netted 22 times in all competitions for Notts County last term, has also joined, and another of the leading National League players of last season has signed in Torquay midfielder Connor Lemonheigh-Evans, whose goals total extended into double figures last term.

Crawley Town, backed by American-based crypto consortium WAGMI United LLC, could be ones to watch. They have brought in Newport County striker Dom Telford, who fired 26 goals for the Exiles last term.

Ahead of the opening weekend's fourth-tier action, data experts at FiveThirtyEight have crunched the numbers to give the probable outcome of every match...

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 52%. Away win - 22%. Draw - 26%.

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 43%. Away win - 28%. Draw - 29%.

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 26%. Away win - 51%. Draw - 23%.

Kick-off: 3pm, Saturday. Supercomputer prediction: Home win - 53%. Away win - 22%. Draw - 26%.

Bristol City's predicted finish in worrying verdict as Cardiff City, QPR and others also rated – BristolWorld

The Robins failed to pick up points after defeat to Hull City at the weekend and face Sunderland at home in their next EFL Championship fixture.

Bristol City kicked off their 2022/23 EFL Championship campaign with a 2-1 defeat to Hull City at the MKM Stadium on Saturday.

The Robins are tipped for another difficult season in English football's second tier, and defeat on the opening day of the season came despite Andreas Weimann's 30th-minute opener giving them the lead against the Tigers.

The result has done little to see them climb the predicted final table of the FiveThirtyEight supercomputer, which uses its Soccer Power Index (SPI) ratings to forecast each team's percentage chance of winning the title, reaching the play-offs and being relegated.

The statistics also provide a predicted points tally that each team will finish on when the regular season wraps up in May 2023.

The supercomputer predicts that Nigel Pearson's side will end the season on 56 points, but what would that mean for their league position, and what are their chances of outperforming their current prediction?

Here is how the supercomputer predicts the 2022/23 EFL Championship table will look at the end of the season as of Monday, August 1 - after the weekend's opening fixtures but before Watford take on Sheffield United in the Monday night game:

Win Championship: 23%, Promoted: 47%, Play-offs: 34%, Relegated: <1%

Win Championship: 19%, Promoted: 44%, Play-offs: 34%, Relegated: <1%

Win Championship: 13%, Promoted: 34%, Play-offs: 34%, Relegated: 1%

Win Championship: 12%, Promoted: 33%, Play-offs: 34%, Relegated: 1%

Artificial Intelligence Regulation Updates: China, EU, and U.S. – The National Law Review

Wednesday, August 3, 2022

Artificial Intelligence (AI) systems are poised to drastically alter the way businesses and governments operate on a global scale, with significant changes already under way. This technology has manifested itself in multiple forms including natural language processing, machine learning, and autonomous systems, but with the proper inputs can be leveraged to make predictions, recommendations, and even decisions.

Accordingly, enterprises are increasingly embracing this dynamic technology. A 2022 global study by IBM found that 77% of companies are either currently using AI or exploring AI for future use, creating value by increasing productivity through automation, improved decision-making, and enhanced customer experience. Further, according to a 2021 PwC study, the COVID-19 pandemic increased the pace of AI adoption for 52% of companies as they sought to mitigate the crisis's impact on workforce planning, supply chain resilience, and demand projection.

For these many businesses investing significant resources into AI, it is critical to understand the current and proposed legal frameworks regulating this novel technology. Specifically for businesses operating globally, the task of ensuring that their AI technology complies with applicable regulations will be complicated by the differing standards that are emerging from China, the European Union (EU), and the U.S.

China has taken the lead in moving AI regulations past the proposal stage. In March 2022, China passed a regulation governing companies' use of algorithms in online recommendation systems, requiring that such services be moral, ethical, accountable, and transparent, and that they "disseminate positive energy." The regulation mandates that companies notify users when an AI algorithm is playing a role in determining which information to display to them, and give users the option to opt out of being targeted. Additionally, the regulation prohibits algorithms that use personal data to offer different prices to consumers. We expect these themes to manifest themselves in AI regulations throughout the world as they develop.

Meanwhile in the EU, the European Commission has published an overarching regulatory framework proposal titled the Artificial Intelligence Act, which would have a much broader scope than China's enacted regulation. The proposal focuses on the risks created by AI, with applications sorted into categories of minimal risk, limited risk, high risk, or unacceptable risk. Depending on an application's designated risk level, there will be corresponding government action or obligations. So far, the proposed obligations focus on enhancing the security, transparency, and accountability of AI applications through human oversight and ongoing monitoring. Specifically, companies will be required to register stand-alone high-risk AI systems, such as remote biometric identification systems, in an EU database. If the proposed regulation is passed, the earliest date for compliance would be the second half of 2024, with potential fines for noncompliance ranging from 2-6% of a company's annual revenue.

Additionally, the previously enacted EU General Data Protection Regulation (GDPR) already carries implications for AI technology. Article 22 prohibits decisions based on solely automated processes that produce legal consequences or similar effects for individuals, unless the program gains the user's explicit consent or meets other requirements.

In the United States there has been a fragmented approach to AI regulation thus far, with states enacting their own patchwork AI laws. Many of the enacted regulations focus on establishing various commissions to determine how state agencies can utilize AI technology and to study AI's potential impacts on the workforce and consumers. Common pending state initiatives go a step further and would regulate AI systems' accountability and transparency when they process and make decisions based on consumer data.

On a national level, the U.S. Congress enacted the National AI Initiative Act in January 2021, creating the National AI Initiative, which provides "an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies." The Act created new offices and task forces aimed at implementing a national AI strategy, implicating a multitude of U.S. administrative agencies, including the Federal Trade Commission (FTC), Department of Defense, Department of Agriculture, Department of Education, and the Department of Health and Human Services.

Pending national legislation includes the Algorithmic Accountability Act of 2022, which was introduced in both houses of Congress in February 2022. In response to reports that AI systems can lead to biased and discriminatory outcomes, the proposed Act would direct the FTC to create regulations mandating that covered entities, including businesses meeting certain criteria, perform impact assessments when using automated decision-making processes. This would specifically include those derived from AI or machine learning.

While the FTC has not promulgated AI-specific regulations, this technology is on the agency's radar. In April 2021 the FTC issued a memo which apprised companies that using AI that produces discriminatory outcomes equates to a violation of Section 5 of the FTC Act, which prohibits unfair or deceptive practices. And the FTC may soon take this warning a step farther: in June 2022 the agency indicated that it will submit an Advanced Notice of Preliminary Rulemaking to "ensure that algorithmic decision-making does not result in harmful discrimination," with the public comment period ending in August 2022. The FTC also recently issued a report to Congress discussing how AI may be used to combat online harms, ranging from scams, deep fakes, and opioid sales, but advised against over-reliance on these tools, citing the technology's susceptibility to producing inaccurate, biased, and discriminatory outcomes.

Companies should carefully discern whether other non-AI-specific regulations could subject them to potential liability for their use of AI technology. For example, the U.S. Equal Employment Opportunity Commission (EEOC) put forth guidance in May 2022 warning companies that their use of algorithmic decision-making tools to assess job applicants and employees could violate the Americans with Disabilities Act by, in part, intentionally or unintentionally screening out individuals with disabilities. Further analysis of the EEOC's guidance can be found here.

Many other U.S. agencies and offices are beginning to delve into the fray of AI. In November 2021, the White House Office of Science and Technology Policy solicited engagement from stakeholders across industries in an effort to develop a Bill of Rights for an Automated Society. Such a Bill of Rights could cover topics like AI's role in the criminal justice system, equal opportunities, consumer rights, and the healthcare system. Additionally, the National Institute of Standards and Technology (NIST), which falls under the U.S. Department of Commerce, is engaging with stakeholders to develop a voluntary risk management framework for trustworthy AI systems. The output of this project may be analogous to the EU's proposed regulatory framework, but in a voluntary format.

The overall theme of enacted and pending AI regulations globally is maintaining the accountability, transparency, and fairness of AI. For companies leveraging AI technology, ensuring that their systems remain compliant with the various regulations intended to achieve these goals could be difficult and costly. Two aspects of AI's decision-making process make oversight particularly demanding:

Opaqueness: users can control data inputs and view outputs, but are often unable to explain how and with which data points the system made a decision.

Frequent adaptation: processes evolve over time as the system learns.

Therefore, it is important for regulators to avoid overburdening businesses, to ensure that stakeholders may still leverage AI technology's great benefits in a cost-effective manner. The U.S. has the opportunity to observe the outcomes of the current regulatory action from China and the EU to determine whether their approaches strike a favorable balance. However, the U.S. should potentially accelerate its promulgation of similar laws so that it can play a role in setting the global tone for AI regulatory standards.

Thank you to co-author Lara Coole, a summer associate in Foley & Lardner's Jacksonville office, for her contributions to this post.

Artificial intelligence isn’t that intelligent | The Strategist – The Strategist

Late last month, Australia's leading scientists, researchers and businesspeople came together for the inaugural Australian Defence Science, Technology and Research Summit (ADSTAR), hosted by the Defence Department's Science and Technology Group. In a demonstration of Australia's commitment to partnerships that would make our non-allied adversaries flinch, Chief Defence Scientist Tanya Monro was joined by representatives from each of the Five Eyes partners, as well as Japan, Singapore and South Korea. Two streams focusing on artificial intelligence were dedicated to research and applications in the defence context.

"At the end of the day, isn't hacking an AI a bit like social engineering?"

A friend who works in cybersecurity asked me this. In the world of information security, social engineering is the game of manipulating people into divulging information that can be used in a cyberattack or scam. Cyber experts may therefore be excused for assuming that AI might display some human-like level of intelligence that makes it difficult to hack.

Unfortunately, it's not. It's actually very easy.

The man who coined the term "artificial intelligence" in the 1950s, computer scientist John McCarthy, also said that once we know how it works, it isn't called AI anymore. This explains why AI means different things to different people. It also explains why trust in and assurance of AI is so challenging.

AI is not some all-powerful capability that, despite how much it can mimic humans, also thinks like humans. Most implementations, specifically machine-learning models, are just very complicated implementations of the statistical methods we're familiar with from high school. That doesn't make them smart, merely complex and opaque. This leads to problems in AI safety and security.

Bias in AI has long been known to cause problems. For example, AI-driven recruitment systems in tech companies have been shown to filter out applications from women, and re-offence prediction systems in US prisons exhibit consistent biases against black inmates. Fortunately, bias and fairness concerns in AI are now well known and actively investigated by researchers, practitioners and policymakers.

AI security is different, however. While AI safety deals with the impact of the decisions an AI might make, AI security looks at the inherent characteristics of a model and whether it could be exploited. AI systems are vulnerable to attackers and adversaries just as cyber systems are.

A known challenge is adversarial machine learning, where adversarial perturbations added to an image cause a model to predictably misclassify it.

When researchers added adversarial noise imperceptible to humans to an image of a panda, the model predicted it was a gibbon.

In another study, a 3D-printed turtle had adversarial perturbations embedded in its surface so that an object-detection model believed it to be a rifle. This was true even when the object was rotated.
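
Attacks like these are typically generated with gradient-based methods. Below is a minimal sketch of one standard technique, the fast gradient sign method (FGSM), assuming a differentiable PyTorch image classifier; the model, inputs and epsilon value here are placeholders, not details from the studies above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 0.03) -> torch.Tensor:
    """Nudge each input value by +/- eps in the direction that increases the
    model's loss; often enough to flip the predicted class (panda -> gibbon)
    while staying imperceptible to humans."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # how wrong is the model right now?
    loss.backward()                        # gradient of the loss w.r.t. the pixels
    x_adv = x + eps * x.grad.sign()        # step in the loss-increasing direction
    return x_adv.detach().clamp(0.0, 1.0)  # keep values in a valid image range
```

The 3D-printed turtle relied on a stronger variant of the same idea, optimising a perturbation that survives rotation and printing rather than a single gradient step.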

I can't help but notice disturbing similarities between the rapid adoption of and misplaced trust in the internet in the latter half of the last century and the unfettered adoption of AI now.

It was a sobering moment when, in 2018, the then US director of national intelligence, Daniel Coats, called out cyber as the greatest strategic threat to the US.

Many nations are publishing AI strategies (including Australia, the US and the UK) that address these concerns, and there's still time to apply the lessons learned from cyber to AI. These include investment in AI safety and security at the same pace as investment in AI adoption is made; commercial solutions for AI security, assurance and audit; legislation for AI safety and security requirements, as is done for cyber; and greater understanding of AI and its limitations, as well as the technologies, like machine learning, that underpin it.

Cybersecurity incidents have also driven home the necessity for the public and private sectors to work together not just to define standards, but to reach them together. This is essential both domestically and internationally.

Autonomous drone swarms, undetectable insect-sized robots and targeted surveillance based on facial recognition are all technologies that exist. While Australia and our allies adhere to ethical standards for AI use, our adversaries may not.

Speaking on resilience at ADSTAR, Chief Scientist Cathy Foley discussed how pre-empting and planning for setbacks is far more strategic than simply ensuring you can get back up after one. That couldn't be more true when it comes to AI, especially given Defence's unique risk profile and the current geostrategic environment.

I read recently that Ukraine is using AI-enabled drones to target and strike Russians. Notwithstanding the ethical issues this poses, the article I read was written in Polish and translated to English for me by Google's language translation AI. Artificial intelligence is already pervasive in our lives. Now we need to be able to trust it.

Researchers use artificial intelligence to create a treasure map of undiscovered ant species – EurekAlert

Image: Map detailing ant diversity centers in Africa, Madagascar and Mediterranean regions.

Credit: Kass et al., 2022, Science Advances

E. O. Wilson once referred to invertebrates as "the little things that run the world," without whom the human species "[wouldn't] last more than a few months." Although small, invertebrates have an outsized influence on their environments, pollinating plants, breaking down organic matter and speeding up nutrient cycling. And what they lack in stature, they make up for in diversity. With more than one million known species, insects alone vastly outnumber all other invertebrates and vertebrates combined.

Despite their importance and ubiquity, some of the most basic information about invertebrates, such as where they're most diverse and how many of them there are, still remains a mystery. This is especially problematic for conservation scientists trying to stave off global insect declines; you can't conserve something if you don't know where to look for it.

In a new study published this Wednesday in the journal Science Advances, researchers used ants as a proxy to help close major knowledge gaps and hopefully begin reversing these declines. Working for more than a decade, researchers from institutions around the world stitched together nearly one-and-a-half million location records from research publications, online databases, museums and scientific field work. They used those records to help produce the largest global map of insect diversity ever created, which they hope will be used to direct future conservation efforts.

"This is a massive undertaking for a group known to be a critical ecosystem engineer," said co-author Robert Guralnick, curator of biodiversity informatics at the Florida Museum of Natural History. "It represents an enormous effort not only among all the co-authors but the many naturalists who have contributed knowledge about distributions of ants across the globe."

Creating a map large enough to account for the entirety of ant biodiversity presented several logistical challenges. All currently known ant species were included, numbering more than 14,000, and each varied dramatically in the amount of data available.

The majority of the records used contained a description of the location where an ant was collected or spotted but did not always have the precise coordinates needed for mapping. Inferring the extent of an ant's range from incomplete records required some clever data wrangling.

Co-author Kenneth Dudley, a research technician with the Okinawa Institute of Science and Technology, built a computational workflow to estimate the coordinates from the available data, which also checked the data for errors. This allowed the researchers to make different range estimates for each species of ant depending on how much data was available. For species with less data, they constructed shapes surrounding the data points. For species with more data, the researchers predicted the distribution of each species using statistical models that they tuned to reduce as much noise as possible.

The researchers brought these estimates together to form a global map, divided into a grid of 20 km by 20 km squares, that showed an estimate of the number of ant species per square (called the species richness). They also created a map that showed the number of ant species with very small ranges per square (called the species rarity). In general, species with small ranges are particularly vulnerable to environmental changes.
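
As a rough illustration of that gridding step (not the authors' actual pipeline; the column names, units and projection here are assumptions), counting unique species per 20 km cell reduces to a group-by over binned coordinates:

```python
import pandas as pd

CELL_KM = 20  # grid resolution reported in the study

def richness_per_cell(records: pd.DataFrame) -> pd.Series:
    """Species richness: number of unique species per 20 km x 20 km cell.

    `records` is assumed to hold one occurrence per row, with equal-area
    projected coordinates in kilometres ('x_km', 'y_km') and a 'species'
    column.
    """
    binned = records.assign(
        cell_x=(records["x_km"] // CELL_KM).astype(int),
        cell_y=(records["y_km"] // CELL_KM).astype(int),
    )
    return binned.groupby(["cell_x", "cell_y"])["species"].nunique()
```

The rarity map follows the same pattern, except that each cell counts only species whose total range falls below a smallness threshold.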

However, there was another problem to overcome: sampling bias.

"Some areas of the world that we expected to be centers of diversity were not showing up on our map, but ants in these regions were not well-studied," explained co-first author Jamie Kass, a postdoctoral fellow at the Okinawa Institute of Science and Technology. "Other areas were extremely well-sampled, for example parts of the USA and Europe, and this difference in sampling can impact our estimates of global diversity."

So, the researchers utilized machine learning to predict how their diversity estimates would change if all areas around the world were sampled equally, and in doing so identified areas where they estimate many unknown, unsampled species exist.

"This gives us a kind of treasure map, which can guide us to where we should explore next and look for new species with restricted ranges," said senior author Evan Economo, a professor at the Okinawa Institute of Science and Technology.

When the researchers compared the rarity and richness of ant distributions to the comparatively well-studied amphibians, birds, mammals and reptiles, they found that ants were about as different from these vertebrate groups as the vertebrate groups were from each other.

This was unexpected given that ants are evolutionarily highly distant from vertebrates, and it suggests that priority areas for vertebrate diversity may also have a high diversity of invertebrate species. The authors caution, however, that ant biodiversity patterns have unique features. For example, the Mediterranean and East Asia show up as diversity centers for ants more than the vertebrates.

Finally, the researchers looked at how well-protected these areas of high ant diversity are. They found that the percentage was low: only 15% of the top 10% of ant rarity centers had some sort of legal protection, such as a national park or reserve, which is less than the existing protection for vertebrates.

"Clearly, we have a lot of work to do to protect these critical areas," Economo concluded.

Journal article: "The global distribution of known and undiscovered ant biodiversity," Science Advances, 3 August 2022.

Global Artificial Intelligence in Healthcare Diagnosis Market Research Report 2022: Rising Adoption of Healthcare Artificial Intelligence in Research…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence in Healthcare Diagnosis Market Research Report by Technology, Component, Application, End User, Region - Global Forecast to 2026 - Cumulative Impact of COVID-19" report has been added to ResearchAndMarkets.com's offering.

The Global Artificial Intelligence in Healthcare Diagnosis Market size was estimated at USD 2,318.98 million in 2020, USD 2,725.72 million in 2021, and is projected to grow at a Compound Annual Growth Rate (CAGR) of 17.81% to reach USD 6,202.67 million by 2026.
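
As a quick arithmetic check, those figures are internally consistent: compounding the 2020 base at the stated rate for the six years to 2026 reproduces the forecast (a sanity-check sketch, not part of the report):

```python
# Implied compound annual growth rate from the 2020 estimate to the 2026 forecast.
v_2020, v_2026, years = 2318.98, 6202.67, 6
cagr = (v_2026 / v_2020) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # ~17.82%, matching the quoted 17.81%
```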

Market Segmentation:

This research report categorizes the Artificial Intelligence in Healthcare Diagnosis to forecast the revenues and analyze the trends in each of the following sub-markets:

Competitive Strategic Window:

The Competitive Strategic Window analyses the competitive landscape in terms of markets, applications, and geographies to help the vendor define an alignment or fit between their capabilities and opportunities for future growth prospects. It describes the optimal or favorable fit for the vendors to adopt successive merger and acquisition strategies, geography expansion, research & development, and new product introduction strategies to execute further business expansion and growth during a forecast period.

FPNV Positioning Matrix:

The FPNV Positioning Matrix evaluates and categorizes the vendors in the Artificial Intelligence in Healthcare Diagnosis Market based on Business Strategy (Business Growth, Industry Coverage, Financial Viability, and Channel Support) and Product Satisfaction (Value for Money, Ease of Use, Product Features, and Customer Support) that aids businesses in better decision making and understanding the competitive landscape.

Market Share Analysis:

The Market Share Analysis assesses vendors by their contribution to the overall market, giving an idea of each vendor's revenue generation relative to others in the space. It provides insights into how vendors are performing in terms of revenue generation and customer base compared to others. Knowing market share offers an idea of the size and competitiveness of the vendors for the base year. It reveals the market characteristics in terms of accumulation, fragmentation, dominance, and amalgamation traits.

Market Dynamics

Drivers

Restraints

Opportunities

Challenges

Key Topics Covered:

1. Preface

2. Research Methodology

3. Executive Summary

4. Market Overview

5. Market Insights

6. Artificial Intelligence in Healthcare Diagnosis Market, by Technology

7. Artificial Intelligence in Healthcare Diagnosis Market, by Component

8. Artificial Intelligence in Healthcare Diagnosis Market, by Application

9. Artificial Intelligence in Healthcare Diagnosis Market, by End User

10. Americas Artificial Intelligence in Healthcare Diagnosis Market

11. Asia-Pacific Artificial Intelligence in Healthcare Diagnosis Market

12. Europe, Middle East & Africa Artificial Intelligence in Healthcare Diagnosis Market

13. Competitive Landscape

14. Company Usability Profiles

15. Appendix

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/vgkht7

Can artificial intelligence really help us talk to the animals? – The Guardian

A dolphin handler makes the signal for "together" with her hands, followed by "create". The two trained dolphins disappear underwater, exchange sounds and then emerge, flip on to their backs and lift their tails. They have devised a new trick of their own and performed it in tandem, just as requested. "It doesn't prove that there's language," says Aza Raskin. "But it certainly makes a lot of sense that, if they had access to a rich, symbolic way of communicating, that would make this task much easier."

Raskin is the co-founder and president of Earth Species Project (ESP), a California non-profit group with a bold ambition: to decode non-human communication using a form of artificial intelligence (AI) called machine learning, and make all the knowhow publicly available, thereby deepening our connection with other living species and helping to protect them. A 1970 album of whale song galvanised the movement that led to commercial whaling being banned. What could a Google Translate for the animal kingdom spawn?

The organisation, founded in 2017 with the help of major donors such as LinkedIn co-founder Reid Hoffman, published its first scientific paper last December. The goal is to unlock communication within our lifetimes. "The end we are working towards is, can we decode animal communication, discover non-human language," says Raskin. "Along the way and equally important is that we are developing technology that supports biologists and conservation now."

Understanding animal vocalisations has long been the subject of human fascination and study. Various primates give alarm calls that differ according to predator; dolphins address one another with signature whistles; and some songbirds can take elements of their calls and rearrange them to communicate different messages. But most experts stop short of calling it a language, as no animal communication meets all the criteria.

Until recently, decoding has mostly relied on painstaking observation. But interest has burgeoned in applying machine learning to deal with the huge amounts of data that can now be collected by modern animal-borne sensors. "People are starting to use it," says Elodie Briefer, an associate professor at the University of Copenhagen who studies vocal communication in mammals and birds. "But we don't really understand yet how much we can do."

Briefer co-developed an algorithm that analyses pig grunts to tell whether the animal is experiencing a positive or negative emotion. Another, called DeepSqueak, judges whether rodents are in a stressed state based on their ultrasonic calls. A further initiative, Project CETI (the Cetacean Translation Initiative), plans to use machine learning to translate the communication of sperm whales.

Yet ESP says its approach is different, because it is not focused on decoding the communication of one species, but all of them. While Raskin acknowledges there will be a higher likelihood of rich, symbolic communication among social animals, for example primates, whales and dolphins, the goal is to develop tools that could be applied to the entire animal kingdom. "We're species agnostic," says Raskin. "The tools we develop can work across all of biology, from worms to whales."

The motivating intuition for ESP, says Raskin, is work that has shown that machine learning can be used to translate between different, sometimes distant human languages without the need for any prior knowledge.

This process starts with the development of an algorithm to represent words in a physical space. In this many-dimensional geometric representation, the distance and direction between points (words) describes how they meaningfully relate to each other (their semantic relationship). For example, "king" has a relationship to "man" with the same distance and direction that "woman" has to "queen". (The mapping is not done by knowing what the words mean but by looking, for example, at how often they occur near each other.)

It was later noticed that these shapes are similar for different languages. And then, in 2017, two groups of researchers working independently found a technique that made it possible to achieve translation by aligning the shapes. To get from English to Urdu, align their shapes and find the point in Urdu closest to the word's point in English. "You can translate most words decently well," says Raskin.
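
That alignment step can be sketched with ordinary linear algebra. One of the published approaches solves the "orthogonal Procrustes" problem: given matrices X and Y whose rows are embeddings of known translation pairs, find the rotation that best maps one space onto the other. A toy version, assuming NumPy and pre-computed embeddings:

```python
import numpy as np

def align_embeddings(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Orthogonal Procrustes: the rotation W minimising ||X @ W - Y||.

    X: source-language embeddings, one row per word (n_pairs x dim)
    Y: target-language embeddings of the same words (n_pairs x dim)
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)  # SVD of the cross-covariance matrix
    return U @ Vt                      # the closest orthogonal map

# To "translate" a word: rotate its source vector with W, then take the
# nearest target-language vector by cosine similarity.
```

Fully unsupervised variants dispense with the known pairs and match the shapes of the two point clouds directly, which is what makes the idea tantalising for animal communication, where no dictionary exists.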

ESP's aspiration is to create these kinds of representations of animal communication, working on both individual species and many species at once, and then explore questions such as whether there is overlap with the universal human shape. We don't know how animals experience the world, says Raskin, but there are emotions, for example grief and joy, that some seem to share with us and may well communicate about with others in their species. "I don't know which will be the more incredible: the parts where the shapes overlap and we can directly communicate or translate, or the parts where we can't."

He adds that animals don't only communicate vocally. Bees, for example, let others know of a flower's location via a "waggle dance". There will be a need to translate across different modes of communication too.

The goal is "like going to the moon", acknowledges Raskin, but the idea also isn't to get there all at once. Rather, ESP's roadmap involves solving a series of smaller problems necessary for the bigger picture to be realised. This should see the development of general tools that can help researchers trying to apply AI to unlock the secrets of species under study.

For example, ESP recently published a paper (and shared its code) on the so-called "cocktail party problem" in animal communication, in which it is difficult to discern which individual in a group of the same animals is vocalising in a noisy social environment.

"To our knowledge, no one has done this end-to-end detangling [of animal sound] before," says Raskin. The AI-based model developed by ESP, which was tried on dolphin signature whistles, macaque coo calls and bat vocalisations, worked best when the calls came from individuals that the model had been trained on; but with larger datasets it was able to disentangle mixtures of calls from animals not in the training cohort.

Another project involves using AI to generate novel animal calls, with humpback whales as a test species. The novel calls, made by splitting vocalisations into micro-phonemes (distinct units of sound lasting a hundredth of a second) and using a language model to "speak" something whale-like, can then be played back to the animals to see how they respond. "If the AI can identify what makes a random change versus a semantically meaningful one, it brings us closer to meaningful communication," explains Raskin. "It is having the AI speak the language, even though we don't know what it means yet."

A further project aims to develop an algorithm that ascertains how many call types a species has at its command by applying self-supervised machine learning, which does not require any labelling of data by human experts to learn patterns. In an early test case, it will mine audio recordings made by a team led by Christian Rutz, a professor of biology at the University of St Andrews, to produce an inventory of the vocal repertoire of the Hawaiian crow, a species that, Rutz discovered, has the ability to make and use tools for foraging and is believed to have a significantly more complex set of vocalisations than other crow species.
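
A first-pass version of such a repertoire inventory can be sketched as clustering: summarise each call as a feature vector, then ask how many clusters best explain the data. This is a simplification under stated assumptions (Python with librosa and scikit-learn, calls already cut into separate clips, and the cluster count chosen by silhouette score rather than true self-supervised learning):

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def call_features(waveform: np.ndarray, sr: int) -> np.ndarray:
    """Summarise one call clip as a fixed-length vector of averaged MFCCs."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # average over time -> 20 numbers per call

def estimate_repertoire_size(features: np.ndarray, k_max: int = 15) -> int:
    """Return the cluster count that best separates the calls."""
    best_k, best_score = 2, -1.0
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
        score = silhouette_score(features, labels)  # higher = cleaner clusters
        if score > best_score:
            best_k, best_score = k, score
    return best_k
```

A real system would also have to cope with variable-length calls, recording noise and graded rather than discrete call types, which is where the self-supervised learning comes in.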

Rutz is particularly excited about the project's conservation value. The Hawaiian crow is critically endangered and only exists in captivity, where it is being bred for reintroduction to the wild. It is hoped that, by taking recordings made at different times, it will be possible to track whether the species's call repertoire is being eroded in captivity; specific alarm calls may have been lost, for example, which could have consequences for its reintroduction. That loss might be addressed with intervention. "It could produce a step change in our ability to help these birds come back from the brink," says Rutz, adding that detecting and classifying the calls manually would be labour-intensive and error-prone.

Meanwhile, another project seeks to understand automatically the functional meanings of vocalisations. It is being pursued with the laboratory of Ari Friedlaender, a professor of ocean sciences at the University of California, Santa Cruz. The lab studies how wild marine mammals, which are difficult to observe directly, behave underwater, and runs one of the world's largest tagging programmes. Small electronic "biologging" devices attached to the animals capture their location, type of motion and even what they see (the devices can incorporate video cameras). The lab also has data from strategically placed sound recorders in the ocean.

ESP aims to first apply self-supervised machine learning to the tag data to automatically gauge what an animal is doing (for example whether it is feeding, resting, travelling or socialising) and then add the audio data to see whether functional meaning can be given to calls tied to that behaviour. (Playback experiments could then be used to validate any findings, along with calls that have been decoded previously.) This technique will be applied to humpback whale data initially; the lab has tagged several animals in the same group, so it is possible to see how signals are given and received. Friedlaender says he was "hitting the ceiling" in terms of what currently available tools could tease out of the data. "Our hope is that the work ESP can do will provide new insights," he says.

But not everyone is as gung-ho about the power of AI to achieve such grand aims. Robert Seyfarth is a professor emeritus of psychology at the University of Pennsylvania who has studied social behaviour and vocal communication in primates in their natural habitat for more than 40 years. While he believes machine learning can be useful for some problems, such as identifying an animal's vocal repertoire, there are other areas, including the discovery of the meaning and function of vocalisations, where he is sceptical it will add much.

The problem, he explains, is that while many animals can have sophisticated, complex societies, they have a much smaller repertoire of sounds than humans. The result is that the exact same sound can be used to mean different things in different contexts, and it is only by studying the context (who the individual calling is, how they are related to others, where they fall in the hierarchy, who they have interacted with) that meaning can hope to be established. "I just think these AI methods are insufficient," says Seyfarth. "You've got to go out there and watch the animals."

There is also doubt about the concept that the shape of animal communication will overlap in a meaningful way with human communication. Applying computer-based analyses to human language, with which we are so intimately familiar, is one thing, says Seyfarth, but it can be quite different doing it to other species. "It is an exciting idea, but it is a big stretch," says Kevin Coffey, a neuroscientist at the University of Washington who co-created the DeepSqueak algorithm.

Raskin acknowledges that AI alone may not be enough to unlock communication with other species. But he refers to research that has shown many species communicate in ways more complex than humans have ever imagined. The stumbling blocks have been our ability to gather sufficient data and analyse it at scale, and our own limited perception. "These are the tools that let us take off the human glasses and understand entire communication systems," he says.

Elon Musk and Silicon Valley’s Overreliance on Artificial Intelligence – The Wire

When the richest man in the world is being sued by one of the most popular social media companies, it's news. But while most of the conversation about Elon Musk's attempt to cancel his $44 billion contract to buy Twitter is focusing on the legal, social, and business components, we need to keep an eye on how the discussion relates to one of the tech industry's buzziest products: artificial intelligence.

The lawsuit shines a light on one of the most essential issues for the industry to tackle: What can and can't AI do, and what should and shouldn't AI do? The Twitter v Musk contretemps reveals a lot about the thinking about AI in tech and startup land and raises issues about how we understand the deployment of the technology in areas ranging from credit checks to policing.

At the core of Musk's claim for why he should be allowed out of his contract with Twitter is an allegation that the platform has done a poor job of identifying and removing spam accounts. Twitter has consistently claimed in quarterly filings that less than 5% of its active accounts are spam; Musk thinks it's much higher than that. From a legal standpoint, it probably doesn't really matter if Twitter's spam estimate is off by a few percent, and Twitter's been clear that its estimate is subjective and that others could come to different estimates with the same data. That's presumably why Musk's legal team lost in a hearing on July 19 when they asked for more time to perform detailed discovery on Twitter's spam-fighting efforts, suggesting that likely isn't the question on which the trial will turn.

Regardless of the legal merits, it's important to scrutinise the statistical and technical thinking from Musk and his allies. Musk's position is best summarised in his filing from July 15, which states: "In a May 6 meeting with Twitter executives, Musk was flabbergasted to learn just how meager Twitter's process was. Namely: Human reviewers randomly sampled 100 accounts per day (less than 0.00005% of daily users) and applied unidentified standards to somehow conclude every quarter for nearly three years that fewer than 5% of Twitter users were false or spam." The filing goes on to express the flabbergastedness of Musk by adding: "That's it. No automation, no AI, no machine learning."

Perhaps the most prominent endorsement of Musk's argument here came from venture capitalist David Sacks, who quoted it while declaring, "Twitter is toast." But there's an irony in Musk's complaint here: If Twitter were using machine learning for the audit as he seems to think they should, and only labeling spam that was similar to old spam, it would actually produce a lower, less-accurate estimate than it has now.

There are three components to Musk's assertion that deserve examination: his basic statistical claim about what a representative sample looks like, his claim that the spam-level auditing process should be automated or use AI or machine learning, and an implicit claim about what AI can actually do.

On the statistical question, this is something any professional anywhere near the machine learning space should be able to answer (as can many high school students). Twitter uses a daily sampling of accounts to scrutinise a total of 9,000 accounts per quarter (averaging about 100 per calendar day) to arrive at its under-5% spam estimate. Though that sample of 9,000 users per quarter is, as Musk notes, a very small portion of the 229 million active users the company reported in early 2022, a statistics professor (or student) would tell you that that's very much not the point. Statistical significance isn't determined by what percentage of the population is sampled but simply by the actual size of the sample in question. As Facebook whistleblower Sophie Zhang put it, you can make the comparison to soup: "It doesn't matter if you have a small or giant pot of soup, if it's evenly mixed you just need a spoonful to taste-test."

The whole point of statistical sampling is that you can learn most of what you need to know about the variety of a larger population by studying a much smaller but decently sized portion of it. Whether the person drawing the sample is a scientist studying bacteria, or a factory quality inspector checking canned vegetables, or a pollster asking about political preferences, the question isn't "what percentage of the overall whole am I checking?" but rather "how much should I expect my sample to look like the overall population for the characteristics I'm studying?" If you had to crack open a large percentage of your cans of tomatoes to check their quality, you'd have a hard time making a profit, so you want to check the fewest possible to get within a reasonable range of confidence in your findings.

While this thinking does go against the grain of certain impulses (there's a reason why many people make this mistake), there is also a way to make this approach to sampling more intuitive. Think of the goal in setting sample size as getting a reasonable answer to the question, "If I draw another sample of the same size, how different would I expect it to be?" A classic approach to explaining this problem is to imagine you've bought a great mass of marbles that are supposed to come in a specific ratio: 95% purple marbles and 5% yellow marbles. You want to do a quality inspection to ensure the delivery is good, so you load them into one of those bingo game hoppers, turn the crank, and start counting the marbles you draw in each color. Let's say your first sample of 20 marbles has 19 purple and one yellow; should you be confident that you got the right mix from your vendor? You can probably intuitively understand that the next 20 random marbles you draw could end up being very different, with zero yellows or seven. But what if you draw 1,000 marbles, around the same as the typical political poll? What if you draw 9,000 marbles? The more marbles you draw, the more you'd expect the next drawing to look similar, because it's harder to hide random fluctuations in larger samples.

There are online calculators that let you run the numbers yourself. If you only draw 20 marbles and get one yellow, you can have 95% confidence that the yellows would be between 0.13% and 24.9% of the total: not very exact. If you draw 1,000 marbles and get 50 yellows, you can have 95% confidence that yellows would be between 3.7% and 6.5% of the total; closer, but perhaps not something you'd sign your name to in a quarterly filing. At 9,000 marbles with 450 yellow, you can have 95% confidence the yellows are between 4.56% and 5.47%; you're now accurate to within a range of less than half a percent, and at that point Twitter's lawyers presumably told them they'd done enough for their public disclosure.
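
Those ranges can be reproduced with the exact (Clopper-Pearson) binomial confidence interval; a short check, assuming Python with SciPy:

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact two-sided confidence interval for a proportion of k 'hits' in n draws."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

for k, n in [(1, 20), (50, 1000), (450, 9000)]:
    lo, hi = clopper_pearson(k, n)
    print(f"{k:>3} yellow in {n:>5} marbles: {lo:.2%} to {hi:.2%}")
# Output:   1 yellow in    20 marbles: 0.13% to 24.87%
#          50 yellow in  1000 marbles: 3.73% to 6.52%
#         450 yellow in  9000 marbles: 4.56% to 5.47%
```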

This reality, that statistical sampling works to tell us about large populations based on much smaller samples, underpins every area where statistics is used, from checking the quality of the concrete used to make the building you're currently sitting in, to ensuring the reliable flow of internet traffic to the screen you're reading this on.

It's also what drives all current approaches to artificial intelligence today. Specialists in the field almost never use the term "artificial intelligence" to describe their work, preferring "machine learning". But another common way to describe the entire field as it currently stands is "applied statistics". Machine learning today isn't really computers thinking in anything like the way we assume humans do (to the degree we even understand how humans think, which isn't a great degree); it's mostly pattern-matching and -identification, based on statistical optimisation. If you feed a convolutional neural network thousands of images of dogs and cats and then ask the resulting model to determine if the next image is of a dog or a cat, it'll probably do a good job, but you can't ask it to explain what makes a cat different from a dog on any broader level; it's just recognising the patterns in pictures, using a layering of statistical formulas.

Stack up statistical formulas in specific ways, and you can build a machine learning algorithm that, fed enough pictures, will gradually build up a statistical representation of edges, shapes, and larger forms until it recognises a cat, based on the similarity to thousands of other images of cats it was fed. There's also a way in which statistical sampling plays a role: You don't need pictures of all the dogs and cats, just enough to get a representative sample, and then your algorithm can infer what it needs to about all the other pictures of dogs and cats in the world. And the same goes for every other machine learning effort, whether it's an attempt to predict someone's salary using everything else you know about them, with a boosted-trees algorithm, or to break down a list of customers into distinct groups, with a clustering algorithm like k-means.
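
To make the "applied statistics" framing concrete, here is a minimal sketch of that salary example, using scikit-learn's gradient-boosted trees and entirely invented synthetic data. The model fits the pattern hidden in the training rows and extrapolates it to new ones; at no point does it "understand" salaries:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Invented features: years of experience, hours per week, cost-of-living index.
X = rng.uniform([0, 20, 0.8], [30, 60, 1.6], size=(5000, 3))
# Invented salaries: a hidden pattern plus noise, which the model must recover.
y = 30_000 + 2_500 * X[:, 0] + 400 * X[:, 1] * X[:, 2] \
    + rng.normal(0, 5_000, size=5000)

model = GradientBoostingRegressor().fit(X, y)  # gradient-boosted trees
print(model.predict([[10, 40, 1.2]]))  # a pattern-matched estimate, nothing more
```

Ask the fitted model why it predicted that number and the honest answer is the same as with the cat photos: because similar rows in the training data looked like that.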

You don't absolutely have to understand statistics as well as a student who's recently taken a class in order to understand machine learning, but it helps. Which is why the statistical illiteracy paraded by Musk and his acolytes here is at least somewhat surprising.

But more important, in order to have any basis for overseeing the creation of a machine-learning product, or to have a rationale for investing in a machine-learning company, it's hard to see how one could be successful without a decent grounding in the rudiments of machine learning, and where and how it is best applied to solve a problem. And yet, team Musk here is suggesting they do lack that knowledge.

Once you understand that all machine learning today is essentially pattern-matching, it becomes clear why you wouldn't rely on it to conduct an audit such as the one Twitter performs to check for the proportion of spam accounts. "They're hand-validating so that they ensure it's high-quality data," explained security professional Leigh Honeywell, who's been a leader at firms like Slack and Heroku, in an interview. She added that any data you pull from your machine learning efforts will by necessity be less validated than those efforts. If you rely only on patterns of spam you've already identified in the past and already engineered into your spam-detection tools in order to find out how much spam there is on your platform, you'll only recognise old spam patterns and fail to uncover new ones.

Where Twitter should be using automation and machine learning to identify and remove spam is outside of this audit function, which the company seems to do. It wouldn't otherwise be possible to suspend half a million accounts every day and lock millions of accounts each week, as CEO Parag Agrawal claims. In conversations I've had with cybersecurity workers in the field, it's quite clear that a large amount of automation is used at Twitter (though machine learning specifically is actually relatively rare in the field, because the results often aren't as good as other methods, marketing claims by allegedly AI-based security firms to the contrary).

At least in public claims related to this lawsuit, prominent Silicon Valley figures are suggesting they have a different understanding of what machine learning can do, and when it is and isn't useful. This disconnect between how many nontechnical leaders in that world talk about AI, and what it actually is, has significant implications for how we will ultimately come to understand and use the technology.

The general disconnect between the actual work of machine learning and how it's touted by many company and industry leaders is something data scientists often chalk up to marketing. It's very common to hear data scientists in conversation among themselves declare that "AI" is just a marketing term. It's also quite common to see companies using no machine learning at all describe their work as AI to investors and customers, who rarely know the difference or even seem to care.

This is a basic reality in the world of tech. In my own experience talking with investors who put money into AI technology, it's often quite clear that they know almost nothing about these basic aspects of how machine learning works. I've even spoken to CEOs of rather large companies that rely at their core on novel machine learning efforts to drive their product, who also clearly have no understanding of how the work actually gets done.

Not knowing or caring how machine learning works, what it can or can't do, and where its application can be problematic could lead society into significant peril. If we don't understand the way machine learning actually works (most often by identifying a pattern in some dataset and applying that pattern to new data), we can be led deep down a path in which machine learning wrongly claims, for example, to measure someone's face for trustworthiness (when this is entirely based on surveys in which people reveal their own prejudices), or that crime can be predicted (when many hyperlocal crime numbers are highly correlated with more police officers being present in a given area, who then make more arrests there), based almost entirely on a set of biased data or wrong-headed claims.

If we're going to properly manage the influence of machine learning on our society (on our systems and organisations and our government), we need to make sure these distinctions are clear. It starts with establishing a basic level of statistical literacy, and moves on to recognising that machine learning isn't magic, and that it isn't, in any traditional sense of the word, intelligent: that it works by pattern-matching to data, that the data has various biases, and that the overall project can produce many misleading and/or damaging outcomes.

It's an understanding one might have expected, or at least hoped, to find among some of those investing most of their life, effort, and money into machine-learning-related projects. If even people that deep aren't making those efforts to sort fact from fiction, it's a poor omen for the rest of us, and for the regulators and other officials who might be charged with keeping them in check.

This article was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.

Read more here:

Elon Musk and Silicon Valley's Overreliance on Artificial Intelligence - The Wire

High Five: Artificial Intelligence-Generated Campaigns and Experiments | LBBOnline – Little Black Book – LBBonline

I can't stop playing with Midjourney. It may signal the end of human creativity or the start of an exciting new era, but here's me, like a monkey at a typewriter, chucking random words into the algorithm for an instant hit of this-shouldn't-be-as-good-as-it-is art.

For those who don't know, Midjourney is one of a number of image-generating AI algorithms that can turn written prompts into unworldly pictures. It, along with OpenAI's DALL-E 2, has been having something of a moment in the last month as people get their hands on them and try to push them to their limits. Craiyon - formerly DALL-E mini - is an older, less refined and very much wobblier platform to try too. It's worth having a go just to get a feel for what these algorithms can and can't do - though be warned, the dopamine hit of seeing some silly words turn into something strange, beautiful, terrifying or cool within seconds is quite addictive. A confused dragon playing chess. A happy apple. A rat transcends and perceives the oneness of the universe, pulsing with life. Yes Sir, I can boogie.

Within the LBB editorial team, we've been having lots of discussions about the implications of these art-generating algorithms. What are the legal and IP ramifications for those artists whose works are mined and drawn into the data set (on my Midjourney server, Klimt and HR Giger seem to be the most popular artists to replicate, but what of more contemporary artists?). Will the industry use this to find unexpected new looks that go beyond the human creative habits and rules - or will we see content pulled directly from the algorithm? How long will it take for the algorithms to iron out the wonky weirdness that can sometimes take the human face way beyond the uncanny valley to a nightmarish, distorted abyss? What are the keys to writing prompts when you are after something very specific? Why does the algorithm seem to struggle when two different objects are requested in the same image?

Unlike other technologies that have shaken up the advertising industry, these image-generating algorithms are relatively accessible and easy to use (DALL-E 2's waitlist aside). The results are almost instant - and the possibilities, for now, seem limitless. We've already seen a couple of brands have a go with campaigns that are definitely playing on the novelty and PR angle of this new technology - and also a few really intriguing art projects too...

Agency: Rethink

The highest-profile commercial campaign of the bunch is Rethink's new Heinz campaign. It's a follow-up to a previous campaign, in which humans were asked to draw a bottle of ketchup and ended up all drawing a bottle of Heinz. This time around, the team asked Dall-E 2 - and the algorithm, like its human predecessors, couldn't help but create images that looked like Heinz-branded bottles (albeit with a funky AI spin). In this case, the AI is used to reinforce and revisit the original idea - but how long will it take before we're using AIs to generate ideas for boards or pitch images?

Agency: 10 Days

Animation: Jeremy Higgins

This artsy animated short by art director and designer Jeremy Higgins is a delight and shows how a sequence of similar AI-generated images can serve as frames in a film. The flickering effect ironically gives the animation a very hand-made, stop-motion style, reminding me of films that use individual oil paintings as frames. It's a really vivid encapsulation of what it feels like to be sucked into a Midjourney rabbit hole too... I also have to tip my hat to Stefan Sagmeister, who shared this film on his Instagram account.

For the latest issue of Cosmopolitan, creative Karen X Cheng used Dall-E 2 to create a dramatic and imposing cover - using the prompt: 'a strong female president astronaut warrior walking on the planet Mars, digital art synthwave'. There's a deep dive into the creative process, which also examines some of the potential ramifications of the technology, on the Cosmopolitan website that's well worth a read.

Studio: T&DA

Here's a cheeky sixth entry to High Five. This execution is part of a wider summer platform for BT Sport, centred around belief - in this case, football pundit Robbie Savage is served up a Dall-E 2 image of striker Aleksandar Mitrović lifting the golden boot. Fulham has just been promoted to the Premier League - but though Robbie can see it, he can't quite believe it.

Read more from the original source:

High Five: Artificial Intelligence-Generated Campaigns and Experiments | LBBOnline - Little Black Book - LBBonline

Artificial Intelligence In Insurtech Market Is Expected to Boom- Cognizant, Next IT Corp, Kasisto – Digital Journal

New Jersey, N.J., Aug 04, 2022 - The Artificial Intelligence In Insurtech Market research report provides a complete picture of the industry, giving clients the authentic data they need to make essential decisions. It offers an overview of the market, including its definition, applications, developments, and manufacturing technology; tracks all the recent developments and innovations in the market; and flags the obstacles to establishing a business in the space, with guidance for overcoming upcoming challenges.

Artificial Intelligence (AI) can help insurers assess risk, detect fraud, and reduce human error in the claim process. As a result, insurers are better equipped to sell the most appropriate plans to their customers. Customers benefit from the improved claims handling and processing provided by Artificial Intelligence.

Increased investment by insurance companies in artificial intelligence and machine learning, as well as growing preference for personalized insurance services, is conducive to the growth of the global artificial intelligence in insurance market. In addition, increased cooperation between insurance companies and companies providing AI and machine learning solutions positively influences the development of AI in the insurance market.

Get the PDF Sample Copy (Including FULL TOC, Graphs, and Tables) of this report @:

https://www.a2zmarketresearch.com/sample-request/670659

Competitive landscape:

This Artificial Intelligence In Insurtech research report throws light on the major market players thriving in the market; it tracks their business strategies, financial status, and upcoming products.

Some of the Top companies Influencing this Market include: Cognizant, Next IT Corp, Kasisto, Cape Analytics Inc., Microsoft, Google, Salesforce, Amazon Web Services, Lemonade, Lexalytics, H2O.ai

Market Scenario:

Firstly, this Artificial Intelligence In Insurtech research report introduces the market by providing an overview that includes definition, applications, product launches, developments, challenges, and regions. The market is forecast to show strong development, driven by consumption in various markets. An analysis of the current market designs and other basic characteristics is provided in the Artificial Intelligence In Insurtech report.

Regional Coverage:

The region-wise coverage of the market is mentioned in the report, mainly focusing on the key regions.

Segmentation Analysis of the market

The market is segmented on the basis of type, product, end users, raw materials, etc. The segmentation helps to deliver a precise explanation of the market.

Market Segmentation: By Type

Service, Product

Market Segmentation: By Application

Automotive, Healthcare, Information Technology, Others

For Any Query or Customization: https://a2zmarketresearch.com/ask-for-customization/670659

An assessment of the market attractiveness with regard to the competition that new players and products are likely to present to older ones has been provided in the publication. The research report also mentions the innovations, new developments, marketing strategies, branding techniques, and products of the key participants present in the global Artificial Intelligence In Insurtech market. To present a clear vision of the market the competitive landscape has been thoroughly analyzed utilizing the value chain analysis. The opportunities and threats present in the future for the key market players have also been emphasized in the publication.

This report aims to provide:

Table of Contents

Global Artificial Intelligence In Insurtech Market Research Report 2022-2029

Chapter 1 Artificial Intelligence In Insurtech Market Overview

Chapter 2 Global Economic Impact on Industry

Chapter 3 Global Market Competition by Manufacturers

Chapter 4 Global Production, Revenue (Value) by Region

Chapter 5 Global Supply (Production), Consumption, Export, Import by Regions

Chapter 6 Global Production, Revenue (Value), Price Trend by Type

Chapter 7 Global Market Analysis by Application

Chapter 8 Manufacturing Cost Analysis

Chapter 9 Industrial Chain, Sourcing Strategy and Downstream Buyers

Chapter 10 Marketing Strategy Analysis, Distributors/Traders

Chapter 11 Market Effect Factors Analysis

Chapter 12 Global Artificial Intelligence In Insurtech Market Forecast

Buy Exclusive Report @: https://www.a2zmarketresearch.com/checkout

Contact Us:

Roger Smith

1887 WHITNEY MESA DR HENDERSON, NV 89014

[emailprotected]

+1 775 237 4157

Link:

Artificial Intelligence In Insurtech Market Is Expected to Boom- Cognizant, Next IT Corp, Kasisto - Digital Journal

Artificial Intelligence is the Future of the Banking Industry - Are You Prepared for It? – International Banker

By Pritham Shetty, Consulting Director, Propel Technology Group Inc

Our world is moving at a fast pace. Though banks originally built their foundations to be run solely by humans, the time has come for artificial intelligence in the banking industry. In 2020, the global AI banking market was valued at $3.88 billion, and it is projected to reach $64.03 billion by the end of the decade, with a compound annual growth rate of 32.6%. However, when it comes to implementing even the best strategies, the application of artificial intelligence in the banking industry is susceptible to weak core tech and poor data backbones.
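
Those headline figures are internally consistent, as a quick back-of-the-envelope check shows; the dollar amounts come from the paragraph above, while the ten-year horizon from 2020 to 2030 is an assumption on my part.

```python
# Check the quoted CAGR: $3.88B (2020) growing to $64.03B (2030).
start, end, years = 3.88, 64.03, 10  # billions of dollars; 2020 -> 2030 assumed

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # ~32.4%, matching the quoted 32.6% to within rounding
```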

By my count, there were 20,000 new banking regulatory requirements created in 2015 alone. Chances are your business won't find a one-size-fits-all solution to dealing with this. The next-best option is to be nimble. You need to be able to break down the business process into small chunks. By doing so, you can come up with digital strategies that work with new and existing regulations.

AI can take you a long way in this process, but you must know how to harness its power. Take originating home loans, for instance. This can be an important, sometimes tedious, process for the loan seeker and bank. With an AI solution, loan origination can happen quicker and be more beneficial to both parties.

As the world of banking moves toward AI, it is integral to note that the crucial working element for AI is data. The trick to using that data is to understand how to leverage it best for your business value. Data with no direction won't lead to progress, nor will it lead to the proper deployment of your AI. That is one of the top reasons it is so challenging to implement AI in banks: there has to be a plan.

Even if you come up with a poor strategy, those mistakes can be course-corrected over time. It takes some time and effort, but it is doable. If you home in on how customer information can be used, you can utilize AI for banking services in a way that is scalable and actionable. Once you understand how to use the data you collect, you can develop technical solutions that work with each other, identify specific needs, and build data pipelines that will lead you down the road to AI.

How is artificial intelligence changing the banking sector?

Due to the increasingly digital world, customers have more access to their banking information than ever. Of course, this can lead to other problems. Because there is so much access to data, there are also prime opportunities for fraudulent activities, and this is one example of how AI is changing the banking sector. With AI, you can train systems to learn, understand, and recognize when these activities happen. In fact, there was a 5% decrease in record exposure from 2020 to 2021.

AI also safeguards against data theft or abuse. Not only can AI recognize breaches from outside sources, but it can also recognize internal threats. Once an AI system is trained, it can identify these problems and even offer solutions to them. For instance, a customer support call center can have traffic directed by AI to assuage an influx of calls during high-volume fluctuations.

Another great example of this is the development of conversational AI platforms. The ubiquity of social media and other online platforms can be used to tailor customer experiences directly led by AI. By using the data gathered from all sources, AI can greatly improve the customer experience overall.

For example, a loan might take anywhere from seven to 45 days to be granted. But with AI, the process can be expedited not only for the customer, but also for the bank. By using AI in a situation such as this, your bank can assess the risk it is taking on by servicing loans. It can also make the process faster by performing underwriting, document scanning, and other manual processes previously associated with data collection. On top of all that, AI can gather and analyze data about your customers' behaviors throughout their banking lives.

In the past, so much of this work was done solely by people. Although automation has certainly helped speed up and simplify tasks, it is used for tedium and doesn't have the complexity of AI. AI saves time and money by freeing up your employees to do other processes and provides valuable insights to your customers. And customers can budget better and have a clearer idea of where their money is going.

Even the most traditional banks will want to adopt AI to save time and money and allow employees more opportunities to have positive one-on-one relationships with customers. Look no further than fintech companies such as Credijusto, Nubank, and Monzo that have digitized traditional banking services through the power of cutting-edge tech.

Are you ready to put AI to work for your business?

Today, it's not a question of how AI is impacting financial services. Now, it's about how to implement it. That all starts with you. You must ask the right questions: What are your goals for implementing AI? Do you want to improve your internal processes? Simply provide a better customer service experience? If so, how should you implement AI for your banking services? Start with these strategies:

By making realistic short-term goals, you set yourself up for future success. These are the solutions that will be the building blocks for the type of AI everyone will aspire to use.

You want to ensure that you know how you currently use data and how you plan on using it in the future. Again, this sets your organization up for success in the long run. If you don't have the right practices now, you certainly won't going forward.

As you implement AI into your banking practices, you should know how exactly you generate data. Then, you must understand how you interpret it. What is the best use for it? After that, you can make decisions that will be scalable, useful, and seamless.

Technology has made the world around us move not only faster, but also better in so many ways. Traditional institutions such as banks might be slow to adopt, but we've already seen how artificial intelligence is changing the banking sector. By taking the proper steps, you could be moving right along with it into the future.

See the article here:

Artificial Intelligence is the Future of the Banking Industry - Are You Prepared for It? - International Banker

Some Idiot Asked The Dall.E mini Artificial Intelligence Program What The Last Selfies Of Humans Will Look Like And Good News, We’re Definitely Headed…

Metro - A TikToker asked Dall.E mini, the popular Artificial Intelligence (AI) image generator, what the last selfies on earth would look like and the results are chilling.

In a series of videos titled "Asking an Ai to show the last selfie ever taken in the apocalypse," a TikTok account called @robotoverloards shared the disturbing images.

Each image shows a person taking a selfie set against an apocalyptic background featuring scenes of a nuclear wasteland and catastrophic weather, along with cities burning and even zombies.

Dall.E mini, now renamed to Craiyon, is an AI model that can draw images from any text prompt.

The image generator uses artificial intelligence to make photos based on the text you put in.

The image generator is connected to an artificial intelligence that has, for some time, been scraping the web for images to learn what things are. Often it will draw this from the captions attached to the pictures.

What's up everybody? I'm back with my weekly "old man screaming at the clouds" rant about how artificial intelligence is going to wipe our species clean off the planet and it's blatantly telling us this and we continue to ignore it.

Look at this shit.

Does this look like a good time to anybody that's not "metal as fuck"?

No. Absolutely not.

What's worse than the disfiguration in all these beauties' selfies is the devastation in the landscapes behind them.

That shit looks like straight-up nuclear winter to my virgin eyes.

Call it Skynet, Boston Dynamics, Dall.E mini, whatever the fuck you want. Bottom line is it's robot scum, and our man Stephen Hawking told us years ago, and Elon Musk is telling us now, that A.I. is going to be the end-all be-all of Homo sapiens. That's us. And that's a fucking wrap.

p.s. - the only thing that could make a nuclear/zombie apocalypse worse is this song playing on repeat in your head

The rest is here:

Some Idiot Asked The Dall.E mini Artificial Intelligence Program What The Last Selfies Of Humans Will Look Like And Good News, We're Definitely Headed...

PennWest Edinboro hosts astronomy camp for students with visual impairments – Edinboro University

Dr. David Hurd, director of Edinboro's planetarium, works with Emma Pabrazinsky, of Johnstown, Pa., and YiLi Smedley, from Titusville, Pa., during the Summer Astronomy Camp for Blind and Visually Impaired High School Students.

The last total solar eclipse that was clearly visible from North America occurred in August 2017. The next major eclipse will pass over Edinboro on April 8, 2024, as the community lies within the path of totality where the moon covers the sun completely.

As astronomy enthusiasts prepare for the upcoming celestial phenomenon, Dr. David Hurd, director of PennWest Edinboro's planetarium, is expanding his outreach to students with visual impairments, significantly widening the audience for solar eclipses.

"Being able to expand viewership of the eclipse, and any astronomical event for that matter, is what motivates me to continue teaching astronomy," said Hurd, who has been teaching science and astronomy-related courses at Edinboro for 30 years. "It is those magical moments when I see the lightbulb come on for my students who may have never experienced astronomy visually but are starting to connect the dots in their mind's eye."

This summer, Hurd is hosting groups from Erie, Crawford, Venango and Allegheny counties for an on-campus astronomy camp for students with visual impairments. During the camps, participants use tangible items to re-create the moon phases and a selection of tactile books to research eclipses and characteristics of the moon, our solar system and stars.

Using Hurd's multisensory textbooks, students experience the total solar eclipse through their fingertips and ears. In 2017, Hurd led the production of Getting a Feel for Eclipses, the official NASA braille guide to the Great American Eclipse, which occurred that August.

Dr. Hurd and Pittsburgh high school student Angelina Angelcyk discover the world of eclipses with Hurd's tactile guide.

Students with visual impairments can scan the QR code on the cover of the book to access audio tracks with narration. Inside the book, students can find Braille images and text to experience eclipses and learn about solar and lunar patterns. During the camp, students explored seven books and many different hands-on activities that highlight astronomical concepts.

Each book was written and produced by Hurd and his colleague, Dr. Cass Runyon from the College of Charleston, and was supported through Joe Minafra, who works with the Solar System Exploration Research Virtual Institute, an arm of NASA.

Edinboro sophomore Lexi Pollock, a biology major, and Edinboro grad Ken Quinn, who has visual impairments, assisted Hurd in the summer camp.

"I have worked with students with many different impairments, but I particularly like working with the blind and visually impaired community, as they have a unique aspect of understanding and making sense of our world and universe," Hurd said. "And why should our enjoyment of the heavens be limited to just sight? Aren't we all haptic learners who like to touch and understand? That's often why we make models."

Hurd is no stranger to bridging gaps between teaching science and reaching students with disabilities and visual impairments.

Partnering with Runyon, Hurd has produced many products for NASA that have helped bridge the gap between the research community and special needs students. Highlights of their work include the Tactile Guide to the Solar System, which is part of the Lunar Nautics toolkit, through NASA's Central Operations of Resources for Educators (CORE).

Recently, Hurd served as co-principal investigator on a major Department of Education grant that educated science teachers on how to better address the needs of students with visual impairments and is currently working with San Jose State University on a National Science Foundation Grant.

The Summer Astronomy Camp for Blind and Visually Impaired High School Students was funded through the Pennsylvania Space Grant Consortium.

See the article here:

PennWest Edinboro hosts astronomy camp for students with visual impairments - Edinboro University

Why gravitational waves are the future of astronomy – Big Think

It was over 100 years ago that Einstein put forth, in its final form, the General theory of Relativity. The old Newtonian conception of gravitation (where two massive objects attracted one another, instantaneously, with a force proportional to their masses and inversely proportional to the square of the distance between them) disagreed with both the observations of Mercury's orbit and the theoretical requirements of special relativity: where nothing could travel faster than light, not even the force of gravity itself.
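
In symbols, the Newtonian law being paraphrased is the familiar inverse-square relation (standard notation, not from the article itself: G is the gravitational constant, m_1 and m_2 the two masses, and r their separation):

```latex
F = \frac{G \, m_1 m_2}{r^2}
```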

General Relativity replaced Newtonian gravity by instead treating spacetime as a four-dimensional fabric, where all the matter and energy traveled through that fabric: limited by the speed of light. That fabric wasn't simply flat, like a Cartesian grid, but rather had its curvature determined by the presence and motion of matter and energy: matter and energy tell spacetime how to curve, and that curved spacetime tells matter and energy how to move. And whenever an energy-containing object moved through curved space, one inevitable consequence is that it would emit energy in the form of gravitational radiation, i.e., gravitational waves. They're everywhere in the Universe, and now that we've begun to detect them, they're about to open up the future of astronomy. Here's how.

Numerical simulations of the gravitational waves emitted by the inspiral and merger of two black holes. The colored contours around each black hole represent the amplitude of the gravitational radiation; the blue lines represent the orbits of the black holes and the green arrows represent their spins. The physics of binary black hole mergers is independent of absolute mass, but depends heavily on the relative masses and spins of the merging black holes.

The first two things you need to know, in order to understand gravitational wave astronomy, are how gravitational waves are generated and how they affect quantities that we can observe in the Universe. Gravitational waves are created whenever an energy-containing object passes through a region where the spacetime curvature changes. This applies to systems as varied as orbiting masses, inspiraling and merging black holes and neutron stars, and objects accelerating past one another.

In all of these cases, the energy distribution within a particular region of space changes rapidly, and that results in the production of a form of radiation inherent to space itself: gravitational waves.

These ripples in the fabric of spacetime travel at precisely the speed of light in a vacuum, and they cause space to alternately compress and rarefy, in mutually perpendicular directions, as the peaks and troughs of the gravitational waves pass over them. This inherently quadrupolar radiation affects the properties of the space it passes through, as well as all objects and entities within that space.

Gravitational waves propagate in one direction, alternately expanding and compressing space in mutually perpendicular directions, defined by the gravitational wave's polarization. Gravitational waves themselves, in a quantum theory of gravity, should be made of individual quanta of the gravitational field: gravitons. While they might spread out evenly over space, the amplitude is the key quantity for detectors, not the energy.

If you want to detect a gravitational wave, you need some way to be sensitive to both the amplitude and frequency of the wave you're searching for, and you also need to have some way to detect that it's affecting the region of space you're measuring. When gravitational waves pass through a region of space, distances along one transverse direction alternately stretch while distances along the perpendicular direction compress, and then vice versa, in time with the wave's peaks and troughs.

Numerous detection schemes have been proposed, including vibrating bars that would be sensitive to the oscillatory motion of a passing gravitational wave, pulsar timing that would be sensitive to oscillatory changes of gravitational waves that passed through the pulses' line-of-sight with respect to us, and reflected laser arms that span different directions, where the relative changes between the multiple path-lengths would reveal the evidence of a gravitational wave as it passed through.

When the two arms are of exactly equal length and there is no gravitational wave passing through, the signal is null and the interference pattern is constant. As the arm lengths change, the signal is real and oscillatory, and the interference pattern changes with time in a predictable fashion.

The last of these is precisely the first, and thus far the only, method by which we've ever successfully detected gravitational waves. Our first such detection came on September 14, 2015, and represented the inspiral and merger of two black holes of 36 and 29 solar masses, respectively. As they merged together, they formed a final black hole of only 62 solar masses, with the missing three solar masses getting converted into pure energy, via E = mc², in the form of gravitational waves.
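
Plugging numbers into that equation shows just how much energy three solar masses represents. The constants below are standard values, and the arithmetic is my own sketch, not the LIGO analysis itself.

```python
# Energy released by converting ~3 solar masses into gravitational waves (E = mc^2).
M_SUN = 1.989e30  # kg, one solar mass
C = 2.998e8       # m/s, speed of light in vacuum

energy_joules = 3 * M_SUN * C**2
print(f"{energy_joules:.2e} J")  # ~5.4e47 J, radiated mostly in a fraction of a second
```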

As those waves passed through planet Earth, they alternately compressed and rarefied our planet by less than the width of a blade of grass: a minuscule amount. However, we had two gravitational wave detectors, the LIGO Hanford and LIGO Livingston detectors, each consisting of two perpendicular laser arms, 4 km long, that reflected lasers back and forth over a thousand times before the beams were brought back together and recombined.

By observing the periodic shifts in the interference patterns created by the combined lasers, shifts caused by the gravitational waves passing through the space the laser light was traveling through, scientists were able to reconstruct the amplitude and frequency of the gravitational wave that passed through. For the first time, we'd captured these now-famous ripples in spacetime.
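
To get a feel for the scale involved, here is a rough strain calculation. The strain value h ~ 1e-21 is the typical order of magnitude quoted for events like this one; the sketch is illustrative reasoning, not the detectors' actual calibration pipeline.

```python
# How much does a passing gravitational wave change a LIGO arm?
ARM_LENGTH = 4_000.0  # m, one LIGO laser arm
BOUNCES = 1_000       # light is reflected back and forth ~1,000 times
h = 1e-21             # dimensionless strain, typical for a strong detected event

delta_L = h * ARM_LENGTH        # physical arm-length change: ~4e-18 m
accumulated = delta_L * BOUNCES # path difference the laser builds up
print(f"arm change: {delta_L:.1e} m, accumulated: {accumulated:.1e} m")
```

Bouncing the light back and forth is what makes such an absurdly small length change measurable: the effective optical path is a thousand times the physical arm.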

GW150914 was the first ever direct detection and proof of the existence of gravitational waves. The waveform, detected by both LIGO observatories, Hanford and Livingston, matched the predictions of general relativity for a gravitational wave emanating from the inward spiral and merger of a pair of black holes of around 36 and 29 solar masses and the subsequent ringdown of the single resulting black hole.

Since that time, the twin LIGO detectors have been joined by two other ground-based laser interferometer gravitational wave detectors: the Virgo detector in Europe, and the KAGRA detector in Japan. By the end of 2022, all four detectors will combine to produce an unprecedented gravitational wave detector array, allowing them to be sensitive to lower-amplitude gravitational waves originating from across more locations on the sky than ever before. Later this decade, they'll be joined by a fifth detector, LIGO India, which will increase their sensitivity even further.

You have to realize that every gravitational wave that passes through Earth comes in with a specific orientation, and only the orientations that cause substantial shifts in both perpendicular laser-arms of an individual detector can lead to a detection. The twin LIGO Hanford and LIGO Livingston detectors are specifically oriented for redundancy: the angles the detectors are at, relative to one another, are precisely compensated for by the curvature of the Earth. This choice ensures that a gravitational wave that appears in one detector will also appear in the other, but the cost is that a gravitational wave that one detector is insensitive to will be invisible to the other as well. In order to get better coverage, more detectors with a diversity of orientations, including detectors sensitive to orientations that LIGO Hanford and LIGO Livingston will miss, are necessary to win the Pokémon-esque game of catching them all.

The most up-to-date plot, as of November, 2021, of all the black holes and neutron stars observed both electromagnetically and through gravitational waves. While these include objects ranging from a little over 1 solar mass, for the lightest neutron stars, up to objects a little over 100 solar masses, for post-merger black holes, gravitational wave astronomy is presently only sensitive to a very narrow set of objects.

But even with up to five detectors, with four independent orientations between them, our gravitational wave capabilities will still be limited in two important ways: in terms of amplitude and frequency. Right now, we have somewhere in the ballpark of ~100 gravitational wave events, total, but all of them are from relatively low-mass, compact objects (black holes and neutron stars) that have been caught in the final stages of inspiraling and merging together. In addition, they're all relatively nearby, with black hole mergers extending out a few billion light-years and neutron star mergers reaching perhaps a couple of million light-years. So far, we're only sensitive to black holes that are around 100 solar masses or under.

Again, the reason is simple: gravitational field strengths increase the closer you get to a massive object, but the closest you can get to a black hole is determined by the size of its event horizon, which is primarily determined by the black hole's mass. The more massive the black hole, the larger its event horizon, and that means the greater the amount of time it takes for any object to complete an orbit while still remaining outside the event horizon. The lowest-mass black holes (and all of the neutron stars) allow for the shortest orbital periods around them, and even with thousands of reflections, a laser arm that's only 3-4 km long isn't sensitive to longer time periods.
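
A rough sketch of that mass-versus-frequency scaling, using two standard general relativity results: the innermost stable circular orbit (ISCO) at r = 6GM/c², and Kepler's third law for the orbital period at that radius. The code is my illustration, not anything from the article.

```python
# Why more massive black holes produce longer-period (lower-frequency) waves.
import math

G = 6.674e-11     # m^3 kg^-1 s^-2, gravitational constant
C = 2.998e8       # m/s, speed of light
M_SUN = 1.989e30  # kg, one solar mass

def isco_orbital_period(mass_solar: float) -> float:
    """Orbital period (s) at the innermost stable circular orbit, r = 6GM/c^2."""
    M = mass_solar * M_SUN
    r = 6 * G * M / C**2
    return 2 * math.pi * math.sqrt(r**3 / (G * M))  # Kepler's third law

for m in (30, 1e6):
    print(f"{m:g} solar masses -> period ~ {isco_orbital_period(m):.3g} s")
```

A 30-solar-mass binary completes its final orbits in about a hundredth of a second (hundreds of hertz, squarely in the LIGO band), while a million-solar-mass black hole takes minutes: far too slow for any kilometers-long arm to follow.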

Gravitational waves span a wide variety of wavelengths and frequencies, and require a set of vastly different observatories to probe them. The Astro2020 decadal offers a plan to support science in every one of these regimes, furthering our knowledge of the Universe as never before. By the end of the 2030s, we can expect a fleet of various gravitational wave observatories that are sensitive to many different classes of gravitational waves.

That's why, if we want to detect the gravitational waves emitted by any other sources, including far more massive black holes and systems with far longer orbital periods,

we need a new, fundamentally different set of gravitational wave detectors. The ground-based detectors we have today, despite how fabulous they truly are in their realm of applicability, are limited in amplitude and frequency by two factors that cannot be easily improved. The first is the size of the laser arm: if we want to improve our sensitivity or the frequency range that we can cover, we need longer laser arms. With ~4 km arms, we're already seeing just about the highest-mass black holes we can; if we want to probe either higher masses or the same masses at greater distances, we'd need a new detector with longer laser arms. We might be able to build laser arms perhaps ~10 times as long as the current limits, but that's the best we'll ever be able to do, because the second limit is set by planet Earth itself: the fact that it's curved, along with the fact that tectonic plates exist. Inherently, we can't build laser arms beyond a certain length or a certain sensitivity here on Earth.

With three equally spaced detectors in space connected by laser arms, periodic changes in their separation distance can reveal the passing of gravitational waves of appropriate wavelengths. LISA will be humanity's first detector capable of detecting spacetime ripples from supermassive black holes and the objects that fall into them. If these objects are found to exist prior to the formation of the first stars, that would be a smoking gun for the existence of primordial black holes.

But that's okay, because there's another approach that we should begin taking in the 2030s: creating a laser-based interferometer in space. Instead of being limited by either the fundamental seismic noise that cannot be avoided as the Earth's crust moves atop the mantle, or by our ability to construct a perfectly straight tube given the curvature of the Earth, we can create laser arms with baselines hundreds of thousands or even millions of kilometers long. This is the idea behind LISA: the Laser Interferometer Space Antenna, scheduled to be launched in the 2030s.

With LISA, we should be able to achieve pristine sensitivities at lower frequencies (i.e., for longer gravitational wave wavelengths) than ever before. We should be able to detect black holes in the thousands-to-millions of solar mass range, as well as mergers between black holes of highly mismatched masses. Additionally, we should be able to see sources that LIGO-like detectors will be sensitive to, except in much earlier stages, giving us months or even years of notice to prepare for a merger event. With enough such detectors, we should be able to pinpoint precisely where these merger events are going to occur, enabling us to point our other equipment (particle detectors and electromagnetically sensitive telescopes) to the right location at the critical moment. LISA, in many ways, will be the ultimate triumph for what we currently call multi-messenger astronomy: where we can observe light, gravitational waves, and/or particles originating from the same astrophysical event.

This illustration shows how the Earth, itself embedded within spacetime, sees the arriving signals from various pulsars delayed and distorted by the background of cosmic gravitational waves that propagate all throughout the Universe. The combined effects of these waves alter the timing of each and every pulsar, and long-timescale, sufficiently sensitive monitoring of these pulsars can reveal those gravitational signals.

But for even longer-wavelength events, generated by sources such as binary supermassive black holes and the relic gravitational waves left over from cosmic inflation,

we need even longer baselines to probe. Fortunately, the Universe delivers us exactly such a way to do it, naturally, simply by observing what's out there: precise, accurate, natural clocks, in the form of millisecond pulsars. Found all throughout our galaxy, including thousands and tens-of-thousands of light-years away, these natural clocks emit precisely timed pulses, hundreds of times per second, and are stable on timescales of years or even decades.

By measuring the pulse periods of these pulsars precisely, and by stitching them together into a continuously monitored network, the combined timing variations seen across pulsars can reveal signals that no currently proposed human-created detector could uncover. We know there ought to be many supermassive black hole binaries out there, and the most massive such pairs could even be detected and pinpointed individually. We have lots of circumstantial evidence that an inflationary gravitational wave background should exist, and we can even predict what its gravitational wave spectrum should look like, but we do not know its amplitude. If we're lucky in our Universe, in the sense that the amplitude of such a background is above the potentially detectable threshold, pulsar timing could be the Rosetta Stone that unlocks this cosmic code.
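
A toy numerical illustration of the idea; all of the numbers are invented, and real pulsar-timing analysis is far more sophisticated, cross-correlating dozens of pulsars rather than folding a single one.

```python
# Toy pulsar timing: a gravitational wave background appears as tiny,
# slow, correlated deviations in pulse arrival times. Numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 52)                   # weekly timing points over 10 years
gw_signal = 50e-9 * np.sin(2 * np.pi * t / 7)  # 50 ns residual with a ~7-year period
noise = rng.normal(0, 100e-9, t.size)          # 100 ns per-epoch timing noise

residuals = gw_signal + noise
# Averaging many epochs (and, in practice, many pulsars) beats down the noise:
yearly = residuals.reshape(10, 52).mean(axis=1)
print(np.round(yearly * 1e9, 1), "ns")  # the slow sinusoid emerges from the noise
```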

A mathematical simulation of the warped space-time near two merging black holes. The colored bands are gravitational-wave peaks and troughs, with the colors getting brighter as the wave amplitude increases. The strongest waves, carrying the greatest amount of energy, come just before and during the merger event itself. From inspiraling neutron stars to ultramassive black holes, the signals that we should expect the Universe to generate ought to span more than 9 orders of magnitude in frequency.

Although we firmly entered the era of gravitational wave astronomy back in 2015, this is a science that's still in its infancy: much like optical astronomy was back in the post-Galileo decades of the 1600s. We only have one type of tool for successfully detecting gravitational waves right now, can only detect them in a very narrow frequency range, and can only detect the closest ones that produce the largest-magnitude signals. As the science and technology underlying gravitational wave astronomy continue to progress across all of these fronts, however,

we're going to reveal more and more of the Universe as we've never seen it before. In combination with cosmic ray and neutrino detectors, and joined by traditional astronomy from across the electromagnetic spectrum, it's only a matter of time before we achieve our first trifecta: an astrophysical event where we observe light, gravitational waves, and particles all from the same event. It might be something unexpected, like a nearby supernova, that delivers it, but it might also come from a supermassive black hole merger billions of light-years away. One thing that's certain, however, is that whatever the future of astronomy looks like, it's definitely going to need to include a healthy and robust investment in the new, fertile field of gravitational wave astronomy!

Read the rest here:

Why gravitational waves are the future of astronomy - Big Think

Bad Astronomy | JWST observes the bizarre Cartwheel Galaxy | SYFY WIRE – Syfy

One of my favorite galaxies in the sky is the Cartwheel, a very strange beast indeed, located about 400 million light-years from Earth toward the constellation of Sculptor. It's about the same size as our Milky Way, over 100,000 light-years across. But its resemblance to our own home ends there.

You may be familiar with spiral galaxies, ellipticals, and irregulars, but there's a small class called peculiar: galaxies that have an overall shape, but that shape is strange. In this case, the Cartwheel is shaped like, well, a cartwheel.

It's unusual for an astronomical object to have a nickname that really hits the mark, but in this case, see for yourself:

That is a new JWST image of the Cartwheel, and holy wow is it spectacular! It's a combination of images taken by the Near-Infrared Camera, or NIRCam, and the Mid-Infrared Instrument, or MIRI.

The bizarre structure of the Cartwheel is thought to be due to a collision it had with a smaller galaxy hundreds of millions of years ago. Small galaxies whack into bigger ones fairly often, but in this case it was a bullseye: the smaller one passed right through the center of a bigger spiral at high speed. The gravitational interactions under such circumstances are different than usual: the smaller galaxy created an expanding ring of gas and stars in the bigger galaxy, very much like a rock dropped into a pond.

This splash expanded outward, creating the outer ring, and riled up the inner hub of stars in the big spiral, creating the smaller inner ring. After the collision, the gas and stars in the disk of the bigger (now ex-) spiral still tried to rotate around the center, but the gravitational disturbance piled them up into thin lanes, creating the spokes you can see connecting the inner and outer rings. If you're curious, a few years back astronomers created a series of animations simulating the gravitational encounter between the two, which you can find on their website. Our understanding of the collision has changed since then, but these will give you an idea of what happened.

The NIRCam images cover the wavelength range from about 0.9 microns (a wavelength just longer than the human eye can see) out to 4.44 microns, and are colored blue, green, yellow, and red. This shows older stars as well as the glow of dust from newborn stars, especially in the outer ring, as the gas clouds there compressed and formed new stars by the millions.

MIRI sees farther into the infrared. In the first image that's shown in shades of orange, but here is a combination of MIRI images on its own:

Here blue is actually a wavelength of 7.7 microns, green 10 microns, yellow 12.8, and red 18 microns, well out into what we call the thermal infrared, where warm objects glow. You glow in IR, with a peak brightness at about 10 microns. Just so's you know.
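
That ~10-micron figure falls straight out of Wien's displacement law for a body at skin temperature; a quick check using the standard constant (my sketch, with Betelgeuse's temperature taken as roughly 3,600 K):

```python
# Wien's displacement law: the peak wavelength of blackbody emission.
WIEN_B = 2898.0  # micron * kelvin, Wien's displacement constant

def peak_wavelength_microns(temp_k: float) -> float:
    """Wavelength (microns) at which a blackbody at temp_k glows brightest."""
    return WIEN_B / temp_k

print(f"human skin (~305 K): {peak_wavelength_microns(305):.1f} microns")   # ~9.5
print(f"Betelgeuse (~3600 K): {peak_wavelength_microns(3600):.2f} microns") # ~0.8
```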

Most stars that are luminous in visible light aren't terribly bright at these wavelengths (cooler red supergiants like Betelgeuse being an exception), so stars don't show up well in MIRI images. Instead we're seeing dust: tiny grains of silicates, or rocky material, and long chains of carbon molecules called polycyclic aromatic hydrocarbons, or PAHs. Think of them as soot, because that's basically what they are.

Dust is created in the atmospheres of dying stars and blown out into space in prodigious quantities; in 2020, when Betelgeuse dimmed so much, it was because it blew a huge cloud of dust into space that partially blocked our view of it. Dust is opaque in visible light, the kind of light we see, but in thermal IR it glows. That's what MIRI detects.

In the MIRI image you can see clumps of dust in the inner ring where bursts of star formation have created massive stars that burned through their fuel in only a few million years, turned into red supergiants, and blew out dust. Eventually those stars exploded as supernovae, creating even more dust. These expanding clouds of dust collide and form long streamers, called dust lanes, in spiral galaxies, but in the Cartwheel's outer ring the dust forms huge clumps. A recent supernova, found just last year in the outer ring, lends credence to this idea.

The spokes have a lot of dust too, created by stars that formed from gas compressed there as well, though it's also possible the disk already present in the spiral galaxy before the collision had lots of dust, and those clouds were all gathered up in the gravitational wake of the intruder galaxy, creating the nearly radial features.

And speaking of the intruder galaxy, where is it? You might think it's one of the two small galaxies on the left, but they're actually just innocent nearby galaxies; they are at roughly the same distance from us as the Cartwheel, so they may all be part of a small group. The intruder is actually a third nearby galaxy not seen in this relatively small section of sky observed by JWST. It can be seen in wider images, and in fact radio observations showing neutral hydrogen gas show the two are connected, a tail of gaseous debris left in the aftermath of the collision.

You can see why the Cartwheel is among my favorites. These images will help astronomers unravel what actually happened there. We already understand a lot about how collisions work, but each one is different. The more we observe, the better we'll grok them, and JWST will be able to probe what occurs before, during, and after these immense cosmic train wrecks.

Read the original post:

Bad Astronomy | JWST observes the bizarre Cartwheel Galaxy | SYFY WIRE - Syfy

Celestial Events Happening This Month That You’ll Want to Keep an Eye Out For – NBC Connecticut

August is a big month if you enjoy stargazing or astronomy.

First things first - the final supermoon of the year happens this month.

Unfortunately, that coincides with one of the bigger meteor showers of the year - the Perseids. The vibrant full moon will wash out all but the brightest meteors (still may be worth a shot to try and catch a few).

Pro tip: If you're interested in trying to catch some of the Perseids, try looking on a clear night away from the peak full moon.

Saturn reaches opposition around the middle of the month. This means that the Earth will be located between the Sun and Saturn. The ringed planet appears the biggest and brightest during this time of year.

The night of August 18, if weather permits, Saturn will be visible for most of the night. If you're interested in checking it out, grab your telescope to be able to see details of Saturn's rings and possibly some of the moons.

A little later in the month, the quarter Moon and Mars appear to 'meet' in the sky early on the morning of the 19th.

For more on what you can see by looking up into the August night sky, click here.

More:

Celestial Events Happening This Month That You'll Want to Keep an Eye Out For - NBC Connecticut

Award-winning researcher and prof has stars in his eyes, will give astronomy talk in Alpine – The San Diego Union-Tribune

Imagine a childhood filled with telescopes, night skies and an ongoing fascination with the stars, planets, and galaxies. Robert Quimby lived it.

"It's really all my parents' fault. They were both amateur astronomers, so some of my earliest memories growing up are of looking through a telescope," says the professor of astronomy at San Diego State University and director of the Mount Laguna Observatory. "We went on lots of star parties, which are basically camping trips with telescopes. I was independently driven, so I learned to star hop around and locate deep sky objects, like galaxies, myself."

On Friday, he'll present a lecture highlighting some of the research being done at the observatory, along with some of the nonprofit work being done to reduce local light pollution. His presentation begins at 2 p.m. at the Alpine Library.

Quimby, 45, lives in the College Area with his wife, Mika, and their two girls. He's received awards for his research and work from the Astronomical Society of the Pacific, the American Physical Society, and a share of the 2014 Breakthrough Prize in fundamental physics. He took some time to talk about his work, his first impressions of the breathtaking images from NASA's James Webb Space Telescope, and the time he played in a Reel Big Southern California ska band.

Q: In the description for your talk, the San Diego County Library's website mentions the Hidden Skies Foundation, a nonprofit run by high school students based in Los Angeles, and its work to preserve dark skies for future generations. First, can you talk about light pollution?

A: Any human-made light that shines where it is not needed, is not helpful, or is just generally a waste, is light pollution. This could be a streetlight shining through a bedroom window that gives you a rough night of sleep, although astronomers usually use the term when discussing the light that spills onto the night sky and obscures the stars. No one sets out to hide the stars behind the glare of electric lights, but just as trash collects in our rivers and beaches, the natural beauty of the night sky can be destroyed by light indiscriminately cast by outdoor lighting.

Q: And what is significant about the work to preserve the darkness of night skies? Why does that lack of light in the sky matter?

A: Dark, star-filled skies give us connection to our past and hopefully our future. From a dark site, you might see the same stars that your great-great-great grandparents enjoyed on their first date, or that dazzled our ancient ancestors thousands of years before. Light pollution breaks this connection by hiding the stars. I have seen the thrill of San Diegans glimpsing their first view of the Milky Way while camping in Mount Laguna, and I would say it is worth protecting these views for future generations to enjoy as well.

We have great neighbors where we live in College Area. Several families welcomed us to the neighborhood soon after we moved here, and our kids became fast friends. There are lots of friendly waves when people walk or drive by. And, there are four taco shops within walking distance!

Q: You've been director of SDSU's Mount Laguna Observatory since 2014. What have you come to learn about the surrounding area over the years and its place in the study of astronomy?

A: The Mount Laguna Observatory sits at 6,100 feet (500 feet higher than our colleagues to the north at Palomar Observatory, but who's counting!), so when the marine layer of clouds sets in, in typical May-gray/June-gloom fashion, we are usually in the clear above the clouds. Better yet, the low clouds block some of the light pollution from the cities and make the nights even darker. Being near the coast, we also get the gentle ocean breeze, which affords us much sharper views of the stars than the turbulent air farther inland.

Q: What drew you to become interested in this area of study?

A: [Astronomy] remained a hobby of mine into college when it came time to decide on a major. I started with engineering since I liked figuring out how things worked, but I gravitated to physics and astronomy, at least in part because I thought the professors were more interesting people. One once plopped down a bunch of rock-climbing gear at the beginning of a lecture then proceeded to talk the whole period without ever mentioning it. He was just doing some rock climbing before class. These extra dimensions of personality really appealed to me.

Q: Why was this something you wanted to pursue professionally?

A: I figured it's better than getting a real job. I love solving puzzles, but sometimes, when the puzzle is something you have to do, it can feel a lot like work. As an astronomer, there is a whole universe of puzzles for me to choose from.

Q: Earlier this month, most of us were in awe of the images of galaxies NASA shared from its James Webb Space Telescope. What initially went through your mind as you looked through those images?

A: I was really surprised at how awestruck I was. I have seen Stephan's Quintet and the Carina Nebula before, but the James Webb telescope pictures convey them with such power and beauty. They are at once familiar and otherworldly.

Q: And how did you think about what you saw from your perspective as an astronomy professor and researcher?

A: The first images show how much we have been missing. The James Webb telescope offers, quite literally, a new way to look at our universe, and we are starting to see things we have never seen before. It was quite terrifying at times to wonder what would happen to the future of astronomy if this telescope failed; now that it has arrived and is working superbly, I, for one, am elated.

Q: What's been challenging about your work in this field?

A: Like the universe itself, the field of astronomy is big and growing at an accelerated pace. It takes effort to stay on top of all of the new discoveries rolling in. It is also very competitive at times. Other groups are often working on projects similar to mine, so there is pressure to publish first.

Q: What's been rewarding about this work?

A: Every once in a while, you make a breakthrough discovery. I discovered a new class of supernovae and later discovered the first supernova magnified by a strong gravitational lens. It is quite a rush when you put the pieces together and realize you have something that no one has ever seen before.

Q: What has this work taught you about yourself?

A: A big part of science is telling the story. We report our findings in scientific journals and give professional and sometimes public talks. I never considered myself particularly good at writing as a student, but I have come to realize that the storytelling is something I enjoy.

Q: What is the best advice you've ever received?

A: For anyone considering getting their Ph.D., take a year off between undergraduate and graduate school and do something totally different. One of my college professors gave me this advice, and it gave me the opportunity to broaden my world view and, ultimately, meet my wife. If, like me, you find academia calling you back, then you will know that graduate school is right for you, and you will be motivated to stick with it even when it gets tough (it will).

Q: What is one thing people would be surprised to find out about you?

A: Before becoming an astronomer, I played trombone in the ska band Reel Big Fish. It has been a while since I last picked up my horn, but I can still, let's say, pick it up pretty fast.

Q: Describe your ideal San Diego weekend.

A: I have never done it before, but I would love to take a staycation at one of the local resorts in San Diego. Ideally, one with entertainment for the kids and relaxation for the parents. Barring that, I would enjoy a weekend featuring a hike in Mission Trails with my family and a trip to a new restaurant one day, followed by a relaxing day at the beach and a barbecue with friends and family the next day.

Follow this link:

Award-winning researcher and prof has stars in his eyes, will give astronomy talk in Alpine - The San Diego Union-Tribune

Planetary Debris Disks Discovered with Citizen Scientists and Virtual Reality – Scientific American

Astronomers have many tools for studying the cosmos: telescopes, satellites, interplanetary spacecraft, and more. The humble human eye is a critical part of this toolkit, too, as it can often spot patterns or aberrations that algorithms miss. And our vision's scrutinizing power has been bolstered recently by virtual reality (VR), as well as by thousands of eyes working in tandem thanks to the crowdsourcing power of the Internet.

Researchers at NASA's Goddard Space Flight Center recently announced the discovery of 10 stars surrounded by dusty debris disks: whirling masses of gas, dust and rock left over after the earliest phases of planet formation. This result, enabled by VR and the help of citizen scientists, was recently published in the Astrophysical Journal. The findings could help astronomers piece together a time line of how planetary systems are built.

Debris disks encompass various stages of planet formation, including the youthful eras in which worlds are still embedded in the detritus from the messy, chaotic processes of their birth. Although astronomers have managed to see a few directly, most of these young planets are beyond the reach of current telescopes. Making a planetary system takes millions of years, so each debris disk observers see is just a brief snapshot of one moment in that system's life. To uncover the whole story, astronomers search for many disk-wreathed planetary systems at different stages of evolution, gathering multiple snapshots to piece together in a time line.

To hunt for debris disks, observers usually start by looking for stars that appear especially bright in the infrared; that abnormal brightness typically comes from a surfeit of starlight-warmed dust in a disk around a star. NASA's infrared telescope WISE (Wide-field Infrared Survey Explorer) surveyed the entire sky, creating what in some respects is the most comprehensive catalog yet of stellar infrared measurements. With tens of thousands of data points to be analyzed and many debris disks likely hidden within the WISE catalog, what's a scientist to do?
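To make the infrared-excess idea concrete, here is a minimal sketch in Python of the kind of color cut such a search might start from. It assumes WISE W1 (3.4-micron) and W4 (22-micron) magnitudes as inputs, and the 0.25-magnitude threshold is purely illustrative; this is not Disk Detective's actual selection pipeline, which relies on human image inspection precisely because simple cuts like this produce many false positives.

```python
import numpy as np

def flag_infrared_excess(w1_mag, w4_mag, threshold=0.25):
    """Flag stars whose W1 - W4 color hints at warm circumstellar dust.

    For a bare stellar photosphere the mid-infrared color W1 - W4 sits
    near zero; a large positive value means the star is brighter at
    22 microns than its photosphere alone can explain, the classic
    signature of starlight-warmed dust. The threshold is a placeholder.
    """
    excess = np.asarray(w1_mag, float) - np.asarray(w4_mag, float)
    return excess > threshold

# Toy catalog: three stars, the last with a strong 22-micron excess.
w1 = [8.10, 9.42, 7.95]
w4 = [8.05, 9.30, 6.60]
print(flag_infrared_excess(w1, w4))  # [False False  True]
```

In practice a cut like this only nominates candidates; confirming a disk (and weeding out background galaxies and image artifacts) is exactly the step the project hands to human volunteers.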

"It's a great example of how so much of modern astronomy involves searching massive data sets for the proverbial needle in the haystack," says Meredith Hughes, an astronomer at Wesleyan University, who was not involved in the study. Even with machine-learning algorithms, it's still hard to train computers to do this complex work of identifying noisy patterns and noticing subtle deviations from expectations, which is where the collective brainpower of citizen science comes in.

A project called Disk Detective trained citizen scientists (regular people who want to help out on research in their spare time) to look at WISE images and compare them to those from other astronomical surveys, such as the SkyMapper Southern Sky Survey, the Pan-STARRS survey and the Two Micron All Sky Survey (2MASS), with the goal of confirming the presence of disks around each candidate star. Since the project's start in 2014, citizen scientists have found more than 40,000 disks; that's 40,000 snapshots of the history of how planets form.

To put these into a time line, though, astronomers need to figure out where each snapshot belongs. In other words, scientists need to know the ages of each star and its debris disk. "When we know the ages of stars and planets, we can place them in a sequence, from baby to teen to adult, if you like," says Marc Kuchner, a NASA astrophysicist and co-author of the new study. "That allows us to understand how they form and evolve."

Pinning down a star's age with any substantial precision is a notoriously tricky problem in astronomy. One solution is to match up a star to its siblings in an association known as a moving group. Stars often form in clusters from one giant cloud of gas, but many of these once-close stellar families drift apart as they age, their individual members spreading out across the Milky Way. By carefully measuring stars' locations and velocities, researchers can determine which stars display the telltale motions that, traced backward, reveal they were collectively born at the same time and place. Once astronomers know stars in a group are related, it's straightforward to calculate their age based on established knowledge of how stars grow and evolve.
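The traceback idea lends itself to a short sketch. The Python snippet below, on purely synthetic data, rewinds a set of stars along straight lines and measures how tightly they cluster at each past epoch; a genuine moving group shrinks toward a compact knot near its birth time. Real analyses work in a proper Galactic reference frame and account for the Galaxy's gravitational potential, which this straight-line approximation ignores.

```python
import numpy as np

def traceback_spread(pos_pc, vel_pc_per_myr, t_myr):
    """Mean distance (pc) of stars from their centroid, traced back t_myr.

    pos_pc         : (N, 3) present-day positions in parsecs
    vel_pc_per_myr : (N, 3) space velocities in pc/Myr (1 km/s is ~1.02 pc/Myr)
    Straight-line rewind: fine for short times, ignores the Galactic potential.
    """
    past = pos_pc - vel_pc_per_myr * t_myr
    return np.linalg.norm(past - past.mean(axis=0), axis=1).mean()

# Synthetic moving group: born 30 Myr ago in a ~2 pc cloud, then drifting apart.
rng = np.random.default_rng(0)
birth_pos = rng.normal(scale=2.0, size=(20, 3))   # compact birth site, pc
velocities = rng.normal(scale=1.0, size=(20, 3))  # pc/Myr
today = birth_pos + velocities * 30.0             # positions after 30 Myr of drift

for t in (0.0, 15.0, 30.0, 45.0):
    print(f"rewind {t:4.1f} Myr -> spread = {traceback_spread(today, velocities, t):5.1f} pc")
# The spread bottoms out near 30 Myr, recovering the group's kinematic age.
```

A disk-hosting star whose rewound path converges with a known group at that group's birth epoch can then inherit the group's age, which is how the snapshots get placed on the time line.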

Finding new moving-group members isn't easy. To do so, astronomers traditionally rely on analyzing preexisting lists of moving-group stars, flagging potential new members via sophisticated mathematical models. The team behind the new project wanted to try something different and more visceral: it used a VR program to zoom around the stars and get a clearer, three-dimensional perspective on how things move.

"I thought I would scare [NASA's VR scientists] away when I said I wanted to visualize the positions and velocities of four million stars," Kuchner says. "But they didn't bat an eyelash!" To create this virtual stellar cornucopia, the team used data from Gaia, a European Space Agency satellite that provides the best available measurements for the positions and velocities of stars in our galaxy. The resulting VR simulation served as a sort of time machine, too: knowing how fast and in what direction a star was moving allowed Kuchner and colleagues to trace its movement backward and forward in time.

While serving as a visiting researcher at NASA, lead study author Susan Higashio strapped on a VR headset to fly around the simulation's millions of stars. She examined where the stars with disks were in relation to known moving groups and extrapolated the stars' motions forward and backward in time to test their potential associations. "It was so exciting when the four million stars appeared in VR, but it felt a little dizzying when they all started to swirl around me," she recalls. "It was a really fun and interactive way to conduct science."

Higashio traced 10 of the debris disks from Disk Detective back to their moving-group families. The team then found the estimated ages of these disks, which ranged from 18 million to 133 million years old. All of them were extremely young compared with our home solar system, which is around 4.5 billion years old. The researchers also identified an entirely new moving group, called Smethells 165 after its brightest star. "Whenever we find a new moving group, that's a new batch of stars whose ages we know more precisely," Kuchner explains.

The astronomers also found one strange, extreme debris disk around a star nicknamed J0925 that doesn't quite fit into their expected time line of planet formation. It's much brighter in the infrared (meaning it has more dust) than expected for a star of its age. As debris disks get older, some of their dust spirals into the star or is blown away by stellar winds. J0925, however, seems to have just gotten a fresh delivery of hot dust, possibly from a recent collision between two protoplanets. Hughes highlights this star as the most interesting object uncovered in the study. Extreme debris disks are still a bit mysterious, but they are probably similar to what our solar system would have looked like during the giant impact that formed the Earth's moon.

Disk Detective's citizen-science work is still ongoing, now upgraded to use Gaia's most recent batch of data. The team hopes to identify even more moving-group members and new disks with their unique VR method. Lisa Stiller, one of the many citizen-scientist co-authors of the study, offers encouragement for prospective volunteers. "Don't hesitate to help out in a citizen-science project," she says. "Your help will be needed, in whatever form and for however much time you choose to dedicate."

Anyone with an Internet connection can still join the Disk Detective project, no experience needed. "More than 30,000 citizen scientists have contributed," Kuchner says. "The Disk Detectives are still working their way through hundreds of thousands of WISE images; we still need your help."

Go here to read the rest:

Planetary Debris Disks Discovered with Citizen Scientists and Virtual Reality - Scientific American