The Prometheus League
Breaking News and Updates
Monthly Archives: May 2021
Disturbing the Fermi Sea with Rydberg States – Physics
Posted: May 18, 2021 at 4:24 am
May 17, 2021 • Physics 14, 74
A method that enables long-range interactions between fermions on a lattice allows atomic quantum simulations of exotic quantum many-body phenomena.
Currently, one of the best ways to model complex quantum systems is through atomic quantum simulations. Controlling interactions between atoms is key to such simulations, something that can be achieved in atomic lattices using the well-established Feshbach-resonance approach. While that approach can be used to vary the strength of short-range interactions between atoms, it does not carry over to long-range interactions, leaving some interesting quantum systems outside of the technique's scope. Elmer Guardado-Sanchez at Princeton University and colleagues have now shown that such long-range interactions can be controlled using Rydberg dressing in a lattice of lithium (6Li) atoms [1]. The team's demonstration opens up unprecedented opportunities for exploring systems that exhibit rich fermionic many-body physics.
In the Feshbach-resonance approach to interaction control, a variable magnetic field is used to tune the scattering dynamics of colliding atoms. The use of this technique has led to the experimental observation of the crossover between the Bose-Einstein-condensation (BEC) regime, in which strongly interacting fermions form bosonic molecules, and the Bardeen-Cooper-Schrieffer (BCS) regime, in which weakly interacting fermions form loosely bound Cooper pairs. Quantum phenomena that can be simulated using such interactions range from the electron correlations behind high-temperature superconductors to the quantum kinematics taking place in distant neutron stars. Despite this versatility, there remains an important class of systems beyond the reach of simulations based on local interactions. Those systems are ones composed of spinless fermions, which the Pauli exclusion principle forbids from sitting on top of one another, making local interactions largely irrelevant. Instead, it is the long-range interactions that must be controlled.
One way to engineer such long-range interactions between spinless atomic fermions is to excite the atoms to Rydberg states, in which an electron occupies a high orbital. This method has been proposed theoretically as a way to mediate correlated topological density waves within a fermionic system [2]. Guardado-Sanchez and colleagues now implement the technique experimentally with an ensemble of spinless, fermionic 6Li atoms.
The team cooled a dilute gas of 6Li atoms in an optical lattice to a quantum degenerate temperature, one where each atom's de Broglie wavelength becomes larger than the interatomic spacing. Unable to occupy the ground state simultaneously (because of the Pauli exclusion principle), the atoms freeze one by one into the lowest momentum states available, forming a Fermi sea (Fig. 1). In this sea state, the atoms barely interact, and both thermal and quantum fluctuations are minimal.
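For reference, the degeneracy criterion invoked here is the standard textbook condition (not a formula quoted from the paper): degeneracy sets in when the thermal de Broglie wavelength exceeds the mean interatomic spacing,

\[ \lambda_{\mathrm{dB}} = \frac{h}{\sqrt{2\pi m k_B T}} \gtrsim n^{-1/3}, \]

where \(m\) is the atomic mass, \(T\) the temperature, and \(n\) the density of the gas.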
The team's next step was to use a laser to implement a Rydberg dressing scheme, which mixes the system's internal ground state with a highly excited Rydberg state. An atom in a Rydberg state exhibits a larger electric dipole moment than one in the ground state because of the greater distance between its ion core and its outermost electron. This dipole-moment enhancement produces an effective soft-core interaction between Rydberg-dressed atoms, meaning that the interaction strength remains roughly constant as the interparticle distance increases, before dropping off above a threshold length scale [2–4]. The researchers show that they can manipulate the strength and the range of this interaction by varying the intensity and frequency of the laser. Although the Rydberg-dressing-induced interaction is isotropic across the two-dimensional system, the motion (by quantum tunneling) of the fermions is restricted to one dimension. This limited freedom of motion hinders the infamous Rydberg avalanche loss process, by which Rydberg atoms collide, gain kinetic energy, and escape the trap.
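To put the "soft-core" language in formulas: in the Rydberg-dressing literature, the effective potential between two dressed atoms is commonly written in the standard form (quoted here for illustration, not taken from the paper)

\[ V(r) \simeq \frac{\tilde{V}}{1 + (r/r_c)^{6}}, \]

which is roughly constant at \(\tilde{V}\) for separations \(r \ll r_c\) and falls off as a van der Waals \(1/r^{6}\) tail beyond the cutoff radius \(r_c\); the laser intensity and detuning set \(\tilde{V}\), while the Rydberg blockade sets \(r_c\).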
The long-range interaction and the consequent hopping motion of the fermions generate many-body excitations, commonly called quantum fluctuations, on top of the Fermi sea. These collective quantum fluctuations can have tremendously rich features, yielding many kinds of quantum-correlated states of matter. The types of phenomena that arise in such a system of interacting fermions depend on the way in which the fermions pair up, or, more precisely, on the momenta of the participating fermions and the Cooper pairs that result. These momentum-dependent interactions, in turn, are governed largely by the range of the interaction relative to the lattice spacing. A soft-core interaction with a tunable length, such as that realized by Guardado-Sanchez and colleagues, could lead to abundant momentum-dependent behaviors, generating, for example, topological density waves [2] and chiral p+ip superfluidity [5]. Such p+ip superfluids support topological Majorana vortices and offer a plausible route toward realizing topological quantum computation.
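For concreteness, the chiral p+ip state mentioned above is characterized by a momentum-dependent pairing gap of the standard mean-field form (a generic expression, not one specific to this experiment)

\[ \Delta(\mathbf{k}) \propto k_x + i\,k_y, \]

whose phase winds once around the Fermi surface; it is this winding that makes the superfluid topological and endows its vortices with Majorana bound states.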
Even more exotic and counterintuitive phenomena may arise when different pairing possibilities occur simultaneously. For example, although mean-field theories typically predict that superfluidity appears in the presence of purely attractive interactions, functional renormalization group calculations suggest that a complex combination of different fermion pairings should generate unconventional f-wave superfluidity even with atomic repulsion [6]. Guardado-Sanchez and colleagues have so far only demonstrated attractive interactions, but tuning from attraction to repulsion is experimentally feasible [7]. Interesting effects should also arise when the interaction strength completely dominates the kinetic energy, with the system then being driven toward a Wigner crystal or fractional quantum Hall state [8, 9].
In the team's experiment, with its lattice-hopping fermions, the dynamical aspects of the system are more easily observed than the quantum many-body equilibrium states. Uncovering how to probe such states in a nonequilibrium setting should stimulate future theoretical investigation. On the application side, as well as the above-mentioned potential for topological quantum computing, long-range interaction control is a key step toward performing quantum simulations of quantum chemistry problems. Such simulations represent one arena ripe for applications employing the so-called quantum advantage to solve problems that would be intractable using classical computers. One strength of the team's scheme in realizing applications is that, unlike previously developed Feshbach-resonance techniques, it is magnetic-field-free. This aspect provides extra freedom to integrate the technique with certain magnetic-field-sensitive cold-atom quantum technologies, such as artificial gauge fields.
Xiaopeng Li is professor of physics in the Physics Department of Fudan University, China, jointly employed by Shanghai Qi Zhi Institute. He is active in quantum information science and condensed-matter theories, with his primary research interests in exploiting the quantum computation power of various quantum simulation platforms. He received his Ph.D. in physics from the University of Pittsburgh in 2013 and joined Fudan University as a faculty member in 2016 after three years at the University of Maryland, supported by a Joint Quantum Institute theoretical postdoctoral fellowship. He has been a full professor since 2019.
Elmer Guardado-Sanchez, Benjamin M. Spar, Peter Schauss, Ron Belyansky, Jeremy T. Young, Przemyslaw Bienias, Alexey V. Gorshkov, Thomas Iadecola, and Waseem S. Bakr
Phys. Rev. X 11, 021036 (2021)
Published May 17, 2021
Global Quantum Computing Market with Top Growth Companies, Size, Trends, Industry Analysis, and Key Players, Forecast 2020-2028 The Courier – The…
Posted: at 4:24 am
This report studies the Quantum Computing market, covering many aspects of the industry such as market size, status, trends, and forecast; it also provides brief information on competitors and specific growth opportunities with key market drivers. Find the complete Quantum Computing market analysis, segmented by company, region, type, and application, in the report.
The report offers valuable insight into the progress of the Quantum Computing market and related approaches, with an analysis of each region. It goes on to discuss the dominant aspects of the market and examine each segment.
Request a PDF sample of the report: https://marketresearchintelligence.com/request_sample.php?id=44
Report Highlights:
The report also eyes the latest trends, development plans, patterns, and policies in the global market. It covers the global Quantum Computing market overview, key player profiles, key developments, suppliers of raw materials, and dealers, among other information. It also includes market size, sales, share, industry growth rate, and revenue, and sheds light on essential components such as the size of the market and its share, along with forecast trends, specifications, and applications. The report summarizes present innovations, specifications, and parameters, and provides a complete abstract of fluctuations in demand rates.
Based on geography, the global Quantum Computing market is segmented into:
Ask for a discount: https://marketresearchintelligence.com/ask_for_discount.php?id=44
The report examines the demand and supply chain to understand the requirements of various global clients, along with some significant features. It also discusses turning points for the industry, presenting effective approaches for reaching global customers at scale. SWOT and Porter's Five Forces analyses have been used to assess the market on the basis of strengths, challenges, and global opportunities facing businesses. The report has been aggregated on the basis of recent scope, challenges facing businesses, and global opportunities to grow the Global Quantum Computing Market sector in the coming years.
The scope of the report:
The Quantum Computing Market Research Report is a comprehensive publication that aims to determine the financial outlook for the market. For the same reason, it offers a detailed understanding of the competitive landscape. It examines some of the key players, their leadership styles, their research and development status, and their expansion strategies. The report also includes product portfolios and the list of products in the pipeline. It provides a detailed explanation of advanced technology and the investments made to upgrade existing technologies.
Enquire before buying this premium report: https://marketresearchintelligence.com/enquiry_before_buying.php?id=44
The years considered to estimate the market size in this study are as follows:
In the end, the Global Quantum Computing Market report delivers a conclusion that includes research findings, market size estimation, market share, consumer needs/customer preference changes, and data sources. These factors will help increase business overall.
Table of Contents:
Part 01: Executive Summary
Part 02: Scope of the Report
Part 03: Research Methodology
Part 04: Market Landscape
Part 05: Pipeline Analysis
Part 06: Market Sizing
Part 07: Five Forces Analysis
Part 08: Market Segmentation
Part 09: Customer Landscape
Part 10: Regional Landscape
Part 11: Decision Framework
Part 12: Drivers and Challenges
Part 13: Market Trends
Part 14: Vendor Landscape
Part 15: Vendor Analysis
Part 16: Appendix
Note: If you have any special requirements, please let us know and we will tailor the report accordingly.
About us:
Market Research Intelligence is a pioneer in understanding current market trends in international business. Our expertise lies in delivering penetrating market insights, making us front-runners in market analysis for clients seeking world-class industry research. At Market Research Intelligence we work diligently to deliver erudite market insights and intelligent market reports; we take pride in providing comprehensive industry insights covering the market, its competitors, its products, and global customers. Through this approach, Market Research Intelligence has become synonymous with delivering best-in-class service.
Contact Us:
Email: sales@marketresearchintelligence.com
Phone: +44-2080403303
Address: 167-169 Great Portland Street, 6th Floor, London
XSEDE Webinar: ‘GPU Computing and Programming on Expanse’ Will Be Held on May 20 – HPCwire
Posted: at 4:24 am
May 12, 2021: There will be an XSEDE webinar on Thursday, May 20 at 1:00 p.m. U.S. Central time. The topic will be "GPU Computing and Programming on Expanse." This webinar provides a brief introduction to massively parallel computing with graphics processing units (GPUs) on the SDSC Expanse supercomputer.
The use of GPUs is becoming increasingly popular across all scientific domains, both for traditional simulations and AI applications, since GPUs can significantly accelerate time to solution for many computational tasks. In this webinar, participants will learn how to access Expanse GPU nodes, how to launch GPU jobs on Expanse, and get introduced to GPU programming. The webinar will cover the essential background of GPU chip architectures and the basics of programming GPUs with the NVIDIA HPC SDK via the use of libraries, OpenACC compiler directives, and the CUDA programming language. We will also briefly discuss performance analysis with NVIDIA Nsight profilers. Participants will thus acquire the foundation to use and develop GPU-aware applications.
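The webinar's material targets the NVIDIA HPC SDK in C and Fortran; purely as a flavor of the same grid/block launch concepts, here is a minimal CUDA-style kernel written in Python with Numba (an illustrative sketch, not webinar code; it assumes the numba package and a CUDA-capable GPU):

```python
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    # One array element per GPU thread; grid(1) is the thread's global index.
    i = cuda.grid(1)
    if i < x.shape[0]:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](2.0, x, y, out)  # implicit host/device copies

assert np.allclose(out, 2.0 * x + y)
```

The same one-element-per-thread pattern carries over directly to CUDA C kernels and to OpenACC's parallel loop directives.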
Registration is required to attend this event.
Source: XSEDE
Artificial Intelligence – National Cancer Institute
Posted: at 4:23 am
Artificial intelligence (AI) is everywhere: personal digital assistants answer our questions, robo-advisors trade stocks for us, and driverless cars will someday take us where we want to go. AI has penetrated our lives, and its use is exploding in biomedical research and health care, including across all dimensions of cancer research, where the potential applications for AI are vast.
Artificial Intelligence (AI) is a computer performing tasks commonly associated with human intelligence. Humans are coding or programming a computer to act, reason, and learn. An algorithm or model is the code that tells the computer how to act, reason, and learn.
Machine Learning (ML) is a type of AI that is not explicitly programmed to perform a specific task but rather can learn iteratively to make predictions or decisions. The more data an ML model is exposed to, the better it performs over time.
Deep Learning (DL) is a subset of ML that uses artificial neural networks, modeled after how the human brain processes information, to learn from huge amounts of data. A well-designed and well-trained DL model is able to perform classification tasks and make predictions with high accuracy, sometimes exceeding human expert-level performance.
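To make the ML definition above concrete, here is a minimal sketch in which a model is given only labeled examples, never explicit rules (illustrative only; it uses scikit-learn's bundled breast-cancer dataset and is not an NCI tool):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Labeled examples: tumor measurements plus a benign/malignant label.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0
)

# No hand-written rules: the model infers its decision boundary from the data,
# and exposure to more examples is what improves it.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```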
AI excels at recognizing patterns in large volumes of data, extracting relationships between complex features in the data, and identifying characteristics in data (including images) that cannot be perceived by the human brain. It has already produced results in radiology, where clinicians use computers to process images rapidly, thus allowing radiologists to focus their time on aspects for which their technical judgment is critical. For example, last year, the Food and Drug Administration approved the first AI-based software to process images rapidly and assist radiologists in detecting breast cancer in screening mammograms.
Integration of AI technology in cancer care could improve the accuracy and speed of diagnosis, aid clinical decision-making, and lead to better health outcomes. AI-guided clinical care has the potential to play an important role in reducing health disparities, particularly in low-resource settings. NCI will invest in supporting research, developing infrastructure, and training the workforce to help achieve these goals and more.
NCI-funded research has already led to several opportunities for the use of AI.
Scientists in NCI's intramural research program are leveraging the capabilities of AI to improve cancer screening in cervical and prostate cancer. NCI investigators developed a deep learning approach for the automated detection of precancerous cervical lesions from digital images. Read more about this in Mark's story.
Another group of NCI intramural investigators and their collaborators trained a computer algorithm to analyze MRI images of the prostate. Historically, standard biopsies of the prostate did not always produce the most accurate information. Starting 15 years ago, clinicians at NCI began performing biopsies guided by findings from MRI, enabling them to focus on regions of the prostate most likely to be cancerous. MRI-guided biopsy improved diagnosis and treatment when utilized by prostate cancer experts, but the method did not transfer well to clinics without prostate cancer expertise. The NCI clinicians used AI to capture their diagnostic expertise and made the algorithm accessible to clinics across the country as a tool to help with diagnosis and clinical decision-making.
The full potential of the MRI-guided biopsy developed by NCI researchers is being realized in clinics without prostate cancer-specific expertise because of this AI tool. New AI algorithms under development now aim to surpass the capabilities of well-trained radiologists by enabling the prediction of patient outcomes from MRI.
AI methods can also be used to identify specific gene mutations from tumor pathology images instead of using traditional genomic sequencing. For instance, NCI-funded researchers at New York University used deep learning (DL) to analyze pathology images of lung tumors obtained from The Cancer Genome Atlas. Not only could the DL method accurately distinguish between two of the most common lung cancer subtypes, adenocarcinoma and squamous cell carcinoma, it could predict commonly mutated genes from the images.
In the context of brain tumors, identifying mutations using noninvasive techniques is a particularly challenging problem. With NCI support, an international team, including investigators at Harvard University and the University of Pennsylvania, recently developed a DL method to identify IDH mutations noninvasively from MRI images of gliomas. These research findings suggest that, in the future, AI could help identify gene mutations in innovative ways.
NCI is leveraging the power of AI in multiple ways to discover new treatments for cancer. The Cancer Moonshot is supporting two major efforts in partnership with the Department of Energy (DOE) to leverage its supercomputing expertise and power for cancer research. In one effort, AI is being used to detect and interpret features of target molecules (e.g., proteins or nucleic acids that are important in cancer growth), make predictions for new drugs to target those molecules, and help evaluate the effectiveness of those drugs. Research is also being done to identify novel approaches for creating new drugs more effectively.
A project that is part of the second effort is using computational methods to model the interaction of the KRAS protein with the cell membrane in detailed ways that were not previously possible. A cross-agency research team collaborating with the RAS Initiative developed a model of KRAS-lipid membrane binding to simulate the behavior of KRAS at the membrane. This model could help identify novel ways to inhibit the activity of mutant KRAS protein. This work will help scientists find new avenues to target mutations in the KRAS gene, one of the most frequently mutated oncogenes in tumors. In the future, this could be applied to other important oncogenes.
The NCI-DOE collaboration is also enabling the application of DL to analyze patient information and cancer statistics collected by the NCI Surveillance, Epidemiology, and End Results (SEER) program. As part of this effort, DL algorithms were developed to extract tumor features automatically from pathology reports, saving thousands of hours of manual processing time. The goal of the project is to transform cancer care by applying AI capabilities to population-based cancer data in real time. This will help us better understand how new diagnostic methods, treatments, and other factors affect patient outcomes. Real-time data analysis will also allow newly diagnosed individuals to be linked with clinical trials that may benefit them. NCI's long-term investment in the SEER program and its infrastructure, coupled with newer investments in AI, will enable pattern recognition in population data that was impossible before. AI will aid in predicting treatment response, likelihood of recurrence (local or metastatic), and survival.
The potential applications of AI in medicine and cancer research hold great promise. Leveraging these opportunities will require increased investment and overcoming several challenges.
The data science and AI communities will be important partners in realizing the promise of AI in cancer research. NCI can engage these communities by providing appropriate funding opportunities and access to data sources; linking cancer researchers and AI researchers; and supporting the training and development of a workforce with expertise in AI, data science, and cancer. Building on the NCI-DOE collaboration, a series of workshops is being held to build a community engaged in pushing the limits of current computational practices in cancer research to develop new computational technologies.
Currently, the use of AI in cancer research and care is in its infancy. Most research is focused on methods development, rather than on implementing those methods in clinical practice. NCI has an opportunity to lead the way in implementing AI in cancer care by supporting research to find effective pathways for clinical integration (including ways to understand uncertainty and validate AI approaches), educating medical personnel about the strengths and weaknesses of the technology, and rigorously assessing its benefits in terms of clinical outcomes, patient experience, and costs.
The lack of large, publicly available, well-annotated cancer datasets has been a significant barrier for AI research and algorithm development. The lack of benchmarking datasets in cancer research hampers reproducibility and validation. Support for annotation, harmonization, and sharing of standardized cancer datasets to drive AI innovation and support training and validation of AI models will be essential. With even greater volumes of data anticipated in the future, support for developing approaches to generate and aggregate new research and clinical data coherently will be critical for long-term success.
To support this work and to make cancer data broadly available for all types of research, NCI is refining policies and practices to enhance and improve data sharing. As part of those efforts, NCI is building a Cancer Research Data Commons (CRDC). One node of the CRDC is an Imaging Data Commons that will connect to The Cancer Imaging Archive, a unique resource of publicly available, archival cancer images with supporting data to enable discovery. NCI also recently launched the Childhood Cancer Data Initiative to accelerate progress for children, adolescents, and young adults with cancer by optimizing the collection, aggregation, and utility of research and clinical data.
NCI's data aggregation and sharing efforts are crucial to moving AI and many areas of cancer research forward. As new sources of biomedical and health data emerge, the amount of information will continue growing faster than it can be interrogated. AI will be an essential tool for processing, aggregating, and analyzing the vast amounts of information these data hold to drive discovery and improve patient care.
One challenge of AI, and DL specifically, is the black box problem: not fully understanding what features of the data a computer has used in its decision-making process. For example, a DL algorithm that predicts the optimal treatment for a patient does not provide the reasoning it used to make that prediction. Additional efforts are needed to reveal how algorithms arrive at a decision or prediction so that the process becomes transparent to scientists and clinicians. Making these algorithms transparent could help researchers identify new biological features relevant to disease diagnosis or treatment.
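A common first step toward the transparency called for above is to measure which inputs a trained model actually relies on. One generic technique (not a method from the article) is permutation importance, sketched here on the same kind of classifier as in the earlier snippet:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the score drops:
# a large drop flags a feature the "black box" actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for importance, name in sorted(
    zip(result.importances_mean, data.feature_names), reverse=True
)[:5]:
    print(f"{name}: {importance:.4f}")
```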
Incorporating information about biological processes into an algorithm is likely to improve its accuracy and decrease dependence on large amounts of annotated data, which may not be available. One danger of the black box problem is that DL may inadvertently perpetuate existing unconscious biases. Researchers need to consider carefully how potential biases affect the data used to develop a model, adopt practices to address those biases, and monitor the performance and applicability of AI models.
With increased investments, NCI's efforts to realize AI's potential will lead to more accurate and rapid diagnoses, improved clinical decision-making, and, ultimately, better health outcomes for patients with cancer and those at risk.
What is Artificial Intelligence | Artificial Intelligence …
Posted: at 4:23 am
In this blog on What is Artificial Intelligence, we're going to talk about what Artificial Intelligence is and how it is useful for us. Let's begin by looking at the following concepts:
Do you think the concept and existence of Artificial Intelligence are new?
Well, before there was an internet, people were researching Artificial Intelligence by reading books or checking out articles in the newspaper.
What is Artificial Intelligence? Well, this key term sure did spark a lot of curiosity back in the day!
People wanted to know if they could teach computers to learn the way a young child does. The concept was basically to use trial and error to develop formal reasoning.
The term Artificial Intelligence was actually coined way back in 1956 by John McCarthy, a professor at Dartmouth.
For years, it was thought that computers would never match the power of the human brain, but this has proven to not be the case.
Well, back then we did not have enough data and computation power, but now with Big Data coming into existence and with the advent of GPUs, Artificial Intelligence is possible.
Did you know that 90% of the world's data has been generated in the past two years alone? Computers can make sense of all this information more quickly.
Very soon, we can see Artificial Intelligence being a little less artificial and a lot more intelligent.
Artificial Intelligence, in my opinion, is the simulation of human intelligence by machines programmed by us. The machines need to learn how to reason and do some self-correction as needed along the way.
Now that we have detailed algorithms which Artificial Intelligence systems can make use of, they can perform huge tasks faster and more efficiently.
Machine Learning and Deep Learning are just ways to achieve Artificial Intelligence.
In contrast, some AI experts believe projections of human-level AI are wildly optimistic given our limited understanding of the human brain, and hold that Artificial Intelligence of that kind is still centuries away.
In this blog about What is Artificial Intelligence, let's now walk through some of the types of Artificial Intelligence.
Artificial Intelligence can be popularly categorized in two ways:
Let's talk about Narrow AI:
Narrow AI is an Artificial Intelligence system that is designed and trained for one particular task. Virtual assistants such as Amazon's Alexa and Apple's Siri use narrow AI.
Narrow AI is sometimes also referred to as Weak AI. However, that doesn't mean that Narrow AI is inefficient or anything of that sort.
On the contrary, it is extremely good at routine jobs, both physical and cognitive. It is Narrow AI that is threatening to replace many human jobs throughout the world.
However, my curiosity about What is Artificial Intelligence didn't stop here. I kept digging a little further.
Here's when I found out more about Wide AI:
Wide AI is a system with cognitive abilities so that when the system is presented with an unfamiliar task, it is intelligent enough to find a solution.
Here the system is capable of having intelligent behavior across a variety of tasks from driving a car to telling a joke.
The techniques aim at replicating and surpassing many (ideally all) capacities of human intelligence such as risk analysis and other cognitive processes.
Artificial Intelligence is used almost everywhere today, in systems such as mail spam filtering, credit-card fraud detection, virtual assistance, and so on.
I believe there is no end or limitation to the number of applications we have with Artificial Intelligence to make our lives better!
Next up, in this What is Artificial Intelligence blog, let's go through some of the use cases that I believe stand out.
Well, in the late '90s, when the common man was still wondering what Artificial Intelligence was, we had computers trained to play games and solve basic problems.
Deep Blue was a chess-playing computer developed by IBM.
It is known for being the first computer chess-playing system to win both a chess game and a chess match against a reigning world champion under regular time controls.
Today, the Artificial Intelligence available in the free chess games on your phone is exponentially faster and better than Deep Blue.
What we mainly require is the use of Artificial Intelligence and technology to ensure that help arrives faster. We can start by developing systems that help first responders find victims of earthquakes, floods, and other natural disasters.
Normally, responders need to examine aerial footage to determine where people could be stranded. However, examining a vast number of photos and hours of drone footage is very time- and labor-intensive.
This is a time-critical process, and it might very well be the difference between life and death for the victims.
An Artificial Intelligence system developed at Texas A&M University permits computer programmers to write basic algorithms that can examine extensive footage and find missing people in under two hours.
Hunting and poaching of wildlife species is a global problem, as it leads to extinction.
For example, the latest African census showed a 30% decline in elephant populations between 2007 and 2014. Wildlife conservation areas have been established to protect these species from poachers, and these areas are protected by park rangers. The Rangers, however, do not always have the resources to patrol the vast areas efficiently.
Uganda's Queen Elizabeth National Park uses predictive modeling to predict poaching threat levels. Such models can be used to generate efficient and feasible patrol routes for the park rangers.
In my opinion, neural networks work well to provide smart agricultural solutions, covering everything from complete monitoring of soil and crop yield to predictive analytic models that track the factors and variables affecting future yields.
For example, the Berlin-based agricultural tech startup PEAT has developed a deep learning algorithm-based application called Plantix which can identify defects and nutrient deficiencies in the soil.
Their algorithms correlate particular foliage patterns with certain soil defects, plant pests and diseases.
Well, one day you're wondering What is Artificial Intelligence, and the next, robots are ready to perform surgical procedures on you?
Robots today are machine learning-enabled tools that provide doctors with extended precision and control. These machines shorten the patient's hospital stay, positively affect the surgical experience, and reduce medical costs all at once.
Similarly, mind-controlled robotic arms and brain chip implants have begun helping paralyzed patients regain mobility and sensations of touch.
Overall, Machine learning and Artificial Intelligence are helping improve patient experience on the whole.
It is amazing to see applications like iNaturalist and eBird collect data on the species encountered. This helps keep track of species populations, ecosystems, and migration patterns.
As a result, these applications also have an important role in the better identification and protection of marine and freshwater ecosystems as well.
I personally believe that Artificial Intelligence will revolutionize all aspects of our daily life. It will be subtle enough and have a big impact on everything around us!
I hope you have enjoyed my post on what is Artificial Intelligence. If you have any questions, mention them in the comments section and I will reply ASAP.
This video on Artificial Intelligence gives you a brief introduction to AI and how AI can change the world.
Helping students of all ages flourish in the era of artificial intelligence – MIT News
Posted: at 4:23 am
A new cross-disciplinary research initiative at MIT aims to promote the understanding and use of AI across all segments of society. The effort, called Responsible AI for Social Empowerment and Education (RAISE), will develop new teaching approaches and tools to engage learners in settings from preK-12 to the workforce.
"People are using AI every day in our workplaces and our private lives. It's in our apps, devices, social media, and more. It's shaping the global economy, our institutions, and ourselves. Being digitally literate is no longer enough. People need to be AI-literate to understand the responsible use of AI and create things with it at individual, community, and societal levels," says RAISE Director Cynthia Breazeal, a professor of media arts and sciences at MIT.
"But right now, if you want to learn about AI to make AI-powered applications, you pretty much need to have a college degree in computer science or a related topic," Breazeal adds. "The educational barrier is still pretty high. The vision of this initiative is: AI for everyone else, with an emphasis on equity, access, and responsible empowerment."
Headquartered in the MIT Media Lab, RAISE is a collaboration with the MIT Schwarzman College of Computing and MIT Open Learning. The initiative will engage in research coupled with education and outreach efforts to advance new knowledge and innovative technologies to support how diverse people learn about AI as well as how AI can help to better support human learning. Through Open Learning and the Abdul Latif Jameel World Education Lab (J-WEL), RAISE will also extend its reach into a global network where equity and justice are key.
The initiative draws on MIT's history as both a birthplace of AI technology and a leader in AI pedagogy. "MIT already excels at undergraduate and graduate AI education," says Breazeal, who heads the Media Lab's Personal Robots group and is an associate director of the Media Lab. "Now we're building on those successes. We're saying we can take a leadership role in educational research, the science of learning, and technological innovation to broaden AI education and empower society writ large to shape our future with AI."
In addition to Breazeal, RAISE co-directors are Hal Abelson, professor of computer science and education; Eric Klopfer, professor and director of the Scheller Teacher Education Program; and Hae Won Park, a research scientist at the Media Lab. Other principal leaders include Professor Sanjay Sarma, vice president for open learning. RAISE draws additional participation from dozens of faculty, staff, and students across the Institute.
"In today's rapidly changing economic and technological landscape, a core challenge nationally and globally is to improve the effectiveness, availability, and equity of preK-12 education, community college, and workforce development. AI offers tremendous promise for new pedagogies and platforms, as well as for new content. Developing and deploying advances in computing for the public good is core to the mission of the Schwarzman College of Computing, and I'm delighted to have the College playing a role in this initiative," says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing.
The new initiative will engage in research, education, and outreach activities to advance four strategic impact areas: diversity and inclusion in AI, AI literacy in preK-12 education, AI workforce training, and AI-supported learning. Success entails that new knowledge, materials, technological innovations, and programs developed by RAISE are leveraged by other stakeholder AI education programs across MIT and beyond to add value to their efficacy, experience, equity, and impact.
RAISE will develop AI-augmented tools to support human learning across a variety of topics. "We've done a lot of work in the Media Lab around companion AI," says Park. "Personalized learning companion AI agents such as social robots support individual students' learning and motivation to learn. This work provides an effective and safe space for students to practice and explore topics such as early childhood literacy and language development."
Diversity and inclusion will be embedded throughout RAISE's work, to help correct historic inequities in the field of AI. "We're seeing story after story of unintended bias and inequities that are arising because of these AI systems," says Breazeal. "So, a mission of our initiative is to educate a far more diverse and inclusive group of people in the responsible design and use of AI technologies, who will ultimately be more representative of the communities they will be developing these products and services for."
This spring, RAISE is piloting a K-12 outreach program called Future Makers. The program brings engaging, hands-on learning experiences about AI fundamentals and critical thinking about societal implications to teachers and students, primarily from underserved or under-resourced communities, such as schools receiving Title I services.
To bring AI to young people within and beyond the classroom, RAISE is developing and distributing curricula, teacher guides, and student-friendly AI tools that enable anyone, even those with no programming background, to create original applications for desktop and mobile computing. "Scratch and App Inventor are already in the hands of millions of learners worldwide," explains Abelson. "RAISE is enhancing these platforms and making powerful AI accessible to all people for increased creativity and personal expression."
Ethics and AI will be a central component of the initiative's curricula and teaching tools. "Our philosophy is, have kids learn about the technical concepts right alongside the ethical design practices," says Breazeal. "Thinking through the societal implications can't be an afterthought."
"AI is changing the way we interact with computers, as consumers as well as designers and developers of technology," Klopfer says. "It is creating a new paradigm for innovation and change. We want to make sure that all people are empowered to use this technology in constructive, creative, and beneficial ways."
"Connecting this initiative not only to [MIT's schools of] engineering and computing, but also to the School of Humanities, Arts and Social Sciences recognizes the multidimensional nature of this effort," Klopfer adds.
Sarma says RAISE also aims to boost AI literacy in the workforce, in part by adapting some of its K-12 techniques. "Many of these tools, when made somewhat more sophisticated and more germane to the adult learner, will make a tremendous difference," says Sarma. For example, he envisions a program to train radiology technicians in how AI programs interpret diagnostic imagery and, vitally, how they can err.
"AI is having a truly transformative effect across broad swaths of society," says Breazeal. "Children today are not only digital natives, they're AI natives. And adults need to understand AI to be able to engage in a democratic dialogue around how we want these systems deployed."
How artificial intelligence can help us get back to the humanity of college admissions decisions | TheHill – The Hill
Posted: at 4:23 am
The job of a college admissions officer is not an easy one. For any competitive higher learning institution, the admissions process used to hand pick each incoming student is one that has also drawn increased scrutiny over the years.
To ensure the ongoing success of an institution, admissions officers are saddled with the nearly impossible task of efficiently evaluating thousands of applications each school year, with the expectation that their choices will reflect the institution's standards, grow diversity, and that the students chosen will be inspired enough to enroll and attend classes in the fall.
The process is a balancing act, and one that is expected to proceed without gender-based or racial bias. The problem? Humans are inherently biased, and schools are now beginning to realize the faults in their traditional approach to admissions, one that has placed an outsized emphasis on test scores and transcripts and often fails to find the human factor in applicants. The flaws in this system also tend to leave underprivileged groups behind and keep underrepresented demographics as anomalies.
Surprisingly, the solution to this issue, to this lack of humanity, might be found through the use of artificial intelligence.
"The mission of the organization is to bring a human aspect back into the admissions process," said Andrew Martelli, the chief technology officer at Kira Talent.
The Canadian-founded company works with learning institutions around the world in hopes of delivering a more holistic approach to reviewing candidates. Hopeful students applying to institutions that partner with Kira undergo a video interview process in which they never encounter another live person. Instead, video- and text-based prompts lead them through a series of questions, and the applicants' answers are then used to evaluate things like leadership potential, verbal and written communication skills, comprehension of key concepts, drives and motivations, and professionalism.
Martelli tells us that artificial intelligence (AI) has entered the picture in a beta phase: one that is used not to evaluate students, but rather the admissions officers and their possible biases.
"It's almost more of a science experiment, to understand things like: are people accidentally or inadvertently introducing bias," he said. When schools express interest in it, they are presented with an AI-based tool that takes video data and analyzes personality traits and behaviors. "We take the very same footage that you view as an admissions person to get a sense of the applicant, and we have them run it through a series of algorithms. Schools are then able to run the algorithms, which give them AI-based data to then compare to what their human reviewers said."
The idea behind the technology is to help the human reviewer ask questions of themselves: Did I see these traits or qualities? Am I missing something? "So the emphasis is not on using AI to replace the human aspect of the process. Our whole focus is on helping the human be a better evaluator of other humans."
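As a toy illustration of that comparison, one could correlate human reviewers' ratings with the algorithm's ratings and flag sharp disagreements for a second look; the data and threshold below are entirely hypothetical, not Kira's actual pipeline:

```python
import numpy as np

# Hypothetical 1-5 ratings of ten applicants by a human reviewer and the model.
human = np.array([4, 3, 5, 2, 4, 3, 1, 5, 2, 4], dtype=float)
ai = np.array([4, 4, 5, 2, 2, 3, 2, 5, 4, 4], dtype=float)

# Overall agreement between the two raters.
print(f"correlation: {np.corrcoef(human, ai)[0, 1]:.2f}")

# Sharp disagreements are the cases worth a second human look for possible bias.
for idx in np.where(np.abs(human - ai) >= 2)[0]:
    print(f"applicant {idx}: human={human[idx]:.0f}, ai={ai[idx]:.0f}")
```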
It's those principles that the company utilized last year in its partnerships with schools like California State University (CSU) Fullerton. Members of the admissions committee were able to pre-record questions for the students to answer through video interviews.
"Kira allowed us to bring our own personality," said Deanna Jung, Assistant Professor of Nursing and Coordinator of Pre-Licensure Programs. "We have a diverse faculty, so there was a diverse group of individuals reading the questions. Students were able to watch those videos and think, okay, there are faculty who teach here who are like me."
Automating bias
Not all AI systems are created equal, though, or free of unconsciously programmed bias. At the end of the day, data scientists are still human, meaning that many of the subjective choices they make as they create and refine training data can introduce racial bias into machine learning systems.
Human bias is an issue that pervades nearly every industry and facet of life, certainly not just the process of college admissions. Over the last few years, society has become acutely more aware of how these human prejudices can affect people's lives. These biases can slip into AI systems, creating what is called algorithmic bias, which takes various forms, from gender bias to racial prejudice and age discrimination.
We've already seen how algorithmic bias can lead to damaging consequences, as in 2016 when Microsoft released an AI-based chatbot on Twitter. Its goal was to interact with people through tweets and direct messages, but within hours of its release it began replying to users with offensive and racist language. The chatbot, which was trained on anonymous public data and utilized a built-in internal learning feature, was targeted by hate groups to introduce racial bias into its system.
Luckily, researchers are working daily to figure out how to mitigate the possibility of introducing racial bias into AI-based systems. One postgraduate researcher at the Massachusetts Institute of Technology, Joy Buolamwini, even founded the Algorithmic Justice League, with the objective of highlighting the social and cultural implications of AI bias using both art and scientific research.
Combatting summer melt
A much more successful AI-based messaging experiment than Microsoft's 2016 disaster was recently utilized by Georgia State University. The same year, the university introduced an AI chatbot called Pounce, whose objective was to reduce what schools refer to as summer melt.
Summer melt is what happens when enrolled students drop out during the summer, before their first fall semester even begins. According to the university, Pounce was able to reduce the occurrence of summer melt by an impressive 22 percent, which translated to an additional 324 students showing up for their first day of classes in the fall.
Realizing the power of communicating with students through text message but lacking the staff to implement it, Georgia State partnered with the Boston-based education technology company AdmitHub.
More than half of the university's students hail from low-income backgrounds, and many of them are first-generation college students, a demographic that has shown the need for individual attention and financial aid, both of which help enrolled students show up ready to start classes once the semester begins.
The admissions team worked with AdmitHub to identify these obstacles and fed information and answers into Pounce, which students could then direct their questions to at any time of the day or night by text message. In the first year of implementing Pounce, the AI-based system had answered more than 200,000 questions from incoming freshmen.
"Every interaction was tailored to the specific student's enrollment task," says Scott Burke, assistant vice president of undergraduate admissions at Georgia State, on the university's website. "We would have had to hire 10 full-time staff members to handle that volume of messaging without Pounce."
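Pounce itself is proprietary, but the core pattern (match an incoming text against a curated FAQ, answer when confident, escalate otherwise) can be sketched in a few lines. Here the standard library's fuzzy matching stands in for AdmitHub's actual NLP, and the FAQ entries are hypothetical:

```python
from difflib import get_close_matches

# Curated question/answer pairs fed in by the admissions team (hypothetical).
faq = {
    "when is the enrollment deposit due": "Your enrollment deposit is due May 1.",
    "how do i submit my fafsa": "File the FAFSA at fafsa.gov and list our school.",
    "where do i send my final transcript": "Send transcripts to the Office of Admissions.",
}

def answer(question: str) -> str:
    """Fuzzy-match an incoming text against the known FAQ questions."""
    match = get_close_matches(question.lower().strip(" ?!."), faq, n=1, cutoff=0.6)
    if match:
        return faq[match[0]]
    # Low confidence: hand off to a human, as a real system would.
    return "I'm not sure about that one. Routing you to an admissions counselor."

print(answer("When is the enrollment deposit due?"))
print(answer("Can I bring my dog to the dorm?"))
```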
The future of AI and education
What experts seem to agree on is that the sole use of AI will never be best practice for college admissions decisions, at least for now. Nevertheless, AI-based systems can serve an increasingly important purpose for schools, not only streamlining teams and processes but also promoting education about unconscious biases among admissions officers.
"I do believe that schools continuously look for ways to adjust their practices. I think COVID has also caused people to take a hard look at the processes that they use to try to find ways to make them more convenient, to make them more accessible, to make them safer because of the social distancing and other requirements," says Martelli.
"I also think a lot of the social movements that we see in place today have asked for schools to take a harder look into their practices and the processes, and the ways they make these admissions decisions."
As far as the future of AI-based systems, Martelli preaches cautious optimism, saying that it has to be implemented in the right ways. The chief technology officer said that along with the promise that AI shows, there is a lot of danger as well. Experiments over the years have shown just how easy it is for algorithmic bias to make its way into an AI-based system, and Martelli says that a biased sample could only serve to perpetuate some of the problematic decision making of the past.
"When you think about using those kinds of tools, we still think it needs a person at the heart of the whole system to make the judgment about another human," he says. "Do I think there's promise there? For sure. Do I think we have to be careful about how we apply it? 100 percent."
A version of this article can also be found on The Hill.
Of All Things: Artificial intelligence is real | News | montgomerynews.com – Montgomery Newspapers
Posted: at 4:23 am
There seem to be a lot of articles about artificial intelligence in newspapers and magazines these days. Some of the other stuff in print makes me think that what we need is more regular intelligence.
Last week, the legislative branch of the 27-country European Union, headquartered in Brussels, announced plans to restrict the use of artificial intelligence. It's an attempt to head off abuse of artificial intelligence technology, instead of waiting for it to become a problem the way the United States does.
Artificial intelligence simulates human intelligence in computers that are programmed to think and act like human beings. (Hey, what could go wrong?)
Originally, artificial intelligence meant a machine doing something that would previously have needed human intelligence. From what I'm reading these days, I worry that the artificial intelligence may be more intelligent than the human kind.
All of the major computer companies seem to offer virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Assistant, for instance).
Alexa, for another instance, can handle your e-mail, your shopping list, the radio and television, cooking, a wake-up call, communication with friends and family, and generally can run your life.
Its hard to believe (at least for an old guy like me) to read about some of the things artificial intelligence can do.
For instance, some artificial intelligence systems can allow you to deposit checks in the bank from your living room, and, if necessary, some can decipher the handwriting on the check.
Artificial intelligence can also detect fraudulent use of a credit card by observing the user's normal credit card spending patterns.
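The fraud detection the column describes (learn a user's normal spending pattern, then flag departures from it) is essentially anomaly detection. A minimal sketch with scikit-learn's IsolationForest, on toy data rather than any bank's real system:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# A user's normal history: modest dollar amounts at familiar hours of the day.
normal = np.column_stack([
    rng.normal(40, 15, size=500),  # typical purchase amount (dollars)
    rng.normal(14, 3, size=500),   # typical purchase hour (0-23)
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A routine afternoon coffee versus a large 3 a.m. purchase.
candidates = np.array([[35.0, 13.0], [900.0, 3.0]])
print(model.predict(candidates))  # 1 = looks normal, -1 = flagged as anomalous
```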
Youre likely to run into that sort of electronic voodoo any time in these ever-increasing days of artificial intelligence.
The intelligence algorithms can detect and remove hate speech faster than a human censor can. They are able to identify key words and phrases.
Google Maps, I'm told, not only tells you how to drive to a destination but, thanks to an artificial intelligence algorithm, tells you what time you'll get there, based on traffic conditions.
The Google app algorithm remembers the edges of buildings that have been fed into the system after the owner has manually identified them.
Another feature is the electronic (or possibly voodoo again) recognizing and understanding of handwritten house numbers. (On paper, I presume, not on the houses.)
The scary thing about the foregoing is that the people who devise, and write about, all this new technology claim that the field of artificial intelligence is still in its infancy. More programs are still to come, they tell us, that will much more accurately replicate human capabilities.
I wonder how long it will be before the computers tell us to just go home and take a nap, and they'll take care of everything.
Next thing you know, dear reader, weekly columns like this may be turned out by artificial intelligence instead of by good old-fashioned writers like me. Please don't tell me that you won't know the difference.
Originally posted here:
Of All Things: Artificial intelligence is real | News | montgomerynews.com - Montgomery Newspapers
Posted in Artificial Intelligence
Comments Off on Of All Things: Artificial intelligence is real | News | montgomerynews.com – Montgomery Newspapers
Artificial Intelligence Identifies IBM And Netflix Among Trending Stocks This Week – Forbes
Posted: at 4:23 am
Last week, our trending stock lists collected a motley crew of companies ranging from biotech to regular tech to home entertainment tech. In general, there was just a lot of tech.
For the week of May 16, many of those same stocks hit our trending roundup again, for good reason. From a 49-million-square-foot downgrade to a pilot program intended to put credit cards in the hands of the credit-less, here's an inside look at what's making the market pop.
Forbes AI Investor
Q.ai runs daily factor models to get the most up-to-date reading on stocks and ETFs. Our deep-learning algorithms use Artificial Intelligence (AI) technology to provide an in-depth, intelligence-based look at a company so you don't have to do the digging yourself.
Sign up for the free Forbes AI Investor newsletter here to join an exclusive AI investing community and get premium investing ideas before markets open.
First up in our weekly trending list is International Business Machines, which closed up 0.35% on Friday to $144.68 with 2.7 million trades on the books. The stock is up almost 15% for the year.
IBM is a repeat customer from last week, after announcing the week before that it had developed the world's first 2nm chip-making technology. Now, IBM is trending again, likely in part due to a slew of messages surrounding the future of the company's working arrangements.
CEO Arvind Krishna declared last week that IBM expects up to 80% of its 350,000 employees to opt for a hybrid office-remote work arrangement, with the remaining 20% keeping their positions as fully remote employees. And while he cautioned that nobody should make firm plans for another two or three months, he also acknowledged that these arrangements will vary around the world to account for individual countries' situations regarding the pandemic.
In anticipation of this massive workplace shift, IBM is planning to shed tens of millions of square feet of real estate. The company plans to move to the CrossPoint office complex in Lowell, Massachusetts, downsizing from almost 50 million square feet to a mere 150,608.
IBM 5-year performance
The downgrade will likely help IBM's bottom line as well as its employees' working conditions, something the company has surely considered. Although an operating income of $8.58 billion in the last fiscal year doesn't exactly put the company in hurting territory, this is a touch over half of its $13.2 billion operating income three years ago.
Plus, its mere 0.21% revenue growth, to $73.6 billion, isn't quite up to snuff compared to its revenue of nearly $79.6 billion three years ago. Its EPS has fallen in the same timeframe, too, from $9.52 to $6.23, while its ROE has declined from 50.3% to 26.4%.
Still, when you're down, there's nowhere to go but up: the company is expected to see revenue growth of 0.57% in the next twelve months. And with a forward 12-month P/E of 12.8x, its stock shows plenty of room for growth, as well.
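For readers unfamiliar with the metric, a forward 12-month P/E divides today's share price by the earnings per share analysts expect over the coming year. The quick check below uses only the figures quoted in this article; it is illustrative arithmetic, not Q.ai's methodology.

```python
price = 144.68     # IBM's Friday close, per the article
forward_pe = 12.8  # forward 12-month P/E quoted above

# A forward P/E is price / expected EPS, so the quoted multiple
# implies the market expects roughly this EPS over the next year:
implied_forward_eps = price / forward_pe
print(f"Implied forward EPS: ${implied_forward_eps:.2f}")  # about $11.30
```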
Our AI likewise sees IBM as having above-average potential overall, though it could be doing better in some areas. The company earned Cs in Technicals and Growth, and Bs in Quality Value and Low Volatility Momentum, setting it up for an optimistic future.
Netflix NFLX is a company that needs no introduction, especially seeing as how it, too, appeared on last week's trending lists. The streaming media giant remains a household name due to its invaluable services during the pandemic, and a stock staple due to its overall history of turning out profits.
Although Netflix fell 2 million subscribers short in its most recent earnings report, it still closed up almost 1.4% on Friday to $493.37 per share, ending the week on a positive note with 2.88 million trades in the bank. Still, its stock remains down 8.8% YTD.
Netflix 5-year performance
Netflix's high stock price is mostly supported by its underlying performance: revenue grew almost 5.6% in the last fiscal year and 67% over the last three, bringing total revenue to $25 billion. Of this, the company netted $4.585 billion in operating income, almost triple its $1.6 billion operating income three years ago. And its EPS has risen 35.9% in the last year alone and 208% in the last three, with per-share earnings growing from $2.68 to $6.08.
And while you might think that a company this big has nowhere to go, you'd be wrong: Netflix's revenue is expected to expand 3.33% in the next twelve months. At the same time, some analysts see the stock as overvalued (largely due to the inability to monetize its Netflix Originals via traditional methods), and its forward 12-month P/E of 47.53 appears to support this claim.
Still, our AI doubts that Netflix has reached its cap; far from it. In fact, Netflix earned an A in Growth, with Bs in Low Volatility Momentum and Quality Value, though it did net a D in Technicals. Only time will tell if Netflix is able to keep growing as pandemic restrictions loosen at last and the broader economy reopens.
Last week was a roller coaster for vaccine manufacturers after the Biden administration openly supported an initiative to waive Covid-19 vaccine patents in order to bolster global production. This announcement, as well as Moderna's MRNA announcement that it would funnel at least 34 million doses to struggling countries, saw the stock trend twice last week.
Once again, Moderna makes our list of trending stocks as shares closed up 7.7% on Friday to $161.38 on volume of 6.5 million trades. While the stock has been falling over the past month, as indicated by its 22-day price average of $167 and change, it is still up almost 54.5% YTD.
Moderna can once again thank the government for its upward trend, as shares of the biotech company rose after an announcement from the CDC that individuals who are fully vaccinated can participate in indoor and outdoor activities, large or small, without wearing a mask or physical distancing. While Moderna can't claim full credit, distributing hundreds of millions of doses of its mRNA vaccine surely helped make this long-awaited declaration possible.
And in further good news, Australia noted last week that it's in talks with Moderna to establish domestic production of messenger RNA vaccines. The company is also reportedly conferring with Samsung BioLogics Co Ltd to start production of the Moderna vaccine in South Korea (though no official decision has been made) after two expert panels recommended that Moderna's vaccine be approved for emergency use.
Moderna 5-year performance
Moderna is one of those fortunate companies that benefited from the pandemic more than it was hurt by it: the biotech company saw its revenue expand 240% in the last fiscal year to $803 million, compared to $135 million three years ago. Its operating income almost doubled in the same time frame, from $413 million to $763 million, though per-share earnings plummeted from $4.95 to $1.96.
Moderna is expected to see revenue growth of 16.2% over the next twelve months. And with a forward-facing P/E of 5.84x, there's some indication that the company may be undervalued.
However, our AI is wary of Moderna, especially after a year of such rapid, and situation-specific, growth. The company scored its highest rating, a B, in Growth, with Ds across the rest of the board in Technicals, Quality Value, and Low Volatility Momentum.
After making our trending list last week when Atlantic Equities downgraded the chipmaker to underweight, Intel INTC is back after closing up almost 2.5% to $55.35 on Friday with 28 million trades on the books. Though the stock has seen some losses in the last 10 days, it's still up 11% YTD.
Intel 5-year performance
Intel's struggles in chip manufacturing and delivery have been compounded in recent weeks by the global chip shortage, with the company still several nanometers behind in design, not to mention grappling with the results of a $2.2 billion judgment by a federal jury in March over patent infringement. Still, the company managed to deliver strong Q1 results in April, driven by exceptional product demand.
However, Intel's last year was not its best, and its numbers are roughly stagnant over the three-year timespan. Revenue is up 9.7% over the past three years, from $70.8 billion to $77.9 billion, though operating income barely ticked up, from $23.2 billion to $23.9 billion. Its EPS has risen by around $0.50 to $4.94 in per-share earnings, with ROE down to 26% from 29%. All in all, Intel is trading with a forward 12-month P/E of 12.94x.
Our AI doesn't have much faith in Intel to turn its situation around anytime soon, either. The company earned below-average ratings across the board from our artificial intelligence, with Cs in Quality Value and Low Volatility Momentum, a D in Growth, and an F in Technicals.
Wells Fargo WFC closed up 1.2% on Friday to $46.96 on volume of 17 million trades. The company is up 55.6% YTD, despite waffling between $46 and $45 on its 10- and 22-day price averages.
Wells Fargo 5-year performance
Wells Fargo has been on a massive public rehabilitation campaign since 2018, wading through scandal after lawsuit over deplorable business practices that damaged the credit of its members and tarnished its reputation (and bottom line). In the years since, despite some slip-ups, Wells Fargo has done its best to make a comeback, and with its stock earnings approaching pre-pandemic numbers at last, the company may just be about to make it.
Later this year, Wells Fargo is set to join a number of companies, including fellow financial giants US Bancorp USB and JPMorgan Chase JPM, in a government-backed pilot program to put credit cards in the hands of credit-invisible individuals. Under this program, banks will share applicant information regarding balances and overdraft histories to identify financially responsible individuals who haven't previously built credit. If the initiative proves promising, the banks may move into other types of lending for responsible credit-invisible individuals, particularly auto loans, as well.
This program gives Wells Fargo, a company that once hurt the credit of thousands of its members, a chance to redeem itself and give members a financial foot forward. That's something the company has experienced itself in the last year, as it saw a revenue increase of 9.3% to $58.3 billion (down from $84.7 billion three years ago) and operating income growth of 216% to $2.1 billion (down from $28.5 billion three years ago).
And while its EPS and ROE remain in the tank ($0.41 and 1.92%, respectively), Wells Fargo maintains a forward 12-month P/E of 14.16x.
Still, our AI remains skeptical of Wells Fargo, rating the company just below average with a D in Quality Value and Cs in Technicals, Growth, and Low Volatility Momentum.
Liked what you read? Sign up for our free Forbes AI Investor Newsletter here to get AI-driven investing ideas weekly. For a limited time, subscribers can join an exclusive Slack group to get these ideas before markets open.
Read the rest here:
Artificial Intelligence Identifies IBM And Netflix Among Trending Stocks This Week - Forbes
Posted in Artificial Intelligence
Comments Off on Artificial Intelligence Identifies IBM And Netflix Among Trending Stocks This Week – Forbes
Artificial intelligence taking over DevOps functions, survey confirms – ZDNet
Posted: at 4:23 am
The pace of software releases has only accelerated, and DevOps is the reason things have sped up. Now, artificial intelligence and machine learning are also starting to play a role in this acceleration of code releases.
That's the word from GitLab's latest survey of 4,300 developers and managers, which finds some enterprises are releasing code ten times faster than in previous surveys. Almost all respondents, 84%, say they're releasing code faster than before, and 57% say code is being released twice as fast, up from 35% a year ago. Close to one in five, 19%, say their code goes out the door ten times faster.
Tellingly, 75% are using AI/ML or bots to test and review their code before release, up from 41% just one year ago. Another 25% say they now have full test automation, up from 13%.
About 21% of survey respondents say the pace of releases has accelerated with the addition of source code management to their DevOps practice (up from 15% last year), the survey's authors add. Another 18% added continuous integration (CI) and 13% continuous delivery (CD). Nearly 12% say adding a DevOps platform has sped up the process, while just over 10% have added automated testing.
Developers' roles are shifting toward the operations side as well, the survey shows. Developers are taking on test and ops tasks, especially around cloud, infrastructure and security. At least 38% of developers said they now define or create the infrastructure their app runs on. About 13% monitor and respond to that infrastructure. At least 26% of developers said they instrument the code they've written for production monitoring -- up from just 18% last year.
Fully 43% of our survey respondents have been doing DevOps for between three and five years -- "that's the sweet spot where they've known success and are well-seasoned," the survey's authors point out. In addition, they add, "this was also the year where practitioners skipped incremental improvements and reached for the big guns: SCM, CI/CD, test automation, and a DevOps platform."
Industry leaders concur that DevOps has significantly boosted enterprise software delivery to new levels, but caution that it still tends to be seen as an IT activity, versus a broader enterprise initiative. "Just like any agile framework, DevOps requires buy-in," says Emma Gautrey, manager of development operations at Aptum. "If the development and operational teams are getting along, working in harmony, that is terrific, but it cannot amount to much if the culture stops at the metaphorical IT basement door. Without the backing of the whole of the business, continuous improvement will be confined to the internal workings of a single group."
DevOps is a commitment to quick development/deployment cycles, "enhanced by, among other things, an enhanced technical toolset -- source code management, CI/CD, orchestration," says Matthew Tiani, executive vice president at iTech AG. But it takes more than toolsets, he adds. Successful DevOps also incorporates "a compatible development methodology such as agile and scrum, and an organizational commitment to foster and encourage collaboration between development and operational staff."
The organizational aspects of DevOps tend to be more difficult, Tiani adds. "Wider adoption of DevOps within the IT services space is common because the IT process improvement goal is more intimately tied to the overall organizational goals. Larger, more established companies may find it hard to implement policies and procedures where a complex organizational structure impedes or even discourages collaboration. In order to effectively implement a DevOps program, an organization must be willing to make the financial and human investments necessary for maintaining a quick-release schedule."
What's missing from many current DevOps efforts is "the understanding and shared ownership of committing to DevOps," says Gautrey. "Speaking to the wider community, there is often a sense that the tools are the key, and that once in place a state of enlightenment is achieved. That sentiment is little different from the early days of the internet, where people would create their website once and think 'that's it, I have web presence.'"
That's where the organization as a whole needs to be engaged, and this comes to fruition "with build pipelines that turn red the moment an automated test fails, and behavioral-driven development clearly demonstrating the intentions of the software," says Gautrey. "With DevOps, there is a danger in losing interaction with individuals over the pursuit of tools and processes. Nothing is more tempting than to apply a blanket ruling over situations because it makes the automation processes consistent and therefore easier to manage. Responding to change means more than how quickly you can change 10 servers at once. Customer collaboration is key."
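The "turn red" behavior Gautrey describes rests on a simple contract: automated tests exit nonzero on failure, and the CI system marks the job as failed. Here is a minimal sketch using Python and pytest, a common choice; the function under test is hypothetical.

```python
# test_pricing.py -- run with `pytest`. Any failing assertion makes
# pytest exit nonzero, which a CI pipeline reports as a failed (red) job.

def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

def test_discount_reduces_price():
    assert apply_discount(100.00, 20) == 80.00

def test_zero_discount_is_identity():
    assert apply_discount(59.99, 0) == 59.99
```

The moment a change breaks either assertion, the pipeline fails before the code can merge, which is exactly the feedback loop Gautrey is pointing to.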
View original post here:
Artificial intelligence taking over DevOps functions, survey confirms - ZDNet
Posted in Artificial Intelligence
Comments Off on Artificial intelligence taking over DevOps functions, survey confirms – ZDNet