

Google says ‘quantum supremacy’ achieved in new age super computer – Fox Business

FOX Business' Hillary Vaughn on Google CEO Sundar Pichai and Ivanka Trump teaming up to create more IT jobs.

SAN FRANCISCO (AP) Google says it has achieved a breakthrough in quantum computing research.

It says an experimental quantum processor has completed a calculation in just a few minutes that would take a traditional supercomputer thousands of years.

The results of its study appear in the scientific journal Nature. Google says it has achieved quantum supremacy, which means the quantum computer did something a conventional computer could never do.

"For those of us working in science and technology, it's the hello world moment we've been waiting for: the most meaningful milestone to date in the quest to make quantum computing a reality," Google CEO Sundar Pichai wrote in a blog post announcing the breakthrough. "But we have a long way to go between today's lab experiments and tomorrow's practical applications; it will be many years before we can implement a broader set of real-world applications," he continued.

Competitor IBM is disputing that Google achieved the benchmark, saying Google underestimated the conventional supercomputer.

Quantum computing is an advanced computing technology that is still at a relatively early stage of development.

Link:

Google says 'quantum supremacy' achieved in new age super computer - Fox Business

Milwaukee School of Engineering Supercomputer to Support AI Education and Research – Campus Technology

High-Performance Computing

A new supercomputer at the Milwaukee School of Engineering will serve both research and instruction in artificial intelligence and deep learning for the university's computer science program. MSOE partnered with Microway to build the custom cluster, an NVIDIA DGX POD-based machine designed to support modern AI development. The DGX POD reference architecture is based on NVIDIA's DGX SATURN V AI supercomputer used for internal research and development for autonomous vehicles, robotics, graphics and more.

Technical specs include:

MSOE students will be able to access the supercomputer via web browser and "start a DGX-1 or NVIDIA T4 GPU deep learning session with the click of a button" with no need to understand command line interfaces and workload managers, according to a news announcement. "Unlike many university programs in which students' access to supercomputers is usually limited to graduate students in computer labs, this configuration gives undergraduate students at MSOE supercomputer access in the classroom, enabling training of the next AI workforce."

About the Author

Rhea Kelly is executive editor for Campus Technology. She can be reached at rkelly@1105media.com.

The rest is here:

Milwaukee School of Engineering Supercomputer to Support AI Education and Research - Campus Technology

New Cray Supercomputer Brings Advanced AI Capabilities to the High-Performance Computing Center Stuttgart – Yahoo Finance

German HPC Center Prepares for the Exascale Era; Responds to Growing Demand for Converged Solutions Combining AI and HPC

SEATTLE, Oct. 24, 2019 (GLOBE NEWSWIRE) -- Global supercomputer leader Cray, a Hewlett Packard Enterprise company (HPE), today announced that the High-Performance Computing Center of the University of Stuttgart (HLRS) in Germany has selected a new Cray CS-Storm GPU-accelerated supercomputer to advance its computing infrastructure in response to user demand for processing-intensive applications like machine learning and deep learning. The new Cray system is tailored for artificial intelligence (AI) and includes the Cray Urika-CS AI and Analytics suite, enabling HLRS to accelerate AI workloads, arm users to address complex computing problems and process more data with higher accuracy of AI models in engineering, automotive, energy, and environmental industries and academia.

"As we extend our service portfolio with AI, we require an infrastructure that can support the convergence of traditional high-performance computing applications and AI workloads to better support our users and customers," said Prof. Dr. Michael Resch, director at HLRS. "We've found success working with our current Cray Urika-GX system for data analytics, and we are now at a point where AI and deep learning have become even more important as a set of methods and workflows for the HPC community. Our researchers will use the new CS-Storm system to power AI applications to achieve much faster results and gain new insights into traditional types of simulation results."

Supercomputer users at HLRS are increasingly asking for access to systems containing AI acceleration capabilities. With the GPU-accelerated CS-Storm system and Urika-CS AI and Analytics suite, which leverages popular machine intelligence frameworks like TensorFlow and PyTorch, HLRS can provide machine learning and deep learning services to its leading teaching and training programs, global partners and R&D. The Urika-CS AI and Analytics suite includes Cray's Hyperparameter Optimization (HPO) and Cray Programming Environment Deep Learning Plugin, arming system users with the full potential of deep learning and advancing the services HLRS offers to its users interested in data analytics, machine learning and related fields.

"The future will be driven by the convergence of modeling and simulation with AI and analytics, and we're honored to be working with HLRS to further their AI initiatives by providing advanced computing technology for the Center's engineering and HPC training and research endeavors," said Peter Ungaro, president and CEO at Cray, a Hewlett Packard Enterprise company. "HLRS has the opportunity to apply AI to improve and scale data analysis for the benefit of its core research areas, such as looking at trends in industrial HPC usage, creating models of car collisions, and visualizing black holes. The Cray CS-Storm combined with the unique Cray-CS AI and Analytics suite will allow HLRS to better tackle converged AI and simulation workloads in the exascale era."

In addition to the Cray CS-Storm architecture and Cray-CS AI and Analytics suite, the system will feature NVIDIA V100 Tensor Core GPUs and Intel Xeon Scalable processors.

"The convergence of AI and scientific computing has accelerated the pace of scientific progress and is helping solve the world's most challenging problems," said Paresh Kharya, Director of Product Management and Marketing at NVIDIA. "Our work with Cray and HLRS on their new GPU-accelerated system will result in a modern HPC infrastructure that addresses the demands of the Center's research community to combine simulation with the power of AI to advance science, find cures for disease, and develop new forms of energy."

The system is scheduled for delivery to HLRS in November 2019.

About Cray Inc.

Cray, a Hewlett Packard Enterprise company, combines computation and creativity so visionaries can keep asking questions that challenge the limits of possibility. Drawing on more than 45 years of experience, Cray develops the world's most advanced supercomputers, pushing the boundaries of performance, efficiency and scalability. Cray continues to innovate today at the convergence of data and discovery, offering a comprehensive portfolio of supercomputers, high-performance storage, data analytics and artificial intelligence solutions. Go to http://www.cray.com for more information.


CRAY and Urika are registered trademarks of Cray Inc. in the United States and other countries, and CS-Storm is a trademark of Cray Inc. Other product and service names mentioned herein are the trademarks of their respective owners.

Cray Media: Diana Brodskiy, 415/306-6199, pr@cray.com

Continue reading here:

New Cray Supercomputer Brings Advanced AI Capabilities to the High-Performance Computing Center Stuttgart - Yahoo Finance

AMD CPUs Will Power UK's Next-Generation ARCHER2 Supercomputer – The Next Platform

AMD has picked up yet another big supercomputer win with the selection of its second-generation Epyc processors, aka Rome, as the compute engine for the ARCHER2 system to be installed at the University of Edinburgh next year. The UK Research and Innovation (UKRI) announced the selection earlier this week, along with additional details on the makeup of system hardware.

According to the announcement, when ARCHER2 is up and running in 2020, it will deliver a peak performance of around 28 petaflops, more than 10 times that of the UK's current ARCHER supercomputer housed at EPCC, the University of Edinburgh's supercomputing center. ARCHER, which stands for Advanced Research Computing High End Resource, has filled the role of the UK National Supercomputing Service since it came online in 2013.

The now six-year-old ARCHER is a Cray XC30 machine composed of 4,920 dual-socket nodes, powered by 12-core, 2.7 GHz Intel Xeon E5 v2 processors of the Ivy Bridge vintage, yielding a total of 118,080 cores and rated at a peak theoretical performance of 2.55 petaflops across all those nodes. Most of the nodes are outfitted with 64 GB of memory, with a handful of large-memory nodes equipped with 128 GB, yielding a total capacity of 307.5 TB. Cray's Aries XC interconnect, as the system name implies, is employed to lash together the nodes.

The upcoming ARCHER2 will also be a Cray (now owned by Hewlett Packard Enterprise) machine, in this case based on the company's Shasta platform. It will consist of 5,848 nodes laced together with the 100 Gb/sec Slingshot HPC variant of Ethernet, which is based on Cray's homegrown Rosetta switch ASIC and deployed in a 3D dragonfly topology.

Although that's only about a thousand more nodes than its predecessor, each ARCHER2 node will be equipped with two 64-core AMD Rome CPUs running at 2.25 GHz, for a grand total of 748,544 cores. It looks like ARCHER2 is not using the new Epyc 7H12 HPC variant of the Rome chip, launched in September, which has base clocks spinning at 2.6 GHz but a lower turbo boost speed of 3.3 GHz; that chip requires direct liquid cooling on the socket because it dissipates 280 watts, which cannot be moved off the CPU quickly enough by fans blowing air through the server chassis.

Even though the ARCHER2 machine will only have about six times the core count, each of those Rome cores is nearly twice as powerful as the Ivy Bridge ones in ARCHER from a peak double precision flops perspective. That's actually pretty remarkable when you consider that the nominal clock frequency on these particular Rome chips is 450 MHz slower than that of the Xeon E5 v2 counterparts in ARCHER. Having 6.3X the number of cores helps, and really, it is the only benefit we are getting out of Moore's Law. The Rome cores also have two 256-bit FMA vector units apiece, while the Ivy Bridge cores lack fused multiply-add and can retire only half as many double precision flops per clock, which accounts for the rest of the per-core performance increase.
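As a hedged sanity check on those per-core numbers, the peak math works out roughly as follows. The flops-per-cycle figures below are our own assumptions based on the public microarchitecture specs (8 DP flops/cycle for Ivy Bridge, 16 for Rome), not numbers from the announcement:

```python
# Peak double-precision flops per core = clock rate (GHz) x DP flops per cycle.
ivy_bridge_gflops = 2.7 * 8     # ARCHER: 21.6 GF per core (assumed 8 flops/cycle)
rome_gflops = 2.25 * 16         # ARCHER2: 36.0 GF per core (assumed 16 flops/cycle)

per_core_ratio = rome_gflops / ivy_bridge_gflops
print(f"Rome core vs Ivy Bridge core: {per_core_ratio:.2f}x")   # ~1.67x, "nearly twice"

# System peak: 5,848 nodes x 2 sockets x 64 cores = 748,544 cores.
archer2_peak_pf = 748_544 * rome_gflops / 1e6
print(f"ARCHER2 peak: ~{archer2_peak_pf:.1f} petaflops")        # ~26.9 PF, near the quoted 28 PF
```

The small gap against the quoted 28 petaflops presumably comes down to rounding or boost-clock assumptions in the official figure.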

ARCHER2's total system memory is 1.57 PB, which is more than five times larger than that of ARCHER, but given the 10X peak performance discrepancy, the second-generation machine will have to manage with about half the number of bytes per double-precision flop. Fortunately, those bytes are moving a lot faster now, thanks to the eight-memory-controller design of the Epyc processors. The system also has a 1.1 PB all-flash Lustre burst buffer front ending a 14.5 PB Lustre parallel disk file system to keep the data moving steadily into and out of the system. All of this will be crammed into 23 Shasta cabinets, which have water cooling in the racks.
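A quick back-of-the-envelope check of that bytes-per-flop claim, using only the capacity and peak figures quoted above:

```python
# Memory capacity per unit of peak compute (bytes per double-precision flop/s).
archer_bytes_per_flop  = 307.5e12 / 2.55e15   # ARCHER: 307.5 TB over 2.55 PF
archer2_bytes_per_flop = 1.57e15 / 28e15      # ARCHER2: 1.57 PB over 28 PF

print(f"ARCHER:  {archer_bytes_per_flop:.3f} B/flop")   # ~0.121
print(f"ARCHER2: {archer2_bytes_per_flop:.3f} B/flop")  # ~0.056, about half
```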

In fact, as we reported in August in our deep dive on the Rome architecture, these processors can deliver up to 410 GB/sec of memory bandwidth if all the DIMM slots are populated. That works out to about 45 percent more bandwidth than what can be achieved with Intel's six-channel Cascade Lake Xeon SP, a processor that can deliver a comparable number of flops.
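That 45 percent figure can be roughly reproduced from channel counts alone. The DIMM speeds below (DDR4-3200 for Rome, DDR4-2933 for Cascade Lake) are our assumptions based on each platform's top supported memory, not numbers from the vendors' statements:

```python
# Per-channel DDR4 bandwidth = transfer rate (MT/s) x 8 bytes per transfer.
rome_node_gbs    = 2 * 8 * 3200 * 8 / 1000   # 2 sockets x 8 channels x DDR4-3200
cascade_node_gbs = 2 * 6 * 2933 * 8 / 1000   # 2 sockets x 6 channels x DDR4-2933

print(f"Rome node: {rome_node_gbs:.1f} GB/s")                     # ~409.6 GB/s, i.e. "up to 410"
print(f"Advantage: {rome_node_gbs / cascade_node_gbs - 1:.0%}")   # ~45%
```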

The reason we are dwelling on this particular metric is that when we spoke with EPCC center director Mark Parsons in March, he specifically cited memory bandwidth as an important criterion for the selection of the CPU that would power ARCHER2, telling us that the better the balance between memory bandwidth and flops, the more attractive the processor.

Of course, none of these peak numbers matter much to users, who are more interested in real-world application performance. In that regard, ARCHER2 is expected to provide over 11X the application throughput as ARCHER, on average, based on five of the most heavily used codes at EPCC. Specifically, their evaluation, presumably based on early hardware, revealed the following application speedups compared to the 2.5 petaflops ARCHER:

As the announcement pointed out, that level of performance puts ARCHER2 in the upper echelons of CPU-only supercomputers. (Currently, the top CPU-powered system is the 38.7 petaflops Frontera system at the Texas Advanced Computing Center.) It should be noted that ARCHER2 will, however, include a collaboration platform with four compute nodes containing a total of 16 AMD GPUs, so technically its not a pure CPU machine.

ARCHER2 will be installed in the same machine room at EPCC as ARCHER, so when they swap machines, there will be a period without HPC service. The plan is to pull the plug on ARCHER on February 18, 2020 and have ARCHER2 up and running on May 6. Subsequent to that, the new system will undergo a 30-day stress test, during which access may be limited.

This is all good news for AMD, of course, which has been capturing HPC business at a breakneck pace over the last several months. That's largely been due to the attractive performance (and likely price-performance) offered by the Rome silicon compared to what Intel is currently offering.

Some recent notable AMD wins include a 24-petaflop supercomputer named Hawk, which is headed to the High-Performance Computing Center of the University of Stuttgart (HLRS) later this year, as well as a 7.5-petaflops system at the IT Center for Science, CSC, in Finland. Add to that a couple of large Rome-powered academic systems, including a 5.9-petaflops machine for the national Norwegian e-infrastructure provider Uninett Sigma2 and another system of the same size to be deployed at Indiana University. The US Department of Defense has jumped on the AMD bandwagon as well, with a trio of Rome-based supercomputers for the Air Force and Army.

All of these systems are expected to roll out in 2019 and 2020. And until Intel is able to counter the Rome juggernaut with its upcoming 10 nanometer Ice Lake Xeon processors in 2020, we fully expect to see AMD continue to rack up HPC wins at the expense of its larger competitor.

The ARCHER2 contract was worth £79 million, which translates to about $102 million at current exchange rates. The original ARCHER system cost £43 million, which converted to about $70 million at the time. So the ARCHER2 machine will cost about 1.46X as much and deliver 11X the peak theoretical performance over an eight-year span. First of all, that is a very long time to wait between upgrades for an HPC center, so clearly EPCC was waiting for a chance to get a really big jump in price/performance. And at 28 petaflops, ARCHER2 comes in considerably higher than the 20 petaflops to 25 petaflops that EPCC was expecting back in March when the requisition was announced.

That original ARCHER system cost around $27,450 per peak teraflops back in 2012, which was on par with all-CPU systems but considerably more expensive, on a cost per teraflops basis, than the emerging accelerated systems of the time. (We did an analysis of the cost of the highest end, upper echelon supercomputers over time back in April 2018.) The ARCHER2 system is coming in at around $3,642 per teraflops, a huge improvement of 7.5X in bang for the buck, but the US Department of Energy is going to pay another order of magnitude less, something on the order of $335 per teraflops, for the Frontier accelerated system at Oak Ridge National Laboratory and the El Capitan accelerated system at Lawrence Livermore National Laboratory when they are accepted in around 2022 and 2023. Both have AMD CPUs, and Frontier will also use AMD GPUs for compute; El Capitan has not yet decided on its GPU. The current Summit and Sierra systems at those very same labs, which mix IBM Power9 processors with Nvidia Tesla V100 GPU accelerators, cost a little more than $1,000 per teraflops.
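The bang-for-the-buck arithmetic above is easy to reproduce from the quoted dollar figures:

```python
# Cost per peak teraflops, using the dollar figures quoted in the article.
archer_cost_per_tf  = 70e6 / 2_550     # ~$27,450/TF (ARCHER: $70M, 2.55 PF)
archer2_cost_per_tf = 102e6 / 28_000   # ~$3,642/TF  (ARCHER2: $102M, 28 PF)

improvement = archer_cost_per_tf / archer2_cost_per_tf
print(f"ARCHER2: ${archer2_cost_per_tf:,.0f}/TF, {improvement:.1f}x better bang for the buck")
```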

Our point is, all-CPU systems are necessary, particularly for labs with diverse workloads, and they come at a premium compared to the systems at labs that use accelerators and have ported their codes to them.

Continued here:

AMD CPUs Will Power UK's Next-Generation ARCHER2 Supercomputer - The Next Platform

Surprising Discovery Made When Supercomputer Simulations Explore Magnetic Reconnection – SciTechDaily

Collision of two magnetized plasma plumes showing Biermann battery-mediated reconnection. Credit: Jackson Matteucci and Will Fox

Magnetic reconnection, a process in which magnetic field lines tear and come back together, releasing large amounts of kinetic energy, occurs throughout the universe. The process gives rise to auroras, solar flares and geomagnetic storms that can disrupt cell phone service and electric grids on Earth. A major challenge in the study of magnetic reconnection, however, is bridging the gap between these large-scale astrophysical scenarios and small-scale experiments that can be done in a lab.

Researchers have now overcome this barrier through a combination of clever experiments and cutting-edge simulations. In doing so, they have uncovered a previously unknown role for a universal process called the Biermann battery effect, which turns out to impact magnetic reconnection in unexpected ways.

The Biermann battery effect, a possible seed for the magnetic fields pervading our universe, generates an electric current that produces these fields. The surprise findings, made through computer simulations, show the effect can play a significant role in the reconnection occurring when the Earth's magnetosphere interacts with astrophysical plasmas. The effect first generates magnetic field lines, but then reverses roles and cuts them like scissors slicing a rubber band. The sliced fields then reconnect away from the original reconnection point.

The simulations modeled the results of experiments in China that studied high-energy-density plasmas: matter under extreme states of pressure. The experiments used lasers to blast a pair of plasma bubbles from a solid metal target. Simulations of the three-dimensional plasma (see image at the top of the page) traced the expansion of the bubbles and the magnetic fields that the Biermann effect created, tracking the collision of the fields to produce magnetic reconnection. Researchers performed these simulations on the Titan supercomputer at the U.S. Department of Energy's Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory.

"The results provide a new platform for replicating the reconnection observed in astrophysical plasmas in the laboratory," said Jackson Matteucci, a graduate student in the Plasma Physics program at the Princeton Plasma Physics Laboratory who led the research.

"By bridging the traditional gap between laboratory experiments and astrophysical processes, these results open a new chapter in efforts to understand the universe."

###

Funding provided in part by the National Defense Science and Engineering Graduate Fellowship Program.

Abstract:

3-D magnetic reconnection in laser-driven plasmas: novel modeling provides insight into laboratory and astrophysical current sheets
9:30 AM-12:30 PM, Thursday, October 24, 2019
Room: Floridian Ballroom CD

Visit link:

Surprising Discovery Made When Supercomputer Simulations Explore Magnetic Reconnection - SciTechDaily

Bitcoin and cryptocurrencies had a very bad day – TechCrunch

The price of Bitcoin and other cryptocurrencies tanked today, continuing a months-long slide that has seen the value of the digital currency slide by more than $2,000 from highs of above $10,000 earlier in the year.

Investors are still speculating about the cause of the crash, but before today cryptocurrency bulls had hoped that $8,000 would be the new floor for Bitcoin.

No longer. Today the price of Bitcoin dropped to $7,448.75, down from around $8,000 earlier in the day.

Investors aren't sure what's behind the crash, but Bitcoin's commentariat pointed to two likely culprits.

One was the underwhelming performance of Facebook's chief executive Mark Zuckerberg in testimony before Congress on the Libra cryptocurrency that his company is leading the charge to create.

However, an underwhelming performance from Zuckerberg and the potential fate of Libra, which cryptocurrency purists have scoffed at anyway, may be less concerning for the Bitcoin crowd than developments happening in Google's quantum computing research labs around the world.

Earlier today, Google declared quantum supremacy, indicating that it had solved a problem using quantum computing that a supercomputer would have taken years to solve. That's great news for theoretical physicists and quantum computing aficionados, but less good for investors who've put their faith (and billions of dollars) into a system of record whose value depends on its inability to be cracked by computing power.

When news of Google's achievement first began trickling out in late September (thanks to reporting by the Financial Times), Bitcoin experts dismissed the notion that it would cause problems for the cryptocurrency.

"We still don't even know if it's possible to scale quantum computers; quite possible that adding qbits will have an exponential cost," wrote early Bitcoin developer Peter Todd on Twitter.

The comments, flagged by CoinTelegraph, seem to indicate that the economic cost of cracking Bitcoin's cryptography is far beyond the means of even Alphabet's multibillion-dollar budgets.

Still, it has been a dark few months for cryptocurrencies, which had been steadily surging throughout the year. The real test, of course, of the viability of Bitcoin and the other cryptographically secured transaction mechanisms floating around the tech world these days is whether anyone will build viable products on their open architectures.

Aside from a few flash-in-the-pan fads, the jury is very much still out on what the verdict will be.

That uncertainty affects more than just Bitcoin, and, indeed, the rest of the market also tumbled, as Coindesk pricing charts indicate.

Read more:

Bitcoin and cryptocurrencies had a very bad day - TechCrunch

Google claims to have developed a quantum computer which is BILLIONS of times faster than our most advanced – The Sun

GOOGLE says it has developed a quantum computer billions of times faster than any other technology.

The US giant claims it took 200 seconds to carry out a task that would have taken a supercomputer around 10,000 years.
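The "billions of times" framing follows directly from those two numbers, as a quick calculation shows:

```python
# Sanity-check the "billions of times faster" claim:
# 200 seconds on the quantum chip vs an estimated 10,000 supercomputer-years.
seconds_per_year = 365.25 * 24 * 3600
speedup = 10_000 * seconds_per_year / 200
print(f"Speedup: ~{speedup:.2e}x")   # ~1.6e9, i.e. billions
```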


Experts called it a "phenomenally significant" breakthrough.

In conventional computers, a unit of information or bit can have a value of 0 or 1.

But quantum bits can be both 0 and 1, allowing multiple computations at once.
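A minimal toy illustration of that idea in Python with NumPy (purely pedagogical; this is not how Google's Sycamore chip is actually programmed):

```python
import numpy as np

# A qubit's state is a normalized vector of amplitudes over the outcomes |0> and |1>.
# In an equal superposition, both outcomes are equally likely when measured.
qubit = np.array([1.0, 1.0]) / np.sqrt(2)

probabilities = np.abs(qubit) ** 2
print(probabilities)   # [0.5 0.5] -> 50/50 chance of reading 0 or 1

# n qubits span 2**n amplitudes at once, which is where the parallelism comes from.
three_qubits = np.ones(2**3) / np.sqrt(2**3)
print(len(three_qubits))   # 8 amplitudes for just 3 qubits
```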

University College London expert Dr Ciarán Gilligan-Lee said: "It is the first baby step on a long journey where we can fully harness the power of quantum mechanics that will eclipse anything our current laptops or even supercomputers can do."

"It is a huge technological and scientific milestone."

Quantum computing promises to revolutionise the way PCs crunch data.

They could perform important work like designing super-materials, speeding up package deliveries and creating new drugs for deadly diseases at high speed.

Scientists have spent decades trying to achieve "quantum supremacy", a landmark that Google now claims to have conquered.

The phrase basically means the moment at which a quantum computer is able to do something that a classical computer can't.

The search giant worked for more than a decade to produce its own quantum computing chip, called Sycamore.

"Our machine performed the target computation in 200 seconds," Google researchers said in a blog post about the work.

"From measurements in our experiment we determined that it would take the world's fastest supercomputer 10,000 years to produce a similar output."

Google carried out its research at a lab in Santa Barbara, California.

The findings were published in the journal Nature.

Dr Luke Fleet, a senior editor at Nature, said quantum machines are still years, if not decades, away.

He said: "This breakthrough result marks the dawn of a new type of computing. It allows you to compute things faster: not just a little, but a lot faster!"

"This is transformative as people will be able to compute things that they previously thought impossible."

Follow this link:

Google claims to have developed a quantum computer which is BILLIONS of times faster than our most advanced - The Sun

First Cray Shasta System in EMEA will be the UK’s Most Powerful Supercomputer – StreetInsider.com

Read more from the original source:

First Cray Shasta System in EMEA will be the UK's Most Powerful Supercomputer - StreetInsider.com

talkSPORT Super Computer predicts where every club will finish in the 2019/20 Premier League table *October – talkSPORT.com

We're now two months into the Premier League season and, quite frankly, it's even more chaotic than we imagined.

Sure, unbeaten Liverpool and title holders Manchester City look far and away the best teams, but elsewhere we're seeing unpredictable results and performances from every team.


Newcastle United looked a smart and solid outfit in their victory over Tottenham but against Leicester City looked incapable of battling against relegation to the Championship.

Top four contenders Arsenal have looked excellent one minute and laughable the next in several of their matches, while Manchester United are also rotating between sublime and ridiculous.

It's making for interesting viewing, with teams like Leicester now seeing a real opening to break into the top six. But just how will it all play out across the season?

We booted up the talkSPORT Super Computer to find out just what is going to happen.

You can see the results and the predicted Premier League table below

Saturday is GameDay on talkSPORT and talkSPORT 2 as we become your go to destination for all the Premier League action.

We'll bring you LIVE commentary of Premier League games across all three time slots on Saturday: 12.30pm, 3pm and 5.30pm, delivering award-winning coverage to more GameDay listeners than ever.

See original here:

talkSPORT Super Computer predicts where every club will finish in the 2019/20 Premier League table *October - talkSPORT.com

India is on its way to become a supercomputer power – Quartz India

India's recent Chandrayaan-2 mission, which almost soft-landed a probe on the moon, had a palpable zeal which, as prime minister Narendra Modi pointed out, will be felt in other realms of the knowledge society.

With renewed aspirations to excel in science, engineering, and business, the time is ripe for India to invest in the infrastructure that will help achieve these goals: among them, supercomputers.

Developed, and almost-developed, countries have begun investing heavily in high-performance computing to boost their economies and tackle the next generation of social problems.

With their unique ability to simulate the real world, by processing massive amounts of data, supercomputers have made cars and planes safer, and fuel more efficient and environment friendly. They help in the extraction of new sources of oil and gas, development of alternative energy sources, and the advancement of medical science.

Supercomputers have also allowed weather forecasters to accurately predict severe storms, enabling better mitigation planning, and warning systems. They are increasingly being deployed by financial services, manufacturing and internet companies, and in vital infrastructure systems such as water supply networks, energy grids, and transportation.

Future applications of artificial intelligence (AI), running at any moderate degree of scale, will depend on supercomputing. This explanatory video brings the potential of high-performance computing (HPC) to life.

Thanks to the potential of HPC, countries like the US, China, France, Germany, Japan, and Russia have created national-level supercomputing strategies and are investing substantial resources in these programmes. These are the nations with which India has to compete in its bid to become a centre for scientific and business excellence.

Yet the Top500 list of supercomputers counts fewer than five in the country.

A pertinent question here is whether it makes economic sense for India to invest in expensive technology like supercomputers. Can't we make do with something more frugal? After all, we launched our Mangalyaan Mars Orbiter Mission with a budget of $73 million, and we almost made it to the moon's south pole, where no country has ever gone before, for less than $150 million.

India is not typically considered a pioneer or leader when it comes to adopting newer technologies. While it has the largest number of IT professionals in the world, it is a laggard in adopting innovation.

By harnessing the power of supercomputing, there is an opportunity to reverse this trend. India has reached a stage where it has the will and wherewithal to provide better lives to its citizens. It wants to enhance the impact of its welfare programmes by formulating the right schemes for the right beneficiaries in the right parts of the country. It wants to improve its prediction of cyclones and droughts and better plan infrastructure for its fast-expanding cities.

To realise these goals, India can no longer afford to ignore supercomputers. It needs the capacity to solve complex scientific problems which have real-life implications. It needs its workforce to have the skills to participate and lead in new innovations across various academic and industrial sectors.

To do all of this, a country needs the appropriate infrastructure, digital as well as physical. Case in point: China's Jiangsu province.

In the province, the supercomputer Sunway TaihuLight performs a range of tasks, including climate science, weather forecasting, and earth-system modelling to help ships avoid rough seas, help farmers improve their yields, and ensure the safety of offshore drilling. TaihuLight has already led to an increase in profits and a reduction in expenses that justify its $270 million cost.

In the US, too, supercomputers are radically transforming the healthcare system. The Centers for Disease Control and Prevention (CDC) has used supercomputers to create a far more detailed model of the hepatitis C virus, a major cause of liver disease that accounts for $9 billion in healthcare costs in the US alone.

Using supercomputers, researchers have now developed a model that comprehensively simulates the human heart down to the cellular level and could lead to a substantial reduction in heart disease, which costs the US around $200 billion each year.

On Aug. 14, 2017, the SpaceX CRS-12 rocket was launched from the Kennedy Space Center to send the Dragon spacecraft to the International Space Station (ISS) National Lab. Aboard the Dragon was a Hewlett Packard Enterprise (HPE) supercomputer, called the Spaceborne, which is part of a year-long experiment conducted by HPE and NASA to run a supercomputer system in space.

The goal is for the system to operate seamlessly in the harsh conditions of space for one year, roughly the amount of time it would take to travel to Mars.

If India truly wants to become a knowledge-driven, multi-trillion-dollar economy, one able to support cutting-edge science that benefits its economy, its society, and the businesses that operate within it, then investment in supercomputing is a necessity.

Without it, India risks being surpassed on the global stage by other nations and will consequently miss the huge benefits that come from having this vitally important technology at the disposal of India's best and brightest minds. The Modi government has big ambitions for India and supercomputing can help make them a reality.

This piece is published in collaboration with India Economic Summit. We welcome your comments at ideas.india@qz.com.

See the original post:

India is on its way to become a supercomputer power - Quartz India

Why build your own cancer-sniffing neural network when this 1.3 exaflop supercomputer can do it for you? – The Register

The world's fastest deep learning supercomputer is being used to develop algorithms that can help researchers automatically design neural networks for cancer research, according to the Oak Ridge National Laboratory.

The World Health Organisation estimates that by 2025, the number of diagnosed new cases of cancer will reach 21.5 million a year, compared to the current number of roughly 18 million. Researchers at Oak Ridge National Laboratory (ORNL) and Stony Brook University reckon that this means doctors will have to analyse about 200 million biopsy scans per year.

Neural networks could help ease their workloads, however, so that they can focus more on patient care. There have been several studies describing how computer vision models can be trained to diagnose cancerous cells in the lung or prostate. Although these systems seem promising, they're time-consuming and expensive to build.

The team at ORNL, a federally funded research facility under the US Department of Energy, wants to change that. They have developed software that automatically spits out new neural network architectures for analysing cancer scans, so that engineers don't have to spend as much time designing the models themselves.

Known as MENNDL, the Python-based framework uses an evolutionary algorithm and neural architecture search to piece together building blocks in neural networks to come up with new designs. Millions of new models can be generated within hours before the best one is chosen, according to a paper released on arXiv.
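MENNDL's code is not reproduced in the article, but the evolve-and-evaluate loop it describes can be sketched in a few lines of Python. The search space, fitness function, and mutation rule below are illustrative placeholders, not MENNDL's actual ones (the real system trains and scores each candidate network on GPUs):

```python
import random

# Toy search space: an "architecture" is a list of layer widths.
LAYER_CHOICES = [16, 32, 64, 128]

def random_architecture(depth=4):
    return [random.choice(LAYER_CHOICES) for _ in range(depth)]

def fitness(arch):
    # Placeholder score. MENNDL instead briefly trains each candidate
    # and weighs accuracy, network size, and training cost.
    return sum(arch) / (len(arch) * max(LAYER_CHOICES))

def mutate(arch):
    # Swap one randomly chosen layer for a different width.
    child = arch[:]
    child[random.randrange(len(child))] = random.choice(LAYER_CHOICES)
    return child

def evolve(generations=20, population_size=10):
    population = [random_architecture() for _ in range(population_size)]
    for _ in range(generations):
        # Keep the fitter half, refill the population with mutants.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(population_size - len(survivors))
        ]
    return max(population, key=fitness)
```

The structure is the same as the article describes: generate many candidate designs, score them, and keep evolving the set toward the best one.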

"The end result is a convolutional neural architecture that can look for seven different types of cancers within a pathology image," Robert Patton, first author of the study and a researcher at ORNL, told The Register.

The software is computationally intensive to run and requires a deep learning supercomputer like Summit. The ORNL supercomputer contains 4,608 nodes, where each one contains two IBM POWER9 CPUs and six Nvidia Volta GPUs. MENNDL can achieve 1.3 exaflops (an exaflop is a quintillion, or 10^18, floating point operations per second) when the code is running at mixed precision on a total of 9,216 CPUs and 27,648 GPUs.
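The hardware figures quoted above are easy to sanity-check. Assuming the article's node counts, the CPU/GPU totals and the implied per-GPU throughput work out as follows:

```python
# Summit's configuration, as stated in the article.
nodes = 4608
cpus_per_node = 2   # IBM POWER9
gpus_per_node = 6   # NVIDIA Volta

total_cpus = nodes * cpus_per_node   # 9,216
total_gpus = nodes * gpus_per_node   # 27,648

# 1.3 exaflops = 1.3 * 10**18 operations per second, so each GPU
# contributes on the order of 47 teraflops at mixed precision.
per_gpu_tflops = 1.3e18 / total_gpus / 1e12
```

Both totals match the numbers in the article, and ~47 mixed-precision teraflops per GPU is consistent with the article's "1.3 exaflops" claim.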

Although millions of potential architectures are created, the best one is chosen based on the neural networks size, how computationally intensive it is to train, and its accuracy at detecting tumors in medical scans.

The images in the training dataset are split into patches; 86,000 patches were manually annotated to classify the tumors, where 64,381 patches contained benign cells and 21,773 contained cancerous ones. The images cover seven different cancer types: breast, colon, lung, pancreas, prostate, skin, and pelvic.

"The seven different cancer types are considered to be a single data set. As a result, MENNDL starts with some initial set of architectures, and then evolves that set toward a single network architecture that is capable of identifying seven different types," said Patton.

The winning model achieved an accuracy score of 0.839, where 1 is the perfect score, and could zip through 7,033 patches per second. For comparison, a hand-designed convolutional neural network known as Inception is slightly more accurate at 0.899 but can only analyse 433 patches per second.

"Currently, the best networks were still too slow, creating a backlog of images that needed to be analyzed. Using MENNDL, a network was created that was 16x faster and capable of keeping up with the image generation so that no backlog would be created," said Patton.

In other words, the one built by MENNDL has a comparable performance to a hand-built design and can process cancer scans at a much faster rate. The researchers believe the network can bring the rate of image analysis up to the speed of the rate of image collection.
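The 16x figure follows directly from the throughput numbers reported earlier in the article:

```python
# Patch-processing rates reported in the article.
menndl_patches_per_sec = 7033      # MENNDL-designed network
inception_patches_per_sec = 433    # hand-designed Inception baseline

speedup = menndl_patches_per_sec / inception_patches_per_sec
# ~16.2x, consistent with the claimed 16x speedup, at a modest
# accuracy cost (0.839 vs 0.899).
accuracy_gap = 0.899 - 0.839
```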

The software is still a proof of concept, however. "It is important to note that the goal of MENNDL is not to produce a fully trained model - a network that can immediately be deployed on a particular problem - but to optimize the network design to perform well on a particular dataset. The resulting network design can then be trained for a longer period of time on the dataset to produce a fully trained model," the paper said.

"Our goal with MENNDL is to not only create novel neural architectures for different applications but also to create datasets of neural architectures that could be studied to better understand how neural structures differ from one application to the next. This would give AI researchers greater insights into how neural networks operate," Scott concluded.



Luxembourg’s ‘Meluxina’ Supercomputer Project to be Overseen by LuxProvide SA – HPCwire

Sept. 26, 2019 — Luxembourg is acquiring a supercomputer called Meluxina, which will be co-financed by European funds and will join the European network of EuroHPC supercomputers. Based on the business plan for the installation of this High Performance Computing (HPC) infrastructure prepared by the Ministry of the Economy and LuxConnect, LuxProvide SA was recently created to handle the acquisition, launch, and operation of Meluxina. The company is a subsidiary of LuxConnect and is headquartered in Bissen.

In addition to implementing the 10-petaflops supercomputer, LuxProvide SA will also carry out the various activities related to this high-performance computing capability and the provision of related services, in particular in terms of broadband connectivity and applications. Ultimately employing up to 50 people, LuxProvide also aims to facilitate access to Meluxina's capabilities by setting up a skills centre to guide and support companies in their high-performance computing projects.

Meluxina will focus on the needs of its users, including companies and players in the Luxembourg economy, with particular emphasis on the use by SMEs and start-ups as well as on applications in the context of research, personalized medicine and eHealth projects.

LuxProvide will install Meluxina in LuxConnect's DC2 data center in Bissen, which is powered by green energy sourced in part from Kiowatt, a cogeneration power plant fueled by waste wood. The computing power of Meluxina will be 10 petaflops, which corresponds to 10,000,000,000,000,000 calculation operations per second.
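The figure quoted is just unit arithmetic: one petaflops is 10^15 floating point operations per second, so ten petaflops is 10^16.

```python
# 1 petaflops = 10**15 floating point operations per second.
PETAFLOPS = 10**15

meluxina_ops_per_second = 10 * PETAFLOPS
# Matches the 10,000,000,000,000,000 operations per second stated above.
```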

The Luxembourg supercomputer Meluxina is a key element of the data-driven innovation strategy of the Ministry of the Economy, which aims to develop a sustainable and reliable digital economy and supports the digital transition of the economy by facilitating competitiveness and business innovation in an increasingly digital world.


Source: Ministry of Economy, Luxembourg


Rugby World Cup predictions: Super Computer predicts results for every match in Japan – Express

Eddie Jones is hoping he can lead England to glory in Japan, the team he coached at the last World Cup in England four years ago, as he looks to challenge the likes of New Zealand and South Africa.

England are third favourites with most bookmakers with holders New Zealand heavy favourites to retain their crown.

The action is underway with the hosts beating Russia in the first game to kick the tournament off, and QBE Business Insurance have run the numbers to predict how things will turn out.

And it's good news for the home nations, who should all qualify from their pools.

Japan 40-12 Russia

Australia 36-11 Fiji

France 22-20 Argentina

New Zealand 28-17 South Africa

Italy 36-11 Namibia

Ireland 23-16 Scotland

England 33-11 Tonga

Wales 30-9 Georgia

Russia 16-22 Samoa

Fiji 35-12 Uruguay

Italy 29-12 Canada

England 45-9 USA

Argentina 29-19 Tonga

Japan 21-37 Ireland

South Africa 72-0 Namibia


Russian Nuclear Engineer Fined for Trying to Mine Bitcoin on One of the Country’s Most Powerful Supercomputers – Newsweek

A Russian scientist has been fined the equivalent of $7,000 for using a supercomputer inside a secretive nuclear facility to mine for bitcoin cryptocurrency.

Denis Baykov, an employee of the Federal Nuclear Center in Sarov, was fined 450,000 rubles on September 17 after being found guilty of violating the lab's internal computer policies, RIA Novosti reported via The Moscow Times, citing a ruling published by the city court.


Two additional staff members, Andrei Rybkin and Andrei Shatokhin, are still facing legal action. The employees were charged with unlawful access to computer information and using unauthorized computer software, RIA Novosti reported.

Bitcoin, the most popular type of cryptocurrency, is created through a computationally intensive process known as mining, which consumes a great deal of energy.
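For readers unfamiliar with mining: it amounts to a brute-force search for a nonce whose hash meets a difficulty target, which is why raw computing power (and hence a supercomputer) is attractive to miners. A toy Python version, much simplified from real Bitcoin (which uses double SHA-256 over block headers and a far stricter target), might look like:

```python
import hashlib

def mine(block_data: bytes, difficulty: int = 2) -> int:
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` zero bytes. This is a toy stand-in for Bitcoin's
    proof-of-work; the real protocol is considerably more involved."""
    prefix = b"\x00" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(
            block_data + nonce.to_bytes(8, "big")
        ).digest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1
```

Each extra zero byte of difficulty multiplies the expected search work by 256, which is why mining at real-network difficulty consumes so much energy.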

News of the arrests came to light in February 2018, when the Interfax news agency reported that security at the nuclear facility was alerted to the illicit mining activity. According to the BBC, the scientists raised a red flag by connecting the computer to the internet. "There was an attempt at unauthorized use of office computing power for personal purposes, including for the so-called mining," the institute said in a statement at the time.

Alexei Korolev, the lawyer for one of the defendants, told state media outlet RT that the engineers developed a special program that was supposed to keep their activities undetected. He said they managed to mine some bitcoin, but the exact amount was not immediately clear.

Korolev confirmed the nuclear scientists had pleaded guilty after their arrest. "They regret what they did," he noted. "But I think they went for it out of professional interest, not for the purpose of profit."

According to RT, the hearing date for Rybkin and Shatokhin has not yet been scheduled, but the case was received by the city court on September 11.

RT is a news outlet financed by the Russian government. The Sarov lab, founded in 1946, was responsible for producing the first Soviet nuclear weapon, The Moscow Times reported. The lab houses a supercomputer capable of conducting 1,000 trillion calculations per second.

In August, employees of a power plant in Ukraine exposed secret information after installing cryptocurrency mining rigs into the network, the website SecurityWeek reported at the time.

The Security Service of Ukraine found staffers of the South Ukraine Nuclear Power Station had been using the plant's systems to power their mining devices, but they appeared to have aided the leak of classified data after the equipment was linked up to the internet. Typically, critical computer networks can be isolated from the internet, or "air-gapped," for security purposes.

Read more here:

Russian Nuclear Engineer Fined for Trying to Mine Bitcoin on One of the Country's Most Powerful Supercomputers - Newsweek

iPhone 11 Cinematography: The 5 Breakthroughs of the New Camera, Explained – IndieWire

Despite major annual updates, progress can be incremental in the world of iPhone cinematography and photography. And Apple events feature an avalanche of impressive specs and gimmicky features geared toward making consumers feel like the latest and greatest will make them a professional shooter.

To get past the hype, IndieWire spoke with Filmic Pro CTO Chris Cohen. He shared the stage with filmmaker Sean Baker at the big Apple unveiling, and it's Cohen's app that allows every serious filmmaker, from Baker to Steven Soderbergh, to use the iPhone like a professional camera. We also talked to the iPhone experts at Moment, a five-year-old company that creates apps and tools for professional iPhone shooters.

Here are the five actual breakthrough camera advances in the iPhone 11 that should have filmmakers excited.

iPhone 11 Ultra Wide Lens

screenshot

1. The Ultra Wide Lens

If you've ever shot anything on an iPhone, you'll notice that switching from photo to video mode tightens the image to create a more limited field of view. To widen that view, filmmakers rely on third-party lens attachments: Soderbergh used Moment's 18mm on Unsane, Baker the anamorphic Moondog lens on Tangerine. With the new iPhone 11, Apple's Ultra Wide lens solves this problem.


"It looks to sit right around a 13mm," said Caleb Babcock, chief content creator at Moment. "Which is perfect, because any wider on the iPhone and you start to get that fish-eye look."

Director Rian Johnson (Star Wars: The Last Jedi, Knives Out) experimented with an early iPhone 11 Pro. He shot footage in Paris (shared below). It features some of the first shots we've seen from the new ultra wide lens, which he tweeted was "a real game changer." The optics look solid, while being, as Babcock speculated, right on that edge of being too wide.

iPhone cinematography will likely continue to be most effective when shooting subjects who are relatively close to the iPhone. The camera still lacks the ability to capture detail for images with too much scope, which makes the ability to get wider and see more in intimate situations an incredibly important feature.

2. The Selfie Camera

Until the iPhone 11, the user-facing camera commonly used for FaceTime and selfies has not been a pro tool, lacking the optics and sensor of the back-facing lenses.

"We've always discouraged it to our users," said Cohen. "We've even had internal conversations about whether we should even let users use the front-facing lens, because the quality was just poor."

Apple's user-facing camera is now TrueDepth, and represents one of the most significant upgrades made to its camera system. The camera is now 12 megapixels, has a significantly wider lens, and can capture 4K at up to 60 frames per second. Cohen, who got early access to the camera in order to build the new software used in Baker's demo with jazz musicians, said everyone at Filmic Pro was blown away by the massive upgrade, adding, "It's a worthy addition to the lens kit now."

Here's why this matters:

iPhone 11 shot-reverse-shot using the upgraded user-facing camera

screenshot

3. Shot Reverse Shot

Much attention has been placed on the iPhone 11's ability to simultaneously record two video streams from the back-facing cameras, a great feature for photographers, less so for filmmakers. To seamlessly cut together multi-camera coverage and avoid jump cuts, the two shots need both a different image size (which the iPhone can now do) and a change of angle (which the iPhone still can't do).

One of the only ways to make two such shots cut together is a straight-ahead, perfectly centered symmetrical frame, think Stanley Kubrick or Wes Anderson. So while those real-world applications are limited, there's a lot more potential in the new shot-reverse-shot capabilities.

"As a filmmaker, there's some really practical use cases for it," said Babcock. "If someone wanted to record a podcast, you're sitting across the desk from someone, one camera in the middle, and you're getting both angles. That goes for documentary use as well."

In fact, when Apple first invited Filmic Pro in to look at the technology and asked how they could best represent its capabilities to users, Cohen and his team suggested an interview demo.

"That was the first version of the pitch: a news reporter conducting an interview, with shot-reverse-shot, and in the end they wanted something more artsy," said Cohen. "But that's how we envisioned this feature. We wanted to empower storytellers, and those will be our early adopters with this feature."

Director Sean Baker and Filmic Pro CTO Chris Cohen at the Apple Event unveiling the iPhone 11

Screenshot

4. Camera + Super Computer

Smartphone companies love to hype the power of their newest processing chips, and eye rolls from the software engineers usually follow. "We always joke, 'Great, all this power, I wonder how fast this will throttle. 30 seconds? 40 seconds?'" said Cohen. "Because even though there is a lot of peak performance on tap with the processors Apple has been making, they're sandwiched between two pieces of glass, so for a high-performance application like Filmic Pro that has a computational imaging pipeline, we can only really tap into about 30 percent of that maximum potential before the system fails."

However, the new A13 chips in all iPhone 11s are another matter. At one point, while building the demo app using an iPhone 11 prototype, Cohen's Filmic team had six composites showing at once. "This thing wasn't even getting hot to the touch," said Cohen. "It's a breakthrough in terms of sustained performance, and that's going to have huge implications for what we do."

Phil Pasqual, the head of Moment's app team, agrees. "These phones are extremely powerful and the benchmarks on the chips in them are not far off from a laptop computer," said Pasqual. "You're basically pairing a camera with a supercomputer."

Pasqual said the camera's ability to take multiple photos simultaneously, combined with an algorithm that can merge them intelligently and in real time, is a paradigm shift. "The next two years are going to be very interesting," said Cohen. "You're going to see things with real-time imaging software that's going to blow you away."

An important iPhone professional advance of the last two years was Filmic Pro's Log V2. This gave cinematographers the ability to record video images that preserved maximum dynamic range information, simulating the process of recording in Log or Raw on professional cameras. These images could then be accessed in a professional post-production color grade setting.

"I would say Log V2 was as far as we could push it in terms of previous versions of software," said Cohen. "Now, our heads are spinning. We have a lot of things on the road map that we weren't planning to get to for the next two or three years. Now we are seriously considering fast-tracking them, because the sustained performance is so good."

Apple's iPhone 11

screenshot

5. It's Not Just the iPhone 11 Pro

For professional cinematographers, the focus has been on the most expensive Pro model. However, most of the camera advances are in all the new iPhone 11 models. The Pro does have the third telephoto lens in back, extra battery power, and a matte finish. Most importantly, all iPhone 11s have the A13 chip, the ultra wide lens, the upgraded user-facing camera, and the newest capture sensors, which increase the native dynamic range of the iPhone.

"Apple, to their credit, could have arbitrarily made the Pro artificially superior to the other ones, but they did not do that," said Cohen.

The Local Tone Mapping Problem: Soderbergh and others have pleaded with Apple to fix, or at least allow the ability to turn off, the iPhone's local tone mapping that can adjust the exposure of a portion of the frame in the middle of a shot. It would appear that issue will become more manageable with the iPhone 11.

"I'm not in a position to speak for Apple," said Cohen. "What I am going to say is that issue looks like it... I'm going to use my words carefully here... I don't think it'll be such a problem."

When Can We Expect the New Filmic Pro App?: "We have never been beholden to hard deadlines because of our internal process," said Cohen. "We give early access to filmmakers and educators and, with their feedback, we go to market or we may re-tool. We're just saying the end of the year. That said, we do reserve the right to go beyond that if parts of the user experience need to improve."

And will some of the features shown with Baker at the Apple launch event be accessible, through updates, before then? Cohen declined to answer.

Is a Composite Zoom Through All Three Pro Lenses Possible?: "It's possible to zoom through all the focal lengths using a combination of digital zoom and lens switching," said Cohen. "It comes with some caveats. Switching between lenses, you are going to have different effective apertures. You're also going to have different characteristics of lens compression. If you were to do, let's call it a composite, multi-cam zoom, you wouldn't notice it if the zoom was relatively fast, but you would notice it if it was very, very slow."



atheism | Definition, Philosophy, & Comparison to …

Atheism, in general, the critique and denial of metaphysical beliefs in God or spiritual beings. As such, it is usually distinguished from theism, which affirms the reality of the divine and often seeks to demonstrate its existence. Atheism is also distinguished from agnosticism, which leaves open the question whether there is a god or not, professing to find the questions unanswered or unanswerable.

The dialectic of the argument between forms of belief and unbelief raises questions concerning the most perspicuous delineation, or characterization, of atheism, agnosticism, and theism. It is necessary not only to probe the warrant for atheism but also carefully to consider what is the most adequate definition of atheism. This article will start with what have been some widely accepted, but still in various ways mistaken or misleading, definitions of atheism and move to more adequate formulations that better capture the full range of atheist thought and more clearly separate unbelief from belief and atheism from agnosticism. In the course of this delineation the section also will consider key arguments for and against atheism.

A central, common core of Judaism, Christianity, and Islam is the affirmation of the reality of one, and only one, God. Adherents of these faiths believe that there is a God who created the universe out of nothing and who has absolute sovereignty over all his creation; this includes, of course, human beings, who are not only utterly dependent on this creative power but also sinful and who, or so the faithful must believe, can only make adequate sense of their lives by accepting, without question, God's ordinances for them. The varieties of atheism are numerous, but all atheists reject such a set of beliefs.

Atheism, however, casts a wider net and rejects all belief in spiritual beings, and to the extent that belief in spiritual beings is definitive of what it means for a system to be religious, atheism rejects religion. So atheism is not only a rejection of the central conceptions of Judaism, Christianity, and Islam; it is, as well, a rejection of the religious beliefs of such African religions as that of the Dinka and the Nuer, of the anthropomorphic gods of classical Greece and Rome, and of the transcendental conceptions of Hinduism and Buddhism. Generally atheism is a denial of God or of the gods, and if religion is defined in terms of belief in spiritual beings, then atheism is the rejection of all religious belief.

It is necessary, however, if a tolerably adequate understanding of atheism is to be achieved, to give a reading to "rejection of religious belief" and to come to realize how the characterization of atheism as the denial of God or the gods is inadequate.

To say that atheism is the denial of God or the gods and that it is the opposite of theism, a system of belief that affirms the reality of God and seeks to demonstrate his existence, is inadequate in a number of ways. First, not all theologians who regard themselves as defenders of the Christian faith or of Judaism or Islam regard themselves as defenders of theism. The influential 20th-century Protestant theologian Paul Tillich, for example, regards the God of theism as an idol and refuses to construe God as a being, even a supreme being, among beings or as an infinite being above finite beings. God, for him, is being-itself, the ground of being and meaning. The particulars of Tillich's view are in certain ways idiosyncratic, as well as being obscure and problematic, but they have been influential; and his rejection of theism, while retaining a belief in God, is not eccentric in contemporary theology, though it may very well affront the plain believer.

Second, and more important, it is not the case that all theists seek to demonstrate or even in any way rationally to establish the existence of God. Many theists regard such a demonstration as impossible, and fideistic believers (e.g., Johann Hamann and Søren Kierkegaard) regard such a demonstration, even if it were possible, as undesirable, for in their view it would undermine faith. If it could be proved, or known for certain, that God exists, people would not be in a position to accept him as their sovereign Lord humbly on faith with all the risks that entails. There are theologians who have argued that for genuine faith to be possible God must necessarily be a hidden God, the mysterious ultimate reality, whose existence and authority must be accepted simply on faith. This fideistic view has not, of course, gone without challenge from inside the major faiths, but it is of sufficient importance to make the above characterization of atheism inadequate.

Finally, and most important, not all denials of God are denials of his existence. Believers sometimes deny God while not being at all in a state of doubt that God exists. They either willfully reject what they take to be his authority by not acting in accordance with what they take to be his will, or else they simply live their lives as if God did not exist. In this important way they deny him. Such deniers are not atheists (unless we wish, misleadingly, to call them "practical atheists"). They are not even agnostics. They do not question that God exists; they deny him in other ways. An atheist denies the existence of God. As it is frequently said, atheists believe that it is false that God exists, or that God's existence is a speculative hypothesis of an extremely low order of probability.

Yet it remains the case that such a characterization of atheism is inadequate in other ways. For one it is too narrow. There are atheists who believe that the very concept of God, at least in developed and less anthropomorphic forms of Judeo-Christianity and Islam, is so incoherent that certain central religious claims, such as "God is my creator to whom everything is owed," are not genuine truth-claims; i.e., the claims could not be either true or false. Believers hold that such religious propositions are true, some atheists believe that they are false, and there are agnostics who cannot make up their minds whether to believe that they are true or false. (Agnostics think that the propositions are one or the other but believe that it is not possible to determine which.) But all three are mistaken, some atheists argue, for such putative truth-claims are not sufficiently intelligible to be genuine truth-claims that are either true or false. In reality there is nothing in them to be believed or disbelieved, though there is for the believer the powerful and humanly comforting illusion that there is. Such an atheism, it should be added, rooted for some conceptions of God in considerations about intelligibility and what it makes sense to say, has been strongly resisted by some pragmatists and logical empiricists.

While the above considerations about atheism and intelligibility show the second characterization of atheism to be too narrow, it is also the case that this characterization is in a way too broad. For there are fideistic believers, who quite unequivocally believe that when looked at objectively the proposition that God exists has a very low probability weight. They believe in God not because it is probable that he exists, they think it more probable that he does not, but because belief is thought by them to be necessary to make sense of human life. The second characterization of atheism does not distinguish a fideistic believer (a Blaise Pascal or a Søren Kierkegaard) or an agnostic (a T.H. Huxley or a Sir Leslie Stephen) from an atheist such as Baron d'Holbach. All believe that "there is a God" and "God protects humankind," however emotionally important they may be, are speculative hypotheses of an extremely low order of probability. But this, since it does not distinguish believers from nonbelievers and does not distinguish agnostics from atheists, cannot be an adequate characterization of atheism.

It may be retorted that to avoid apriorism and dogmatic atheism the existence of God should be regarded as a hypothesis. There are no ontological (purely a priori) proofs or disproofs of God's existence. It is not reasonable to rule in advance that it makes no sense to say that God exists. What the atheist can reasonably claim is that there is no evidence that there is a God, and against that background he may very well be justified in asserting that there is no God. It has been argued, however, that it is simply dogmatic for an atheist to assert that no possible evidence could ever give one grounds for believing in God. Instead, atheists should justify their unbelief by showing (if they can) how the assertion is well-taken that there is no evidence that would warrant a belief in God. If atheism is justified, the atheist will have shown that in fact there is no adequate evidence for the belief that God exists, but it should not be part of his task to try to show that there could not be any evidence for the existence of God. If the atheist could somehow survive the death of his present body (assuming that such talk makes sense) and come, much to his surprise, to stand in the presence of God, his answer should be, "Oh! Lord, you didn't give me enough evidence!" He would have been mistaken, and realize that he had been mistaken, in his judgment that God did not exist. Still, he would not have been unjustified, in the light of the evidence available to him during his earthly life, in believing as he did. Not having any such postmortem experiences of the presence of God (assuming that he could have them), what he should say, as things stand and in the face of the evidence he actually has and is likely to be able to get, is that it is false that God exists. (Every time one legitimately asserts that a proposition is false one need not be certain that it is false. "Knowing with certainty" is not a pleonasm.) The claim is that this tentative posture is the reasonable position for the atheist to take.

An atheist who argues in this manner may also make a distinctive burden-of-proof argument. Given that God (if there is one) is by definition a very recherché reality, a reality that must be (for there to be such a reality) transcendent to the world, the burden of proof is not on the atheist to give grounds for believing that there is no reality of that order. Rather, the burden of proof is on the believer to give some evidence for God's existence, i.e., that there is such a reality. Given what God must be, if there is a God, the theist needs to present the evidence for such a very strange reality. He needs to show that there is more in the world than is disclosed by common experience. The empirical method, and the empirical method alone, such an atheist asserts, affords a reliable method for establishing what is in fact the case. To the claim of the theist that there are, in addition to varieties of empirical facts, spiritual facts or transcendent facts, such as it being the case that there is a supernatural, self-existent, eternal power, the atheist can assert that such facts have not been shown.

It will, however, be argued by such atheists, against what they take to be dogmatic aprioristic atheists, that the atheist should be a fallibilist and remain open-minded about what the future may bring. There may, after all, be such transcendent facts, such metaphysical realities. It is not that such a fallibilistic atheist is really an agnostic who believes that he is not justified in either asserting that God exists or denying that he exists and that what he must reasonably do is suspend belief. On the contrary, such an atheist believes that he has very good grounds indeed, as things stand, for denying the existence of God. But he will, on the second conceptualization of what it is to be an atheist, not deny that things could be otherwise and that, if they were, he would be justified in believing in God or at least would no longer be justified in asserting that it is false that there is a God. Using reliable empirical techniques, proven methods for establishing matters of fact, the fallibilistic atheist has found nothing in the universe to make a belief that God exists justifiable or even, everything considered, the most rational option of the various options. He therefore draws the atheistical conclusion (also keeping in mind his burden-of-proof argument) that God does not exist. But he does not dogmatically in a priori fashion deny the existence of God. He remains a thorough and consistent fallibilist.

Such a form of atheism (the atheism of those pragmatists who are also naturalistic humanists), though less inadequate than the first formulation of atheism, is still inadequate. God in developed forms of Judaism, Christianity, and Islam is not, like Zeus or Odin, construed in a relatively plain anthropomorphic way. Nothing that could count as God in such religions could possibly be observed, literally encountered, or detected in the universe. God, in such a conception, is utterly transcendent to the world; he is conceived of as pure spirit, an infinite individual who created the universe out of nothing and who is distinct from the universe. Such a reality (a reality that is taken to be an ultimate mystery) could not be identified as objects or processes in the universe can be identified. There can be no pointing at or to God, no ostensive teaching of God, to show what is meant. The word "God" can only be taught intralinguistically. "God" is taught to someone who does not understand what the word means by the use of descriptions such as "the maker of the universe," "the eternal, utterly independent being upon whom all other beings depend," "the first cause," "the sole ultimate reality," or "a self-caused being." For someone who does not understand such descriptions, there can be no understanding of the concept of God. But the key terms of such descriptions are themselves no more capable of ostensive definition (of having their referents pointed out) than is "God," where that term is not, like "Zeus," construed anthropomorphically. (That does not mean that anyone has actually pointed to Zeus or observed Zeus but that one knows what it would be like to do so.)

In coming to understand what is meant by "God" in such discourses, it must be understood that God, whatever else he is, is a being that could not possibly be seen or observed in any other way. He could not be anything material or empirical, and he is said by believers to be an intractable mystery. A nonmysterious God would not be the God of Judaism, Christianity, and Islam.

This, in effect, makes it a mistake to claim that the existence of God can rightly be treated as a hypothesis and makes it a mistake to claim that, by the use of the experimental method or some other determinate empirical method, the existence of God can be confirmed or disconfirmed as can the existence of an empirical reality. The retort made by some atheists, who like the pragmatists remain thoroughgoing fallibilists, is that such a proposed way of coming to know, or failing to come to know, God makes no sense for anyone who understands what kind of reality God is supposed to be. Anything whose existence could be so verified would not be the God of Judeo-Christianity. God could not be a reality whose presence is even faintly adumbrated in experience, for anything that could even count as the God of Judeo-Christianity must be transcendent to the world. Anything that could actually be encountered or experienced could not be God.

At the very heart of a religion such as Christianity there stands a metaphysical belief in a reality that is alleged to transcend the empirical world. It is the metaphysical belief that there is an eternal, ever-present creative source and sustainer of the universe. The problem is how it is possible to know or reasonably believe that such a reality exists or even to understand what such talk is about.

It is not that God is like a theoretical entity in physics such as a proton or a neutrino. They are, where they are construed as realities rather than as heuristically useful conceptual fictions, thought to be part of the actual furniture of the universe. They are not said to be transcendent to the universe, but rather are invisible entities in the universe logically on a par with specks of dust and grains of sand, only much, much smaller. They are on the same continuum; they are not a different kind of reality. It is only the case that they, as a matter of fact, cannot be seen. Indeed no one has an understanding of what it would be like to see a proton or a neutrino (in that way they are like God), and no provision is made in physical theory for seeing them. Still, there is no logical ban on seeing them as there is on seeing God. They are among the things in the universe, and thus, though they are invisible, they can be postulated as causes of things that are seen. Since this is so it becomes at least logically possible indirectly to verify by empirical methods the existence of such realities. It is also the case that there is no logical ban on establishing what is necessary to establish a causal connection, namely a constant conjunction of two discrete empirical realities. But no such constant conjunction can be established or even intelligibly asserted between God and the universe, and thus the existence of God is not even indirectly verifiable. God is not a discrete empirical thing or being, and the universe is not a gigantic thing or process over and above the things and processes in the universe of which it makes sense to say that the universe has or had a cause. But then there is no way, directly or indirectly, that even the probability that there is a God could be empirically established.

Read the original:

atheism | Definition, Philosophy, & Comparison to ...

Atheism | CARM.org

Atheism is a lack of belief in any god or deities as well as a total denial of the existence of any god. It is a growing movement that is becoming more aggressive, more demanding, and less tolerant of anything other than itself, as exemplified by its adherents. Is atheism a sound philosophical system as a worldview or is it ultimately self-defeating? Is the requirement of empirical evidence for God a mistake in logic or is it a fair demand? Can we prove that God exists or is that impossible? Find out more about atheism, its arguments, and its problems here at CARM. Learn how to deal with the arguments raised against the existence of God that seek to replace Him with naturalism, materialism, and moral relativism.


Atheism – Simple English Wikipedia, the free encyclopedia

Atheism is rejecting the belief in a god or gods. It is the opposite of theism, which is the belief that at least one god exists. A person who rejects belief in gods is called an atheist. Theism is the belief in one or more gods. Adding an a, meaning "without", before the word theism results in atheism, or literally, "without theism". Atheism is not the same as agnosticism: agnostics say that ...


atheism r/atheism – reddit: the front page of the internet

This happened around last year when they just found out that I was an atheist. My parents sat down with me (and for some reason they roped my brother in too) to kinda talk it out, the why and how and all that.

So my father was talking about how god had blessed him and his family with a luxurious and comfortable life. I, thinking that my parents would hear me out since they went out of their way just to talk about religion with us, told them that I believed they worked hard and earned the money themselves.

Surprisingly enough, my father immediately blew his top and yelled at me, insisting that it was by god's grace that we are now able to live such a good life. He then, for some reason, told me that my ability to draw was a god-given talent. Naturally, I was pissed. After all, I went to years and years of art class just to be able to draw like I do now, though it only looks nice by my family's standards since I'm the only one in my family who can draw. But I didn't say anything back since I don't want to start another war with my parents.

Seriously, if it really was just god's grace that allowed my family to live comfortably, why have I never seen god just bestow a paycheck upon my father? Why is it that he's so happy about having all his hard work credited to an invisible sky daddy? Call me greedy or selfish, but if someone took all the credit for my hard work I'd be bloody pissed. But hey, thanks for reading this.


Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More

Ripple vs SWIFT: The War Begins
While most criticisms of XRP do nothing to curb my bullish Ripple price forecast, there is one obstacle that nags at my conscience. Its name is SWIFT.

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is the king of international payments.

It coordinates wire transfers across 11,000 banks in more than 200 countries and territories, meaning that in order for XRP prices to ascend to $10.00, Ripple needs to launch a successful coup. That is, and always has been, an unwritten part of Ripple’s story.

We’ve seen a lot of progress on that score. In the last three years, Ripple wooed more than 100 financial firms onto its.

The post Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More appeared first on Profit Confidential.

