Cleveland Clinic and IBM Begin Installation of IBM Quantum System One – Cleveland Clinic Newsroom

Cleveland Clinic and IBM have begun deployment of the first private-sector, onsite, IBM-managed quantum computer in the United States. The IBM Quantum System One is to be located on Cleveland Clinic's main campus in Cleveland.

The first quantum computer in healthcare, anticipated to be completed in early 2023, is a key part of the two organizations' 10-year partnership aimed at fundamentally advancing the pace of biomedical research through high-performance computing. Announced in 2021, the Cleveland Clinic-IBM Discovery Accelerator is a joint center that leverages Cleveland Clinic's medical expertise with the technology expertise of IBM, including its leadership in quantum computing.

"The current pace of scientific discovery is unacceptably slow, while our research needs are growing exponentially," said Lara Jehi, M.D., Cleveland Clinic's Chief Research Information Officer. "We cannot afford to continue to spend a decade or more going from a research idea in a lab to therapies on the market. Quantum offers a future to transform this pace, particularly in drug discovery and machine learning."

"A step change in the way we solve scientific problems is on the horizon," said Ruoyi Zhou, Ph.D., Director of the IBM Research Cleveland Clinic Partnership. "At IBM, we're more motivated than ever to create with Cleveland Clinic and others lasting communities of discovery and harness the power of quantum computing, AI and hybrid cloud to usher in a new era of accelerated discovery in healthcare and life sciences."

The Discovery Accelerator at Cleveland Clinic draws upon a variety of IBM's latest advancements in high-performance computing.

Lara Jehi, M.D., and Ruoyi Zhou, Ph.D., at the site of the IBM Quantum System One on Cleveland Clinic's main campus. (Courtesy: Cleveland Clinic/IBM)

The Discovery Accelerator also serves as the technology foundation for Cleveland Clinic's Global Center for Pathogen Research & Human Health, part of the Cleveland Innovation District. The center, supported by a $500 million investment from the State of Ohio, JobsOhio and Cleveland Clinic, brings together a team focused on studying, preparing for and protecting against emerging pathogens and virus-related diseases. Through the Discovery Accelerator, researchers are leveraging advanced computational technology to expedite critical research into treatments and vaccines.

Together, the teams have already begun several collaborative projects that benefit from the new computational power. The Discovery Accelerator projects include a research study developing a quantum computing method to screen and optimize drugs targeted to specific proteins; improving a prediction model for cardiovascular risk following non-cardiac surgery; and using artificial intelligence to search genome sequencing findings and large drug-target databases to find effective, existing drugs that could help patients with Alzheimer's and other diseases.

A significant part of the collaboration is a focus on educating the workforce of the future and creating jobs to grow the economy. An innovative educational curriculum has been designed for participants from high school to professional level, offering training and certification programs in data science, machine learning and quantum computing to build the skilled workforce needed for cutting-edge computational research of the future.

Read more from the original source:

Cleveland Clinic and IBM Begin Installation of IBM Quantum System One - Cleveland Clinic Newsroom

CEO Jack Hidary on SandboxAQ’s Ambitions and Near-term Milestones – HPCwire

Spun out from Google last March, SandboxAQ is a fascinating, well-funded start-up targeting the intersection of AI and quantum technology. "As the world enters the third quantum revolution, AI + Quantum software will address significant business and scientific challenges," is the company's broad, self-described mission. Part software company, part investor, SandboxAQ foresees a blended classical computing-quantum computing landscape with AI infused throughout.

Its developing product portfolio comprises enterprise software for assessing and managing cryptography/data security in the so-called post-quantum era. NIST, of course, released its first official post-quantum algorithms in July, and SandboxAQ is one of 12 companies selected to participate in its new project, Migration to Post-Quantum Cryptography, to build and commercialize tools. SandboxAQ's AQ Analyzer product, says the company, is already available and being used by a few marquee customers.

Then there's SandboxAQ's Strategic Investment Program, announced in August, which acquires or invests in technology companies of interest. So far, it has acquired one company (Cryptosense) and invested in two others (evolutionQ and Qunnect).

Last week, HPCwire talked with SandboxAQ CEO Jack Hidary about the company's products and strategy. One has the sense that SandboxAQ's aspirations are broad, and with nine-figure funding, it has the wherewithal to pivot or expand. The A in the name stands for AI and the Q stands for quantum. One area not on the current agenda: building a quantum computer.

"We want to sit above that layer. All these [qubit] technologies (ion trap, NV [nitrogen vacancy] center, neutral atoms, superconducting, photonic) are very interesting and we encourage and mentor a lot of these companies who are quantum computing hardware companies. But we are not going to be building one because we really see our value as a layer on top of those computing [blocks]," said Hidary. Google, of course, has another group working on quantum hardware.

Hidary joined Google in 2016 as Sandbox group director. A self-described serial entrepreneur, Hidary's varied experience includes founding EarthWeb, being a trustee of the XPrize Foundation, and running for mayor of New York City in 2013. While at Google Sandbox, he wrote a textbook, Quantum Computing: An Applied Approach.

"I was recruited in to start a new division to focus on the use of AI and ultimately also quantum in solving really hard problems in the world. We realized that we needed to be multi-platform and focus on all the clouds and to do [other] kinds of stuff, so we ended up spinning out earlier this year," said Hidary.

"Eric Schmidt joined us about three and a half years ago as he wrapped up his chairmanship at Alphabet (Google parent company). He got really into what we're doing, looking at the impact that scaled computation can have both on the AI side and the quantum side. He became chairman of SandboxAQ. I became CEO. We've other backers like Marc Benioff from Salesforce and T. Rowe Price and Guggenheim, who are very long-term investors. What you'll notice here that's interesting is we don't have short-term VCs. We have really long-term investors who are here for 10 to 15 years."

The immediate focus is on post-quantum cryptography tools delivered mostly via a SaaS model. By now we're all familiar with the threat that fault-tolerant quantum computers will be able to crack conventionally encrypted (RSA) data using Shor's algorithm. While fault-tolerant quantum computers are still many years away, the National Institute of Standards and Technology (NIST) and others, including SandboxAQ, have warned against Store Now/Decrypt Later attacks. (See the HPCwire article, "The Race to Ensure Post Quantum Data Security.")

"What adversaries are doing now is siphoning off information over VPNs. They're not cracking into your network. They're just doing it over VPNs, siphoning that information. They can't read it today, because it's RSA-protected, but they'll store it and read it in a number of years when they can," he said. "The good news is you don't have to scrap your hardware. You could just upgrade the software. But that's still a monumental challenge. As you can imagine, for all the datacenters and the high-performance computing centers this is a non-trivial operation to do all that."

A big part of the problem is simply finding where encryption code lives in existing infrastructure. That, in turn, has prompted calls for what is being called crypto-agility: a comprehensive yet modular approach that allows cryptography code to be swapped in and out easily.
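To make the idea concrete, here is a minimal, hypothetical sketch in plain Python (not SandboxAQ's design or API) of what crypto-agility can look like in code: every call site goes through one pluggable interface, so a legacy algorithm can later be swapped for a post-quantum one by changing a single registration rather than touching the whole codebase.

from typing import Callable, Dict

# Registry of interchangeable encryption backends; application code never
# names an algorithm directly, it only asks for the current policy.
_BACKENDS: Dict[str, Callable[[bytes, bytes], bytes]] = {}

def register(name: str, fn: Callable[[bytes, bytes], bytes]) -> None:
    _BACKENDS[name] = fn

def encrypt(data: bytes, key: bytes, policy: str = "default") -> bytes:
    # Every call site funnels through this one interface.
    return _BACKENDS[policy](data, key)

# Toy stand-ins only: a real deployment would register today's RSA/AES code
# here and, later, a NIST-selected post-quantum scheme.
def classical_stub(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def post_quantum_stub(data: bytes, key: bytes) -> bytes:
    return classical_stub(data, key[::-1])  # placeholder, not a real PQ scheme

register("default", classical_stub)
ciphertext = encrypt(b"patient record", b"secret")

# The crypto-agile migration: one line re-points every existing call site.
register("default", post_quantum_stub)
ciphertext = encrypt(b"patient record", b"secret")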

"We want crypto-agility, and what we find is large corporations, large organizations, and large governments don't have crypto-agility. What we're hoping is to develop tools to implement this idea. For example, as a first step to crypto-agility, we're trying to see if people even have an MRI (a discovery metaphor) machine for use on their own cybersecurity, and they really don't when it comes to encryption. There are no diagnostic tools that these companies are using to find where their [encryption] footprint is or if they are encrypting everything appropriately. Maybe some stuff is not even being encrypted," said Hidary, who favors the MRI metaphor for a discovery tool.
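In the same spirit, a toy version of the "MRI" idea might be nothing more than a scan that walks a source tree and flags references to quantum-vulnerable primitives. The Python sketch below is an illustrative assumption (the file extensions and regex signatures are arbitrary), not a description of how AQ Analyzer actually works.

import os
import re

# Patterns that hint at quantum-vulnerable or hard-coded cryptography.
SIGNATURES = {
    "RSA key generation": re.compile(r"RSA\.generate|generate_rsa|rsa_keygen", re.I),
    "ECDSA/ECDH usage": re.compile(r"\bECDSA\b|\bECDH\b"),
    "Hard-coded key material": re.compile(r"BEGIN (RSA |EC )?PRIVATE KEY"),
}

def scan_tree(root: str):
    """Yield (path, finding) pairs for every signature match under root."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".py", ".java", ".go", ".cfg", ".pem")):
                continue
            path = os.path.join(dirpath, name)
            try:
                text = open(path, encoding="utf-8", errors="ignore").read()
            except OSError:
                continue
            for label, pattern in SIGNATURES.items():
                if pattern.search(text):
                    yield path, label

if __name__ == "__main__":
    for path, label in scan_tree("."):
        print(f"{label}: {path}")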

No doubt, the need to modernize encryption/decryption methods and tools represents a huge problem and a huge market.

Without getting into technical details, Hidary said SandboxAQ is leveraging technology from its recent Cryptosense acquisition and internally developed technologies to develop a product portfolio planned to broadly encompass cryptography assessment, deployment and management. Its core current product is AQ Analyzer.

The idea, says Hidary, returning to the MRI metaphor, is to "take an MRI scan of inside the organization (on-premise, cloud, private cloud, and so forth) and this feeds into compliance vulnerabilities and post-quantum analysis. It's not just a quantum thing. It's about your general vulnerabilities on encryption. Overall, it happens to be that post-quantum is helped by this, but this is a bigger issue. Then that feeds into your general sysops, network ops, and management tools that you're using."

AQ Analyzer, he says, is enterprise software that starts the process for organizations to become crypto-agile. It's now being used at large banks and telcos, and also by Mount Sinai Hospital. Healthcare, replete with sensitive information, is another early target for SandboxAQ. Long term, the idea is for Sandbox software tools to be able to automate much of the crypto management process, from assessment to deployment through ongoing monitoring and management.

"That's the whole crypto-agility ballgame," says Hidary.

The business model, says Hidary, is a carbon copy of Salesforce.com's SaaS model. Broadly, SandboxAQ uses a three-pronged go-to-market: direct sales; global systems integrators (in May it began programs with Ernst & Young (EY) and Deloitte); and strategic partners/resellers. Vodafone and SoftBank are among the latter. Even though these are still early days for SandboxAQ as an independent entity, it's moving fast, having benefited from years of development inside Google. AQ Analyzer, said Hidary, is in general availability.

"We're doing extremely well in banks and financial institutions. They're typically early adopters of cybersecurity because of the regulatory and compliance environment, and the trust they have with their customers," said Hidary.

Looking at near-term milestones, he said, "We'd like to see a more global footprint of banks. We'll be back in Europe soon now that we have Cryptosense (UK- and Paris-based), and we have a strong local team in Europe. We've had a lot of traction in the U.S. and the Canadian markets. So that's one key milestone over the next 18 months or so. Second, we'd like to see [more adoption] into healthcare and telcos. We have Vodafone and SoftBank mobile on the telco side. We have Mount Sinai; we'd like to see if that can be extended into additional players in those two spaces. The fourth vertical we'll probably go into is the energy grid. These are all critical infrastructure pieces of our society: the financial structure of our society, energy, healthcare and the medical centers, the telecommunications grid."

While SandboxAQ's AQ Analyzer is the company's first offering, it's worth noting that the company is aggressively looking for niches it can serve. For example, the company is keeping close tabs on efforts to build a quantum internet.

"There's going to be a parallel quantum coherent internet to connect for distributed quantum computing," said Hidary. "So nothing to do with cyber at all."

"Our vision of the future that we share with, I think, everyone in the industry is that quantum does not take over classical," said Hidary. "It's a mesh, a hybridization of CPU, GPU and quantum processing units. And the program, the code, in Python for example: part of it runs on CPUs, part of it on GPUs, and then yes, part of it will run on a QPU. In that mesh, you'd want to have access both to the traditional Internet, TCP/IP today, but you also want to be able to connect over a quantum coherent intranet. So that's Qunnect."
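A rough sketch of that mesh, assuming a hypothetical run_on_qpu() call in place of any real quantum backend (the QPU step here is simulated classically), might look like this in Python: preprocessing on the CPU, a linear-algebra kernel of the sort that would be offloaded to a GPU, and one small subroutine delegated to quantum hardware.

import numpy as np

def preprocess_on_cpu(raw):
    """Classical preprocessing: normalize the input vector (CPU work)."""
    v = np.asarray(raw, dtype=float)
    return v / np.linalg.norm(v)

def heavy_linear_algebra(matrix, vector):
    """A dense matrix-vector product; in a real hybrid stack this is the
    kind of kernel that would be dispatched to a GPU."""
    return matrix @ vector

def run_on_qpu(circuit_description):
    """Placeholder for a QPU call. A real program would submit the circuit
    to a quantum backend here; this stub simulates the fair coin flip you
    get from measuring H|0>."""
    assert circuit_description == "H 0; MEASURE 0"
    return np.random.randint(0, 2)

if __name__ == "__main__":
    data = preprocess_on_cpu([3.0, 4.0])
    rotated = heavy_linear_algebra(np.array([[0.0, -1.0], [1.0, 0.0]]), data)
    bit = run_on_qpu("H 0; MEASURE 0")
    print("CPU/GPU result:", rotated, "| QPU sample:", bit)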

Qunnect, of course, is one of the companies SandboxAQ has invested in, and it is working on hardware (quantum memory and repeaters) to enable a quantum internet. Like dealing with post-quantum cryptography, outfitting the quantum internet is likely to be a huge business. Looking at SandboxAQ, just seven months after being spun out from Google, the scope of its ambitions is hard to pin down.

Stay tuned.

Here is the original post:

CEO Jack Hidary on SandboxAQ's Ambitions and Near-term Milestones - HPCwire

The world, and today's employees, need quantum computing more than ever – VentureBeat


Quantum computing will soon be able to address many of the world's toughest, most urgent problems.

That's why the semiconductor legislation Congress just passed is part of a $280 billion package that will, among other things, direct federal research dollars toward quantum computing.


The economy and the environment are clearly two top federal government agenda items. Congress in July was poised to pass the most ambitious climate bill in U.S. history. The New York Times said that the bill would pump hundreds of billions of dollars into low-carbon energy technologies like wind turbines, solar panels and electric vehicles and would put the United States on track to slash its greenhouse gas emissions to roughly 40% below 2005 levels by 2030. This could help to further advance and accelerate the adoption of quantum computing.


Because quantum technology can solve many previously unsolvable problems, a long list of the world's leading businesses, including BMW and Volkswagen, FedEx, Mastercard and Wells Fargo, and Merck and Roche, is making significant quantum investments. These businesses understand that transformation via quantum computing, which is quickly advancing with breakthrough technologies, is coming soon. They want to be ready when that happens.

It's wise for businesses to invest in quantum computing because the risk is low and the payoff is going to be huge. As BCG notes: "No one can afford to sit on the sidelines as this transformative technology accelerates toward several critical milestones."

The reality is that quantum computing is coming, and it's likely not going to be a standalone technology. It will be tied to the rest of the IT infrastructure: supercomputers, CPUs and GPUs.

This is why companies like Hewlett Packard Enterprise are thinking about how to integrate quantum computing into the fabric of the IT infrastructure. It's also why Terra Quantum AG is building hybrid data centers that combine the power of quantum and classical computing.

Amid these changes, employees should start now to get prepared. There is going to be a tidal wave of need both for quantum Ph.D.s and for other talent, such as skilled quantum software developers, to contribute to quantum efforts.

Earning a doctorate in a field relevant to quantum computing requires a multi-year commitment. But obtaining valuable quantum computing skills doesn't require a developer to go back to college, take out a student loan or spend years studying.

With modern tools that abstract the complexity of quantum software and circuit creation, developers no longer require Ph.D.-level knowledge to contribute to the quantum revolution, enabling a more diverse workforce to help businesses achieve quantum advantage. Just look at the winners in the coding competition that my company staged. Some of these winners were recent high school graduates, and they delivered highly innovative solutions.

Leading the software stack, quantum algorithm design platforms allow developers to design sophisticated quantum circuits that could not be created otherwise. Rather than defining tedious low-level gate connections, this approach uses high-level functional models and automatically searches millions of circuit configurations to find an implementation that fits resource considerations, designer-supplied constraints and the target hardware platform. New tools like Nvidia's QODA also empower developers by making quantum programming similar to how classical programming is done.
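For contrast, the snippet below shows the tedious gate-level style that paragraph describes, written in plain Python with NumPy rather than any vendor's actual API: every gate matrix and qubit index is wired by hand just to prepare a two-qubit Bell pair, which is exactly the bookkeeping that higher-level, functional design platforms aim to synthesize automatically from a short specification.

import numpy as np

# Gate-level construction: every gate and qubit index is written out by hand.
H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]])
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def bell_state():
    """Explicit two-qubit Bell-pair circuit: H on qubit 0, then CNOT(0 -> 1)."""
    state = np.zeros(4)
    state[0] = 1.0                      # start in |00>
    state = np.kron(H, I2) @ state      # apply H to qubit 0
    state = CNOT @ state                # entangle with CNOT
    return state

# A functional-level platform would instead accept a spec such as
# {"function": "bell_pair", "max_depth": 2} and synthesize the gates itself.
print(np.round(bell_state(), 3))   # -> [0.707 0.    0.    0.707]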

Developers will want to familiarize themselves with quantum computing, which will be an integral arrow in their metaphorical quiver of engineering skills. People who add quantum skills to their classical programming and data center skills will position themselves to make more money and be more appealing to employers in the long term.

Many companies and countries are experimenting with and adopting quantum computing. They understand that quantum computing is evolving rapidly and is the way of the future.

Whether you are a business leader or a developer, it's important to understand that quantum computing is moving forward. The train is leaving the station; will you be on board?

Erik Garcell is technical marketing manager at Classiq.


Go here to see the original:

The world, and today's employees, need quantum computing more than ever - VentureBeat

Cancer to Be Treated as Easily as Common Cold When Humans Crack Quantum Computing – Business Wire

DUBAI, United Arab Emirates--(BUSINESS WIRE)--Breakthroughs in quantum computing will enable humans to cure diseases like cancer, Alzheimer's, and Parkinson's as easily as we treat the common cold.

That was one of the major insights to emerge from the Dubai Future Forum, with renowned theoretical physicist Dr. Michio Kaku telling the world's largest gathering of futurists that humanity should brace itself for major transformations in healthcare.

The forum concluded with a call for governments to institutionalize foresight and engrain it within decision making.

Speaking at the forum, held at the Museum of the Future in Dubai, UAE, Amy Webb, CEO of the Future Today Institute, criticized nations for being too preoccupied with the present and too focused on creating white papers, reports and policy recommendations instead of taking action.

"Nowism is a virus. Corporations and governments are infected," she said.

One panel session heard how humans could be ready to test life on the Moon in just 15 years and be ready for life on Mars in another decade. Sharing his predictions for the future, Dr. Kaku also said there is a very good chance humans will pick up a signal from another intelligent life form this century.

Dr. Jamie Metzl, Founder and Chair, OneShared.World, urged people to eat more lab-grown meat to combat global warming and food insecurity.

"If we are treating them like a means to an end of our nutrition, wouldn't it be better, instead of growing the animal, to grow the meat?" he said.

Among the 70 speakers participating in sessions were several UAE ministers. HE Mohammad Al Gergawi, UAE Minister of Cabinet Affairs, Vice Chairman, Board of Trustees and Managing Director of the Dubai Future Foundation, said ministers around the world should think of themselves as designers of the future. "Our stakeholders are 7.98 billion people around the world," he noted.

Dubai's approach to foresight was lauded by delegates, including HE Omar Sultan Al Olama, UAE Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, who said: "What makes our city and nation successful is not natural resources, but a unique ability to embrace all ideas and individuals."

More than 30 sessions covered topics including immortality, AI sentience, climate change, terraforming, genome sequencing, legislation, and the energy transition.

*Source: AETOSWire

Follow this link:

Cancer to Be Treated as Easily as Common Cold When Humans Crack Quantum Computing - Business Wire

New laboratory to explore the quantum mysteries of nuclear materials – EurekAlert

Replete with tunneling particles, electron wells, charmed quarks and zombie cats, quantum mechanics takes everything Sir Isaac Newton taught about physics and throws it out the window.

Every day, researchers discover new details about the laws that govern the tiniest building blocks of the universe. These details not only increase scientific understanding of quantum physics, but they also hold the potential to unlock a host of technologies, from quantum computers to lasers to next-generation solar cells.

But there's one area that remains a mystery even in this most mysterious of sciences: the quantum mechanics of nuclear fuels.

Until now, most fundamental scientific research of quantum mechanics has focused on elements such as silicon because these materials are relatively inexpensive, easy to obtain and easy to work with.

Now, Idaho National Laboratory researchers are planning to explore the frontiers of quantum mechanics with a new synthesis laboratory that can work with radioactive elements such as uranium and thorium.

An announcement about the new laboratory appears online in the journal Nature Communications.

Uranium and thorium, which are part of a larger group of elements called actinides, are used as fuels in nuclear power reactors because they can undergo nuclear fission under certain conditions.

However, the unique properties of these elements, especially the arrangement of their electrons, also mean they could exhibit interesting quantum mechanical properties.

In particular, the behavior of particles in special, extremely thin materials made from actinides could increase our understanding of phenomena such as quantum wells and quantum tunneling (see sidebar).

To study these properties, a team of researchers has built a laboratory around molecular beam epitaxy (MBE), a process that creates ultra-thin layers of materials with a high degree of purity and control.

"The MBE technique itself is not new," said Krzysztof Gofryk, a scientist at INL. "It's widely used. What's new is that we're applying this method to actinide materials: uranium and thorium. Right now, this capability doesn't exist anywhere else in the world that we know of."

The INL team is conducting fundamental research (science for the sake of knowledge), but the practical applications of these materials could make for some important technological breakthroughs.

"At this point, we are not interested in building a new qubit [the basis of quantum computing], but we are thinking about which materials might be useful for that," Gofryk said. "Some of these materials could be potentially interesting for new memory banks and spin-based transistors, for instance."

Memory banks and transistors are both important components of computers.

To understand how researchers make these very thin materials, imagine an empty ball pit at a fast-food restaurant. Blue and red balls are thrown in the pit one at a time until they make a single layer on the floor. But that layer isnt a random assortment of balls. Instead, they arrange themselves into a pattern.

During the MBE process, the empty ball pit is a vacuum chamber, and the balls are highly pure elements, such as nitrogen and uranium, that are heated until individual atoms can escape into the chamber.

The floor of our imaginary ball pit is, in reality, a charged substrate that attracts the individual atoms. On the substrate, atoms order themselves to create a wafer of very thin material, in this case uranium nitride.

Back in the ball pit, we've created a layer of blue and red balls arranged in a pattern. Now we make another layer of green and orange balls on top of the first layer.

To study the quantum properties of these materials, Gofryk and his team will join two dissimilar wafers of material into a sandwich called a heterostructure. For instance, the thin layer of uranium nitride might be joined to a thin layer of another material such as gallium arsenide, a semiconductor. At the junction between the two different materials, interesting quantum mechanical properties can be observed.

"We can make sandwiches of these materials from a variety of elements," Gofryk said. "We have lots of flexibility. We are trying to think about the novel structures we can create, with maybe some predicted quantum properties."

"We want to look at electronic properties, structural properties, thermal properties and how electrons are transported through the layers," he continued. "What will happen if you lower the temperature and apply a magnetic field? Will it cause electrons to behave in a certain way?"

INL is one of the few places where researchers can work with uranium and thorium for this type of science. The amounts of radioactive material involved, and the consequent safety concerns, will be comparable to the radioactivity found in an everyday smoke alarm.

"INL is the perfect place for this research because we're interested in this kind of physics and chemistry," Gofryk said.

In the end, Gofryk hopes the laboratory will result in breakthroughs that help attract attention from potential collaborators as well as recruit new employees to the laboratory.

"These actinides have such special properties," he said. "We're hoping we can discover some new phenomena or new physics that hasn't been found before."

In 1900, German physicist Max Planck first described how light emitted from heated objects, such as the filament in a light bulb, behaved like particles.

Since then, numerous scientists, including Albert Einstein and Niels Bohr, have explored and expanded upon Planck's discovery to develop the field of physics known as quantum mechanics. In short, quantum mechanics describes the behavior of atoms and subatomic particles.

Quantum mechanics is different from classical physics, in part, because subatomic particles simultaneously have characteristics of both particles and waves, and their energy and movement occur in discrete amounts called quanta.

More than 120 years later, quantum mechanics plays a key role in numerous practical applications, especially lasers and transistors, a key component of modern electronic devices. Quantum mechanics also promises to serve as the basis for the next generation of computers, known as quantum computers, which will be much more powerful at solving certain types of calculations.

Uranium, thorium and the other actinides have something in common that makes them interesting for quantum mechanics: the arrangement of their electrons.

Electrons do not orbit around the nucleus the way the earth orbits the sun. Rather, they zip around somewhat randomly. But we can define areas where there is a high probability of finding electrons. These clouds of probability are called orbitals.

For the smallest atoms, these orbitals are simple spheres surrounding the nucleus. However, as the atoms get larger and contain more electrons, orbitals begin to take on strange and complex shapes.

In very large atoms like uranium and thorium (92 and 90 electrons respectively), the outermost orbitals are a complex assortment of party balloon, jelly bean, dumbbell and hula hoop shapes. The electrons in these orbitals are high energy. While scientists can guess at their quantum properties, nobody knows for sure how they will behave in the real world.

Quantum tunneling is a key part of any number of phenomena, including nuclear fusion in stars, mutations in DNA and diodes in electronic devices.

To understand quantum tunneling, imagine a toddler rolling a ball at a mountain. In this analogy, the ball is a particle. The mountain is a barrier, most likely a semiconductor material. In classical physics, there's no chance the ball has enough energy to pass over the mountain.

But in the quantum realm, subatomic particles have properties of both particles and waves. The wave's peak represents the highest probability of finding the particle. Thanks to a quirk of quantum mechanics, while most of the wave bounces off the barrier, a small part of that wave travels through if the barrier is thin enough.

For a single particle, the small amplitude of this wave means there is a very small chance of the particle making it to the other side of the barrier.

However, when large numbers of waves are travelling at a barrier, the probability increases, and sometimes a particle makes it through. This is quantum tunneling.
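For readers who want numbers to go with the picture, the standard textbook estimate for a rectangular barrier (a general result, not specific to the INL work) says the transmission probability falls off exponentially with the barrier's width and with the square root of the energy deficit:

T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}

where m is the particle's mass, E its energy, V_0 the barrier height and L the barrier width. Doubling the barrier width squares the already-small tunneling probability, which is why only very thin layers matter here.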

Quantum wells are also important, especially for devices such as light emitting diodes (LEDs) and lasers.

As with quantum tunneling, to build quantum wells you need alternating layers of very thin (about 10 nanometers) material, where one layer acts as a barrier.

While electrons normally travel in three dimensions, quantum wells trap electrons in two dimensions within a barrier that is, for practical purposes, impossible to overcome. These electrons exist at specific energies, say the precise energies needed to generate specific wavelengths of light.
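Those "specific energies" can be written down exactly for the idealized textbook case of an infinitely deep one-dimensional well of width L (a standard result, not INL data): the allowed levels are discrete and scale with the square of an integer quantum number,

E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \ldots

where m is the electron's mass. Real wells of finite depth shift these levels somewhat but keep the same discrete character, which is what lets devices like LEDs and quantum-well lasers emit at well-defined wavelengths.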

About Idaho National Laboratory
Battelle Energy Alliance manages INL for the U.S. Department of Energy's Office of Nuclear Energy. INL is the nation's center for nuclear energy research and development, and also performs research in each of DOE's strategic goal areas: energy, national security, science and the environment. For more information, visit www.inl.gov.

View post:

New laboratory to explore the quantum mysteries of nuclear materials - EurekAlert

1836, the Slaveholder Republic’s Birthday – The Texas Observer

History, it's often said, is written by the victors. While that isn't always true, it's certainly borne out by many popular accounts of the Texas Revolution of 1835-36, which often tell a very black-and-white story of the virtuous Texans (the victors) fighting against the evil Mexicans. The San Jacinto Monument inscription, for instance, blames the rebellion on the "unjust acts and despotic decrees" of unscrupulous rulers in Mexico. A pamphlet produced by the Republican Party-sponsored 1836 Project says that Anglo settlers fought to preserve "constitutional liberty and republican government."

University of Houston history professor Gerald Horne tells a very different story. In The Counter-Revolution of 1836: Texas Slavery & Jim Crow and the Roots of American Fascism, published earlier this year, Horne contends that the motivation behind the Anglo-American rebellion was anything but virtuous: to make Texas safe for slavery and white supremacy. For others (Blacks, the Indigenous peoples, and many Tejanos), the Anglo victory meant slavery, oppression, dispossession, and in many cases, death.

The Counter-Revolution of 1836 is a big, sprawling book (over 570 pages), as befits its scope: It takes the reader from the lead-up to the Texas rebellion, through independence, annexation, the Civil War, Reconstruction, and Jim Crow, to the early 1920s. It is scrupulously researched, drawing not only on other scholars but also on a wide range of sources from the times, including letters, speeches, newspaper articles, and diplomatic posts.

Given current right-wing efforts to expel discussions of systemic racism from Texas classrooms, Horne's book is an important contribution to the ongoing debate over our collective history.

Recently, Horne discussed the book and its implications with the Texas Observer via email.

As the title indicates, you contend that the Texas Revolution was in fact a counter-revolution. What does counter-revolution mean to you, and why do you think it's a more accurate designation?

The title suggests that the 1836 revolt was in response to abolitionism south of the border and thus was designed to stymie progress. A revolution, properly understood, should advance progress. [The] counter-revolution in 1836 assuredly was a step forward for many European settlers, not so much for Africans and the Indigenous.

This book continues the story you begin in your 2014 book on the American rebellion against England (1775-83). In that book, you similarly contend that the American Revolution was a counter-revolution. Why do you think so?

Similarly, 1776 was designed to stymie not only the prospect of abolitionism, but as well to sweep away what was signaled by the Royal Proclamation of 1762-3, which expressed London's displeasure at continuing to expend blood and treasure ousting Indigenous peoples for the benefit of real estate speculators, e.g., George Washington. Not coincidentally, nationals from the post-1776 republic [the United States] were essential to the success of the 1836 counter-revolution.

You refer to the pre-emancipation United States and the pre-annexation Republic of Texas as "slaveholder republics." Some readers may bristle at this label, especially those who believe, as anti-critical race theory Senate Bill 3 puts it, that slavery was not central to the American founding but was merely a "failur[e] to live up to the authentic founding principles of the United States." Why do you think the term "slaveholder republic" is a more accurate description?

"Slaveholding republic" is actually a term popularized by the late Stanford historian (and Pulitzer Prize winner) Don Fehrenbacher. It is an indicator of regression (an offshoot of counter-revolution) that this accurate descriptor is now deemed to be verboten. This ruse of suggesting that every blemish (or atrocity) is inconsistent with founding principles is akin to the thief and embezzler telling the judge when caught red-handed, "Your honor, this is not who I am." Contrary to the delusions of the delirious, slaveholding was not an accident post-1776: How else to explain the exponential increase in the number of enslaved leading up to the Civil War? How else to explain U.S. and Texian nationals coming to dominate the slave trade in Cuba, Brazil, etc.?

You write that 1836 was "a civil war over slavery" and that, "like a precursor of Typhoid Mary, Texas seemed to bring the virulent bacteria that was war to whatever jurisdiction it joined." Of course, Texas ultimately joined the United States. How did slave-owning Texas infect the United States?

Texas was a bulwark of the so-called Confederate States of America, which seceded from the U.S. in 1861 not least to preserve (and extend) enslavement of Africans in the first place. The detritus of Texas slaveholders became a bulwark of the Ku Klux Klan, which served to drown Reconstruction (the post-Civil War steps to deliver a measure of freedom to the formerly enslaved) in blood. This Texas detritus were stalwart backers in the 20th century of the disastrous escapades of McCarthyism, which routed not just communists but numerous labor organizers and anti-Jim Crow advocates. Texas also supplied a disproportionate percentage of the insurrectionists who stormed Capitol Hill on 6 January 2021.

Your book is subtitled The Roots of U.S. Fascism. There's a growing awareness among pundits and some political leaders, President Biden, for instance, of the rise of fascist or fascist-like politics in the United States: a politics of racist nationalism, trading in perceived grievances and centered on devotion to an autocratic leader. Your book argues that today's American fascism has roots as far back as the Anglo settlement of Mexican Texas. Why do you think so?

The genocidal and enslaving impulse has been essential to fascism whenever it has reared its ugly head globally. In Texas, as in the wider republic, this involved class collaboration between and among a diverse array of settlers for mutual advantage. This class collaboration persists to this very day and can be espied on 6 January 2021 and thereafter.

Read more:

1836, the Slaveholder Republic's Birthday - The Texas Observer

General officers in the Confederate States Army – Wikipedia

Senior military leaders of the Confederate States of America

The general officers of the Confederate States Army (CSA) were the senior military leaders of the Confederacy during the American Civil War of 1861-1865. They were often former officers from the United States Army (the regular army) prior to the Civil War, while others were given the rank based on merit or when necessity demanded. Most Confederate generals needed confirmation from the Confederate Congress, much like prospective generals in the modern U.S. armed forces.

Like all of the Confederacy's military forces, these generals answered to their civilian leadership, in particular Jefferson Davis, the South's president and therefore commander-in-chief of the Army, Navy, and the Marines of the Confederate States.

Much of the design of the Confederate States Army was based on the structure and customs of the U.S. Army[1] when the Confederate Congress established their War Department on February 21, 1861.[2] The Confederate Army was composed of three parts: the Army of the Confederate States of America (ACSA, intended to be the permanent, regular army), the Provisional Army of the Confederate States (PACS, or "volunteer" Army, to be disbanded after hostilities), and the various Southern state militias.

Graduates from West Point and Mexican War veterans were highly sought after by Jefferson Davis for military service, especially as general officers. Like their Federal counterparts, the Confederate Army had both professional and political generals within it. Ranks throughout the CSA were roughly based on the U.S. Army in design and seniority.[3] On February 27, 1861, a general staff for the army was authorized, consisting of four positions: an adjutant general, a quartermaster general, a commissary general, and a surgeon general. Initially the last of these was to be a staff officer only.[2] The post of adjutant general was filled by Samuel Cooper (the position he had held as a colonel in the U.S. Army from 1852 until resigning) and he held it throughout the Civil War, as well as the army's inspector general.[4]

Initially, the Confederate Army commissioned only brigadier generals in both the volunteer and regular services;[2] however, the Congress quickly passed legislation allowing for the appointment of major generals as well as generals, thus providing clear and distinct seniority over the existing major generals in the various state militias.[5] On May 16, 1861, when there were only five officers at the grade of brigadier general, this legislation was passed, which stated in part:

That the five general officers provided by existing laws for the Confederate States shall have the rank and denomination of 'general', instead of 'brigadier-general', which shall be the highest military grade known to the Confederate States ...[6]

As of September 18, 1862, when lieutenant generals were authorized, the Confederate Army had four grades of general officers; they were (in order of increasing rank) brigadier general, major general, lieutenant general, and general.[7] As officers were appointed to the various grades of general by Jefferson Davis (and were confirmed), he would create the promotion lists himself. The dates of rank, as well as seniority of officers appointed to the same grade on the same day, were determined by Davis "usually following the guidelines established for the prewar U.S. Army."[8]

These generals were most often infantry or cavalry brigade commanders, aides to other higher ranking generals, and War Department staff officers. By war's end the Confederacy had at least 383 different men who held this rank in the PACS, and three in the ACSA: Samuel Cooper, Robert E. Lee, and Joseph E. Johnston.[9] The organization of regiments into brigades was authorized by the Congress on March 6, 1861. Brigadier generals would command them, and these generals were to be nominated by Davis and confirmed by the Confederate Senate.[2]

Though close to the Union Army in assignments, Confederate brigadiers mainly commanded brigades while Federal brigadiers sometimes led divisions as well as brigades, particularly in the first years of the war. These generals also often led sub-districts within military departments, with command over soldiers in their sub-district. These generals outranked Confederate Army colonels, who commonly led infantry regiments.

This rank is equivalent to brigadier general in the modern U.S. Army.

These generals were most commonly infantry division commanders, aides to other higher ranking generals, and War Department staff officers. They also led the districts that made up military departments and had command over the troops in their districts. Some major generals also led smaller military departments. By war's end, the Confederacy had at least 88 different men who had held this rank, all in the PACS.[10]

Divisions were authorized by the Congress on March 6, 1861, and major generals would command them. These generals were to be nominated by Davis and confirmed by the Senate.[2] Major generals outranked brigadiers and all other lesser officers.

This rank was not synonymous with the Union's use of it, as Northern major generals led divisions, corps, and entire armies. This rank is equivalent in most respects to major general in the modern U.S. Army.


Evander McIver Law was promoted to the rank of major general on March 20, 1865, on the recommendation of Generals Johnston and Hampton, just before the surrender. The promotion came too late to be confirmed by the Confederate Congress, however.

There were 18 lieutenant generals in the Confederate Army, and these general officers were often corps commanders within armies or military department heads, in charge of geographic sections and all soldiers in those boundaries. All of the Confederacy's lieutenant generals were in the PACS.[10] The Confederate Congress legalized the creation of army corps on September 18, 1862, and directed that lieutenant generals lead them. These generals were to be nominated by President Davis and confirmed by the C.S. Senate.[7] Lieutenant generals outranked major generals and all other lesser officers.

This rank was not synonymous with the Federal use of it. Ulysses S. Grant (1822-1885) was one of only two Federal lieutenant generals during the war. The other was Winfield Scott (1786-1866), General-in-Chief of the United States Army from 1841 to 1861 at the beginning of the American Civil War, who had also served in the War of 1812 (1812-1815), had led an army in the field during the Mexican-American War (1846-1848), and had received a promotion to brevet lieutenant general by a special Act of Congress in 1855. By the time of his promotion on March 9, 1864, Grant was the only Federal lieutenant general in active service. Grant became General-in-Chief, commander of the United States Army and of all the Union armies, answering directly to President Abraham Lincoln and charged with the task of leading the Federal armies to victory over the southern Confederacy. The CSA lieutenant general rank is also roughly equivalent to lieutenant general in the modern U.S. Army.

The Congress passed legislation in May 1864 to allow for "temporary" general officers in the PACS, to be appointed by President Jefferson Davis, confirmed by the C.S. Senate, and given a non-permanent command by Davis.[12] Under this law, Davis appointed several officers to fill open positions. Richard H. Anderson was appointed a "temporary" lieutenant general on May 31, 1864, and given command of the First Corps in the Army of Northern Virginia, commanded by Gen. Lee, following the wounding of Lee's second-in-command, Lt. Gen. James Longstreet, on May 6 in the Battle of the Wilderness. With Longstreet's return that October, Anderson reverted to a major general. Jubal Early was appointed a "temporary" lieutenant general on May 31, 1864, and given command of the Second Corps following the reassignment of Lt. Gen. Richard S. Ewell to other duties. He led the Corps as an army into the third Southern invasion of the North in July 1864, with battles at the Monocacy near Frederick, Maryland, and Fort Stevens outside the Federal capital city of Washington, D.C., until December 1864, when he too reverted to a major general. Likewise, both Stephen D. Lee and Alexander P. Stewart were appointed to fill vacancies in the Western Theater as "temporary" lieutenant generals and also reverted to their prior grades as major generals as those assignments ended. However, Lee was nominated a second time for lieutenant general on March 11, 1865.[13]

Originally five officers in the South were appointed to the rank of general, and only two more would follow. These generals occupied the senior posts in the Confederate Army, mostly as entire army or military department commanders and advisers to Jefferson Davis. This rank is equivalent to general in the modern U.S. Army, and the grade is often referred to in modern writings as "full general" to help differentiate it from the generic term "general", meaning simply "general officer".[15]

All Confederate generals were enrolled in the ACSA to ensure that they outranked all militia officers,[5] except for Edmund Kirby Smith, who was appointed general late in the war and into the PACS. Pierre G.T. Beauregard, who had also initially been appointed a PACS general, was elevated to the ACSA two months later with the same date of rank.[16] These generals outranked all other grades of generals, as well as all lesser officers in the Confederate States Army.

The first group of officers appointed to general was Samuel Cooper, Albert Sidney Johnston, Robert E. Lee, Joseph E. Johnston, and Pierre G.T. Beauregard, with their seniority in that order. This ordering caused Cooper, a staff officer who would not see combat, to be the senior general officer in the CSA. That seniority strained the relationship between Joseph E. Johnston and Jefferson Davis. Johnston considered himself the senior officer in the Confederate States Army and resented the ranks that President Davis had authorized. However, his previous position in the U.S. Army was staff, not line, which was evidently a criterion for Davis regarding establishing seniority and rank in the subsequent Confederate States Army.[17]

On February 17, 1864, legislation was passed by Congress to allow President Davis to appoint an officer to command the Trans-Mississippi Department in the Far West, with the rank of general in the PACS. Edmund Kirby Smith was the only officer appointed to this position.[18] Braxton Bragg was appointed a general in the ACSA with a date of rank of April 6, 1862, the day his commanding officer Gen. Albert Sidney Johnston died in combat at Shiloh/Pittsburg Landing.[19]

The Congress passed legislation in May 1864 to allow for "temporary" general officers in the PACS, to be appointed by Davis and confirmed by the C.S. Senate and given a non-permanent command by Davis.[12] John Bell Hood was appointed a "temporary" general on July 18, 1864, the date he took command of the Army of Tennessee in the Atlanta Campaign, but this appointment was not later confirmed by the Congress, and he reverted to his rank of lieutenant general in January 1865.[20] Later, in March 1865, shortly before the end of the war, Hood's status was spelled out by the Confederate States Senate, which stated:

Resolved, That General J. B. Hood, having been appointed General, with temporary rank and command, and having been relieved from duty as Commander of the Army of Tennessee, and not having been reappointed to any other command appropriate to the rank of General, he has lost the rank of General, and therefore cannot be confirmed as such.[21]

Note that during 1863, Beauregard, Cooper, J. Johnston, and Lee all had their ranks re-nominated on February 20 and then re-confirmed on April 23 by the Confederate Congress.[13] This was in response to debates on February 17 about whether confirmations made by the provisional legislature needed re-confirmation by the permanent legislature, which was done by an Act of Congress issued two days later.[22]

The position of General in Chief of the Armies of the Confederate States was created on January 23, 1865. The only officer appointed to it was Gen. Robert E. Lee, who served from February 6 until April 12.

The Southern states had had militias in place since Revolutionary War times, consistent with the U.S. Militia Act of 1792. They went by varied names, such as State "Militia" or "Armies" or "Guard," and were activated and expanded when the Civil War began. These units were commanded by "Militia Generals" to defend their particular state and sometimes did not leave native soil to fight for the Confederacy. The Confederate militias used the general officer ranks of Brigadier General and Major General.

The regulations in the Act of 1792 provided for two classes of militia, divided by age. Class one was to include men from 22 to 30 years old, and class two would include men from 18 to 20 years as well as from 31 to 45 years old.[23] The various southern states were each using this system when the war began.

All Confederate generals wore the same uniform insignia regardless of which rank of general they were,[24] except for Robert E. Lee who wore the uniform of a Confederate colonel. The only visible difference was the button groupings on their uniforms; groups of three buttons for lieutenant and major generals, and groups of two for brigadier generals. In either case, a general's buttons were also distinguished from other ranks by their eagle insignia.

Pictured is the CSA general's full uniform, in this case that of Brig. Gen. Joseph R. Anderson of the Confederacy's Ordnance Department. All of the South's generals wore uniforms like this regardless of which grade of general they were, all with gold-colored embroidery.

The general officers of the Confederate Army were paid for their services, and exactly how much (in Confederate dollars (CSD)) depended on their rank and whether they held a field command or not. On March 6, 1861, when the army only contained brigadier generals, their pay was $301 CSD monthly, and their aide-de-camp lieutenants would receive an additional $35 CSD per month beyond regular pay. As more grades of the general officer were added, the pay scale was adjusted. By June 10, 1864, a general received $500 CSD monthly, plus another $500 CSD if they led an army in the field. Also, by that date, lieutenant generals got $450 CSD and major generals $350 CSD, and brigadiers would receive $50 CSD in addition to regular pay if they served in combat.[25]

The CSA lost more general officers killed in combat than the Union Army did throughout the war, in the ratio of about 5-to-1 for the South compared to roughly 12-to-1 in the North.[26] The most famous of them is General Thomas "Stonewall" Jackson, probably the best-known Confederate commander after General Robert E. Lee.[27] Jackson's death was the result of pneumonia, which set in after he was wounded by friendly fire at Chancellorsville on the night of May 2, 1863. Replacing these fallen generals was an ongoing problem during the war, often requiring men to be promoted beyond their abilities (a common criticism of officers such as John Bell Hood[28] and George E. Pickett,[29] but an issue for both armies) or kept in command despite grave combat wounds, as with Richard S. Ewell.[30] The problem was made more difficult by the South's depleting manpower, especially near the war's end.

The last Confederate general in the field, Stand Watie, surrendered on June 23, 1865, and the war's last surviving full general, Edmund Kirby Smith, died on March 28, 1893.[31] James Longstreet died on January 2, 1904, and was considered "the last of the high command of the Confederacy".[32]

The Confederate Army's system of using four grades of general officers is currently the same rank structure used by the U.S. Army (in use since shortly after the Civil War) and is also the system used by the U.S. Marine Corps (in use since World War II).

View original post here:

General officers in the Confederate States Army - Wikipedia

President Biden Announces Key Appointments to Boards and Commissions – The White House

WASHINGTON – Today, President Biden announced his intent to appoint the following individuals to serve in key roles:

Council of the Administrative Conference of the United States
The Administrative Conference of the United States (ACUS) is an independent federal agency charged with convening expert representatives from the public and private sectors to recommend improvements to administrative process and procedure. ACUS initiatives promote efficiency, participation, and fairness in the promulgation of federal regulations and in the administration of federal programs. The ten-member ACUS Council is composed of government officials and private citizens.

Kristen Clarke, Member, Council of the Administrative Conference of the United States
Kristen Clarke is the Assistant Attorney General for Civil Rights at the U.S. Department of Justice. In this role, she leads the Justice Department's broad federal civil rights enforcement efforts and works to uphold the civil and constitutional rights of all who live in America. Clarke is a lifelong civil rights lawyer who has spent her entire career in public service. She most recently served as President and Executive Director of the Lawyers' Committee for Civil Rights Under Law, one of the nation's leading civil rights organizations, founded at the request of John F. Kennedy.

Fernando Raul Laguarda, Member, Council of the Administrative Conference of the United States
Fernando Laguarda is General Counsel at AmeriCorps. Prior to his current role, he was Faculty Director of the Program on Law and Government and a Professor at American University Washington College of Law, where he taught and developed courses in administrative law, legislation, and antitrust, and launched the law school's LL.M. in Legislation. Laguarda also founded the nation's first student-centered initiative to study the work of government oversight entities and was faculty advisor to the Latino Law Students Association. He has worked in the telecommunications industry and as a partner at two different Washington, D.C. law firms focusing on technology and competition law. He was a founder of the National Network to End Domestic Violence, served as its General Counsel, and eventually became its Board Chair. Laguarda has also served as a member of numerous non-profit, civil rights, academic, and advisory boards. Laguarda received his J.D. cum laude from Georgetown University Law Center and his A.B. cum laude in government from Harvard College.

Anne Joseph O'Connell, Member, Council of the Administrative Conference of the United States
Anne Joseph O'Connell, a lawyer and social scientist, is the Adelbert H. Sweet Professor of Law at Stanford University. Her research and teaching focus on administrative law and public administration. She is a three-time recipient of the American Bar Association's Scholarship Award in Administrative Law for the best article or book published in the preceding year, and a two-time winner of the Richard D. Cudahy Writing Competition on Regulatory and Administrative Law from the American Constitution Society. O'Connell joined Gellhorn and Byse's Administrative Law: Cases and Comments casebook as a co-editor with the twelfth edition. Most recently, her work has focused on acting officials and delegations of authority in federal agencies. Her research has been cited by Congress, the Supreme Court, lower federal courts, and the national media. She is an elected fellow of the American Academy of Arts and Sciences and the National Academy of Public Administration.

Before entering law school teaching, O'Connell clerked for Justice Ruth Bader Ginsburg and Judge Stephen F. Williams and served as a trial attorney for the Federal Programs Branch of the Department of Justice's Civil Division. A Truman Scholar, she worked for a number of federal agencies in earlier years. O'Connell received a B.A. in Mathematics from Williams College, an M.Phil. in the History and Philosophy of Science from Cambridge University, a J.D. from Yale Law School, and a Ph.D. in Political Economy and Government from Harvard University.

Jonathan Su, Member, Council of the Administrative Conference of the United States

Jonathan Su most recently served as Deputy Counsel to the President. Prior to his service at the White House, Su was the Deputy Office Managing Partner of the Washington, D.C. office of Latham & Watkins LLP, where he was also a partner in the White Collar Defense & Investigations practice. During the Obama-Biden Administration, Su served as Special Counsel to the President. Su was also a federal prosecutor at the United States Attorney's Office for the District of Maryland. He served as a law clerk for U.S. Circuit Judge Ronald M. Gould and U.S. District Judge Julian Abele Cook, Jr. Su is a graduate of the University of California at Berkeley and Georgetown University Law Center.

National Capital Planning Commission

Established by Congress in 1924, the National Capital Planning Commission (NCPC) is the federal government's central planning agency for the National Capital Region. Through planning, policymaking, and project review, NCPC protects and advances the federal government's interest in the region's development. The Commission provides overall planning guidance for federal land and buildings in the region by reviewing the design of federal and certain local projects, overseeing long-range planning for future development, and monitoring capital investment by federal agencies. The 12-member Commission represents federal and local constituencies with a stake in planning for the nation's capital.

Bryan Clark Green, Commissioner, National Capital Planning Commission

Bryan Green leverages his expertise as an educator, writer, and practicing preservationist to embrace the role of architecture in America's larger story. He began his career at the Virginia Historical Society, worked for the Virginia Department of Historic Resources, and was a Senior Associate and Director of Historic Preservation at Commonwealth Architects. He later joined the Tidewater and Big Bend Foundation as Executive Director. Green is the author of the forthcoming work In Jefferson's Shadow: The Architecture of Thomas R. Blackburn, and co-author of Lost Virginia: Vanished Architecture of the Old Dominion and of After the Monuments Fall: The Removal of Confederate Monuments from the American South (LSU Press), with Kathleen James-Chakraborty and Katherine Kuenzli. Green graduated from the University of Notre Dame with a Bachelor's degree in History and obtained his Master's and Ph.D. in Architectural History at the University of Virginia.

He serves as Chair, Preservation Officer, and ex officio member of the Board at the Heritage Conservation Committee of the Society of Architectural Historians. He co-chairs the Publications Committee of the Association for Preservation Technology International and serves on the Commonwealth of Virginia's Citizens Advisory Council on Furnishing and Interpreting the Executive Mansion, and formerly served on the City of Richmond Commission of Architectural Review and Urban Design committees. Green's longstanding commitment to this work led him to Honorary Membership in both the Virginia Society and the Richmond Chapter of the American Institute of Architects.

Elizabeth M. Hewlett, Commissioner, National Capital Planning Commission

Elizabeth M. Hewlett is an attorney and servant of the public interest. She recently retired from her second tenure as the Chairman of the Prince George's County Planning Board and the Maryland-National Capital Park and Planning Commission. She has represented Maryland on the Washington Metropolitan Area Transit Authority and served as a Principal at Shipley, Horne and Hewlett, P.A., a law firm where she represented individuals, businesses, and real estate clients while also rendering many community-centric pro bono services. Hewlett has participated in or led dozens of public boards, civic groups, and key initiatives, including the Prince George's County Census effort, the Maryland State Board of Law Examiners, and the Governor's Drug and Alcohol Abuse Commission.

Throughout her career, Hewlett has also been a contributor to several legal and professional organizations, including the National Bar Association, the Women's Bar Association of Maryland, the J. Franklyn Bourne Bar Association, the National Association for the Advancement of Colored People, and Delta Sigma Theta Sorority, Inc. She has received many awards, including the Wayne K. Curry Distinguished Service Award, the National Bar Association Presidential Lifetime Achievement Award, and the J. Joseph Curran Award for Public Service. She is a graduate of Tufts University, Boston College Law School, and the John F. Kennedy School of Government Executive Program at Harvard University.

President's Intelligence Advisory Board

The President's Intelligence Advisory Board is an independent element within the Executive Office of the President. It exists exclusively to assist the President by providing an independent source of advice on the effectiveness with which the Intelligence Community is meeting the nation's intelligence needs and the vigor and insight with which the community plans for the future. The President may appoint up to 16 members of the Board.

Anne M. Finucane, Member, President's Intelligence Advisory Board

Anne Finucane currently serves as Chairman of the Board for Bank of America Europe. She also serves on the board of Bank of America Securities Europe SA, the bank's EU broker-dealer in Paris. Finucane served as the first woman Vice Chairman of Bank of America. She led the company's strategic positioning and global sustainable and climate finance work, environmental, social, and governance (ESG) efforts, capital deployment, and public policy efforts. She is widely recognized for pioneering sustainable finance in the banking industry. For most of her career, Finucane also oversaw marketing, communications, and data and analytics at the company, and is credited with leading Bank of America's successful efforts to reposition the company and repair its reputation after the 2008 financial crisis.

Finucane serves on a variety of corporate and nonprofit boards of directors, including CVS Health, Williams-Sonoma, Mass General Brigham Healthcare, Special Olympics, the (RED) Advisory Board, and the Carnegie Endowment for International Peace. She previously served on the U.S. State Department's Foreign Affairs Policy Board and is a member of the Council on Foreign Relations. Finucane has consistently been highlighted on most-powerful-women lists, including in American Banker, Fortune, and Forbes. In 2021, she received the Carnegie Hall Medal of Honor, and in 2019 she was inducted into the American Advertising Federation's Advertising Hall of Fame and was honored by the Edward M. Kennedy Institute for Inspired Leadership.

###

Continued here:

President Biden Announces Key Appointments to Boards and Commissions - The White House

8 Best Canadian Whiskies of 2022 – HICONSUMPTION

In a world of whiskies where identity is key, Canadian whisky might just suffer from its ability to do everything. From making a Scotch-style single malt to an American-style bourbon, distilleries from our northern neighbors thrive on that very versatility, opening the doors to creativity and innovation. Luckily, in recent years, Canadian liquor has been on the rise Stateside. It's yet to build up the exotic cachet of Scotch or Japanese whisky, but we're confident that it's only a matter of time. To help you get started, we've compiled a guide to the best Canadian whiskies to drink right now.

And Rye Is It So Good?

Although it's often called rye whisky, Canadian rye whisky is quite different from American rye whiskey (beyond the added "e"), which can contain as much as 100% rye in the mashbill. For one, the "rye" in Canadian whisky refers to the grain being added to a predominantly corn mashbill. Whereas most popular whisky-making regions (think Scotland, Ireland, Japan, and the United States) specialize in a certain style or styles brought on by the prominence of a specific grain or still type, Canada is known for its eclectic variety, and its whisky is frequently blended from different styles.

That said, there are some legal stipulations pinned to making Canadian whisky thanks to the nation's Food and Drugs Act. Most importantly, the liquor is required to be mashed, distilled, and aged in Canada. Additionally, it must be aged in small wood vessels for at least three years and bottled at no less than 40% ABV. Unlike in many other regions, caramel may be added for flavoring, as long as the spirit doesn't lose the aroma and taste generally attributed to Canadian whisky.

Slow But Steady

Around since the 1700s, Canadian whisky mostly began as a wheat spirit, since that's what primarily grew in the country at the time. Rye was added for flavor, thus creating what would become the profile and identity of the spirit for some time. The liquor really started to boom in the 19th century in England, which was having trouble sourcing its whisky elsewhere. Later on, during the Civil War in the United States, the North looked to Canada to supply its liquor, since it refused to buy products from the Confederate states, which happened to produce most of the whiskey in the country.

Canada was the first nation to enact an aging requirement, which was only one year in 1887 before eventually increasing to three. Canadian whisky was able to capitalize on the repeal of Prohibition, since many U.S. distilleries had shut down and consumers wanted something besides the bootleg whiskey they had been drinking for 13 years. Likewise, a lot of Canadian product had been aging in barrels, waiting for demand to return. Like most spirits (other than vodka), Canadian whisky then lost ground to wine and beer, the favored alcoholic drinks throughout the '70s and '80s, until 1992, when Forty Creek reclaimed what Canadian whisky could be.

Launched in 1946, Alberta Distillers started making rye whisky a couple of decades after it went out of style and long before it came back into fashion. A few years ago, the number-one rye producer in North America decided to do something a little different. Where its contemporaries were finishing their whiskies in former wine casks, Alberta was putting it straight in the batch, blending 91% rye, 8% bourbon, and 1% sherry to make its Dark Batch, which rides on a profile of vanilla, oak, dried stone fruit, citrus, and baking spices.

Lot 40 was created by Corby Spirit and Wine in 1998 as a limited-edition homage to pre-Prohibition-style rye whisky. After the resurgence of rye, it was launched as its own brand in 2012 and has since become one of the most decorated Canadian whiskies. Utilizing a mashbill of 100% unmalted rye, Lot 40, which takes its name from the plot of land owned by one of its founders, is distilled in copper pot stills one batch at a time and aged in new American oak barrels, much like bourbon. The result is a dry and complex profile of spice, dark fruit, and citrus.

Since 2011, British Columbia-based distillery Shelter Point has made all of its whiskies with the barley that's grown on its own 380-acre property and water from a river that runs through its estate. Its highly popular small-batch Smoke Point expression takes after the peated single malts from Scotland. Made in pot stills, Batch #3 has already won a plethora of awards this year, including Double Gold at the San Francisco World Spirits Competition and Best Single Malt at the Canadian Whisky Awards.

With 165 years of whisky-making experience, J.P. Wiser's is one of the oldest operating distilleries in the nation. Thanks to the low-rye mashbill, this 18-year-old corn whisky, the brand's highest age statement, is double column distilled, blended, and aged for nearly two decades in Canadian oak casks. Perfect for sipping neat, this expression goes down super smooth with a dynamic palate of pine, oak, apple, and floral notes with a long finish. And with a sub-$60 price point, this is one of the best deals you'll find in any liquor category.

What Jack Daniel's is to American whiskey, Crown Royal is to our neighbors to the north. Easily Canada's most recognizable brand, the Gimli giant has been heading in a new premium direction as of late. Even so, that purple bag and those picturesque bottles have always carried an air of elegance. This most recent version of the Noble Collection's Winter Wheat Blended whisky has been the brand's hottest batch as of late, even winning Best Whisky Overall at the Canadian Whisky Awards back in February.

Since its launch in 1992, Forty Creek has been paving the way for Canadian whisky with its relentless approach to thinking outside the box. Credited with helping revive the national spirit, Forty Creek's small-batch Confederation Oak Reserve, named after the Canadian Confederation of 1867, blends together three spirits of different ages, made from a mashbill of corn, rye, and barley, and then finished for two years in Canadian oak casks. The colder weather imparts a profile of vanilla, buttercream, pepper, and walnut.

Billed as Canada's first single-barrel whisky, this marvelous expression from Caribou Crossing comes from one of around 200,000 casks in the distillery's collection. Bourbon lovers might compare its caribou bottle topper to Blanton's galloping horse, but the flavor profile can stand toe-to-toe as well. Easily one of the most prominent top-shelf options from the Great White North, Caribou Crossing's Single Barrel soars with a slightly fluctuating, medium-body profile of vanilla, honey, pepper, and fruit.

Rye typically matures much faster than corn- or barley-based whiskies. Nevertheless, the folks at Lock Stock & Barrel have found magic in their process, utilizing a mashbill of 100% rye. The brand's top-shelf 21 Year was double distilled in copper pot stills before being aged for over two decades in new charred American oak barrels. Bottled at 111 proof, this whisky has a definite heat undergirding notes of cinnamon, caramel, cocoa, anise, and treacle, giving way to a long finish of leather, oak, and spice.

Original post:

8 Best Canadian Whiskies of 2022 - HICONSUMPTION

Inside Lake Lanier’s Deaths And Why People Say It’s Haunted – All That’s Interesting

Constructed right atop the historically Black town of Oscarville, Georgia in 1956, Lake Lanier has become one of the most dangerous bodies of water in America with the remains of buildings just below the surface ensnaring hundreds of boats and swimmers.

Each year, more than 10 million people visit Lake Lanier in Gainesville, Georgia. Though unassuming the massive, placid lake might look, it's considered one of the deadliest in America; indeed, there have been 700 deaths at Lake Lanier since its construction in 1956.

This shocking number of accidents at the lake has led many to theorize that the site may, in fact, be haunted.

And given the controversial circumstances surrounding the lake's construction and the history of racial violence in the former town of Oscarville, whose ruins lie beneath the lake's surface, there might be some truth to this idea.

In 1956, the United States Army Corps of Engineers was tasked with creating a lake to provide water and power to parts of Georgia and help to prevent the Chattahoochee River from flooding.

They chose to construct the lake near Oscarville, in Forsyth County. Named after the poet and Confederate soldier Sidney Lanier, Lake Lanier has 692 miles of shoreline, making it the largest lake in Georgia and far, far larger than the town of Oscarville, which the Corps of Engineers forcibly emptied so that the lake could be built.

In total, 250 families were displaced, roughly 50,000 acres of farmland were destroyed, and 20 cemeteries were either relocated or otherwise engulfed by the lake's waters over its five-year construction period.

The town of Oscarville, however, was strangely not demolished before the lake was filled, and its ruins still rest at the bottom of Lake Lanier.

Divers have reported finding fully intact streets, walls, and houses, making the lake bed one of the most dangerous underwater landscapes in the United States.

The flooded structures, coupled with declining water levels, are presumed to be a major factor in the high number of deaths that occur yearly at Lake Lanier, catching swimmers and holding them under or damaging boats with debris.

The deaths at Lake Lanier aren't the typical sort, though. While there are many instances of people drowning, there are also reports of boats randomly going up in flames, freak accidents, missing persons, and inexplicable tragedies.

Some believe the region's dark past is responsible for these incidents. Legend asserts that the vengeful and restless spirits of those whose graves were flooded, many of whom were Black residents persecuted and driven out by violent white mobs, are behind this curse.

The town of Oscarville was once a bustling, turn-of-the-century community and a beacon for Black culture in the South. At the time, 1,100 Black people owned land and operated businesses in Forsyth County alone.

But on Sept. 9, 1912, an 18-year-old white woman named Mae Crow was raped and murdered near Brown's Bridge on the banks of the Chattahoochee River, right by Oscarville.

According to the Oxford American, Mae Crow's murder was pinned on four young Black people who happened to live nearby: siblings Oscar and Trussie Jane Daniel, only 18 and 22 respectively, and their 16-year-old cousin Ernest Knox. With them was Robert "Big Rob" Edwards, 24.

Edwards was arrested for Crow's rape and murder and taken to jail in Cumming, Georgia, the seat of Forsyth County.

A day later, a white mob invaded Edwards' jail cell. They shot him, dragged him through the streets, and hanged him from a telephone pole outside the courthouse.

A month later, Ernest Knox and Oscar Daniel appeared in court for the rape and murder of Mae Crow. They were found guilty by the jury in just over an hour.

Some 5,000 people gathered to watch the teenagers be hanged.

Trussie Daniel's charges were dismissed, but it's widely believed that all of the accused were innocent of the crimes.

Following Edwards' lynching, white mobs known as "night riders" started going door to door across Forsyth County with torches and guns, burning down Black businesses and churches and demanding that all Black citizens vacate the county.

As Narcity reported, to this day less than five percent of Forsyth County's population is Black.

But perhaps Lake Lanier is haunted by some other force?

The most popular legend surrounding Lake Lanier is called The Lady of the Lake.

As the story goes, in 1958, two young women named Delia May Parker Young and Susie Roberts were at a dance in town but decided to leave early. On the way home, they stopped to get gas and then left without paying for it.

They were driving across a bridge over Lake Lanier when they lost control of the car, spiraling off the edge and crashing into the dark waters below.

A year later, a fisherman out on the lake came across a decomposed, unrecognizable body floating near the bridge. At the time, no one could identify who it belonged to.

It wasn't until 1990, when officials discovered a 1950s Ford sedan at the bottom of the lake with the remains of Susie Roberts inside, that they were able to identify the body found three decades earlier as Delia May Parker Young's.

But locals already knew who she was. They had reportedly seen her, still in her blue dress, wandering near the bridge at night with handless arms, waiting to drag unsuspecting lake-goers to the bottom.

Other people have reported seeing a shadowy figure sitting on a raft, inching himself across the water with a long pole and holding up a lantern to see.

Besides these ghost stories of yore, there are those who claim that the lake is haunted by the spirits of the 27 victims who have died in Lake Lanier over the years, but whose bodies were never found.

In the end, though, ghost stories are perhaps nothing more than a fun way to write off an otherwise tragic history littered with racist violence as well as unsafe and poorly planned construction.

Regardless of the lake's size, for 700 people to have died there in less than 70 years, something must be wrong. The Army Corps of Engineers initially believed that the submerged town of Oscarville wouldn't cause any harm, but the lake also wasn't constructed to be recreational; it was meant to supply water from the Chattahoochee River to towns and cities in Georgia.

Many of the deaths can likely be attributed to things as simple as not wearing a life jacket, drinking alcohol while out on the lake, accidents, or incorrectly assuming that shallow water is always safe.

Perhaps the only thing that truly haunts Lake Lanier is its bigoted past.

After reading about the deaths at Lake Lanier and the history behind the lake, learn about Ohio's Franklin Castle, which quickly became a house of horrors. Then, see the twisted, dark history of the Myrtles Plantation in Louisiana.

Read this article:

Inside Lake Lanier's Deaths And Why People Say It's Haunted - All That's Interesting

NIST Cloud Computing Program – NCCP | NIST

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics (On-demand self-service, Broad network access, Resource pooling, Rapid elasticity, Measured Service); three service models (Cloud Software as a Service (SaaS), Cloud Platform as a Service (PaaS), Cloud Infrastructure as a Service (IaaS)); and, four deployment models (Private cloud, Community cloud, Public cloud, Hybrid cloud). Key enabling technologies include: (1) fast wide-area networks, (2) powerful, inexpensive server computers, and (3) high-performance virtualization for commodity hardware.
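
To make the taxonomy above concrete, here is a minimal, purely illustrative Python sketch that captures the NIST service and deployment models as plain data and validates how an offering is described. The CloudOffering class and the example values are hypothetical, not part of the NIST program.

```python
from dataclasses import dataclass

# The NIST cloud model from the definition above, captured as plain data.
ESSENTIAL_CHARACTERISTICS = (
    "on-demand self-service", "broad network access", "resource pooling",
    "rapid elasticity", "measured service",
)
SERVICE_MODELS = {"SaaS", "PaaS", "IaaS"}
DEPLOYMENT_MODELS = {"private", "community", "public", "hybrid"}

@dataclass
class CloudOffering:
    name: str
    service_model: str
    deployment_model: str

    def validate(self) -> None:
        # Reject descriptions that fall outside the NIST taxonomy.
        if self.service_model not in SERVICE_MODELS:
            raise ValueError(f"unknown service model: {self.service_model}")
        if self.deployment_model not in DEPLOYMENT_MODELS:
            raise ValueError(f"unknown deployment model: {self.deployment_model}")

CloudOffering("example analytics app", "SaaS", "public").validate()  # passes
```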

The Cloud Computing model offers the promise of massive cost savings combined with increased IT agility. It is considered critical that government and industry begin adoption of this technology in response to difficult economic constraints. However, cloud computing technology challenges many traditional approaches to datacenter and enterprise application design and management. Cloud computing is currently being used; however, security, interoperability, and portability are cited as major barriers to broader adoption.

The long-term goal is to provide thought leadership and guidance around the cloud computing paradigm to catalyze its use within industry and government. NIST aims to shorten the adoption cycle, which will enable near-term cost savings and increased ability to quickly create and deploy enterprise applications. NIST aims to foster cloud computing systems and practices that support interoperability, portability, and security requirements that are appropriate and achievable for important usage scenarios.

Read the original here:

NIST Cloud Computing Program - NCCP | NIST

Alibaba Invests in Cloud Computing Business With New Campus – ETF Trends

Chinese tech giant Alibaba Group Holding Ltd. has opened a new campus for its cloud computing unit, Alibaba Cloud, in its home city of Hangzhou. Per the South China Morning Post, which Alibaba owns, the 10-building, 2.1 million-square-foot campus is roughly the size of the 2 million-square-foot campus for Google's Silicon Valley headquarters, aka the Googleplex, in Mountain View, California.

Alibaba Cloud also highlighted the campus's eco-friendly designs in a video, including a photovoltaic power generation system, flowerpots made from recycled plastic, and high-efficiency, low-energy devices in the on-site coffee shop, according to SCMP.

The new campus signals the firm's commitment to investing in its growing cloud computing business. While Alibaba's net income dropped 50% year-over-year in the second quarter to 22.74 billion yuan ($3.4 billion), Alibaba Cloud experienced the fastest growth among all of Alibaba's business segments in Q2, making up 9% of total revenue.

The new facilities also come at a time when China's economy has been facing a slowdown. Even so, Alibaba's cloud computing unit has been eyeing expansion opportunities overseas; for example, Alibaba Cloud announced last month a $1 billion commitment to upgrading its global partner ecosystem.

Alibaba is currently the third-largest holding in EMQQ Global's flagship exchange-traded fund, the Emerging Markets Internet & Ecommerce ETF (NYSEArca: EMQQ), with a weighting of 7.01% as of October 14. EMQQ seeks to offer investors exposure to the growth in internet and e-commerce activities in the developing world as middle classes expand and affordable smartphones provide unprecedentedly large swaths of the population with access to the internet for the first time, according to the issuer.

EMQQ tracks an index of leading internet and e-commerce companies that includes online retail, search engines, social networking, online video, e-payments, online gaming, and online travel.

For more news, information, and strategy, visit our Emerging Markets Channel.

Read more from the original source:

Alibaba Invests in Cloud Computing Business With New Campus - ETF Trends

The Top 5 Cloud Computing Trends In 2023 – Forbes

The ongoing mass adoption of cloud computing has been a key driver of many of the most transformative tech trends, including artificial intelligence (AI), the internet of things (IoT), and remote and hybrid working. Going forward, we can expect to see it becoming an enabler of even more technologies, including virtual and augmented reality (VR/AR), the metaverse, cloud gaming, and even quantum computing.

The Top 5 Cloud Computing Trends In 2023

Cloud computing makes this possible by removing the need to invest in buying and owning the expensive infrastructure required for these intensive computing applications. Instead, cloud service providers make it available "as-a-service," running on their own servers and data centers. It also means companies can, to some extent, avoid the hassle of hiring or training a highly specialized workforce if they want to take advantage of these breakthrough technologies.

In 2023, we can expect to see companies continuing to leverage cloud services in order to access new and innovative technologies, as well as to drive efficiencies in their own operations and processes. Here's a rundown of some of the trends that I believe will have the most impact.

Increased investment in cloud security and resilience

Migrating to the cloud brings huge opportunities, efficiencies, and convenience but also exposes companies and organizations to a new range of cybersecurity threats. On top of this, the growing pile of legislation around how businesses can store and use personal data means that the risk of fines or (even worse) losing the trust of their customers is a real problem.

As a result, spending on cyber security and building resilience against everything from data loss to the impact of a pandemic on global business will become even more of a priority during the coming year. However, as many companies look to cut costs in the face of a forecasted economic recession, the emphasis is likely to be on the search for innovative and cost-efficient ways of maintaining cyber security in order to get the most "bang for the buck." This will mean greater use of AI and predictive technology designed to spot threats before they cause problems, as well as an increase in the use of managed security-as-a-service providers in 2023.

Multi-cloud is an increasingly popular strategy

If 2022 was the year of hybrid cloud, then 2023 could be the year that businesses come to understand the advantages of diversifying their services across a number of cloud providers. This is a strategy known as taking a multi-cloud approach, and it offers a number of advantages, including improved flexibility and security.

It also prevents organizations from becoming too tied in to one particular ecosystem - a situation that can create challenges when cloud service providers change the applications they support or stop supporting particular applications altogether. And it helps to create redundancy that reduces the chance of system errors or downtime from causing a critical failure of business operations.

Adopting a multi-cloud infrastructure means moving away from potentially damaging business strategies such as building applications and processes solely around one particular cloud platform, e.g., AWS, Google Cloud, or Microsoft Azure. The growing popularity of containerized applications means that in the event of changes to service levels, or more cost-efficient solutions becoming available from different providers, applications can be quickly ported across to new platforms. While back in 2020 most companies (70%) said they were still tied to one cloud service provider, reports have found that 84% of mid-to-large companies will have adopted a multi-cloud strategy by 2023, positioning it as one of the year's defining trends in cloud computing.
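
A common way to preserve the portability described above is to code against a thin, provider-agnostic interface rather than a single vendor's SDK, so the backing cloud can be swapped (or several used at once) without touching application logic. The sketch below is a hypothetical Python illustration; ObjectStore and InMemoryStore are invented names standing in for real provider-backed implementations.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-agnostic interface the application codes against."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; real deployments would wrap a provider SDK here."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: ObjectStore, report_id: str, body: bytes) -> None:
    # Application logic depends only on the interface, so moving between
    # clouds (or running on several) requires no changes here.
    store.put(f"reports/{report_id}", body)

archive_report(InMemoryStore(), "2023-q1", b"example contents")
```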

The AI and ML-powered cloud

Artificial intelligence (AI) and machine learning (ML) are provided as cloud services because few businesses have the resources to build their own AI infrastructure. Gathering data and training algorithms require huge amounts of computing power and storage space that is generally more cost-efficient to rent as-a-service. Cloud service providers are increasingly relying on AI themselves for a number of tasks. This includes managing the vast, distributed networks needed to provide storage resources to their customers, regulating the power and cooling systems in data centers, and powering cyber security solutions that keep their data safe. In 2023, we can expect to see continued innovation in this field as hyperscale cloud service providers like Amazon, Google, and Microsoft continue to apply their own AI technology to create more efficient and cost-effective cloud services for their customers.

Low-code and no-code cloud services

Tools and platforms that allow anybody to create applications and to use data to solve problems without getting their hands dirty with writing computer code are increasingly popular. This category of low-code and no-code solutions includes tools for building websites, web applications and designing just about any kind of digital solutions that companies may need. Low-code and no-code solutions are even becoming available for creating AI-powered applications, drastically lowering the barriers to entry for companies wanting to leverage AI and ML. Many of these services are provided via the cloud meaning users can access them as-a-service without having to own the powerful computing infrastructure needed to run them themselves. Tools like Figma, Airtable, and Zoho allow users to carry out tasks that previously would have required coding experience, such as designing websites, automating spreadsheet tasks, and building web applications, and I see providing services like this as an area where cloud technology will become increasingly useful in 2023 and beyond.

Innovation and consolidation in cloud gaming

The cloud has brought us streaming services like Netflix and Spotify, which have revolutionized the way we consume movies, TV, and music. Streaming video gaming is taking a little longer to gain a foothold but is clearly on its way, with Microsoft, Sony, Nvidia, and Amazon all offering services in this field. It hasn't all been plain sailing, however: Google spent millions of dollars developing its Stadia streaming gaming service, only to retire it this year due to a lack of commercial success. One of the problems is the networks themselves: streaming video games clearly requires higher bandwidth than music or video, meaning it's restricted to those of us with good-quality high-speed internet access, which is still far from all of us. However, the ongoing rollout of 5G and other ultra-fast networking technologies should eventually solve this problem, and 2023 could be the year that cloud gaming makes an impact. Google has said that the technology powering Stadia will live on as the backbone of an in-development B2B game streaming service that will allow game developers to provide streaming functionality directly to their customers. If, as many predict, cloud gaming becomes the killer app for 5G in the same way that streaming video was for 4G and streaming music was for 3G, then 2023 could be the year when we start to see things fall into place.

To stay on top of the latest business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books Tech Trends in Practice and Business Trends in Practice, which just won the 2022 Business Book of the Year award.

Visit link:

The Top 5 Cloud Computing Trends In 2023 - Forbes

Benefits of Cloud Computing That Can Help You With Your Business 2023 – ReadWrite

Today, businesses are looking to operate more flexibly and cost-effectively. This has led to the rise of cloud computing as a viable solution for almost every business. Cloud computing uses a network of remote servers hosted on the Internet and accessible through standard web browsers or mobile apps.

It enables users to store data remotely, exchange files, and access software applications from anywhere with an internet connection. In addition, individuals and businesses that use the cloud can access their data from any computer or device connected to the internet, allowing them to sync their settings and files wherever they go.

There are many advantages to using cloud services for your business. Here are 10 benefits of cloud computing that can help you with your business.

One of the most significant benefits of cloud computing is its security. If you run your business on the cloud, you don't have to worry about protecting your data from hackers or other threats on your own.

Cloud providers use industry-standard security practices to keep your data safe, including firewalls, encryption, and authentication systems.

You can further customize your business's security settings if you use a private cloud. For example, if an employee loses or misplaces a device that has access to your data, you can remotely disable that device without putting your data at risk.

You can also encrypt your data to protect it against cyber threats. Businesses can also use multi-factor authentication (MFA) to protect their data further. MFA requires users to input a one-time passcode sent to their phone to log in and confirm their identity.
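
As an illustration of the second-factor step described above, the sketch below uses the pyotp library to verify a time-based one-time password (the authenticator-app variant of an OTP). The secret and code are generated locally purely for demonstration; a real system would store secrets per user at enrolment.

```python
import pyotp  # third-party library: pip install pyotp

# Generated once per user at enrolment and shared with their authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def second_factor_ok(submitted_code: str) -> bool:
    # verify() checks the code against the current 30-second time window.
    return totp.verify(submitted_code)

# Simulated login: in practice the code comes from the user's device or app.
print(second_factor_ok(totp.now()))  # True
```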

Another advantage of cloud computing is its scalability. Cloud providers offer scalable cloud solutions that you can adjust to meet your business's needs.

You can scale up or down your system on demand to deal with seasonal traffic or unexpected spikes in usage. This allows you to avoid buying too much computing power and resources upfront and allows your business to adjust to changes in demand quickly.

You can also try out a cloud solution before you commit to it by renting a smaller instance for a trial period. Cloud solutions are also flexible enough for you to upgrade or downgrade your solutions as your business scales up or down.

This means that you don't have to buy more computing power than you need upfront, and you don't have to upgrade your systems again if your business starts to slow down.
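
The scale-up/scale-down behaviour described above is typically driven by a simple utilisation rule. The function below is a minimal, self-contained sketch of such a rule; the target utilisation and replica bounds are arbitrary example values, not any provider's defaults.

```python
def desired_replicas(current: int, cpu_utilisation: float,
                     target: float = 0.60, min_r: int = 2, max_r: int = 20) -> int:
    """Proportional rule in the spirit of horizontal autoscalers: scale the
    replica count by observed/target utilisation, clamped to sane bounds."""
    if cpu_utilisation <= 0:
        return min_r
    wanted = round(current * (cpu_utilisation / target))
    return max(min_r, min(max_r, wanted))

print(desired_replicas(current=4, cpu_utilisation=0.90))  # 6: scale out under load
print(desired_replicas(current=4, cpu_utilisation=0.30))  # 2: scale in when quiet
```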

Cloud computing can help you achieve greater flexibility and mobility if your business relies on people working remotely. With cloud solutions, you can access your data and run your applications from any computer or device connected to the internet.

When you can access all your data from anywhere, employees can work from home, in coffee shops, or other locations without sacrificing productivity. In addition, cloud providers offer a wide range of collaboration and communication tools that work with their services.

You can also use these tools to collaborate and communicate with clients and vendors who don't need access to your company's data.

Another advantage of cloud computing is its consistency. While different people and departments may use other devices and software, cloud solutions ensure everyone has a consistent experience.

This prevents miscommunications and ensures that everyone is on the same page. Whether you use Office 365, Google G Suite, Salesforce, or another cloud service, your business will have a consistent experience across platforms.

You can also use tools, like identity integration, to access information from different applications without switching between them.

Cloud solutions offer significant cost reductions over the long run compared to other IT solutions. You can save money on hardware, upgrades, and software licenses while enjoying a flexible and scalable solution.

Cloud providers handle all the maintenance and upgrade of their systems, so you dont have to worry about keeping up with the latest trends in IT.

If your business uses various cloud services, you can easily integrate them to streamline your workflows.

Many cloud services have a wide range of integrations with other services that you can use to enhance your business processes. For example, you can use Salesforce to manage your leads and close rates and Zapier to link it with other business tools like Gmail, Mailchimp, and Google Calendar.

You can also use a hybrid cloud solution that lets you keep your data close to home while accessing additional IT services through the cloud.
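
At its simplest, gluing two cloud services together means reading a record from one service's API and pushing it into another's webhook or workflow. The sketch below uses the requests library with entirely hypothetical endpoints; a real integration would use each vendor's documented API or an integration platform's connectors.

```python
import requests  # third-party library: pip install requests

# Hypothetical endpoints standing in for two cloud services being connected.
SOURCE_URL = "https://source.example.com/api/leads/latest"
WEBHOOK_URL = "https://automation.example.com/hooks/new-lead"

def forward_latest_lead() -> None:
    # Pull the newest record from the source service...
    lead = requests.get(SOURCE_URL, timeout=10).json()
    # ...and hand it to the downstream workflow (CRM, mailing list, calendar, ...).
    requests.post(WEBHOOK_URL, json=lead, timeout=10).raise_for_status()

if __name__ == "__main__":
    forward_latest_lead()
```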

Cloud solutions offer unlimited storage, unlike other data storage solutions like on-premise computers. So while you can scale down your cloud solution if you dont need as much storage for your data, you can also increase your storage later.

You can also use a hybrid solution to keep some of your data local while storing other data in the cloud.

Another advantage of cloud computing is faster performance. If you use the cloud, you aren't limited by your own hardware, and your systems are more scalable.

This means that your website and other business applications will perform faster without you having to make hardware upgrades.

You can also use a hybrid solution to improve your performance by keeping your most critical data close to home while accessing other data in the cloud.

Cloud solutions offer a collaborative online environment that lets you share important information with clients and vendors. You can use collaboration tools like wikis, blogs, and forums to work with team members and manage your projects.

You can also use collaboration tools to communicate with clients and vendors who don't need access to your company's data. These tools let you share documents, collaborate on tasks, and manage your workflow from a single platform.

Even though control is an essential aspect of a company's success, some things are simply out of your control, whether or not your organization manages its own procedures. In today's market, even a small amount of downtime has a significant impact.

Business downtime leads to lost productivity, revenue, and reputation. Although you can't prevent or foresee every catastrophe, there is something you can do to speed up your recovery. Cloud-based data recovery services provide quick recovery in emergency situations, such as natural disasters and electrical outages.
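
Part of the recovery story above is simply keeping recent backups off-site in cloud object storage. The sketch below uses the AWS SDK for Python (boto3) to upload a database dump to S3; the bucket name and file path are placeholders, and credentials are assumed to come from the standard AWS configuration (environment variables, config files, or an IAM role).

```python
import datetime
import boto3  # third-party AWS SDK: pip install boto3

BUCKET = "example-dr-backups"  # placeholder bucket name

def backup_database_dump(path: str) -> str:
    # Date-stamped key so each day's dump is kept separately.
    key = f"backups/{datetime.date.today():%Y-%m-%d}/{path.rsplit('/', 1)[-1]}"
    boto3.client("s3").upload_file(path, BUCKET, key)
    return key

# backup_database_dump("/var/backups/app.sql")  # copies today's dump off-site
```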

The range of benefits of cloud computing makes it a viable solution for almost every business. It offers many advantages that can help you streamline your workflow, achieve better performance, and operate more efficiently.

Suvigya Saxena is the Founder & CEO of Exato Software, a globally ranked mobile, cloud computing, and web app development company. With 15+ years of experience in IT, he is known for delivering out-of-the-box solutions across the domain.

Go here to see the original:

Benefits of Cloud Computing That Can Help You With Your Business 2023 - ReadWrite

cloud-computing GitHub Topics GitHub

Here are 1,464 public repositories matching this topic...

Learn and understand Docker&Container technologies, with real DevOps practice!

A curated list of software and architecture related design patterns.

Pulumi - Universal Infrastructure as Code. Your Cloud, Your Language, Your Way

High-Performance server for NATS.io, the cloud and edge native messaging system.

A curated list of Microservice Architecture related principles and technologies.

Cloud Native application framework for .NET

A curated list of awesome services, solutions and resources for serverless / nobackend applications.

Cloud Native Control Planes

A comprehensive tutorial on getting started with Docker!

Rules engine for cloud security, cost optimization, and governance, DSL in yaml for policies to query, filter, and take actions on resources

Service Fabric is a distributed systems platform for packaging, deploying, and managing stateless and stateful distributed applications and containers at large scale.

Open, Multi-Cloud, Multi-Cluster Kubernetes Orchestration

The open-source data integration platform for security and infrastructure teams

Web-based Cloud Gaming service for Retro Game

A list of resources in different fields of Computer Science

Example of a cinema microservice

This repository consists of the code samples, assignments, and notes for the DevOps bootcamp of Community Classroom.

Awesome Cloud Security Resources

Source code accompanying book: Data Science on the Google Cloud Platform, Valliappa Lakshmanan, O'Reilly 2017

TFHE: Fast Fully Homomorphic Encryption Library over the Torus


Original post:

cloud-computing GitHub Topics GitHub

Cloud Computing: A catalyst for the IoT Industry – SiliconIndia

Cloud computing is a great enabler for today's businesses for a variety of reasons. It helps companies, particularly small and medium enterprises, jumpstart their operations sooner, as there is very little lead time needed to stand up a full-fledged in-house IT infrastructure. Secondly, it eases the financial requirements by avoiding heavy capex and turning IT costs into an opex model. Even more advantageously, opex costs can be scaled up and down dynamically based on demand, thus optimizing IT costs.

I think cloud computing became a catalyst for the IoT industry, and the proliferation that is seen today probably may not have happened in the absence of cloud integration. Typically, IoT devices like sensors generate huge amounts of data that require both storage and processing, thus making cloud platforms the perfect choice for building IoT-based solutions. In an IoT implementation, apart from data assimilation there are some fundamental aspects, like security and managing devices, that need to be considered, and cloud platforms take over some of these implementation aspects, enabling the solution provider to focus on the core problem.
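
A minimal sketch of the sensor-to-cloud flow described here: readings are batched on the device and posted to a cloud ingestion endpoint over HTTPS. The endpoint URL, device ID, and payload shape are all hypothetical; a real deployment would more likely use the IoT platform's own SDK or a protocol such as MQTT.

```python
import json
import random
import urllib.request

INGEST_URL = "https://ingest.example.com/v1/telemetry"  # hypothetical endpoint

def read_sensor() -> dict:
    # Stand-in for a real sensor driver.
    return {"device_id": "pump-7", "temperature_c": round(random.uniform(20, 90), 1)}

def push_batch(readings: list) -> None:
    # Send one JSON batch instead of a request per reading.
    req = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(readings).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

# push_batch([read_sensor() for _ in range(60)])  # e.g. one minute of readings
```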

An interesting case study of how IoT and cloud technologies can help to create innovative solutions was presented at a Microsoft conference a few years back. It's a solution developed to monitor pollution levels in the Ganges, a project sponsored by the Central Pollution Control Board. For more information, readers can go to this link: https://azure.microsoft.com/en-us/blog/cleaning-up-the-ganges-river-with-help-from-iot/

Digital technology in the financial services

When we talk about disruptive digital technologies in the Financial Services industry, perhaps Blockchain is the one that stands out immediately. The concept of DLT (Distributed Ledger Technology) has been around for some time, and there's lots of interest in leveraging this technology, primarily for transparency and efficiency reasons. After an article by the Reserve Bank of India in 2020, many Indian banks responded to this initiative by starting to look at opportunities that involve DLT. For example, State Bank of India tied up with JP Morgan to use their Blockchain technology.

Adoption of Blockchain could simplify Inter-bank payment settlement and perhaps could be extended in future to cross-border payment settlements across different DLT platforms. It could also be used for settlement of securitized assets by putting them on a common ledger. Another application is using DLT for KYC whereby multiple agencies (like banks) can access customer data from a decentralized and secure database. In fact, EQ uses Blockchain in its product offering to privately funded companies and PEs for Cap table management.

The next one is probably Artificial Intelligence (AI) and Machine Learning (ML) which is predominantly being applied in Financial Services industry in managing internal and external risks. AI-based algorithms now underpin risk-based pricing in Insurance sector and in reducing NPAs in the Banking sector. The technology helps banks predict defaults and take proactive measures to mitigate that risk.

In the Indian context, Unified Payments Interface (UPI) and Aadhar-enabled Payment Service (AePS) are classic examples of disruptive products in financial services industry.

Effective Network Security acts as a gatekeeper

In today's connected world, where much of the commerce happens online, it's imperative that businesses focus on security to safeguard themselves from threats in cyberspace. The recent approach to network security is the Zero Trust model, which basically means never trusting any user or device unless verified. In this model, mutual authentication happens between the two entities in multiple ways, for example, using user credentials followed by a second factor like an OTP, while application authentication sometimes happens through a digital certificate. The process also uses analytics and log analysis to detect abnormalities in user behaviour and enforce additional authenticating measures while sending alerts at the same time. This is something many of us might have come across when we try to connect to an application from a new device that the application is not aware of. The security mechanism might enforce additional authentication whilst sending an alert to us. Nowadays, businesses also use innovative methods of authentication like biometrics, voice recognition, etc., and some of these are powered by AI/ML.
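
The new-device behaviour described above can be sketched as a simple policy check: even a correct password from an unfamiliar device triggers step-up authentication and an alert. The device registry, fingerprints, and decision strings below are illustrative only.

```python
KNOWN_DEVICES = {"alice": {"laptop-3f2a"}}  # device fingerprints seen before

def send_alert(user: str, device_id: str) -> None:
    print(f"alert: new device {device_id} attempted a login for {user}")

def login_decision(user: str, password_ok: bool, device_id: str) -> str:
    """Never trust by default: an unfamiliar device forces a second factor."""
    if not password_ok:
        return "deny"
    if device_id not in KNOWN_DEVICES.get(user, set()):
        send_alert(user, device_id)      # notify about the unrecognised device
        return "require-second-factor"   # e.g. OTP, biometric, push approval
    return "allow"

print(login_decision("alice", True, "phone-9c1d"))  # require-second-factor
```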

Fintech players leverage Artificial Intelligence to bridge the gap in MSME lending

I think MSME lending (maybe Retail Lending too) is one of the segments significantly disrupted by technology. In a way, it has opened unconventional options for MSMEs to attract capital, both for capex and working capital requirements. There are products ranging from P2P lending to Invoice Discounting offered by Fintech companies, which is opening up a new marketplace. There are Fintech players interested in lending in this space, and they use AI/ML models to predict the probability of defaults, assess credit risk, and appropriately hedge against it.
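
As a toy illustration of the kind of default-probability model mentioned here, the sketch below fits a logistic regression with scikit-learn on entirely synthetic data; real credit-risk models rely on far richer features, validation, and governance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # pip install scikit-learn

# Synthetic features, e.g. [credit utilisation, repayment-history score].
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = (X[:, 0] > 0.7).astype(int)  # toy stand-in for historical default outcomes

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[0.85, 0.4]])[0, 1])  # estimated probability of default
```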

See the original post here:

Cloud Computing: A catalyst for the IoT Industry - SiliconIndia

Revitalising data and infrastructure management through cloud – ETCIO South East Asia

The cloud has been a significant contributor to the digital optimisation and transformation of businesses and institutions globally since the 2010s. It seems almost an eternity ago when the IT department was essentially a support function, with KRAs around the design and delivery of Information Technology Architecture encompassing infrastructure, data centres and constituent servers, personal computers, software, networking and security systems, along with the associated vendor evaluation, outsourcing, contracting, commissioning and finally aligning with business systems and goals, as this pre-millennium Research Gate paper indicates.

The one and a half decades since the advent of the millennium saw the rise of many trends besides the cloud, such as the shift from integrated to business-specific applications and the resulting data management and insights, globalisation, adoption of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), the explosion of telecom, mobility and Mobile Backend as a Service (MBaaS), other technologies such as social media, e-commerce, Extended Reality, Digital Twins, AI/ML, RPA, the Internet of Things, Blockchain and chatbots, and lastly, the growing skill gaps and demands for talent.

The cloud has now taken over a major chunk of responsibilities pertaining to infrastructure, data centres, hosting, SaaS and architectural applications, platforms, networking and security functions, thus freeing up IT and business teams to leverage technology for more strategic tasks related to operations, customers, R&D, supply chain and others. The cloud hence enabled companies and institutions to leverage the amalgamation of technology, people and processes across their extended enterprises to run ongoing digital programmes for driving revenue, customer satisfaction, and profitability. The cloud can potentially add USD 1 trillion of economic value across the Fortune 500 band of companies by 2030, as this research by McKinsey estimates.

Even before the pandemic, although the initial adoption of cloud was in SaaS applications, IaaS and PaaS were surely catching up, thus shifting away from the traditional data centre and on-premise infrastructure. Gartner's research way back in 2015 predicted a 30%-plus increase in IaaS spending, with public cloud IaaS workloads finally surpassing on-premise loads. In the same year, a similar Gartner paper highlighted significant growth in PaaS as well: both for Infrastructure and Application iPaaS.

The cloud is adding significant value across industry verticals and business functions, right from remote working with online meetings and collaboration tools to automated factory operations, extended reality, digital twins, remote field services and many others. The cloud has also been adopted as the platform for deploying other new technologies such as RPA and Artificial Intelligence/Machine Learning (AI/ML). Depending on industry best practices, business use cases and IT strategies, it became feasible to leverage infrastructure, assets, applications, and software in a true hybrid/multi/industry cloud scenario, with separate private and public cloud environments covering IaaS, PaaS, SaaS and MBaaS. As platforms were maturing, organisations were furthermore transitioning from Virtual Machine and even IaaS-based solutions to PaaS-based ones. Gartner had predicted in this research that by 2021, over 75% of enterprises and mid-sized organisations would adopt a hybrid or multi-cloud strategy.

There was also a clear transition from the traditional lift and shift to the cloud native approach, which makes full use of cloud elasticity and optimisation levers and moreover minimises technical debt and inefficiencies. This approach uses cloud computing to build and run microservices-based, scalable applications running in virtualised containers, orchestrated through Container-as-a-Service platforms and managed and deployed using DevOps workflows. Microservices, container management, infrastructure as code, serverless architectures, declarative code and continuous integration and delivery (CI/CD) are the fundamental tenets of this cloud native approach. Organisations are balancing the use of containerisation with leveraging cloud hosting provider capabilities, especially considering the extent of hybrid cloud, the effort and cost of container infrastructure, and running commodity applications.
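
A minimal sketch of one tenet above: a single, narrowly scoped microservice exposing a health endpoint that a container orchestrator can probe. Flask is used for brevity; the routes and placeholder data are invented for illustration.

```python
from flask import Flask, jsonify  # third-party library: pip install flask

app = Flask(__name__)

@app.route("/healthz")
def healthz():
    # Liveness/readiness probe target for an orchestrator such as Kubernetes.
    return jsonify(status="ok")

@app.route("/quote/<symbol>")
def quote(symbol: str):
    # One narrowly scoped capability: the essence of a microservice.
    return jsonify(symbol=symbol.upper(), price=42.0)  # placeholder data

if __name__ == "__main__":
    # Inside a container this would sit behind a production WSGI server;
    # the built-in server is enough for a local sketch.
    app.run(host="0.0.0.0", port=8080)
```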

From the architecture standpoint, cloud-based composable architectures such as MACH (Microservices-based, API-first, Cloud-native SaaS and Headless) and Packaged Business Capabilities (PBCs) are increasingly being used in organisations to enhance Digital Experience Platforms, enabling customers, employees and the supply chain with the new-age omnichannel experience. These architectures facilitate faster deployment and time to market through quick testing by sample populations and subsequent full-fledged implementations. These composable architectures help organisations future-proof their IT investments and improve business resilience and recovery with the ability to decouple and recouple technology stacks. At the end of the first year of the pandemic, Gartner here highlighted the importance of composable architecture in its Hype Cycle of 2021, especially for business resilience and recovery during a crisis.

Intelligently deploying serverless computing in the architecture also enhances cloud native strategies immensely, enabling developers to focus on triggers and running function/event-based computing, which also results in more optimised cloud economics. Also, access to the cloud service provider's Function-as-a-Service (FaaS) and Backend-as-a-Service (BaaS) models significantly reduces IT environment transformation costs. This Deloitte research illustrates the advantages that serverless computing can bring to retail operations.
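
To illustrate the FaaS idea, here is a minimal handler following AWS Lambda's Python convention (an event dict and a context object in, an HTTP-style dict out). The event shape shown is a simplified, hypothetical API Gateway request.

```python
import json

def handler(event, context):
    """Entry point the FaaS platform invokes per event; there is no server
    for the developer to provision, scale, or patch."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test with a synthetic event:
print(handler({"queryStringParameters": {"name": "cloud"}}, None))
```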

To enhance their cloud native strategies, further encourage citizen development, reduce over-reliance on IT and bridge the IT-business gap, organisations are also making use of Low Code No Code (LCNC) tools, clearly shifting from application development to assembly. Citizen developers are making use of LCNC functionalities such as drag and drop, pre-built user interfaces, APIs and connectors, one-click delivery and others to further augment their containerisation and microservices strategies. This Gartner research predicts that 70% of new applications developed by organisations will use LCNC by 2025, well up from less than 25% in 2020.

Infrastructure and data management in the cloud are being immensely powered up by automation and orchestration, especially in minimising manual effort and errors in processes such as provisioning, configuring, sizing and auto-scaling, asset tagging, clustering and load balancing, performance monitoring, deploying, DevOps and CI/CD testing, and performance management. Further efficiencies are brought to fruition through automation, especially in areas such as shutting down unutilised instances, backups, workflow version control, and establishing Infrastructure as Code (IaC). This further adds value to a robust cloud native architecture by enhancing containerisation, clustering, network configuration, storage connectivity, load balancing and workload lifecycle management, besides highlighting vulnerabilities and risks. Enterprises pursuing hybrid cloud strategies are hence driving automation in private clouds as well as integrating with public clouds by creating automation assets that perform resource codification across all private and public clouds and offer a single API. This McKinsey research highlights that companies that have adopted end-to-end automation in their cloud platforms and initiatives report a 20-40% increase in the speed of releasing new capabilities to market. A similar report by Deloitte mentions that intelligent automation in the cloud enables scale in just 4-12 months, compared to the earlier 6-24 month period, through streamlined development and deployment processes.
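
One of the automation tasks listed above, shutting down unutilised instances, can be sketched with boto3 as below. The "environment=dev" tag, region, and scheduling are assumptions made for illustration, not a prescription.

```python
import boto3  # third-party AWS SDK: pip install boto3

def stop_idle_dev_instances(region: str = "us-east-1") -> list:
    """Stop running instances tagged environment=dev so they are not
    billed overnight; typically wired to a scheduler or pipeline job."""
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```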

CIOs are also increasingly turning to distributed cloud models to address edge or location-based cloud use cases, especially across banks and financial institutions, healthcare, smart cities, and manufacturing. It is expected that decentralised and distributed cloud computing will move from the initial private cloud substation deployments to eventually Wi-Fi-like distributed cloud substation ecosystems, especially considering the necessary availability, bandwidth and other operational and security aspects.

These rapid developments in the cloud ecosystems especially for hybrid and multi cloud environments have necessitated infrastructure and data management to encompass dashboards for end-to-end visibility of all the cloud resources and usage across providers, business functions, and departments. Governance and Compliance, Monitoring, Inventory Management, Patches and Version Control, Disaster Recovery, Hybrid Cloud Management Platforms (HCMP), Cloud Service Brokers (CSB) and other Tools aid companies in better Infrastructure Management in the Cloud, while catering to fluctuating demands and corresponding under and over utilisation scenarios, while continuously identifying pockets for optimisation and corrections. For companies with customers spread across diverse geographies, it is important to have tools for infrastructure management, global analytics, database engines and application architectures across these global Kubernetes clusters and Virtual Machines.

The vast increase in attack surfaces and potential breach points has necessitated CIOs and CISOs to incorporate robust security principles and tools within the cloud native ecosystem itself, through cloud security platforms such as Cloud Access Security Broker (CASB), Cloud Security Posture Management (CSPM), Secure Access Service Edge (SASE), DevSecOps, and the incorporation of AI and ML in proactive threat hunting and response systems. This is also critical for adhering to Governance, Risk and Compliance (GRC) and regulatory requirements, in line with Zero Trust Architecture and cyber-resilient frameworks and strategy. This McKinsey article highlights the importance of Security as Code (SaC) in cloud native strategies and its reliance on architecture and the right automation capabilities.

This EY article highlights the importance of cybersecurity in cloud native strategies, as well as the corresponding considerations in processes, cybersecurity tools, architecture, risk management, skills and competencies, and controls. Data encryption and workload protection, Identity and Access Management (IAM), Extended Detection and Response (XDR), Security Information and Event Management (SIEM), and Security Orchestration, Automation and Response (SOAR) tools that incorporate AI/ML capabilities ensure a proactive rather than a reactive response. Considering the vast volumes of information to ingest, store and analyse, organisations are also considering or deploying cyber data lakes, either as alternatives to or in conjunction with their SIEM ecosystems.

Financial Operations (FinOps) is growing in popularity, helping organisations gain maximum value from the cloud through the cross-functional involvement of business, finance, procurement, supply chain, engineering, DevOps/DevSecOps and cloud operations teams. Augmented FinOps was listed by Gartner in its 2022 Hype Cycle for emerging technologies. FinOps adds immense value to infrastructure and data management in the cloud through dynamic and continuous sourcing and management of cloud consumption, demand mapping, and crystallising the total cost of ownership and operations, with end-to-end cost visibility and forecasting to make joint decisions and monitor comprehensive KPIs. Besides the cloud infrastructure management strategies listed in this section, FinOps also incorporates vendor management strategies and leverages cloud carbon footprint tools for the organisation's sustainability goals.
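
As a small illustration of the cost-visibility side of FinOps, the sketch below pulls month-to-date spend broken down by a cost-allocation tag. It assumes AWS Cost Explorer via boto3 and a hypothetical "team" tag; commercial FinOps platforms and other clouds expose equivalent APIs.

```python
# Minimal FinOps-style sketch: month-to-date spend broken down by a
# (hypothetical) "team" cost-allocation tag, using AWS Cost Explorer.
import boto3
from datetime import date

def spend_by_team(start: str, end: str):
    ce = boto3.client("ce")  # AWS Cost Explorer client
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "team"}],
    )
    breakdown = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            tag_value = group["Keys"][0]  # e.g. "team$payments"
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            breakdown[tag_value] = breakdown.get(tag_value, 0.0) + amount
    return breakdown

if __name__ == "__main__":
    today = date.today()
    # Note: Cost Explorer requires End to be later than Start.
    print(spend_by_team(today.replace(day=1).isoformat(), today.isoformat()))
```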

What about Data Management through the Cloud?

The second half of the 2010s, and especially the COVID-19 period, also resulted in an explosion of IoT, social media, e-commerce and other digital transformation. Organisations now deal with diverse data sources residing in the cloud, on-premise and at the edge; diversity in data sets spanning sensor, text, image, audio-visual, voice, e-commerce, social media and other data; and the sheer volume of data that must be ingested, managed and delivered in both real-time and batch mode. Even before the pandemic, this explosion of unstructured data pushed companies to leverage Hadoop and other open-source-based data lakes alongside the structured data residing in their data warehouses. According to this Deloitte article, for the CXOs surveyed, data modernisation is an even more critical driver for migrating to the cloud than cost and performance considerations.

This research by Statista estimates that the total amount of data worldwide rose from 9 zettabytes in 2013 to over 27 zettabytes in 2021, and predicts that it will grow to well over 180 zettabytes by 2025. Decentralised and distributed cloud computing, Web 3.0, the Metaverse and the rise of edge computing will further contribute to this data growth.

Many organisations are looking at the cloud as the future of data management, as this article by Gartner states. As the cloud encompasses more and more data sources, it becomes ever more pivotal for data architects to have a deep understanding of metadata and schema; the end-to-end data lifecycle pipeline of ingestion, cleaning, storage, analysis, delivery and visualisation; APIs; cloud automation and orchestration; data streaming; AI/ML models; analytics; data storage and visualisation; as well as governance and security.

Data architects are hence building cloud computing into their strategies, including scaling, elasticity and decoupling, ensuring high availability and optimal performance through bursts and shutdowns while optimising cost at the same time. Data teams are also deploying automated and active cloud data management covering classification, validation and governance, with extensibility and decoupling. There is also careful consideration of security for data at rest and in motion, as well as seamless data integration and sharing.

It is also important to choose the right metadata strategy, considering options such as tiered apps with push-based services, pull-based ETL, and event-based metadata. It is equally worth stressing the importance of a robust data architecture and DataOps culture, covering the business and technology perspectives of the end-to-end data lifecycle, right from data sources and ingestion through metadata and active data management, streaming, storage, analytics and visualisation. Deploying elasticity, AI/ML and automation brings immense benefits to these cloud native strategies.
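
To illustrate the event-based metadata option in the simplest possible terms, the sketch below records schema and row counts at ingestion time rather than relying on a separate pull-based crawl. It assumes a CSV landing zone and uses an in-memory list as a stand-in for a real data catalogue.

```python
# Minimal sketch of event-based metadata capture in a DataOps pipeline:
# each time a dataset is ingested, record its schema and basic statistics
# so the catalogue stays current without a separate pull-based ETL crawl.
from datetime import datetime, timezone
import pandas as pd

CATALOGUE = []  # stand-in for a real metadata store / data catalogue API

def ingest_with_metadata(path: str, dataset_name: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    CATALOGUE.append({
        "dataset": dataset_name,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "rows": len(df),
        "schema": {col: str(dtype) for col, dtype in df.dtypes.items()},
    })
    return df
```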

Considering these aspects of data management, organisations are looking at ML- and API-powered data fabrics, along with data lakes, warehouses and layers, to manage this end-to-end data lifecycle by creating, maintaining and providing outputs to the consumers of the data, as this Gartner article on technology trends for 2022 highlights.

This article by McKinsey summarises the major pivot points in the data architecture ethos, which are fundamentally based on the cloud with containerisation and serverless data. These cover hybrid real-time and batch data processing; the shift from end-to-end COTS applications to modular, best-in-function/industry components; the move to APIs and decoupling; the shift from centralised data warehousing to domain-based architecture; and, lastly, the move from proprietary predefined data sets to data schemas that are light and flexible, especially the NoSQL family.
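
The last pivot, light and flexible schemas, is easiest to see with a document store. The sketch below, which assumes a locally reachable MongoDB instance and the pymongo driver (the database, collection and field names are hypothetical), stores two customer documents with different shapes side by side and still queries across both.

```python
# Minimal sketch of a "light and flexible schema": two customer documents
# with different fields stored side by side in a document database.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
customers = client["demo_db"]["customers"]

customers.insert_one({"customer_id": 1, "name": "Asha", "channels": ["web"]})
customers.insert_one({
    "customer_id": 2,
    "name": "Ben",
    "channels": ["app", "branch"],
    "kyc": {"status": "verified", "last_checked": "2022-09-01"},  # extra nested field
})

# Queries still work across both shapes; missing fields simply come back absent.
for doc in customers.find({"channels": "app"}):
    print(doc["name"], doc.get("kyc", {}).get("status", "n/a"))
```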

For BFSI, Telecoms and other industry verticals which need customer data to reside locally, CXOs have been deploying Hybrid Data Management environments, that leverage Cloud Data Management tools to also automate, orchestrate, and re-use the on-premise data, thus providing a unified data model and access interface to both cloud and on-premise datasets.

Applying automation and orchestration to data storage also ensures the prioritisation of processes, tasks and resources to balance speed, efficiency, usage and cost, along with eliminating security vulnerabilities. This is especially applicable to tasks such as provisioning and configuration, capacity management, workflows and data migration, resource optimisation, software updates, data protection and disaster recovery. This World Economic Forum report, published right before the pandemic, highlighted that conventional optical/magnetic storage systems will be unable to handle this phenomenon for more than a century. CIOs and leaders are hence leveraging automation and the cloud, Storage-as-a-Service (STaaS), decentralised blockchain-powered data storage and storage at the edge, besides alternatives to conventional electromagnetic/optical data storage mechanisms.
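
One routine example of storage automation is a lifecycle rule that tiers ageing objects to cheaper storage classes and expires old backups. The sketch below assumes AWS S3 via boto3 and a hypothetical bucket and prefix; other object stores offer comparable lifecycle policies.

```python
# Minimal sketch of automating one storage-management task mentioned above:
# a lifecycle rule that tiers ageing objects to cheaper storage and expires
# old backups after a year.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-archive",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},  # hypothetical prefix
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```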

What is the role of people and culture in this cloud powered data and infrastructure management ecosystem?

People, the talent pool and organisational culture play a pivotal part in successful FinOps, cloud native and cloud data management strategies. In this dynamic and uncertain world, it is of paramount importance to have uniformity, alignment and resonance between business KPIs and best practices for enterprise and data architecture, DevOps, engineering, finance and procurement. This environment of continuous evolution and optimisation can only be brought about by an ethos of communication, trust, change management and business-finance-IT alignment, which are as important as the cloud native strategies themselves and the architecture, DevOps, DataOps, security and other engineering talent pools.

The continuing trends of the Great Resignation, quiet quitting and moonlighting necessitate a combination of the best employee and vendor engagement strategies, a readily available talent pool of architects, analysts, engineers and other skill sets, as well as continuous upskilling.

Winding up?

The cloud has deeply impacted and revitalised infrastructure and data management in all aspects of the workplace. As per this Deloitte research, it is ideal to leverage an equal mix of people, tools and approaches to address cloud complexity, and to build a powerful, agile, elastic, secure and resilient virtual business infrastructure that derives maximum value from the cloud.

Cloud-centric digital infrastructure is a bedrock of the post-COVID world, aligning technology with business to support digital transformation, resilience and governance along with business outcomes, through a combination of operations, technology and deployment, as mentioned in this IDC paper. This matters all the more given the increasing complexity of today's environments spanning public cloud infrastructure, on-premises systems and the edge.

With continuing business uncertainty, competitiveness, customer, supplier and employee pressures and stringent IT budgets, organisations are looking at the Cloud to revitalise their Infrastructure and Data Management and gain maximum value.

See the article here:

Revitalising data and infrastructure management through cloud - ETCIO South East Asia

High-Performance Computing (HPC) Market is expected to generate a revenue of USD 65.12 Billion by 2030, Globally, at 7.20% CAGR: Verified Market…

The growing demand for high-efficiency computing across a range of industries, including financial, medical, research, government, and defense, as well as geological exploration and analysis, is a significant growth driver for the HPC Market.

JERSEY CITY, N.J., Oct. 17, 2022 /PRNewswire/ -- Verified Market Research recently published a report, "High-Performance Computing (HPC) Market" By Component (Solutions, Services), By Deployment Type (On-Premise, Cloud), By Server Price Band (USD 250,000-500,000 and Above, USD 100,000-250,000 and Below), By Application Area (Government And Defense, Education And Research), and By Geography.

According to the extensive research done by Verified Market Research experts, the High-Performance Computing (HPC) Market size was valued at USD 34.85 Billion in 2021 and is projected to reach USD 65.12 Billion by 2030, growing at a CAGR of 7.20% from 2023 to 2030.

Download PDF Brochure: https://www.verifiedmarketresearch.com/download-sample/?rid=6826

Browse in-depth TOC on "High-Performance Computing (HPC) Market"

202 Pages, 126 Tables, 37 Figures

Global High-Performance Computing (HPC) Market Overview

High-performance computing is the use of parallel processing to run large software programmes efficiently, consistently and quickly. It is a technique that applies a sizeable amount of computing power to deliver high-performance capabilities for solving problems in engineering, business and research. HPC systems refer to all types of servers and micro-servers used for highly computational or data-intensive applications; high-performance computing systems are generally those that can perform 10^12 floating-point operations per second (one teraflop) or more.
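
For a sense of scale, the back-of-the-envelope calculation below shows how the teraflop threshold relates to a node's theoretical peak (cores x clock x floating-point operations per cycle); the figures used are hypothetical and purely illustrative.

```python
# Back-of-the-envelope illustration of the teraflop (10^12 FLOPS) threshold:
# theoretical peak FLOPS = cores x clock rate x FLOPs per cycle.
# The figures below are hypothetical, purely to show the arithmetic.
def peak_flops(cores: int, clock_hz: float, flops_per_cycle: int) -> float:
    return cores * clock_hz * flops_per_cycle

node = peak_flops(cores=64, clock_hz=2.5e9, flops_per_cycle=16)
print(f"Theoretical peak: {node:.2e} FLOPS ({node / 1e12:.1f} teraflops)")
```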

One of the main factors influencing the growth of the High-Performance Computing (HPC) Market is the ability of HPC solutions to swiftly and precisely process massive volumes of data. The increasing demand for high-efficiency computing across a range of industries, including financial, medical, research, exploration and study of the earth's crust, government, and defense, is one of the primary growth factors for the high-performance computing (HPC) market.

The rising need for high precision and quick data processing in various industries is one of the major drivers of the High-Performance Computing (HPC) Market. The market will also grow significantly over the forecast period as a result of the growing popularity of cloud computing and government-led digitization initiatives.

The usage of HPC in cloud computing is also driving the worldwide High-Performance Computing (HPC) Market. Cloud computing platforms offer scalability, flexibility and availability, while cloud HPC adds low maintenance costs, adaptability and economies of scale.

Additionally, HPC in the cloud gives businesses that are new to high-end computing the chance to form a larger community, enabling them to benefit from low operating expenditure (OPEX) and overcome the challenges of power and cooling. As a result, the growing usage of HPC in the cloud is anticipated to significantly influence the High-Performance Computing (HPC) Market.

Key Developments

Partnerships, Collaborations, and Agreements

Mergers and Acquisitions

Product Launches and Product Expansions

Key Players

The major players in the market are Advanced Micro Devices Inc., Hewlett Packard Enterprise, Intel Corporation, International Business Machines (IBM) Corporation, NEC Corporation, Sugon Information Industry Co. Ltd, Fujitsu Ltd, Microsoft Corporation, Dell Technologies Inc., Dassault Systemes SE, Lenovo Group Ltd, Amazon Web Services, and NVIDIA Corporation.

Verified Market Research has segmented the Global High-Performance Computing (HPC) Market On the basis of Component, Deployment Type, Server Price Band, Application Area, and Geography.

Browse Related Reports:

Enterprise Quantum Computing Market By Component (Hardware, Software, Services), By Application (Optimization, Simulation And Data Modelling, Cyber Security), By Geography, And Forecast

Healthcare Cognitive Computing Market By Technology (Natural Language Processing, Machine Learning), By End-Use (Hospitals, Pharmaceuticals), By Geography, And Forecast

Cognitive Computing Market By Component (Natural Language Processing, Machine Learning, Automated Reasoning), By Deployment Model (On-Premise, Cloud), By Geography, And Forecast

Cloud Computing In Retail Banking Market By Product (Public Clouds, Private Clouds), By Application (Personal, Family, Small and Medium-Sized Enterprises)

Top 10 Edge Computing Companies supporting industries to become 100% self-reliant

Visualize the High-Performance Computing (HPC) Market using Verified Market Intelligence:

Verified Market Intelligence is our BI-enabled platform for narrative storytelling in this market. VMI offers in-depth forecasted trends and accurate insights on 20,000+ emerging and niche markets, helping you make critical revenue-impacting decisions for a brilliant future.

VMI provides a holistic overview and global competitive landscape with respect to region, country, segment, and key players of your market. Present your market report and findings with an inbuilt presentation feature, saving over 70% of your time and resources for investor, sales and marketing, R&D, and product development pitches. VMI enables data delivery in Excel and interactive PDF formats with 15+ key market indicators for your market.

About Us

Verified Market Research is a leading global research and consulting firm serving 5,000+ customers. Verified Market Research provides advanced analytical research solutions while offering information-enriched research studies. We offer insight into strategic and growth analyses, the data necessary to achieve corporate goals, and critical revenue decisions.

Our 250 analysts and SMEs offer a high level of expertise in data collection and governance, and use industrial techniques to collect and analyze data on more than 15,000 high-impact and niche markets. Our analysts are trained to combine modern data collection techniques, superior research methodology, expertise and years of collective experience to produce informative and accurate research.

We study 14+ categories, including Semiconductor & Electronics, Chemicals, Advanced Materials, Aerospace & Defense, Energy & Power, Healthcare, Pharmaceuticals, Automotive & Transportation, Information & Communication Technology, Software & Services, Information Security, Mining, Minerals & Metals, Building & Construction, Agriculture and Medical Devices, across more than 100 countries.

Contact Us

Mr. Edwyne Fernandes
Verified Market Research
US: +1 (650)-781-4080
UK: +44 (753)-715-0008
APAC: +61 (488)-85-9400
US Toll Free: +1 (800)-782-1768
Email: [emailprotected]
Web: https://www.verifiedmarketresearch.com/
Follow Us: LinkedIn | Twitter

Logo: https://mma.prnewswire.com/media/1315349/Verified_Market_Research_Logo.jpg

SOURCE Verified Market Research

Continued here:

High-Performance Computing (HPC) Market is expected to generate a revenue of USD 65.12 Billion by 2030, Globally, at 7.20% CAGR: Verified Market...

Fake News: What Laws Are Designed to Protect | LegalZoom

Just a few years ago, "fake news" was something you'd find in supermarket tabloids.

Now, though, the line between "fake news" and "real news" can seem awfully blurry. "Fake news" has been blamed for everything from swaying the U.S. presidential election to prompting a man to open fire at a Washington, DC pizza parlor.

A "real news" outlet, such as a major newspaper or television network, might make mistakes, but it doesn't distribute false information on purpose. Reporters and editors who report real news have a code of ethics that includes using reputable sources, checking facts, and getting comments from people on both sides of an issue.

Fake news outlets, on the other hand, are designed to deceive. They might have URLs that sound like legitimate news organizations, and they might even copy other news sites' design. They may invent "news" stories or republish stories from other internet sources without checking to see if they are true. Their purpose is usually to get "clicks" and generate ad revenue or to promote their owners' political viewpoint.

Some "fake news" is published on satire sites that are usually clearly labeled as satire. However, when people share articles without reading beyond the headline, a story that was supposed to be a parody can end up being taken as the truth.

The First Amendment protects Americans' rights to freely exchange ideas, even false or controversial ones. If the government passed laws outlawing fake news, that would be censorship, which would also have a chilling effect on real news that people disagree with.

The main legal recourse against fake news is a defamation lawsuit. You can sue someone for defamation if they published a false fact about you and you suffered some sort of damage as a result, such as a lost job, a decline in revenue, or a tarnished reputation. If you are an ordinary, private person, you also must show that the news outlet was negligent (or careless).

But most fake news relates to public figures, who can only win a defamation lawsuit by showing that the news outlet acted with "actual malice." This means that the author must have known the story was false or must have had a "reckless disregard" for whether it was true or not. It's usually a difficult standard to meet, but defamation suits may become more common as concern about fake news grows.

For example, Chobani yogurt recently filed a defamation suit against conspiracy theorist Alex Jones and his site, Infowars, over a video and tweet headlined "Idaho Yogurt Maker Caught Importing Migrant Rapists." Jones' tweet led to a boycott of the popular yogurt brand.

Defamation liability isn't limited to the person who first published a fake storyit extends to anyone who republishes it on a website or blog. Melania Trump, for example, recently settled defamation lawsuits against a Maryland blogger, who published an article in August 2016, and the online Daily Mail that published a similar false article later that month.

Fake news can be hard to identify, with some fake news sites looking and sounding almost exactly like well-known media outlets. Here are some tips for figuring out what's fake and what's real:

In the end, the law can't protect you from fake news. Get your news from sources that you know are reputable, do your research, and read beyond the headlines. And, if you find out an article is fake, don't share it. That's the surest way to stop a false story from spreading.

Excerpt from:

Fake News: What Laws Are Designed to Protect | LegalZoom

Lies, propaganda and fake news: A challenge for our age

For Rohit Chandra, vice president of engineering at Yahoo, more humans in the loop would help. "I see a need in the market to develop standards," he says. "We can't fact-check every story, but there must be enough eyes on the content that we know the quality bar stays high."

Google is also helping fact-checking organisations like Full Fact, which is developing new technologies that can identify and even correct false claims. Full Fact is creating an automated fact-checker that will monitor claims made on TV, in newspapers, in parliament or on the internet.

Initially it will be targeting claims that have already been fact-checked by humans, sending out corrections automatically in an attempt to shut down rumours before they get started. As artificial intelligence gets smarter, the system will also do some fact-checking of its own.

"For a claim like 'crime is rising', it is relatively easy for a computer to check," says Moy. "We know where to get the crime figures and we can write an algorithm that can make a judgement about whether crime is rising. We did a demonstration project last summer to prove we can automate the checking of claims like that. The challenge is going to be writing tools that can check specific types of claims, but over time it will become more powerful."
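
As a rough illustration of the kind of check Moy describes, the sketch below takes yearly figures and returns a verdict on the claim "crime is rising". The numbers and the trend test are made up for illustration; a real checker would pull official statistics and apply a more careful definition of "rising".

```python
# Minimal sketch of an automated check for the claim "crime is rising":
# compare the first and last years of official figures and report a verdict.
# The figures below are hypothetical, purely for illustration.
def is_rising(yearly_figures: dict[int, float]) -> bool:
    years = sorted(yearly_figures)
    first, last = yearly_figures[years[0]], yearly_figures[years[-1]]
    return last > first

crime_rate_per_1000 = {2018: 83.5, 2019: 81.2, 2020: 77.9, 2021: 79.4}
claim = "crime is rising"
verdict = "supported" if is_rising(crime_rate_per_1000) else "not supported"
print(f'Claim "{claim}" is {verdict} by the figures used here.')
```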

What would Watson do?

It is an approach being attempted by a number of different groups around the world. Researchers at the University of Mississippi and Indiana University are both working on an automated fact-checking system. One of the world's most advanced AIs has also had a crack at tackling this problem. IBM has spent several years working on ways that its Watson AI could help internet users distinguish fact from fiction. They built a fact-checker app that could sit in a browser and use Watson's language skills to scan the page and give a percentage likelihood of whether it was true. But according to Ben Fletcher, senior software engineer at IBM Watson Research who built the system, it was unsuccessful in tests - but not because it couldn't spot a lie.

"We got a lot of feedback that people did not want to be told what was true or not," he says. "At the heart of what they want was actually the ability to see all sides and make the decision for themselves. A major issue most people face without knowing it is the bubble they live in. If they were shown views outside that bubble they would be much more open to talking about them."

See the original post:

Lies, propaganda and fake news: A challenge for our age