Thanks To Renewables And Machine Learning, Google Now Forecasts The Wind – Forbes

(Photo by Vitaly Nevar/TASS via Getty Images)

Wind farms have traditionally made less money for the electricity they produce because they have been unable to predict how windy it will be tomorrow.

"The way a lot of power markets work is you have to schedule your assets a day ahead," said Michael Terrell, the head of energy market strategy at Google. "And you tend to get compensated higher when you do that than if you sell into the market in real time."

"Well, how do variable assets like wind schedule a day ahead when you don't know if the wind is going to blow?" Terrell asked. "And how can you actually reserve your place in line?"

"We're not getting the full benefit and the full value of that power."

Here's how: Google and the Google-owned artificial intelligence firm DeepMind combined weather data with power data from 700 megawatts of wind energy that Google sources in the central United States. Using machine learning, they have been able to better predict wind production, better predict electricity supply and demand, and, as a result, reduce operating costs.

"What we've been doing is working in partnership with the DeepMind team to use machine learning to take the weather data that's available publicly, actually forecast what we think the wind production will be the next day, and bid that wind into the day-ahead markets," Terrell said in a recent seminar hosted by the Stanford Precourt Institute for Energy. Stanford University posted video of the seminar last week.

The result has been a 20 percent increase in revenue for wind farms, Terrell said.

The Department of Energy listed improved wind forecasting as a first priority in its 2015 Wind Vision report, largely to improve reliability: "Improve Wind Resource Characterization," the report said at the top of its list of goals. "Collect data and develop models to improve wind forecasting at multiple temporal scales (e.g., minutes, hours, days, months, years)."

Google's goal has been more sweeping: to scrub carbon entirely from its energy portfolio, which consumes as much power as two San Franciscos.

Google achieved an initial milestone by matching its annual energy use with its annual renewable-energy procurement, Terrell said. But the company has not been carbon-free in every location at every hour, which is now its new goal: what Terrell calls its "24x7" carbon-free goal.

"We're really starting to turn our efforts in this direction, and we're finding that it's not something that's easy to do. It's arguably a moon shot, especially in places where the renewable resources of today are not as cost effective as they are in other places."

The scientists at London-based DeepMind have demonstrated that artificial intelligence can help by increasing the market viability of renewables at Google and beyond.

"Our hope is that this kind of machine learning approach can strengthen the business case for wind power and drive further adoption of carbon-free energy on electric grids worldwide," said DeepMind program manager Sims Witherspoon and Google software engineer Carl Elkin. In a DeepMind blog post, they outline how they boosted profits for Google's wind farms in the Southwest Power Pool, an energy market that stretches across the plains from the Canadian border to north Texas:

"Using a neural network trained on widely available weather forecasts and historical turbine data, we configured the DeepMind system to predict wind-power output 36 hours ahead of actual generation. Based on these predictions, our model recommends how to make optimal hourly delivery commitments to the power grid a full day in advance."
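To make the workflow concrete, here is a rough illustration only (not DeepMind's actual model): a regressor is trained on historical weather-forecast features and measured turbine output, then a day-ahead forecast is turned into hourly market commitments. The file name and column names are hypothetical.

```python
# Minimal sketch of day-ahead wind forecasting, assuming a hypothetical CSV
# of historical weather forecasts and measured turbine output. This is an
# illustration of the general approach, not DeepMind's actual system.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("wind_history.csv")  # hypothetical historical data
features = ["forecast_wind_speed", "forecast_wind_dir", "hour_of_day"]
X, y = df[features].values, df["actual_output_mw"].values

# Keep the split chronological: older data for training, recent for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# Day-ahead "bid": predict output for each of tomorrow's 24 hours from the
# latest weather forecast, then commit those hourly quantities to the market.
tomorrow = df[features].tail(24).values  # stand-in for tomorrow's forecast
print(model.predict(tomorrow).round(1))
```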

The DeepMind system predicts wind-power output 36 hours in advance, allowing power producers to make more lucrative advance bids to supply power to the grid.

See the rest here:
Thanks To Renewables And Machine Learning, Google Now Forecasts The Wind - Forbes

Microsoft throws weight behind machine learning hacking competition – The Daily Swig

Emma Woollacott, 02 June 2020 at 13:14 UTC. Updated: 02 June 2020 at 14:48 UTC

ML security evasion event is based on a similar competition held at DEF CON 27 last summer

The defensive capabilities of machine learning (ML) systems will be stretched to the limit at a Microsoft security event this summer.

Along with various industry partners, the company is sponsoring a Machine Learning Security Evasion Competition involving both ML experts and cybersecurity professionals.

The event is based on a similar competition held at AI Village at DEF CON 27 last summer, where contestants took part in a white-box attack against static malware machine learning models.

Several participants discovered approaches that completely and simultaneously bypassed three different machine learning anti-malware models.

The 2020 Machine Learning Security Evasion Competition "is similarly designed to surface countermeasures to adversarial behavior and raise awareness about the variety of ways ML systems may be evaded by malware, in order to better defend against these techniques," says Hyrum Anderson, Microsoft's principal architect for enterprise protection and detection.

The competition will consist of two different challenges. A Defender Challenge will run from June 15 through July 23, with the aim of identifying new defenses to counter cyber-attacks.

The winning defensive technique "will need to be able to detect real-world malware with moderate false-positive rates," says the team.

Next, an Attacker Challenge running from August 6 through September 18 provides a black-box threat model.

Participants will be given API access to hosted anti-malware models, including those developed in the Defender Challenge.


Contestants will attempt to evade defenses using "hard-label" query results, with samples from final submissions detonated in a sandbox to make sure they're still functional.

"The final ranking will depend on the total number of API queries required by a contestant, as well as evasion rates," says the team.
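For readers unfamiliar with hard-label attacks, the sketch below shows the general shape of the loop the Attacker Challenge implies: query a hosted model that returns only a malicious/benign verdict, mutate the sample, and count queries (since the final ranking weighs the query budget). The endpoint URL, response format, and mutation function are hypothetical, not the real MLSec API.

```python
# Hedged sketch of a hard-label evasion loop. Everything service-specific
# here (URL, JSON fields, mutation) is invented for illustration.
import requests

API = "https://example-mlsec-host/classify"  # hypothetical endpoint

def is_detected(sample_bytes: bytes) -> bool:
    # Hard-label result: the service returns only a malicious/benign verdict.
    resp = requests.post(API, data=sample_bytes, timeout=30)
    return resp.json()["label"] == "malicious"

def mutate(sample_bytes: bytes, step: int) -> bytes:
    # Hypothetical functionality-preserving transform (e.g. appending
    # benign-looking overlay bytes). Real entries must stay executable,
    # which is why submissions are detonated in a sandbox.
    return sample_bytes + b"\x00" * step

queries = 0
sample = open("sample.bin", "rb").read()  # hypothetical input binary
for step in range(1, 100):
    queries += 1
    if not is_detected(mutate(sample, step)):
        print(f"evaded after {queries} queries")
        break
```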

Each challenge will net the winner $2,500 in Azure credits, with the runner-up getting $500 in Azure credits.

To win, researchers must publish their detection or evasion strategies. Individuals or teams can register on the MLSec website.

"Companies investing heavily in machine learning are being subjected to various degrees of adversarial behavior, and most organizations are not well-positioned to adapt," says Anderson.

"It is our goal that through our internal research and external partnerships and engagements, including this competition, we'll collectively begin to change that."


See original here:
Microsoft throws weight behind machine learning hacking competition - The Daily Swig

Global trade impact of the Coronavirus Machine Learning as a Service Market Report 2020-2026 Research Insights 2020 Global Industry Outlook Shared in…

The Machine Learning as a Service market research report adds a worldwide coronavirus (COVID-19) impact analysis to its coverage of market size (value, production and consumption), and splits the breakdown (data status 2014-2019 and a six-year forecast from 2020 to 2026) by region, manufacturer, type and end user/application. The report covers the worldwide top manufacturers (Amazon, Oracle Corporation, IBM, Microsoft Corporation, Google Inc., Salesforce.com, Tencent, Alibaba, UCloud, Baidu, Rackspace, SAP AG, CenturyLink Inc., CSC (Computer Sciences Corporation), Heroku, Clustrix, Xeround), including information such as capacity, production, price, sales, revenue, shipments, gross, gross profit, imports, exports, interview records and business distribution; these data help the consumer understand the Machine Learning as a Service market's competitors better. It covers regional segment analysis, type, application, major manufacturers, Machine Learning as a Service industry chain analysis, competitive insights and macroeconomic analysis.

Get Free Sample PDF (including COVID-19 Impact Analysis, full TOC, Tables and Figures) of Machine Learning as a Service Market Report @ https://www.researchmoz.us/enquiry.php?type=S&repid=2302143

Machine Learning as a Service Market report offers comprehensive assessment of: 1) Executive Summary, 2) Market Overview, 3) Key Market Trends, 4) Key Success Factors, 5) Machine Learning as a Service Market Demand/Consumption (Value or Size in US$ Mn) Analysis, 6) Machine Learning as a Service Market Background, 7) Machine Learning as a Service Industry Analysis & Forecast 2018-2023 by Type, Application and Region, 8) Machine Learning as a Service Market Structure Analysis, 9) Competition Landscape, 10) Company Share and Company Profiles, 11) Assumptions and Acronyms and 12) Research Methodology.

Scope of Machine Learning as a Service Market: Machine learning is a field of artificial intelligence that uses statistical techniques to give computer systems the ability to learn (e.g., progressively improve performance on a specific task) from data, without being explicitly programmed.

On the basis of end users/applications, this report focuses on the status and outlook for major applications/end users, shipments, revenue (Million USD), price, and market share and growth rate for each application.

Personal
Business

On the basis of product type, this report displays the shipments, revenue (Million USD), price, and market share and growth rate of each type.

Private clouds
Public clouds
Hybrid cloud

Do You Have Any Query Or Specific Requirement? Ask Our Industry Expert @ https://www.researchmoz.us/enquiry.php?type=E&repid=2302143

Geographically, the report includes research on production, consumption, revenue, Machine Learning as a Service market share and growth rate, and forecast (2017-2022) for the following regions:

Important Machine Learning as a Service Market Data Available In This Report:

Strategic Recommendations, Forecast Growth Areas of the Machine Learning as a Service Market.

Challenges for the New Entrants, Trends, Market Drivers.

Emerging Opportunities, Competitive Landscape, Revenue Share of Main Manufacturers.

This Report Discusses the Machine Learning as a Service Market Summary; the Market Scope Gives a Brief Outline of the Machine Learning as a Service Market.

Key Performing Regions (APAC, EMEA, Americas) Along With Their Major Countries Are Detailed In This Report.

Company Profiles, Product Analysis, Marketing Strategies, Emerging Market Segments and Comprehensive Analysis of Machine Learning as a Service Market.

Machine Learning as a Service Market Share and Year-Over-Year Growth of Key Players in Promising Regions.

What is the (North America, South America, Europe, Africa, Middle East, Asia, China, Japan) production, production value, consumption, consumption value, import and export of the Machine Learning as a Service market?

To Get Discount of Machine Learning as a Service Market: https://www.researchmoz.us/enquiry.php?type=D&repid=2302143

Contact:

ResearchMoz
Mr. Rohit Bhisey
Tel: +1-518-621-2074
USA-Canada Toll Free: 866-997-4948
Email: [email protected]

Browse More Reports Visit @ https://www.mytradeinsight.blogspot.com/

Follow this link:
Global trade impact of the Coronavirus Machine Learning as a Service Market Report 2020-2026 Research Insights 2020 Global Industry Outlook Shared in...

Machine Learning and How it Is Transforming Transportation – IT Business Net

If you are in any way connected to the computer world, you have heard the term machine learning. It is an important concept, but it has been used as a buzzword so much that it is starting to lose its meaning. That said, machine learning is one of the most important developments in the computing world, and if it can be utilized to its full potential, it is set to revolutionize the way we use computers. Because of its versatility and flexibility, machine learning can be used in almost any industry where tasks can be automated. These are industries where machines can learn to think like humans and perform at the same level as, or even better than, humans. One of these areas is transportation.

When many people think about artificial intelligence (AI) and machine learning as it ties into the automotive industry, they think about driverless cars and fleets of cars communicating with each other in real-time. While this is one part of it, there is so much more. Machine learning can be used to:

When doing their research, scientists and computer programmers are starting to look at machine learning at a higher level and using it to revolutionize the engine and to help decision-makers make the best decisions about transportation systems.

In the past, computer programmers had to write code that told the computer what to do in specific situations. This code would get more complex and unmaintainable as computer programmers tried to plan for, and provide code for, every case their program would encounter. Now, programmers can write the base code and use neural networks to train computers on what to do in all these different scenarios. Because computers can crunch data faster than we can, they are able to discover cases we never could.

Now, computer programmers feed machine learning algorithms using:

The machines are then asked to find a relationship between the two. Once that is done, the data produced is used to create models that are used to make predictions, as the sketch below illustrates.
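As a toy illustration of that shift, the snippet below fits a model to example input-output pairs instead of hand-coding a rule for each scenario. The data and feature names are invented.

```python
# Toy illustration: learn the input-output relationship from examples
# rather than hand-writing a rule for every traffic scenario.
import numpy as np
from sklearn.linear_model import LinearRegression

# Inputs: [hour_of_day, rain_mm]; target: observed travel time in minutes.
X = np.array([[8, 0], [8, 5], [13, 0], [18, 0], [18, 7]])
y = np.array([34.0, 41.0, 22.0, 38.0, 49.0])

model = LinearRegression().fit(X, y)   # "find the relationship between the two"
print(model.predict([[17, 3]]))        # predict an unseen scenario
```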

Researchers are using machine learning to explore how transportation systems are designed. This helps them understand what issues are contained therein and how they affect entire transportation systems.

Their research will help transportation departments:

Understanding the complexity of transportation systems is almost impossible unless researchers comb through a huge amount of data. Machine learning can help them not only decipher this data, but also help them find trends and relationships and see how both of these affect transportation systems.

The insights that come out of such explorations will help:

These insights also help with decision making as they can help people and autonomous vehicles make better decisions, help coordinate emergency responses and help planners minimize the impact of the disruption of a transportation system in a given area.

Machine learning is also being used to optimize engine designs and the processes used to produce these engines. For example, researchers have been able to develop new combustion models using machine learning. These models have reduced the amount of time it takes to complete engine combustion simulations.

Using neural networks, researchers have also been able to model complex properties that were previously not available. Now, scientists can create complex reaction pathways to see how combustion happens inside new engine models. Because of this, researchers and automotive manufacturers are able to better optimize their engines.

In the past, researchers were forced to reduce the complexity of their combustion models. This is because they did not have powerful tools to help them carry out complex simulations. This led to data that was not as accurate as it should have been. All this has now changed with the advancement of machine learning, deep learning, AI and neural networks.

Computers that run machine learning models are very good at making predictions using past data. This data can be used to optimize route planning for both drivers and fleet managers. Machine learning can help these parties understand:

Once drivers and fleet operators understand all these things, they can choose cars and routes that save fuel while saving time and maximizing transportation efficiency.
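A minimal sketch of that idea, using the open-source networkx library: predicted travel times (hard-coded here as stand-ins for a model's output) become edge weights, and the planner picks the cheapest route. The road graph is invented.

```python
# Route planning over predicted travel times: the prediction step is
# replaced by hard-coded edge weights for illustration.
import networkx as nx

G = nx.DiGraph()
# Edges carry predicted travel time in minutes (stand-in for model output).
G.add_edge("depot", "A", minutes=12)
G.add_edge("depot", "B", minutes=9)
G.add_edge("A", "customer", minutes=7)
G.add_edge("B", "customer", minutes=14)

route = nx.shortest_path(G, "depot", "customer", weight="minutes")
print(route)  # ['depot', 'A', 'customer'] -- 19 min beats 23 min via B
```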

The only way to understand what is going to happen in the future is to make predictions that are as accurate as possible. This has been enabled by the use of machine learning. Machine learning is being used to predict how transportation systems will look in the future. Researchers are doing this with the aim of predicting the impact of the transportation system on the world around us as it continues to grow and how this growing transportation system will impact energy needs.

Researchers are forced to model their predictions using:

Using these predictions, researchers can see how different technologies will impact transportation systems of the future. This allows them to focus on the technologies that will have the most impact.

Machine learning has brought us new modes of transport. These include autonomous cars, driverless shuttles and more. You can click here to learn more about how future transportation is likely to look. Perhaps the most common mode of transportation impacted by machine learning is autonomous cars.

Autonomous cars are fitted with computers that run different scenarios as they drive or are driven around. This computer makes it possible for this car to identify:

All this data is used to identify the safest route to follow to avoid collisions and keep transportation systems as safe as possible. As it stands, these cars need a human to always be behind the wheel in case of an emergency. As this technology matures and the computers in autonomous cars become more powerful, we will have cars that can drive themselves. The possibilities are both exciting and endless.

Perhaps we will have other autonomous modes of transportation like driverless trucks and autonomous airplanes. At this point, we can only speculate.

It is almost impossible to talk about the future of transportation without talking about 5G technology. 5G is the fifth generation of mobile communication and it comes with so many advantages. The most important of these are:

As we look into the future of driverless fleets of cars, it becomes clear that these cars need some way to communicate with each other. This is for purposes like overtaking, turning at junctions, giving right of way and more.

Ideally, we want these cars to communicate in real-time or as close to it as we can get. With low latency times and fast speeds, 5G stands as the best option for this purpose. Of course, communication technology will continue to evolve and we might see better speeds and lower latency in the future. That said, we already have something we can use to enable fleets of driverless cars.

Machine learning is a very complex topic, with both upsides and downsides, because we are just starting to see its potential. That said, there are some upsides that we are already seeing:

Machine learning has some downsides too. One of the biggest is job loss. While machine learning creates jobs in some sectors, it will lead to massive job losses in the transportation sector. Just think about all the drivers who will be left without a job if we switch to driverless cars. All these taxi and long-haul drivers will have to find new jobs.

There is no denying that machine learning is here and it will revolutionize the transportation sector. Its impacts on the reduction of fuel and the time it takes to get from one place to another are touted as its biggest achievements, as is the development of fuel-efficient engines, something that will have a massive positive impact on the environment.

View post:
Machine Learning and How it Is Transforming Transportation - IT Business Net

Yale Researchers Use Single-Cell Analysis and Machine Learning to Identify Major COVID-19 Target – HospiMedica

Image: The Respiratory Epithelium (Photo courtesy of Wikimedia Commons)

In the study, the scientists identified ciliated cells as the major target of SARS-CoV-2 infection. The bronchial epithelium acts as a protective barrier against allergens and pathogens. Cilia remove mucus and other particles from the respiratory tract. Their findings offer insight into how the virus causes disease. The scientists infected HBECs (human bronchial epithelial cells) in an air-liquid interface with SARS-CoV-2. Over a period of three days, they used single-cell RNA sequencing to identify signatures of infection dynamics, such as the number of infected cells across cell types and whether SARS-CoV-2 activated an immune response in infected cells.

The scientists utilized advanced algorithms to develop working hypotheses and used electron microscopy to learn about the structural basis of the virus and target cells. These observations provide insights into the host-virus interaction and measure SARS-CoV-2 cell tropism, or the ability of the virus to infect different cell types, as identified by the algorithms. After three days, thousands of cultured cells became infected. The scientists analyzed data from the infected cells along with neighboring bystander cells. They observed that ciliated cells were 83% of the infected cells. These cells were the first and primary source of infection throughout the study. The virus also targeted other epithelial cell types, including basal and club cells. The goblet, neuroendocrine and tuft cells, and ionocytes were less likely to become infected.

The gene signatures revealed an innate immune response associated with a protein called Interleukin 6 (IL-6). The analysis also showed a shift in the polyadenylated viral transcripts. Lastly, the (uninfected) bystander cells also showed an immune response, likely due to signals from the infected cells. Pulling from tens of thousands of genes, the algorithms locate the genetic differences between infected and non-infected cells. In the next phase of this study, the scientists will examine the severity of SARS-CoV-2 compared to other types of coronaviruses, and conduct tests in animal models.
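As a hedged sketch of this kind of single-cell analysis (not the study's actual pipeline), the snippet below uses the open-source scanpy library to rank genes that differ between infected and bystander cells. The data file and the 'status' label column are hypothetical stand-ins.

```python
# Illustrative differential-expression workflow with scanpy: normalize,
# log-transform, then rank genes between two labeled cell groups.
import scanpy as sc

adata = sc.read_h5ad("hbec_sarscov2.h5ad")   # hypothetical dataset
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# 'status' column assumed to hold 'infected' / 'bystander' labels per cell.
sc.tl.rank_genes_groups(adata, groupby="status", method="wilcoxon")
print(adata.uns["rank_genes_groups"]["names"][:10])  # top differential genes
```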

"Machine learning allows us to generate hypotheses. It's a different way of doing science. We go in with as few hypotheses as possible, measure everything we can measure, and the algorithms present the hypothesis to us," said senior author David van Dijk, PhD, an assistant professor of medicine in the Section of Cardiovascular Medicine and Computer Science.

Related Links: Yale School of Medicine

Read the rest here:
Yale Researchers Use Single-Cell Analysis and Machine Learning to Identify Major COVID-19 Target - HospiMedica

Machine Learning as a Service Market Benefits, Forthcoming Developments, Business Opportunities & Future Investments to 2027 – 3rd Watch News

Reports published in Market Research Inc for the Machine Learning as a Service market are spread out over several pages and provide the latest industry data, market future trends, enabling products and end users to drive revenue growth and profitability. Industry reports list and study key competitors and provide strategic industry analysis of key factors affecting market dynamics. This report begins with an overview of the Machine Learning as a Service market and follows it throughout its development. It provides a comprehensive analysis of all regional and major player segments that provide insight into current market conditions and future market opportunities, along with drivers, trend segments, consumer behavior, price factors and market performance and estimates over the forecast period.

Request a PDF copy of this report at https://www.marketresearchinc.com/request-sample.php?id=16701

Key Strategic Manufacturers: Microsoft (Washington, US), Amazon Web Services (Washington, US), Hewlett Packard Enterprise (California, US), Google Inc.

The report gives a complete insight into this industry, consisting of the qualitative and quantitative analysis provided for this market, along with prime development trends, competitive analysis, and vital factors that are predominant in the Machine Learning as a Service Market.

The report also targets local markets and key players who have adopted important strategies for business development. The data in the report is presented in statistical form to help you understand the mechanics. The Machine Learning as a Service market report gathers thorough information from proven research methodologies and dedicated sources in many industries.

Avail 40% Discount on this report at https://www.marketresearchinc.com/ask-for-discount.php?id=16701

Key Objectives of Machine Learning as a Service Market Report:
Study of the annual revenues and market developments of the major players that supply Machine Learning as a Service
Analysis of the demand for Machine Learning as a Service by component
Assessment of future trends and growth of architecture in the Machine Learning as a Service market
Assessment of the Machine Learning as a Service market with respect to the type of application
Study of the market trends in various regions and countries, by component, of the Machine Learning as a Service market
Study of contracts and developments related to the Machine Learning as a Service market by key players across different regions
Finalization of overall market sizes by triangulating the supply-side data, which includes product developments, supply chain, and annual revenues of companies supplying Machine Learning as a Service across the globe.

Furthermore, the years considered for the study are as follows:

Historical year 2015-2019

Base year 2019

Forecast period 2020 to 2026

Table of Contents:

Machine Learning as a Service Market Research Report
Chapter 1: Industry Overview
Chapter 2: Analysis of Revenue by Classifications
Chapter 3: Analysis of Revenue by Regions and Applications
Chapter 6: Analysis of Market Revenue Market Status
Chapter 4: Analysis of Industry Key Manufacturers
Chapter 5: Marketing Trader or Distributor Analysis of Market
Chapter 6: Development Trend of Machine Learning as a Service market

Continue for TOC

If You Have Any Query, Ask Our Experts: https://www.marketresearchinc.com/enquiry-before-buying.php?id=16701

About Us

Market Research Inc is farsighted in its view and covers massive ground in global research. Local or global, we keep a close check on both markets. Trends and concurrent assessments sometimes overlap and influence each other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product/service become the best it can be with our informed approach.

Contact Us

Market Research Inc

Kevin

51 Yerba Buena Lane, Ground Suite,

Inner Sunset San Francisco, CA 94103, USA

Call Us: +1 (628) 225-1818

Write to Us: [email protected]

https://www.marketresearchinc.com

Read the rest here:
Machine Learning as a Service Market Benefits, Forthcoming Developments, Business Opportunities & Future Investments to 2027 - 3rd Watch News

InterDigital, Blacknut, and Nvidia Unveil World’s First Cloud Gaming Solution With AI-Enabled User Interface – GlobeNewswire

WILMINGTON, Del., June 03, 2020 (GLOBE NEWSWIRE) -- InterDigital, Inc. (NASDAQ:IDCC), a mobile and video technology research and development company, today introduced the world's first cloud gaming solution with an AI and machine learning-enabled user interface, presented in collaborative partnership with cloud gaming trailblazer Blacknut and in cooperation with GPU pioneer Nvidia. The tripartite collaboration represents the first time that an AI and machine learning-driven user interface is utilized, wearable-free, with a live cloud gaming solution. The technology demonstrates the incredible potential of integrating localized and far-Edge enabled AI capabilities into home gaming experiences.

The AI and machine learning-enabled user interface is connected to a cloud gaming solution that operates without joysticks or wearable accessories. The demonstration leverages unique technologies, including real-time video analysis on home and local edge devices, dynamic adaptation to available compute resources, and shared AI models managed through an in-home AI hub, to implement a cutting-edge gaming experience.

In the demonstration, users play a first-person view snowboarding game streamed by Blacknut and displayed on a commercial television. Users do not require a joystick or handheld controller to play the game; instead, their movements and interactions are tracked by AI processing of the live video capture of the user's movements. The user's presence is detected using an AI model, and his or her body movements are matched with the snowboarder in the game, in real time, using InterDigital's low-latency Edge AI running on a local AI accelerator. The groundbreaking demo addresses the challenges of ensuring the lowest possible end-to-end latency from gesture capture to game action, while accelerating inference of concurrent AI models serving multiple applications to deliver an interactive and more seamless gaming experience. This demonstration enables AI and machine learning tasks to be completed locally, revolutionizing our current implementation of cloud gaming solutions.
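As a rough sketch of how a camera-only control loop of this kind can work (this is not InterDigital's implementation), the snippet below uses the open-source MediaPipe pose model to turn body lean into a left/right steering signal. The thresholds and the mapping to game input are invented for illustration.

```python
# Camera-driven control sketch: estimate the player's pose per frame and
# convert body lean into a steering command, no wearables required.
import cv2
import mediapipe as mp

P = mp.solutions.pose.PoseLandmark
pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture(0)                  # live video of the player

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        # Lean = horizontal offset of the head from the hip midpoint,
        # in normalized image coordinates.
        lean = lm[P.NOSE].x - (lm[P.LEFT_HIP].x + lm[P.RIGHT_HIP].x) / 2
        steer = "LEFT" if lean < -0.05 else "RIGHT" if lean > 0.05 else "CENTER"
        print(steer)                       # stand-in for sending game input
cap.release()
```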

"We are so proud of the work of this demonstration, as it displays the real potential of AI and edge computing, highlights the power of industry collaboration, and helps blaze a trail for new cloud gaming capabilities. Of course, such a success would not have been possible without the utmost involvement of all the teams from InterDigital, Blacknut, and Nvidia, and I would like to take the opportunity to credit and thank their outstanding work," said Laurent Depersin, Director of the Home Experience Lab at InterDigital.

The far-Edge AI and machine learning technologies put forth by InterDigital bring a plethora of new capabilities to the cloud gaming experience. Far-Edge AI enables low-latency analysis to deliver an interactive and entertaining experience, reduces cloud computing costs by leveraging available computing resources, and saves significant bandwidth by prioritizing up-linking. In addition, far-Edge AI in edge cloud architecture offers an important solution for privacy concerns by localizing computing and supports a variety of new and emerging vertical applications beyond gaming, including smart home and security, remote healthcare, and robotics.

Cloud gaming with far-Edge AI leverages artificial intelligence and localized Edge computing to showcase the ways an interactive television or gaming experience can be enhanced by the localized AI analysis of a camera's video stream. Ongoing research in the real-time processing of user-generated data will drive new innovations and vertical applications in the home, from cloud gaming to remote medical care, and those innovations will be enhanced by the ability to execute artificial intelligence models under low latency conditions.

"Blacknut's mission is to bring to our customers unlimited hours of gaming fun in the simplest manner," said Pascal Manchon, CTO at Blacknut. "Our unique cloud gaming solution allows us to free games from dedicated consoles or hardware. Using AI and machine learning to transform the human body itself into a full-fledged game controller was challenging, but Blacknut's close collaboration with InterDigital and Nvidia led to outstanding performance. And yes, it is addictive and fun to play this way!"

Cloud gaming is an exciting industry use case that leverages innovations in network architecture, video streaming and content delivery to shape the future of interactive gaming and entertainment. This world's first cloud gaming solution, and the broader exploration of AI-enabled cloud solutions, would not be possible without a commitment to collaboration with industry leaders and partners.

To learn more about the demonstration of the world's first cloud gaming solution with AI-enabled user interface, please click here.

About InterDigital

InterDigital develops mobile and video technologies that are at the core of devices, networks, and services worldwide. We solve many of the industry's most critical and complex technical challenges, inventing solutions for more efficient broadband networks, better video delivery, and richer multimedia experiences years ahead of market deployment. InterDigital has licenses and strategic relationships with many of the world's leading technology companies. Founded in 1972, InterDigital is listed on NASDAQ and is included in the S&P MidCap 400 index.

InterDigital is a registered trademark of InterDigital, Inc.

For more information, visit: http://www.interdigital.com.

About Blacknut

Blacknut was founded in 2016 by Olivier Avaro (CEO) and is headquartered in Rennes, France, with offices in Paris and San Francisco. Blacknut designs, develops and commercializes a cloud gaming service. Blacknut first launched in France in 2018, for PC, Mac, and Linux. The service allows subscribers to play more than 400 premium games for a monthly subscription fee. Blacknut is now available across Europe & North America on a wider range of devices, including mobiles, set-top boxes and smart TVs. Blacknut is also distributed through major ISPs, device manufacturers, OTT services & media companies.

For more information, visit: http://www.blacknut.com

See original here:
InterDigital, Blacknut, and Nvidia Unveil World's First Cloud Gaming Solution With AI-Enabled User Interface - GlobeNewswire

Neuromorphic Computing Drives The Landscape Of Emerging Memories For Artificial Intelligence SoCs – SemiEngineering

New techniques based on intensive computing and massive amounts of distributed memory.

The pace of deep machine learning and artificial intelligence (AI) is changing the world of computing at all levels of hardware architecture, software, chip manufacturing, and system packaging. Two major developments have opened the doors to implementing new techniques in machine learning. First, vast amounts of data, i.e., Big Data, are available for systems to process. Second, advanced GPU architectures now support distributed computing parallelization. With these two developments, designers can take advantage of new techniques that rely on intensive computing and massive amounts of distributed memory to offer new, powerful compute capabilities.

Neuromorphic computing-based machine learning utilizes techniques such as Spiking Neural Networks (SNNs), Deep Neural Networks (DNNs) and Restricted Boltzmann Machines (RBMs). Combined with Big Data, Big Compute is utilizing statistically based High-Dimensional Computing (HDC), which operates on patterns and supports reasoning built on associative memory and continuous learning, to mimic human memory learning and retention sequences.
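To make the HDC idea concrete, here is a minimal, hardware-agnostic sketch: symbols become random bipolar hypervectors, role-filler pairs are bound by elementwise multiplication, bundled by majority sign, and recalled by similarity. This is purely illustrative and not tied to any particular memory technology.

```python
# Minimal high-dimensional computing (HDC) sketch with bipolar hypervectors.
import numpy as np

D = 10_000
rng = np.random.default_rng(0)
hv = lambda: rng.choice([-1, 1], size=D)      # random hypervector

color, shape = hv(), hv()                     # role vectors
red, circle = hv(), hv()                      # filler vectors

# Bind role-filler pairs, then bundle into one associative record.
record = np.sign(color * red + shape * circle)

# Unbind: multiplying by the role vector recovers a noisy filler.
noisy = record * color
print(noisy @ red > noisy @ circle)           # True: 'red' is the best match
```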

Emerging memories range from compute-in-memory (CIM) SRAMs to STT-MRAMs, SOT-MRAMs, ReRAMs, CB-RAMs, and PCMs. Each type is simultaneously trying to enable a transformation in computation for AI. Together, they are advancing the scale of computational capabilities, energy efficiency, density, and cost.

To read more, click here.

See the original post:
Neuromorphic Computing Drives The Landscape Of Emerging Memories For Artificial Intelligence SoCs - SemiEngineering

Tips to Take Care of a Rescue Animal

Animals have existed in this world since long before humans. Humans now make up the dominant population and, in one way or another, are responsible and accountable for everything being done. Animals have had their natural habitats shrunk by humans' resource requirements, and on the other side, humans have grown fond of keeping animals as pets for different purposes. Pets can be kept as a source of entertainment or as a hobby. In the last couple of years there has been a lot of awareness created around rescuing animals that might be hurt, abandoned or extremely ill. Rescued animals right now mostly include, but are not limited to, cats and dogs. Rescuing an animal can be a huge responsibility, and people who intend to rescue animals hesitate mainly because they don't know how to take care of them.

To take care of a rescued animal in the best possible manner, it is suggested to use a Wi-Fi nanny cam, so you can check on the animal at any time of day without being physically there. The area where you plan to keep the rescued animal should have motion-sensor lights, so that all of its movements and behavior are known and one can be cautious about them or learn from them. Preparing the place to keep the animal is the first step: remove household chemicals from reachable places, remove dangling items, and cover up delicate furniture with a slip or a throw. Right after getting things ready at home or any other place, gather supplies for the animal, which can include a sitting basket, supplementary food, bedding, water bowls and even grooming supplies. Once this is done, mentally prepare the other people living in that place and set the rules about the do's and don'ts once the animal is here. If any information about the animal before it was rescued is available, all of it should be kept in consideration.

Building trust with animals is the only way rescued animals will be able to heal fully and be as they normally are. Every animal needs space and might act a little differently, like eating a lot or chewing a lot, but through a Wi-Fi nanny cam one can keep an eye on how the animal behaves unsupervised or in its own personal space. The animal might initially act tense, or sometimes aggressive as well, so patience is the key here; note down the patterns or things that cause changes in behavior and adjust accordingly. If the animal is doing something unacceptable, you have to firmly let it know that this is not to be done again, but kindness needs to be maintained. Getting animals used to the daily noises of your routine is necessary as well; introduce these slowly and gradually so that they can accept them. The animal might show signs of separation anxiety; it might cry, bark or pee a lot when left alone, so in that case someone should be around most of the time to play with the animal and make it feel wanted.

Lastly, feeding the animal according to its nature is very important. One can keep an eye on the animal using cams and motion-detector lights to learn what time it eats, whether it likes to eat fast or slow, and whether it prefers its food hot or cold. Training the animal builds a connection as well; training becomes easier with treats, and then the animal can be made to exercise to stay active. The person taking care of the rescued animal should also establish a connection with a veterinarian to keep things right. Technology such as cams and motion-detector lights can help note patterns and understand the animal better, as it allows noting down its acts and behaviors when it is alone and feels free. It is suggested to keep an eye on the animal through such means until it starts acting normal and blends in with the people around it.

Butterfly landmines mapped by drones and machine learning – The Engineer

27th May 2020, 9:41 am

IEDs and so-called "butterfly" landmines could be detected over wide areas using drones and advanced machine learning, according to research from Binghamton University, State University of New York.

The team had previously developed a method that allowed for the accurate detection of butterfly landmines using low-cost commercial drones equipped with infrared cameras.


Their new research focuses on automated detection of landmines using convolutional neural networks (CNNs), which they say is the standard machine learning method for object detection and classification in the field of remote sensing. "This method is a game-changer in the field," said Alek Nikulin, assistant professor of energy geophysics at Binghamton University.

"All our previous efforts relied on human-eye scanning of the dataset," Nikulin said in a statement. "Rapid drone-assisted mapping and automated detection of scatterable mine fields would assist in addressing the deadly legacy of widespread use of small scatterable landmines in recent armed conflicts and allow us to develop a functional framework to effectively address their possible future use."

There are at least 100 million military munitions and explosives of concern in the world, of various sizes, shapes and compositions. Furthermore, an estimated twenty landmines are placed for every landmine removed in conflict regions.

Millions of these are surface plastic landmines with low-pressure triggers, such as the mass-produced Soviet PFM-1 butterfly landmine. Nicknamed for their small size and butterfly-like shape, these mines are extremely difficult to locate and clear due to their low trigger mass and a design that mostly excludes metal components, making them virtually invisible to metal detectors.

The design of the mine, combined with a low triggering weight, has earned it notoriety as "the toy mine," due to a high casualty rate among small children who find these devices while playing and who are the primary victims of the PFM-1 in post-conflict nations like Afghanistan.

The researchers believe that these detection and mapping techniques are generalisable and transferable to other munitions and explosives. They could be adapted to detect and map disturbed soil for improvised explosive devices (IEDs).

"The use of convolutional neural network-based approaches to automate the detection and mapping of landmines is important for several reasons," the researchers said in a paper published in Remote Sensing. "One, it is much faster than manually counting landmines from an orthoimage (i.e. an aerial image that has been geometrically corrected). Two, it is quantitative and reproducible, unlike subjective human error-prone ocular detection. And three, CNN-based methods are easily generalisable to detect and map any objects with distinct sizes and shapes from any remotely sensed raster images."
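As an illustration of the kind of CNN classifier described (not the paper's actual architecture), the sketch below defines a tiny network that labels thermal orthoimage tiles as mine or background. The tile size and layer sizes are invented.

```python
# Tiny CNN sketch for classifying single-channel 64x64 infrared image tiles
# as [background, mine]. Architecture is illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),      # logits for [background, mine]
)

tiles = torch.randn(8, 1, 64, 64)    # stand-in for drone orthoimage tiles
logits = model(tiles)
print(logits.argmax(dim=1))          # per-tile class prediction
```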

Read more:
Butterfly landmines mapped by drones and machine learning - The Engineer

Machine Learning Takes UWB Localization to the Next Level – Eetasia.com

Article by: Nitin Dahad

Imec uses machine learning algorithms in chip design to achieve cm accuracy and low-power ultra-wideband (UWB) localization...

Imec this week said it has developed next-generation ultra-wideband (UWB) technology that uses digital RF and machine learning to achieve a ranging accuracy of less than 10cm in challenging environments while consuming 10 times less power than today's implementations.

The research and innovation hub announced two new innovations from its secure proximity research program for secure and very high accuracy ranging technology. One is hardware-based, with a digital-style RF circuit design, such as its all-digital phase-locked loop (PLL), to achieve a low power consumption of less than 4mW/20mW (Tx/Rx), which it claims is up to 10 times better than today's implementations. The second is a set of software-based enhancements which utilize machine learning based error-correction algorithms to allow less than 10cm ranging accuracy in challenging environments.

Explaining the context, imec said ultra-wideband technology is currently well suited to support a variety of high-accuracy and secure wireless ranging use-cases, such as the smart lock solutions commonly being applied in automotive; it automatically unlocks a car's doors as its owner approaches, while locking the car when the owner moves away.

However, despite its benefits, such as being inherently more difficult to compromise than some alternatives, its potential has largely remained untapped because of its higher power consumption and larger footprint. Hence imec said the hardware and software innovations it has introduced mark an important step to unlocking the technology's full potential, opening up the opportunity for micro-localization services beyond the secure keyless access it's been widely promoted for so far, to AR/VR gaming, asset tracking and robotics.

Christian Bachmann, the program manager at imec, said: "UWB's power consumption, chip size and associated cost have been prohibitive factors to the technology's adoption, especially when it comes to the deployment of wireless ranging applications. Imec's brand-new UWB chip developments result in a significant reduction of the technology's footprint based on digital-style RF concepts: we have been able to integrate an entire transceiver, including three receivers for angle-of-arrival measurements, on an area of less than 1mm²."

He added that this is when implemented on advanced semiconductor process nodes applicable to IoT sensor node devices. The new chip is also compliant with the new IEEE 802.15.4z standard supported by high-impact industry consortia such as the Car Connectivity Consortium (CCC) and Fine Ranging (FiRa).

Complementing the hardware developments, researchers from IDLab (an imec research group at Ghent University) have come up with software-based enhancements that significantly improve UWB's wireless ranging performance in challenging environments. This applies particularly in factories or warehouses where people and machines constantly move around, and where metallic obstacles cause massive reflections, all of which impact the quality of UWB's localization and distance measurements.

Using machine learning, the team has created smart anchor selection algorithms that detect the (non-)line-of-sight conditions between UWB anchors and the mobile devices that are being tracked. Building on that knowledge, the ranging quality is estimated and ranging errors are corrected. The approach also comes with machine learning features that enable adaptive tuning of the network's physical-layer parameters, which allows appropriate steps to then be initiated to mitigate those ranging errors, for instance by tuning the anchors' radios.
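A hedged sketch of the general idea, since imec's actual algorithms are not described in detail here: classify each measurement as line-of-sight (LOS) or non-line-of-sight (NLOS) from channel features, then subtract a predicted bias from NLOS ranges. The features, data, and model choices below are invented.

```python
# Illustrative NLOS detection + ranging-error correction on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Hypothetical features per measurement: first-path power, total power,
# RMS delay spread. Labels: 1 = NLOS, 0 = LOS; bias = ranging error in m.
X = np.random.rand(500, 3)
nlos = (X[:, 2] > 0.6).astype(int)          # synthetic ground truth
bias = nlos * (0.5 + 0.3 * np.random.rand(500))

clf = RandomForestClassifier().fit(X, nlos)                  # detect NLOS
reg = RandomForestRegressor().fit(X[nlos == 1], bias[nlos == 1])  # bias model

x_new = np.array([[0.2, 0.9, 0.8]])          # one new measurement's features
raw_range_m = 7.42
correction = reg.predict(x_new)[0] if clf.predict(x_new)[0] else 0.0
print(raw_range_m - correction)              # corrected range estimate
```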

Professor Eli De Poorter from IDLab said: "We have already demonstrated a UWB ranging accuracy of better than 10cm in such very challenging industrial environments, which is a factor-of-two improvement compared to existing approaches. Additionally, while UWB localization use-cases are typically custom-built and often depend on manual configuration, our smart anchor selection software works in any scenario as it runs in the application layer."

"Through these adaptive configurations, the next-generation low-power and high-accuracy UWB chips can be utilized in a wide range of other applications, such as improved contact tracing during epidemics using small and privacy-aware devices."

In fact, imec has already licensed the technology to its spin-off Lopos, which has released a wearable that enables enforcement of Covid-19 social distancing by warning employees through an audible or haptic alarm when they violate safe distance guidelines while approaching each other.

Choosing UWB instead of Bluetooth, Lopos' SafeDistance wearable operates as a standalone solution which weighs 75g and has a battery life of 2-5 days. The UWB-based device enables safe, highly accurate (<15cm error margin) distance measurement. When two wearables approach each other, the exact distance between the devices (which is adjustable) is measured and an alarm is activated when a minimum safety distance is not respected.

Since it is standalone, no personal data is logged and there is no gateway, server or other infrastructure required. Lopos has already ramped up production to meet market demand, with multiple large-scale orders received over the last few weeks from companies active in a wide range of different sectors.


More here:
Machine Learning Takes UWB Localization to the Next Level - Eetasia.com

House Introduces the Advancing Quantum Computing Act – Lexology

On May 19, 2020, Representative Morgan Griffith (R-VA-9) introduced the Advancing Quantum Computing Act (AQCA), which would require the Secretary of Commerce to conduct a study on quantum computing. "We can't depend on other countries . . . to guarantee American economic leadership, shield our stockpile of critical supplies, or secure the benefits of technological progress to our people," Representative Griffith explained. "It is up to us to do that."

Quantum computers use the science underlying quantum mechanics to store data and perform computations. The properties of quantum mechanics are expected to enable such computers to outperform traditional computers on a multitude of metrics. As such, there are many promising applications, from simulating the behavior of matter to accelerating the development of artificial intelligence. Several companies have started exploring the use of quantum computing to develop new drugs, improve the performance of batteries, and optimize transit routing to minimize congestion.

In addition to the National Quantum Initiative Act passed in 2018, the introduction of the AQCA represents another important, albeit preliminary, step for Congress in helping to shape the growth and development of quantum computing in the United States. It signals Congress's continuing interest in developing a national strategy for the technology.

Overall, the AQCA would require the Secretary of Commerce to conduct the following four categories of studies related to the impact of quantum computing:

Original post:
House Introduces the Advancing Quantum Computing Act - Lexology

The Role of Quantum Computing in Online Education – MarketScale

On this episode of the MarketScale Online Learning Minute, host Brian Runo dives into how quantum computing, the next revolutionary leap forward in computing, could apply to online education.

In particular, it can be used to epitomize connectivism theory and provide personalized learning for each individual, as it's not restricted by the capacity of an individual instructor.

In this way, each learner can be empowered to learn at their own pace and be presented with materials more tailored to them in real-time.

In fact, quantum computing is so revolutionary that the education world likely can't even dream up the innovations it will enable.

For the latest news, videos, and podcasts in the Education Technology Industry, be sure to subscribe to our industry publication.

Follow us on social media for the latest updates in B2B! Twitter: @MarketScale | Facebook: facebook.com/marketscale | LinkedIn: linkedin.com/company/marketscale

Link:
The Role of Quantum Computing in Online Education - MarketScale

The University of New Mexico Becomes IBM Q Hub’s First University Member – HPCwire

May 28, 2020: Under the direction of Michael Devetsikiotis, chair of the Department of Electrical and Computer Engineering (ECE), The University of New Mexico recently joined the IBM Q Hub at North Carolina State University as its first university member.

The NC State IBM Q Hub is a cloud-based quantum computing hub, one of six worldwide and the first in North America to be part of the global IBM Q Network. This global network links national laboratories, tech startups, Fortune 500 companies, and research universities, providing access to IBM's largest quantum computing systems.

Mainstream computer processors inside our laptops, desktops, and smartphones manipulate bits, information that can only exist as either a 1 or a 0. In other words, the computers we are used to function through programming, which dictates a series of commands with choices restricted to yes/no or "if this, then that." Quantum computers, on the other hand, process quantum bits, or qubits, which are not restricted to a binary choice. Quantum computers can choose "if this, then that," or both, through complex physics concepts such as quantum entanglement. This allows quantum computers to process information more quickly, and in unique ways, compared to conventional computers.
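A small, concrete illustration of that qubit behavior, using the open-source Qiskit library: two qubits are placed in superposition and entangled, so measurements of the pair come out correlated ('00' or '11') in a way independently prepared classical bits cannot reproduce.

```python
# Bell-state circuit: superposition plus entanglement in four lines.
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                      # Hadamard: qubit 0 is 0 and 1 at once
qc.cx(0, 1)                  # CNOT: qubit 1 becomes entangled with qubit 0
qc.measure([0, 1], [0, 1])   # collapse both to classical bits
print(qc.draw())             # text diagram of the circuit
```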

Access to systems such as IBM's newly announced 53-qubit processor (as well as several 20-qubit machines) is just one of the many benefits of UNM's participation in the IBM Q Hub when it comes to data analysis and algorithm development for quantum hardware. Quantum knowledge will only grow with time, and the IBM Q Hub will provide unique training and research opportunities for UNM faculty and student researchers for years to come.

How did this partnership come to be? Two years ago, a sort of call to arms was sent out among UNM quantum experts, saying now was the time for big ideas because federal support for quantum research was gaining traction. Devetsikiotis' vision was to create a quantum ecosystem, one that could unite the foundational quantum research in physics at UNM's Center for Quantum Information and Control (CQuIC) with new quantum computing and engineering initiatives for solving big real-world mathematical problems.

"At first, I thought [quantum] was something for physicists," explains Devetsikiotis. "But I realized it's a great opportunity for the ECE department to develop real engineering solutions to these real-world problems."

CQuIC is the foundation of UNM's long-standing involvement in quantum research, resulting in participation in the National Quantum Initiative (NQI) passed by Congress in 2018 to support multidisciplinary research and training in quantum information science. UNM has been a pioneer in quantum information science since the field emerged 25 years ago, as CQuIC Director Ivan Deutsch knows first-hand.

"This is a very vibrant time in our field, moving from physics to broader activities," says Deutsch, "and [Devetsikiotis] has seen this as a real growth area, connecting engineering with the existing strengths we have in the CQuIC."

With strategic support from the Office of the Vice President for Research, Devetsikiotis secured National Science Foundation funding to support a Quantum Computing & Information Science (QCIS) faculty fellow. The faculty member will join the Department of Electrical and Computer Engineering with the goal of uniting well-established quantum research in physics with new quantum education and research initiatives in engineering. This includes membership in CQuIC and implementation of the IBM Q Hub program, as well as a partnership with Los Alamos National Lab for a Quantum Computing Summer School to develop new curricula, educational materials, and mentorship of next-generation quantum computing and information scientists.

As part of the Q Hub at NC State, UNM gains access to IBM's largest quantum computing systems for commercial use cases and fundamental research. It also allows for the restructuring of existing quantum courses to be more hands-on and interdisciplinary than they have been in the past, as well as the creation of new courses, a new master's degree program in QCIS, and a new university-wide Ph.D. concentration in QCIS that can be added to several departments including ECE, Computer Science, Physics and Astronomy, and Chemistry.

"There have been a lot of challenges," Devetsikiotis says, "but there has also been a lot of good timing, and thankfully the University has provided support for us. UNM has solidified our seat at the quantum table and can now bring in the industrial side."

For additional graphics and the full announcement, visit https://news.unm.edu/news/the-university-of-new-mexico-becomes-ibm-q-hubs-first-university-member

Source: Natalie Rogers, University of New Mexico

View original post here:
The University of New Mexico Becomes IBM Q Hub's First University Member - HPCwire

What’s New in HPC Research: Astronomy, Weather, Security & More – HPCwire

In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.

Developing the HPC system for the ASKAP telescope

The Australian Square Kilometre Array Pathfinder (ASKAP) telescope (itself a pilot project for the record-setting Square Kilometre Array planned for construction in the coming years) will enable highly sensitive radio astronomy that produces a tremendous amount of data. In this paper, researchers from the Commonwealth Scientific and Industrial Research Organisation (CSIRO) highlight how they are preparing a dedicated HPC platform, called ASKAPsoft, to handle the expected 5 PB/year of data produced by ASKAP.

Authors: Juan C. Guzman, Eric Bastholm, Wasim Raja, Matthew Whiting, Daniel Mitchell, Stephen Ord and Max Voronkov.

Creating an open infrastructure for sharing and reusing HPC knowledge

In an expert field like HPC, institutional memory and information-sharing are crucial for maintaining and building on expertise, but institutions often lack cohesive infrastructures to perpetuate that knowledge. These authors, a team from North Carolina State University and Lawrence Livermore National Laboratory, introduce OpenK, an open, ontology-based infrastructure aimed at facilitating the accumulation, sharing and reuse of HPC knowledge.

Authors: Yue Zhao, Xipeng Shen and Chunhua Liao.

Using high-performance data analysis to facilitate HPC-powered astrophysics

High-performance data analysis (HPDA) is an emerging tool for scientific disciplines like bioscience, climate science and security, and now it's being used to prepare astrophysics research for exascale. In this paper, written by a team from the Astronomical Observatory of Trieste, Italy, the authors discuss the ExaNeSt and EuroExa projects, which built a prototype of a low-power exascale facility for HPDA and astrophysics.

Authors: Giuliano Taffoni, David Goz, Luca Tornatore, Marco Frailis, Gianmarco Maggio and Fabio Pasian.

Using power analysis to identify HPC activity

Monitoring users on large computing platforms such as [HPC] and cloud computing systems, these authors, a duo from Lawrence Berkeley National Laboratory, write, is non-trivial. Users can (and have) abused access to HPC systems, they say, but process viewers and other monitoring tools can impose substantial overhead. To that end, they introduce a technique for identifying running programs with 97% accuracy using just the system's power consumption.

Authors: Bogdan Copos and Sean Peisert.
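The paper's exact pipeline isn't reproduced here, but the general shape of such an approach, summarizing each power trace into a handful of statistics and training a supervised classifier on them, is easy to sketch. The Python sketch below uses synthetic data, and the feature set and random-forest model are illustrative assumptions, not the authors' method:

# Illustrative sketch only: classify "programs" from power traces.
# Synthetic data; features and model are assumptions, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def features(trace):
    # Summarize one power time series (e.g., watts at a fixed sample rate).
    return [trace.mean(), trace.std(), trace.min(), trace.max(),
            np.percentile(trace, 90), np.abs(np.diff(trace)).mean()]

rng = np.random.default_rng(0)
X, y = [], []
# Three synthetic "programs", each with a distinct power signature.
for label, (base, wobble) in enumerate([(120, 5), (180, 20), (150, 40)]):
    for _ in range(100):
        X.append(features(base + wobble * rng.standard_normal(1000)))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))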

Building resilience and fault tolerance in HPC for numerical weather and climate prediction

In numerical weather and climate prediction (NWP), accuracy depends strongly on available computing power, but the increasing number of cores in top systems is leading to a higher frequency of hardware and software failures for NWP simulations. This report (from researchers at eight different institutions) examines approaches for fault tolerance in numerical algorithms and system resilience in parallel simulations for those NWP tools.

Authors: Tommaso Benacchio, Luca Bonaventura, Mirco Altenbernd, Chris D. Cantwell, Peter D. Düben, Mike Gillard, Luc Giraud, Dominik Göddeke, Erwan Raffin, Keita Teranishi and Nils Wedi.

Pioneering the exascale era with astronomy

Another team, this time from SURF, a collaborative organization for Dutch research, also investigated the intersection of astronomy and the exascale era. This paper, written by three researchers from SURF, highlights a new, OpenStack-based cloud infrastructure layer and Spider, a new addition to SURF's high-throughput data processing platform. The authors explore how these additions help to prepare the astronomical research community for the exascale era, in particular with regard to data-intensive experiments like the Square Kilometre Array.

Authors: J. B. R. Oonk, C. Schrijvers and Y. van den Berg.

Enabling EASEY deployment of containerized applications for future HPC systems

As the exascale era approaches, HPC systems are growing in complexity, improving performance but making the systems less accessible for new users. These authors, a duo from the Ludwig Maximilian University of Munich, propose a support framework for these future HPC architectures called EASEY (for Enable exAScale for EverYone) that can automatically deploy optimized container computations with negligible overhead.

Authors: Maximilian Höb and Dieter Kranzlmüller.

Do you know about research that should be included in next month's list? If so, send us an email at [emailprotected]. We look forward to hearing from you.

Original post:
What's New in HPC Research: Astronomy, Weather, Security & More - HPCwire

What Is the Many-Worlds Theory of Quantum Mechanics? – The Wire

Photo: Kelly Sikkema/Unsplash.

Quantum physics is strange. At least, it is strange to us, because the rules of the quantum world, which govern the way the world works at the level of atoms and subatomic particles (the "behaviour of light and matter," as the renowned physicist Richard Feynman put it), are not the rules that we are familiar with, the rules of what we call common sense.

The quantum rules, which were mostly established by the end of the 1920s, seem to be telling us that a cat can be both alive and dead at the same time, while a particle can be in two places at once. But to the great distress of many physicists, let alone ordinary mortals, nobody (then or since) has been able to come up with a common-sense explanation of what is going on. More thoughtful physicists have sought solace in other ways, to be sure, coming up with a variety of more or less desperate remedies to explain what is going on in the quantum world.

These remedies, the quanta of solace, are called interpretations. At the level of the equations, none of these interpretations is better than any other, although the interpreters and their followers will each tell you that their own favored interpretation is the one true faith, and all those who follow other faiths are heretics. On the other hand, none of the interpretations is worse than any of the others, mathematically speaking. Most probably, this means that we are missing something. One day, a glorious new description of the world may be discovered that makes all the same predictions as present-day quantum theory, but also makes sense. Well, at least we can hope.

Meanwhile, I thought I might provide an agnostic overview of one of the more colorful of the hypotheses, the many-worlds, or multiple universes, theory. For overviews of the other five leading interpretations, I point you to my book, Six Impossible Things. I think you'll find that all of them are crazy, compared with common sense, and some are more crazy than others. But in this world, crazy does not necessarily mean wrong, and being more crazy does not necessarily mean more wrong.

If you have heard of the Many Worlds Interpretation (MWI), the chances are you think that it was invented by the American Hugh Everett in the mid-1950s. In a way that's true. He did come up with the idea all by himself. But he was unaware that essentially the same idea had occurred to Erwin Schrödinger half a decade earlier. Everett's version is more mathematical, Schrödinger's more philosophical, but the essential point is that both of them were motivated by a wish to get rid of the idea of the collapse of the wave function, and both of them succeeded.

Also read: If You Thought Quantum Mechanics Was Weird, Wait Till You Hear About Entangled Time

As Schrödinger used to point out to anyone who would listen, there is nothing in the equations (including his famous wave equation) about collapse. That was something that Bohr bolted on to the theory to explain why we only see one outcome of an experiment (a dead cat or a live cat), not a mixture, a superposition of states. But because we only detect one outcome, one solution to the wave function, that need not mean that the alternative solutions do not exist. In a paper he published in 1952, Schrödinger pointed out the ridiculousness of expecting a quantum superposition to collapse just because we look at it. It was, he wrote, "patently absurd" that the wave function should be controlled in two entirely different ways, at times by the wave equation, but occasionally by direct interference of the observer, not controlled by the wave equation.

Although Schrödinger himself did not apply his idea to the famous cat, it neatly resolves that puzzle. Updating his terminology, there are two parallel universes, or worlds, in one of which the cat lives, and in one of which it dies. When the box is opened in one universe, a dead cat is revealed. In the other universe, there is a live cat. But there always were two worlds that had been identical to one another until the moment when the diabolical device determined the fate of the cat(s). There is no collapse of the wave function. Schrödinger anticipated the reaction of his colleagues in a talk he gave in Dublin, where he was then based, in 1952. After stressing that when his eponymous equation seems to describe different possibilities, "they are not alternatives but all really happen simultaneously," he said:

Nearly every result [the quantum theorist] pronounces is about the probability of this or that or that happening, with usually a great many alternatives. The idea that they may not be alternatives but all really happen simultaneously seems lunatic to him, just impossible. He thinks that if the laws of nature took this form for, let me say, a quarter of an hour, we should find our surroundings rapidly turning into a quagmire, or sort of a featureless jelly or plasma, all contours becoming blurred, we ourselves probably becoming jelly fish. It is strange that he should believe this. For I understand he grants that unobserved nature does behave this way, namely according to the wave equation. The aforesaid alternatives come into play only when we make an observation, which need, of course, not be a scientific observation. Still it would seem that, according to the quantum theorist, nature is prevented from rapid jellification only by our perceiving or observing it; it is a strange decision.

In fact, nobody responded to Schrödinger's idea. It was ignored and forgotten, regarded as impossible. So Everett developed his own version of the MWI entirely independently, only for it to be almost as completely ignored. But it was Everett who introduced the idea of the Universe splitting into different versions of itself when faced with quantum choices, muddying the waters for decades.

It was Hugh Everett who introduced the idea of the Universe splitting into different versions of itself when faced with quantum choices, muddying the waters for decades.

Everett came up with the idea in 1955, when he was a PhD student at Princeton. In the original version of his idea, developed in a draft of his thesis, which was not published at the time, he compared the situation with an amoeba that splits into two daughter cells. If amoebas had brains, each daughter would remember an identical history up until the point of splitting, then have its own personal memories. In the familiar cat analogy, we have one universe, and one cat, before the diabolical device is triggered, then two universes, each with its own cat, and so on. Everett's PhD supervisor, John Wheeler, encouraged him to develop a mathematical description of his idea for his thesis, and for a paper published in the Reviews of Modern Physics in 1957, but along the way, the amoeba analogy was dropped and did not appear in print until later. But Everett did point out that since no observer would ever be aware of the existence of the other worlds, to claim that they cannot be there because we cannot see them is no more valid than claiming that the Earth cannot be orbiting around the Sun because we cannot feel the movement.

Also read: What Is Quantum Biology?

Everett himself never promoted the idea of the MWI. Even before he completed his PhD, he had accepted the offer of a job at the Pentagon, working in the Weapons Systems Evaluation Group on the application of mathematical techniques (the innocently titled game theory) to secret Cold War problems (some of his work was so secret that it is still classified), and essentially disappeared from the academic radar. It wasn't until the late 1960s that the idea gained some momentum, when it was taken up and enthusiastically promoted by Bryce DeWitt, of the University of North Carolina, who wrote: "every quantum transition taking place in every star, in every galaxy, in every remote corner of the universe is splitting our local world on Earth into myriad copies of itself." This became too much for Wheeler, who backtracked from his original endorsement of the MWI, and in the 1970s said: "I have reluctantly had to give up my support of that point of view in the end because I am afraid it carries too great a load of metaphysical baggage." Ironically, just at that moment, the idea was being revived and transformed through applications in cosmology and quantum computing.

Every quantum transition taking place in every star, in every galaxy, in every remote corner of the universe is splitting our local world on Earth into myriad copies of itself.

The power of the interpretation began to be appreciated even by people reluctant to endorse it fully. John Bell noted that "persons of course multiply with the world," and those in any particular branch would experience only what happens in that branch, and grudgingly admitted that there might be something in it:

The many worlds interpretation seems to me an extravagant, and above all an extravagantly vague, hypothesis. I could almost dismiss it as silly. And yet ... It may have something distinctive to say in connection with the Einstein-Podolsky-Rosen puzzle, and it would be worthwhile, I think, to formulate some precise version of it to see if this is really so. And the existence of all possible worlds may make us more comfortable about the existence of our own world, which seems to be in some ways a highly improbable one.

The precise version of the MWI came from David Deutsch, in Oxford, and in effect put Schrödinger's version of the idea on a secure footing, although when he formulated his interpretation, Deutsch was unaware of Schrödinger's version. Deutsch worked with DeWitt in the 1970s, and in 1977 he met Everett at a conference organized by DeWitt, the only time Everett ever presented his ideas to a large audience. Convinced that the MWI was the right way to understand the quantum world, Deutsch became a pioneer in the field of quantum computing, not through any interest in computers as such, but because of his belief that the existence of a working quantum computer would prove the reality of the MWI.

This is where we get back to a version of Schrödinger's idea. In the Everett version of the cat puzzle, there is a single cat up to the point where the device is triggered. Then the entire Universe splits in two. Similarly, as DeWitt pointed out, an electron in a distant galaxy confronted with a choice of two (or more) quantum paths causes the entire Universe, including ourselves, to split. In the Deutsch-Schrödinger version, there is an infinite variety of universes (a Multiverse) corresponding to all possible solutions to the quantum wave function. As far as the cat experiment is concerned, there are many identical universes in which identical experimenters construct identical diabolical devices. These universes are identical up to the point where the device is triggered. Then, in some universes the cat dies, in some it lives, and the subsequent histories are correspondingly different. But the parallel worlds can never communicate with one another. Or can they?

Deutsch argues that when two or more previously identical universes are forced by quantum processes to become distinct, as in the experiment with two holes, there is a temporary interference between the universes, which becomes suppressed as they evolve. It is this interaction that causes the observed results of those experiments. His dream is to see the construction of an intelligent quantum machine, a computer that would monitor some quantum phenomenon involving interference going on within its brain. Using a rather subtle argument, Deutsch claims that an intelligent quantum computer would be able to remember the experience of temporarily existing in parallel realities. This is far from being a practical experiment. But Deutsch also has a much simpler proof of the existence of the Multiverse.

What makes a quantum computer qualitatively different from a conventional computer is that the switches inside it exist in a superposition of states. A conventional computer is built up from a collection of switches (units in electrical circuits) that can be either on or off, corresponding to the digits 1 or 0. This makes it possible to carry out calculations by manipulating strings of numbers in binary code. Each switch is known as a bit, and the more bits there are, the more powerful the computer is. Eight bits make a byte, and computer memory today is measured in terms of billions of bytes: gigabytes, or GB. Strictly speaking, since we are dealing in binary, a gigabyte is 2^30 bytes, but that is usually taken as read. Each switch in a quantum computer, however, is an entity that can be in a superposition of states. These are usually atoms, but you can think of them as being electrons that are either spin up or spin down. The difference is that in the superposition, they are both spin up and spin down at the same time, 0 and 1. Each switch is called a qubit, pronounced "cubit."

Using a rather subtle argument, Deutsch claims that an intelligent quantum computer would be able to remember the experience of temporarily existing in parallel realities.

Because of this quantum property, each qubit is equivalent to two bits. This doesn't look impressive at first sight, but it is. If you have three qubits, for example, they can be arranged in eight ways: 000, 001, 010, 011, 100, 101, 110, 111. The superposition embraces all these possibilities. So three qubits are not equivalent to six bits (2 x 3), but to eight bits (2 raised to the power of 3). The equivalent number of bits is always 2 raised to the power of the number of qubits. Just 10 qubits would be equivalent to 2^10 bits, actually 1,024, but usually referred to as a kilobit. Exponentials like this rapidly run away with themselves. A computer with just 300 qubits would be equivalent to a conventional computer with more bits than there are atoms in the observable Universe. How could such a computer carry out calculations? The question is more pressing since simple quantum computers, incorporating a few qubits, have already been constructed and shown to work as expected. They really are more powerful than conventional computers with the same number of bits.
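The counting argument is easy to make concrete. In the short sketch below (plain NumPy, assuming no quantum library), the state of an n-qubit register is a vector of 2 raised to the power n complex amplitudes, so the classical memory needed to store it doubles with every added qubit:

# An n-qubit register in equal superposition has 2**n complex amplitudes,
# one per classical bit string (for n = 3: 000, 001, ..., 111).
import numpy as np

for n in (3, 10, 30):
    print(n, "qubits ->", 2 ** n, "amplitudes")

n = 3
state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
print(np.vdot(state, state).real)    # probabilities sum to 1.0

# For 300 qubits the amplitude count is a 91-digit number, far more than
# the roughly 10**80 atoms usually quoted for the observable Universe:
print(len(str(2 ** 300)), "digits")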

Deutsch's answer is that the calculation is carried out simultaneously on identical computers in each of the parallel universes corresponding to the superpositions. For a three-qubit computer, that means eight superpositions of computer scientists working on the same problem using identical computers to get an answer. It is no surprise that they should collaborate in this way, since the experimenters are identical, with identical reasons for tackling the same problem. That isn't too difficult to visualize. But when we build a 300-qubit machine, which will surely happen, we will, if Deutsch is right, be involving a collaboration between more universes than there are atoms in our visible Universe. It is a matter of choice whether you think that is too great a load of metaphysical baggage. But if you do, you will need some other way to explain why quantum computers work.

Also read: The Science and Chaos of Complex Systems

Most quantum computer scientists prefer not to think about these implications. But there is one group of scientists who are used to thinking of even more than six impossible things before breakfast: the cosmologists. Some of them have espoused the Many Worlds Interpretation as the best way to explain the existence of the Universe itself.

Their jumping-off point is the fact, noted by Schrödinger, that there is nothing in the equations referring to a collapse of the wave function. And they do mean the wave function: just one, which describes the entire world as a superposition of states, a Multiverse made up of a superposition of universes.

Some cosmologists have espoused the Many Worlds Interpretation as the best way to explain the existence of the Universe itself.

The first version of Everett's PhD thesis (later modified and shortened on the advice of Wheeler) was actually titled The Theory of the Universal Wave Function. And by "universal" he meant literally that, saying:

Since the universal validity of the state function description is asserted, one can regard the state functions themselves as the fundamental entities, and one can even consider the state function of the whole universe. In this sense this theory can be called the theory of the universal wave function, since all of physics is presumed to follow from this function alone.

where for the present purpose "state function" is another name for "wave function." "All of physics" means everything, including us, the "observers" in physics jargon. Cosmologists are excited by this, not because they are included in the wave function, but because this idea of a single, uncollapsed wave function is the only way in which the entire Universe can be described in quantum mechanical terms while still being compatible with the general theory of relativity. In the short version of his thesis published in 1957, Everett concluded that his formulation of quantum mechanics "may therefore prove a fruitful framework for the quantization of general relativity." Although that dream has not yet been fulfilled, it has encouraged a great deal of work by cosmologists since the mid-1980s, when they latched on to the idea. But it does bring with it a lot of baggage.

The universal wave function describes the position of every particle in the Universe at a particular moment in time. But it also describes every possible location of those particles at that instant. And it also describes every possible location of every particle at any other instant of time, although the number of possibilities is restricted by the quantum graininess of space and time. Out of this myriad of possible universes, there will be many versions in which stable stars and planets, and people to live on those planets, cannot exist. But there will be at least some universes resembling our own, more or less accurately, in the way often portrayed in science fiction stories. Or, indeed, in other fiction. Deutsch has pointed out that according to the MWI, any world described in a work of fiction, provided it obeys the laws of physics, really does exist somewhere in the Multiverse. There really is, for example, a Wuthering Heights world (but not a Harry Potter world).

That isn't the end of it. The single wave function describes all possible universes at all possible times. But it doesn't say anything about changing from one state to another. Time does not flow. Sticking close to home, Everett's parameter, called a state vector, includes a description of a world in which we exist, and all the records of that world's history, from our memories, to fossils, to light reaching us from distant galaxies, exist. There will also be another universe exactly the same except that the time step has been advanced by, say, one second (or one hour, or one year).

But there is no suggestion that any universe moves along from one time step to another. There will be a me in this second universe, described by the universal wave function, who has all the memories I have at the first instant, plus those corresponding to a further second (or hour, or year, or whatever). But it is impossible to say that these versions of me are the same person. Different time states can be ordered in terms of the events they describe, defining the difference between past and future, but they do not change from one state to another. All the states just exist. Time, in the way we are used to thinking of it, does not flow in Everetts MWI.

John Gribbin is a Visiting Fellow in Astronomy at the University of Sussex, UK, and the author of In Search of Schrödinger's Cat, The Universe: A Biography and Six Impossible Things, from which this article is excerpted.

This article has been republished from The MIT Press Reader.

More here:
What Is the Many-Worlds Theory of Quantum Mechanics? - The Wire

Quantum Computing Technologies Market with Sales, Demand, Consumption and strategies 2025 – Cole of Duty

ORBIS RESEARCH has recently announced a Global Quantum Computing Technologies Market report with all the critical analysis on the current state of the industry, demand for the product, environment for investment and existing competition. The Global Quantum Computing Technologies Market report is a focused study of various market-affecting factors and a comprehensive survey of the industry, covering major aspects like product types, various applications, top regions, growth analysis, market potential, challenges for investors, opportunity assessments, major drivers and key players.

Request a sample of this report @ https://www.orbisresearch.com/contacts/request-sample/4696468

The Global Quantum Computing Technologies Market report provides a detailed analysis of global market size, regional and country-level market size, segmentation market growth, market share, competitive Landscape, sales analysis, impact of domestic and Global Quantum Computing Technologies Market players, value chain optimization, trade regulations, recent developments, opportunities analysis, strategic market growth analysis, product launches, area marketplace expanding, and technological innovations.

Key vendor/manufacturers in the market:

The major players covered in Quantum Computing Technologies are: Airbus Group, Intel Corporation, Google Quantum AI Lab, Cambridge Quantum Computing, Alibaba Group Holding Limited, IBM, Nokia Bell Labs, Microsoft Quantum Architectures and Toshiba.

Browse the complete report @ https://www.orbisresearch.com/reports/index/global-quantum-computing-technologies-market-2020-by-company-regions-type-and-application-forecast-to-2025

Competitive Landscape and Global Quantum Computing Technologies Market Share Analysis

The Global Quantum Computing Technologies Market competitive landscape provides details by vendor, including company overview, company total revenue (financials), market potential, global presence, Quantum Computing Technologies sales and revenue generated, market share, price, production sites and facilities, SWOT analysis and product launches. For the period 2015-2020, this study provides the Quantum Computing Technologies sales, revenue and market share for each player covered in this report.

Global Quantum Computing Technologies Market By Type:

By Type, the Quantum Computing Technologies market has been segmented into: Software and Hardware.

Global Quantum Computing Technologies Market By Application:

By Application, Quantum Computing Technologies has been segmented into: Government, Business, High-Tech, Banking & Securities, Manufacturing & Logistics, Insurance and Other.

Regions and Countries Level Analysis

Regional analysis is another highly comprehensive part of the research and analysis study of the Global Quantum Computing Technologies Market presented in the report. This section sheds light on the sales growth of different regional and country-level Quantum Computing Technologies markets. For the historical and forecast period 2015 to 2025, it provides detailed and accurate country-wise volume analysis and region-wise market size analysis of the global Quantum Computing Technologies market.

The report offers an in-depth assessment of the growth and other aspects of the Quantum Computing Technologies market in important countries (regions), including: North America (United States, Canada and Mexico); Europe (Germany, France, UK, Russia and Italy); Asia-Pacific (China, Japan, Korea, India, Southeast Asia and Australia); South America (Brazil, Argentina, Colombia); and the Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa).

Make an enquiry before buying this report @ https://www.orbisresearch.com/contacts/enquiry-before-buying/4696468

About Us:

Orbis Research (orbisresearch.com) is a single point of aid for all your market research requirements. We have a vast database of reports from leading publishers and authors across the globe. We specialize in delivering customized reports as per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients map their needs, and we produce the required market research study for them.

Contact Us:

Hector Costello
Senior Manager, Client Engagements
4144N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A.
Phone No.: +1 (972)-362-8199; +91 895 659 5155

View post:
Quantum Computing Technologies Market with Sales, Demand, Consumption and strategies 2025 - Cole of Duty

WISeKey is Adapting its R&D and Extended Patents Portfolio to the Post-COVID 19 Economy with Specific Focus on Post-Quantum Cryptography -…

WISeKey is Adapting its R&D and Extended Patents Portfolio to the Post-COVID 19 Economy with Specific Focus on Post-Quantum Cryptography

With more than 25% of its 2019 annual turnover invested in R&D, WISeKey is a significant and recognized contributor to digital trust in an interconnected world. The Company's recent publication and conference presentation about post-quantum cryptography illustrate once again that innovation is at the heart of the Company.

WISeKey is involved in this NIST PQC (Post-Quantum Cryptography) program with the only objective of providing future-proof digital security solutions based on existing and new hardware architectures

Geneva, Switzerland, May 28, 2020: WISeKey International Holding Ltd. ("WISeKey") (SIX: WIHN, NASDAQ: WKEY), a leading global cybersecurity and IoT company, today published a technical article (https://www.wisekey.com/articles-white-papers/) discussing how to guarantee digital security and protect against hackers who will take advantage of the power of quantum information science. This research was presented (video here: https://www.wisekey.com/videos/) during the remote International Workshop on Code-Based Cryptography (CBCrypto 2020, Zagreb, Croatia, May 9-10, 2020).

IoT products are a major component of the 4th industrial revolution which brings together advances in computational power, semiconductors, blockchain, wireless communication, AI and data to build a vast technology infrastructure that works nearly autonomously.

According to a recent report published by Fortune Business Insights, titled "Internet of Things (IoT) Market Size, Share and Industry Analysis By Platform (Device Management, Application Management, Network Management), By Software & Services (Software Solution, Services), By End-Use Industry (BFSI, Retail, Governments, Healthcare, Others) And Regional Forecast, 2019-2026," the IoT market was valued at USD 190.0 billion in 2018. It is projected to reach USD 1,102.6 billion by 2026, with a CAGR of 24.7% over the forecast period. Huge advances in manufacturing have allowed even small manufacturers to produce relatively sophisticated IoT products. This brings to the surface issues related to patents governing IoT products and communication standards governing devices.
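As a quick arithmetic check on those figures (an illustration, not part of the report): the compound annual growth rate over the eight years from 2018 to 2026 is (end/start) raised to the power 1/8, minus one, which lands almost exactly on the quoted 24.7%:

# Sanity-check the quoted CAGR from the cited market figures.
start, end, years = 190.0, 1102.6, 2026 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")   # ~24.6%, consistent with the cited 24.7%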

Studies of quantum computing, namely how to use quantum mechanical phenomena to perform computation, were initiated in the early 1980s. The possibilities are endless, and future computers will gain incredible computing power from this technology. When used by hackers, these computers will become a risk to cybersecurity: all the cryptographic algorithms used today to secure our digital world are exposed. Therefore, the US National Institute of Standards and Technology (NIST) launched a wide campaign in 2016 to find new, quantum-resistant algorithms.

WISeKey's R&D department is deeply involved in this NIST PQC (Post-Quantum Cryptography) program, with the sole objective of providing the market with future-proof digital security solutions based on existing and new hardware architectures. The new article reports one of the Company's current contributions to this safer cyber future. ROLLO-I, a NIST-shortlisted algorithm, was implemented on some of WISeKey's secure chips (MS600x secure microcontrollers, VaultIC secure elements, ...) with countermeasures to make them robust against attacks.
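The article's source code is not reproduced here, and ROLLO-I's rank-metric mathematics does not fit in a few lines, but every NIST KEM candidate, ROLLO-I included, exposes the same three-operation shape. The toy Python sketch below shows only that keygen/encapsulate/decapsulate flow; the hash-based internals are a deliberately insecure placeholder and bear no relation to ROLLO-I's actual construction:

# Toy sketch of the KEM (key encapsulation mechanism) interface shape.
# The internals below are placeholders: NOT secure, NOT ROLLO-I.
import os, hashlib

def keygen():
    sk = os.urandom(32)                          # secret key
    pk = hashlib.sha256(b"pk" + sk).digest()     # stand-in public key
    return pk, sk

def encapsulate(pk):
    eph = os.urandom(32)                         # sender randomness
    shared = hashlib.sha256(pk + eph).digest()   # derived session key
    return eph, shared                           # toy ciphertext is eph itself

def decapsulate(sk, ciphertext):
    pk = hashlib.sha256(b"pk" + sk).digest()
    return hashlib.sha256(pk + ciphertext).digest()

pk, sk = keygen()
ct, key_sender = encapsulate(pk)
key_receiver = decapsulate(sk, ct)
assert key_sender == key_receiver    # both ends derive the same session key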

Although nobody knows exactly when quantum computers will be massively available, this is certainly going to happen. WISeKey is investing significantly to develop new technologies and win this race.

"With a rich portfolio of more than 100 fundamental individual patents and 20 pending ones in various domains, including the design of secure chips, Near Field Communication (NFC), the development of security firmware and backend software, the secure management of data, the improvement of security protocols between connected objects and advanced cryptography, to mention a few, WISeKey has become a key technology provider in the cybersecurity arena," says Carlos Moreira, Founder and CEO of WISeKey. "This precious asset makes WISeKey the right Digital Trust Partner to deploy the current and future Internet of Everything."

Want to know more about WISeKey's intellectual property? Please visit our website: https://www.wisekey.com/patents/.

About WISeKey

WISeKey (NASDAQ: WKEY; SIX Swiss Exchange: WIHN) is a leading global cybersecurity company currently deploying large-scale digital identity ecosystems for people and objects using Blockchain, AI and IoT, respecting the Human as the Fulcrum of the Internet. WISeKey microprocessors secure the pervasive computing shaping today's Internet of Everything. WISeKey IoT has an install base of over 1.5 billion microchips in virtually all IoT sectors (connected cars, smart cities, drones, agricultural sensors, anti-counterfeiting, smart lighting, servers, computers, mobile phones, crypto tokens etc.). WISeKey is uniquely positioned to be at the edge of IoT, as our semiconductors produce a huge amount of Big Data that, when analyzed with Artificial Intelligence (AI), can help industrial applications to predict the failure of their equipment before it happens.

Our technology is trusted by the OISTE/WISeKey Swiss-based cryptographic Root of Trust (RoT), which provides secure authentication and identification, in both physical and virtual environments, for the Internet of Things, Blockchain and Artificial Intelligence. The WISeKey RoT serves as a common trust anchor to ensure the integrity of online transactions among objects and between objects and people. For more information, visit www.wisekey.com.

Press and investor contacts:

Disclaimer: This communication expressly or implicitly contains certain forward-looking statements concerning WISeKey International Holding Ltd and its business. Such statements involve certain known and unknown risks, uncertainties and other factors, which could cause the actual results, financial condition, performance or achievements of WISeKey International Holding Ltd to be materially different from any future results, performance or achievements expressed or implied by such forward-looking statements. WISeKey International Holding Ltd is providing this communication as of this date and does not undertake to update any forward-looking statements contained herein as a result of new information, future events or otherwise. This press release does not constitute an offer to sell, or a solicitation of an offer to buy, any securities, and it does not constitute an offering prospectus within the meaning of article 652a or article 1156 of the Swiss Code of Obligations or a listing prospectus within the meaning of the listing rules of the SIX Swiss Exchange. Investors must rely on their own evaluation of WISeKey and its securities, including the merits and risks involved. Nothing contained herein is, or shall be relied on as, a promise or representation as to the future performance of WISeKey.

Originally posted here:
WISeKey is Adapting its R&D and Extended Patents Portfolio to the Post-COVID 19 Economy with Specific Focus on Post-Quantum Cryptography -...

Smart Cities and eGovernance Trends in India – Analytics Insight

Smart Cities Mission, an initiative launched in 2015, aims at creating the next generation of cities in India. These cities would not just have easy-to-access infrastructure but would also be technologically advanced in government-citizen interaction. Technologies like Artificial Intelligence, the Internet of Things, Radio-Frequency Identification (RFID), cloud computing, and many more would be used by the government to offer smarter solutions. This would ease the country's resource-deficit burden by empowering the government to do much more with less.

And when cities are becoming smarter, the traditional methods of governing will not suffice. That's why the government is taking new eGovernance initiatives that are laced with the latest technologies. Digital transformation in government is here, and each government agency is taking the required steps to ensure smooth eGovernance.

By eGovernance in smart cities, we mean a type of governance that aims at efficient usage of information and communication technology (ICT) for improving the services offered by the government to its citizens and increasing the stakeholder participation in decision making and policy formation. This would help improve the governance of the state and move towards government digital transformation.

The government has crossed the most crucial stages of eGovernance, starting from having an online presence, to allowing digital interaction opportunities to citizens, to ensuring digital transactions like the paying of taxes, fees, etc. Now, it aims at reaching the fourth stage of eGovernance to make the smart cities truly smart. This is the transformation stage, where it seeks to improve its functioning through e-means like automation, RPA, data collection, and much more.

The inhabitants of these smart cities would get to enjoy various e-benefits like e-consultation, e-democracy, e-participation, and policymaking. As smart cities lead towards a government digital transformation, the citizens would get everything online.

The government launched the smart city initiative back in 2015 but is still fighting several odds and challenges. There have been not one, not two, but many challenges to the smart city plan and eGovernance in these cities. Whether we talk about the illiteracy of the people, their unwillingness and resistance to change, or the risk of data breaches, several challenges are hindering the progress of smart cities in India. Apart from that, getting the right funding for government transformation is also a challenge.

For smooth eGovernance in the country, especially in the smart cities, the government needs to utilize several channels. Until now, the government has been using channels like smartphone applications, social media applications, SMS services, voice-prompted interfaces, and many more. In the coming times, several trends are expected to change the way smart cities and eGovernance evolve in India. Let's have a quick look:

The buzzword in the network and communications community right now is 5G. With higher bandwidth and better performance, 5G offers much faster connectivity. Smart cities would definitely be seen utilizing this network technology to allow faster connectivity. Moreover, using 5G, eGovernance operations and processes can be completed in half the time. And it goes without saying that the latest technologies like the Internet of Things and ICT, which act as a foundation for transforming these Indian cities into smart cities, would also benefit.

Augmented Reality and Virtual Reality are two of the emerging technologies that will transform the way users interact with businesses. Just as real estate firms use AR and VR to showcase property listings, the government would also use AR and VR to visualize different scenarios in smart cities. By visualizing scenarios like emergency situations in controlled environments, the government can make better decisions for the future. It provides an opportunity to view structural elements in ways that could not be achieved in reality.

Moreover, they can visualize data and interact with the environment to analyze it from various perspectives. Government developers can also use AR and VR to compare different infrastructure plans and visualize their roads, highways, bridges, etc., to shuffle and see which plan would work more efficiently for development.

Quantum computing allows scientists to solve computational problems in a jiffy. It can be used to detect anomalies in large volumes of data, flagging whatever deviates from the normal. Machine Learning can also be used for detecting anomalies. The government could run these algorithms on the data collected from the people and offer help. In smart cities, the government can use quantum computing to spot anomalies in data from domains like medicine, traffic flow, economic forecasting, tax collection, meteorology, etc. It can quickly respond by detecting these anomalies and creating a solution even before the problem starts or goes out of hand.
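Setting the quantum hardware aside, the anomaly-detection idea itself can be illustrated classically in a few lines. A minimal sketch, with invented traffic-count data and an arbitrary z-score threshold (both assumptions for illustration only):

# Classical illustration of anomaly detection: flag readings that deviate
# strongly from the typical level (synthetic data, arbitrary threshold).
import numpy as np

rng = np.random.default_rng(1)
traffic = rng.normal(1000, 50, size=500)   # synthetic hourly vehicle counts
traffic[123] = 1600                        # injected anomaly (e.g., an incident)

z = (traffic - traffic.mean()) / traffic.std()
print("anomalous hours:", np.flatnonzero(np.abs(z) > 4))   # -> [123]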

Smart cities would have autonomous processes that collect real-time data from everything, from traffic management to weather forecasting, to make better decisions. These smart cities would follow a data-driven governance model to ensure they are serving their citizens to the best of their capabilities. From identifying problems to analyzing opportunities and creating solutions, data would certainly empower the governments of these smart cities. Even the UN has set "high-quality, timely, reliable and disaggregated data" as one of its major agendas for 2030.

Government agencies would be able to utilize the data for anything from specific analysis of high disease rates in certain areas to general data analysis for housing and infrastructure planning; data will help in eGovernance.

IoT, aka the Internet of Things, is indeed connecting everything, from humans and machines to machines and machines, to make them smarter. Whether we talk about smart TVs, smart homes or smart cars, everything is connected and can be accessed at the click of a button on your smartphone. So, how can smart cities lag behind?

These smart cities would be laced with built-in sensors in street lights, electricity grids, traffic signals, and everything else to effectively monitor and automate data collection and distribution. By analyzing the data, these smart sensors would help utility companies and other organizations save energy and make sustainable decisions for the cities. Things would become intelligent with edge computing and AI technology. A new form of governance would be witnessed in these smart cities because of the government digital transformation that IoT and smart things would bring.

Network convergence means bringing together different networks to deliver high-speed internet. With network convergence, you get more convenient and flexible modes of communicating and accessing information online. Smart cities will see more wire-line and wireless networking systems offering a centralized infrastructure. Not only will this help the people of the smart cities and the government in monitoring, but it would also enable businesses to model state-of-the-art business plans for the future.

If you are keeping up with the current situation, you will be aware of how the government is using drone technology to analyze conditions in different regions. It is using drones to see whether people are staying in their homes or still roaming the streets. Well, this is just the beginning; the governments of smart cities would use more of these advanced and modern eDevices to monitor the cities and make smart decisions.

Smart ICT and IoT devices would be on the rise in smart cities to ensure efficient monitoring of the cities and to address real-time issues effectively.

The Indian government has advanced from client-side systems to web-based systems and is now going fully cloud-based to ensure stability and connectivity. Cloud-based systems will help the government create national-level registries that are stored centrally in the cloud. Downtime and maintenance costs fall when everything is stored in the cloud, making eGovernance easier and quicker. Cloud migration, or storing data in the cloud, only requires a strong internet connection, and the emergence of 5G would only be icing on the cake.

The aim is to create a unified e-government infrastructure that is based on the cloud, enables easy monitoring and eases interoperability concerns. Services are accessible remotely over the internet rather than locally, which allows quick access for all. The cloud can help with centralized monitoring and easier eGovernance across various domains.

India's National Informatics Centre has deployed the open-source Eucalyptus software, which acts as the foundation for its cloud approach. It allows broad-scale cloud-based eGovernance in India.

Wrapping it up, let's throw light on the four models of eGovernance that we will be seeing in our smart Indian cities. These would be G2C: Government to Citizen, G2G: Government to Government, G2B: Government to Business, and G2E: Government to Employee. These four models would allow a better and seamless flow of information from the government to different parts of the system. When dealing with the lives of citizens of smart cities, the government would need to follow these models of governance to ensure its smart services are creating the digital infrastructure that is needed.

Innovative ICT applications would rule the eGovernance of smart cities, and we would certainly see emerging technologies like data analytics, GIS, Artificial Intelligence, quantum computing, the Internet of Things, and many more shaping them. It will be interesting to see how these trends evolve as government digital transformation takes shape in Indian smart cities.

Tanya Kumari leads the Digital Marketing & Content for Classic Informatics, a global web development company. She is an avid reader, music lover and a technology enthusiast who likes to be up to date with all the latest advancements happening in the techno world. When she is not working on her latest article on tech dynamics, you can find her by the coffee machine, briefing co-workers on the perks of living a healthy lifestyle and how to achieve it.

See more here:
Smart Cities and eGovernance Trends in India - Analytics Insight

Tencent to Invest $70 Billion in ‘New Infrastructure’ Supporting AI and Cloud Computing – Caixin Global

Tencent to Invest $70 Billion in "New Infrastructure" Supporting AI and Cloud Computing

Chinese tech giant Tencent plans to invest 500 billion yuan ($70 billion) in digital infrastructure over the next five years in response to a government call to energize the world's second-largest economy with investment in "new infrastructure."

"New infrastructure" is broadly defined as infrastructure that supports technology- and science-based projects.

The massive investment by Tencent will focus on areas ranging from cloud computing, artificial intelligence (AI), blockchain and Internet of Things (IoT) to 5G networks, quantum computing and supercomputer centers, according to a company statement published Tuesday.

Tencent did not provide further details about the investment plan, but underscored the progress it has made in boosting its cloud computing capabilities. The company has built a network of data centers housing more than 1 million servers, the statement said.

In the fourth quarter of 2019, Tencent controlled 18% of China's cloud infrastructure service market, far behind market leader Alibaba, which grabbed 46.4%. Alibaba has announced plans to spend $28 billion on its cloud infrastructure over the next three years in a bid to help businesses embrace digitalization.

Tencent will also deepen partnerships with scientific research experts, laboratories and top universities to cultivate talent, tackle scientific problems and formulate industry standards, the statement added.

Tencent's announcement comes days after Chinese Premier Li Keqiang highlighted the role of new infrastructure in China's push to accelerate the tech-driven structural upgrade of its economy in his government work report delivered to the National People's Congress (NPC), the country's top legislature.

Last month, China's National Development and Reform Commission (NDRC), the country's top economic planner, divided new infrastructure into three areas: information-based infrastructure such as 5G and IoT; converged infrastructure supported by the application of the internet, big data and AI; and innovative infrastructure that supports scientific research, technology development and product development.

Contact reporter Ding Yi (yiding@caixin.com)

Related: Alibaba Now Controls Nearly Half of China's Cloud Service Market, Research Says

Follow this link:
Tencent to Invest $70 Billion in 'New Infrastructure' Supporting AI and Cloud Computing - Caixin Global