Monthly Archives: May 2022

Commercial AI Adoption is Picking Up Pace But Only A Third of Businesses Recognise that Data Strategy Facilitates AI – GlobeNewswire

Posted: May 27, 2022 at 2:07 am

MANCHESTER, United Kingdom, May 26, 2022 (GLOBE NEWSWIRE) -- Most businesses (90%) now use or plan to use Artificial Intelligence (AI), and even more (98%) have implemented, or intend to implement, a data strategy. Yet new findings suggest that while the success of each is reliant on the other, only one in three (35%) businesses with a data strategy say it includes provisions for AI. Decision Intelligence company Peak released its State of AI 2022 report today, highlighting that many businesses are making investments in both data and AI without connecting the two.

"AI is inherently a data technology, and must function as part of an overarching data strategy, yet the majority of respondents in this survey are thinking about the two separately," said Richard Potter, co-founder and CEO of Peak.

The amount of data businesses produce is increasing exponentially, and current predictions suggest businesses globally will create, capture, copy and consume 97 zettabytes of data this year alone.[1] Implementing a data strategy is essential for defining how to collect, store and put that data to use.

Peak's State of AI report suggests that effectively leveraging their data is a priority for many businesses. One in two (52%) organizations with 100 or more employees now have a Chief Data Officer, and 86% have invested in a data lake or warehouse. Still, two-thirds (65%) of those with a data strategy have not made provisions for AI. Instead, data strategies are most likely to be focused on centralizing data within the business (55%), although establishing measurable goals for the use of data (54%) and managing security risks and compliance (53%) are similarly important goals.

The report also uncovered inconsistencies among leadership teams on data strategy. While 89% of CEOs said their company had a data strategy, only 63% of wider c-suite executives agreed.

This disconnect extends to perceptions of AI as well. Of the respondents in businesses that have or are working towards AI, 55% of CEOs thought the technology was being implemented throughout the business, compared to just 42% of other c-suite leaders.

"Our research also reveals that within many companies there is a lack of clarity around the overall AI strategy, even at the top levels of management," said Richard Potter. "If businesses want to successfully implement AI and, critically, drive value with this technology, we need open discussion within a business to ensure everyone is aligned with and understands the vision."

The report also provides a look at the regional differences in attitudes and progress being made in digital transformation, data maturity and AI adoption, with India more advanced than both the US and UK.

Methodology

The report is based on a survey of 775 decision makers (senior managers and above) from the US, UK or India. Respondents were all from companies with 100 or more staff. Research was conducted for Peak by Opinium, in partnership with the Centre for Economics and Business Research (Cebr). The survey ran from 19-25 April 2022.

About Peak

Jointly founded in Manchester and Jaipur in 2015 by Richard Potter, David Leitch and Atul Sharma, Peak is on a mission to change the way the world works.

A pioneer of the Decision Intelligence category, its platform enables customers to apply AI to the commercial decision-making process. Peak features three products, Dock, Factory, and Work, that can be leveraged by businesses at every stage of their AI journey to build apps that deliver against specific business needs. With features to support both technical and line-of-business users, Peak makes these apps widely accessible to everyone within a business, simplifying and accelerating adoption of AI. It is used by leading brands including Nike, ASOS, PepsiCo, KFC and Sika.

The company has grown significantly over the last three years and, in August 2021, Peak announced a $75m Series C funding round led by SoftBank's Vision Fund II. The same year, it received a Best Companies 3-star accreditation, which recognises extraordinary levels of employee engagement, and was ranked by The Sunday Times as one of the Best 100 Companies to Work For in 2020 and 2021.

[1] Statista, 2022. Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025 (in zettabytes). Statista.

Contact:

Mica Hahn, peak@praytellagency.com, 510.206.3932

More here:

Commercial AI Adoption is Picking Up Pace But Only A Third of Businesses Recognise that Data Strategy Facilitates AI - GlobeNewswire

Posted in Ai | Comments Off on Commercial AI Adoption is Picking Up Pace But Only A Third of Businesses Recognise that Data Strategy Facilitates AI – GlobeNewswire

Algorithms, AI, and the ADA | McAfee & Taft – JDSupra – JD Supra

Posted: at 2:07 am

On May 12, the Equal Employment Opportunity Commission and the U.S. Department of Justice issued guidance to caution employers about using artificial intelligence (AI) and software tools to make employment decisions. The guidance, titled "The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees," warns that using these tools without safeguards could result in an Americans with Disabilities Act violation.

The ADA requires that people with disabilities have full access to public and private services and facilities. It also bans employers from discriminating on the basis of disabilities, and requires that they provide reasonable accommodations to employees with disabilities to enable them to do a job.

Companies increasingly rely on software and AI to monitor employees' locations, productivity, and performance, such as surveillance monitoring that tracks employees' time on task. Employers may then use this information to make pay, disciplinary, and termination decisions. The guidance warns employers that doing so may result in discrimination against employees with disabilities.

The EEOC advises that employers should not be using algorithms, software or AI to make employment decisions without giving employees a chance to request reasonable accommodations. Here's why: Software and AI are designed to provide information based on preset specifications of the average or ideal worker. However, employees with disabilities may not work under typical conditions if they are utilizing an accommodation.

To use these tools without running afoul of the law, employers should train their HR personnel, managers and supervisors to recognize and respond to requests for accommodations from employees, even when the request is informal. This could include an employee's request to take a test in an alternative format, to be assessed in another way, or to be allowed to spend extra time off task due to a medical condition, for example. Employers should also ensure they use software that has been tested by users with disabilities. Other tips for staying in compliance include providing clear instructions to employees for requesting accommodations and avoiding pre-employment screening for traits that may reveal disabilities.

Employers would do well to remember that employees are humans, and sometimes decisions about humans need to be made by humans, not by computers. Metrics tracked by computers should be just one piece of the puzzle when evaluating employees' performance.

Read more here:

Algorithms, AI, and the ADA | McAfee & Taft - JDSupra - JD Supra

Posted in Ai | Comments Off on Algorithms, AI, and the ADA | McAfee & Taft – JDSupra – JD Supra

AI chip startup SiMa.ai bags another $30M ahead of growth – TechCrunch

Posted: at 2:07 am

As the demand for AI-powered apps grows, startups developing dedicated chips to accelerate AI workloads on-premises are reaping the benefits. A recent ZDNet piece reaffirms that the AI edge chip market is booming, fueled by staggering venture capital financing in the hundreds of millions of dollars. EdgeQ, Kneron and Hailo are among the dozens of upstarts vying for customers, the last of which nabbed $136 million in October as it doubles down on new opportunities.

Another company competing in the increasingly saturated segment is SiMa.ai, which is developing a system-on-chip platform for AI applications, particularly computer vision. After emerging from stealth in 2019, SiMa.ai began demoing an accelerator chipset that combines traditional compute IP from Arm with a custom machine learning accelerator and a dedicated vision accelerator, linked via a proprietary interconnect.

To lay the groundwork for future growth, SiMa.ai today closed a $30 million additional investment from Fidelity Management & Research Company, with participation from Lip-Bu Tan (who's joining the board) and previous investors, concluding the startup's Series B. It brings SiMa.ai's total capital raised to $150 million.

"The funding will be used to accelerate scaling of the engineering and business teams globally, and to continue investing in both hardware and software innovation," founder and CEO Krishna Rangasayee told TechCrunch in an email interview. "The appointment of Lip-Bu Tan as the newest member of SiMa.ai's board of directors is a strategic milestone for the company. He has a deep history of investing in deep tech startups that have gone on to disrupt industries across AI, data, semiconductors, among others."

Rangasayee spent most of his career in the semiconductor industry at Xilinx, where he was GM of the company's overall business. An engineer by trade (Rangasayee was the COO at Groq and once headed product planning at Altera, which Intel acquired in 2015), he says that he was motivated to start SiMa.ai by the gap he saw in the machine learning market for edge devices.

"I founded SiMa.ai with two questions: What are the biggest challenges in scaling machine learning to the embedded edge? And how can we help?" Rangasayee said. "By listening to dozens of industry-leading customers in the machine learning trenches, SiMa.ai developed a deep understanding of their problems and needs, like getting the benefits of machine learning without a steep learning curve, preserving legacy applications along with future-proof ML implementations, and working with a high-performance, low-power solution in a user-friendly environment."

SiMa.ai aims to work with companies developing driverless cars, robots, medical devices, drones and more. The company claims to have completed several customer engagements and last year announced the opening of a design center in Bengaluru, India, as well as a collaboration with the University of Tübingen to identify AI hardware and software solutions for ultra-low energy consumption.

As SiMa.ai, now over 100 employees strong, works to productize its first-generation chip, work is underway on the second-generation architecture, Rangasayee said.

"SiMa.ai's software and hardware platform can be used to enable scaling machine learning to [a range of] embedded edge applications. Even though these applications will use many diverse computer vision pipelines with a variety of machine learning models, SiMa.ai's software and hardware platform has the flexibility to be used to address these," Rangasayee added. "SiMa.ai's platform addresses any computer vision application using any model, any framework, any sensor, any resolution ... [We as a company have] seized the opportunity to disrupt the burgeoning edge computing space in pursuit of displacing decades-old technology and legacy incumbents."

SiMa.ai's challenges remain mass manufacturing its chips affordably and beating back the many rivals in the edge AI computing space. (The company says that it plans to ship production volumes of its first chip sometime this year.) But the startup stands to profit handsomely if it can capture even a sliver of the sector. Edge computing is forecast to be a $6.72 billion market by 2022, according to Markets and Markets. Its growth will coincide with that of the deep learning chipset market, which some analysts predict will reach $66.3 billion by 2025.

"Machine learning has had a profound impact on the cloud and mobile markets over the past decade and the next battleground is the multi-trillion-dollar embedded edge market," Tan said in a statement. "SiMa.ai has created a software-centric, purpose-built platform that exclusively targets this large market opportunity. SiMa.ai's unique architecture, market understanding and world-class team has put them in a leadership position."

Read the original here:

AI chip startup SiMa.ai bags another $30M ahead of growth - TechCrunch

Posted in Ai | Comments Off on AI chip startup SiMa.ai bags another $30M ahead of growth – TechCrunch

AI and machine learning are improving weather forecasts, but they won’t replace human experts – The Conversation Indonesia

Posted: at 2:07 am

A century ago, English mathematician Lewis Fry Richardson proposed a startling idea for that time: constructing a systematic process based on math for predicting the weather. In his 1922 book, Weather Prediction By Numerical Process, Richardson tried to write an equation that he could use to solve the dynamics of the atmosphere based on hand calculations.

It didn't work because not enough was known about the science of the atmosphere at that time. "Perhaps some day in the dim future it will be possible to advance the computations faster than the weather advances and at a cost less than the saving to mankind due to the information gained. But that is a dream," Richardson concluded.

A century later, modern weather forecasts are based on the kind of complex computations that Richardson imagined, and they've become more accurate than anything he envisioned. Especially in recent decades, steady progress in research, data and computing has enabled a quiet revolution of numerical weather prediction.

For example, a forecast of heavy rainfall two days in advance is now as good as a same-day forecast was in the mid-1990s. Errors in the predicted tracks of hurricanes have been cut in half in the last 30 years.

There still are major challenges. Thunderstorms that produce tornadoes, large hail or heavy rain remain difficult to predict. And then there's chaos, often described as the butterfly effect: the fact that small changes in complex processes make weather less predictable. Chaos limits our ability to make precise forecasts beyond about 10 days.

As in many other scientific fields, the proliferation of tools like artificial intelligence and machine learning holds great promise for weather prediction. We have seen some of what's possible in our research on applying machine learning to forecasts of high-impact weather. But we also believe that while these tools open up new possibilities for better forecasts, many parts of the job are handled more skillfully by experienced people.

Today, weather forecasters' primary tools are numerical weather prediction models. These models use observations of the current state of the atmosphere from sources such as weather stations, weather balloons and satellites, and solve equations that govern the motion of air.

These models are outstanding at predicting most weather systems, but the smaller a weather event is, the more difficult it is to predict. As an example, think of a thunderstorm that dumps heavy rain on one side of town and nothing on the other side. Furthermore, experienced forecasters are remarkably good at synthesizing the huge amounts of weather information they have to consider each day, but their memories and bandwidth are not infinite.

Artificial intelligence and machine learning can help with some of these challenges. Forecasters are using these tools in several ways now, including making predictions of high-impact weather that the models can't provide.

In a project that started in 2017 and was reported in a 2021 paper, we focused on heavy rainfall. Of course, part of the problem is defining "heavy": two inches of rain in New Orleans may mean something very different than in Phoenix. We accounted for this by using observations of unusually large rain accumulations for each location across the country, along with a history of forecasts from a numerical weather prediction model.

We plugged that information into a machine learning method known as random forests, which uses many decision trees to split a mass of data and predict the likelihood of different outcomes. The result is a tool that forecasts the probability that rains heavy enough to generate flash flooding will occur.
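The random-forest idea above can be made concrete with a few lines of scikit-learn. The following is a minimal sketch, not the authors' actual system: the features, thresholds and synthetic data are invented for illustration.

```python
# Minimal sketch of a random-forest probability forecast for locally
# heavy rain. Features and labels here are synthetic stand-ins for
# NWP-model forecasts and location-specific rainfall climatology.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each row: one forecast point with model-derived features
# (e.g., forecast precipitation, column moisture, instability).
X_train = rng.random((1000, 3))
# Label = 1 when observed rain exceeded that location's own
# "unusually large" threshold (a climatological percentile).
y_train = (X_train[:, 0] + 0.3 * rng.standard_normal(1000) > 0.8).astype(int)

# Many decision trees vote; the mean vote is read as a probability.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

X_today = rng.random((5, 3))                  # today's model forecasts
print(model.predict_proba(X_today)[:, 1])     # P(heavy rain) per point
```

Reading the ensemble's vote fraction as a probability is what lets forecasters phrase the output as a chance of flash-flood-scale rain rather than a hard yes or no.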

We have since applied similar methods to forecasting of tornadoes, large hail and severe thunderstorm winds. Other research groups are developing similar tools. National Weather Service forecasters are using some of these tools to better assess the likelihood of hazardous weather on a given day.

Researchers also are embedding machine learning within numerical weather prediction models to speed up tasks that can be intensive to compute, such as predicting how water vapor gets converted to rain, snow or hail.

It's possible that machine learning models could eventually replace traditional numerical weather prediction models altogether. Instead of solving a set of complex physical equations as the models do, these systems instead would process thousands of past weather maps to learn how weather systems tend to behave. Then, using current weather data, they would make weather predictions based on what they've learned from the past.
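In the simplest terms, such a system fits a function from the atmospheric state at one time to the state a step later. The toy below uses ridge regression on synthetic "maps" purely to illustrate the idea; operational efforts use deep networks trained on decades of reanalysis data.

```python
# Toy "learn from past weather maps" emulator: fit a linear map from
# state(t) to state(t+1). Synthetic data; illustration only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_maps, n_gridpoints = 500, 64                 # 500 past "maps", 64 cells

# Fake archive: a random walk stands in for a reanalysis history.
states = rng.standard_normal((n_maps, n_gridpoints)).cumsum(axis=0)

emulator = Ridge(alpha=1.0)
emulator.fit(states[:-1], states[1:])          # learn one-step evolution

forecast = emulator.predict(states[-1:])       # step beyond the archive
print(forecast.shape)                          # (1, 64): a predicted "map"
```

A single fitted step like this can be applied repeatedly to roll a forecast forward, which is also where errors compound, one reason the article cautions that such systems may drift outside physical bounds.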

Some studies have shown that machine learning-based forecast systems can predict general weather patterns as well as numerical weather prediction models while using only a fraction of the computing power the models require. These new tools don't yet forecast the details of local weather that people care about, but with many researchers carefully testing them and inventing new methods, there is promise for the future.

There are also reasons for caution. Unlike numerical weather prediction models, forecast systems that use machine learning are not constrained by the physical laws that govern the atmosphere. So it's possible that they could produce unrealistic results, for example forecasting temperature extremes beyond the bounds of nature. And it is unclear how they will perform during highly unusual or unprecedented weather phenomena.

And relying on AI tools can raise ethical concerns. For instance, locations with relatively few weather observations with which to train a machine learning system may not benefit from forecast improvements that are seen in other areas.

Another central question is how best to incorporate these new advances into forecasting. Finding the right balance between automated tools and the knowledge of expert human forecasters has long been a challenge in meteorology. Rapid technological advances will only make it more complicated.

Ideally, AI and machine learning will allow human forecasters to do their jobs more efficiently, spending less time on generating routine forecasts and more on communicating forecasts' implications and impacts to the public or, for private forecasters, to their clients. We believe that careful collaboration between scientists, forecasters and forecast users is the best way to achieve these goals and build trust in machine-generated weather forecasts.

Follow this link:

AI and machine learning are improving weather forecasts, but they won't replace human experts - The Conversation Indonesia

Posted in Ai | Comments Off on AI and machine learning are improving weather forecasts, but they won’t replace human experts – The Conversation Indonesia

Managing disaster and disruption with AI, one tree at a time – ZDNet

Posted: at 2:07 am

(Image: World Weather Attribution)

It sounds like a contradiction in terms, but disaster and disruption management is a thing. Disaster and disruption are precisely what ensues when catastrophic natural events occur, and unfortunately, the trajectory the world is on seems to be exacerbating the issue. In 2021 alone, the US experienced 15+ weather/climate disaster events with damages exceeding $1 billion.

Previously, we have explored various aspects of the ways data science and machine learning intertwine with natural events -- from weather prediction to the impact of climate change on extreme phenomena and measuring the impact of disaster relief. AiDash, however, is aiming at something different: helping utility and energy companies, as well as governments and cities, manage the impact of natural disasters, including storms and wildfires.

We connected with AiDash co-founder and CEO Abhishek Singh to learn more about its mission and approach, as well as its newly released Disaster and Disruption Management System (DDMS).

Singh describes himself as a serial entrepreneur with multiple successful exits. Hailing from India, Singh founded one of the world's first mobile app development companies in 2005 and then an education tech company in 2011.

Following the merger of Singh's mobile tech company with a system integrator, the company was publicly listed, and Singh moved to the US. Eventually, he realized that power outages are a problem in the US, and the wildfires of 2017 were a turning point for him.

That, and the fact that satellite technology has been maturing -- with Singh marking 2018 as an inflection point for the technology -- led to founding AiDash in 2020.

AiDash notes that satellite technology has reached maturity as a viable tool. Over 1,000 satellites are launched every year, employing various electromagnetic bands, including multispectral bands and synthetic aperture radar (SAR) bands.

The company uses satellite data, combined with a multitude of other data, and builds products around predictive AI models to allow preparation and resource placement, evaluate damages to understand what restoration is needed and which sites are accessible and help plan the restoration itself.

AiDash uses a variety of data sources: weather data, to predict the course storms take and their intensity; and third-party or enterprise data, to know what assets need to be protected and where they are located.


The company's primary clients thus far have been utility companies. For them, a common scenario involves damage caused by falling trees or floods. Vegetation, in general, is a key factor in AiDash AI models, but not the only one.

As Singh noted, AiDash has developed various AI models for specific use cases. Some of them include an encroachment model, an asset health model, a tree health model and an outage prediction model.

Those models have taken considerable expertise to develop. As Singh noted, in order to do that, AiDash is employing people such as agronomists and pipeline integrity experts.

"This is what differentiates a product from a technology solution. AI is good but not good enough if it's not domain-specific, so the domain becomes very important. We have this team in-house, and their knowledge has been used in building these products and, more importantly, identifying what variables are more important than others", said Singh.

To exemplify the application of domain knowledge, Singh referred to trees. As he explained, more than 50% of outages that happen during a storm are because of falling trees. Poles don't normally fall on their own -- generally, it's trees that fall on wires and snap them or cause poles to fall. Therefore, he added that understanding trees is more important than understanding the weather in this context.

"There are many weather companies. In fact, we partner with them -- we don't compete with them. We take their weather data, and we believe that the weather prediction model, which is also a complicated model, works. But then we supplement that with tree knowledge", said Singh.

In addition, AiDash uses data and models about the assets utilities manage. Things such as what parts may break when lightning strikes, or when devices were last serviced. This localized, domain-specific information is what makes predictions granular. How granular?


Supplementing data and AI models with domain-specific knowledge, in this case knowledge about trees, is what makes the difference for AiDash.

"We know each and every tree in the network. We know each and every asset in the network. We know their maintenance history. We know the health of the tree. Now, we can make predictions when we supplement that with weather information and the storm's path in real-time. We don't make a prediction that Texas will see this much damage. We make a prediction that this street in this city will see this much damage," Singh said.

In addition to utilizing domain knowledge and a wide array of data, Singh also identified something else as key to AiDash's success: serving the right amount of information to the right people the right way. All the data live under the hood, feeding the elaborate models, and are only exposed when needed -- for example, if required by regulation.

For the most part, what AiDash serves is solutions, not insights, as Singh put it. Users access DDMS via a mobile application and a web application. Mobile applications are meant to be used by people in the field, and they also serve to provide validation for the system's predictions. For the people doing the planning, a web dashboard is provided, which they can use to see the status in real-time.


DDMS is the latest addition to AiDash's product suite, including the Intelligent Vegetation Management System, the Intelligent Sustainability Management System, the Asset Cockpit and Remote Monitoring & Inspection. DDMS is currently focused on storms and wildfires, with the goal being to extend it to other natural calamities like earthquakes and floods, Singh said.

The company's plans also include extending its customer base to public authorities. As Singh said, when data for a certain region are available, they can be used to deliver solutions to different entities. Some of these could also be given free of charge to government entities, especially in a disaster scenario, as AiDash does not incur an incremental cost.

AiDash is headquartered in California, with its 215 employees spread across offices in San Jose, Austin (Texas), Washington DC, London and India. The company also has clients worldwide and has been seeing significant growth. As Singh shared, the goal is to go public around 2025.

Link:

Managing disaster and disruption with AI, one tree at a time - ZDNet

Posted in Ai | Comments Off on Managing disaster and disruption with AI, one tree at a time – ZDNet

ALTERED STATE MACHINE ANNOUNCES THE FIRST-EVER METAVERSE AI BOXING GAME "MUHAMMAD ALI — THE NEXT LEGENDS" – PR Newswire

Posted: at 2:07 am


'Muhammad Ali – The Next Legends' is part of ASM's new long-term partnership with Authentic Brands Group (ABG), which owns Muhammad Ali Enterprises in partnership with Lonnie Ali, who is a trustee of the Muhammad Ali Family Trust. The game will be driven by ASM's artificial intelligence protocol, while web3 creative factory Non-Fungible Labs leads the visual game and character design. This partnership is the first of many for ABG's brands within the ASM ecosystem.

When 'Muhammad Ali – The Next Legends' launches, players will be able to collect unique NFT boxers powered by ASM's artificial intelligence, train them and compete for the first time in a fully AI, web3 combat experience. Players will eventually need to unite two NFTs to play the game: an ASM Brain, which is also an NFT, and their boxer. Utilizing "Artificial Intelligence Gyms," a signature component of the ASM Protocol, owners will train their brain on the skills needed to win a match (i.e., agility, endurance, stance, sparring, jabs, hooks and uppercuts). Following in the footsteps of the 3-time world champion, the goal of the game will be to train and develop your boxer and win matches against other boxers to eventually become "The Next Legend." Each 3D boxer is completely unique in both its physical character model and its mental ability. Minting dates for boxers and additional details about the game will be revealed in the coming months.

"We are incredibly excited and honored to welcome world-renowned icon Muhammad Ali into web3," says David McDonald, CEO of ASM. "ABG's portfolio of world-leading brands make them the ideal partner to introduce NFT-curious consumers to the metaverse. Our Non-Fungible Intelligence powers and revolutionizes web3 gaming by allowing NFT owners to train their characters, producing limitless possibilities and interoperable use cases in the metaverse."

"We are thrilled for fans to immerse themselves in the world of 'Muhammad Ali The Next Legends'," said Marc Rosen, President, Entertainment at ABG. "ABG is delighted to kick off this unique partnership with ASM by using the strength of our brands to positively contribute to storytelling behind ASM's innovative technology."

"Our team is so proud to be on the cutting edge creating an entire digital world inspired by Muhammad Ali," says Alex Smeele, CEO at Non-Fungible Labs. "The Next Legends project is a huge endeavor as we are working to ensure every single aspect and character is unique and one-of-a-kind. These boxers have distinct personalities and characteristics and it's been an adventure for our team to build."

About Altered State Machine

Altered State Machine (ASM) are leaders in the democratization of Artificial Intelligence, creating a world where the value of A.I. flows directly back to the community. ASM's revolutionary protocol allows everyone to own and utilize A.I. via NFT Brains, also known as Non-Fungible Intelligence™. Non-Fungible Intelligence™ is interoperable across avatars, games, worlds and other Web3 applications, and is trained and owned by each individual user. Join the movement for decentralized Artificial Intelligence with Altered State Machine. Follow ASM on Twitter, Instagram and Discord.

About Non-Fungible Labs

Non-Fungible Labs is a web3 dream factory based in Auckland, New Zealand. With an existing ecosystem of successful projects, including FLUF World, NF Labs are working closely with a range of partners to co-create fun and immersive decentralized content on their mission to empower everyday creatives in the development of an open metaverse.

Follow NF Labs on Twitter, Instagram and Facebook.

About Muhammad Ali

Muhammad Ali is one of the most influential athletes and humanitarians of the 20th century and has created some of the most legendary moments in sports and civil rights history. More than 50 years after he emerged as a Gold Medalist in Boxing at the 1960 Rome Olympics, Ali's legacy extends beyond the ring and he continues to be widely recognized as one of the most celebrated and beloved icons of all time. His incomparable work ethic, signature boxing techniques, and fearlessness towards standing up for his beliefs, all contribute to the legend that is Muhammad Ali. Among his countless awards and accolades, he was named Sports Illustrated's "Sportsman of the Century," GQ's "Athlete of the Century," a United Nations Messenger of Peace, and has received the Presidential Medal of Freedom and the Amnesty International Lifetime Achievement Award. Muhammad Ali's legacy is celebrated across cultures and continues to inspire today's most influential athletes, artists, musicians and humanitarians around the world. Follow Muhammad Ali on Instagram, Facebook and Twitter.

About Authentic Brands Group

Authentic Brands Group (ABG) is a brand development, marketing and entertainment company, which owns a portfolio of global media, entertainment and lifestyle brands. Headquartered in New York City, with offices around the world, ABG elevates and builds the long-term value of more than 50 consumer brands and properties by partnering with best-in-class manufacturers, wholesalers and retailers. Its brands have a global retail footprint across the luxury, specialty, department store, mid-tier, mass and e-commerce channels and in more than 6,100 freestanding stores and shop-in-shops around the world.

ABG is committed to transforming brands by delivering compelling product, content, business and immersive experiences. It creates and activates original marketing strategies to drive the success of its brands across all consumer touchpoints, platforms and emerging media.

ABG's portfolio of iconic and world-renowned brands generates more than $21.3 billion in global annual retail sales, and includes Marilyn Monroe, Elvis Presley, Muhammad Ali, Shaquille O'Neal, David Beckham, Dr. J, Greg Norman, Neil Lane, Thalia, Sports Illustrated, Reebok, Eddie Bauer, Spyder, Volcom, Airwalk, Nautica, Izod, Forever 21, Aéropostale, Juicy Couture, Vince Camuto, Lucky Brand, Nine West, Jones New York, Frederick's of Hollywood, Adrienne Vittadini, Van Heusen, Arrow, Tretorn, Tapout, Prince, Vision Street Wear, Brooks Brothers, Barneys New York, Judith Leiber, Herve Leger, Frye, Hickey Freeman, Hart Schaffner Marx, Thomasville, Drexel and Henredon.

For more information, visit authenticbrands.com. Follow ABG on Twitter, LinkedIn and Instagram.

Authentic Brands/Muhammad Ali Contact: Michelle Ciciyasvili, [email protected]

Altered State Machine Contact: Chelsey Northern, [email protected]

SOURCE Authentic Brands Group

Original post:

ALTERED STATE MACHINE ANNOUNCES THE FIRST-EVER METAVERSE AI BOXING GAME "MUHAMMAD ALI -- THE NEXT LEGENDS" - PR Newswire

Posted in Ai | Comments Off on ALTERED STATE MACHINE ANNOUNCES THE FIRST-EVER METAVERSE AI BOXING GAME "MUHAMMAD ALI — THE NEXT LEGENDS" – PR Newswire

Back to the Future: Protecting Against Quantum Computing – Nextgov

Posted: at 2:06 am

The previous two years have proven the importance of proactively working to secure our data, especially as organizations underwent digital transformations and suffered increased cyberattacks as a result. For organizations that have been breached but whose data hasn't yet been exploited and released into the wild, it may already be too late.

Organizations that have already experienced a data breach may become victims of "harvest today, decrypt tomorrow" or "capture now, decrypt later" attacks. These attacks, also referred to as harvesting for short, capitalize on known vulnerabilities to steal encrypted data that may not even be readable using today's decryption technologies.

These attacks require long-term planning and projections on the advancement of quantum-computing technologies. While these technologies may still be years away from being commercially available and widely used, organizations should look to protect against these threats now to prevent themselves from becoming a future casualty.

Before getting into more detail on the future threat posed by quantum computing, we should look to a historic example to inform our present decision-making.

Lessons from the Enigma

In 1919, a Dutchman invented an encoding machine, called the Enigma, that was universally adopted by the German army. Unbeknownst to Germany, the Allied powers managed to break the coding scheme and were able to decode some messages as early as 1939, when the first German boots set foot in Poland. For years, however, the German army believed the Enigma codes were unbreakable and was communicating in confidence, never realizing its messages were out in the open.

History may already be repeating itself. I can't help but think that most organizations today also believe that their encrypted data is safe, but someone else may be close to, or already, reading their secure mail without them even knowing.

Today's modern cryptography is often deemed unbreakable, but a big, shiny black building in Maryland suggests that governments may be better at this than is widely believed. Although a lot of credit goes to the magical and elusive quantum computer, the reality is different: poor implementations of crypto suites are the primary vector for breaking encryption of captured traffic. So are certificates captured through other means, brute-forced passwords and even brute-forced crypto, because insufficient entropy is used to generate random numbers.

All these techniques are part of the arsenal of any nation that wants to strategically collect information on the happenings of other international players, whether governments or private companies. These techniques also require higher levels of coordination and financial backing to be a successful part of an intelligence strategy. As I continue to see, when the value of the captured information is high enough, the investment is worth it. Consider then the vast data centers being built by many governments: they are full of spinning disks of memory storage, just in case current approaches don't yield access. Data storage has become an investment in the future of intelligence gathering.

Looking towards the future

Harvesting attacks do not just work as a strategy that waits for quantum computers. We will likely have more powerful processors for brute-forcing in the future. Additionally, other types of stochastic computation machines, such as spintronics, are showing promise, and even the de-quantification of popular algorithms may one day produce a binary computer version of Peter Shor's algorithm, allowing Diffie-Hellman key exchanges or RSA to be broken on a conventional computer in smaller time frames. Shor's algorithm is the clearest illustration of how quantum computing may make quick work of current encryption techniques.
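Why factoring matters here can be shown with textbook-sized numbers. The sketch below is a toy: the primes are absurdly small, so the "attack" is a brute-force loop, but at real key sizes that loop is classically infeasible, and Shor's algorithm is what would make it fast on a quantum computer.

```python
# Toy RSA with tiny primes: whoever can factor the public modulus n
# can reconstruct the private key and read the traffic.
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (keyholder only)

msg = 42
cipher = pow(msg, e, n)              # anyone can encrypt with (n, e)

# Attacker's side: factor n. Trivial here; exponentially hard
# classically at 2048-bit sizes, polynomial-time with Shor.
f = next(i for i in range(2, n) if n % i == 0)
d_stolen = pow(e, -1, (f - 1) * (n // f - 1))
print(pow(cipher, d_stolen, n))      # 42 -- plaintext recovered
```

This is exactly why harvested ciphertext has value: data stolen today stays secret only until factoring (or an equivalent attack on the key exchange) becomes cheap.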

So how do we shield ourselves? It is hard to imagine armoring oneself against every possible threat to encryption, just as it is difficult to predict exactly which stocks will do well and which ones won't. There are too many factors and too much chaos. One is left with only the option of diversification: using an out-of-band key-distribution strategy that allows multiple paths for key and data to flow, and a range of algorithms and keys to be used. By diversifying our cryptographic approaches, we are also able to minimize the damage in case a particular strategy fails us. Monocultures are at risk of pandemics; let's not fall victim to an encryption monoculture as we move into the future.
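One common form of this diversification is to derive the working key from several secrets established by independent algorithms or paths, so an attacker must break all of them. The combiner below is a minimal HKDF-style sketch; a production system would use a vetted hybrid scheme, such as the combined classical/post-quantum key exchanges now being standardized.

```python
# Minimal sketch: derive one session key from two independently
# established secrets; compromising a single path reveals nothing.
import hashlib, hmac, os

secret_a = os.urandom(32)   # e.g., from a classical (EC)DH exchange
secret_b = os.urandom(32)   # e.g., from a post-quantum KEM or other path

def combine(*secrets: bytes, info: bytes = b"session-key") -> bytes:
    # HKDF-style extract-then-expand over the concatenated secrets.
    prk = hmac.new(b"\x00" * 32, b"".join(secrets), hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

session_key = combine(secret_a, secret_b)
print(session_key.hex())
```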

It is past time to take steps now that will protect organizations from future threats. This includes developing actionable standards. Both federal agencies and the private sector need to embrace quantum-safe encryption. Additionally, they should look to develop next-generation, standards-based systems that will address current encryption method shortcomings and poor key management practices. This will help to ensure not only quantum-safe protection from future threats, but also stronger security from contemporary threats.

Organizations face a dizzying array of threats and need to constantly remain vigilant to thwart attacks. While looking to protect against current threats is certainly important, organizations should begin projecting future threats, including the threat posed by quantum computing. As technology continues to advance each day, one should remember that past encryption, like the Enigma machine, didn't remain an enigma for long and was broken in time. The advent of quantum computing may soon make our unbreakable codes go the way of the dinosaur. Prepare accordingly.

Read more:

Back to the Future: Protecting Against Quantum Computing - Nextgov

Posted in Quantum Computing | Comments Off on Back to the Future: Protecting Against Quantum Computing – Nextgov

Q&A with Atos’ Eric Eppe, an HPCwire Person to Watch in 2022 – HPCwire

Posted: at 2:06 am

HPCwire presents our interview with Eric Eppe, head of portfolio & solutions, HPC & Quantum at Atos, and an HPCwire 2022 Person to Watch. In this exclusive Q&A, Eppe recounts Atos' major milestones from the past year and previews what's in store for the year ahead. Exascale computing, quantum hybridization and decarbonization are focus areas for the company, and having won five out of the seven EuroHPC system contracts, Atos is playing a big role in Europe's sovereign technology plans. Eppe also shares his views on HPC trends, what's going well and what needs to change, and offers advice for the next generation of HPC professionals.

Eric, congratulations on your selection as a 2022 HPCwire Person to Watch. Summarize the major milestones achieved last year for Atos in your division and briefly outline your HPC/AI/quantum agenda for 2022.

2021 was a strong year for Atos' Big Data and Security teams, despite the pandemic. Atos BullSequana XH2000 was in its third year and was already exceeding all sales expectations. More than 100,000 top-bin AMD CPUs were sold on this platform, and it made one of the first entries for AMD Epyc in the Top500.

We have not only won five out of seven EuroHPC petascale projects, but also delivered some of the most significant HPC systems. For example, we delivered one of the largest climate-study and weather-forecasting systems in the world to the European Centre for Medium-Range Weather Forecasts (ECMWF). In addition, Atos delivered a full BullSequana XH2000 cluster to the German climate research center (DKRZ). 2021 also saw the launch of Atos ThinkAI and the delivery of a number of very large AI systems, such as WASP in Sweden.

2022 is the year in which we are preparing the future with our next-gen Atos BullSequana XH3000 supercomputer, a hybrid computing platform bringing together flexibility, performance and energy-efficiency. Announced recently in Paris, this goes along with the work that has started on hybrid computing frameworks to integrate AI and quantum accelerations with supercomputing workflows.

Sovereignty and sustainability were key themes at Atos' launch of its exascale supercomputing architecture, the BullSequana XH3000. Please address in a couple paragraphs how Atos views these areas and why they are important.

This was a key point I mentioned during the supercomputer's reveal. For Europe, the real question is: should we indefinitely rely on foreign technologies to find new vaccines, develop autonomous electric vehicles, and find strategies to face climate change?

The paradox is that Europe leads the semiconductor substrate and manufacturing markets (with Soitec and ASML) but has no European foundry in the <10nm class yet. It is participating in the European Processor Initiative (EPI) and will implement SiPearl technologies in the BullSequana XH3000, but it will take time to mature enough and replace other technologies.

Atos has built a full HPC business in less than 15 years, becoming number one in Europe and in the top four worldwide in the supercomputer segment, with its entire production localized in its French factory. We are heavily involved in all projects that are improving European sovereignty.

EU authorities today stand a bit behind the USA and China in how regulations manage large petascale or exascale procurements, as well as in how funding flows to local companies developing HPC technologies. This is a major topic.

Atos has developed a significant amount of IP, ranging from supercomputing platforms, low latency networks, cooling technologies, software and AI, security and large manufacturing capabilities in France with sustainability and sovereignty as a guideline. We are partnering with a number of European companies, such as SiPearl, IQM, Pasqal, AQT, Graphcore, ARM, OVH and many labs, to continue building this European Sovereignty.

Atos has announced its intention to develop and support quantum accelerators. What is Atos' quantum computing strategy?

Atos has taken a hardware-agnostic approach in crafting quantum-powered supercomputers and enabling end-user applications. Atos' ambition is to be a major player in multiple domains, amongst which are quantum programming and simulation, the next generation of quantum-powered supercomputers, consulting services, and of course, quantum-safe cybersecurity. Atos launched the Atos Quantum Learning Machine (QLM) in 2017, a quantum appliance emulating almost all target quantum processing units, with abstractions to connect to real quantum computing hardware when available. We have been very successful with the QLM in large academic and research centers on all continents. In 2021, there was a shift, with many commercial companies starting to work on real use cases, and the QLM is the best platform to start these projects without waiting for hardware to be available at scale.
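At its core, an appliance like the QLM emulates quantum hardware by tracking a circuit's statevector and applying gate matrices to it. The numpy fragment below prepares a two-qubit Bell state this way; it is a generic illustration of the principle, not Atos' QLM API.

```python
# Generic statevector emulation of a 2-qubit circuit (not the QLM API):
# apply Hadamard to the first qubit, then CNOT, yielding a Bell state.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                  # control = first qubit

state = np.zeros(4)
state[0] = 1.0                                   # start in |00>
state = np.kron(H, I) @ state                    # H on the first qubit
state = CNOT @ state                             # entangle the pair
print(np.abs(state) ** 2)                        # [0.5, 0, 0, 0.5]
```

Because the statevector doubles in size with each added qubit, classical emulation tops out at a few dozen qubits, which is exactly why such appliances expose abstractions for handing circuits off to real QPUs as they become available.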

Atos plays a central role in European-funded quantum computing projects. We are cooperating with NISQ QPU makers to develop new technologies and increase their effectiveness in a hybrid computing scenario. This includes, but is not limited to, hybrid frameworks, containerization, parallelization, VQE, GPU usage and more.

Where do you see HPC headed? What trends and in particular emerging trends do you find most notable? Any areas you are concerned about, or identify as in need of more attention/investment?

As for upcoming trends in the world of supercomputing, I see a few low-noise trends: some technological barriers that may trigger drastic changes, and some emerging technologies that may have a large impact on how we do HPC in the future. Most players, and Atos more specifically, are looking into quantum hybridization and decarbonization, which will open many doors in the near future.

Up to this point, the HPC environment has been quite conservative. I believe that administrators are starting to see the benefits of orchestration and microservice-based cluster management. There are some obstacles, but I do see more merits than issues in containerizing and orchestrating HPC workloads. There are some rising technological barriers that may push our industry into a corner, while at the same time giving us opportunities to change the way we architect our systems.

High performance, low latency networks make massive use of copper cables. With higher data rates (400Gb/s in 2022 and 800Gb/s in 2025), the workable copper cable length will be divided by four, replaced by active or fiber cables, with cabling costs certainly increasing by 5 or 6x. This is clearly an obstacle for systems in the 25,000-endpoint range, with cabling budgets in the tens of millions.

This very simple problem may impose a paradigm shift in the way devices, from a general standpoint, are connected and communicate together. This triggers deeper architectural design-point changes, from racks to nodes and down to elements that are deeply integrated today, such as compute cores, buses, memory and associated controllers, and switches. I won't say the 800Gb/s step alone will change everything, but the maturity of some technologies, such as silicon photonics, and the emerging standardization on very powerful protocols like CXL will enable a lot more flexibility while continuing to push the limits. Also, note that CXL is just in its infancy, but it already shows promise for a memory-coherent space between heterogeneous devices, centralized or distributed, mono- or multi-tenant memory pools.

Silicon photonic integrated circuits (PICs), because they theoretically offer Tb/s bandwidth through native fiber connection, should allow a real disaggregation between devices that are today very tightly connected together on PCBs that are more complex and more expensive than ever.

What will be possible inside a node will be possible outside of it, blurring the traditional frontier between a node, a blade, a rack and a supercomputer, offering a world of possibilities and new architectures.

The market is probably not fully interested in finding an alternative to the ultra-dominance of Linpack, or in its impact on how we imagine, engineer, size and deliver our supercomputers. Ultimately, how relevant is its associated ranking to real-life problems? I wish we could initiate a trend that ranks global system efficiency versus available peak power. This would help HPC players to consider working on all optimization paths rather than piling on more and more compute power.
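Such a ranking has a well-known precedent in the Green500's performance-per-watt metric. As a back-of-the-envelope illustration (with made-up numbers, not real submissions):

```python
# Rank systems by delivered Linpack performance per watt rather than
# raw Rmax. All figures below are invented for illustration.
systems = {
    "System A": {"rmax_pflops": 442.0, "power_mw": 29.9},
    "System B": {"rmax_pflops": 148.6, "power_mw": 10.1},
    "System C": {"rmax_pflops": 63.5,  "power_mw": 2.6},
}

def gflops_per_watt(s):
    # PFlops -> GFlops is x1e6; MW -> W is x1e6, so the factors cancel.
    return s["rmax_pflops"] * 1e6 / (s["power_mw"] * 1e6)

ranked = sorted(systems.items(), key=lambda kv: -gflops_per_watt(kv[1]))
for name, s in ranked:
    print(f"{name}: {gflops_per_watt(s):.1f} GF/W")
```

Under such a metric, the smallest machine can outrank the biggest one, which is precisely the point about rewarding efficiency over piled-up compute.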

Lastly, I am concerned by the fact that almost nothing has changed in the last 30 years in how applications interact with data. Well, HPC certainly uses faster devices. We now have clustered shared file systems like Lustre. Also, we have invented object-oriented key and value abstractions, but in reality storage subsystems are most of the time centralized. They are connected to the high-speed fabric. They are also oversized to absorb checkpoints from an ever-growing node count, while in the nominal regime they only use a portion of the available bandwidth. Ultimately, with workloads by nature spread across the fabric, most of the power consumption comes from I/O.

However, it's time to change this situation. There are some possible avenues, and as a side effect they will improve the global efficiency of HPC workloads, and hence the sustainability and the value of HPC solutions.

More generally, what excites you about working in high-performance computing?

I've always loved to learn and be intellectually stimulated, especially in my career environment. High performance computing, along with AI and now quantum, gives me more constant food for thought, and more options to solve big problems, than I will ever be able to absorb.

I appreciate pushing the limits every day, driving the Atos portfolio and setting the directions, ultimately helping our customers to solve their toughest problems. This is really rewarding for me and our Atos team. I'm never satisfied, but I'm very proud of what we have achieved together, bringing Atos into the top four ranking worldwide in supercomputers.

What led you to pursue a career in the computing field and what are your suggestions for engaging the next generation of IT professionals?

I've always been interested in technology, initially attracted by everything that either flew or sailed. Really, I'm summarizing this as everything that plays with the wind. In my teenage years, after experiencing sailboards and gliders, I was fortunate enough to have access to my first computer in late 1979, when I was 16. My field of vision prevented me from being a commercial pilot, so I started pursuing a software engineering master's degree that led me into the information technology world.

When I began my career in IT, I was not planning any specific path to a specific domain. I simply took all opportunities to learn a new domain, worked hard to succeed, and jumped to something new that excited me. In my first position, I was lucky enough to work on an IBM mainframe doing CAD with some software development, as well as embracing a fully unknown system engineering role that I had to learn from scratch. Very educational! I jumped from developing in Fortran and doing system engineering on VM/SP and Unix. Then I learned Oracle RDBMS and the Internet at Intergraph, and HPC servers and storage at SGI. I pursued my own startups, and now I'm leading the HPC, AI and quantum portfolio at Atos.

What I would tell the next generation of IT professionals for their career is to:

First, only take roles in which you will learn new things. It could be managerial, financial, technical; it doesn't matter. To evolve in your future career, the more diverse experience you have, the better you will be able to react and be effective. Move to another role when you are not learning anymore or if you have been far too long in your comfort zone.

Second, look at problems to solve, think out of the box and with a 360-degree vision. Break the barriers, and change the angle of view to give new perspectives and solutions to your management and customers.

Also, compensation is important, but it's not everything. What you will do, how it will make you happy in your life, and what you will achieve professionally are more important. Ultimately, compare your salary with the free time that remains to spend with your family and friends. Lastly, compensation is not always an indicator of success; changing the world for the better and making our planet a better place to live is the most important benefit you will find in high performance computing.

Outside of the professional sphere, what can you tell us about yourself: family stories, unique hobbies, favorite places, etc.? Is there anything about you your colleagues might be surprised to learn?

Together with my wife, we are the proud parents of two beautiful adult daughters. We also have our three-year-old bombshell of a Jack Russell, named Pepsy, who brings a lot of energy to our house.

We live northwest of Paris in a small city on the Seine river. I'm still a private pilot and still cruise sailboats with family and friends. I recently participated in the ARC 2021 transatlantic race with three friends on a trimaran, a real challenge and a great experience. Soon, we're off to visit Scotland for a family vacation!

Eppe is one of 12 HPCwire People to Watch for 2022. You can read the interviews with the other honorees at this link.

Read the original here:

Q&A with Atos' Eric Eppe, an HPCwire Person to Watch in 2022 - HPCwire

Posted in Quantum Computing | Comments Off on Q&A with Atos’ Eric Eppe, an HPCwire Person to Watch in 2022 – HPCwire

@HPCpodcast: Satoshi Matsuoka on the TOP500, Fugaku and Arm, Quantum and Winning Japan’s Purple Ribbon Medal of Honor – insideHPC – insideHPC

Posted: at 2:06 am

Satoshi Matsuoka

An eminent figure in the HPC community, Prof. Satoshi Matsuoka, director of the RIKEN Center for Computational Science (R-CCS) and professor of computer science at Tokyo Institute of Technology, joined our @HPCpodcast for a far-ranging discussion of supercomputing past, present and future.

At RIKEN, Matsuoka has overseen development of Fugaku, number 1 on the TOP500 list of the world's most powerful supercomputers (the list will be updated next week during the ISC 2022 conference in Hamburg; as of now it's not known if Fugaku will retain its position). Previously, Matsuoka was lead developer of another well-known supercomputer, TSUBAME, the most powerful supercomputer in Japan at the time.

He also is a recent winner of the Purple Ribbon Medal, one of Japan's highest honors, and in our conversation Matsuoka explains why the award ceremony did not include the usual presence of the Emperor of Japan. That's how our discussion starts; other topics are time stamped below:

start The Purple Ribbon Medal of Honor

2:15 The role of Japan in supercomputing

3:45 TOP500 and ORNL's exascale system

5:00 Fugaku and Arm

8:00 Why not SPARC

11:30 The balance and beauty of Fugaku and its predecessor, the K-Computer

15:15 Notable applications of Fugaku, including Covid research

25:00 Future of supercomputing and what's next after Fugaku

31:45 FPGA and CGRA

36:00 Quantum Computing

40:30 Nintendo days and working with the late, great Satoru Iwata

48:30 Pursuit of perfection, with a mention of the movie Jiro Dreams of Sushi

You can find our podcasts at insideHPC's @HPCpodcast page, on Twitter and at the OrionX.net blog. Here's the RSS feed.

Read the original:

@HPCpodcast: Satoshi Matsuoka on the TOP500, Fugaku and Arm, Quantum and Winning Japan's Purple Ribbon Medal of Honor - insideHPC - insideHPC

Posted in Quantum Computing | Comments Off on @HPCpodcast: Satoshi Matsuoka on the TOP500, Fugaku and Arm, Quantum and Winning Japan’s Purple Ribbon Medal of Honor – insideHPC – insideHPC

Could quantum computing bring down Bitcoin and end the age of crypto? – OODA Loop

Posted: at 2:06 am

Quantum computers will eventually break much of today's encryption, and that includes the signing algorithm of Bitcoin and other cryptocurrencies. Approximately one-quarter of the Bitcoin ($168bn) in circulation in 2022 is vulnerable to quantum attack, according to a study by Deloitte.

Cybersecurity specialist Itan Barmes led the vulnerability study of the Bitcoin blockchain. He found the level of exposure that a large enough quantum computer would have on the Bitcoin blockchain presents a systemic risk. "If [4 million] coins are eventually stolen in this way, then trust in the system will be lost and the value of Bitcoin will probably go to zero," he says.

Today's cryptocurrency market is valued at approximately $3trn and Bitcoin reached an all-time high of more than $65,000 per coin in 2021, making crypto the best-performing asset class of the past ten years, according to Gemini's Global State of Crypto report for 2022. However, Bitcoin's bumpy journey into mainstream investor portfolios coincides with major advances in quantum computing.

Full story : Could quantum computing bring down Bitcoin and end the age of crypto?

Excerpt from:

Could quantum computing bring down Bitcoin and end the age of crypto? - OODA Loop

Posted in Quantum Computing | Comments Off on Could quantum computing bring down Bitcoin and end the age of crypto? – OODA Loop