
Category Archives: Ai

Analysis | House AI task force leaders take long view on regulating the tools – The Washington Post

Posted: March 8, 2024 at 6:25 am

Happy Thursday! Maybe this year we'll get lucky and the State of the Union will feature a discussion of intermediary liability or duty of care standards. Send predictions and observations to: cristiano.lima@washpost.com.

House leaders took a key step toward hatching a game plan on artificial intelligence last month by launching a new bipartisan task force, which will issue recommendations for how Congress could boost AI innovation while keeping the tools in check.

But the lawmakers leading the effort told The Technology 202 in a joint interview that implementing a full response will probably be a lengthy undertaking as they consider the technology's vast impact across elections, national security, the economy and more.

Rep. Jay Obernolte (R-Calif.), who was tapped by House leaders to chair the group, pointed to Europe's efforts to agree on a comprehensive AI law as a cautionary tale.

"If you look at the attempts in Europe to create an omnibus bill for the regulation of AI, you'll see some of the fallacies in that," said Obernolte, one of the few lawmakers with computer science bona fides. "They've had to rewrite that bill several times as the face of AI has changed."

"We don't envision a 5,000-page bill that deals with 57 topics and then we're done with AI," said Rep. Ted Lieu (D-Calif.), the task force's co-chair. "It's going to be a multiyear process, and there'll be a variety of different bills that try to tackle different aspects of AI."

The task force is set to release a report by the end of the year, but that doesn't preclude more immediate legislative action on discrete issues, Obernolte and Lieu said.

Like Senate Majority Leader Charles E. Schumer (D-N.Y.), Obernolte pointed to the risks that AI-generated content poses to elections as one area with potential for fast action.

"There should be broad bipartisan agreement that no one should be allowed to impersonate a candidate with AI, so we're going to be looking at what we can do to tighten up the regulations to try and prevent that," he said.

Lieu seconded the sentiment and floated the idea of criminal and civil enhancements to make fines or jail time steeper if certain crimes are perpetrated using AI.

"One way to provide more deterrence is to say, look, if you use AI to impersonate a voice that defrauds someone, [that] would enhance the punishment that you may get," he said.

Obernolte said he's hopeful that Congress will prioritize taking up the Create AI Act, which aims to fully stand up the National Artificial Intelligence Research Resource (NAIRR). The White House in January launched a pilot version of the center, which is set to run for two years.

In the Senate, Schumer has come under fire from some of his colleagues for keeping his series of AI insight forums closed to the public. (In response, he has noted that the chamber has held many public committee hearings on AI over the years.)

In the House, Obernolte and Lieu said they are planning to have both public and private sessions to dig into the many facets of AI.

"We want to have open meetings in a traditional hearing format to make sure that we're being transparent with the public," Obernolte said. "But we're also going to have some closed meetings because it's very important to me that everyone feels comfortable asking questions that could come off as ignorant."

While Schumer's bipartisan AI working group has yet to unveil any proposals or legislative text, he predicted in June that there would be action from the Senate in "months, not years."

House leaders, meanwhile, did not launch the task force until nearly a year after Schumer unveiled his plans, prompting concern from some members that the chamber was absent from the debate.

Obernolte and Lieu pushed back on those suggestions.

"We're going to chip away at this over the next several years, and we can do that because there are short-term harms, medium-term harms and long-term harms that need to be mitigated," Obernolte said. "I don't think that that's inconsistent with what the Senate is doing at all."

Their offices have had informal contacts over the last year with the leaders of Schumer's working group, he added, saying "they are very aware that we want to work with them and I think they're very open to working with us." Lieu agreed: "We're just getting started."

That's all for today; thank you so much for joining us! Make sure to tell others to subscribe to The Technology 202 here. Get in touch with Cristiano (via email or social media) and Will (via email or social media) for tips, feedback or greetings!


Don’t Give Your Business Data to AI Companies – Dark Reading

Posted: at 6:25 am

COMMENTARY

Artificial intelligence (AI) is challenging our preexisting ideas of what's possible with technology. AI's transformative potential could upend a variety of diverse tasks and business scenarios by applying computer vision and large vision models (LVMs) to usher in a new age of efficiency and innovation.

Yet, as businesses embrace the promises of AI, they encounter a common peril: Every AI company seems to have an insatiable appetite for the world's precious data. These companies are eager to train their proprietary AI models using any available images and videos, employing tactics that sometimes involve inconveniencing users, like CAPTCHAs making you identify traffic lights. Unfortunately, this clandestine approach has become the standard playbook for many AI providers, enticing customers to unwittingly surrender their data and intellectual contributions, only to be monetized by these companies.

This isn't an isolated incident confined to a single bad apple in the industry. Even well-known companies such as Dropbox and GitHub have faced accusations. And while Zoom has since shifted its stance on data privacy, such exceptions merely underscore the norm within the industry.

Handing over your business data to AI companies comes with inherent risks. Why should you help train models that may ultimately benefit your competitors? Moreover, in instances where the application of AI could contribute to societal well-being, such as identifying wildfires or enhancing public safety, why should such data be confined to the exclusive benefit of a few tech giants? The potential benefits of freely sharing and collaboratively improving such data should be harnessed by communities worldwide, not sequestered within the vaults of a select few tech corporations.

To address these concerns, transparency is the key. AI companies should be obligated to clearly outline how they intend to use your data and for what specific purposes. This transparency will empower businesses to make informed decisions about the fate of their data and guard against exploitative practices.

In addition, businesses should maintain control over how their data is used. Granting AI companies unrestricted access risks unintended consequences and compromises privacy. Companies must be able to assert their authority in dictating the terms under which their data is used, ensuring alignment with their values and objectives.

Permission should be nonnegotiable. AI companies must seek explicit consent from businesses before utilizing their data. This not only upholds ethical standards but also establishes a foundation of trust between companies and AI providers.

Lastly, businesses aren't just data donors; they are contributors to the development and refinement of AI models. They deserve compensation for the use of their data. A fair and equitable system should be in place, acknowledging the value businesses bring to the further development of AI models.

The responsibility lies with businesses to safeguard their data and interests. A collective demand for transparency, control, permission, and fair compensation can pave the way for an era in which AI benefits businesses and society at large, fostering collaboration and innovation while safeguarding against the pitfalls of unchecked data exploitation.

Don't surrender your business data blindly; demand a future where AI works for you, not the other way around.


NIST, the lab at the center of Biden's AI safety push, is decaying – The Washington Post

Posted: at 6:25 am

At the National Institute of Standards and Technology, the government lab overseeing the most anticipated technology on the planet, black mold has forced some workers out of their offices. Researchers sleep in their labs to protect their work during frequent blackouts. Some employees have to carry hard drives to other buildings; flaky internet won't allow for the sending of large files.

And a leaky roof forces others to break out plastic sheeting.

"If we knew rain was coming, we'd tarp up the microscope," said James Fekete, who served as chief of NIST's applied chemicals and materials division until 2018. "It leaked enough that we were prepared."

NIST is at the heart of President Biden's ambitious plans to oversee a new generation of artificial intelligence models; through an executive order, the agency is tasked with developing tests for security flaws and other harms. But budget constraints have left the 123-year-old lab with a skeletal staff on key tech teams and most facilities on its main Gaithersburg, Md., and Boulder, Colo., campuses below acceptable building standards.

Interviews with more than a dozen current and former NIST employees, Biden administration officials, congressional aides and tech company executives, along with reports commissioned by the government, detail a massive resources gap between NIST and the tech firms it is tasked with evaluating, a discrepancy some say risks undermining the White House's ambitious plans to set guardrails for the burgeoning technology. Many of the people spoke to The Washington Post on the condition of anonymity because they were not authorized to speak to the media.

Even as NIST races to set up the new U.S. AI Safety Institute, the crisis at the degrading lab is becoming more acute. On Sunday, lawmakers released a new spending plan that would cut NIST's overall budget by more than 10 percent, to $1.46 billion. While lawmakers propose to invest $10 million in the new AI institute, that's a fraction of the tens of billions of dollars tech giants like Google and Microsoft are pouring into the race to develop artificial intelligence. It pales in comparison to Britain, which has invested more than $125 million into its AI safety efforts.

"The cuts to the agency are a self-inflicted wound in the global tech race," said Divyansh Kaushik, the associate director for emerging technologies and national security at the Federation of American Scientists.

Some in the AI community worry that underfunding NIST makes it vulnerable to industry influence. Tech companies are chipping in for the expensive computing infrastructure that will allow the institute to examine AI models. Amazon announced that it would donate $5 million in computing credits. Microsoft, a key investor in OpenAI, will provide engineering teams along with computing resources. (Amazon founder Jeff Bezos owns The Post.)

Tech executives, including OpenAI CEO Sam Altman, are regularly in communication with officials at the Commerce Department about the agency's AI work. OpenAI has lobbied NIST on artificial intelligence issues, according to federal disclosures. NIST asked TechNet, an industry trade group whose members include OpenAI, Google and other major tech companies, if its member companies can advise the AI Safety Institute.

NIST is also seeking feedback from academics and civil society groups on its AI work. The agency has a long history of working with a variety of stakeholders to gather input on technologies, Commerce Department spokesman Charlie Andrews said.

AI staff, unlike their more ergonomically challenged colleagues, will be working in well-equipped offices in the Gaithersburg campus, the Commerce Departments D.C. office and the NIST National Cybersecurity Center of Excellence in Rockville, Md., Andrews said.

White House spokeswoman Robyn Patterson said the appointment of Elizabeth Kelly to the helm of the new AI Safety Institute underscores the White House's commitment to getting this work done right and on time. Kelly previously served as special assistant to the president for economic policy.

"The Biden-Harris administration has so far met every single milestone outlined by the president's landmark executive order," Patterson said. "We are confident in our ability to continue to effectively and expeditiously meet the milestones and directives set forth by President Biden to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond."

NIST's financial struggles highlight the limitations of the administration's plan to regulate AI exclusively through the executive branch. Without an act of Congress, there is no new funding for initiatives like the AI Safety Institute, and the programs could be easily overturned by the next president. And as the presidential elections approach, the prospects of Congress moving on AI in 2024 are growing dim.

During his State of the Union address on Thursday, Biden called on Congress to "harness the promise of AI and protect us from its peril."

Congressional aides and former NIST employees say the agency has not been able to break through as a funding priority even as lawmakers increasingly tout its role in addressing technological developments, including AI, chips and quantum computing.

After this article was published, Senate Majority Leader Charles E. Schumer (D-N.Y.) on Thursday touted the $10 million investment in the institute in the proposed budget, saying he "fought for this funding to make sure that the development of AI prioritizes both innovation and safety."

A review of NIST's safety practices in August found that the budgetary issues endanger employees, alleging that the agency has an "incomplete and superficial approach" to safety.

"Chronic underfunding of the NIST facilities and maintenance budget has created unsafe work conditions and further fueled the impression among researchers that safety is not a priority," said the NIST safety commission report, which was commissioned following the 2022 death of an engineering technician at the agency's fire research lab.

NIST is one of the federal government's oldest science agencies, with one of the smallest budgets. Initially called the National Bureau of Standards, it began at the dawn of the 20th century, as Congress realized the need to develop more standardized measurements amid the expansion of electricity, the steam engine and railways.

The need for such an agency was underscored three years after its founding, when fire ravaged Baltimore. Firefighters from Washington, Philadelphia and even New York rushed to help put out the flames, but without standard couplings, their hoses couldn't connect to the Baltimore hydrants. The firefighters watched as the flames overtook more than 70 city blocks in 30 hours.

NIST developed a standard fitting, unifying more than 600 different types of hose couplings deployed across the country at the time.

Ever since, the agency has played a critical role in using research and science to help the country learn from catastrophes and prevent new ones. Its work expanded after World War II: It developed an early version of the digital computer, crucial Space Race instruments and atomic clocks, which underpin GPS. In the 1950s and 1960s, the agency moved to new campuses in Boulder and Gaithersburg after its early headquarters in Washington fell into disrepair.

Now, scientists at NIST joke that they work at the most advanced labs in the world, circa the 1960s. Former employees describe cutting-edge scientific equipment surrounded by decades-old buildings that make it impossible to control the temperature or humidity needed to conduct critical experiments.

"You see dust everywhere because the windows don't seal," former acting NIST director Kent Rochford said. "You see a bucket catching drips from a leak in the roof. You see Home Depot dehumidifiers or portable AC units all over the place."

The flooding was so bad that Rochford said he once requested money for scuba gear. That request was denied, but he did receive funding for an emergency kit that included squeegees to clean up water.

Pests and wildlife have at times infiltrated its campuses, including an incident where a garter snake entered a Boulder building.

More than 60 percent of NIST facilities do not meet federal standards for acceptable building conditions, according to a February 2023 report commissioned by Congress from the National Academies of Sciences, Engineering and Medicine. The poor conditions impact employee output. Workarounds and do-it-yourself repairs reduce the productivity of research staff by up to 40 percent, according to the committees interviews with employees during a laboratory visit.

Years after Rochford's 2018 departure, NIST employees are still deploying similar MacGyver-style workarounds. Each year between October and March, low humidity in one lab creates a static charge, making it impossible to operate an instrument that ensures companies meet environmental standards for greenhouse gases.

Problems with the HVAC and specialized lights have made the agency unable to meet demand for reference materials, which manufacturers use to check whether their measurements are accurate in products like baby formula.

Facility problems have also delayed critical work on biometrics, including evaluations of facial recognition systems used by the FBI and other law enforcement agencies. The data center in the 1966 building that houses that work receives inadequate cooling, and employees there spend about 30 percent of their time trying to mitigate problems with the lab, according to the academies' reports. Scheduled outages are required to maintain the data centers that hold technology work, knocking all biometric evaluations offline for a month each year.

Fekete, the scientist who recalled covering the microscope, said his team's device never completely stopped working due to rainwater.

But other NIST employees havent been so lucky. Leaks and floods destroyed an electron microscope worth $2.5 million used for semiconductor research, and permanently damaged an advanced scale called a Kibble balance. The tool was out of commission for nearly five years.

Despite these constraints, NIST has built a reputation as a natural interrogator of swiftly advancing AI systems.

In 2019, the agency released a landmark study confirming facial recognition systems misidentify people of color more often than White people, casting scrutiny on the technology's popularity among law enforcement. Due to personnel constraints, only a handful of people worked on that project.

Four years later, NIST released early guidelines around AI, cementing its reputation as a government leader on the technology. To develop the framework, the agency connected with leaders in industry, civil society and other groups, earning a strong reputation among numerous parties as lawmakers began to grapple with the swiftly evolving technology.

The work made NIST a natural home for the Biden administrations AI red-teaming efforts and the AI Safety Institute, which were formalized in the November executive order. Vice President Harris touted the institute at the U.K. AI Safety Summit in November. More than 200 civil society organizations, academics and companies including OpenAI and Google have signed on to participate in a consortium within the institute.

OpenAI spokeswoman Kayla Wood said in a statement that the company supports NIST's work, and that the company plans to continue to work with the lab to "support the development of effective AI oversight measures."

Under the executive order, NIST has a laundry list of initiatives that it needs to complete by this summer, including publishing guidelines for how to red-team AI models and launching an initiative to guide evaluating AI capabilities. In a December speech at the machine learning conference NeurIPS, the agency's chief AI adviser, Elham Tabassi, said this would be "an almost impossible deadline."

"It is a hard problem," said Tabassi, who was recently named the chief technology officer of the AI Safety Institute. "We don't know quite how to evaluate AI."

"The NIST staff has worked tirelessly to complete the work it is assigned by the AI executive order," said Andrews, the Commerce spokesperson.

"While the administration has been clear that additional resources will be required to fully address all of the issues posed by AI in the long term, NIST has been effectively carrying out its responsibilities under the [executive order] and is prepared to continue to lead on AI-related research and other work," he said.

Commerce Secretary Gina Raimondo asked Congress to allocate $10 million for the AI Safety Institute during an event at the Atlantic Council in January. The Biden administration also requested more funding for NIST facilities, including $262 million for safety, maintenance and repairs. Congressional appropriators responded by cutting NISTs facilities budget.

The administration's ask falls far below the recommendations of the national academies' study, which urged Congress to provide $300 million to $400 million in additional annual funding over 12 years to overcome a backlog of facilities damage. The report also calls for $120 million to $150 million per year for the same period to stabilize the effects of further deterioration and obsolescence.

Ross B. Corotis, who chaired the academies committee that produced the facilities report, said Congress needs to ensure that NIST is funded because it is the go-to lab when any new technology emerges, whether that's chips or AI.

"Unless you're going to build a whole new laboratory for some particular issue, you're going to turn first to NIST," Corotis said. "And NIST needs to be ready for that."

Eva Dou and Nitasha Tiku contributed to this report.


Essay | AI is Coming! Tips for Staying Calm and Carrying On – The Wall Street Journal

Posted: at 6:25 am


AI can be easily used to make fake election photos – report – BBC.com

Posted: at 6:25 am


5 Artificial Intelligence (AI) Stocks That Could Make You a Millionaire – Yahoo Finance

Posted: at 6:25 am


AI could be an extraordinary force for good. So why do our politicians still not have a plan? – The Guardian

Posted: at 6:25 am


Mapping Disease Trajectories from Birth to Death with AI – Neuroscience News

Posted: at 6:25 am

Summary: Researchers mapped disease trajectories from birth to death, analyzing over 44 million hospital stays in Austria to uncover patterns of multimorbidity across different age groups.

Their groundbreaking study identified 1,260 distinct disease trajectories, revealing critical moments where early and personalized prevention could significantly alter a patient's health outcome. For instance, young men with sleep disorders showed two different paths, indicating varying risks for developing metabolic or movement disorders later in life.

These insights provide a powerful tool for healthcare professionals to implement targeted interventions, potentially easing the growing healthcare burden due to an aging population and improving individuals' quality of life.


Source: CSH

The world population is aging at an increasing pace. According to the World Health Organization (WHO), in 2023, one in six people were over 60 years old. By 2050, the number of people over 60 is expected to double to 2.1 billion.

"As age increases, the risk of multiple, often chronic diseases occurring simultaneously, known as multimorbidity, significantly rises," explains Elma Dervic from the Complexity Science Hub (CSH). "Given the demographic shift we are facing, this poses several challenges."

On one hand, multimorbidity diminishes the quality of life for those affected. On the other hand, this demographic shift creates a massive additional burden for healthcare and social systems.

Identifying typical disease trajectories

"We wanted to find out which typical disease trajectories occur in multimorbid patients from birth to death and which critical moments in their lives significantly shape the further course. This provides clues for very early and personalized prevention strategies," explains Dervic.

Together with researchers from the Medical University of Vienna, Dervic analyzed all hospital stays in Austria between 2003 and 2014, totaling around 44 million. To make sense of this vast amount of data, the team constructed multilayered networks: each ten-year age group forms a layer, and each diagnosis is represented by a node within that layer.

Using this method, the researchers were able to identify correlations between different diseases among different age groups; for example, how frequently obesity, hypertension and diabetes occur together in 20-29-year-olds, and which diseases have a higher risk of occurring after them in patients' 30s, 40s or 50s.
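The layered construction described above can be sketched in a few lines of Python. Everything here is illustrative: the toy hospital records, diagnosis names and the pair-counting shortcut are assumptions for the sake of the example, not the study's actual data or code.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy records: (patient_id, age_at_stay, diagnosis).
stays = [
    (1, 24, "obesity"), (1, 27, "hypertension"), (1, 35, "diabetes"),
    (2, 22, "obesity"), (2, 28, "diabetes"),
    (3, 45, "hypertension"), (3, 47, "diabetes"),
]

def age_layer(age):
    """Map an age to its ten-year layer, e.g. 24 -> '20-29'."""
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

# A node is (layer, diagnosis). Collect each patient's nodes, then count
# how often two nodes co-occur in the same patient. Pairs within one
# layer are intra-layer links; pairs spanning layers are inter-layer.
per_patient = defaultdict(set)
for pid, age, dx in stays:
    per_patient[pid].add((age_layer(age), dx))

cooccur = defaultdict(int)
for nodes in per_patient.values():
    for a, b in combinations(sorted(nodes), 2):
        cooccur[(a, b)] += 1

# Intra-layer: obesity and diabetes both diagnosed in patients' 20s.
print(cooccur[(("20-29", "diabetes"), ("20-29", "obesity"))])  # -> 1
# Inter-layer: hypertension in the 20s followed by diabetes in the 30s.
print(cooccur[(("20-29", "hypertension"), ("30-39", "diabetes"))])  # -> 1
```

On real data, the raw co-occurrence counts would then be filtered by significance and relative risk before being treated as network links.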

The team identified 1,260 different disease trajectories (618 in women and 642 in men) over a 70-year period. "On average, one of these disease trajectories includes nine different diagnoses, highlighting how common multimorbidity actually is," emphasizes Dervic.

Critical moments

In particular, 70 trajectories have been identified where patients exhibited similar diagnoses in their younger years, but later evolved into significantly different clinical profiles.

"If these trajectories, despite similar starting conditions, significantly differ later in life in terms of severity and the corresponding required hospitalizations, this is a critical moment that plays an important role in prevention," says Dervic.

Men with sleep disorders

The model, for instance, shows two typical trajectory paths for men between 20 and 29 years old who suffer from sleep disorders. In trajectory A, metabolic diseases such as diabetes mellitus, obesity, and lipid disorders appear years later. In trajectory B, movement disorders occur, among other conditions.

This suggests that organic sleep disorders could be an early marker for the risk of developing neurodegenerative diseases such as Parkinson's disease.

"If someone suffers from sleep disorders at a young age, that can be a critical event prompting doctors' attention," explains Dervic.

The results of the study show that patients who follow trajectory B spend nine days less in hospital in their 20s but 29 days longer in hospital in their 30s and also suffer from more additional diagnoses. As sleep disorders become more prevalent, the distinction in the course of their illnesses not only matters for those affected but also for the healthcare system.

Women with high blood pressure

Similarly, when adolescent girls between the ages of ten and nineteen have high blood pressure, their trajectory varies as well. While some develop additional metabolic diseases, others experience chronic kidney disease in their twenties, leading to increased mortality at a young age.

This is of particular clinical importance as childhood hypertension is on the rise worldwide and is closely linked to the increasing prevalence of childhood obesity.

There are specific trajectories that deserve special attention and should be monitored closely, according to the authors of the study.

"With these insights derived from real-life data, doctors can monitor various diseases more intensively and implement targeted, personalized preventive measures decades before serious problems arise," explains Dervic.

By doing so, they are not only reducing the burden on healthcare systems, but also improving patients' quality of life.

Author: Eliza Muto
Source: CSH
Contact: Eliza Muto, CSH
Image: The image is credited to Neuroscience News

Original Research: Open access. "Unraveling cradle-to-grave disease trajectories from multilayer comorbidity networks" by Elma Dervic et al. npj Digital Medicine

Abstract

Unraveling cradle-to-grave disease trajectories from multilayer comorbidity networks

We aim to comprehensively identify typical life-spanning trajectories and critical events that impact patients hospital utilization and mortality. We use a unique dataset containing 44 million records of almost all inpatient stays from 2003 to 2014 in Austria to investigate disease trajectories.

We develop a new, multilayer disease network approach to quantitatively analyze how cooccurrences of two or more diagnoses form and evolve over the life course of patients. Nodes represent diagnoses in age groups of ten years; each age group makes up a layer of the comorbidity multilayer network.

Intra-layer links encode a significant correlation between diagnoses within the same age group (p<0.001, relative risk>1.5), while inter-layer links encode correlations between diagnoses across different age groups. We use an unsupervised clustering algorithm for detecting typical disease trajectories as overlapping clusters in the multilayer comorbidity network.
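The relative-risk threshold for keeping a link admits a compact sketch. The counts and the helper function below are hypothetical, purely to illustrate the filter; the study additionally requires statistical significance (p<0.001), which in practice would come from a test such as Fisher's exact test on the same 2x2 counts.

```python
def relative_risk(n_both, n_a, n_b, n_total):
    """RR of diagnosis B among patients with A, vs. B in the whole cohort."""
    p_b_given_a = n_both / n_a   # P(B | A)
    p_b = n_b / n_total          # P(B) overall
    return p_b_given_a / p_b

# Toy numbers: 10,000 patients, 500 with diagnosis A, 800 with B, 120 with both.
rr = relative_risk(120, 500, 800, 10_000)
print(round(rr, 2))  # -> 3.0: B is three times likelier among A patients

# Keep the link only if the relative risk clears the threshold.
keep_link = rr > 1.5
print(keep_link)  # -> True
```

With both criteria applied, links represent disease pairs that co-occur well above chance, rather than pairs that are merely both common.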

We identify critical events in a patient's career as points where initially overlapping trajectories start to diverge towards different states. We identified 1,260 distinct disease trajectories (618 for females, 642 for males) that on average contain 9 (IQR 26) different diagnoses and cover up to 70 years (mean 23 years).

We found 70 pairs of diverging trajectories that share some diagnoses at younger ages but develop into markedly different groups of diagnoses at older ages. The disease trajectory framework can help us to identify critical events as specific combinations of risk factors that put patients at high risk for different diagnoses decades later.

Our findings enable a data-driven integration of personalized life-course perspectives into clinical decision-making.


India plans 10,000-GPU sovereign AI supercomputer – The Register

Posted: at 6:25 am


SAP enhances Datasphere and SAC for AI-driven transformation – CIO

Posted: at 6:25 am

SAP announced today a host of new AI copilot and AI governance features for SAP Datasphere and SAP Analytics Cloud (SAC). Jurgen Mueller, SAP CTO and executive board member, called the innovations, which include an expanded partnership with data governance specialist Collibra, a quantum leap in the company's ability to help customers drive intelligent business transformation through data.

"SAP is executing on a roadmap that brings an important semantic layer to enterprise data, and creates the critical foundation for implementing AI-based use cases," said analyst Robert Parker, SVP of industry, software, and services research at IDC.

SAP unveiled Datasphere a year ago as a comprehensive data service, built on SAP Business Technology Platform (BTP), to provide a unified experience for data integration, data cataloging, semantic modeling, data warehousing, data federation, and data virtualization. At SAP Datasphere's core is the concept of the business data fabric, a data management architecture delivering an integrated, semantically rich data layer over the existing data landscape, and providing seamless and scalable access to data without duplication while retaining business context and logic.

With today's announcements, SAP is building on that vision. The company is expanding its partnership with Collibra to integrate Collibra's AI Governance platform with SAP data assets to facilitate data governance for non-SAP data assets in customer environments.

"We have cataloging inside Datasphere: It allows you to catalog and manage metadata for all the SAP data assets we're seeing," said JG Chirapurath, chief marketing and solutions officer for SAP. "We are also seeing customers bringing in other data assets from other apps or data sources. In this model, it doesn't make sense for us to say our catalog has to understand all of these corpuses of data. Collibra does a fantastic job of understanding it."

The expanded partnership gives customers the ability to use Collibra as a catalog of catalogs, with Datasphere's catalog also managed by the Collibra platform.

