
Category Archives: Technology

Marketing Technology Highlights of The Week: Featuring WPP – MarTech Series

Posted: February 7, 2022 at 6:55 am

Google updated its brand logo, and there's a lot happening for OTTs and streaming platforms that is changing the online marketing and ad experience for both B2B and B2C teams. What's the latest that is redefining martech and marketing? Catch more in this weekly highlight:

__________

Because online customer experiences are in high demand, brands have to find ways to stand out from their competitors who are trying to do the same thing. To do this, brands need to prioritize a great user experience by eliminating as much friction as possible from digital channels. This means having expedited purchasing processes, one-click sign-ups and pre-filled forms. Qualitative research methods, like online focus groups, are also important for gathering customer sentiment and firsthand knowledge on what they value in an experience, helping to build a better one.

Rick Kelly, CPO at Fuel Cycle

As a marketing team, how do you know the emails and SMS messages you send will be relevant to each recipient? Will people be receptive to the communications, or will they opt out? What about your website, mobile wallet offers, and social ads? Will they resonate with each customer's interests, desires, or values? How can you be sure?

The only way to know is to ask. Customers will freely opt to share their preferences, interests and other zero-party data with a brand in exchange for a more personalized experience. Yelp, for example, asks people for their food, dietary, and lifestyle preferences. When a user tells Yelp that they're a vegetarian, Yelp immediately provides them with vegetarian-friendly recommendations rather than steakhouses.

Wendell Lansford, Co-founder at Wyng
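The mechanics behind that kind of zero-party personalization can be as simple as filtering a catalog against preferences a user has explicitly declared. Below is a minimal sketch of the idea; the data and function names are hypothetical illustrations, not Yelp's or Wyng's actual systems.

```python
# Minimal sketch of zero-party-data personalization: filter recommendations
# by preferences a user has explicitly shared. All names are hypothetical.

RESTAURANTS = [
    {"name": "Green Table", "tags": {"vegetarian", "vegan"}},
    {"name": "Prime Cut", "tags": {"steakhouse"}},
    {"name": "Casa Verde", "tags": {"vegetarian", "gluten-free"}},
]

def recommend(restaurants, preferences):
    """Return only venues matching at least one of the diner's stated preferences."""
    return [r["name"] for r in restaurants if preferences & r["tags"]]

# A user who declares a vegetarian preference never sees the steakhouse.
print(recommend(RESTAURANTS, {"vegetarian"}))
```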


Top Technology Trends for Transactional Lawyers – Law …

Posted: February 5, 2022 at 5:05 am

Tremendous opportunities exist for transactional practitioners to leverage technology to optimize outcomes. These opportunities center on four trends discussed below. Each trend has its roots in the now well-established field of litigation-support technology.

Litigation technology was born out of a need to manage the high volume of potentially responsive documents created by the shift from paper to electronic documents and communication. Electronically Stored Information (ESI) renders a 3D image of a document by introducing metadata, or data about data. ESI also results in content being created at a faster rate. This increase in complexity and volume required a different set of professional skills from litigation practitioners, combining an intimate knowledge of both the litigation process and how technology can be used to drive efficiencies in the e-discovery process.

A document repository that contains not only the content of a document but also its metadata, along with memorialized attorney analysis of each document, is an important element of the litigator's toolbox. Creating a robust document repository also allows for the reuse of client documents and attorney work product.
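In data terms, such a repository record simply keeps the content, the metadata, and the work product together so all three can be searched and reused across matters. Here is a minimal sketch of what one record might look like; the field names are illustrative, not any vendor's actual schema.

```python
# Minimal sketch of a repository record pairing document content with
# metadata and memorialized attorney analysis. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class RepositoryDocument:
    doc_id: str
    content: str                                        # extracted document text
    metadata: dict = field(default_factory=dict)        # author, dates, custodian...
    attorney_notes: list = field(default_factory=list)  # work product, one entry per review

doc = RepositoryDocument(
    doc_id="DOC-0042",
    content="This Agreement bears interest at LIBOR plus 2.00%...",
    metadata={"custodian": "J. Smith", "date_created": "2019-03-12"},
)
doc.attorney_notes.append("Legacy LIBOR provision; flag for fallback-language update.")
```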

Another important tool in the litigator's toolbox is Technology Assisted Review (TAR), a broad term for the application of technology to a traditional litigation document review. TAR applications can deal with high volumes of data without increasing headcount. They can also use artificial or augmented intelligence to rapidly synthesize documents and prioritize attorney review.
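At its core, the prioritization step trains a model on a seed set of attorney-coded documents and ranks the unreviewed set by predicted responsiveness, so the likeliest hits reach attorneys first. The sketch below shows that idea with scikit-learn on toy data; production TAR platforms use far more elaborate active-learning workflows.

```python
# Minimal sketch of TAR-style prioritization: learn from a small coded set,
# then rank unreviewed documents by predicted responsiveness.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

coded_docs = ["merger term sheet attached", "lunch order for friday"]
coded_labels = [1, 0]  # 1 = responsive, 0 = not responsive
unreviewed = ["revised term sheet with indemnity clause", "parking reminder"]

vec = TfidfVectorizer()
model = LogisticRegression().fit(vec.fit_transform(coded_docs), coded_labels)

# Score and sort so attorneys review the likeliest responsive documents first.
scores = model.predict_proba(vec.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```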

Lessons learned from the evolution of the above tools in the litigation setting are directly relevant to opportunities available to transactional lawyers to efficiently execute work on transactional matters.

Applications used to manage voluminous and complex litigation document reviews can be used for reviewing contracts in transactional matters. These tools can turn contract clauses into data points, which can then be analyzed, filtered, and compared using TAR. Identifying contracts that need updated language (such as LIBOR provisions), creating a control center for managing the due diligence process during M&A transactions, and executing real estate title reviews are just some examples of the types of projects that can be organized more efficiently with litigation document review technology. Expect specialized document review tools, tailored to the unique needs of transactional professionals, to continue to develop in the coming year.

Automating repeatable tasks can help streamline certain aspects of the deal closing process. Litigation support tools and teams have developed repeatable e-discovery workflows to streamline often complex processes and to ensure that anyone on the team can jump in and complete the tasks. The same technological approach is now gaining steam on transactional matters such as preparing an acquisition for closing. There are tools that integrate with document management systems to automate closing checklists and signature packages. Improving security protocols, such as procedures for validating wire payment information and ensuring bad actors do not slip into the process to divert funds, is another way to use automation to strengthen the deal closing process. Transactional professionals will want to look for opportunities to introduce automation where appropriate.

Your technology support personnel, whether internal, external, or a hybrid, can help brainstorm technology solutions for many legal projects. They tend to see problems from a unique perspective that combines an understanding of legal practice, information technology, and workflow efficiencies. This is where technology support professionals can be an invaluable resource for solving problems and increasing efficiency. Transactional professionals increasingly are tapping into the expertise of technology support providers, and that trend will continue.

Litigation support professionals have been pioneers in using the principles of Legal Process Improvement (LPI) because of the high volume of client data managed during the e-discovery process. Discovery is typically the most expensive part of litigation, and much attention is paid to ensuring every step is executed as efficiently as possible. Mapping out e-discovery processes helps demystify complex technical workflows. LPI principles can be used on transactional matters to give new participants a clearer picture of best practices for repeatable tasks. They can uncover potential bottlenecks in existing workflows, allowing for improvements before the workflow is executed, and they create an opportunity to define roles so that the person with the appropriate billable hourly rate is doing the appropriate work.

Technology and strategy used to manage litigation efficiently are ripe for expanded use in transactional matters. Document review tools, automation, technology support teams, and legal process improvement have all been deployed for years in bringing order to what otherwise would be litigation chaos. Transactional workflows and outcomes can also be improved by the same strategies. These trends will bring transformational change to transactional practice benefitting both clients and their counsel in both the short and long run.

Kate Jansons Johns is the Litigation Support Manager at Nutter. In this role, Kate is responsible for the day-to-day operation of Nutter's e-discovery and litigation support initiatives, trainings, infrastructure, applications, and resources. Kate is a Certified E-Discovery Specialist (CEDS) with the Association of Certified E-Discovery Specialists (ACEDS).


Sources and acknowledgments | The Economist

Posted: at 5:05 am

Jan 27th 2022

In addition to the people named in the text, the author would like to thank Justin Bronk, Kevin Copsey, Keith Dear, Michael C. Horowitz, Michael Kofman, Thomas Mahnken, Todd Master, Phil Muir, Rob Magowan, Nick Moran, Ruslan Pukhov, Henning Robach, Jack Shanahan, Ed Stringer, Phil Weir, Jerry Welsh and others who would prefer to remain anonymous.

Further reading on defence technology:

Christian Brose, "The Kill Chain: Defending America in the Future of High-Tech Warfare", Hachette Books
Justin Bronk and Jack Watling, "Necessary Heresies: Challenging the Narratives Distorting Contemporary UK Defence", RUSI
Paul Scharre, "Army of None: Autonomous Weapons and the Future of War", W.W. Norton

Owen Cote vs Sebastian Brixley-Williams on anti-submarine warfare
Remy Hemez on decoys and Jennifer McArdle on deception
Jack Watling and CSIS on the lessons of the Nagorno-Karabakh war
T.X. Hammes on defence dominance

This article appeared in the Technology Quarterly section of the print edition under the headline "Sources and acknowledgments"


Privacy and Technology – wrps.on.ca

Posted: at 5:05 am

The exploration and use of technology are essential for WRPS to meet its obligations to the community regarding public safety, including the prevention and investigation of crimes, as well as to improve overall administration. Technologies are assessed to protect privacy and security while ensuring the public has access to police information as outlined in the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA).

WRPS is committed to assessing the impacts of new and existing technology, procedures and programs with access and privacy at the forefront, as well as to ensuring compliance with the Criminal Code of Canada, the Charter of Rights and Freedoms, the Police Services Act, the Youth Criminal Justice Act and any other relevant laws or legislation. As such, information is collected through lawful authority, judicial authorization or upon consent.

We continue our commitment to providing citizens with responsive policing services that foster a relationship of trust and transparency within our community.

Remotely Piloted Vehicle(RPV)

The Waterloo Regional Police Service (WRPS) uses a remotely piloted vehicle (RPV) to assist with a variety of law enforcement functions.

An RPV is also used to create internal training videos and external communication videos. Any use of personal information for these purposes requires a signed Photograph/Video Release and Consent form.

Body-Worn Video (BWV)/In-Car Video (ICV)

WRPS' use of Body-Worn Video is informed by the Information and Privacy Commissioner of Ontario's Guidance for the Use of Body-Worn Cameras by Law Enforcement Authorities to provide oversight and accountability for police interactions with the public.

Device Extraction Technologies are utilized to unlock electronic devices and extract data relevant to law enforcement investigations or prosecutions. Use of this technology is based on consent, judicial authorization or immediate risk to public safety, and it is used by authorized police officers only.

GrayKey and Cellebrite


Image Analytics

Image Analytics Technologies are utilized by authorized police officers to view, process and analyze lawfully obtained photographs, video footage and similar material for specific images that are relevant to law enforcement investigations or prosecutions. While this technology does utilize facial recognition, it is not used to scan the internet, social media or other open sources. It is used solely on video obtained on consent, through a warrant or via targeted electronic surveillance. The purpose of using this technology is to expedite the process of locating objects or individuals within the lawfully obtained video.
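To illustrate the general technique of locating faces within a video file so a reviewer can jump straight to the relevant frames, here is a minimal sketch using OpenCV's stock face detector. It shows the broad idea only; it is not WRPS' actual tooling, and the filename is hypothetical.

```python
# Minimal sketch of automated face *location* in a video file: scan frames,
# print timestamps where faces appear, so a human reviewer can jump to them.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
video = cv2.VideoCapture("lawfully_obtained.mp4")  # hypothetical filename
fps = video.get(cv2.CAP_PROP_FPS) or 30.0

frame_idx = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        print(f"{frame_idx / fps:6.1f}s: {len(faces)} face(s)")
    frame_idx += 1
video.release()
```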

BriefCam

BriefCam is a new program utilized by WRPS in 2022. BriefCam software can quickly search volumes of video that would otherwise be impossible to examine manually, providing investigative clues that create intelligence and operational information for officers. BriefCam does not expand the collection of personal information by investigators.

Select the WRPS Feedback form to submit any questions or comments regarding use of technology.

If you have questions about making an Access to Information request under the MFIPPA, please contact the Access to Information Unit.


House competition bill aims to boost minorities in science and technology – CNBC

Posted: at 5:05 am

Congress is aiming to reshape America's workforce through new legislation that would direct more than $1 billion toward increasing diversity of the scientists, researchers and technologists who drive the innovation economy.

The measure includes $900 million for grants and partnerships with historically Black colleges and universities, $164 million to study barriers for people of color in the field and $17.5 million to combat sexual harassment. They're part of an expansive package of bills known as the America Competes Act, which lawmakers hope will ensure the United States continues to lead the global economy.

"We cannot compete internationally without having the available talent," House Science Committee Chairwoman Eddie Bernice Johnson, D-Texas, told CNBC. "We've got to make sure we build in the mechanism to get that talent."

Photo: Mirimus, Inc. lab scientists work to validate rapid IgM/IgG antibody tests of COVID-19 samples from recovered patients on April 10, 2020, in the Brooklyn borough of New York City. (Misha Friedman | Getty Images)

The House passed the package Friday. It includes signature items such as funding for the domestic semiconductor industry and efforts to tackle supply chain shortages. Speaker Nancy Pelosi had enough support to pass the legislation despite opposition from House Republicans who want to take a tougher stance against China.

A version of the bill passed the Senate last summer with strong bipartisan support. The two chambers will have to negotiate a compromise version of the legislation. The White House has made getting the bill to the president's desk one of the administration's top priorities as its social spending plan and other legislative initiatives languish.

"Our red line is doing nothing or taking too long," Commerce Secretary Gina Raimondo told reporters Friday. She added: "My message to everyone is to find common ground, quickly. This should take weeks not months."


A report from the National Academies of Sciences, Engineering and Medicine estimated the United States will need 1 million more people employed in those sectors over the next decade than it is currently on track to produce. The group said the country will not reach that goal without substantially increasing diversity in the labor force.

"A clear takeaway from the projected demographic profile of the nation is that the educational outcomes and STEM readiness of students of color will have direct implications on America's economic growth, national security, and global prosperity," the report said.

The bill would also authorize new investments for colleges and universities that primarily serve students of color through research funding and enhanced engagement. About 18% of Black college graduates in science and technology come from historically Black colleges and universities, according to the National Science Foundation.

"We've got to build the opportunity," Johnson said. "We've got to invest in building talent from this level, which means if they're at HBCUs, then we've got to invest in HBCUs."

At Spelman College in Atlanta, more Black women have graduated with doctoral degrees than at any other school in the country. The historically Black college is planning to build a new innovation lab over the next two years thanks to a $10 million gift from the foundation named after Arthur Blank, co-founder of Home Depot.

The school's president, Mary Schmidt Campbell, told CNBC that Washington also plays an important role by setting the national agenda. She said the new legislation could "democratize" innovation and ultimately benefit businesses' bottom line.

"There's of course the altruistic mission of making sure that everybody is included," Campbell said. "But there is a self-interested reason why companies should be interested in diversity: It's because it makes them better companies."

Correction: This story was updated to correct the spelling of the Spelman College president's name.


Severance Creator on the Technology, Location and Timeline – TheWrap

Posted: at 5:05 am

Very mild spoilers for the first episode of "Severance" below.

For every answer that the new Apple TV+ dramatic thriller series "Severance" offers, more compelling questions arise. That's certainly true of the technology at the center of the show, which allows people to literally split their work life and their home life through a controversial procedure called, appropriately enough, severance. Once severed, a person will have no memories of their work life while at home, and no memories of their home life at work. And the details of exactly how that technology works, according to creator and showrunner Dan Erickson, are purposefully a little vague.

Adam Scott plays Mark, a man who has undergone the severance procedure. On the outside, he's lonely and mourning his dead wife. On the inside, he's a bright and loyal worker at a mysterious corporation called Lumon.

The switch between Outie and Innie is visually represented by the character going down an elevator to some lower level where their Innie work life takes place. So how, exactly, does the elevator trigger the severance process? "I have file upon file on my laptop of walking through in my head how this scientifically makes sense. But suffice it to say, there's some sort of a barrier that if you're basically halfway down the elevator, you pass it," Erickson told TheWrap. "We've talked about it as a wire. We've talked about it as just some sort of a threshold; you pass that, and it sends a frequency to the chip in your head that causes you to switch to your Innie mode. And then it just comes back up when you're going home."

The show also finds characters exiting a stairwell where the same switch occurs. "For the stairwell, it's similar, whatever that thing is and again, we sort of intentionally never fully decided, even for ourselves, what the exact technology is," he said. "But that threshold is in the doorway. So when Helly (Britt Lower) is running through, the moment that she's switching is the moment that she passes through the doorway."

As the season progresses, the characters grow more and more curious about what, exactly, is going on inside Lumon, and viewers are unraveling the mystery at the same time. That mystery extends to the location of the show, which Erickson confesses was also left intentionally ambiguous.

"We sort of intentionally kept a lot of ambiguity to the time and place," Erickson told TheWrap. "We obviously shot mostly in New York and New Jersey, so there's sort of a vague New England, East Coast-y feel to the city, but we didn't really want to know exactly where it was or tie it to a specific locale."

As for when "Severance" takes place, it's not a far-off future. "It is around now, it's like vaguely now-ish," he said. "We're not going for something where this is 10 years in the future where severance has been invented and already exists. It's sort of an alternate, vaguely now-ish timeline."

The first two episodes of "Severance" premiere on Apple TV+ on Feb. 18, with new episodes airing weekly on Fridays.


Game-Changing Carbon Capture Technology To Remove 99% of CO2 From Air – SciTechDaily

Posted: at 5:05 am

University of Delaware researchers have broken new ground that could bring more environmentally friendly fuel cells closer to commercialization. Credit: Graphic illustration by Jeffrey C. Chase

University of Delaware researchers' carbon capture advance could bring environmentally friendly fuel cells closer to market.

University of Delaware engineers have demonstrated a way to effectively capture 99% of carbon dioxide from air using a novel electrochemical system powered by hydrogen.

It is a significant advance for carbon dioxide capture and could bring more environmentally friendly fuel cells closer to market.

The research team, led by UD Professor Yushan Yan, reported their method in Nature Energy on Thursday, February 3.

Fuel cells work by converting fuel chemical energy directly into electricity. They can be used in transportation for things like hybrid or zero-emission vehicles.

Yan, Henry Belin du Pont Chair of Chemical and Biomolecular Engineering, has been working for some time to improve hydroxide exchange membrane (HEM) fuel cells, an economical and environmentally friendly alternative to traditional acid-based fuel cells used today.

But HEM fuel cells have a shortcoming that has kept them off the road: they are extremely sensitive to carbon dioxide in the air. Essentially, the carbon dioxide makes it hard for a HEM fuel cell to breathe.

This defect quickly reduces the fuel cell's performance and efficiency by up to 20%, rendering the fuel cell no better than a gasoline engine. Yan's research group has been searching for a workaround for this carbon dioxide conundrum for over 15 years.

The UD research team's spiral wound module takes in hydrogen and air through two separate inlets (shown on the left) and emits carbon dioxide and carbon dioxide-free air (shown on the right) after passing through two large-area, catalyst-coated shorted membranes. The inset image on the right shows, in part, how the molecules move within the short-circuited membrane. Credit: University of Delaware

A few years back, the researchers realized this disadvantage might actually be a solution for carbon dioxide removal.

"Once we dug into the mechanism, we realized the fuel cells were capturing just about every bit of carbon dioxide that came into them, and they were really good at separating it to the other side," said Brian Setzler, assistant professor for research in chemical and biomolecular engineering and paper co-author.

While this isn't good for the fuel cell, the team knew that if they could leverage this built-in self-purging process in a separate device upstream from the fuel cell stack, they could turn it into a carbon dioxide separator.

"It turns out our approach is very effective. We can capture 99% of the carbon dioxide out of the air in one pass if we have the right design and right configuration," said Yan.

So, how did they do it?

They found a way to embed the power source for the electrochemical technology inside the separation membrane. The approach involved internally short-circuiting the device.

"It's risky, but we managed to control this short-circuited fuel cell by hydrogen. And by using this internal electrically shorted membrane, we were able to get rid of the bulky components, such as bipolar plates, current collectors or any electrical wires typically found in a fuel cell stack," said Lin Shi, a doctoral candidate in the Yan group and the paper's lead author.

Now, the research team had an electrochemical device that looked like a normal filtration membrane made for separating out gases, but with the capability to continuously pick up minute amounts of carbon dioxide from the air like a more complicated electrochemical system.

This picture shows the electrochemical system developed by the Yan group. Inside the highlighted cylindrical metal housing shown is the research team's novel spiral wound module. As hydrogen is fed to the device, it powers the carbon dioxide removal process. Computer software on the laptop plots the carbon dioxide concentration in the air after passing through the module. Credit: University of Delaware

In effect, embedding the device's wires inside the membrane created a shortcut that made it easier for the carbon dioxide particles to travel from one side to the other. It also enabled the team to construct a compact, spiral module with a large surface area in a small volume. In other words, they now have a smaller package capable of filtering greater quantities of air at a time, making it both effective and cost-effective for fuel cell applications. Meanwhile, fewer components mean lower cost and, more importantly, an easier path to scaling up for the market.

The research team's results showed that an electrochemical cell measuring 2 inches by 2 inches could continuously remove about 99% of the carbon dioxide found in air flowing at a rate of approximately two liters per minute. An early prototype spiral device about the size of a 12-ounce soda can is capable of filtering 10 liters of air per minute and scrubbing out 98% of the carbon dioxide, the researchers said.
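Those flow rates put the numbers in perspective: the job here is scrubbing a fuel cell's air feed, not climate-scale capture, so the relevant removal rates are milligrams per minute. Here is a quick back-of-envelope check, assuming roughly 420 ppm CO2 in ambient air and an ideal-gas molar volume of 24.45 L/mol at room temperature; both assumptions are mine, not the paper's.

```python
# Back-of-envelope CO2 mass-removal rates for the two reported devices.
PPM_CO2 = 420e-6          # assumed volume fraction of CO2 in ambient air
MOLAR_VOLUME = 24.45      # L/mol at 25 C, 1 atm (ideal gas)
MOLAR_MASS_CO2 = 44.01    # g/mol

def co2_mg_per_min(air_l_per_min, capture_fraction):
    co2_l = air_l_per_min * PPM_CO2 * capture_fraction
    return co2_l / MOLAR_VOLUME * MOLAR_MASS_CO2 * 1000  # mg/min

print(co2_mg_per_min(2, 0.99))   # lab cell: ~1.5 mg/min
print(co2_mg_per_min(10, 0.98))  # soda-can prototype: ~7.4 mg/min
```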

Scaled for an automotive application, the device would be roughly the size of a gallon of milk, Setzler said, but the device could be used to remove carbon dioxide elsewhere, too. For example, the UD-patented technology could enable lighter, more efficient carbon dioxide removal devices in spacecraft or submarines, where ongoing filtration is critical.

"We have some ideas for a long-term roadmap that can really help us get there," said Setzler.

According to Shi, since the electrochemical system is powered by hydrogen, as the hydrogen economy develops, this electrochemical device could also be used in airplanes and buildings where air recirculation is desired as an energy-saving measure. Later this month, following his dissertation defense, Shi will join Versogen, a UD spinoff company founded by Yan, to continue advancing research toward sustainable green hydrogen.

Reference: "A shorted membrane electrochemical cell powered by hydrogen to remove CO2 from the air feed of hydroxide exchange membrane fuel cells" by Lin Shi, Yun Zhao, Stephanie Matz, Shimshon Gottesfeld, Brian P. Setzler and Yushan Yan, 3 February 2022, Nature Energy. DOI: 10.1038/s41560-021-00969-5

Co-authors on the paper from the Yan lab include Yun Zhao, co-first author and research associate, who performed experimental work essential for testing the device; Stephanie Matz, a doctoral student who contributed to the design and fabrication of the spiral module; and Shimshon Gottesfeld, an adjunct professor of chemical and biomolecular engineering at UD. Gottesfeld was principal investigator on the 2019 project, funded by the Advanced Research Projects Agency-Energy (ARPA-E), that led to the findings.


Advancing AI trustworthiness: Updates on responsible AI research – Microsoft

Posted: at 5:05 am

Editor's note: This year in review of responsible AI research was compiled by Aether, a Microsoft cross-company initiative on AI Ethics and Effects in Engineering and Research, as outreach from their commitment to advancing the practice of human-centered responsible AI. Although many of the papers' authors are participants in Aether, the research presented here expands beyond it, encompassing work from across Microsoft, as well as with collaborators in academia and industry.

Inflated expectations around the capabilities of AI technologies may lead people to believe that computers can't be wrong. The truth is AI failures are not a matter of if but when. AI is a human endeavor that combines information about people and the physical world into mathematical constructs. Such technologies typically rely on statistical methods, with the possibility for errors throughout an AI system's lifespan. As AI systems become more widely used across domains, especially in high-stakes scenarios where people's safety and wellbeing can be affected, a critical question must be addressed: how trustworthy are AI systems, and how much and when should people trust AI?

As part of their ongoing commitment to building AI responsibly, research scientists and engineers at Microsoft are pursuing methods and technologies aimed at helping builders of AI systems cultivate appropriate trust: that is, building trustworthy models with reliable behaviors and clear communication that set proper expectations. When AI builders plan for failures, work to understand the nature of the failures, and implement ways to effectively mitigate potential harms, they help engender trust that can lead to a greater realization of AI's benefits.

Pursuing trustworthiness across AI systems captures the intent of multiple projects on the responsible development and fielding of AI technologies. Numerous efforts at Microsoft have been nurtured by its Aether Committee, a coordinative cross-company council comprised of working groups focused on technical leadership at the frontiers of innovation in responsible AI. The effort is led by researchers and engineers at Microsoft Research and from across the company and is chaired by Chief Scientific Officer Eric Horvitz. Beyond research, Aether has advised Microsoft leadership on responsible AI challenges and opportunities since the committee's inception in 2016.

The following is a sampling of research from the past year representing efforts across the Microsoft responsible AI ecosystem that highlight ways for creating appropriate trust in AI. Facilitating trustworthy measurement, improving human-AI collaboration, designing for natural language processing (NLP), advancing transparency and interpretability, and exploring the open questions around AI safety, security, and privacy are key considerations for developing AI responsibly. The goal of trustworthy AI requires a shift in perspective at every stage of the AI development and deployment life cycle. We're actively developing a growing number of best practices and tools to help with the shift to make responsible AI more available to a broader base of users. Many open questions remain, but as innovators, we are committed to tackling these challenges with curiosity, enthusiasm, and humility.

AI technologies influence the world through the connection of machine learning models (which provide classifications, diagnoses, predictions, and recommendations) with larger systems that drive displays, guide controls, and activate effectors. But when we use AI to help us understand patterns in human behavior and complex societal phenomena, we need to be vigilant. By creating models for assessing or measuring human behavior, we're participating in the very act of shaping society. Guidelines for ethically navigating technology's impacts on society, guidance born out of considering technologies for COVID-19, prompt us to start by weighing a project's risk of harm against its benefits. Sometimes an important step in the practice of responsible AI may be the decision not to build a particular model or application.

Human behavior and algorithms influence each other in feedback loops. In a recent Nature publication, Microsoft researchers and collaborators emphasize that existing methods for measuring social phenomena may not be up to the task of investigating societies where human behavior and algorithms affect each other. They offer five best practices for advancing computational social science. These include developing measurement models that are informed by social theory and that are fair, transparent, interpretable, and privacy preserving. For trustworthy measurement, it's crucial to document and justify the model's underlying assumptions, plus consider who is deciding what to measure and how those results will be used.

In line with these best practices, Microsoft researchers and collaborators have proposed measurement modeling as a framework for anticipating and mitigating fairness-related harms caused by AI systems. This framework can help identify mismatches between theoretical understandings of abstract concepts (for example, socioeconomic status) and how these concepts get translated into mathematics and code. Identifying mismatches helps AI practitioners to anticipate and mitigate fairness-related harms that reinforce societal biases and inequities. A study applying a measurement modeling lens to several benchmark datasets for surfacing stereotypes in NLP systems reveals considerable ambiguity and hidden assumptions, demonstrating (among other things) that datasets widely trusted for measuring the presence of stereotyping can, in fact, cause stereotyping harms.

Flaws in datasets can lead to AI systems with unfair outcomes, such as poor quality of service or denial of opportunities and resources for different groups of people. AI practitioners need to understand how their systems are performing for factors like age, race, gender, and socioeconomic status so they can mitigate potential harms. In identifying the decisions that AI practitioners must make when evaluating an AI system's performance for different groups of people, researchers highlight the importance of rigor in the construction of evaluation datasets.

Making sure that datasets are representative and inclusive means facilitating data collection from different groups of people, including people with disabilities. Mainstream AI systems are often non-inclusive. For example, speech recognition systems do not work for atypical speech, while input devices are not accessible for people with limited mobility. In pursuit of inclusive AI, a study proposes guidelines for designing an accessible online infrastructure for collecting data from people with disabilities, one that is built to respect, protect, and motivate those contributing data.

When people and AI collaborate on solving problems, the benefits can be impressive. But current practice can be far from establishing a successful partnership between people and AI systems. A promising advance and direction of research is developing methods that learn ideal ways to complement people in problem solving. In this approach, machine learning models are optimized to detect where people need the most help versus where people can solve problems well on their own. We can additionally train AI systems to decide when a system should ask an individual for input and to combine the human and machine abilities to make a recommendation. In related work, studies have shown that people will too often accept an AI system's outputs without question, relying on them even when they are wrong. Exploring how to facilitate appropriate trust in human-AI teamwork, experiments with real-world datasets for AI systems show that retraining a model with a human-centered approach can better optimize human-AI team performance. This means taking into account human accuracy, human effort, the cost of mistakes, and people's mental models of the AI.
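One simple way to see the complementarity idea is a deferral policy: let the model answer where it is confident and route the rest to a person, then compare team accuracy against the model alone. The sketch below is purely illustrative with made-up accuracies; the cited work optimizes the model and the deferral decision jointly rather than thresholding after the fact.

```python
# Toy simulation of human-AI complementarity via confidence-based deferral.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
truth = rng.integers(0, 2, n)

model_conf = rng.uniform(0.5, 1.0, n)  # model's per-case confidence (assumed calibrated)
model_pred = np.where(rng.uniform(0, 1, n) < model_conf, truth, 1 - truth)
human_pred = np.where(rng.uniform(0, 1, n) < 0.85, truth, 1 - truth)  # 85%-accurate human

defer = model_conf < 0.75              # send low-confidence cases to the human
team_pred = np.where(defer, human_pred, model_pred)

print("model alone :", (model_pred == truth).mean())
print("human-AI team:", (team_pred == truth).mean())
```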

In systems for healthcare and other high-stakes scenarios, a break with the user's mental model can have severe impacts. An AI system can compromise trust when, after an update for better overall accuracy, it begins to underperform in some areas. For instance, an updated system for predicting cancerous skin moles may have an increase in accuracy overall but a significant decrease for facial moles. A physician using the system may either lose confidence in the benefits of the technology or, with more dire consequences, may not notice this drop in performance. Techniques for forcing an updated system to be compatible with a previous version produce tradeoffs in accuracy. But experiments demonstrate that personalizing objective functions can improve the performance-compatibility tradeoff for specific users by as much as 300 percent.

System updates can have grave consequences when it comes to algorithms used for prescribing recourse, such as how to fix a bad credit score to qualify for a loan. Updates can lead to people who have dutifully followed a prescribed recourse being denied their promised rights or services and damaging their trust in decision makers. Examining the impact of updates caused by changes in the data distribution, researchers expose previously unknown flaws in the current recourse-generation paradigm. This work points toward rethinking how to design these algorithms for robustness and reliability.

Complementarity in human-AI performance, where the human-AI team performs better together by compensating for each other's weaknesses, is a goal for AI-assisted tasks. You might think that if a system provided an explanation of its output, this could help an individual identify and correct an AI failure, producing the best of human-AI teamwork. Surprisingly, and in contrast to prior work, a large-scale study shows that explanations may not significantly increase human-AI team performance. People often over-rely on recommendations even when the AI is incorrect. This is a call to action: we need to develop methods for communicating explanations that increase users' understanding rather than just persuade.

The allure of natural language processing's potential, including rash claims of human parity, raises questions of how we can employ NLP technologies in ways that are truly useful, as well as fair and inclusive. To further these and other goals, Microsoft researchers and collaborators hosted the first workshop on bridging human-computer interaction and natural language processing, considering novel questions and research directions for designing NLP systems to align with people's demonstrated needs.

Language shapes minds and societies. Technology that wields this power requires scrutiny as to what harms may ensue. For example, does an NLP system exacerbate stereotyping? Does it exhibit the same quality of service for people who speak the same language in different ways? A survey of 146 papers analyzing bias in NLP observes rampant pitfalls of unstated assumptions and conceptualizations of bias. To avoid these pitfalls, the authors outline recommendations based on the recognition of relationships between language and social hierarchies as fundamentals for fairness in the context of NLP. We must be precise in how we articulate ideas about fairness if we are to identify, measure, and mitigate NLP systems' potential for fairness-related harms.

The open-ended nature of language (its inherent ambiguity, context-dependent meaning, and constant evolution) drives home the need to plan for failures when developing NLP systems. Planning for NLP failures with the AI Playbook introduces a new tool for AI practitioners to anticipate errors and plan human-AI interaction so that the user experience is not severely disrupted when errors inevitably occur.

To build AI systems that are reliable and fair, and to assess how much to trust them, practitioners and those using these systems need insight into their behavior. If we are to meet the goal of AI transparency, the AI/ML and human-computer interaction communities need to integrate efforts to create human-centered interpretability methods that yield explanations that can be clearly understood and acted on by people using AI systems in real-world scenarios.

As a case in point, experiments investigating whether simple models that are thought to be interpretable achieve their intended effects rendered counterintuitive findings. When participants used an ML model considered to be interpretable to help them predict the selling prices of New York City apartments, they had difficulty detecting when the model was demonstrably wrong. Providing too many details of the model's internals seemed to distract and cause information overload. Another recent study found that even when an explanation helps data scientists gain a more nuanced understanding of a model, they may be unwilling to make the effort to understand it if it slows down their workflow too much. As both studies show, testing with users is essential to see if people clearly understand and can use a model's explanations to their benefit. User research is the only way to validate what is or is not interpretable by people using these systems.

Explanations that are meaningful to people using AI systems are key to the transparency and interpretability of black-box models. Introducing a weight-of-evidence approach to creating machine-generated explanations that are meaningful to people, Microsoft researchers and colleagues highlight the importance of designing explanations with people's needs in mind and evaluating how people use interpretability tools and what their understanding is of the underlying concepts. The paper also underscores the need to provide well-designed tutorials.

Traceability and communication are also fundamental for demonstrating trustworthiness. Both AI practitioners and people using AI systems benefit from knowing the motivation and composition of datasets. Tools such as datasheets for datasets prompt AI dataset creators to carefully reflect on the process of creation, including any underlying assumptions they are making and potential risks or harms that might arise from the dataset's use. And for dataset consumers, seeing the dataset creators' documentation of goals and assumptions equips them to decide whether a dataset is suitable for the task they have in mind.

Interpretability is vital to debugging and mitigating the potentially harmful impacts of AI processes that so often take place in seemingly impenetrable black boxes; it is difficult (and in many settings, inappropriate) to trust an AI model if you can't understand the model and correct it when it is wrong. Advanced glass-box learning algorithms can enable AI practitioners and stakeholders to see what's under the hood and better understand the behavior of AI systems. And advanced user interfaces can make it easier for people using AI systems to understand these models and then edit the models when they find mistakes or bias in them. Interpretability is also important to improve human-AI collaboration; it is difficult for users to interact and collaborate with an AI model or system if they can't understand it. At Microsoft, we have developed glass-box learning methods that are now as accurate as previous black-box methods but yield AI models that are fully interpretable and editable.

Recent advances at Microsoft include a new neural GAM (generalized additive model) for interpretable deep learning, a method for using dropout rates to reduce spurious interaction, an efficient algorithm for recovering identifiable additive models, the development of glass-box models that are differentially private, and the creation of tools that make editing glass-box models easy for those using them so they can correct errors in the models and mitigate bias.
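One concrete, publicly available example of the glass-box GAM approach described here is Microsoft's open-source InterpretML library and its Explainable Boosting Machine. Below is a minimal sketch of fitting one on synthetic data and pulling up its per-term explanations (pip install interpret); treat it as an illustration of the approach, not a reproduction of the specific papers above.

```python
# Minimal glass-box GAM example with InterpretML's Explainable Boosting Machine.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Every feature gets its own additive shape function, so global behavior
# can be inspected (and, in principle, edited) term by term.
show(ebm.explain_global())  # renders interactive per-term plots in a notebook
```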

When considering how to shape appropriate trust in AI systems, there are many open questions about safety, security, and privacy. How do we stay a step ahead of attackers intent on subverting an AI system or harvesting its proprietary information? How can we avoid a system's potential for inferring spurious correlations?

With autonomous systems, it is important to acknowledge that no system operating in the real world will ever be complete. It's impossible to train a system for the many unknowns of the real world. Unintended outcomes can range from annoying to dangerous. For example, a self-driving car may splash pedestrians on a rainy day or erratically swerve to localize itself for lane-keeping. An overview of emerging research in avoiding negative side effects due to AI systems' incomplete knowledge points to the importance of giving users the means to avoid or mitigate the undesired effects of an AI system's outputs as essential to how the technology will be viewed or used.

When dealing with data about people and our physical world, privacy considerations take a vast leap in complexity. For example, it's possible for a malicious actor to isolate and re-identify individuals from information in large, anonymized datasets or from their interactions with online apps when using personal devices. Developments in privacy-preserving techniques face challenges in usability and adoption because of the deeply theoretical nature of concepts like homomorphic encryption, secure multiparty computation, and differential privacy. Exploring the design and governance challenges of privacy-preserving computation, interviews with builders of AI systems, policymakers, and industry leaders reveal confidence that the technology is useful, but the challenge is to bridge the gap from theory to practice in real-world applications. Engaging the human-computer interaction community will be a critical component.
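Of the techniques named above, differential privacy is the most approachable to sketch: answer an aggregate query with noise calibrated so that any single person's presence barely changes the output distribution. Here is a minimal illustration of the textbook Laplace mechanism on toy data; it is my own simplified example, not Microsoft's tooling.

```python
# Minimal Laplace-mechanism sketch: a differentially private count query.
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Return a noisy count of records matching predicate, with epsilon-DP."""
    true_count = sum(1 for record in data if predicate(record))
    sensitivity = 1.0  # adding or removing one person changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [34, 29, 58, 41, 62, 37]
print(laplace_count(ages, lambda a: a > 40, epsilon=0.5))  # true count is 3, plus noise
```

Smaller epsilon means more noise and stronger privacy; the design tradeoff is exactly the usability gap the interviews above describe.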


AI is not an end-all, be-all solution; it's a powerful, albeit fallible, set of technologies. The challenge is to maximize the benefits of AI while anticipating and minimizing potential harms.

Admittedly, the goal of appropriate trust is challenging. Developing measurement tools for assessing a world in which algorithms are shaping our behaviors, exposing how systems arrive at decisions, planning for AI failures, and engaging the people on the receiving end of AI systems are important pieces. But what we do know is change can happen today with each one of us as we pause and reflect on our work, asking: what could go wrong, and what can I do to prevent it?


GOP lawmakers move to stop IRS facial recognition technology – Accounting Today

Posted: at 5:05 am

Republicans in Congress are raising concerns about the Internal Revenue Service's move to use facial recognition technology to authenticate taxpayers before they can access their online accounts, introducing a bill that would ban the practice.

The IRS has contracted with ID.me, a third-party provider of authentication technology, to help deter identity theft by requiring taxpayers to send a selfie along with an image of a government document like a passport or driver's license before they can access their online taxpayer account or use tools like Get Transcript (see story). Although the new technology is intended to protect taxpayers from cybercriminals, it has also raised privacy concerns. The IRS has emphasized that taxpayers won't need such measures to file their taxes or pay taxes online. The agency began rolling out the technology this year for new taxpayer accounts, and it's expected to be in place by this summer for existing accounts as well.


The Treasury is already looking into alternatives to ID.me over privacy concerns amid reports that the company has amassed images of tens of millions of faces from its contracts with other federal and state government agencies and businesses (see story). But if Congress bans the use of such technology, or discourages the IRS from requiring it, that could prompt the agency to find other ways to authenticate users besides selfies.

In a letter Thursday, a group of Senate Republicans, led by Senate Finance Committee ranking member Mike Crapo, R-Idaho, questioned the IRS's plans to expand its collaboration with ID.me by requiring taxpayers to have an ID.me account to access some of the main IRS online resources. To register with ID.me, taxpayers will need to submit personal information, including sensitive biometric data, to the company starting this summer.

"The IRS has unilaterally decided to allow an outside contractor to stand as the gatekeeper between citizens and necessary government services," the senators wrote in a letter to IRS Commissioner Chuck Rettig. "The decision millions of Americans are forced to make is to pay the toll of giving up their most personal information, biometric data, to an outside contractor or return to the era of a paper-driven bureaucracy where information moves slow, is inaccurate, and some would say is processed in ways incompatible with contemporary life."

They pointed to a number of issues, noting that a selfie can't be changed if it's compromised, unlike a password. They also asked about cybersecurity standards, and how the sensitive data would be stored and protected. The lawmakers also pointed out that ID.me is not subject to the same oversight rules as a government agency, and asked what assurances and rights would be afforded taxpayers under the collaboration, as taxpayers apparently would be subject to multiple terms of agreement filled with dense legal print.

ID.me defended its technology. "We are committed to working together with the IRS to implement the best identity verification solutions to prevent fraud, protect Americans' privacy, and ensure equitable, bias-free access to government services," said the company in a statement. "To date, IRS and ID.me have worked together to substantially increase the identity verification pass rates from previous levels. These services are essential to helping prevent government benefits fraud that is occurring on a massive scale."

In the House, Rep. Jackie Walorski, R-Indiana, a senior member of the tax-writing House Ways and Means Committee, introduced the Save Taxpayers Privacy Act, which would prevent the IRS from requiring taxpayers to use facial recognition technology to pay taxes or access account information. The bill would prohibit the Treasury from requiring the technology for access to any IRS online account or service.

"It is outrageous that the IRS is planning to force American taxpayers to submit to dangerous facial recognition software in order to fulfill their basic civic responsibility," Walorski said in a statement Friday. "Given the agency's previous failures to safeguard Americans' private data and prevent political weaponization within its ranks, emboldening the IRS with any additional sensitive data or personal information would be a disservice to taxpayers and an affront to their rights. In the 21st century, the IRS can use secure alternatives to confirm taxpayers' identities without resorting to facial recognition surveillance. To protect taxpayers' privacy and security, I introduced legislation to stop IRS spying and defend citizens' right to privacy."


Technology-first broker Prevu is more than its message: Tech Review – Inman

Posted: at 5:05 am

Prevu is a real estate buyer solution with a tech-forward, high-touch approach to working with customers. It uses salaried agents and customer concierge staff to assist buyers with its branded property search, market questions and offer submission.


Platforms: Browser
Ideal for: Agents considering new brokerages, homebuyers

While tech-centered, Prevu is a brokerage. Its model is unique but might face scaling issues without a mechanism for taking on listings.

Prevu is a brokerage, but its prominent use of technology warrants a review of how it delivers service.


This is a challenging company to write about because of its multiple value propositions to the industry. As of now, it leads with buyer rebates and low commissions. In that respect, it's the same sort of "look at what you can save" argument as IdealAgent and Redfin.

Although saving money will always appeal to consumers, it doesn't resonate with those who automatically see savings as another term for limited service. That unfortunate stigma is even more pronounced in real estate, thanks largely to FSBO firms and other alternatives that sell a la carte listing marketing services.

But, while Prevu may be a brokerage at heart, it's the technology that makes things pump.

The company leans heavily on buyer service, aiming to provide a more modernized, less paper-driven user experience. Although the team at Prevu describes its approach as similar to Expedia and Carvana, to me, its closer to the consumer insurance industrys move to shed its suit-and-tie, bureaucratic reputation with online quotes, service and mobile experiences.

Prevu spreads its technology evenly between buyers and the teams serving them. Collaboration is primarily chat- and email-driven. Buyers will work with both a concierge and an agent, the former elevating discussions to the latter as the relationship deepens.

Once onboard, buyers are offered access to Prevu's collaborative search portal, from which they can save homes, ignore bad matches, request tours and submit offers via the call-to-action block, a section of features designed to engage users.

Every listing in Prevu's search experience displays the potential rebate amount. The offer screen asks about funding source and down payment amount, and it encourages buyers to upload a qualification letter or some other proof of financial wherewithal. It's not a formal offer; rather, like many other online-offer tools today, it's designed to express legitimate intent. It also asks for a good time to schedule a conversation with an agent.
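As a rough mental model of what such a flow collects, here is a minimal sketch of an offer-intent record; every field name is a guess for illustration, not Prevu's actual schema.

```python
# Hypothetical shape of an online offer-intent payload. Illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OfferIntent:
    listing_id: str
    offer_price: int
    funding_source: str                          # e.g. "mortgage", "cash"
    down_payment: int
    prequalification_doc: Optional[str] = None   # uploaded letter, if any
    preferred_call_time: Optional[str] = None    # when an agent should follow up

intent = OfferIntent(
    listing_id="NYC-12345",
    offer_price=850_000,
    funding_source="mortgage",
    down_payment=170_000,
    preferred_call_time="weekday evenings",
)
```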

The benefit to agents who want to work with Prevu is that you are 100 percent focused on service. The company's marketing will provide you with the clients, and you provide the wisdom.

Agent tools include a CRM module for viewing buyer profiles, property interest, budget, offers made, tours taken and all messages that have transpired since initial engagement, whether via text or email. (In many cases, this is all you need a CRM to do.)

Tours that get scheduled are routed to the listing agent on record, according to the local MLS data that Prevu uses as its property source. Calendar alerts are sent out, and for now, Prevu can integrate with ShowingTime.

Other features of note are a buyer document library, text-based onboarding and no formal representation agreement, as part of a deliberate approach to keep things low-pressure.

For now, Prevu is operating in New York, Connecticut, California, Pennsylvania, Massachusetts and Washington.

Many may read this review and think that most of what's happening here has already been done. Agents use text. Redfin pays its agents salaries. Keller Williams says it's a technology company.

Most of that is true.

But Prevu, along with a small but growing cadre of up-and-coming colleagues, is instigating fissures in the foundation of the traditional brokerage model. Note that I refuse to use the D word, because to me that term signals naive haughtiness, an attempt to usurp for the sake of it.

Before you blow off that idea, know that 47 percent of brokers surveyed in NAR's 2021 Real Estate in a Digital Age report cited competition from nontraditional market participants as one of their biggest challenges in the next two years. Know, too, that Prevu has sold close to $1 billion to date and tripled its revenue year over year in 2021.

What I see in Prevu is an application of what its founders merely consider to be a better way to serve a consumer need. It's simply what they know as good business, a better way to provide value to homebuyers.

It should be noted as well that there is some Zillow and StreetEasy experience behind the company, so it's not as if these guys came in from outside the industry.

The ultimate intent here is to automate as much of the buyer journey as possible without letting buyers end up in the ditch. It's more consultative than heavy-handed and lead-driven.

Is it the future of how real estate services are offered? That's up to the consumers. Might want to think about that.

Have a technology product you would like to discuss? Email Craig Rowe

Craig C. Rowe started in commercial real estate at the dawn of the dot-com boom, helping an array of commercial real estate companies fortify their online presence and analyze internal software decisions. He now helps agents with technology decisions and marketing through reviewing software and tech for Inman.

