The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Monthly Archives: February 2022
Advancing AI trustworthiness: Updates on responsible AI research – Microsoft
Posted: February 5, 2022 at 5:05 am
Editor's note: This year in review of responsible AI research was compiled by Aether, a Microsoft cross-company initiative on AI Ethics and Effects in Engineering and Research, as outreach from their commitment to advancing the practice of human-centered responsible AI. Although many of the papers' authors are participants in Aether, the research presented here expands beyond that group, encompassing work from across Microsoft, as well as with collaborators in academia and industry.
Inflated expectations around the capabilities of AI technologies may lead people to believe that computers can't be wrong. The truth is AI failures are not a matter of if but when. AI is a human endeavor that combines information about people and the physical world into mathematical constructs. Such technologies typically rely on statistical methods, with the possibility for errors throughout an AI system's lifespan. As AI systems become more widely used across domains, especially in high-stakes scenarios where people's safety and wellbeing can be affected, a critical question must be addressed: how trustworthy are AI systems, and how much and when should people trust AI?
As part of their ongoing commitment to building AI responsibly, research scientists and engineers at Microsoft are pursuing methods and technologies aimed at helping builders of AI systems cultivate appropriate trust (that is, building trustworthy models with reliable behaviors and clear communication that set proper expectations). When AI builders plan for failures, work to understand the nature of the failures, and implement ways to effectively mitigate potential harms, they help engender trust that can lead to a greater realization of AI's benefits.
Pursuing trustworthiness across AI systems captures the intent of multiple projects on the responsible development and fielding of AI technologies. Numerous efforts at Microsoft have been nurtured by its Aether Committee, a coordinative cross-company council composed of working groups focused on technical leadership at the frontiers of innovation in responsible AI. The effort is led by researchers and engineers at Microsoft Research and from across the company and is chaired by Chief Scientific Officer Eric Horvitz. Beyond research, Aether has advised Microsoft leadership on responsible AI challenges and opportunities since the committee's inception in 2016.
The following is a sampling of research from the past year representing efforts across the Microsoft responsible AI ecosystem that highlight ways for creating appropriate trust in AI. Facilitating trustworthy measurement, improving human-AI collaboration, designing for natural language processing (NLP), advancing transparency and interpretability, and exploring the open questions around AI safety, security, and privacy are key considerations for developing AI responsibly. The goal of trustworthy AI requires a shift in perspective at every stage of the AI development and deployment life cycle. We're actively developing a growing number of best practices and tools to help with this shift and make responsible AI more available to a broader base of users. Many open questions remain, but as innovators, we are committed to tackling these challenges with curiosity, enthusiasm, and humility.
AI technologies influence the world through the connection of machine learning models (which provide classifications, diagnoses, predictions, and recommendations) with larger systems that drive displays, guide controls, and activate effectors. But when we use AI to help us understand patterns in human behavior and complex societal phenomena, we need to be vigilant. By creating models for assessing or measuring human behavior, we're participating in the very act of shaping society. Guidelines for ethically navigating technology's impacts on society (guidance born out of considering technologies for COVID-19) prompt us to start by weighing a project's risk of harm against its benefits. Sometimes an important step in the practice of responsible AI may be the decision to not build a particular model or application.
Human behavior and algorithms influence each other in feedback loops. In a recent Nature publication, Microsoft researchers and collaborators emphasize that existing methods for measuring social phenomena may not be up to the task of investigating societies where human behavior and algorithms affect each other. They offer five best practices for advancing computational social science. These include developing measurement models that are informed by social theory and that are fair, transparent, interpretable, and privacy preserving. For trustworthy measurement, it's crucial to document and justify the model's underlying assumptions, plus consider who is deciding what to measure and how those results will be used.
In line with these best practices, Microsoft researchers and collaborators have proposed measurement modeling as a framework for anticipating and mitigating fairness-related harms caused by AI systems. This framework can help identify mismatches between theoretical understandings of abstract concepts (for example, socioeconomic status) and how these concepts get translated into mathematics and code. Identifying mismatches helps AI practitioners anticipate and mitigate fairness-related harms that reinforce societal biases and inequities. A study applying a measurement modeling lens to several benchmark datasets for surfacing stereotypes in NLP systems reveals considerable ambiguity and hidden assumptions, demonstrating (among other things) that datasets widely trusted for measuring the presence of stereotyping can, in fact, cause stereotyping harms.
Flaws in datasets can lead to AI systems with unfair outcomes, such as poor quality of service or denial of opportunities and resources for different groups of people. AI practitioners need to understand how their systems are performing for factors like age, race, gender, and socioeconomic status so they can mitigate potential harms. In identifying the decisions that AI practitioners must make when evaluating an AI system's performance for different groups of people, researchers highlight the importance of rigor in the construction of evaluation datasets.
Making sure that datasets are representative and inclusive means facilitating data collection from different groups of people, including people with disabilities. Mainstream AI systems are often non-inclusive. For example, speech recognition systems do not work for atypical speech, while input devices are not accessible for people with limited mobility. In pursuit of inclusive AI, a study proposes guidelines for designing an accessible online infrastructure for collecting data from people with disabilities, one that is built to respect, protect, and motivate those contributing data.
When people and AI collaborate on solving problems, the benefits can be impressive. But current practice can be far from establishing a successful partnership between people and AI systems. A promising research direction is developing methods that learn the ideal ways to complement people in problem solving. In this approach, machine learning models are optimized to detect where people need the most help versus where people can solve problems well on their own. We can additionally train AI systems to decide when to ask an individual for input and how to combine human and machine abilities into a recommendation. In related work, studies have shown that people too often accept an AI system's outputs without question, relying on them even when they are wrong. Exploring how to facilitate appropriate trust in human-AI teamwork, experiments with real-world datasets for AI systems show that retraining a model with a human-centered approach can better optimize human-AI team performance. This means taking into account human accuracy, human effort, the cost of mistakes, and people's mental models of the AI.
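A minimal sketch of that routing idea follows. It is illustrative only, not the method from the cited research, and the human accuracy and query cost are made-up parameters: the system defers to the person whenever their expected accuracy, net of the cost of interrupting them, beats the model's own confidence.

```python
def route_decision(model_prob, human_accuracy=0.9, effort_cost=0.05):
    """Decide whether the AI answers alone or asks the human for input.

    model_prob is the model's confidence in its predicted label (binary
    case). Deferring is worthwhile when the human's expected accuracy,
    minus the cost of querying them, exceeds the model's confidence.
    """
    model_conf = max(model_prob, 1.0 - model_prob)
    if human_accuracy - effort_cost > model_conf:
        return "ask_human"
    return "ai_decides"

print(route_decision(0.55))  # low model confidence: defer to the human
print(route_decision(0.97))  # high model confidence: the AI decides
```

Real systems learn these quantities from data rather than fixing them, but the trade-off being optimized is the same.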
In systems for healthcare and other high-stakes scenarios, a break with the user's mental model can have severe impacts. An AI system can compromise trust when, after an update for better overall accuracy, it begins to underperform in some areas. For instance, an updated system for predicting cancerous skin moles may have an increase in accuracy overall but a significant decrease for facial moles. A physician using the system may either lose confidence in the benefits of the technology or, with more dire consequences, may not notice this drop in performance. Techniques for forcing an updated system to be compatible with a previous version produce tradeoffs in accuracy. But experiments demonstrate that personalizing objective functions can improve the performance-compatibility tradeoff for specific users by as much as 300 percent.
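One simple way to quantify this kind of compatibility, sketched here as an illustration rather than the exact metric used in the cited experiments, is the fraction of cases the old model handled correctly that the updated model still handles correctly:

```python
import numpy as np

def backward_trust_compatibility(y_true, old_pred, new_pred):
    """Fraction of examples the old model classified correctly that the
    updated model also classifies correctly. A score of 1.0 means nothing
    a user relied on has regressed; lower scores mean the update breaks
    expectations the old model had established."""
    y_true, old_pred, new_pred = map(np.asarray, (y_true, old_pred, new_pred))
    old_correct = old_pred == y_true
    if not old_correct.any():
        return 1.0  # vacuously compatible: there was nothing to preserve
    both_correct = old_correct & (new_pred == y_true)
    return float(both_correct.sum() / old_correct.sum())

y_true   = np.array([1, 0, 1, 1, 0])
old_pred = np.array([1, 0, 1, 0, 0])  # old model: 4/5 correct
new_pred = np.array([1, 0, 0, 1, 0])  # new model: also 4/5 correct...
print(backward_trust_compatibility(y_true, old_pred, new_pred))
# ...yet it now misses a case the old model got right, so the score is 0.75
```

Note how overall accuracy is unchanged between the two models while the compatibility score drops, which is exactly the skin-mole scenario above.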
System updates can have grave consequences when it comes to algorithms used for prescribing recourse, such as how to fix a bad credit score to qualify for a loan. Updates can mean that people who have dutifully followed a prescribed recourse are denied their promised rights or services, damaging their trust in decision makers. Examining the impact of updates caused by changes in the data distribution, researchers expose previously unknown flaws in the current recourse-generation paradigm. This work points toward rethinking how to design these algorithms for robustness and reliability.
Complementarity in human-AI performance, where the human-AI team performs better together by compensating for each other's weaknesses, is a goal for AI-assisted tasks. You might think that if a system provided an explanation of its output, this could help an individual identify and correct an AI failure, producing the best of human-AI teamwork. Surprisingly, and in contrast to prior work, a large-scale study shows that explanations may not significantly increase human-AI team performance. People often over-rely on recommendations even when the AI is incorrect. This is a call to action: we need to develop methods for communicating explanations that increase users' understanding rather than merely persuade.
The allure of natural language processing's potential, including rash claims of human parity, raises questions of how we can employ NLP technologies in ways that are truly useful, as well as fair and inclusive. To further these and other goals, Microsoft researchers and collaborators hosted the first workshop on bridging human-computer interaction and natural language processing, considering novel questions and research directions for designing NLP systems to align with people's demonstrated needs.
Language shapes minds and societies. Technology that wields this power requires scrutiny as to what harms may ensue. For example, does an NLP system exacerbate stereotyping? Does it exhibit the same quality of service for people who speak the same language in different ways? A survey of 146 papers analyzing bias in NLP observes rampant pitfalls of unstated assumptions and conceptualizations of bias. To avoid these pitfalls, the authors outline recommendations based on the recognition of relationships between language and social hierarchies as fundamentals for fairness in the context of NLP. We must be precise in how we articulate ideas about fairness if we are to identify, measure, and mitigate NLP systems' potential for fairness-related harms.
The open-ended nature of language (its inherent ambiguity, context-dependent meaning, and constant evolution) drives home the need to plan for failures when developing NLP systems. Planning for NLP failures with the AI Playbook introduces a new tool for AI practitioners to anticipate errors and plan human-AI interaction so that the user experience is not severely disrupted when errors inevitably occur.
To build AI systems that are reliable and fair, and to assess how much to trust them, practitioners and those using these systems need insight into their behavior. If we are to meet the goal of AI transparency, the AI/ML and human-computer interaction communities need to integrate efforts to create human-centered interpretability methods that yield explanations that can be clearly understood and are actionable by people using AI systems in real-world scenarios.
As a case in point, experiments investigating whether simple models that are thought to be interpretable achieve their intended effects rendered counterintuitive findings. When participants used an ML model considered to be interpretable to help them predict the selling prices of New York City apartments, they had difficulty detecting when the model was demonstrably wrong. Providing too many details of the model's internals seemed to distract and cause information overload. Another recent study found that even when an explanation helps data scientists gain a more nuanced understanding of a model, they may be unwilling to make the effort to understand it if it slows down their workflow too much. As both studies show, testing with users is essential to see if people clearly understand and can use a model's explanations to their benefit. User research is the only way to validate what is or is not interpretable by people using these systems.
Explanations that are meaningful to people using AI systems are key to the transparency and interpretability of black-box models. Introducing a weight-of-evidence approach to creating machine-generated explanations that are meaningful to people, Microsoft researchers and colleagues highlight the importance of designing explanations with people's needs in mind and evaluating how people use interpretability tools and what their understanding is of the underlying concepts. The paper also underscores the need to provide well-designed tutorials.
Traceability and communication are also fundamental for demonstrating trustworthiness. Both AI practitioners and people using AI systems benefit from knowing the motivation and composition of datasets. Tools such as datasheets for datasets prompt AI dataset creators to carefully reflect on the process of creation, including any underlying assumptions they are making and potential risks or harms that might arise from the dataset's use. And for dataset consumers, seeing the dataset creators' documentation of goals and assumptions equips them to decide whether a dataset is suitable for the task they have in mind.
Interpretability is vital to debugging and mitigating the potentially harmful impacts of AI processes that so often take place in seemingly impenetrable black boxes: it is difficult (and in many settings, inappropriate) to trust an AI model if you can't understand the model and correct it when it is wrong. Advanced glass-box learning algorithms can enable AI practitioners and stakeholders to see what's under the hood and better understand the behavior of AI systems. And advanced user interfaces can make it easier for people using AI systems to understand these models and then edit the models when they find mistakes or bias in them. Interpretability is also important for improving human-AI collaboration: it is difficult for users to interact and collaborate with an AI model or system if they can't understand it. At Microsoft, we have developed glass-box learning methods that are now as accurate as previous black-box methods but yield AI models that are fully interpretable and editable.
Recent advances at Microsoft include a new neural GAM (generalized additive model) for interpretable deep learning, a method for using dropout rates to reduce spurious interaction, an efficient algorithm for recovering identifiable additive models, the development of glass-box models that are differentially private, and the creation of tools that make editing glass-box models easy for those using them so they can correct errors in the models and mitigate bias.
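To make the glass-box idea concrete, here is a toy generalized additive model. It is an illustration of the model class only, unrelated to Microsoft's actual implementations: the prediction is an intercept plus one binned shape function per feature, so each feature's learned contribution can be inspected, plotted, and edited in isolation.

```python
import numpy as np

class TinyGAM:
    """Toy generalized additive model: prediction = intercept plus one
    binned shape function per feature. Because each feature's contribution
    is a simple lookup table, it can be audited and even edited by hand,
    which is the essence of a glass-box model."""

    def __init__(self, n_bins=8):
        self.n_bins = n_bins

    def fit(self, X, y, n_rounds=20):
        d = X.shape[1]
        self.intercept = y.mean()
        self.edges = [np.quantile(X[:, j], np.linspace(0, 1, self.n_bins + 1))
                      for j in range(d)]
        self.shape = [np.zeros(self.n_bins) for _ in range(d)]
        resid = y - self.intercept
        for _ in range(n_rounds):                          # backfitting
            for j in range(d):
                resid = resid + self._feature(X[:, j], j)  # add back old fit
                bins = self._bin(X[:, j], j)
                for b in range(self.n_bins):               # bin-wise means
                    mask = bins == b
                    if mask.any():
                        self.shape[j][b] = resid[mask].mean()
                resid = resid - self._feature(X[:, j], j)  # subtract new fit
        return self

    def _bin(self, x, j):
        return np.clip(np.searchsorted(self.edges[j], x) - 1, 0, self.n_bins - 1)

    def _feature(self, x, j):
        return self.shape[j][self._bin(x, j)]

    def predict(self, X):
        return self.intercept + sum(self._feature(X[:, j], j)
                                    for j in range(X.shape[1]))

# Fit on data whose ground truth is additive and nonlinear in feature 0.
rng = np.random.RandomState(0)
X = rng.rand(500, 2)
y = 2.0 * X[:, 0] ** 2 + X[:, 1]
model = TinyGAM().fit(X, y)
# model.shape[0] is feature 0's entire learned contribution: inspectable
# bin by bin, and directly correctable if a reviewer spots a bad value.
```

Production glass-box learners (such as explainable boosting) are far more accurate, but the editability shown here, replacing a value in a shape table, is the same mechanism that lets practitioners correct errors and mitigate bias.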
When considering how to shape appropriate trust in AI systems, there are many open questions about safety, security, and privacy. How do we stay a step ahead of attackers intent on subverting an AI system or harvesting its proprietary information? How can we avoid a system's potential for inferring spurious correlations?
With autonomous systems, it is important to acknowledge that no system operating in the real world will ever be complete. It's impossible to train a system for the many unknowns of the real world. Unintended outcomes can range from annoying to dangerous. For example, a self-driving car may splash pedestrians on a rainy day or erratically swerve to localize itself for lane-keeping. An overview of emerging research in avoiding negative side effects due to AI systems' incomplete knowledge points to the importance of giving users the means to avoid or mitigate an AI system's undesired effects, which is essential to how the technology will be viewed and used.
When dealing with data about people and our physical world, privacy considerations take a vast leap in complexity. For example, it's possible for a malicious actor to isolate and re-identify individuals from information in large, anonymized datasets or from their interactions with online apps when using personal devices. Developments in privacy-preserving techniques face challenges in usability and adoption because of the deeply theoretical nature of concepts like homomorphic encryption, secure multiparty computation, and differential privacy. Exploring the design and governance challenges of privacy-preserving computation, interviews with builders of AI systems, policymakers, and industry leaders reveal confidence that the technology is useful, but the challenge is to bridge the gap from theory to practice in real-world applications. Engaging the human-computer interaction community will be a critical component.
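As one small, concrete example of these techniques, here is the classic Laplace mechanism for differential privacy, sketched for a simple counting query (the epsilon values shown are illustrative, not recommendations):

```python
import numpy as np

def private_count(true_count, epsilon, rng=None):
    """Release a count under epsilon-differential privacy via the Laplace
    mechanism. A counting query has sensitivity 1 (one person joining or
    leaving the data changes the answer by at most 1), so Laplace noise
    with scale 1/epsilon is enough to mask any individual's presence."""
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Smaller epsilon means stronger privacy and a noisier released answer.
print(private_count(1000, epsilon=0.1))   # typically off by tens
print(private_count(1000, epsilon=10.0))  # typically off by a fraction
```

Even this tiny sketch surfaces the usability gap the interviews describe: choosing epsilon, and explaining its privacy-accuracy trade-off to non-specialists, is itself a design challenge.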
AI is not a be-all, end-all solution; it's a powerful, albeit fallible, set of technologies. The challenge is to maximize the benefits of AI while anticipating and minimizing potential harms.
Admittedly, the goal of appropriate trust is challenging. Developing measurement tools for assessing a world in which algorithms shape our behaviors, exposing how systems arrive at decisions, planning for AI failures, and engaging the people on the receiving end of AI systems are all important pieces. But what we do know is that change can happen today, with each one of us pausing to reflect on our work and asking: what could go wrong, and what can I do to prevent it?
The rest is here:
Advancing AI trustworthiness: Updates on responsible AI research - Microsoft
Posted in Technology
Comments Off on Advancing AI trustworthiness: Updates on responsible AI research – Microsoft
GOP lawmakers move to stop IRS facial recognition technology – Accounting Today
Posted: at 5:05 am
Republicans in Congress are raising concerns about the Internal Revenue Service's move to use facial recognition technology to authenticate taxpayers before they can access their online accounts, introducing a bill that would ban the practice.
The IRS has contracted with ID.me, a third-party provider of authentication technology, to help deter identity theft by requiring taxpayers to send a selfie along with an image of a government document like a passport or driver's license before they can access their online taxpayer account or use tools like Get Transcript (see story). Although the new technology is intended to protect taxpayers from cybercriminals, it also has raised privacy concerns. The IRS has emphasized that taxpayers won't need such measures to file their taxes or pay taxes online. The agency began rolling out the technology this year for new taxpayer accounts and it's expected to be in place by this summer for existing accounts as well.
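Verification systems like this generally work by reducing each face image to an embedding vector and comparing the vectors. The sketch below shows only that comparison step, with a made-up threshold; ID.me's actual pipeline is proprietary and not described here.

```python
import numpy as np

def same_person(selfie_embedding, document_embedding, threshold=0.6):
    """Compare two face embeddings by cosine similarity. In a real system
    the embeddings come from a trained face-recognition network, and the
    threshold (illustrative here) trades false accepts against false
    rejects."""
    a = np.asarray(selfie_embedding, dtype=float)
    b = np.asarray(document_embedding, dtype=float)
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim >= threshold, sim

match, score = same_person([0.2, 0.9, 0.4], [0.25, 0.85, 0.38])
print(match)  # near-identical embeddings exceed the threshold
```

The privacy concerns in the article follow directly from this design: the selfie and the embedding derived from it are biometric data that, unlike a password, cannot be rotated if compromised.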
Andrew Harrer/Bloomberg
The Treasury is already looking into alternatives to ID.me over privacy concerns amid reports that the company has amassed images of tens of millions of faces from its contracts with other federal and state government agencies and businesses (see story). But if Congress bans the use of such technology, or discourages the IRS from requiring it, that could prompt the agency to find other ways to authenticate users besides selfies.
In a letter Thursday, a group of Senate Republicans, led by Senate Finance Committee ranking member Mike Crapo, R-Idaho, questioned the IRS's plans to expand its collaboration with ID.me by requiring taxpayers to have an ID.me account to access some of the main IRS online resources. To register with ID.me, taxpayers will need to submit to ID.me personal information, including sensitive biometric data, starting this summer.
"The IRS has unilaterally decided to allow an outside contractor to stand as the gatekeeper between citizens and necessary government services," the senators wrote in a letter to IRS Commissioner Chuck Rettig. "The decision millions of Americans are forced to make is to pay the toll of giving up their most personal information, biometric data, to an outside contractor or return to the era of a paper-driven bureaucracy where information moves slow, is inaccurate, and some would say is processed in ways incompatible with contemporary life."
They raised a number of issues, pointing out that a selfie couldn't be changed if it's compromised, unlike a password. They also asked about cybersecurity standards and how the sensitive data would be stored and protected. The lawmakers also pointed out that ID.me is not subject to the same oversight rules as a government agency, and asked what assurances and rights would be afforded taxpayers under the collaboration, as taxpayers apparently would be subject to multiple terms of agreement filled with dense legal print.
ID.me defended its technology. "We are committed to working together with the IRS to implement the best identity verification solutions to prevent fraud, protect Americans' privacy, and ensure equitable, bias-free access to government services," said the company in a statement. "To date, IRS and ID.me have worked together to substantially increase the identity verification pass rates from previous levels. These services are essential to helping prevent government benefits fraud that is occurring on a massive scale."
In the House, Rep. Jackie Walorski, R-Indiana, a senior member of the tax-writing House Ways and Means Committee, introduced the Save Taxpayers' Privacy Act, which would prevent the IRS from requiring facial recognition technology to pay taxes or access account information. The bill would prohibit the Treasury from requiring the technology for access to any IRS online account or service.
"It is outrageous that the IRS is planning to force American taxpayers to submit to dangerous facial recognition software in order to fulfill their basic civic responsibility," Walorski said in a statement Friday. "Given the agency's previous failures to safeguard Americans' private data and prevent political weaponization within its ranks, emboldening the IRS with any additional sensitive data or personal information would be a disservice to taxpayers and an affront to their rights. In the 21st century, the IRS can use secure alternatives to confirm taxpayers' identities without resorting to facial recognition surveillance. To protect taxpayers' privacy and security, I introduced legislation to stop IRS spying and defend citizens' right to privacy."
View post:
GOP lawmakers move to stop IRS facial recognition technology - Accounting Today
Posted in Technology
Comments Off on GOP lawmakers move to stop IRS facial recognition technology – Accounting Today
Technology-first broker Prevu is more than its message: Tech Review – Inman
Posted: at 5:05 am
Prevu is a real estate buyer solution with a tech-forward, high-touch approach to working with customers. It uses salaried agents and customer concierge staff to assist buyers with its branded property search, market questions and offer submission.
Are you receiving Inman's Agent Edge? Make sure you're subscribed for the latest on real estate technology from Inman's expert Craig Rowe.
Platforms: Browser
Ideal for: Agents considering new brokerages, homebuyers
While tech-centered, Prevu is a brokerage. Its model is unique but might face scaling issues without a mechanism for taking on listings.
Prevu is a brokerage, but its prominent use of technology warrants a review of how it delivers service.
This is a challenging company to write about because of its multiple value propositions to the industry. As of now, it leads with buyer rebates and low commissions. In that respect, it's the same sort of "look at what you can save" argument as IdealAgent and Redfin.
Although saving money will always appeal to consumers, it doesn't resonate with those who automatically see "savings" as another term for "limited service." That unfortunate stigma is even more pronounced in real estate, thanks largely to FSBO firms and other alternatives that sell a la carte listing marketing services.
But, while Prevu may be a brokerage at heart, it's the technology that makes things pump.
The company leans heavily on buyer service, aiming to provide a more modernized, less paper-driven user experience. Although the team at Prevu describes its approach as similar to Expedia and Carvana, to me, it's closer to the consumer insurance industry's move to shed its suit-and-tie, bureaucratic reputation with online quotes, service and mobile experiences.
Prevu spreads its technology evenly between buyers and the teams serving them. Collaboration is primarily chat- and email-driven. Buyers will work with both a concierge and an agent, the former elevating discussions to the latter as the relationship deepens.
Once onboard, buyers are offered access to Prevu's collaborative search portal, from which they can save homes, ignore bad matches, request tours and submit offers via the call-to-action block, a section of features designed to engage users.
Every listing in Prevu's search experience displays the potential rebate amount. The offer screen asks about funding source and down payment amount, and it encourages buyers to upload a qualification letter or some other proof of financial wherewithal. It's not a formal offer; rather, like many other online-offer tools today, it's designed to express legitimate intent. It also asks for a good time to schedule a conversation with an agent.
The benefit to agents who want to work with Prevu is that you are 100 percent focused on service. The company's marketing will provide you with the clients, and you provide the wisdom.
Agent tools include a CRM module for viewing buyer profiles, property interest, budget, offers made, tours taken and all messages that have transpired since initial engagement, whether via text or email. (In many cases, this is all you need a CRM to do.)
Tours that get scheduled are routed to the listing agent on record, according to the local MLS data that Prevu uses as its property source. Calendar alerts are sent out, and for now, Prevu can integrate with ShowingTime.
Other features of note are a buyer document library, text-based onboarding and no formal representation agreement, as part of a deliberate approach to keep things low-pressure.
For now, Prevu is operating in New York, Connecticut, California, Pennsylvania, Massachusetts and Washington.
Many may read this review and think that most of what's happening here has already been done. Agents use text. Redfin pays agents. Keller Williams says it's a technology company.
Most of that is true.
But Prevu, along with a small but growing cadre of up-and-coming colleagues, is instigating fissures in the foundation of the traditional brokerage model. Note that I refuse to use the "D" word because to me that term signals naive haughtiness, an attempt to usurp for the sake of it.
Before you blow off that idea, know that 47 percent of brokers surveyed in NAR's 2021 Real Estate in a Digital Age report cited competition from nontraditional market participants as one of their biggest challenges in the next two years. Know too that Prevu has sold close to $1 billion to date and tripled its revenue year over year in 2021.
What I see in Prevu is an application of what its founders merely consider to be a better way to serve a consumer need. It's simply what they know as a good business, a better way to provide value to homebuyers.
It should be noted as well that there is some Zillow and StreetEasy experience behind the company, so it's not as if these guys came in from outside the industry.
The ultimate intent here is to automate as much of the buyer journey as possible without letting buyers drive into the ditch. It's more consultative than heavy-handed and lead-driven.
Is it the future of how real estate services are offered? That's up to the consumers. Might want to think about that.
Have a technology product you would like to discuss? Email Craig Rowe
Craig C. Rowe started in commercial real estate at the dawn of the dot-com boom, helping an array of commercial real estate companies fortify their online presence and analyze internal software decisions. He now helps agents with technology decisions and marketing through reviewing software and tech for Inman.
Read this article:
Technology-first broker Prevu is more than its message: Tech Review - Inman
Posted in Technology
Comments Off on Technology-first broker Prevu is more than its message: Tech Review – Inman
Cybersecurity Risks of Biometric Related Technology Use – The National Law Review
Posted: at 5:05 am
Thursday, February 3, 2022
Facial recognition, voiceprint, and other biometric-related technology are booming, and they continue to infiltrate different facets of everyday life. The technology brings countless potential benefits, as well as significant data privacy and cybersecurity risks.
Whether it is facial recognition technology being used with COVID-19 screening tools and in law enforcement, continued use of fingerprint-based time management systems, or the use of various biometric identifiers such as voiceprint for physical security and access management, applications in the public and private sectors involving biometric identifiers and information continue to grow, and so do concerns about the privacy and security of that information and about civil liberties. Over the past few years, significant compliance and litigation risks have emerged that factor heavily into the deployment of biometric technologies, particularly facial recognition.
Research suggests that the biometrics market is expected to grow to approximately $44 billion in 2026 (from about $20 billion in 2020). This is easy to imagine, considering how ubiquitous biometric applications have become in everyday life. Biometrics are used for identity verification in a myriad of circumstances, such as unlocking smartphones, accessing theme parks, operating cash registers, clocking in and out of work, and travelling by plane. Concerns about security and identity theft, coupled with weak practices around passwords, have led some to ask whether biometrics will eventually replace passwords for identity verification. While that remains to be seen, there is little doubt the use of biometrics will continue to expand.
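Those two market figures imply a compound annual growth rate of roughly 14 percent. As a quick illustrative calculation (not part of the cited research), the implied rate can be checked in a few lines of Python:

```python
# Implied compound annual growth rate (CAGR) for the biometrics market
# figures cited above: about $20 billion in 2020 growing to about
# $44 billion in 2026.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Return the compound annual growth rate implied by two values."""
    return (end_value / start_value) ** (1 / years) - 1

rate = cagr(20e9, 44e9, 2026 - 2020)
print(f"Implied CAGR: {rate:.1%}")  # Implied CAGR: 14.0%
```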
A significant piece of that market, facial recognition technology, has become increasingly popular in employment and consumer areas (e.g., employee access, passport check-in systems, and payments on smartphones), as well as with law enforcement. For approximately 20 years, law enforcement has used facial recognition technology to aid criminal investigation, but with mixed results, according to a New York Times report. Additionally, the COVID-19 pandemic has helped to drive broader use of this technology. The need to screen persons entering a facility for symptoms of the virus, including their temperature, led to increased use of thermal cameras, kiosks, and similar devices embedded with facial recognition capabilities. When federal and state unemployment benefit programs experienced massive fraud as they tried to distribute hundreds of billions in COVID-19 relief, many turned to facial recognition and similar technologies for help. By late summer 2021, more than half the states in the United States had contracted with ID.me to provide ID-verification services, according to a CNN report.
Many have objected to the use of this technology in its current form, however. They raise concerns about a lurch toward a more Orwellian society and about due process, noting some of the technology's shortcomings in accuracy and consistency. Others have observed that the ability to compromise the technology can become a new path to committing fraud against individuals.
Additionally, the use of voice recognition technology has seen massive growth in the past year. A new report from Global Market Insights, Inc. estimates the global market valuation for voice recognition technology will reach approximately $7 billion by 2026, in large part due to the surge of AI and machine learning across a wide array of devices, including smartphones, healthcare apps, banking apps, and connected cars, among many others. While the ease and efficacy of voice recognition technology is clear, the privacy and security obligations associated with this technology, as with facial recognition, cannot be overlooked.
With the increasingly broad and expanding use of facial recognition and other biometrics has come more regulation and the related compliance and litigation risks.
Perhaps one of the most well-known laws regulating biometric information is the Illinois Biometric Information Privacy Act (BIPA). Enacted in 2008, the BIPA was one of the first state laws to address a business's collection of biometric data. The BIPA protects biometric identifiers (a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry) and biometric information (any information, regardless of how it is captured, converted, stored, or shared, based on an individual's biometric identifier and used to identify the individual). The law established a comprehensive set of rules for companies collecting biometric identifiers and information from state residents, including the following key features:
Informed consent in connection with collection
Disclosure limitation
Reasonable safeguard and retention guidelines
Prohibition on profiting from biometric data
A private right of action for individuals harmed by violations of the BIPA
The BIPA largely went unnoticed until 2015, when a series of five similar class action lawsuits were brought against businesses. The lawsuits alleged unlawful collection and use of the biometric data of Illinois residents. Since the BIPA was enacted, more than 750 putative class action lawsuits have been filed. The onslaught is primarily due to the BIPA's private right of action provision, which provides statutory damages of up to $1,000 for each negligent violation and up to $5,000 for each intentional or reckless violation. Adding fuel to the fire, the Illinois Supreme Court ruled that an individual is aggrieved under the BIPA and has standing to sue for technical violations, such as a failure to provide the law's required notice. Rosenbach v. Six Flags Entertainment Corp., No. 123186 (Ill. Jan. 25, 2019). While most of these cases involved collection of fingerprints for time management systems, several involved facial recognition, including one that reportedly settled for $650 million. In 2021, a new wave of BIPA litigation arose with the increased use of voice recognition technology by businesses. While general voice data is not covered by the BIPA, voiceprints have a personal identifying quality, potentially making them subject to the BIPA. For example, a large fast-food chain is facing BIPA litigation over alleged use of AI voice recognition technology at its drive-throughs. Claims in both state and federal courts allege failures to implement BIPA-compliant data retention policies, informed consent requirements, and prohibitions on profiting and disclosure.
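Those per-violation figures lend themselves to a quick back-of-the-envelope exposure estimate, which helps explain the volume of litigation. The sketch below is illustrative arithmetic only, not legal analysis, and the class size is hypothetical:

```python
# Illustrative only: rough statutory-damages exposure under BIPA's
# private right of action, using the per-violation figures cited above
# ($1,000 per negligent violation, $5,000 per intentional/reckless one).
NEGLIGENT_DAMAGES = 1_000
RECKLESS_DAMAGES = 5_000

def bipa_exposure(negligent_violations: int, reckless_violations: int) -> int:
    """Aggregate statutory damages for the given counts of violations."""
    return (negligent_violations * NEGLIGENT_DAMAGES
            + reckless_violations * RECKLESS_DAMAGES)

# e.g., a hypothetical 10,000-member class alleging one negligent
# violation each already implies eight-figure exposure:
print(bipa_exposure(10_000, 0))  # 10000000
```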
Many have argued that the BIPA went too far, opening the floodgates to litigation for plaintiffs who, in many cases, suffered little to no harm. Indeed, efforts have been made to moderate the BIPA's impact. However, massive data breaches and surges in identity theft and fraud have supported calls for stronger measures to protect sensitive personal information, including with regard to the use of facial recognition. At the same time, mismatches and allegations of bias in the application of facial recognition have led to calls for changes.
In the last year, there has been an uptick in hackers trying to trick facial recognition technology in many settings, such as fraudulently claiming unemployment benefits from state workforce agencies. The majority of states use facial recognition technology to verify persons eligible for government benefits and to prevent fraud. ID.me, Inc., which provides facial recognition technology to help verify individual eligibility for unemployment benefits, saw over 80,000 attempts to fool government identification facial recognition systems between June 2020 and January 2021. Hackers of facial recognition systems use a myriad of techniques, including deepfakes (AI-generated images), special masks, or even holding up images or videos of the individual the hacker is looking to impersonate.
Fraud is not the only concern with facial recognition technology. Despite its appeal for employers and organizations, there are concerns over the technology's accuracy, as well as significant legal implications to consider. Importantly, there are growing concerns regarding accuracy and biases of the technology. A report by the National Institute of Standards and Technology on a study of 189 facial recognition algorithms, considered the majority of the industry, found that most of the algorithms exhibit bias, falsely identifying Asian and Black faces 10 to more than 100 times more often than White faces. Moreover, false positives are significantly more common for women than men, and higher for the elderly and children than for middle-aged adults.
A result has been increasing regulation of the use of biometrics, including facial recognition. Examples include:
Facial Recognition Bans. Several U.S. localities have banned the use of facial recognition for law enforcement, other government agencies, or private and commercial use.
Portland. In September 2020, the City of Portland, Oregon, became the first city in the United States to ban the use of facial recognition technologies in the private sector. Proponents of the measure cited a lack of standards for the technology and wide ranges in accuracy and error rates that differ by race and gender, among other criticisms.
The term facial recognition technologies is broadly defined to include automated or semi-automated processes using face recognition that assist in identifying, verifying, detecting, or characterizing facial features of an individual, or capturing information about an individual based on the individual's face. The ordinance carves out limited exceptions, including the use of facial recognition technologies to comply with law, to verify users of personal and employer-provided devices, and for social media applications. Failure to comply can be painful. Like the BIPA, the ordinance provides persons injured by a material violation a cause of action for damages or $1,000 per day for each day of violation, whichever is greater.
Baltimore. The City of Baltimore, for example, has banned the use of facial recognition technologies by city residents, businesses, and most of the city's government (excluding the police department) until December 2022. Council Bill 21-0001 prohibits persons from obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from such technology. Any person who violates the ordinance is guilty of a misdemeanor and, on conviction, is subject to a fine of not more than $1,000, imprisonment for not more than 12 months, or both.
Biometrics, Generally. Beyond the BIPA, state and local governments have enacted laws to regulate the collection, use, and disclosure of biometric information. Here are a few examples:
Texas, Washington, and New York. Both Texas and Washington have enacted comprehensive biometric laws similar to the BIPA, but without the same kind of private-right-of-action provision. New York, on the other hand, is considering a BIPA-like privacy bill that mirrors the BIPA enforcement scheme.
The California Consumer Privacy Act (CCPA). Modeled to some degree after the EU's General Data Protection Regulation (GDPR), the CCPA seeks to provide individuals who are residents of California (consumers) greater control over their personal information. Cal. Civ. Code 1798.100 et seq. Personal information is defined broadly and is broken into several categories, one being biometric information. In addition to new rights relating to their personal information (such as the right to opt out of the sale of their personal information), consumers have a private right of action relating to data breaches. If a CCPA-covered business experiences a data breach involving personal information, such as biometric information, the CCPA authorizes a private cause of action against the business if a failure to implement reasonable security safeguards caused the breach. For this purpose, the CCPA points to personal information as defined in subparagraph (A) of paragraph (1) of subdivision (d) of Section 1798.81.5. That section defines biometric information as "unique biometric data generated from measurements or technical analysis of human body characteristics, such as a fingerprint, retina, or iris image, used to authenticate a specific individual. Unique biometric data does not include a physical or digital photograph, unless used or stored for facial recognition purposes." Cal. Civ. Code 1798.150. If successful, a plaintiff can seek to recover statutory damages in an amount not less than $100 and not greater than $750 per consumer per incident, or actual damages, whichever is greater, along with injunctive or declaratory relief and any other relief the court deems proper. This means that, as under the BIPA, plaintiffs generally do not have to show actual harm to recover.
New York City. The Big Apple amended Title 22 of its Administrative Code to create BIPA-like requirements for retail, restaurant, and entertainment businesses concerning the collection of biometric information from customers. Under the law, customers have a private right of action to remedy violations, subject to a 30-day notice-and-cure period, with damages ranging from $500 to $5,000 per violation, along with attorneys' fees.
In addition, New York City passed the Tenant Privacy Act, which, among other things, requires owners of smart access buildings (i.e., those that use key fobs, mobile apps, biometric identifiers, or other digital technologies to grant access to their buildings) to provide privacy policies to their tenants prior to collecting certain types of data from them. It also strictly limits (a) the categories and scope of data that the building owner collects from tenants, (b) how it uses that data (including a prohibition on data sales), and (c) how long it retains the data. The law creates a private right of action for tenants whose data is unlawfully sold; those tenants may seek compensatory damages or statutory damages ranging from $200 to $1,000 per tenant, plus attorneys' fees.
Other states. Additionally, states are increasingly amending their breach notification laws to add biometric information to the categories of personal information that require notification, including 2020 amendments in California, Vermont, and Washington, D.C. Moreover, there are a myriad of data destruction, reasonable safeguards, and vendor requirements to consider, depending on the state, when collecting biometric data.
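The CCPA's "whichever is greater" recovery rule described above reduces to simple arithmetic. The sketch below is illustrative only, not legal analysis; the consumer count and actual-damages figure are hypothetical, and per-consumer statutory damages are clamped to the $100-to-$750 range the statute sets:

```python
# Illustrative sketch of the CCPA data-breach recovery rule cited above:
# statutory damages of $100-$750 per consumer per incident OR actual
# damages, whichever is greater.
STATUTORY_MIN, STATUTORY_MAX = 100, 750

def ccpa_recovery(consumers: int, actual_damages: int,
                  per_consumer: int = STATUTORY_MAX) -> int:
    """Greater of aggregate statutory damages and actual damages."""
    per_consumer = max(STATUTORY_MIN, min(per_consumer, STATUTORY_MAX))
    return max(consumers * per_consumer, actual_damages)

# A hypothetical breach affecting 1,000 consumers with $50,000 in
# provable actual damages: the statutory aggregate dominates.
print(ccpa_recovery(1_000, actual_damages=50_000))  # 750000
```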
Organizations that collect, use, and store biometric data increasingly face compliance obligations as the law attempts to keep pace with technology, cybersecurity crimes, and public awareness of data privacy and security. It is critical that they maintain a robust privacy and data protection program to ensure compliance and minimize business and litigation risks.
Jackson Lewis P.C. 2022. National Law Review, Volume XII, Number 34
Polunsky Beitel Green Recognized as a Legal Technology Trailblazer by The National Law Journal – Business Wire
SAN ANTONIO--(BUSINESS WIRE)--Polunsky Beitel Green, the country's leading law firm representing mortgage lenders, has been named one of the nation's most innovative law firms by The National Law Journal. The publication's 2022 Legal Technology Trailblazers, which recognizes law firms and companies that have used technology to change the way their businesses operate, honored PBG for its development of proprietary technology that automates and streamlines the mortgage loan document preparation and review process. The National Law Journal's full profile of the firm is available here.
Texas-based PBG occupies a specialized niche in the residential real estate finance industry, representing mortgage lenders in Texas and other states that require closing documents to be reviewed by the lenders third-party legal counsel.
PBG blends a team of renowned mortgage lending lawyers with a secure technology and workflow solution, enabling the firm to prepare and review closing documents for an astounding 30,000-plus transactions each month. The system automatically retrieves data from its clients systems and routes the appropriate data to PBGs team of more than 300 mortgage document specialists, who quickly confirm the accuracy of critical data so that they can turn their attention to identifying legal or regulatory compliance concerns that require more in-depth involvement from the firms lawyers.
"Our technology, process, and a team of the most talented lawyers and professionals I could ever hope to work with facilitate the efficient resolution of legal or compliance impediments, allowing files to proceed to closing without undue delay," said Eric Gilbert, Polunsky Beitel Green's chief technology officer and the architect of its technology platform. "The technology is the engine that enables us to provide a detailed, meaningful legal review of loan documents, while also addressing the need for perfect documents, delivered quickly and seamlessly every time."
Legal industry experts have defined New Law as having four critical characteristics, and PBG is perhaps the first to have demonstrated a mastery of each: technology, alternative staffing, process improvement, and use of data.
"We are delighted to be recognized as a New Law leader by one of the legal profession's bellwether media outlets," said Allan Polunsky, managing partner and founder of Polunsky Beitel Green. "This recognition is a testament to our staff's extreme dedication to service, which drives us each day to find better, more efficient ways of helping our clients."
About Polunsky Beitel Green
Polunsky Beitel Green is Texas' oldest law firm exclusively dedicated to providing residential mortgage originators with document preparation and review services, as well as legal, regulatory and compliance support. The firm's principals, Allan Polunsky, Jay Beitel and Marty Green, have more than 100 years of combined experience in the specialized field of residential mortgage lending. Polunsky Beitel Green has offices in San Antonio, Dallas and Houston, with firm employees also embedded in clients' offices throughout Texas and in more than 25 other states. Collectively, the firm serves residential mortgage lenders in all 50 U.S. states.
Technology used in mRNA COVID vaccines offers hope for millions with heart disease, study suggests – The Columbus Dispatch
Combining technologies that proved hugely successful against cancer and in COVID-19 vaccines, researchers at the University of Pennsylvania have shown they can effectively treat a leading cause of heart disease.
For now, the success has only been achieved in mice, but the milestone offers hope for millions of people whose heart muscle is damaged by scar tissue.
"There is no effective treatment for this fibrosis, which leads to heart disease, the leading cause of death in the United States," said Dr. Jonathan Epstein, a Penn professor of cardiovascular research who helped lead the new work, published last month in the journal Science.
In his new research, Epstein reversed fibrosis by re-engineering cells, as has been done with a successful blood cancer treatment called CAR-T, which stands for chimeric antigen receptor T cells. In this case, however, the treatment took place inside the body rather than in a lab dish.
The team delivered the treatment using mRNA technology, which has been proven over the last year, with hundreds of millions of people receiving mRNA-based COVID vaccines.
"Ifit works (in people), it really could have enormous impact," Epstein said."Almost every type of heart disease is accompanied by fibrosis."
About 50% of heart failure is directly caused by this scar tissue, which prevents the heart from relaxing and pumping effectively. Fibrosis is also involved in leading causes of lung and kidney disease.
In the decade-old CAR-T approach to fighting blood cancers, developed at Penn by study co-author Carl June, immune cells from the patient are taken out of the body and genetically altered to identify tumor cells. Then, they're reinserted so they can destroy the cancer.
CAR-T has been hugely expensive because it's personalized for every patient. By working inside the body, the new approach would allow treatment with the same generic approach for everyone.
"It is now scalable. That makes it to me really more exciting," Epstein said.
Unlike cancer therapy, where every last cancer cell has to be killed to prevent recurrence, in fibrosis, almost any significant reduction will improve someone's quality of life, he said.
Though still a long way from helping people, the method shows the potential of mRNA technology well beyond COVID vaccines.
"It's really cool," said Dr. Crystal Mackall a Stanford University cancer researcherwho uses CAR-Tto treat cancer and was not involved in this work. "I think we all knew when the COVID vaccine was so successful and so well tolerated in so many people… those of us who are scientists immediately began thinking, 'Wow, what else can Ido with this?'"
In the COVID vaccine, mRNA spurs cells to make a protein normally found on the surface of the coronavirus. That way, when the immune system sees the actual virus, it will recognize the protein and attack the virus before it can do serious damage.
In the new application, the mRNA trains the cells to produce a protein found on the surface of fibrotic cells, so immune cells will destroy them.
In previous studies, engineered T cells were delivered in a way that allowed them to persist over a long time, risking that the immune system would attack other fibrotic cells, including those involved in wound healing. By delivering the protein with mRNA, which only sticks around for a few days, the researchers think they can avoid this problem.
"The window for potential trouble is relatively small," Epstein said.
This short-term durability is a major advantage, he and others said.
"The idea that you could do this over a period of days is actually pretty exciting," said Dr. Stanley Riddell, a professor and immunology expert at the Fred Hutchinson Cancer Research Center in Seattle. "It's a very nice application of cutting-edge synthetic biology."
Still, unexpected problems could crop up, and the Penn team remains a long way from safely treating people with fibrotic heart disease, Epstein said.
Next, they plan to test their approach in larger mammals before moving on to people, hopefully in about two years. They still have to work out the most appropriate dose and how many times the treatment might need to be delivered to be most effective, he said.
The research team has started a company to help advance the technology.
One advantage, Epstein said, is that imaging technology can now "see" fibrotic tissue, allowing doctors to evaluate a patient's disease and response to therapy. "There are tools that already exist to bring this forward," he said.
Like many great scientific advances, the idea behind the new treatment approach started with a chance meeting in an elevator.
One of Epstein's graduate students had wondered aloud about the possibility of using CAR-Ts to treat cardiac fibrosis. A few days later, Epstein ran into Carl June in an elevator and posed the same question.
Graduate students led the effort, because "they have the energy to go back and forth between labs," Epstein said, "and they're smart enough to learn different disciplines."
The teams had been collaborating for several years when Dr. Drew Weissman, a Penn scientist whose research underlies mRNA vaccines, approached them to suggest delivering the treatment via mRNA.
"I just walked into Jon's office and said, 'We can do this,'" Weissman said.
Weissman, not surprisingly, is a big believer in mRNA technology, which is already being tried in other vaccines to prevent the flu, shingles and respiratory syncytial virus, as well as cancer. The new study shows it has much broader potential, he said.
Fibrosis is a part of many diseases, not just heart disease. Duchenne muscular dystrophy, pulmonary fibrosis, scleroderma and COVID lung are all caused by a hardening of vital tissues, noted Weissman, who is now using mRNA as the basis for an experimental HIV vaccine. People are also experimenting with using mRNA to treat autoimmune disease and to deliver gene therapies.
"The potential for it really is enormous," Weissman said. "It's the beginning of the RNA world."
Contact Karen Weintraub at kweintraub@usatoday.com.
Health and patient safety coverage at USA TODAY is made possible in part by a grant from the Masimo Foundation for Ethics, Innovation and Competition in Healthcare. The Masimo Foundation does not provide editorial input.
Extreme green: the new issue of Future Power Technology is out now – Power Technology
How can you ensure renewable power technology is resilient enough? The answer, perhaps, is to deploy it in the Arctic, one of the world's harshest environments, and that is exactly what a team of Russian researchers has done. The $27m Snowflake facility is a proving ground for renewable power generation, and could prove crucial in the world's ongoing efforts to cut its carbon footprint by demonstrating that industry, administration and workers can all function in the most extreme of environments.
Elsewhere, we look into recent power supply struggles in the UK and the US, and ask if widespread freezes or soaring energy prices are the result of temporary hardships, or signs of deeper issues within their respective power industries. With both countries eager to position themselves as leaders in the worlds fight against climate change, they face serious questions about fixing their own domestic energy supplies, before going on to tackle global challenges. We also ask what more decision-makers in the electric vehicle sector can do to spread clean cars to the US, and consider the potential, and challenges, associated with concentrated solar power.
Whether you are on a desktop, tablet or smartphone, you can read the magazine for free online, and join the conversation on Twitter.
Arctic exploration: developing green energy technology in an extreme environment
A $27m clean energy-powered Russian research facility is being built in the Arctic to bring carbon-free technologies to the remote and climatically harsh region. Heidi Vella speaks to the project's lead, Yury Vasiliev, at the Moscow Institute of Physics and Technology to find out more.
Read more.
Fixing the UK's broken energy market
Regulations meant to ensure low energy prices have trapped utilities in a death spiral. Matthew Farmer investigates the challenges facing the UK energy market.
Read more.
High-power potential: the future of concentrated solar power
As photovoltaic solar production grows around the world, concentrated solar power has historically been left behind. JP Casey speaks to John King of Hyperlight Energy to learn how the latter's efficient and flexible characteristics could aid in the world's clean energy transition.
Read more.
Keeping the lights on in the US's stormy century
As extreme weather events become fiercer and more frequent, what steps are operators taking to keep maintenance manageable? Matthew Farmer investigates current US power infrastructure.
Read more.
The US electric vehicle market needs to shift a gear
With the US lagging behind Europe and China in the transition to electric vehicles, an ING report says more needs to be done to promote the technology. Andrew Tunnicliffe talks with co-author Rico Luman about what the country's ambitions are and how they might be met.
Read more.
The lynchpin of the world's decarbonisation efforts, or an unsafe practice always a step away from a humanitarian disaster? The truth surrounding nuclear power is likely somewhere in the middle, and questions remain as to whether its undoubted power potential can be realised amid safety concerns and financing challenges for such large-scale projects. In our next issue, we'll profile some of the world's newest nuclear plants and assess whether they could be the future of power.
Meet the 14-year-old who develops fire prevention technology – Inhabitat
While most middle schoolers were learning about history and grammar, young climate activist Ryan Honary was putting his passion for STEM to work. Living in California, he witnessed the devastating 2018 Camp Fire, which killed 85 people and destroyed over 18,000 structures. It led Honary to develop a fire-detection technology to help avoid wildfire disasters in the future.
His invention earned the Grand Prize at the 2019 Ignite Innovation Student Challenge. He also established the Early Wildfire Detection Network, for which he was named the 2020 American Red Cross Disaster Services Hero for Orange County.
Now 14, Honary has achieved more in the way of business development, award-winning ideas and climate action than most people on the planet. His invention caught the attention of the Irvine Ranch Conservancy, a non-profit, non-advocacy organization created in 2005 to help preserve and support the Irvine Ranch Natural Landmarks. The organization aims to encourage citizens to connect with the land and facilitates stewardship through landowners in the area.
In alignment with these goals, the Irvine Ranch Conservancy invited Honary to conduct a pilot project with its support. The goal is to evaluate the potential for the proprietary AI-driven sensor network technology.
The system will be put to work, testing its ability to prevent fire through detection, measurement, notification and prediction of a variety of environmental threats. For example, the technology monitors air and water pollution and soil moisture levels. It will be deployed in early 2022, with research continuing throughout the year.
"We were impressed with Ryan's research, and we are excited about its potential to improve our ability to detect threats and monitor our natural resources, which are essential to our adaptive management approach," said Dr. Nathan Gregory, vice president and chief programs officer of the Irvine Ranch Conservancy.
The emergency detection and response system relies on remote sensors and AI to identify fire outbreaks and predict spread patterns. The low-cost mesh network is easy to deploy and can be placed in remote locations that are otherwise unmonitored. The onboard technology allows communication via an app, to alert scientists and emergency responders.
In addition to his work with the Irvine Ranch Conservancy, Honary won the prestigious Office of Naval Research Naval Science Award. The award came in the form of a grant, which led to the formation of Honary's company, Sensory AI. Since the initial win in early 2020, the organization has issued several rounds of funding to further develop the technology.
Honary was also recognized as a top 30 finalist at the Broadcom Masters. The program, founded and produced by the Society for Science and the Public and the Broadcom Foundation, is the nation's premier STEM competition for middle schoolers.
While headlines rage about the costs and loss associated with wildfires, Honary is working to encourage other students to pursue any interests in STEM fields of study.
"I believe that environmental engineering will be one of the most important fields of my generation, and my hope is that students will be encouraged to pursue it and have the resources to do so," said Honary. "I am really excited about the opportunity to demonstrate my solution in a larger context, in collaboration with Dr. Gregory and his team, and expect the outcomes to be instrumental in future conservation efforts."
In addition to addressing issues of climate change, he hopes to stand as an inspiration for other youths who may not have considered STEM opportunities.
The Bureau of Labor Statistics estimates that by 2030, STEM occupations will grow by 10.5%, compared with 7.5% growth in non-STEM occupations. That opens the door to innumerable careers for candidates with a strong foundation in science, technology, engineering and math.
Yet a White House study found that only 20% of high school graduates are prepared for college-level coursework in a STEM major. It also found that less than 20% of undergraduates who declare plans to major in a STEM field actually graduate with a related degree. It's a field of study that's flooded with potential but short on applicants. And the problem starts early in the educational process.
The Skyhook Foundation reports that only 33% of eighth graders are interested in STEM majors. That might stem from a lack of inspiration even earlier, in elementary school. Research supports the idea that if STEM topics aren't engaging, the vast majority of students lose interest by fifth grade. This data highlights the need for access to, and emphasis on, STEM-based education starting early on.
Fortunately for the wildlife and human population, Honary is one of the few who are passionate about and inspired by STEM from a young age. When he's not actively working to save the planet, Honary reports that he enjoys tennis and teaches the sport to autistic youth. He also enjoys singing and playing the guitar, as well as surfing the waves in his hometown of Newport Beach, California.
Images via Ryan Honary
Renishaw projects FY 2022 profit of up to £181M as manufacturing technology demand rises 30% – 3D Printing Industry
Insiders and analysts have made their predictions on the 3D printing trends to watch out for. Find out more in our series focused on the future of 3D printing.
UK-based engineering firm Renishaw (RSW) has revealed that the revenue generated by its Manufacturing Technologies division rose by 30% during the second half of 2021.
Renishaw's H1 2022 financials, which cover the period from June 30, 2021 to December 31, 2021, show that its manufacturing arm brought in £309 million, 30% more than the £237 million it reported in H1 2021. Over the period, the division benefited from an impressive recovery in demand from its semiconductor and electronics clientele, while its 3D printing sales also saw strong repeat business from key accounts.
In anticipation of the company's results, investors drove its share price up by 15% in the lead-up to their publication. Having met expectations with its financial performance, the company now projects that its FY 2022 pre-tax profit could rise to £181 million.
"There was growth for all product lines within our Manufacturing Technologies segment, most notably for the encoder and gauging lines," explained Sir David McMurtry, Executive Chairman of Renishaw. "The strong demand for our encoder product lines continues to be driven by increased investments in industrial automation and the semiconductor and electronics capital equipment markets."
Renishaw's H1 2022 financials
While Renishaw used to report its revenue across Metrology and Healthcare segments, it now does so in the form of Manufacturing Technologies and Analytical Instruments and Healthcare divisions, making it difficult to compare its H1 2022 figures against those it achieved in the periods prior to this reorganization.
What we do know is that the former, which includes the firm's industrial metrology, precision measurement and 3D printing offerings, was the driving force behind its H1 2022 revenue growth. During the period, Renishaw says that a spike in demand for consumer electronics and EVs led to increased interest in its gauging, magnetic and optical encoder product lines, with its measuring offering also seeing growth.
As a result, the company's Manufacturing Technologies division was able to achieve an adjusted operating profit of £81 million over the course of H1 2022, a 98% increase on the £41 million it reported in H1 2021. Although Renishaw's financials make little reference to how its 3D printing business helped contribute to this, its Chief Executive William Lee did tell analysts on its earnings call that its wider strategy of targeting key repeat customers is currently working well in this area.
Revenue generated by the firm's Analytical Instruments and Healthcare segment, on the other hand, fell from £18.3 million to £16.5 million between H1 2021 and H1 2022, due mainly to delays in shipping its spectroscopy lines to China.
Despite this, the company was still able to bring in overall revenue of £325 million in H1 2022, 27% more than the £255 million it reported during H1 2021. As a result, it has not only increased its cash balance from £215 million to £222 million over this period, but also offered shareholders a 16p-per-share interim dividend.
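The growth percentages quoted above are straightforward to sanity-check. The short Python sketch below recomputes each period-on-period figure from the £ millions reported in the article; it is purely illustrative arithmetic, not anything drawn from Renishaw's own reporting.

```python
# Recompute the growth figures reported in Renishaw's H1 2022 results.
# Inputs (in £ millions) come from the article; percentages are rounded
# to the nearest whole number, matching how the article quotes them.

def growth_pct(current: float, prior: float) -> int:
    """Return period-on-period growth as a rounded whole percentage."""
    return round(100 * (current - prior) / prior)

# Manufacturing Technologies revenue: £309m vs £237m in H1 2021
assert growth_pct(309, 237) == 30

# Manufacturing Technologies adjusted operating profit: £81m vs £41m
assert growth_pct(81, 41) == 98

# Overall revenue: £325m vs £255m
assert growth_pct(325, 255) == 27

print("All reported growth figures check out.")
```

Each reported percentage matches the underlying numbers, so the article's figures are internally consistent.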
Renishaw's profitable H1 2022
During Renishaw's H1 2022 earnings call, Lee stressed that its ambition in the 3D printing space is still to be the best hardware supplier to its clientele. To achieve this, Lee said the firm is actively working with customers in the early stages of adopting the technology, making initial parts and sometimes even hosting machines for them, until they're ready to install them at their own facilities.
In terms of new clients, the company revealed that one of its 3D printers had been deployed by Optimus3D during the quarter, to manufacture optimized titanium chainstay brackets for Angel Cycle Works bicycles. Using a RenAM 500S system, the company is said to have been able to automate the parts production process, while achieving a high level of consistency and improved quantities.
Renishaw also introduced its new RenAM 500 Flex 3D printers at Formnext in H1 2022, complete with simplified powder-handling systems designed to let users change materials more easily. Although the machines were launched too late in the period to have had a meaningful impact on the company's financials, their sales performance is likely to be vital to the future success of its 3D printing division.
More broadly, when it comes to the reasons behind its impressive H1 revenue figures, it's worth noting how Renishaw moved to insulate its business from the impact of Brexit as well. In February 2021, the firm opted to expand its facilities and stock levels at its EU offices, while making some of them independent subsidiaries, enabling it to avoid some of the supply chain disruption seen elsewhere in the UK.
3D printing ready for take-off?
Renishaw has made some bold predictions ahead of H2, forecasting revenue for the 2022 financial year of £650-690 million and a pre-tax profit of £157-181 million. If realized, these figures would represent annual rises of 22% and 4% respectively.
In a statement issued alongside its figures, the firm said that it remains confident in its long-term prospects due to its strong financial position, product pipeline and high-value manufacturing relevance, while McMurtry added on its earnings call that 3D printing remains an important part of achieving its goals moving forward, particularly in the world of small-batch production.
"The future is the price per part, and when it comes down, the potential for additive manufacturing is enormous," added McMurtry. "There's no tool that needs to be sharpened. There are a lot of advantages, and when the price per part comes down really rapidly, it will really take off in the future, in my view."
To stay up to date with the latest 3D printing news, don't forget to subscribe to the 3D Printing Industry newsletter, follow us on Twitter or like our page on Facebook.
For a deeper dive into additive manufacturing, you can now subscribe to our YouTube channel, featuring discussion, debriefs and shots of 3D printing in action.
Are you looking for a job in the additive manufacturing industry? Visit 3D Printing Jobs for a selection of roles in the industry.
Featured image shows a row of Renishaw's new 500Q Flex 3D printers. Photo via Renishaw.
Year three of COVID reveals technology’s limits – CatholicPhilly.com
By Father Eric J. Banecker | February 4, 2022
As we enter the third year of the pandemic, there is an odd disconnect. Some are debating the when and how of moving out of the emergency phase. Others, irresponsibly, never entered the emergency phase at all. Most of us, of course, are somewhere in between, trying to make sense of a flurry of recommendations, variants and information.
In the process, we have reached a fascinating point in the way this pandemic has affected our relationship with technology. For the first year of the pandemic, science and technology represented all that was right with the world: nurses working past exhaustion to care for COVID patients; wise old figures explaining steps we could take to keep ourselves and others healthy; front-line heroes making sure masks, gloves and tests were properly manufactured and distributed.
It also represented major social change (Zoom means we can all work from home forever!), salvation from loneliness (Zoom happy hour!), a new way of fulfilling religious obligations (livestream Mass!), and then, finally, escape from the pandemic itself (mRNA vaccines!).
But something happened on the way to this bright future promised by technology. It turns out that some jobs simply cannot be done from our home offices, and many of those jobs are lower-wage jobs.
Suddenly, those wise old figures became experts who, we were told, were either trying to rob us of our freedom or had to be followed with extreme scrupulosity. Some of the front-line heroes were fired from their jobs for not getting a vaccine. Others watched loved ones die because they didn't get a vaccine.
Zoom lost its charm, Twitter became snarkier and angrier, and Facebook, already under fire, became Meta and tried to convince us that our lives would be lived online from now on.
This disenchantment must be behind the many recent articles, from very different kinds of publications, lamenting the various ways technology (and the corporations that sell it to us) has negatively affected our society.
Dear friends, we must live in the real. We are not avatars, and the internet is not a home, office, nation or church. As we enter year three of the pandemic, we must recognize the inherent limits of science and technology. Acknowledging limits, of course, doesn't mean denying reality. Indeed, the human mind has come to understand many aspects of reality precisely because reality is intelligible. And when that knowledge is applied in a way that helps to sustain and promote life, that is a great thing.
So yes, mRNA vaccines are a very good thing. The digitization of certain aspects of life and work can open up possibilities of interaction and collaboration heretofore impossible. Just as things we don't even think of as technology (the printing press, advances in agriculture and animal husbandry) transformed previous eras but are considered analog today, so many aspects of life we consider revolutionary will probably be considered ancient in a few hundred years.
The essential criterion for any kind of technology, whether chemical, biological or electronic, is simply this: does it promote the flourishing of the human person and the human community or not? The answer for most forms of technology will be "well, it depends."
Smartphones can help us listen to the Bible in a Year podcast, or they can be devices for the easy access and distribution of pornography. (In fact, they are both at the same time.) Social media can be a tool to share ideas and meet people, or it can be a fantasyland of arguments designed to make us addicted and depressed at the same time. From major research laboratories can come advances in the treatment and prevention of COVID-19 and also experiments with human chimeras and cloning.
The choice about whether or not to adopt a certain technology cannot be left to those in board rooms or even, sadly, in many ethics committee meetings. It must be left to the actual, real communities that make up our lives: families, legislatures, voluntary organizations and the church, through the magisterium, to discern advances in technology and determine whether they truly respond to our needs or simply create new ones.
The truly bright, promising future is one which respects the world as created by God. In such a world, with God at the center, the human person is lifted up by scientific advances rather than brought down. The choice lies not in the atoms and semiconductors; rather, it lies in the hearts of mothers and fathers, priests and politicians, doctors, nurses, lawyers and scientists.
Catholics have a unique opportunity to extol the wonders of science while also warning of its limitations. We can use the means of digital communication to proclaim the Gospel, while also directing people to real, live experiences of worship and communal life.
May we be guided by faith and reason in fulfilling Christ's commission to be the salt of the earth and the light of the world for our day.