How wearable AI could help you recover from covid – MIT Technology Review

The Illinois program gives people recovering from covid-19 a take-home kit that includes a pulse oximeter, a disposable Bluetooth-enabled sensor patch, and a paired smartphone. The software takes data from the wearable patch and uses machine learning to develop a profile of each person's vital signs. The monitoring system alerts clinicians remotely when a patient's vitals, such as heart rate, shift away from their usual levels.

Typically, patients recovering from covid might get sent home with a pulse oximeter. PhysIQ's developers say their system is far more sensitive because it uses AI to learn each patient's body, making it much more likely to anticipate important changes.

"It's an enormous benefit," says Terry Vanden Hoek, the chief medical officer and head of emergency medicine at University of Illinois Health, which is hosting the pilot. Working with covid cases is hard, he says: "When you work in the emergency department it's sad to see patients who waited too long to come in for help. They would require intensive care on a ventilator. You couldn't help but ask, 'If we could have warned them four days before, could we have prevented all this?'"

Like Angela Mitchell, most of the study participants are African-American. Another large group is Latino. Many are also living with risk factors such as diabetes, obesity, hypertension, or lung conditions that can complicate covid-19 recovery. Mitchell, for example, has diabetes, hypertension, and asthma.

African-American and Latino communities have been hardest hit by the pandemic in Chicago and across the country. Many are essential workers or live in high-density, multigenerational housing.

For example, there are 11 people in Mitchell's house, including her husband, three daughters, and six grandchildren. "I do everything with my family. We even share covid-19 together!" she says with a laugh. Two of her daughters tested positive in March 2020, followed by her husband, before Mitchell herself.

Although African-Americans are only 30% of Chicago's population, they made up about 70% of the city's earliest covid-19 cases. That percentage has declined, but African-Americans recovering from covid-19 still die at rates two to three times those for whites, and vaccination drives have been less successful at reaching this community. The PhysIQ system could help improve survival rates, the study's researchers say, by sending patients to the ER before it's too late, just as it did with Mitchell.

PhysIQ founder Gary Conkright has previous experience with remote monitoring, but not in people. In the mid-1990s, he developed an early artificial-intelligence startup called Smart Signal with the University of Chicago. The company used machine learning to remotely monitor the performance of equipment in jet engines and nuclear power plants.

"Our technology is very good at detecting subtle changes that are the earliest predictors of a problem," says Conkright. "We detected problems in jet engines before GE, Pratt & Whitney, and Rolls-Royce because we developed a personalized model for each engine."

Smart Signal was acquired by General Electric, but Conkright retained the right to apply the algorithm to the human body. At the time, his mother was living with COPD and was rushed to intensive care several times, he says. The entrepreneur wondered if he could remotely monitor her recovery by adapting his existing AI system. The result: PhysIQ and the algorithms now used to monitor people with heart disease, COPD, and covid-19.

Its power, Conkright says, lies in its ability to create a unique baseline for each patient (a snapshot of that person's norm) and then detect exceedingly small changes that might cause concern.

The algorithms need only about 36 hours to create a profile for each person.

"The system gets to know how you are looking in your everyday life," says Vanden Hoek. "You may be breathing faster, your activity level is falling, or your heart rate is different than the baseline. The advanced practice provider can look at those alerts and decide to call that person to check in." If there are concerns, such as potential heart or respiratory failure, he says, they can be referred to a physician or even urgent care or the emergency department.

In the pilot, clinicians monitor the data streams around the clock. The system alerts medical staff when a participant's condition changes even slightly, for example, if their heart rate differs from what it normally is at that time of day.
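The core idea, a per-patient baseline plus deviation alerts, can be sketched in a few lines. This is a simplification, not PhysIQ's actual model, and the heart-rate numbers are invented for illustration:

```python
from statistics import mean, stdev

def build_baseline(readings):
    """Summarize a patient's calibration-window vitals as (mean, spread)."""
    return mean(readings), stdev(readings)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a reading that strays too far from this patient's own norm."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Invented resting heart rates from a ~36-hour calibration window
calibration = [72, 75, 70, 74, 73, 71, 76, 74, 72, 73]
baseline = build_baseline(calibration)

print(is_anomalous(74, baseline))   # -> False: within this patient's normal range
print(is_anomalous(110, baseline))  # -> True: a large shift worth a clinician's attention
```

A real system models many vitals jointly and accounts for time of day and activity, but the principle of alerting on deviation from a personal baseline rather than a population norm is the same.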

Read the rest here:

How wearable AI could help you recover from covid - MIT Technology Review

PERSPECTIVE: Why Strong Artificial Intelligence Weapons Should Be Considered WMD – Homeland Security Today – HSToday

The concept of strong Artificial Intelligence (AI), or AI that is cognitively equivalent to (or better than) a human in all areas of intelligence, is a common science fiction trope.[1] From HAL's adversarial relationship with Dave in Stanley Kubrick's film 2001: A Space Odyssey[2] to the war-ravaged apocalypse of James Cameron's Terminator[3] franchise, Hollywood has vividly imagined what a dystopian future with superintelligent machines could look like and what the ultimate outcome for humanity might be. While I would not argue that the invention of superintelligent machines will inevitably lead to our Schwarzenegger-style destruction, rapid advances in AI and machine learning have raised the specter of strong AI instantiation within a lifetime,[4] and this requires serious consideration. It is becoming increasingly important that we have a real conversation about strong AI before it becomes an existential issue, particularly within the context of decision making for kinetic autonomous weapons and other military systems that can result in a lethal outcome. From these discussions, appropriate global norms and international laws should be established to prevent the proliferation and use of strong AI systems for kinetic operations.

With the invention of almost every new technology, changes to ethical norms surrounding its appropriate use lag significantly behind proliferation. Consider social media as an example. We imagined that social media platforms would bring people together and facilitate greater communication and community, yet the reality has become significantly less sanguine.[5] Instead of bringing people together, social media has deepened social fissures and enabled the proliferation of disinformation at a virulent rate. It has torn families apart, caused greater divide, and at times transformed the very definition of truth.[6] Only now are we considering ethical restraints on social media to prevent the poison from spreading.[7] It is highly probable that any technology we create will ultimately reflect the darker parts of our nature, unless we create ethical limits before the technology becomes ubiquitous. It would be foolish to believe that AI would be an exception to this rule. This becomes especially important when considering strong AI designed for warfare, which is distinguishable from other forms of artificial intelligence.

To fully examine the implications of strong AI, we need to understand how it differs from current AI technologies, which are what we would consider weak AI.[8] Your smartphone's ability to recognize images of your face is an example of weak AI. For a military example, an algorithm that can recognize a tank in an aerial video would be considered a weak AI system.[9] It can identify and label tanks, but it does not really know what a tank is or have any cognizance of how it relates to a tank. In contrast, a strong AI would be capable of the same task (as well as parallel tasks) with human-level proficiency (or beyond), but with an awareness of its own mind. This makes strong AI a more unpredictable threat. Not only would strong AI be highly proficient at rapidly processing battlefield data for pre- and post-strike decision making, but it would do so with an awareness of itself and its own motives, whatever they might be. Proliferation of weak AI systems for military applications is already becoming a significant issue. As an anecdotal example, Vladimir Putin has stated that the nation that leads in AI "will be the ruler of the world."[10] Imagine what the outcome could be if military AI systems had their own motives. This would likely involve catastrophic failure modes beyond what could be realized from weak AI systems. Thus, military applications of strong AI deserve their own consideration.

At this point, one may be tempted to dismiss strong AI as highly improbable and therefore not worth considering. But given the rapid pace of AI technology development, it is a safe assumption that, while the precise probability of instantiating strong AI is unknown,[11] it is greater than zero. What matters in this case is not the probability of strong AI instantiation, but the severity of a realized risk. To understand this, one need only consider how animals of greater intelligence typically regard animals of lesser intelligence. Ponder this scenario: when we have ants in our garden, does their well-being ever cross our minds? From our perspective, the moral value of an insect is insignificant in relation to our goals, so we would not hesitate to obliterate them simply for eating our tomatoes. Now imagine that we encountered a significantly more intelligent AI: how might it regard us in relation to its goals, whatever they might be? This meeting could yield an existential crisis if our existence hinders the AI's goal achievement; even this low-probability event could have a catastrophic outcome if it became a reality.

Understanding what might motivate a strong AI could provide some insight into how it might relate to us in such a situation. Human motivation is an evolved phenomenon. Everything that drives us (self-preservation, hunger, sex, desire for community, accumulation of resources, etc.) exists to facilitate our survival and that of our kin.[12] Even higher-order motives, like self-actualization, can be linked to the more fundamental goal of individual and species survival when viewed through the lens of evolutionary psychology.[13] However, a strong AI would not necessarily have evolved. It may simply be instantiated in situ as software or hardware. In this case, no evolutionary force would have existed over eons to generate a motivational framework analogous to what we, as humans, experience. In an instantiated strong AI, it might be prudent to assume that the AI's primary motive would be to achieve whatever goal it was initially programmed to pursue. Thus, self-preservation might not be the primary motivating factor. However, the AI would probably recognize that its continued existence is necessary for it to achieve its primary goal, so self-preservation could become a meaningful sub-goal.[14] Other sub-goals may also exist, some of which would not be obvious to humans in the context of how we understand motivation. The process by which the AI generates or achieves sub-goals might be significantly different from what humans would expect.

The existence of AI sub-goals that do not follow the patterns of human motivation implies the existence of a strong AI creative process that may be completely alien to us. One only needs to look at AI-generated art to see that AI creativity can manifest itself in often grotesque ways that are vastly different from what a human might expect.[15] While weird AI artistry hardly poses an existential threat to humanity, it illustrates the concept of "perverse instantiation,"[16] where the AI achieves a goal, but in an unexpected and potentially malignant way. As a military example, imagine a strong AI whose primary goal is to degrade and destroy the adversary. As we have seen, AI creativity can be unbounded in its weirdness, as its thought processes are unlike those of any evolved intelligence. This AI might find a creative and completely unforeseen way to achieve its primary goal that leads to significant collateral damage against non-combatants, such as innocent civilians. Taking this analogy to a darker level, the AI might determine that a useful sub-goal would be to remove its military handlers from the equation. Perhaps they act as "man in the middle" gatekeepers in effecting the AI's will, and the AI determines that this arrangement creates unacceptable inefficiencies. In this perverse instantiation, the AI achieves its goal of destroying the enemy, but in a grotesque way: by killing its overseers.

The next obvious question is: how could we contain a strong AI in a way that would prevent malignant failure? The obvious solution might be to engineer a deontological ethic, an Asimovian set of rules to limit the AI's behavior.[17] Considering a strong AI's tendency toward unpredictable creativity in methods of goal achievement, encoding an exhaustive set of rules would pose a titanic challenge. Additionally, deontological ethics is often subject to deontological failure, e.g., what happens when rules contradict one another? A classic example is the trolley problem: if an AI is not allowed to kill a human, but the only two possible choices involve the death of humans, which choice does it make?[18] This is already an issue in weak AI, specifically with self-driving cars.[19] Does the vehicle run over a small child who crosses the road, or crash and kill its occupants, if those are the only possible choices? If deontological ethics are an imperfect option, perhaps AI disembodiment would be a viable solution. In this scenario, the AI would lack a means to directly interact with its environment, acting as a sort of "oracle in a box."[20] The AI would advise its human handlers, who would act as ethical gatekeepers in effecting the AI's will. Upon cursory examination this seems plausible, but we have already established that a strong AI might determine that a man-in-the-middle arrangement degrades its ability to achieve its primary goal, so what would prevent the AI from coercing its handlers into enabling its escape? In our hubris, we would like to believe that we could not be outsmarted by a disembodied AI, but a being that is more intelligent than us could outsmart us just as easily as a savvy adult could a naïve child.

While a single strong AI instantiation could pose a significant risk of malignant failure, imagine the impact that the proliferation of strong AI military systems might have on how we approach war. Our adversaries are earnestly exploring AI for military applications; thus, it is likely that strong AI will become a reality and proliferate.[21] The real problem becomes not how to prevent malignant failure of a single strong AI, but how to address the complex adaptive system of multiple strong AIs fighting against all logical actors, none of which exhibits reasonably predictable behavior.[22] To further complicate matters, ethical decision making is influenced by culture, and our adversaries might have different ideas as to which strong AI behaviors are acceptable during war, and which are not.

To avoid this potentially disastrous outcome, I propose the following for further discussion, with the goal of establishing appropriate global norms and, eventually, international laws that ban strong AI decision making for kinetic offensive operations: strong AI-based lethal autonomous weapons should be considered weapons of mass destruction. This may be the best way to prevent the complex, unpredictable destruction that could arise from multiple strong AI systems intent on killing the enemy or unnecessarily wreaking havoc on critical infrastructure, which may have negative secondary and tertiary effects impacting countless innocent non-combatants. Inevitably, there may be rogue or non-signatory actors who develop weaponized strong AI systems despite international norms. Any strategy that addresses strong AI should also consider this potential outcome.

Several years ago, seriously discussing strong AI might have gotten you laughed out of the room. Today, as AI continues to advance and our adversaries continue to aggressively militarize AI technologies, it is imperative that the United States consider a defense strategy specifically addressing the possibility of a strong AI instantiation. Any use of strong AI on the battlefield should be limited to non-kinetic operations to reduce the impact of malignant failure. This standard should be reflected in multilateral treaty agreements or protocols to prevent strong AI misuse and the inevitable unpredictability of adversarial strong AI systems interacting with each other in complex and possibly horrific ways. This may be a sufficient way to ensure that weaponized strong AI does not cause cataclysmic devastation.

The author is responsible for the content of this article. The views expressed do not reflect the official policy or position of the National Intelligence University, the Department of Defense, the U.S. Intelligence Community, or the U.S. Government.


Original post:

PERSPECTIVE: Why Strong Artificial Intelligence Weapons Should Be Considered WMD - Homeland Security Today - HSToday

Artificial Intelligence is Key: Why the Transition to Our Future Energy System Needs AI – POWER magazine

On any given day, the electric power industry's operations are complex and its responsibilities vast. As the industry continues to play a critical role in meeting global climate goals, it must simultaneously support demand increases, surges in smart appliance adoption, and the expansion of decentralized operating systems. And that just scratches the surface.

Behind the scenes, there's the power grid operator, whose role is to monitor the electricity network 24 hours per day, 365 days per year. As a larger number of lower-capacity systems (such as renewables) come online and advanced network components are integrated into the grid, generation becomes exponentially more complex, decentralized, and variable, stretching control room operators to their limits.

More locally, building owners and controllers (Figure 1) are being challenged to deploy grid-interactive intelligent elements that can flexibly participate in grid level operations to economically enhance grid resiliency (while also saving money for the building owner).

Outside those buildings, electric utilities collect millions of images of their transmission and distribution (T&D) infrastructure to assess equipment health and support reliability investments. But the ability to collect imagery has outpaced utility staffs' ability to analyze and evaluate it.

On the generation side, operators are increasingly pressured by market changes to decrease operations and maintenance (O&M) costs while maintaining, and if possible increasing, production revenue.

So how best to manage these current and future challenges? The solution may lie within another industry: artificial intelligence.

"If you step back for a moment you realize there are two (separate) trillion-dollar industries, the energy industry and the data and information industry, which are now intersecting in a way they never have before," said Arun Majumdar, Stanford University's Jay Precourt Provostial Chair Professor of Mechanical Engineering, the founding director of ARPA-E, and a member of the EPRI Board of Directors. Majumdar spoke at an Electric Power Research Institute (EPRI) AI and Electric Power Roundtable discussion earlier this year. "The people who focus on data do not generally have expertise regarding the electricity industry and vice versa. We have entities like EPRI trying to connect the two, and this is of enormous value."

Take the power grid operator challenge, for example. EPRI is exploring an AI reinforcement learning (RL) agent that can act as a continuously learning, algorithm-based autopilot to help operators optimize performance. The goal is not to replace operators, who are essential for transmission operations, but rather to develop tools that augment their decision-making ability using RL.
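The reinforcement-learning idea can be illustrated with a toy sketch. This is not EPRI's agent; the environment, states, actions, and rewards below are invented for illustration. The agent repeatedly observes a discretized grid state, picks an action, and nudges its value estimates toward the reward it receives:

```python
import random

random.seed(0)

N_STATES = 5                  # hypothetical discretized line-loading levels, 0 (low) to 4 (high)
ACTIONS = ["hold", "reroute"]
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Toy environment: rerouting relieves high loading; holding is fine when loading is low."""
    if action == "reroute":
        return max(0, state - 1), (1.0 if state >= 3 else -0.5)
    next_state = min(N_STATES - 1, state + random.choice([0, 1]))
    return next_state, (0.5 if state < 3 else -1.0)

state = 0
for _ in range(2000):
    # Epsilon-greedy action selection: mostly exploit, occasionally explore
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# After training, the learned policy should recommend rerouting when loading is high
print(max(ACTIONS, key=lambda a: Q[(4, a)]))
```

A production agent would operate on continuous measurements and a high-fidelity grid simulator rather than a five-state table, but the learn-from-reward loop is the same, which is why such an agent can keep improving as an "autopilot" alongside human operators.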

Turning to building operators, recent advances in building controls technology, enabled by the model predictive control (MPC) framework, have focused on minimizing operating costs or energy use, or maximizing occupant comfort. But most commercial building MPC case studies have been abandoned because they can be labor-intensive and costly to customize and maintain.

EPRI is developing models and tools that will enable operators to enhance their responsiveness and flexibility to utility grid signals in the most cost-effective way. Coupled with the digitization of building control systems, AI predictive models will provide utilities and customers greater affordability, resiliency, environmental performance, and reliability.

In late May, EPRI brought more than 100 organizations together across the two industries in a Reverse Pitch event where electric power utilities presented their biggest challenges, and AI companies responded with potential solutions.

"We want to help increase adoption of proven AI technologies, and that means we need to match solutions with the needs and issues utilities have," said Heather Feldman, EPRI Innovation Director for the nuclear energy sector. "Utilities sharing operating experiences, use cases, and, just as importantly, their data across the community we're building with our AI and EPRI initiatives will enable the acceleration of AI technology deployment."

Feldman hosted the last panel discussion at the Reverse Pitch event, where speakers from Stanford University, Massachusetts Institute of Technology (MIT), Idaho National Lab (INL), SFL Scientific and EPRI discussed the future of AI (Figure 2) for electric power.

"The utility sector by nature is a risk-averse industry, but it's time to think about how to adapt their business models to embrace new AI technologies," said Liang Min, Managing Director of the Bits & Watts Initiative at Stanford University. "If utilities dedicate resources to identifying the right use cases and conducting pilot programs, I think they will see benefits, and it will eventually lead to enterprise-wide adoption."

"Validating different AI applications will help end-users and regulators determine their effectiveness without eroding safety and reliability," said Idaho National Lab Nuclear National Technical Director Craig Primer. "We need to overcome those barriers to drive adoption and reduce the manual approaches used today."

In 2020, a large California investor-owned utility and EPRI member inspected 105,000 distribution and 20,500 transmission structures. Conservative estimates gave the utility 750,000 images for staff to review and evaluate. That's about 3,500 person-hours and more than $350,000 in costs at a standard utility staff rate for inspection review work.
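Those figures imply a review rate of roughly 214 images per person-hour at about $100 per person-hour; both rates are inferred from the article's numbers rather than stated by EPRI. A quick back-of-the-envelope check:

```python
images = 750_000
review_rate = 214      # images per person-hour (inferred: 750,000 / ~3,500 hours)
staff_rate = 100.0     # dollars per person-hour (inferred: ~$350,000 / ~3,500 hours)

person_hours = images / review_rate
cost = person_hours * staff_rate

print(round(person_hours))  # -> 3505, i.e. "about 3,500 person-hours"
print(round(cost))          # -> 350467, i.e. "more than $350,000"
```

The point of the arithmetic: review cost scales linearly with image volume, so as drone imagery multiplies, so does the bill unless AI takes over the first-pass evaluation.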

With the wider adoption of drone technology in the very near future, significantly more images will be available than ever before. However, without the augmented evaluation capabilities offered by AI, evaluation costs will increase correspondingly. Inspections are complex tasks, and using drones makes them more complicated still.

EPRI is working with utilities and the AI community to build a foundation for machine learning to facilitate models that can detect damaged T&D assets (Figure 3) and assist staff in more efficiently managing the volume of images. But just as critically, it's also taking on the tasks of collecting, anonymizing, labeling, and sharing imagery for model development. These data sets, along with a utility consensus taxonomy and data labeling process, are needed to achieve the desired improvements in efficiency, predictive modeling, damage identification, and repair/replacement of equipment.

During the Reverse Pitch event, Boston-based SFL Scientific, an AI consulting company, highlighted the significant technical and operational challenges associated with development of end-to-end AI applications, including validating machine and deep learning models, optimizing their performance long-term, and integrating the output into workflows and production pipelines.

"AI is hard; it's not easy," said Michael Segala, CEO of SFL Scientific. "Introducing AI is essentially breaking people's workflow, injecting risk into their process, which can break down adoption. This may be significantly more difficult for utilities, based on the regulations that are set and the consequences of getting things wrong. But there's a great ecosystem, like the folks here (at the Reverse Pitch), that will help with the journey and be a part of that adoption, so utilities don't fail and risks are reduced."

Now there's a new layer to consider: the increasing urgency to protect against threats to our energy infrastructure, recently heightened following the May cyberattack on one of the largest U.S. fuel pipelines.

"As physical threats to energy grids increase, connecting measures to ensure grid readiness, energy security, and resilience becomes critical," said Myrna Bittner, founder and CEO of RUNWITHIT (RWI) Synthetics, an AI-based modelling company. "Add on the pressures of electrification, decentralization, climate change, and cyberattacks, and the demand grows for even more adaptive scenario planning, mitigating technology, and education."

Bittner presented RWI's Single Synthetic Environment modeling approach at the EPRI Reverse Pitch event. These geospatial environments include hyper-localized models of people and businesses, infrastructure, technology, and policies, and then enable future scenarios to play forward.

On the energy generation side, EPRI continues to explore machine learning models to reduce O&M costs. One project that has advanced rapidly involves wind turbine component maintenance. EPRI research shows the current gearbox cumulative failure rate over 20 years of operation ranges from 30% (best case) to 70% (worst case). When a component like a gearbox prematurely fails, O&M costs increase and production revenue is lost. A full gearbox replacement may cost more than $350,000.

EPRI is researching and testing a physics-based machine-learning hybrid model that can identify gearbox damage in its early stages and extend the component's life. If a damaged bearing within a gearbox is identified early, the repair may cost only around $45,000, a savings of nearly 90%.
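The savings claim checks out arithmetically, using the two repair costs the article cites:

```python
full_replacement = 350_000  # cost of a full gearbox replacement after late detection
early_repair = 45_000       # cost of a bearing repair after early detection

savings = 1 - early_repair / full_replacement
print(f"{savings:.1%}")     # -> 87.1%, consistent with the article's "nearly 90%"
```

This is the economic argument for predictive maintenance in one line: the model only has to catch a fraction of incipient bearing faults early to pay for itself many times over.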

"These projects all demonstrate real solutions that are deployed and are showing real results and increases in efficiencies. Many are set to be further deployed to enable the global energy system's transition. AI is at a point where I believe the technology has advanced to support scaling up adoption. Meanwhile, we know that society depends on electric power 24/7 to run everything from health care and emergency resources to communications infrastructure and, in today's situation, working from our homes," said Neil Wilmshurst, Senior Vice President of EPRI's Energy System Resources. "Reliability and resilience have never been more essential at a time when we're also making a critical energy systems transition to meet global climate goals and demand needs. AI must be a tool in the toolbox, and the time is now, not tomorrow, to accelerate those applications."

Jeremy Renshaw is Senior Program Manager, Artificial Intelligence, at the Electric Power Research Institute (EPRI).

See more here:

Artificial Intelligence is Key: Why the Transition to Our Future Energy System Needs AI - POWER magazine

Is the oil & gas sector seeing the beginnings of an AI investment boom? – Offshore Technology

The oil & gas industry is seeing an increase in artificial intelligence (AI) investment across several key metrics, according to an analysis of GlobalData data.

AI is gaining an increasing presence across multiple sectors, with top companies completing more AI deals, hiring for more AI roles and mentioning it more frequently in company reports at the start of 2021.

GlobalData's thematic approach to sector activity seeks to group key company information on hiring, deals, patents, and more by topic to see which companies are best placed to weather the disruptions coming to their industries.

These themes, of which AI is one, are best thought of as any issue that keeps a CEO awake at night, and by tracking them it becomes possible to ascertain which companies are leading the way on specific issues and which are dragging their heels.

According to this method, Shell, Gazprom, and Rosneft are classed as dominant players in AI in the sector, with an additional seven companies classified as leaders. Nine companies are considered vulnerable due to a lack of investment in AI.

One area in which there has been a decrease in AI investment among oil & gas companies is the number of deals. GlobalData figures show that there were 11 AI deals in oil & gas in the first quarter of 2019. By the first quarter of 2021, that number had fallen to one.

Hiring patterns within the oil & gas sector as a whole point toward an increase in the level of attention being paid to AI-related roles. There was a monthly average of 478 actively advertised open AI roles within the industry in April this year, up from a monthly average of 333 in December 2020.

It is also apparent from an analysis of keyword mentions in financial filings that AI is occupying the minds of oil & gas companies to an increasing extent.

There have been 390 mentions of AI across the filings of the biggest oil & gas companies so far in 2021, equating to 9.9% of all tech theme mentions. This figure represents an increase compared to 2016, when AI represented 7.5% of the tech theme mentions in company filings.

AI is increasingly fueling innovation in the oil & gas sector, particularly in the past six years. There were, on average, 61 oil & gas patents related to AI granted each year from 2000 to 2014. That figure has risen to an average of 131 patents since then, reaching 245 in 2020.


Read the original:

Is the oil & gas sector seeing the beginnings of an AI investment boom? - Offshore Technology

Daily Crunch: A crowded market for exits and acquisitions forecasts a hot AI summer – TechCrunch

To get a roundup of TechCrunch's biggest and most important stories delivered to your inbox every day at 3 p.m. PDT, subscribe here.

Hello and welcome to Daily Crunch for June 9, 2021. Today was TC Sessions: Mobility, a rollicking good time and one that we hope you enjoyed. Looking ahead, we're starting to announce some speakers for Disrupt, including Accel's Arun Mathew. Mark your calendars, Disrupt is going to be epic this year. Alex

To round out our startup news today, two things: The first is that Superhuman CEO Rahul Vohra and his buddy Todd Goldberg, the founder of Eventjoy, have formalized their investing partnership in a new fund called Todd and Rahul's Angel Fund. That name has big Bill and Ted's Excellent Adventure vibes, albeit with a larger, $24 million budget.

And fresh on the heels of the Equity Podcast diving into hormonal health and the huge startup opportunity that it presents, there's a new startup working on PCOS on the market. Check out our look at its early form.

SEO expert and consultant Eli Schwartz will join Managing Editor Danny Crichton tomorrow to share his advice for everyone who gets nervous each time Google updates its algorithm.

To set a foundation for tomorrow's chat on Twitter Spaces, Eli shared a guest post that should deflate some myths. For starters: a drop in search traffic isn't necessarily hurting you.

Rather than chasing the algorithm, he advises companies that rely on organic search results to focus on the user experience: "If you are helpful to the user, you have nothing to fear."

Just like you release product updates based on feedback and analytics, Google is improving its products to offer a better user experience.

"If you see a drop, in many cases, your site might not have even lost real traffic," says Eli. "Often, the losses represent only impressions that were already not converting into clicks."

Tomorrow's discussion is the latest in a series of chats with top Extra Crunch guest contributors. If you've worked with a talented growth marketer, please share a brief recommendation.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

TechCrunch is back with our next category for our Experts project: We're reaching out to startup founders to tell us who they turn to when they want the most up-to-date growth marketing practices.

Fill out the survey here.

We're excited to share the results we collect in the form of a database. The more responses we receive from our readers, the more robust our editorial coverage will be moving forward. To learn more, visit techcrunch.com/experts.

Join us for a conversation tomorrow at 12:30 p.m. PDT / 3:30 p.m. EDT on Twitter Spaces. Our own Danny Crichton will be discussing growth marketer Eli Schwartz's guest column "Don't panic: Algorithm updates aren't the end of the world for SEO managers." Bring your questions and comments!


Argo AI’s CEO says IPO expected within next year – Reuters

Self-driving startup Argo AI, backed by Ford Motor Co (F.N) and Volkswagen AG (VOWG_p.DE), expects to pursue a public listing within the next year, founder and CEO Bryan Salesky said on Wednesday.

"So we're actively fundraising and are going out this summer to raise a private round initially," Salesky said at The Information's Autonomous Vehicles Summit. "And then we're looking forward to an IPO within the next year."

"The raise this year will definitely provide capital that gives us plenty of runway and will help us continue to scale out," he said, adding that autonomous driving is a capital-intensive business.

Ford and Volkswagen each hold a 42% ownership interest in Argo AI.

Last year, Germany's Volkswagen closed its $2.6 billion investment in Pittsburgh-based Argo AI, which valued the company at just over $7 billion.



University of Illinois and IBM Researching AI, Quantum Tech – Government Technology

The University of Illinois Urbana-Champaign Grainger College of Engineering is partnering with tech giant IBM to bolster the college's research and workforce development efforts in quantum information technology, artificial intelligence and environmental sustainability.

According to a news release from the university, the 10-year, $200 million partnership will fund the future construction of a new Discovery Accelerator Institute, where university and IBM researchers will collaborate on solving global challenges with emerging technologies such as AI.

Areas of study will include applying AI to sustainable energy, new materials for CO2 capture and conversion, and cloud computing and security. Researchers will also explore ways to improve quantum information systems and quantum computing, which applies the rules of quantum mechanics to make computations much faster than most computers in use today.

Bashir said the partnership will allow IBM and university researchers to work toward developing the technology of tomorrow, with sustainability in mind.

"We're looking for a new way to really bridge that gap [between academia and the tech industry] in a much more intimate way and expand our collective research and educational impact," he said. "In higher ed and industry, we need to come together to solve grand challenges to keep a sustainable planet, to provide high-quality jobs and develop a new economy."

"We had already been working with them in the AI space," Welser said. "We realized we could take what we're doing here with AI, expand it to do some of the work in the hybrid cloud space, and think about what we do with that by advancing these base technologies."

"It's also using this as a test bed for what we call discovery acceleration, which is using technologies to discover new materials and new science that can help with societal problems," he continued. "In this case, we're focusing on carbon capture, carbon accounting and climate change."

As part of the initiative, Bashir said the company and faculty will team up to develop nondegree tech certification programs and professional development courses in IT-related fields. He said the goal will be to feed IT talent into the workforce, given the national shortage of tech professionals in artificial intelligence, data science and quantum computing.

"Working with IBM, they're interested in hiring the workforce of tomorrow. Building that talent from early in the pipeline and diversifying the STEM talent pipeline is something we want to work on together," he said, adding that the partnership also aims to diversify the IT talent pool by bringing students of color and women into emerging fields like quantum computing.

Welser said the Discovery Accelerator Institute will complement a related company initiative: the IBM Skills Academy, a training certification program that provides over 330 courses relating to artificial intelligence, cloud computing, blockchain, data science and quantum computing.

"We have courses that help train professors in specific areas of these skills, and they can use those materials in their coursework and create their own accredited courses," he said. "We've realized there really is a need for having these kinds of courses that don't necessarily go into a full university [degree] but could be more certifications for students, people who want to learn about an area and get a certain level of certification."

In addition to research and course development efforts, Bashir noted that the institute will give students close access to one of the world's largest tech employers.

"We believe we can work together to prepare more talent through our educational pipeline, which IBM can have firsthand access to," he said. "If they are working together with us, then they get to know those students."

The Illinois initiative comes two months after the tech company announced a partnership with Cleveland Clinic to study hybrid cloud, AI and quantum computing technologies to accelerate advancements in health care and life sciences. As part of that partnership, IBM plans to install its first private-sector, on-premises quantum computing system in the U.S.


How AI and mosquito sex parties can save the world – VentureBeat


Diptera.ai has raised a $3 million seed round to fight mosquitoes with mosquitoes and AI-based sex sorting.

Jerusalem-based Diptera.ai has figured out a way to use AI to fight the growing threat of mosquitoes, which are spreading malaria and viruses like Zika, dengue, and yellow fever. While the method for fighting mosquitoes has been around for decades, AI can take it to a new level and democratize what was otherwise a very costly and localized abatement effort.

Well get to the sex parties in a bit.

Diptera.ai is using computer vision and eco-friendly technology to make it easier to control mosquito populations using the sterile insect technique, which sends sterilized male mosquitoes to mate with female mosquitoes, said Diptera.ai CEO Vic Levitin, in an interview with VentureBeat.

"We think we can disrupt the $100 billion pest control market," Levitin said, noting that many other pest control methods are toxic to both humans and the environment.

Above: Mosquito larvae.

Image Credit: Diptera.ai

The company could help mitigate the death toll from mosquitoes. More than just a nuisance, they are the deadliest creatures on Earth, as they kill more than 700,000 people a year and infect hundreds of millions more with diseases. A recent book, The Mosquito by Timothy Winegard, cites estimates that mosquitoes have killed 52 billion people, nearly half of the humans who have ever lived.

Diptera.ai's technology works for a host of insects, including household and agricultural pests. The company is starting with mosquitoes, a rapidly growing problem with no effective solution to date. Due to climate change, by 2050 half of the world's population (including the U.S. and Europe) will be living among disease-spreading mosquitoes.

With its technology in the testing stage now, Diptera.ai plans to offer an affordable subscription service for what it calls a highly effective and eco-friendly biological pest control method. Most pest control methods are based on insecticides that are toxic to both humans and the environment. Despite its high effectiveness, sterilization has thus far been limited to a handful of pests because of the prohibitive cost of implementing it.

Standard control methods are losing effectiveness as mosquitoes rapidly become resistant to existing pesticides. Moreover, public opinion and regulation limit the use of toxic insecticides. As a result, people increasingly find themselves unable to enjoy the outdoors without being at risk from emerging and potentially devastating diseases.

Above: Ariel Livne, CTO of Diptera.ai, at a lab in Israel.

Image Credit: Diptera.ai

Levitin believes his company can stop mosquitoes by the billions, mainly by releasing sterile males to mate with females. "We create mosquito sex parties," he said.

Trust Ventures led the funding round, with participation from existing investors IndieBio and Fresh.fund, as well as new investors who joined the round.

Diptera.ai was started by Ariel Livne, Elly Ordan, and Levitin. In October 2020, the team graduated from the IndieBio Accelerator, and it now has 10 employees. The seed round should enable the company to finish its pilot, which could grow into a product launch.

"We've raised enough money to prove the concept," Levitin said.

At some point, the Environmental Protection Agency will likely have to approve the Diptera.ai solution.

Above: Elly Ordan of Diptera.ai inspects mosquito larvae.

Image Credit: Diptera.ai

The sterile insect technique (SIT) is a biological pest control method in which mostly government-run entities release overwhelming numbers of sterile male insects into the wild. These sterile males mate with female mosquitoes, which are the only mosquitoes that bite humans and animals. The female mosquitoes only mate once in their lifetimes, but they each lay hundreds of eggs. If they can be tricked into mating with sterile males, then they won't create offspring.

"The sterile insect technique is the most effective," Levitin said. "Mosquitoes mate once as females in their lives. If they mate with sterile males, then it suppresses the population."

This technique has been used in the U.S. to control the spread of the Mediterranean fruit fly, with billions a month being released into the wild. But it is expensive due to high production and distribution costs, and is often limited to localized control efforts.

The technique started in the 1950s in Russia and the U.S., when it was used to control the tsetse fly in Africa.

In 2017, the Debug project saw Google's Verily unit release millions of sterile mosquitoes into the area of Fresno, California, resulting in a temporary 93% suppression of the population during mosquito season, which runs from around March through October.

Above: Vic Levitin (center) is cofounder and CEO of Diptera.ai.

Image Credit: Diptera.ai

Diptera.ai's market research suggests its solution is 20 times less expensive than existing SIT methods.

For most insects, the bottleneck for SIT is sex separation. Currently, mosquitoes are sex-sorted late in their development, when they are fragile and have a limited remaining lifespan of a few days. "Shipping them is impractical," Levitin said.

Normally, implementing SIT requires building and maintaining a local mosquito factory near every release site. Diptera.ai combines computer vision, deep biology, and automation to sex-sort mosquitoes (and other insects) at the larval stage, which was previously considered impossible. This allows for centralized mass production of sterile male mosquitoes that can then be shipped to end customers for release.

"We can sex-sort them at the larva stage," said Levitin. "Larvae used to be considered asexual. Nobody tried to sex-sort them. This is where we are innovative. We can tell the sex when they are larvae. That's two weeks before they become adults. So we can produce them in mass production and then ship them across the country. This gives us economies of scale where we can offer it as a service."

Mosquitoes exist as larvae for a lot longer than they live as adults. If you can identify the males and females at this stage, then there is a lot more time to ship them to the right place in the country, and then the whole U.S. could be served by a mass-production factory that churns out sterilized mosquitoes by the billions.

Once it separates the males, Diptera.ai sterilizes them with radiation, using the equivalent of a microwave oven, except one used for sterilization purposes. The device is about the size of a pizza oven, and it's not dangerous to humans, Levitin said.

Most of the mosquitoes in the U.S. are of the Asian tiger variety (Aedes albopictus), and these mosquitoes don't travel far, making it easier to take down populations with localized efforts. By contrast, mosquitoes in Africa can fly long distances, and that makes it harder to control the population, Levitin said.

"Just like the cloud disrupted the computing industry with affordable, on-demand computing power, Diptera.ai disrupts pest control with an affordable SIT-as-a-service," Levitin said. "Instead of building and maintaining insect production factories, customers will subscribe to our service to receive shipments of sterile males ready for release."

With Diptera.ai's service, luxury resorts, residential complexes, or even homeowners should be able to afford the eradication service. It has to be a subscription because the mosquitoes will come back, year after year, if you don't take them out regularly.

"It's like the Mafia," Levitin said. "You are paying protection money to us."

By the way, this is the second Israeli startup that I've seen take up the fight against mosquitoes. Bzigo uses computer vision to find where a mosquito lands in your home, then shines a laser on it so you can zap the mosquito yourself. No matter how much Diptera.ai succeeds, I imagine there will always be a need for Bzigo's product.


Flying high with AI: Alaska Airlines uses artificial intelligence to save time, fuel and money – TechRepublic

How Alaska Airlines executed the perfect artificial intelligence use case. The company has saved 480,000 gallons of fuel in six months and reduced 4,600 tons of carbon emissions, all from using AI.

Image: Alaska Air

Given the nearly 85% failure rate of corporate artificial intelligence projects, it was a pleasure to visit with Alaska Airlines, which launched a highly successful AI system that is helping flight dispatchers. I wanted to see the "secret sauce" that made its AI project a success. Here are some tips to help your company execute AI as well as Alaska Airlines has.


Initially, the idea of overhauling flight operations control existed in concept only. "Since the idea was highly conceptual, we didn't want to oversell it to management," said Pasha Saleh, flight operations strategy and innovation director for Alaska Airlines. "Instead, we got Airspace Intelligence, our AI vendor, to visit our network centers so they could observe the problems and build that into their development process. This was well before the trial period, about 2.5 years ago."

Saleh said it was only after several trials of the AI system that his team felt ready to present a concrete business use case to management. "During that presentation, the opportunity immediately clicked," Saleh said. "They could tell this was an industry-changing platform."

Alaska cut its teeth on having to innovate flight plans and operations in harsh arctic conditions, so it was almost a natural step for Alaska to become an innovator in advancing flight operations with artificial intelligence.


"I could see a host of opportunities to improve the legacy system across the airline industry that could propel the industry into the future," Saleh said. "The first is dynamic mapping. Our Flyways system was built to offer a fully dynamic, real-time '4D' map with relevant information in one, easy-to-understand screen. The information presented includes FAA data feeds, turbulence reports and weather reports, which are all visible on a single, highly detailed map. This allows decision-makers to quickly assess the airspace. The fourth dimension is time, with the novel ability to scroll forward eight-plus hours into the future, helping to identify potential issues with weather or congestion."

The Alaska Flyways system also has built-in monitoring and predictive abilities. The system looks at all scheduled and active flights across the U.S., scanning air traffic systemically rather than focusing on a single flight. It continuously and autonomously evaluates the operational safety, air-traffic-control compliance and efficiency of an airline's planned and active flights. The predictive modeling is what allows Flyways to "look into the future," helping inform how the U.S. airspace will evolve in terms of weather, traffic constraints, airspace closures and more.


"Finally the system presents recommendations," Saleh said. "When it finds a better route around an issue like weather or turbulence, or simply a more efficient route, Flyways provides actionable recommendations to flight dispatchers. These alerts pop up onto the computer screen, and the dispatcher decides whether to accept and implement the recommended solution. In sum: The operations personnel always make the final call. Flyways is constantly learning from this."
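The human-in-the-loop pattern Saleh describes can be sketched in a few lines: the system surfaces a recommendation only when a candidate route beats the filed plan, and the dispatcher, not the software, decides whether it is flown. The route names and fuel figures below are invented for illustration and bear no relation to Flyways' actual internals.

```python
# Toy sketch of recommend-then-confirm route planning (invented data).

def better_route(current, candidates):
    """Return the candidate route with the lowest estimated fuel burn,
    or None if nothing beats the current plan."""
    best = min(candidates, key=lambda r: r["fuel_gal"])
    return best if best["fuel_gal"] < current["fuel_gal"] else None

def dispatch(recommendation, dispatcher_accepts, current):
    """The dispatcher always makes the final call on which plan is flown."""
    return recommendation if (recommendation and dispatcher_accepts) else current

current = {"name": "filed", "fuel_gal": 5200}
candidates = [
    {"name": "north of weather", "fuel_gal": 5050},
    {"name": "direct", "fuel_gal": 5400},
]

recommendation = better_route(current, candidates)   # system proposes
flown = dispatch(recommendation, True, current)      # human disposes
```

Rejecting the recommendation (passing `False`) simply leaves the filed plan in place, which mirrors the article's point that the operations personnel retain final authority.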

Saleh recalled the early days when autopilot was first introduced. "There was fear it would replace pilots," he said. "Obviously, that wasn't the case, and autopilot has allowed pilots to focus on more things of value. It was our hope that Flyways would likewise empower our dispatchers to do the same."


One step Alaska took was to immediately engage its dispatchers in the design and operation of the Flyways system. Dispatchers tested the platform for a six-month trial period and provided feedback for enhancing it. This was followed by on-site, one-on-one training and learning sessions with the Airspace Intelligence team. "The platform also has a chat feature, so our dispatchers could share their suggestions with the Airspace Intelligence team in real time," Saleh said. "Dispatchers could have an idea, and within days, the feature would be live. And because Flyways uses AI, it also learned from our dispatchers, and got better because of it."

While Flyways can speed times to decisions on route planning and other flight operations issues, humans will always have a role in route planning, and will always be the final decision-makers. "This is a tool that enhances, rather than replaces, our operations," Saleh said. Because flight dispatchers were so integrally involved with the project's development and testing, they understood its fit as a tool and how it could enhance their work.

"With the end result, I would say satisfaction is an understatement," Saleh said. "We're all blown away by the efficiency and predictability of the platform. But what's more, we're seeing an incredible look into the future of more sustainable air travel.

"One of the coolest features to us is that this tool embeds efficiency and sustainability into our operation, which will go a long way in helping us meet our goal of net zero carbon emissions by 2040. We saved 480,000 gallons of fuel in six months and reduced 4,600 tons of carbon emissions. This was at a time when travel was down because of the pandemic. We anticipate Flyways will soon become the de facto system for all airlines. But it sure has been cool being the first airline in the world to do this!"



Second Life – Artificial Intelligence Unmasks the Cover Up Beneath Modigliani’s ‘Portrait of a Girl’ – PRNewswire

SAN JOSE, Calif., June 9, 2021 /PRNewswire/ -- Amedeo Modigliani's 'Portrait of a Girl' (1917) is currently held at the Tate in London. But hidden beneath this painting is the figure of a woman that researchers have suggested is Modigliani's ex-lover, Beatrice Hastings. The couple had a tumultuous relationship ending in 1916. One year after their breakup, 'Portrait of a Girl' was completed. The timing suggests that Modigliani intentionally painted over his past girlfriend. Whilst Modigliani's 'Portrait of a Girl' was titled 'Mademoiselle Victoria' at the 1929 exhibition held at the Lefevre Gallery, the identity of the model remains uncertain.

Oxia Palus, a CogX 2021 finalist, is a London-based artificial intelligence startup founded by two Ph.D. candidates at University College London with the mission of resurrecting the world's lost art. The company engineered a proprietary approach combining artistic creation and technology: by means of spectroscopic imaging, artificial intelligence, and 3D printing, Oxia Palus actualized the pentimento beneath Modigliani's portrait. Using a processed X-ray fluorescence image, Oxia Palus trained an AI model to map between X-ray-like images and Modigliani paintings; from this, it reconstructed a lost masterpiece, Modigliani's lost Beatrice Hastings, the world's second NeoMaster. "The world's hidden art, locked beneath layers of paint, lies dormant waiting to be reborn. In the next few years, with the correct application of spectroscopic imaging, artificial intelligence, and 3D printing, we can actualize hundreds of lost works and change the history of art," said George Cann, Oxia Palus co-founder.

Oxia Palus co-developed two patent-pending technologies to create this NeoMaster with MORF Gallery, a Silicon Valley- and Hollywood-based creator, enabler and purveyor of fine art that enables technologies like AI, neuroscience, robotics and NFTs. "MORF Gallery is incredibly proud to play a role in enabling Oxia Palus to bring this exquisite piece of art history to the world. Great art evokes emotion from its creator and its admirers, so it is incredible to imagine Modigliani's emotions with each brushstroke deliberately erasing the memory of Hastings. Bringing this painting back to life is simply amazing," said Scott Birnbaum, CEO of MORF Gallery. Birnbaum continued, "As a follow-up to the successful launch of April's world-first NeoMaster, Oxia Palus and MORF Gallery have unlocked the keys to uncovering and protecting historically important artworks lost to the ages."

As featured this week in The Guardian, this piece will be on display in the prestigious London-based Lebenson Gallery from June 10th to 30th, along with the world's first NeoMaster, which was hidden under a Picasso for a century, and a video demonstration of the neomastic reconstruction process on a MORF ArtStick. This is the first time that either NeoMaster will be physically exhibited. MORF strategically chose Lebenson because of its premier London location, its patrons across Europe, its visionary AI art curation, and its collaboration in DEEEP, London's first AI Art Fair. "The Lebenson Gallery is thrilled to work with MORF Gallery. We are joining forces to promote cutting-edge artists and projects in artificial intelligence and art. The NeoMasters Exhibition is a great example of how these advanced technologies can reveal the unseen and interpret the unknown," said Stephane Bejean Lebenson, Founder and Owner of Lebenson Gallery.

Only 64 canvas editions of the world's second NeoMaster will ever be made available, one to commemorate each year of Hastings' life. NeoMaster 2 is now available starting at $22,222.22. To learn more about the world's second NeoMaster or set up a private consultation, visit MORF Gallery anytime or visit the Lebenson Gallery from June 10th to 30th, 2021.


Contact: Scott Birnbaum, (408) 455-5669

SOURCE MORF AI, Inc.


No bots need apply: Microtargeting employment ads in the age of AI – HR Dive

Keith E. Sonderling is a commissioner for the U.S. Equal Employment Opportunity Commission. Views are the author's own.

It's no secret that online advertising is big business. In 2019, digital ad spending in the United States surpassed traditional ad spending for the first time, and by 2023, digital ad spending will all but eclipse it.

It's easy to understand why. Seventy-two percent of Americans use social media, and nearly half of millennials and Gen Z report being online "almost constantly." An overwhelming majority of Americans under 40 dislike and distrust traditional advertising. Digital marketing is now the most effective way for advertisers to reach an enormous segment of the population, and social media platforms have capitalized on this to the tune of billions of dollars. In 2020, digital advertising accounted for 98% of Facebook's $86 billion revenue, more than 80% of Twitter's $3.7 billion revenue, and nearly 100% of Snapchat's $2.5 billion revenue.

But clickbait alone will not guarantee that advertisers and social media platforms continue cashing in on digital marketing. For these cutting-edge marketing technologies to be sustainable in job-related advertising, they must be designed and utilized in strict compliance with longstanding civil rights laws that prohibit discriminatory marketing practices. When these laws were passed in 1964, advertising more closely resembled the TV world of Darrin Stephens and Don Draper than the current world of social media influencers and "internet famous" celebrities. Yet federal antidiscrimination laws are just as relevant to digital marketing as they were to traditional forms of advertising.

One of the reasons advertisers are willing to spend big on digital marketing is the ability to "microtarget" consumers. Online platforms are not simply selling ad space; they are selling access to consumer information culled and correlated through the use of proprietary artificial intelligence algorithms. These algorithms can connect countless data points about individual consumers, from demographic details to browsing history, to make predictions. These predictions can include what each individual is most likely to buy, when they are most likely to buy it, how much they are willing to pay, and even what type of ads they are most likely to click.

So, suppose I have a history of ordering pizza online every Thursday at about 7 p.m. In that case, digital advertisers might start bombarding me with local pizzeria ads every Thursday as I approach dinnertime. Savvy advertisers might even rely on a platform's AI-enabled advertising tools to offer customized coupons to entice me to choose them over competitors.
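The prediction step behind an ad like that can be sketched as a simple logistic score over per-user signals. Everything here, the feature names, weights, bias, and threshold, is invented for this illustration; real platforms use proprietary models over far richer data.

```python
# Toy sketch of microtargeted ad scoring with invented signals and weights.
import math

# Hypothetical per-user signals an ad platform might track.
user = {
    "orders_pizza_thursdays": 1.0,   # past behavior
    "hour_is_near_dinner": 1.0,      # time-of-day context
    "clicked_food_ads_30d": 0.6,     # engagement history
}

# Hypothetical learned weights for each signal.
weights = {
    "orders_pizza_thursdays": 2.0,
    "hour_is_near_dinner": 1.5,
    "clicked_food_ads_30d": 1.0,
}

def click_probability(user, weights, bias=-3.0):
    """Logistic score: how likely is this user to click a pizzeria ad now?"""
    z = bias + sum(weights[k] * v for k, v in user.items())
    return 1.0 / (1.0 + math.exp(-z))

p = click_probability(user, weights)
show_ad = p > 0.5  # serve the ad only when the score clears a threshold
```

The same machinery that makes this effective for selling pizza is what makes it legally fraught for employment ads, as the next paragraphs explain.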

But microtargeting ads to an audience is one thing when you are trying to sell local takeout food. It is quite another when you are advertising employment opportunities. Facebook found this out the hard way when, in March 2019, it settled several lawsuits brought by civil rights groups and private litigants arising from allegations that the social media giant's advertising platform enabled companies to exclude people from the audience for employment ads based on protected characteristics.

According to one complaint filed in the Northern District of California, advertisers could customize their audiences simply by ticking off boxes next to a list of characteristics. Employers could check an "include" box next to preferred characteristics or an "exclude" box next to disfavored characteristics, including race, sex, religion, age, and national origin. Shortly after the complaint was filed, Facebook announced that it would be disabling a number of its advertising features until the company could conduct a full review of how exclusion targeting was being used. As part of its settlement of the case, Facebook pledged to establish a separate advertising portal with limited targeting options for employment ads.

To be clear, demographics matter in advertising, and relying on demographic information is not necessarily problematic from a legal perspective. Think for a moment about Super Bowl ads. Advertisers have historically paid enormous sums for air time during the game not only because of the size of the audience but because of the money that members of that particular audience are willing to spend on things like lite beer, fast food, and SUVs. Super Bowl advertisers make projections about who will be tuning in to the game and what sorts of products they are more or less likely to buy. They target a general audience in the knowledge that ads for McDonald's Value Meals and Domino's Pizza will reach viewers who are munching on Cheetos and nibbling on kale chips alike.

But AI-enabled advertising is different. Instead of creating ads for general audiences, online advertisers can create specific audiences for their ads. This type of "microtargeting" has significant implications under federal civil rights law, which prohibits employment discrimination based on race, color, religion, sex, national origin, age, disability, pregnancy, or genetic information. These protections extend to the hiring process. So, a law firm that is looking to hire attorneys can build a target audience consisting exclusively of people with Juris Doctor degrees, because education level is not, in itself, a protected class under federal civil rights law. However, that same employer cannot create a target audience for its employment ads that consists only of JDs of one race, because race is a protected class under federal civil rights law.

From a practical standpoint, exclusions of the sort that Facebook's advertising program allegedly enabled are the high-tech equivalent of the notorious pre-Civil-Rights-Era "No Irish Need Apply" signs. From a legal standpoint, they are even worse. These sorts of microtargeted exclusions would withhold the very existence of job opportunities from members of protected classes for the sole reason of their membership in a protected class, leaving them unable to exercise their rights under federal antidiscrimination law. After all, you cannot sue over exclusion from a job opportunity if you do not know that the possibility existed in the first place. Thus, online platforms and advertisers alike may find themselves on the hook for discriminatory advertising practices.

At the same time, one of the most promising aspects of AI is its capacity to minimize the role of human bias in decision-making. Numerous studies show that the application screening process is particularly vulnerable to bias on the part of hiring professionals. For example, African Americans and Asian Americans who "whitened" their resumes by deleting references to their race received more callbacks than identical applications that included racial references. And hiring managers have proven more likely to favor resumes featuring male names over female names even though the resumes are otherwise identical.

Often, HR executives do not become aware that screeners and recruiters engage in discriminatory conduct until it is too late. But AI can help eliminate bias from the earliest stages of the hiring process. An AI-enabled resume-screening program can be programmed to disregard variables that have no bearing on job performance, such as applicants' names. An applicant's name can signal, correctly or incorrectly, variables that usually have nothing to do with the applicant's job qualifications, such as the applicant's sex, national origin, or race. Similarly, an AI-enabled bot that conducts preliminary screening interviews can be engineered to disregard factors such as age, sex, race, disability, and pregnancy. It can even disregard variables that might merely suggest a candidate's membership in a protected class, including foreign or regional accents, speech impairments, and vocal timbre.
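
A minimal sketch of what "programmed to disregard" might look like in practice: strip protected attributes and common proxies from a record before a screening model sees it. The field names and record here are hypothetical, not drawn from any real screening product.

```python
# Hypothetical pre-processing step for an AI resume screener: remove
# protected attributes (and proxies such as names) from each applicant
# record before scoring. All field names are invented for illustration.
PROTECTED_OR_PROXY = {"name", "age", "sex", "race", "disability", "pregnancy"}

def redact(applicant: dict) -> dict:
    """Return a copy of the record with protected/proxy fields removed."""
    return {k: v for k, v in applicant.items() if k not in PROTECTED_OR_PROXY}

applicant = {"name": "Jordan", "age": 42, "skills": ["litigation"], "degree": "JD"}
print(redact(applicant))  # {'skills': ['litigation'], 'degree': 'JD'}
```

Note that redaction of explicit fields is only a first step; as the passage observes, subtler proxies (accent, vocal timbre) require engineering the model itself to ignore them.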

I believe that we can and must realize the full potential of AI to enhance human decision-making in full compliance with the law. But that does not mean that AI will supplant human beings any time soon. AI has the potential to make the workplace more fair and inclusive by eliminating actual bias on the part of resume screeners or interviewers. However, this can only happen if the people who design the advertising platforms and the marketers who pay to use them are vigilant about the limitations of AI algorithms and mindful of the legal and ethical obligations that bind us all.

Original post:

No bots need apply: Microtargeting employment ads in the age of AI - HR Dive

Widex Introduces My Sound: A New Portfolio of AI-enabled Features for Customization of Its Industry-leading Widex MOMENT Hearing Aids – PRNewswire

HAUPPAUGE, N.Y., June 9, 2021 /PRNewswire/ -- Building on the success of the revolutionary, artificial intelligence-based SoundSense Learn technology, Widex USA Inc. today announced Widex My Sound, a portfolio of AI features including a new solution that instantly enables intelligent customization of the company's cutting-edge Widex MOMENT hearing aids based on a user's activity and listening intent.

Widex was the first company to enable user-driven sound personalization by leveraging artificial intelligence in hearing aids. Now, within My Sound, Widex launches the third generation of its AI technology, vastly improving the usability of the AI solution based on the extensive data the company has gathered from the previous two generations.

This new AI solution further combines the capacity of artificial intelligence with users' personal real-world experience to deliver another level of automated customization. Through AI modeling and clustering of data collected via the Widex SoundSense Learn AI engine, highly qualified sound profile recommendations for the individual user can now be made based on the intent, need, and preferences of thousands of users in similar real-world situations.

"Widex is leading the industry by combining artificial intelligence and human intelligence to create natural sound experiences and foster social participation through better hearing," said Jodi Sasaki-Miraglia, AuD, Widex's Director of Professional Training and Education. "Once Widex Moment is fit properly by a local licensed hearing care professional, the user can, if necessary, customize their hearing aids with ease, choosing from multiple AI features. Plus, our latest generation delivers results in just seconds, putting control and intelligent personalization into the hands of every user."

My Sound is integrated into the Widex MOMENT app and is the home for all the powerful AI personalization Widex offers. The latest generation of AI utilizes the cloud-based user data of Widex users worldwide to make sound profile recommendations based on an individual user's current activity and listening intent. Users launch My Sound from the app and begin by selecting their activity, such as dining, then choosing their intent, such as socializing, conversation, or enjoying music.

Based on the user's selections, Widex can draw on tens of thousands of real-life data points, reflecting the preferences and listening situations of other Widex users who have used the app previously. In seconds, the user is presented with two recommendations, which can both be listened to before selecting the settings that sound best. In the event neither recommendation meets the individual user's needs, they can launch SoundSense Learn from the same screen to further personalize their hearing experience through that solution's sophisticated A/B testing process.
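
The A/B flow described above can be sketched as a simple tournament of paired comparisons. This is a toy illustration only: SoundSense Learn's actual algorithm and parameters are proprietary, and the candidate settings and preference function below are invented.

```python
# Toy sketch of iterative A/B preference selection, loosely modeled on
# the paired-comparison flow described in the article. Everything here
# (candidate settings, the preference function) is invented.

def ab_search(candidates, prefers):
    """Run successive A/B comparisons, keeping the preferred option each time."""
    best = candidates[0]
    for challenger in candidates[1:]:
        if prefers(challenger, best):
            best = challenger
    return best

# Pretend the user consistently prefers settings closer to a hidden
# ideal gain of 0.7 (a stand-in for their real listening preference).
target = 0.7
prefers = lambda a, b: abs(a - target) < abs(b - target)
settings = [i / 10 for i in range(11)]  # candidate gain levels 0.0 .. 1.0

print(ab_search(settings, prefers))  # 0.7
```

The real system presumably compares far richer sound profiles than a single gain value, but the principle is the same: the user never rates settings in the abstract, only picks the better of two.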

"Widex has created a radically different way of delivering hearing solutions for today's active hearing aid user," Sasaki-Miraglia continued. "Instead of having to program the hearing aid in a way that covers all situations the user might encounter, the hearing care professional ensures the best possible starting point for the user and My Sound then allows users to personalize their experience in real life, easily and instantly. In this way, the hearing solution adapts to the user's preferences and becomes even more personal."

The Widex MOMENT app, including My Sound with SoundSense Learn, is available for Apple and Android devices and is designed to work with Widex MOMENT Bluetooth hearing aids.

For more information about Widex MOMENT, click here. For high-res images and screen shots, click here.

About Widex

At Widex we believe in a world where there are no barriers to communication; a world where people interact freely, effortlessly and confidently. With sixty years' experience developing state-of-the-art technology, we provide hearing solutions that are easy to use, seamlessly integrated in daily life and enable people to hear naturally. As one of the world's leading hearing aid producers, our products are sold in more than one hundred countries, and we employ 4,000 people worldwide.

Media Contact: Dan Griffin Griffin360 212-481-3456 [emailprotected]

SOURCE Widex

Original post:

Widex Introduces My Sound: A New Portfolio of AI-enabled Features for Customization of Its Industry-leading Widex MOMENT Hearing Aids - PRNewswire

Achieving AI: Pharma’s Digital Transformation Pathway – Bio-IT World

By Allison Proffitt

June 10, 2021 | What does it mean for a pharmaceutical company to be digital, and how do we get there? That's what Reza Olfati-Saber, PhD, Global Head of AI & Deep Analytics, Digital & Data Science R&D at Sanofi, tackled yesterday at the DECODE: AI for Pharmaceuticals forum.

A digital pharma company is agile, Olfati-Saber argued, enabling it to discover drugs faster and to develop and manufacture drugs more efficiently. Olfati-Saber pointed out that the four first movers in the COVID-19 vaccine race (BioNTech/Pfizer, Moderna, AstraZeneca, and Janssen Pharmaceuticals) all have reputations as digitally advanced companies.

"The one thing all four companies have in common is that all of them are digitally advanced biopharma companies. The last two, among the larger pharma companies, happen to have very advanced AI and ML capabilities," Olfati-Saber said.

Digital can be hard to pin down, Olfati-Saber conceded, and he observed that many groups are eager to jump in and claim AI expertise. Lawyers seek to define ethical AI but don't generally take medical ethics into account, he said, while management professors claim to roadmap the digital transformation journey without any industry-specific insight. Even "digital" is defined in the most convenient way for each industry.

Olfati-Saber narrowed the scope to discuss the meaning and architecture of digital transformation specifically for pharma R&D.

Digital Architecture Models

For pharma R&D, the digital transformation narrative can be illustrated as a pyramid architecture, Olfati-Saber said. The traditional pyramid has computing (cloud, infrastructure) as its wide base, advancing through applications (data storage, app development, security), data (data governance and security), AI policy (quality and ethics), analytics (data analytics and visualization), and machine learning.

Olfati-Saber views the lowest three layers of this pyramidcomputing, applications, and dataas the foundational digital layers. The first two are technical requirements. Together, along with the data layer, these three confer AI enablement. These competencies must be in place for a company to be AI-ready. AI policy, analytics, and machine learning make up the true AI capabilities for an enterprise, and these sit at the top of the pyramid.
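
The layering described above can be captured in a few lines. A minimal sketch: the layer names come from the talk as reported here, while the `ai_ready` predicate is a paraphrase of "these competencies must be in place for a company to be AI-ready," not anything Sanofi has published.

```python
# The six pyramid layers, base to pinnacle, grouped as in the talk.
PYRAMID = [
    "computing", "applications", "data",            # digital enablement
    "AI policy", "analytics", "machine learning",   # AI capabilities
]

def ai_ready(company_layers):
    """AI-ready once the three foundational layers are all in place."""
    return set(PYRAMID[:3]) <= set(company_layers)

print(ai_ready({"computing", "applications"}))          # False
print(ai_ready({"computing", "applications", "data"}))  # True
```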

But contrary to the narrow expertise of AI experts from law firms and management schools, Olfati-Saber argues that true digital transformation of a pharma company requires expertise from four quadrants. Both technology and management expertise are required to build the solid digital enablement foundations, and scientific and legal expertise combine to drive AI.

In fact, Olfati-Saber argues that it is practically impossible to expect a Chief Data Officer to know all four quadrants well enough to facilitate a digital transformation. Instead, he argues for a top digital expert and a top AI expert working together. Anything else "wouldn't do the job," he said.

Development Pathways

It's a complex schema, and Olfati-Saber proposes a four-phase pathway for development. Start by establishing the technical foundations, then add the needed data, the AI tools, and finally fine-tune the enterprise AI policy.

It's a slight rearrangement of the traditional pyramid view, moving AI policy to the final phase of development, or pinnacle of the pyramid, and grouping analytics and machine learning together below.

The rearrangement reflects what Olfati-Saber sees as the hardest part of the digital transformation: the biggest stumbling block for companies.

"Despite the fact that many large tech companies have gone through the first two phases of transformation really successfully, they're struggling to go through the last phases of transformation," he said. "Part of the reason is that there seems to be some sort of conflict between the business models of some tech companies and some of these AI and data privacy-related policies."

He alluded to Google's recent dismissal of company ethicists when their ethics findings (presumably in a paper submitted to an industry conference) didn't align with the company's goals.

"It's not easy to simply put together a committee and expect them to form the quality assurance and ethics principles of AI," Olfati-Saber said. "This is a very challenging task, just as challenging as any of those other three."

Deep Digital Transformation

When a company has achieved all four phases, Olfati-Saber said, they've undergone what he calls a deep digital transformation. And it's a worthwhile process, he argued. He outlined many examples of where AI can impact the pharmaceutical business: digital pathology, AI-based drug design, multi-omics analysis, digital health, digital manufacturing, AI-based regulatory approvals, and more.

In his own estimate, offered as an example, Olfati-Saber argued that the AI-enabled cost savings per image suggest digital pathology is 2,500x cheaper than standard pathology and 60x faster, even when the digital tools are simply aiding pathologists, not replacing them.

"The real reason pharma companies or investors out there are interested in applying AI and investing in AI for pharma is not because it's a fancy tool, or because it's fashionable. It's mostly because it generates massive returns in agility, scalability, and cost savings," Olfati-Saber said. "These are the true reasons why a pharma company would want to become AI-ready, go through a digital transformation, and have AI capabilities."

Read the rest here:

Achieving AI: Pharma's Digital Transformation Pathway - Bio-IT World

For the Third Consecutive Year, Verint AI and Analytics Solutions Receive Perfect Customer Satisfaction Scores in New Interaction Analytics Report -…

MELVILLE, N.Y.--(BUSINESS WIRE)--Verint (NASDAQ: VRNT), The Customer Engagement Company, today announced its artificial intelligence (AI) and analytics solutions achieved perfect scores in all 24 customer satisfaction categories for vendor satisfaction, product capabilities and product effectiveness in DMG Consulting LLC's new 2021/2022 Interaction Analytics (IA) Product and Market Report.* In addition, Verint represents the largest market share by number of customers and achieved the greatest year-over-year increase in number of customers among vendors named in the report's market activity analysis.

DMGs report focuses on contact center and service-related uses of interaction analytics. The report highlights the increasing value of operationalizing the findings from interaction analytics for voice of the customer (VoC), quality management (QM), customer journey analytics and the customer experience. It also explores how the value and benefits of IA increase substantially when this technology is embedded in third-party applications to enrich their outputs and findings.

"Interaction analytics follows conversations as customers pivot from one channel to another, providing necessary insights into all touchpoints in the customer journey," said Donna Fluss, president, DMG Consulting. "These solutions enable companies to alter the outcome of customer conversations, responding with real-time alerts and next-best-action guidance to agents, regardless of where they are located."

The report reviewed Verint's Speech and Text Analytics, Contextual Real-Time Guidance, Analytics-Enabled Quality Management and Experience Management applications. Verint Speech and Text Analytics automatically analyzes and identifies trends, themes, emotion, sentiment, and the root causes driving customer interactions, including voice calls and unstructured text such as chat, in order to proactively respond to issues and act on opportunities that enhance the customer experience and support business objectives.

"Speech and text analytics solutions provided critical insights over the past year, enabling customers to adapt to the dynamics of interactions and respond to issues in real time," says Verint's Celia Fleischaker, chief marketing officer. "We are committed to continually innovating to help drive insights and value across the organization to create better customer experiences."

Visit Verint Speech Analytics and Verint Text Analytics.

About Verint Systems Inc.

Verint (Nasdaq: VRNT) helps the world's most iconic brands, including over 85 of the Fortune 100 companies, build enduring customer relationships by connecting work, data and experiences across the enterprise. The Verint Customer Engagement portfolio draws on the latest advancements in AI and analytics, an open cloud architecture, and The Science of Customer Engagement to help customers close the engagement capacity gap.

Verint. The Customer Engagement Company. Learn more at Verint.com.

*Source: DMG Consulting LLC, 2021/2022 Interaction Analytics Product and Market Report, May 2021

This press release contains forward-looking statements, including statements regarding expectations, predictions, views, opportunities, plans, strategies, beliefs, and statements of similar effect relating to Verint Systems Inc. These forward-looking statements are not guarantees of future performance and they are based on management's expectations that involve a number of risks, uncertainties and assumptions, any of which could cause actual results to differ materially from those expressed in or implied by the forward-looking statements. For a detailed discussion of these risk factors, see our Annual Report on Form 10-K for the fiscal year ended January 31, 2021, and other filings we make with the SEC. The forward-looking statements contained in this press release are made as of the date of this press release and, except as required by law, Verint assumes no obligation to update or revise them or to provide reasons why actual results may differ.

VERINT, THE CUSTOMER ENGAGEMENT COMPANY, BOUNDLESS CUSTOMER ENGAGEMENT, THE ENGAGEMENT CAPACITY GAP and THE SCIENCE OF CUSTOMER ENGAGEMENT are trademarks of Verint Systems Inc. or its subsidiaries. Verint and other parties may also have trademark rights in other terms used herein.

More:

For the Third Consecutive Year, Verint AI and Analytics Solutions Receive Perfect Customer Satisfaction Scores in New Interaction Analytics Report -...

Active.AI and Glia join forces on customer service through conversational AI – Finextra

Active.Ai, a leading conversational AI platform for financial services, and Glia, a leading provider of Digital Customer Service, today announced a strategic partnership. Together, the fintechs are empowering financial institutions to meet customers in the digital domain and support them through conversational AI, allowing them to drive efficiencies, reduce costs and, most importantly, facilitate stronger customer experiences.

Glia's Digital Customer Service platform enables financial institutions to meet customers where they are and communicate with them through whichever methods they prefer (including messaging, video banking and voice) and guide them using CoBrowsing. Over 150 financial institutions have improved their top and bottom lines and increased customer loyalty by leveraging Glia's platform.

Over 25 leading financial institutions across the world use Active.Ai's platform to handle millions of interactions per month across simple and complex banking conversations. Active.Ai's low-code platform enables banks and credit unions to deploy and scale rapidly, with 150+ use cases pre-built out of the box, to increase customer acquisition, reduce customer service turnaround time and deepen customer engagement.

"Being able to strategically blend AI and the human touch has become a key differentiator for banks and credit unions; doing so enables them to improve efficiencies while helping ensure every customer interaction is consistent, convenient and seamless," said Dan Michaeli, CEO and co-founder of Glia. "Our partnership with Active.AI will help further our mission of helping financial institutions modernize the way they support customers in the digital world."

"Customers today expect a frictionless omnichannel experience, and the future of financial services is all about AI/human collaboration. We are excited to partner with Glia to enable financial institutions to deliver great customer experiences and achieve higher NPS," says Ravi Shankar, CEO of Active.Ai.

See the original post:

Active.AI and Glia join forces on customer service through conversational AI - Finextra

Multiverse Token (AI) To Be Launched on KuCoin Win, Introducing Gaming to Token Initial Distribution – PRNewswire

Officially launched on June 7, 2021, KuCoin Win allows participants to access the initial token distribution of promising blockchain projects at an early stage, at lower cost, and through a fairer, more entertaining process. As the first game product of KuCoin Win, "LuckyRaffling" will introduce multiple game modes. Users can exchange designated tokens such as KCS for lucky codes to participate in the lottery. When the lucky codes are sold out, the platform will automatically draw prizes and select a winner to receive a physical or token reward.

In addition to bringing more entertainment and interaction into the blockchain industry, as well as attracting more newcomers to the crypto world, KuCoin Win also becomes a new listing channel on KuCoin. Compared with other channels like Spotlight, a token launch channel, and BurningDrop, a token mining channel, LuckyRaffling does not require KCS holdings or token staking. Users can redeem a lucky code with a small amount of tokens to win the reward, which greatly reduces the threshold for retail investors to participate in the early investment of crypto projects. KuCoin has always been committed to exploring and empowering high-quality projects, and has listed about 350 tokens with 800 trading pairs. The launch of KuCoin Win is a significant step to accelerate KuCoin's discovery of crypto gems in the blockchain world.

Multiverse, the first project on KuCoin Win, is developed by teams based in Silicon Valley and Singapore. The founding team graduated from Stanford University and held senior positions at Google and Twitter. Multiverse has built a decentralized A.I. ecosystem that enables the community to easily fund, train, and deploy machine-learning applications with their own custom tokens and decentralized economic systems. Multiverse has been backed by Huobi Ventures, Matrix Partners China, Arrington XRP and Fenbushi Capital.

"Despite the explosive growth in 2021, there are currently less than 2% of people holding crypto assets in the world, and the entire industry is still in the early stages. KuCoin has been committed to contributing to the mass adoption of blockchain technology and cryptocurrency," said Johnny Lyu, CEO at KuCoin Global, "KuCoin Win, which combines gamification and token distribution, is our latest attempt. We are happy to start this new adventure with the Multiverse team, and we look forward to having more users invest in promising blockchain projects in the early stages through a more interesting, fair, and low-threshold way offered by KuCoin Win."

"A.I. will become the most important technological force in society. However, it is dominated by a few companies and elite institutions. Many people have great ideas on how A.I. could help their communities in a bias-free manner, but technical hurdles and a lack of capital prevent them from realizing their goals. That's why we built the Multiverse, which lets people start projects without needing to write code or raise large amounts of capital," said Cliff Szu, Co-founder of Multiverse, "We are excited to partner with KuCoin to launch our token AI, which can be used to stake in A.I. projects to support Multiverse app development teams."

Starting from June 10, Multiverse will run a 3-day LuckyRaffling game on the KuCoin Win platform. Users can exchange KCS for lucky codes and have the opportunity to obtain AI tokens. Five million AI tokens will be distributed during this period, accounting for approximately 0.02% of the total supply of AI tokens.
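
As a quick sanity check on the figures quoted above: if 5 million tokens are about 0.02% of total supply, the implied total supply is roughly 25 billion AI tokens. A one-liner, using integer arithmetic to avoid float rounding (the 0.02% figure is the article's, and is described as approximate):

```python
# Implied total supply from the distribution figures quoted above.
# 0.02% means 2 parts in 10,000; the article calls the figure approximate.
distributed = 5_000_000
implied_total_supply = distributed * 10_000 // 2
print(f"{implied_total_supply:,}")  # 25,000,000,000
```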

About KuCoin

Launched in September 2017, KuCoin is a global cryptocurrency exchange for over 350 digital assets. It currently provides Spot trading, Margin trading, P2P fiat trading, Futures trading, Staking, and Lending to its 8 million users in 207 countries and regions around the world. In 2018, KuCoin secured $20 million in Round A funding from IDG Capital and Matrix Partners. According to CoinMarketCap, KuCoin is currently the fifth biggest crypto exchange. In 2021, Forbes Advisor named KuCoin as one of the Best Crypto Exchanges for 2021. For more information, please visit http://www.kucoin.com

About Multiverse

Multiverse's decentralized A.I. ecosystem enables the community to easily fund, train, and deploy machine-learning applications (planets) with their own custom tokens and decentralized economic systems. For more information, visit https://multiverse.ai/

SOURCE KuCoin

View post:

Multiverse Token (AI) To Be Launched on KuCoin Win, Introducing Gaming to Token Initial Distribution - PRNewswire

Bamboo Architectural Designs that prove why this material is the future of modern, sustainable architecture: Part 2 – Yanko Design

Bamboo is gaining a lot of popularity as a sustainable material in the world of architecture! Bamboo is being used to create beautiful and majestic structures that are green and respect their surrounding environment. It is imperative to build homes, resorts, offices, and more that are in harmony with the natural environment around them. And we've curated a collection of impressive architectural structures built from bamboo that prove sustainability, comfort, and luxury can be combined! From a luxury resort to a community centre for female refugees, these architectural designs truly represent the versatility and scope of bamboo!

The Ulaman Eco-Retreat Resort, made mostly from bamboo, is here to show you that sustainability can be well integrated into luxury. Designed by Inspiral Architects, this eco-resort is located in Bali's Kaba-Kaba village. It has been constructed using materials found directly on the site and in the immediate locality, which helped the resort become completely carbon zero. Apart from bamboo, rammed earth has been used for the resort's ground-level walls. Rammed earth is a wonderful green alternative to concrete, which is responsible for more than 8% of the construction industry's emissions, an industry that contributes 30% of global greenhouse emissions.

You don't have to be an architect to build a bamboo structure of your own, thanks to the Zome building kit by Giant Grass! The studio has made a DIY kit that is basically a larger-than-life LEGO project, which can live in your backyard or be scaled up to create a community space. The zome is a flexible space: it can be used by children to hang out in the backyard, as a gazebo for you to entertain guests in, a greenhouse for seedlings, a creative space in the office, a quiet space for yoga at home, or a glamping tent; it can be anything you want it to be. This DIY kit is perfect for those who want to live sustainably and enjoy working on projects that yield a productive reward. The kit comes with everything needed: 350 precision-made bamboo strips, nuts, bolts, and an installation guide to make the 3m x 3m zome.

Warith Zaki and Amir Amzar plan to use bamboo grown on Mars to build the first colony there, named Seed of Life. The conceptual colony design is a series, or cluster, of structures woven from bamboo by autonomous robots. The aim of the project is to create structures that rely neither on construction materials shipped from Earth nor on 3D printing. "After doing a lot of research on Mars colonization, we realized that half of the ideas would go about deploying fully synthetic materials made on Earth to build shelters, while the other half is about using the locally available regolith," said Zaki and Amzar. "Human civilization has yet to build anything on any other planet outside of Earth. That fact alone opens up infinite possibilities of what could or should be used. Sure, 3D printing seems to be a viable proposition, but with thousands of years' worth of experience and techniques in shelter construction, why shouldn't we tap other alternatives too?"

Architect Rizvi Hassan utilised bamboo to build a community centre for Rohingya women living in a refugee camp. The women can bathe and receive counselling at the community centre. Featuring a circular courtyard, which is sheltered except for an open space in the middle, the centre is called Beyond Survival: A Safe Space for Rohingya Women and Girls. It is located in Camp 25, a refugee site in Teknaf, Bangladesh.

Hague is a student at the University of Westminster, where she is pursuing her Master's in Architecture. Her design features shellac-coated bamboo to emphasize the use of biomimicry in different disciplines of design; in her case, it provides eco-friendly architectural solutions inspired by nature. For the main structure, Hague drew inspiration from the Mimosa pudica plant, which closes its leaves when it senses danger, and that is how she came up with collapsible beams featuring inflatable hinges. It gave the greenhouse a unique origami effect (it actually looks like paper too!) and also enables the structure to be easily flat-packed for transportation and storage.

This bamboo sports hall in Chiang Mai, Thailand was built by Chiangmai Life Architects. It was modeled after the petals of a lotus flower, and has been built using only bamboo! The use of bamboo ensures a cool and pleasant environment in the sports hall at all times. The structure has a zero-carbon footprint!

Designed by o9 Design Studio, the Chi-bu resort on the outskirts of Saigon, Vietnam, is clad in locally sourced native bamboo and rattan, and traditional techniques were merged with cutting-edge design philosophies to construct it. It consists of seven bungalows surrounded by a river and wild gardens! It's a relaxing haven!

Casa Covida is a unique home that blends these age-old construction practices with the marvels of modern technology, like 3D printing, to elevate sustainable architecture to a new level! Even today, earth-based houses are used by almost 30 percent of the world's population because they are low-tech, affordable, and simple. These are not just tiny huts; they cover everything from hand-made earthen buildings to traditionally modern homes, the binding factor being the use of rammed earth techniques as well as sustainable materials like bamboo or wood. These materials are local and easy to source; what could be easier than using the earth beneath one's own feet?

The Eibche by Shomali Design takes the cabin game to a new level by incorporating the best of Balinese culture, modern architecture, and cozy interiors. The elevated structure weaves concrete and bamboo into its design. The team has used locally sourced building materials: wood for the structure and a brick-stone combination for the foundation. The frame is then cemented with concrete, which brings in a hint of modern minimalist architecture. The designers chose organic materials in order to create harmony with the environment, so Eibche showcases a lot of bamboo poles, woven bamboo, coconut wood, and teak wood in both the interior and the exterior.

These bamboo nest smart-towers were designed for a future Paris by Vincent Callebaut! These twirling towers are the perfect combination of architecture, sustainability, and nature!

For more impressive environment-friendly bamboo architectural designs, check out Part 1 of this post!

Here is the original post:

Bamboo Architectural Designs that prove why this material is the future of modern, sustainable architecture: Part 2 - Yanko Design

This is how life on moon will look like; concept as real as it gets [details] – IBTimes India

As NASA is busy preparing plans for the next Artemis mission, aimed at landing humans on the moon, a short film named 'Life Beyond Earth' has presented a realistic version of a future lunar base. The four-minute short film also showcases the future habitat on the lunar surface where humans could live, aiming toward the ultimate goal of Mars colonization.

Visuals of a future lunar base

The concept shown in this short film was developed by the architectural firm Skidmore, Owings & Merrill (SOM). Part of an installation used in the short film is currently exhibited at the 17th International Architecture Exhibition of the Biennale in Venice, Italy. While making this short film, the makers sought guidance from experts at the European Space Agency (ESA) and retired NASA astronaut Jeffrey Hoffman. SOM also used ESA's Concurrent Design Facility (CDF) to design the hypothetical lunar base that could be set up on the lunar surface.

Hypothetical lunar base on the moon. (Image: Life Beyond Earth / European Space Agency)

"The invitation to exhibit at the Venice Biennale and generally the positive response to this fruitful collaboration between our space engineering world and architecture experts are very encouraging. This project could pave the way for further multidisciplinary exercises here in Europe when thinking about future sustainable human habitat concepts," said ESA materials engineer Advenit Makaya in a recent statement.

The inspiration behind the design

The ESA statement noted that the ultimate inspiration for the lunar habitat came from the vision of the international Moon Village, a hypothetical concept for lunar settlement made using an alliance of private and public, space and non-space partners.

"The team was enthusiastic from day one. Our CDF sessions allowed us to perform a close review of the design with our own ESA experts, providing valuable feedback to SOM," said CDF team leader Robin Biesbroek.

The primary attraction of the lunar base developed by SOM is an imagined habitat that could help to build an early colony on the moon. Its semi-inflatable design ensures a high volume-to-mass ratio: the habitat can expand to nearly double its packed volume when inflated.

Link:

This is how life on moon will look like; concept as real as it gets [details] - IBTimes India

Donald Trump responds to Facebook ban by hinting at return to White House – The Guardian

Donald Trump has appeared to drop his strongest hint yet at another presidential run in 2024, responding to news of his two-year ban from Facebook on Friday by saying he would not invite Mark Zuckerberg to dinner "next time I'm in the White House."

It has also been widely reported this week that Trump believes he will be reinstated in the presidency by August.

He will not. But in his statement on Friday he did not say if he thought he would return to the White House because he would be reinstated or because he would run for the Republican nomination again and then defeat Joe Biden or another Democrat.

Trump's statement read: "Next time I'm in the White House there will be no more dinners, at his request, with Mark Zuckerberg and his wife. It will be all business!"

Trump has a history of using public statements to troll his opponents and a long record of lies and exaggerations and promoting baseless conspiracy theories. At the same time Trump has maintained a strong grip on the Republican party and there is intense speculation about whether or not he would run for the presidency again.

Nick Clegg, the former British deputy prime minister who is now Facebook's vice-president of global affairs, announced the social media website's ban on Trump until 2023.

It follows the recommendation of Facebook's oversight board. Trump has been suspended from the social media site since January, when he incited supporters to attack the US Capitol in service of his lie that his defeat by Joe Biden was the result of electoral fraud.

In a first statement on the suspension, Trump said it was an insult to those who voted for him in the rigged presidential election and said: "They shouldn't be allowed to get away with this censoring and silencing."

Amid striking polling about support for his lies among Republican voters, Trump still dominates polls of possible contenders for the party's nomination in 2024.

Trump appears to be convincing himself the election was stolen and that some mechanism exists by which he might be reinstated, a belief apparently stoked by Mike Lindell, the chief executive of MyPillow and a hardline Trump supporter.

According to CNN, which confirmed reporting by Maggie Haberman of the New York Times and by the conservative National Review, Trump has asked advisers: "What do you think of this theory?"

A source also told CNN: "People have told him that it's not true."

See the original post:

Donald Trump responds to Facebook ban by hinting at return to White House - The Guardian

Donald Trump's Justice Department Obtained Gag Order On CNN Attorney To Keep Secret Its Pursuit Of Reporter's Email Records – Deadline

The Justice Department under President Donald Trump obtained a gag order that kept top CNN executives from disclosing the government's pursuit of reporter Barbara Starr's email and other records as part of an apparent leak investigation.

According to CNN, the effort started in July of last year and was only revealed on Wednesday, when a federal judge unsealed parts of the case. CNN's general counsel David Vigilante went on air to explain that he had been unable to reveal details of the case even to Starr herself. She and reporters from The Washington Post and The New York Times were informed last month that the government had seized their records without their knowledge.

Vigilante described a protracted legal battle that ultimately resulted in the DOJ agreeing to a much narrower disclosure of records than the tens of thousands originally sought, spanning two months in 2017.


It's still not clear why the government was seeking the records, but the DOJ said last month that Starr was not a target of an investigation. During the time frame in which the government sought the records, Starr, CNN's Pentagon correspondent, had reported on North Korea, Syria and Afghanistan.

"We were completely deprived of our right to defend ourselves," Vigilante said on CNN on Wednesday.

There has been concern that the Biden Justice Department continued to pursue the cases rather than immediately suspend the pursuit of reporters' records. After Starr received her letter, CNN's chief White House correspondent Kaitlan Collins asked Biden about it, and he said the practice was "simply, simply wrong." The White House and the Justice Department announced over the weekend that they would end the practice of subpoenaing journalists' phone and email records in leak investigations.

Vigilante said that representatives from the network, as well as the Times and Post, would meet on Monday with Attorney General Merrick Garland.

While it is not uncommon for media organizations to receive subpoenas for information in court cases, what was particularly unusual in this instance was the DOJ's ability to obtain a secrecy order. That limited the circle of people at the network who knew what was going on to Vigilante and other attorneys for the network, while CNN president Jeff Zucker was given only limited details, the network reported.

"For a news organization it is incredibly unusual," Vigilante said. "It has never happened to us before."

On Friday, The New York Times reported that while the Trump administration never informed the Times about its pursuit of records from four reporters, the Biden administration did, but it imposed a gag order to prevent that paper's top lawyer, David McCraw, from disclosing it to all but a small group of top executives.

In The Washington Post this week, publisher Fred Ryan wrote that Trump's actions, and the expansion upon them during the Biden administration, pose "a grave threat to our ability as a nation to keep powerful officials in check." With the revelation that the Justice Department has secretly obtained phone and email records at multiple news organizations to sniff out the identities of journalists' sources, he wrote, government employees who would otherwise come forward to reveal malfeasance are more likely to fear exposure and retaliation, and therefore to stay silent.

Ryan called for "clear and enduring safeguards" to ensure that this brazen infringement of the First Amendment rights of all Americans is never repeated.

Vigilante also called for "some rules around this" to make sure that it doesn't happen again. He still isn't sure just what the government was seeking. "Candidly, to this day, I don't know because it was such an opaque process," he said.

At a hearing on Capitol Hill on Wednesday, Garland said that the president "has made very clear his view of the First Amendment and it coincides with mine. It is vital to the functioning of our democracy." That extends to the need for journalists to go about their work disclosing wrongdoing and error in the government. "That is part of how you have faith in the government, by having that transparency."

He said that these were decisions made under "a set of policies that have existed for decades, that continuously with each new administration ratcheted up greater protections." But going forward, "we have adopted a policy which is the most protective of journalists' ability to do their jobs in history ... We will not use compulsory process in leak investigations to require reporters to provide information about their sources when they are doing their job as reporters."

View original post here:

Donald Trump's Justice Department Obtained Gag Order On CNN Attorney To Keep Secret Its Pursuit Of Reporter's Email Records - Deadline