Passport Technology expands its casino relationship adding 16 more sites in BC and Alberta – Proactive Investors USA & Canada

Australis Capital revealed that it was acquiring Passport to complement its existing fintech assets.

Passport Technology Inc, a provider of cash access services to casinos, announced Thursday that it had expanded its services by adding 16 casinos in British Columbia and Alberta to its platform.

This is on top of the 12 Gateway Casinos & Entertainment (GCEL) locations already supported by Passport in the province of Ontario.

"Were excited to expand our long-term relationship with GCEL across three Canadian Provinces," said Kurt Sullivan,the president of Passport.

"Passport has accelerated growth in 2020 and based on executed long term agreements in Canada, UK, and Europe, we expect to achieve US$12M in revenue in 2021 with $4 millionin operating income, understanding COVID-19 does represent some uncertainty to the global casino industry," he added.

On June 25 this year, Australis Capital (CSE: AUSA) (OTC: AUSAF) revealed that it was acquiring Passport to complement its existing fintech assets.

It will leverage Passport's international footprint in brick-and-mortar casinos with Australis' Cocoon technology serving cannabis dispensaries in North America and Australis Paytron Merchant Services.

Internet of Things fever detection technology ThermalPass featured on Reuters TV – Proactive Investors USA & Canada

The technology was highlighted as one of the leading thermal body temperature scanners that will help countries manage the spread of coronavirus

Internet of Things Inc (OTCMKTS:INOTF) announced Thursday that its fever detection system ThermalPass was featured on Reuters TV.

The Toronto-based company's technology was highlighted as one of the leading thermal body temperature scanners that will help economies around the world manage the spread of coronavirus.

ThermalPass is a medical-grade, touchless, sensor-based body temperature device that scans the body 20 times per second, looking for temperatures above a threshold level, typically 98.6 degrees Fahrenheit.

The story, titled "Demand for thermal devices skyrockets in coronavirus times," was featured on Reuters, which is seen by over 1 billion viewers a day and draws more than 33 million unique monthly visitors, according to Internet of Things.

"ThermalPass is an innovation inspired out of necessity to help get this crippled economy back to normal, to get people out of their houses, to get people in malls, to get people back in the office in a post-COVID world," said Michael Lende, CEO of Internet of Things.

"Unlike camera solutions, our touchless medical-grade sensor-based ThermalPass does not breech privacy nor social distancing, as it does not require human intervention, and is more accurate with 400 temperature readings per second. People do not have to slow down as they go through ThermalPass."

According to research firm Markets and Markets, the global thermal imaging sector is forecast to reach US$4.6 billion by 2025, up from $3.4 billion in 2020.

ThermalPass is a wholly owned subsidiary of Internet of Things.

Rapid innovation in clean energy technology needed to meet 2050 carbon goals – E&T Magazine

Without a major acceleration in clean energy innovation, it is unlikely that the world will meet its 2050 net-zero carbon climate goals, the International Energy Agency (IEA) has said.

In a new report, the IEA outlines how the world can quickly shift towards clean energy infrastructure while analysing the market readiness of more than 400 clean energy technologies.

With the cost of technologies like solar and offshore wind plummeting in recent years, fossil fuel plants are becoming increasingly uneconomical in comparison.

However, the need for baseload electricity supply to pick up the slack when atmospheric conditions aren't suitable for renewable generation - in addition to significant lobbying from the fossil fuel sector - has meant that many countries are failing to transition quickly enough to meet carbon targets.

"There is a stark disconnect today between the climate goals that governments and companies have set for themselves and the current state of affordable and reliable energy technologies that can realise these goals," said Dr Fatih Birol, the IEA's executive director.

"This report examines how quickly energy innovation would have to move forward to bring all parts of the economy - including challenging sectors like long-distance transport and heavy industry - to net-zero emissions by 2050 without drastic changes to how we go about our lives.

"This analysis shows that getting there would hinge on technologies that have not yet even reached the market today. The message is very clear: in the absence of much faster clean energy innovation, achieving net-zero goals in 2050 will be all but impossible."

A significant part of the challenge comes from major sectors where there are currently few technologies available for reducing emissions to zero, such as shipping, trucking, aviation and heavy industries such as steel, cement and chemicals.

The IEA said that decarbonisation in these sectors will rely on the development of new technologies, a potentially long-winded process that needs to be undertaken now in the hope that solutions can be found within the required timeline.

If certain key technologies can be developed by 2030, they could be implemented during the next round of plant refurbishments in heavy industry, saving a potential 60 gigatons of carbon emissions, the report found.

It added that the four most critical clean technologies needing innovation are battery technologies, carbon capture and storage, bioenergy and low-carbon hydrogen, which are currently mostly in the development phase and/or costly.

Global CO2 emissions are expected to be 8 per cent lower this year than in 2019 - their lowest level since 2010 - as energy demand has slumped due to the coronavirus pandemic, but they are likely to rebound as economies recover unless action is taken.

The IEA previously detailed a $3tn green recovery plan designed to generate millions of jobs through measures such as mass retrofitting of buildings to improve energy efficiency and pouring money into renewable energy generation.

The radical technologies to keep offices clear of coronavirus | Free to read – Financial Times

As economies emerge from hibernation, employers are rushing to make workplaces safe with rudimentary tools such as hand sanitiser, face masks and the use of stairs rather than lifts. But engineers are developing more radical technologies to keep the virus out of offices.

The big challenges posed by the virus indoors are the collection of particles on surfaces and the flow of air between individuals. "Pandemics like this can provide fertile ground for creative minds to think about how to do things differently," said Shaun Fitzgerald, visiting professor at the University of Cambridge.

Many of the innovations, however, will not come cheap. Here are some of the emerging options:

Viruses and bacteria can survive for a long time on surfaces and can be stubbornly resistant to cleaning. On plastics and steel, for example, the novel coronavirus can live for up to 72 hours.

Silver and copper, by contrast, are known to kill viruses and bacteria within four hours. "But the timeframe we need is seconds to minutes, and it needs to be built into the materials," says Felicity de Cogan, research fellow at the University of Birmingham.

She is also the founder of NitroPep, a company that is developing layers of material with tiny spike-like particles that puncture and kill viruses within minutes.

NitroPep's spikes are tiny antimicrobial agents that can be added to desks, walls and other surfaces and rupture anything with a membrane that lands on them.

"It doesn't require a change in behaviour, it just sits there and kills whatever lands on it," said Ms de Cogan. The spikes cannot be felt by anyone running their hand across the surface.

The technology is untested for coronavirus, but when it was piloted for a year on a Royal Navy ship it removed more than 95 per cent of bacteria such as E.coli and MRSA, which is resistant to many forms of antibiotics. How effective the technology is at killing viruses such as Sars-Cov-2, which causes Covid-19, remains to be seen.

If deemed to be effective for the novel coronavirus, Ms de Cogan says she will look to apply the microscopic spikes to handles and seating on public transport and use them for self-cleaning protective equipment.

Although exact pricing has not been determined, the technology has been designed to be very economical so that it can be used as widely as possible, Ms de Cogan said.

Some experts have reservations. "We're not going to be able to cover every product and material we touch with self-cleaning surfaces," said Joseph Gardner Allen, assistant professor at the Harvard TH Chan School of Public Health.

Coronavirus has brought a new lease of life to a decades-old technology known as germicidal ultraviolet: beams of UV light that kill micro-organisms by mangling RNA in viruses and DNA in bacteria and fungi.

It already has a track record: during a series of drug-resistant tuberculosis outbreaks in the 1980s, researchers found that placing UV lamps on the ceiling of large rooms effectively stopped transmission of the disease.

It is particularly recommended in crowded and poorly ventilated environments such as food manufacturing facilities, warehouses and airports.

Coronavirus has even turbocharged demand for UV disinfecting robots. Danish company UVD Robots was the first company to invent these machines, which travel around buildings emitting UV light that leaves bacteria and viruses too damaged to function. The robots, which sell for roughly 60,000, can already be found at hospitals, hotels, offices and airports around the world, including London's Heathrow.

However, there are real concerns about UV radiation causing skin and eye damage to humans. That means it has to be placed high up and encased in light fixtures or air conditioning systems, while the robots are programmed to operate only at night when no one is around.

Real-time environmental monitors that check the pulse of a building already exist to assess things like CO2 levels and could be retuned to focus on the virus.

Some researchers in Switzerland are trying to develop sensors that detect the virus itself. Researchers at the Swiss Federal Institute of Technology (ETH Zurich) and the Swiss Federal Laboratories for Materials Science and Technology (Empa) have developed a sensor set inside a chamber that emits a light signal if it comes into contact with the virus's RNA.

Testing in real-life environments, including hospitals, train stations and shopping malls, will start in the next few months.

While high-tech solutions may show promise, some engineers argue that the cost of implementation and speed of delivery mean that the focus now should be on simpler upgrades to existing systems. Chief among them are heating, ventilation and air conditioning (HVAC) systems.

They can play a key role in preventing the accumulation of tiny airborne microdroplets known as aerosols, but in many cases there is room for improvement. The minimum ventilation flow rate is typically 5-10 litres of fresh air per person per second, but some buildings can have just 1 litre per person per second.

Many ventilation systems also circulate air from one indoor space to another, increasing the risk of airborne infection. Instead, each room needs to be pumped full of 100 per cent outdoor air wherever possible, engineers say.

"You always want the air to move from clean to dirty and then out. In the bathroom, you want it to move from indoor, to bathroom and then out through the exhaust," said Mr Allen.

Wirth Research, based in Oxford and set up by Nick Wirth, a former Formula One technical director, is developing a system to destroy airborne particles from some of the least ventilated spaces, notably passenger lifts and aeroplanes.

Cool indoor air is circulated out of the space into a "viral furnace", where it is heated to more than 95°C to kill any pathogens and then cooled and filtered back in. Mr Wirth estimates that installing the system in a small space such as a lift would cost several thousand pounds.

The process of heating and then cooling air is fairly energy-intensive, but Mr Wirth argues that it will be essential to ensure the safety of some of the most stagnant environments.

IDology’s ID verification, anti-fraud technology added to Microsoft Azure AD for frictionless onboarding – Biometric Update

IDology's multi-layered ExpectID identity verification and anti-fraud technology has been integrated with Microsoft Azure Active Directory (Azure AD) External Identities for a seamless user experience during the onboarding process, the company announced.

"We're pleased to enable our customers to utilize IDology's ExpectID solutions with Microsoft Azure Active Directory," Sue Bohn, Partner Director of Program Experience, Microsoft Identity Division at Microsoft Corp., said in a prepared statement. "Our B2B and B2C customers can obtain the power of a multi-layered solution that rapidly detects fraud and provides identity verification, while not compromising on end-user experience."

Fast, secure identity verification is a key product feature that ensures frictionless onboarding and helps meet consumer demand for a smooth experience, according to the announcement. IDology's Second Annual Consumer Digital Identity Study found that 83 million Americans do not proceed with the account signup process if they experience friction.

"In today's threat-heavy environment, detecting and preventing fraud require more than basic identity matching," Christina Luttrell, COO of IDology, said in a prepared statement. "ExpectID accesses thousands of data sources and analyzes multiple layers of identity attributes that seamlessly work together to instantly verify identities and detect fraud so that Azure AD External Identities customers can quickly greenlight legitimate individuals with confidence or dynamically escalate to other methods if needed."

ExpectID performs data analysis and helps companies build trust, ensure security and prevent revenue loss by identifying third-party contractors, guests, customers and other third parties. Azure AD customers can now install the identity verification and management product across all external identities for seamless verification, convenience and security.

Universities and Tech Giants Back National Cloud Computing Project – The New York Times

Leading universities and major technology companies agreed on Tuesday to back a new project intended to give academics and other scientists access to the computing resources now available mainly to a few tech giants.

The initiative, the National Research Cloud, has received bipartisan support in both the House and the Senate. Lawmakers in both houses have proposed bills that would create a task force of government science leaders, academics and industry representatives to outline a plan to create and fund a national research cloud.

This program would give academic scientists access to the cloud data centers of the tech giants, and to public data sets for research.

Several universities, including Stanford, Carnegie Mellon and Ohio State, and tech companies including Google, Amazon and IBM backed the idea as well on Tuesday. The organizations declared their support for the creation of a research cloud and their willingness to participate in the project.

The research cloud, though a conceptual blueprint at this stage, is another sign of the largely effective campaign by universities and tech companies to persuade the American government to increase government backing for research into artificial intelligence. The Trump administration, while cutting research elsewhere, has proposed doubling federal spending on A.I. research by 2022.

Fueling the increased government backing is the recognition that A.I. technology is essential to national security and economic competitiveness. The national cloud legislation will be proposed as an amendment to this years defense budget authorization.

"We have a real challenge in our country from China in terms of what they are doing with A.I.," said Representative Anna G. Eshoo, Democrat of California, a sponsor of the bill.

Funding for the project, the terms for paying the cloud providers and what data might be available would be up to the task force and Congress.

"This is a logical first step," said Senator Rob Portman, Republican of Ohio, another sponsor of the proposed law. "The task force is going to have to grapple with how you pay for it and how you govern it. But you shouldn't have to work at Google to have access to this technology."

The national research cloud would address a problem that is a byproduct of impressive progress in recent years. The striking gains made in tasks like language understanding, computer vision, game playing and common-sense reasoning have been attained thanks to a branch of A.I. called deep learning.

That technology increasingly requires immense computing firepower. A report last year from the Allen Institute for Artificial Intelligence, working with data from OpenAI, another artificial intelligence lab, observed that the volume of calculations needed to be a leader in advanced A.I. had soared an estimated 300,000 times in the previous six years. The cost of training deep learning models, cycling endlessly through troves of data, can be millions of dollars.

The cost and need for vast computing resources are putting some cutting-edge A.I. research beyond the reach of academics. Only the tech giants like Google, Amazon and Microsoft can spend billions a year on data centers that are often the size of a football field, housing rack upon rack with hundreds of thousands of computers.

So there has been a brain drain of computer scientists from universities to the big tech companies, lured by access to their cloud data centers as well as lucrative pay packages. The worry is that academic research, the seed corn of future breakthroughs, is being shortchanged.

Academic work can be crucial, particularly in areas where profits are not on the immediate horizon. That was the story with deep learning, which dates to the 1980s. A small band of academics nurtured the field for years. Only since 2012, with enough computing power and data, did deep learning really take off.

There have been smaller efforts for university research to tap into the big tech clouds. But the current concept of an ambitious public-private partnership for a National Research Cloud came in March from John Etchemendy and Fei-Fei Li, co-directors of the Stanford Institute for Human-Centered Artificial Intelligence.

They posted their idea online and sought support from other universities. The academics then promoted the idea to their political representatives and industry contacts.

The federal government has long backed major research projects like particle accelerators for high-energy physics in the 1960s and supercomputing centers in the 1980s.

But in the past, the government built the labs and facilities. The research cloud would use the cloud factories of the tech companies. Academic scientists would be government-subsidized customers of the tech giants, perhaps at rates below those charged to their business customers.

Many university researchers say that buying rather than building is the only sensible path, given the daunting cost of hyper-scale data centers.

"We need to get scientific research on the public cloud," said Ed Lazowska, a professor at the University of Washington. "We have to hitch ourselves to that wagon. It's the only way to keep up."

BT Young Scientist & Technology Exhibition to go virtual in 2021 – The Irish Times

The BT Young Scientist & Technology Exhibition (BTYSTE) is going virtual for 2021 because of continued health and safety concerns around Covid-19.

The move means that for the first time in the exhibition's 56 years it will not be staged physically. Instead it will be transformed into a "spectacular virtual showcase" for January 2021, according to BT Ireland. The measure is an indication of how the Covid-19 pandemic is likely to affect the staging of public events well into next year.

Europe's longest-running science event is scheduled to go ahead from January 6th to 9th, with students exhibiting virtually and judging taking place across digital platforms.

A total of 1,800 projects were entered into the 2020 contest, while 550 finalists from 244 schools showcased their entries in January. Some 50,000 people viewed the exhibition at the RDS, which has hosted it since 1966, its second year.

Members of the public in Ireland, and globally, will be able to visit the exhibition online and enjoy a full calendar of events including special acts, the Primary Science Fair, business events and the exhibits, BT Ireland confirmed.

The exhibition is retaining its 200 prizes, with €7,500 going to the overall winner along with an opportunity to represent Ireland at the EU Contest for Young Scientists in Salamanca, Spain, in September 2021.

BT Ireland managing director Shay Walsh said: "With the unprecedented global events of the past few months, we have seen first hand the important role that science and technology is playing in finding solutions to this global pandemic."

He said they had looked at the exhibition with a new lens and wanted to ensure it remained firmly on the educational calendar.

Mr Walsh said he wanted to encourage students, teachers and schools to get involved in the virtual event and be part of something truly special in January.

Exhibition head Mari Cahalane said: "People who have experienced the exhibition over the past 56 years understand it is about much more than a science competition. It's about imagining an idea and then bringing that idea to life through research and development. We're going to emphasise that in its truest form by bringing the BTYSTE virtual for 2021."

While the exhibition inspires thousands of young people each year to explore near-endless possibilities in science, technology, engineering and maths, the exhibition team sought this year to do something new in light of current circumstances, she added.

"We will be holding information sessions online for students and teachers over the coming months, and our website, http://www.btyoungscientist.com, is the best source of up-to-date information for students looking to get started on their entries."

The online entry process remains the same as in previous years, but project entry fees have been waived. To enter, an individual or group must submit a one-page proposal outlining their project idea ahead of the closing date of September 22nd. There are four categories: technology; social and behavioural science; biological and ecological science; and chemical, physical and mathematical science.

The Primary Science Fair is open to primary-level students from 3rd to 6th class and will run alongside the main exhibition online.

Color Star Technology Announces that Masa, Asia’s "Guitar Guru," has Joined its Education Platform – PRNewswire

BEIJING, July 2, 2020 /PRNewswire/ --Color Star Technology Co., Ltd. (Nasdaq CM: HHT) (the "Company", "we" or "HHT") is pleased to announce that Color China Entertainment Limited ("Color China"), a wholly-owned subsidiary of Color Star Technology, has just signed a cooperation agreement with a renowned guitarist, Zhengyan You (a.k.a. Masa). Masa will take on the role of a Star Teacher of the "Color World" online education platform, created by Color Star.

Masa, who is widely known as a "Guitar Guru" in Asian pop music, has participated in the production of thousands of albums since he entered the entertainment industry in 1974. The albums he has played on include those of top Chinese pop stars such as Wenzheng Liu, Dayou Luo, Qin Cai, Rui Su, Qin Qi, Xiang Fei and Jie Wang. Multiple record labels and fellow artists have also been known to reach out to Masa for cooperation. His music has influenced generations of musicians, solidifying Masa's standing as a representative of guitar performance in Asian pop music.

No matter the genre, classical, pop or rock, Masa's performances were widely praised. Many producers, guitarists and entertainers of Chinese pop music are honored to be able to learn from Masa.

On June 26, 2020, we invited Masa to join Color World as a teacher, as we believe that there will be a great demand for his teachings in the Asian market. The addition of Masa allows our students, regardless of whether they are ordinary music lovers or professional practitioners, to be able to learn and improve with the best, as is the original intention of Color World. We hope that more entertainment enthusiasts will be able to learn what they are truly interested in, and that they can also improve their skills professionally. The signing of this contract with Masa also marks the beginning of Color World's development of its reach into the Asian market. Sean Liu, the CEO of Color Star, said, "In the future, we will cooperate with more top artists and producers in Asia, including music, film, television, animation, dance and other industries. We believe that Color World has the potential to bring richer content to our students." In the future, Color China will continue to sign contracts with top artists from China, South Korea, Japan, Thailand and other countries. We believe that their additions will greatly improve Color World's competitiveness in facing the Asian market of over 2 billion consumers.

About Color Star Technology Co., Ltd.

Color Star Technology is a holding company whose primary business is offering both online and offline innovative education services. Its business operations are conducted through its wholly-owned subsidiaries Color China Entertainment Ltd. and CACM Group NY, Inc. The Company also anticipates providing an after-school tutoring program in New York via its joint venture entity Baytao LLC, and providing online music education via a platform branded "Color World."

Forward-Looking Statements

Certain statements made herein are "forward-looking statements" within the meaning of the "safe harbor" provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements may be identified by the use of words such as "anticipate", "believe", "expect", "estimate", "plan", "outlook", and "project" and other similar expressions that predict or indicate future events or trends or that are not statements of historical matters. Such forward-looking statements include the business plans, objectives, expectations and intentions of the parties following the completion of the acquisition, and HHT's estimated and future results of operations, business strategies, competitive position, industry environment and potential growth opportunities. These forward-looking statements reflect the current analysis of existing information and are subject to various risks and uncertainties. As a result, caution must be exercised in relying on forward-looking statements. Due to known and unknown risks, our actual results may differ materially from our expectations or projections. All forward-looking statements attributable to the Company or persons acting on its behalf are expressly qualified in their entirety by these factors. Other than as required under the securities laws, the Company does not assume a duty to update these forward-looking statements. The following factors, among others, could cause actual results to differ materially from those described in these forward-looking statements: there is uncertainty about the spread of the COVID-19 virus and the impact it will have on HHT's operations, the demand for HHT's products and services, global supply chains and economic activity in general. These and other risks and uncertainties are detailed in the other public filings with the Securities and Exchange Commission (the "SEC") by HHT. Additional information concerning these and other factors that may impact our expectations and projections will be found in our periodic filings with the SEC, including our Annual Report on Form 20-F for the fiscal year ended June 30, 2019. HHT's SEC filings are available publicly on the SEC's website at http://www.sec.gov. HHT disclaims any obligation to update the forward-looking statements, whether as a result of new information, future events or otherwise.

For investor and media inquiries, please contact:

Color Star Technology Co., Ltd.
Contact: Investor Relations
FinancialBuzz IR [emailprotected]
Tele: +1-877-601-1879

SOURCE Color Star Technology Co., Ltd.

India welcomes FDI in internet technology, but foreign entities will have to abide by law of land – Deccan Herald

Three days after banning the use of 59 Chinese apps in India, New Delhi on Thursday said that it would continue to welcome foreign investments in the area of internet technology, but foreign entities would have to abide by the laws of the land.

"While we will continue to welcome foreign investments in India, including in the area of internet technologies, this will have to be in accordance with the rules and regulatory framework established by the Government," Anurag Srivastava, the spokesperson of the Ministry of External Affairs (MEA), said.

Beijing on Tuesday firmly opposed India's move to ban the use of 59 apps linked to China, stating that New Delhi abused the national security exception to the rules of the World Trade Organization (WTO). New Delhi dismissed the allegation by Beijing. "India has one of the most open regimes in the world for attracting Foreign Direct Investment (FDI). In the last few years, the Government has taken a host of measures for creating a more investor-friendly regime. Similarly in the area of digital technology and the internet, India has adopted a very open regime," said the MEA spokesperson.

He noted that India was today one of the world's largest markets for digital and internet technologies, with more than 680 million subscribers. "The world's largest software and internet applications companies are present in India. Naturally while operating in India they have to abide by our rules and regulations issued by the relevant ministries and departments, including those pertaining to data security and privacy of individual data," he added.

The Embassy of the People's Republic of China (PRC) in New Delhi expressed serious concerns over the ban imposed by the Government of India. It stated that the ban selectively and discriminatorily targeted the apps developed by the companies based in the communist country on ambiguous and far-fetched grounds.

New Delhi's move to ban the apps came amid the continuing military stand-off along the disputed India-China boundary in eastern Ladakh. Prime Minister Narendra Modi's government stated that the apps had been used in activities "prejudicial to sovereignty and integrity of India, defence of India, security of the state and public order".

Experience never seen before immersive music and art festival with transhuman collective ‘UNRATED’ – RadioandMusic.com

MUMBAI: Are you set to travel a million miles away from the Earth?

Transhuman Collective, an award-winning immersive experience design consultancy and production company, introduces its open-to-all Real-Time 3D Virtual Event, UNRATED, on 18th and 19th July 2020, from 6:30 pm onwards.

Transhuman Collective has been successfully creating and delivering some of the most spectacular events for brands and Govt bodies over the years. The aim behind UNRATED is to raise funds for the COVID-19 warriors. In association with GiveIndia, Transhuman Collective will be engaging audiences in a never-seen-before experience through UNRATED. The mission is to raise INR 1 million in order to provide PPE kits to COVID-19 warriors.

In view of the ongoing circumstances, the minds behind Transhuman Collective thought of creating and providing a unique concept that will engage and motivate audiences across the globe. The event is sure to be one of the best experiences, as UNRATED is a festival by the artists and for the artists. Some of the best alternative artists will be performing live in their 3D avatars to enthral audiences: Ash Roy, Calm Chor, Vinayak^a, Ox7gen, Zokhuma, Nate08, Helium Project and Nelson, to name a few. The real-time 3D event will also feature live visual artists like Cursorama, Vj Decoy, Samvida+Viktor and Naveen Deshpande, and public art artists like Daku, Arthat and Yantr.

The industry has seen various virtual events hosted in India, but a never-seen-before concept was lacking. Transhuman Collective, along with their team of enthusiasts, conceptualised this event with multiple immersive technologies, like real-time 3D, sound-reactive virtual lights, augmented reality, holography and projection mapping, to offer a one-of-a-kind experience to their audience. The organizers have therefore built their own real-time LIVE 3D virtual events platform, called TransSpace.

TransSpace is a platform for brands to create cinematic stories, immersive brand launches, video conferences, webinars and music festivals, among others. The aim is to create a spellbinding portrayal of live events, giving artists a new virtual medium for connecting with the audience, and to enthral viewers in celebrating the indomitable spirit of COVID-19 warriors and the passion of the donors who make this journey endearing.

"Looking at the nationwide lockdown scenario, I truly believe that each and every one needs to do their bit. Hence, we at Transhuman Collective, along with our group of friends from the alternative music and arts community, have come together actively to host UNRATED. We are glad to offer a space with a unique and never seen before concept. The event is open to all across the globe in the hope of raising INR 1 million to provide PPE kits for our medical health workers. Our ambition is to get this message out loud and far," says Soham Sarcar, Co-Founder, Transhuman Collective.

Sidhraj Shah, Co-Founder, TransSpace, says, "Given the prevalent times, the necessity for collaborations and standing united is the need of the hour. Through TransSpace we have created a platform for the brands to create cinematic stories, immersive brand launches, video conferences, webinars, music festivals, among others. We aim to augment the audience's experience of watching a LIVE event virtually with a strong narrative to support it. The goal here is to create a gripping narrative which the audience connects to and create a canvas for the artists to explore the new virtual medium."

The event will be live on YouTube, Vimeo and Twitch.

Social Media Links

TRANSHUMAN COLLECTIVE
Facebook: https://www.facebook.com/transhumancollective/
Instagram: https://www.instagram.com/transhumancollective/

UNRATED
Instagram: https://www.instagram.com/unrated.live/

TRANSSPACE
Instagram: https://www.instagram.com/transspace.india/

Donation Link: https://unrated.giveindia.org./

A music and arts festival for Mumbaikars – Times of India

Unrated is an alternative arts and music fundraiser concert hosted on a real-time live 3D virtual events platform, with a mission of raising INR 1 million for COVID-19 warriors. Transhuman Collective, an award-winning immersive experience design consultancy and production company, introduces its open-to-all real-time live virtual event called UNRATED on 18th and 19th July 2020, from 6:30 pm onwards.

Transhuman Collective has been successfully creating and delivering some of the most spectacular events for brands and Govt bodies over the years. The aim behind UNRATED is to raise funds for the COVID-19 warriors. In association with GiveIndia, Transhuman Collective will be engaging the audience in a never-seen-before experience through UNRATED. The mission is to raise INR 1 million in order to provide PPE kits to COVID-19 warriors.

In view of the ongoing circumstances, the minds behind Transhuman Collective thought of creating and providing a unique concept that will engage and motivate audiences across the globe. The event is sure to be one of the best campaigns, as UNRATED is a festival by the artists and for the artists.

The concert will not only feature the most eclectic music artists but also visual and graffiti artists, like Ash Roy, Calm Chor, Artist Vinayak, Ox7gen, Zokhuma, Nate08, Helium Project and Nelson, to name a few. It will also feature live visual artists and street artists like Cursorama, Vj Decoy, Samvida+Viktor, Daku, Arthat and Yantr.

The industry has seen various virtual events hosted in India, but a never-seen-before concept was lacking. Transhuman Collective, along with their team of enthusiasts, conceptualised multiple immersive technologies, like real-time 3D, sound-reactive lights, augmented reality, holography and projection mapping, to offer a one-of-a-kind experience to their audience. The organizers therefore built their own real-time LIVE 3D virtual events platform, called TransSpace.

This startup is ensuring babies get a good night's sleep with its smart mattress – YourStory

For any new parent, sleep is very important, not just for themselves but also for their newborn. Studies suggest that a good night's sleep is extremely important for the cognitive growth of infants. And to get a good night's sleep, the mattress plays an important role.

To address this problem, Sameer Agarwal, Swapnil Rao, Aneesha Pillai, and Deepak Gupta founded NapNap in 2017. The Mumbai-based startup is an end-to-end consumer products company, focussed on the global mothercare and baby care segment.

The NapNap Team

According to the founders, its flagship product, NapNap Mat, is a portable baby mattress that mimics a mother's womb, using a precise combination of vibrations and white noise to soothe infants and lull them to sleep within minutes.

Based on clinical trials conducted by Harvard Medical Center and Beth Israel Deaconess Medical Center (Boston, USA), the NapNap Mat regularises infant breathing and reduces apnea among preterm babies by 50 percent.

Sameer comes with over 12 years of experience in sales and business development, operations and finance. Before founding NapNap Mat, he co-founded Art Should Tempt and ClassHopr. Swapnil has over 12 years of experience in product development, brand building, and marketing. Before founding NapNap Mat, Swapnil co-founded Mobizon Media and Transhuman Collective.

Deepak has over 12 years of experience in quality and systems engineering. He earlier founded Future Foundry, a product design firm. Aneesha brings over eight years of experience in engineering, leadership, and general management. She has an MBA from JBIMS and has previously worked with JP Morgan.

Sameer and Swapnil got the initial idea to start up in the space after one of their friends had a preterm baby. The baby was in distress due to lack of sleep.

Once they had the idea, they reached out to Deepak Gupta and Aneesha Pillai, who were their engineering classmates. The team had been dabbling with product design and they got together and decided to engineer the product based on the scientific data.

"The four of us built the first few prototypes and did the initial round of testing in the market before hiring our first employee. Once we had the market acceptance and the product was flying off the shelf, we didn't need to try hard to attract great talent," says Sameer. They are now a team of 16.

While there are a number of mattress brands in the country, the concept of a vibration-therapy-based baby bed is new in the Indian ecosystem.

So, creating a new category and educating the masses about the benefits of using a NapNap Mat was one of the biggest hurdles faced by the team in the initial months.

"Apart from this, consumer product companies require high capital infusion, and it's even higher when someone creates something totally new and deploys resources towards R&D, as there's no reference point. So, managing capital initially to create an MVP in the market was quite a challenge and required a lot of planning," says Sameer.

When the baby is still in the womb, it gets accustomed to the sounds and vibrations of the mother's body processes, and when the baby is born, it is suddenly in unfamiliar territory.

Any little change in temperature, movement, or unfamiliar sound makes the infant feel extremely uncomfortable; the baby reacts by crying due to distress and hence finds it difficult to fall asleep.

The product works best for babies up to one year old. The startup claims the mat improves breathing, boosts sleep, reduces crying and colic, and is travel-friendly and safe. It can be used in strollers, car seats, cribs, activity mats, bassinets, etc.

The NapNap mat

India's mattress market is estimated to be worth Rs 10,000 crore, according to media reports. Startups like Cuddl, SleepyCat, Wink&Nod, and Sunday Mattress are all taking sleep seriously and are using advanced tech and raw materials to build their products. The Sleep Company and Sequoia-backed Wakefit also operate in the space.

Horizontal marketplaces like Amazon, Flipkart, Snapdeal, and ShopClues also offer branded and unbranded mattresses.

"With an ecosystem of smart, inter-connected products, NapNap is redefining how technology and data will enable parenting in the future. NapNap is leading this change by solving one parenting problem at a time. It works with single mothers and gives them a platform to generate wealth in all forms (not just money)," says Sameer.

The NapNap Mat is ISO 9001:2015 certified, is also certified by a CPSC-approved lab, and is considered safe for babies aged 0-2 years. It also meets British safety standards for babies.

The startup claims to have scaled 10X in the first year of its launch. Its average order value is Rs 2,000, and its ARR is Rs 2 crore, with a gross margin of 66 percent.

NapNap, which has six products in the pipeline, is available on all online marketplaces in India, including Amazon and FirstCry. It has also expanded its presence to markets outside India, such as Dubai and Australia, and soon plans to start operations in the UK.

"Since we go direct-to-customer, we can manage to keep the MRP fairly lower and still deliver a very high quality product by offsetting channel margins," says Sameer.

NapNap has raised a pre-seed round led by ThinQbate Ventures LLP and Hatcher Plus. It is currently looking to raise a seed round.

The team aims to become the biggest baby mattress company in the world in the next few years. It aims to pacify babies across the globe using technology, design, and innovation. It also aims to scale and consolidate the Indian market for the NapNap Mat and NapNap Nursing Cover.

"We aim to push traction in the UAE, the UK, and Australia markets, launch version 2.0 of the NapNap Mat, launch Shusher + (a white noise device), and also launch peripheral products (swaddle, feeding bottle, and pacifier)," says Sameer.

Who exactly was Jeffrey Epstein? A history of the mogul and his crimes – Film Daily

If you didn't pay much attention to the news until recently, then chances are you're a bit confused about Jeffrey Epstein and just what he did. Who is Jeffrey Epstein? Why do people not believe that he committed suicide? Why was Jeffrey Epstein in jail? While we've covered many, many, many aspects of Epstein and his crimes, it's time for some back-to-basics facts.

Who is Jeffrey Epstein exactly? The story is complicated, sordid, and terrible on so many levels. Jeffrey Epstein was a predator in life. While many people focus on his death, the actions that preceded it should be given your due attention. Here's everything you need to know, just the basic facts, about who Jeffrey Epstein was in life.

Jeffrey Epstein was an investment banker whose clientele, specifically, had assets worth more than $1 billion. Epstein operated his business in the US Virgin Islands for tax reasons. Epstein himself was also quite wealthy, but the source of that wealth remains pretty unknown.

Still, appearances matter: Epstein had cultivated an image with his townhouse and his large charitable donations, and he worked with people such as Bill Clinton, Donald Trump, and Queen Elizabeth's son Prince Andrew. Epstein, however, also had a criminal past, and this is where things get dark.

Jeffrey Epstein was a registered sex offender. In 2008, Epstein pled guilty to a felony charge of solicitation of prostitution involving a minor and was sentenced to 18 months in prison. He served 13 and registered as a sex offender.

Epstein was arrested in July 2019 on charges of sex trafficking in New Jersey after returning to the US from France. According to the indictment, Epstein sexually exploited and abused dozens of minor girls at his homes in Manhattan, New York, and Palm Beach, Florida, among other locations.

The indictment continued that Epstein paid certain of his victims to recruit additional girls to be similarly abused. The indictment also alleged that some of Epstein's victims were as young as 14. After Epstein's arrest, a search of his New York residence found nude photographs of underage girls.

He was denied bail.

If there is something you've heard about Jeffrey Epstein, then chances are it had to do with his suicide. On July 24, 2019, Epstein was found injured in his cell. In Aug. 2019, Epstein died of what was ruled a suicide. The Federal Bureau of Prisons released the following statement:

"On Saturday, August 10, 2019, at approximately 6:30 a.m., inmate Jeffrey Edward Epstein was found unresponsive in his cell in the Special Housing Unit from an apparent suicide at the Metropolitan Correctional Center (MCC) in New York, New York. Life-saving measures were initiated immediately by responding staff.

"Staff requested emergency medical services (EMS) and life-saving efforts continued. Mr. Epstein was transported by EMS to a local hospital for treatment of life-threatening injuries, and subsequently pronounced dead by hospital staff."

Conspiracy theories popped up pretty much immediately. Some believed that Epstein's high-profile social circle wanted him silenced forever. Others believed that it was Epstein's own views on eugenics and transhumanism that led to his death. Or, as a good part of the population believed, Epstein killed himself rather than be convicted and sent to prison.

Deep learning’s role in the evolution of machine learning – TechTarget

Machine learning had a rich history long before deep learning reached fever pitch. Researchers and vendors were using machine learning algorithms to develop a variety of models for improving statistics, recognizing speech, predicting risk and other applications.

While many of the machine learning algorithms developed over the decades are still in use today, deep learning -- a form of machine learning based on multilayered neural networks -- catalyzed a renewed interest in AI and inspired the development of better tools, processes and infrastructure for all types of machine learning.

Here, we trace the significance of deep learning in the evolution of machine learning, as interpreted by people active in the field today.

The story of machine learning starts in 1943 when neurophysiologist Warren McCulloch and mathematician Walter Pitts introduced a mathematical model of a neural network. The field gathered steam in 1956 at a summer conference on the campus of Dartmouth College. There, 10 researchers came together for six weeks to lay the ground for a new field that involved neural networks, automata theory and symbolic reasoning.

The distinguished group, many of whom would go on to make seminal contributions to this new field, gave it the name artificial intelligence to distinguish it from cybernetics, a competing area of research focused on control systems. In some ways these two fields are now starting to converge with the growth of IoT, but that is a topic for another day.

Early neural networks were not particularly useful -- nor deep. Perceptrons, the single-layered neural networks in use then, could only learn linearly separable patterns. Interest in them waned after Marvin Minsky and Seymour Papert published the book Perceptrons in 1969, highlighting the limitations of existing neural network algorithms and causing the emphasis in AI research to shift.
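
To make that limitation concrete, here is a minimal sketch (an illustration of the idea, not code from the article; NumPy is assumed) of the classic perceptron learning rule. On AND, which is linearly separable, it converges to a perfect classifier; on XOR, no single line can split the classes, so some inputs stay misclassified no matter how long it trains:

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    # Classic perceptron rule: nudge the weights on every misclassified point.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # linearly separable: the rule converges
y_xor = np.array([0, 1, 1, 0])  # not linearly separable: it never can

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b = train_perceptron(X, y)
    preds = [1 if xi @ w + b > 0 else 0 for xi in X]
    print(name, "learned:", preds, "target:", list(y))
```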

"There was a massive focus on symbolic systems through the '70s, perhaps because of the idea that perceptrons were limited in what they could learn," said Sanmay Das, associate professor of computer science and engineering at Washington University in St. Louis and chair of the Association for Computing Machinery's special interest group on AI.

The 1973 publication of Pattern Classification and Scene Analysis by Richard Duda and Peter Hart introduced other types of machine learning algorithms, reinforcing the shift away from neural nets. A decade later, Machine Learning: An Artificial Intelligence Approach by Ryszard S. Michalski, Jaime G. Carbonell and Tom M. Mitchell further defined machine learning as a domain driven largely by the symbolic approach.

"That catalyzed a whole field of more symbolic approaches to [machine learning] that helped frame the field. This led to many Ph.D. theses, new journals in machine learning, a new academic conference, and even helped to create new laboratories like the NASA Ames AI Research branch, where I was deputy chief in the 1990s," said Monte Zweben, CEO of Splice Machine, a scale-out SQL platform.

In the 1990s, the evolution of machine learning made a turn. Driven by the rise of the internet and increase in the availability of usable data, the field began to shift from a knowledge-driven approach to a data-driven approach, paving the way for the machine learning models that we see today.

The turn toward data-driven machine learning in the 1990s was built on research done by Geoffrey Hinton at the University of Toronto in the mid-1980s. Hinton and his team demonstrated the ability to use backpropagation to build deeper neural networks.

"This was a major breakthrough enabling new kinds of pattern recognition that were previously not feasible with neural nets," Zweben said. This added new layers to the networks and a way to strengthen or weaken connections back across many layers in the network, leading to the term deep learning.

Although possible in a lab setting, deep learning did not immediately find its way into practical applications, and progress stalled.

"Through the '90s and '00s, a joke used to be that 'neural networks are the second-best learning algorithm for any problem,'" Washington University's Das said.

Meanwhile, commercial interest in AI was starting to wane because the hype around developing an AI on par with human intelligence had gotten ahead of results, leading to an AI winter, which lasted through the 1980s. What did gain momentum was a type of machine learning using kernel methods and decision trees that enabled practical commercial applications.

Still, the field of deep learning was not completely in retreat. In addition to the ascendancy of the internet and increase in available data, another factor proved to be an accelerant for neural nets, according to Zweben: namely, distributed computing.

Machine learning requires a lot of compute. In the early days, researchers had to keep their problems small or gain access to expensive supercomputers, Zweben said. The democratization of distributed computing in the early 2000s enabled researchers to run calculations across clusters of relatively low-cost commodity computers.

"Now, it is relatively cheap and easy to experiment with hundreds of models to find the best combination of data features, parameters and algorithms," Zweben said. The industry is pushing this democratization even further with practices and associated tools for machine learning operations that bring DevOps principles to machine learning deployment, he added.

Machine learning is also only as good as the data it is trained on, and if data sets are small, it is harder for the models to infer patterns. As the data created by mobile, social media, IoT and digital customer interactions grew, it provided the training material deep learning techniques needed to mature.

By 2012, deep learning attained star status after Hinton's team won ImageNet, a popular data science challenge, for their work on classifying images using neural networks. Things really accelerated after Google subsequently demonstrated an approach to scaling up deep learning across clusters of distributed computers.

"The last decade has been the decade of neural networks, largely because of the confluence of the data and computational power necessary for good training and the adaptation of algorithms and architectures necessary to make things work," Das said.

Even when deep neural networks are not used directly, they indirectly drove -- and continue to drive -- fundamental changes in the field of machine learning, including the following:

Deep learning's predictive power has inspired data scientists to think about different ways of framing problems that come up in other types of machine learning.

"There are many problems that we didn't think of as prediction problems that people have reformulated as prediction problems -- language, vision, etc. -- and many of the gains in those tasks have been possible because of this reformulation," said Nicholas Mattei, assistant professor of computer science at Tulane University and vice chair of the Association for Computing Machinery's special interest group on AI.

In language processing, for example, a lot of the focus has moved toward predicting what comes next in the text. In computer vision as well, many problems have been reformulated so that, instead of trying to understand geometry, the algorithms are predicting labels of different parts of an image.
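As a tiny, hypothetical sketch of that reformulation, language modelling turns raw text into supervised (context, next word) pairs that an ordinary classifier-style learner can then be trained on:

```python
# Reframing raw text as a prediction problem: each example is
# (previous n words -> next word), ready for any supervised learner.
text = "deep learning reframed language processing as next word prediction".split()

n = 3  # context window size (an assumption for illustration)
pairs = [(tuple(text[i:i + n]), text[i + n]) for i in range(len(text) - n)]

for context, target in pairs[:3]:
    print(context, "->", target)
# ('deep', 'learning', 'reframed') -> language
# ('learning', 'reframed', 'language') -> processing
# ('reframed', 'language', 'processing') -> as
```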

The power of big data and deep learning is changing how models are built. Human analysis and insights are being replaced by raw compute power.

"Now, it seems that a lot of the time we have substituted big databases, lots of GPUs, and lots and lots of machine time to replace the deep problem introspection needed to craft features for more classic machine learning methods, such as SVM [support vector machine] and Bayes," Mattei said, referring to the Bayesian networks used for modeling the probabilities between observations and outcomes.

The art of crafting a machine learning problem has been taken over by advanced algorithms and the millions of hours of CPU time baked into pretrained models so data scientists can focus on other projects or spend more time on customizing models.

Deep learning is also helping data scientists solve problems with smaller data sets and to solve problems in cases where the data has not been labeled.

"One of the most relevant developments in recent times has been the improved use of data, whether in the form of self-supervised learning, improved data augmentation, generalization of pretraining tasks or contrastive learning," said Juan Jos Lpez Murphy, AI and big data tech director lead at Globant, an IT consultancy.

These techniques reduce the need for manually tagged and processed data. This is enabling researchers to build large models that can capture complex relationships representing the nature of the data and not just the relationships representing the task at hand. López Murphy is starting to see transfer learning being adopted as a baseline approach, where researchers can start with a pretrained model that only requires a small amount of customization to provide good performance on many common tasks.
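In code, that baseline often looks something like the following sketch, which uses PyTorch/torchvision as one common choice (the article does not name a framework): load a pretrained network, freeze its backbone and retrain only a small task-specific head.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of classes in your own task

# Older torchvision versions spell this models.resnet18(pretrained=True).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for the new task; from here,
# train model.fc on a small labeled data set as usual.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
```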

There are specific fields where deep learning provides a lot of value: image, speech and natural language processing, for example, as well as time series forecasting.

"The broader field of machine learning is enhanced by deep learning and its ability to bring context to intelligence. Deep learning also improves [machine learning's] ability to learn nonlinear relationships and manage dimensionality with systems like autoencoders," said Luke Taylor, founder and COO at TrafficGuard, an ad fraud protection service.

For example, deep learning can find more efficient ways to automatically encode the raw text of characters and words into vectors representing the similarity and differences of words, which can improve the efficiency of the machine learning algorithms used to process it. Deep learning algorithms that can recognize people in pictures make it easier to use other algorithms that find associations between people.
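The underlying idea is that words become vectors whose geometry captures similarity. The sketch below illustrates this with tiny made-up 3-dimensional vectors; learned embeddings have hundreds of dimensions.

```python
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low
```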

More recently, there have been significant jumps using deep learning to improve the use of image, text and speech processing through common interfaces. People are accustomed to speaking to virtual assistants on their smartphones and using facial recognition to unlock devices and identify friends in social media.

"This broader adoption creates more data, enables more machine learning refinement and increases the utility of machine learning even further, pushing even further adoption of this tech into people's lives," Taylor said.

Early machine learning research required expensive software licenses. But deep learning pioneers began open sourcing some of the most powerful tools, which has set a precedent for all types of machine learning.

"Earlier, machine learning algorithms were bundled and sold under a licensed tool. But, nowadays, open source libraries are available for any type of AI applications, which makes the learning curve easy," said Sachin Vyas, vice president of data, AI and automation products at LTI, an IT consultancy.

Another factor in democratizing access to machine learning tools has been the rise of Python.

"The wave of open source frameworks for deep learning cemented the prevalence of Python and its data ecosystem for research, development and even production," Globant's Lpez Murphy said.

Many of the different commercial and free options got replaced, integrated or connected to a Python layer for widespread use. As a result, Python has become the de facto lingua franca for machine learning development.
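The practical effect is easy to see: with today's open source Python stack, a working model takes only a few lines. A minimal sketch, using scikit-learn and one of its bundled data sets:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit and evaluate a classifier with no licensed tooling involved.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```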

Deep learning has also inspired the open source community to automate and simplify other aspects of the machine learning development lifecycle. "Thanks to things like graphical user interfaces and [automated machine learning], creating working machine learning models is no longer limited to Ph.D. data scientists," Carmen Fontana, IEEE member and cloud and emerging tech practice lead at Centric Consulting, said.

For machine learning to keep evolving, enterprises will need to find a balance between developing better applications and respecting privacy.

Data scientists will need to be more proactive in understanding where their data comes from and the biases that may inadvertently be baked into it, as well as develop algorithms that are transparent and interpretable. They also need to keep pace with new machine learning protocols and the different ways these can be woven together with various data sources to improve applications and decisions.

"Machine learning provides more innovative applications for end users, but unless we're choosing the right data sets and advancing deep learning protocols, machine learning will never make the transition from computing a few results to providing actual intelligence," said Justin Richie, director of data science at Nerdery, an IT consultancy.

"It will be interesting to see how this plays out in different industries and if this progress will continue even as data privacy becomes more stringent," Richie said.

Originally posted here:
Deep learning's role in the evolution of machine learning - TechTarget

My Invisalign app uses machine learning and facial recognition to sell the benefits of dental work – TechRepublic

Align Technology uses DevSecOps tactics to keep complex projects on track and align business and IT goals.


Align Technology's Chief Digital Officer Sreelakshmi Kolli is using machine learning and DevOps tactics to power the company's digital transformation.

Kolli led the cross-functional team that developed the latest version of the company's My Invisalign app. The app combines several technologies into one product, including virtual reality, facial recognition, and machine learning. Kolli said that using a DevOps approach helped to keep this complex work on track.

"The feasibility and proof of concept phase gives us an understanding of how the technology drives revenue and/or customer experience," she said. "Modular architecture and microservices allows incremental feature delivery that reduces risk and allows for continuous delivery of innovation."


The customer-facing app accomplishes several goals at once, the company said.

More than 7.5 million people have used the clear plastic molds to straighten their teeth, the company said. Align Technology has used data from these patients to train a machine learning algorithm that powers the visualization feature in the mobile app. The SmileView feature uses machine learning to predict what a person's smile will look like when the braces come off.

Kolli started with Align Technology as a software engineer in 2003. Now she leads an integrated software engineering group focused on product technology strategy and development of global consumer, customer and enterprise applications and infrastructure. This includes end user and cloud computing, voice and data networks and storage. She also led the company's global business transformation initiative to deliver platforms to support customer experience and to simplify business processes.

Kolli used the development process of the My Invisalign app as an opportunity to move the dev team to DevSecOps practices. Kolli said that this shift represents a cultural change, and making the transition requires a common understanding among all teams on what the approach means to the engineering lifecycle.

"Teams can make small incremental changes to get on the DevSecOps journey (instead of a large transformation initiative)," she said. "Investing in automation is also a must for continuous integration, continuous testing, continuous code analysis and vulnerability scans." To build the machine learning expertise required to improve and support the My Invisalign app, she has hired team members with that skill set and built up expertise internally.

"We continue to integrate data science to all applications to deliver great visualization experiences and quality outcomes," she said.

Align Technology uses AWS to run its workloads.

In addition to keeping patients connected with orthodontists, the My Invisalign app is a marketing tool to convince families to opt for the transparent but expensive alternative to metal braces.

Kolli said that IT leaders should work closely with business leaders to make sure initiatives support business goals such as revenue growth, improved customer experience, or operational efficiencies, and modernize the IT operation as well.

"Making the line of connection between the technology tasks and agility to go to market helps build shared accountability to keep technical debt in control," she said.

Align Technology released the revamped app in late 2019. In May of this year, the company released a digital visualization tool for doctors that combines a photo of the patient's face with their 3D Invisalign treatment plan.

This ClinCheck "In-Face" Visualization is designed to help doctors manage patient treatment plans.

The visualization workflow combines three components of Align's digital treatment platform: Invisalign Photo Uploader for patient photos, the iTero intraoral scanner to capture data needed for the 3D model of the patient's teeth, and ClinCheck Pro 6.0. ClinCheck Pro 6.0 allows doctors to modify treatment plans through 3D controls.

These new product releases are the first in a series of innovations to reimagine the digital treatment planning process for doctors, Raj Pudipeddi, Align's chief innovation, product, and marketing officer and senior vice president, said in a press release about the product.


Read more from the original source:
My Invisalign app uses machine learning and facial recognition to sell the benefits of dental work - TechRepublic

2 books to strengthen your command of python machine learning – TechTalks


This post is part of AI education, a series of posts that review and explore educational content on data science and machine learning. (In partnership with Paperspace)

Mastering machine learning is not easy, even if you're a crack programmer. I've seen many people come from a solid background of writing software in different domains (gaming, web, multimedia, etc.) thinking that adding machine learning to their roster of skills is another walk in the park. It's not. And every single one of them has been dismayed.

I see two reasons why the challenges of machine learning are misunderstood. First, as the name suggests, machine learning is software that learns by itself, as opposed to being instructed on every single rule by a developer. This is an oversimplification that many media outlets with little or no knowledge of the actual challenges of writing machine learning algorithms often use when speaking of the ML trade.

The second reason, in my opinion, is the many books and courses that promise to teach you the ins and outs of machine learning in a few hundred pages (and the ads on YouTube that promise to net you a machine learning job if you pass an online course). Now, I don't want to vilify any of those books and courses. I've reviewed several of them (and will review some more in the coming weeks), and I think they're invaluable sources for becoming a good machine learning developer.

But they're not enough. Machine learning requires both good coding and math skills and a deep understanding of various types of algorithms. If you're doing Python machine learning, you have to have in-depth knowledge of many libraries and also master the many programming and memory-management techniques of the language. And, contrary to what some people say, you can't escape the math.

And all of that can't be summed up in a few hundred pages. Rather than a single volume, the complete guide to machine learning would probably look like Donald Knuth's famous The Art of Computer Programming series.

So, what is all this tirade for? In my exploration of data science and machine learning, I'm always on the lookout for books that take a deep dive into topics that are skimmed over by the more general, all-encompassing books.

In this post, I'll look at Python for Data Analysis and Practical Statistics for Data Scientists, two books that will help deepen your command of the coding and math skills required to master Python machine learning and data science.

Python for Data Analysis, 2nd Edition, is written by Wes McKinney, the creator of pandas, one of the key libraries used in Python machine learning. Doing machine learning in Python involves loading and preprocessing data in pandas before feeding it to your models.

Most books and courses on machine learning provide an introduction to the main pandas components such as DataFrames and Series and some of the key functions such as loading data from CSV files and cleaning rows with missing data. But the power of pandas is much broader and deeper than what you see in a chapter's worth of code samples in most books.

In Python for Data Analysis, McKinney takes you through the entire functionality of pandas and manages to do so without making it read like a reference manual. There are lots of interesting examples that build on top of each other and help you understand how the different functions of pandas tie in with each other. You'll go in-depth on things such as cleaning, joining, and visualizing data sets, topics that are usually only discussed briefly in most machine learning books.
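To give a flavor of that territory, here is a minimal pandas sketch of the load-clean-join-summarize cycle the book treats in depth (the tables below are invented; in practice they would come from pd.read_csv or a database):

```python
import pandas as pd

# Toy tables standing in for data loaded with pd.read_csv().
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount": [20.0, None, 35.0, 12.5],
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region": ["west", "east", "west"],
})

orders = orders.dropna(subset=["amount"])           # clean rows with missing data
merged = orders.merge(customers, on="customer_id")  # join the two tables
print(merged.groupby("region")["amount"].sum())     # summarize per group
```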

You'll also get to explore some very important challenges, such as memory management and code optimization, which can become a big deal when you're handling very large data sets in machine learning (which you often do).

What I also like about the book is the finesse that has gone into choosing subjects to fit in the 500 pages. While most of the book is about pandas, McKinney has taken great care to complement it with material about other important Python libraries and topics. You'll get a good overview of array-oriented programming with numpy, another important Python library often used in machine learning in concert with pandas, and some important techniques in using Jupyter Notebooks, the tool of choice for many data scientists.

All this said, don't expect Python for Data Analysis to be a very fun book. It can get boring because it just discusses working with data (which happens to be the most boring part of machine learning). There won't be any end-to-end examples where you'll get to see the result of training and using a machine learning algorithm or integrating your models in real applications.

My recommendation: You should probably pick up Python for Data Analysis after going through one of the introductory or advanced books on data science or machine learning. Having that introductory background on working with Python machine learning libraries will help you better grasp the techniques introduced in the book.

While Python for Data Analysis improves your data-processing and -manipulation coding skills, the second book we'll look at, Practical Statistics for Data Scientists, 2nd Edition, will be the perfect resource to deepen your understanding of the core mathematical logic behind many key algorithms and concepts that you often deal with when doing data science and machine learning.

The book starts with simple concepts such as different types of data, means and medians, standard deviations, and percentiles. Then it gradually takes you through more advanced concepts such as different types of distributions, sampling strategies, and significance testing. These are all concepts you have probably learned in math class or read about in data science and machine learning books.
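In Python terms, the book's opening material maps onto a few library calls, roughly as in this sketch (the numbers are simulated for illustration):

```python
import numpy as np
from scipy import stats

# Two simulated samples, e.g. scores from a control and a treatment group.
a = np.random.default_rng(0).normal(loc=100, scale=15, size=200)
b = np.random.default_rng(1).normal(loc=104, scale=15, size=200)

print(np.mean(a), np.median(a), np.std(a, ddof=1))  # mean, median, std dev
print(np.percentile(a, [25, 50, 75]))               # quartiles

# Significance testing: two-sample t-test for a difference in means.
t_stat, p_value = stats.ttest_ind(a, b)
print(t_stat, p_value)
```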

But again, the key here is specialization.

On the one hand, the depth that Practical Statistics for Data Scientists brings to each of these topics is greater than you'll find in machine learning books. On the other hand, every topic is introduced along with coding examples in Python and R, which makes it more suitable than classic textbooks on statistics. Moreover, the authors have done a great job of disambiguating the way different terms are used in data science and other fields. Each topic is accompanied by a box that provides all the different synonyms for popular terms.

As you go deeper into the book, you'll dive into the mathematics of machine learning algorithms such as linear and logistic regression, K-nearest neighbors, trees and forests, and K-means clustering. In each case, like the rest of the book, there's more focus on what's happening under the algorithm's hood than on using it for applications. But the authors have again made sure the chapters don't read like classic math textbooks, and the formulas and equations are accompanied by nice coding examples.
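In that under-the-hood spirit, here is a minimal sketch (mine, not the book's) of logistic regression fitted by plain gradient descent, with no library model classes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(float)  # synthetic labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    # Gradient of the average log-loss with respect to w and b.
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

print(w, b, np.mean((p > 0.5) == y))  # weights, bias, training accuracy
```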

Like Python for Data Analysis, Practical Statistics for Data Scientists can get a bit boring if you read it end to end. There are no exciting applications or a continuous process where you build your code through the chapters. But on the other hand, the book has been structured in a way that you can read any of the sections independently without the need to go through previous chapters.

My recommendation: Read Practical Statistics for Data Scientists after going through an introductory book on data science and machine learning. I definitely recommend reading the entire book once, though to make it more enjoyable, go topic by topic in between your exploration of other machine learning courses. Also keep it handy. You'll probably revisit some of the chapters from time to time.

I would definitely count Python for Data Analysis and Practical Statistics for Data Scientists as two must-reads for anyone who is on the path of learning data science and machine learning. Although they might not be as exciting as some of the more practical books, you'll appreciate the depth they add to your coding and math skills.

View post:
2 books to strengthen your command of python machine learning - TechTalks

What I Learned From Looking at 200 Machine Learning Tools – Machine Learning Times – machine learning & data science news – The Predictive…

Originally published in Chip Huyen Blog, June 22, 2020

To better understand the landscape of available tools for machine learning production, I decided to look up every AI/ML tool I could find. The resources I used include:

After filtering out applications companies (e.g. companies that use ML to provide business analytics), tools that aren't being actively developed, and tools that nobody uses, I got 202 tools. See the full list. Please let me know if there are tools you think I should include but aren't on the list yet!

Disclaimer

This post consists of 6 parts:

I. Overview
II. The landscape over time
III. The landscape is under-developed
IV. Problems facing MLOps
V. Open source and open-core
VI. Conclusion

I. OVERVIEW

One way to generalize the ML production flow, which I agree with, breaks it into four steps.

I categorize the tools based on which step of the workflow they support. I don't include Project setup, since it requires project management tools, not ML tools. This isn't always straightforward, since one tool might help with more than one step. Their ambiguous descriptions don't make it any easier: "we push the limits of data science," "transforming AI projects into real-world business outcomes," "allows data to move freely, like the air you breathe," and my personal favorite: "we lived and breathed data science."

I put the tools that cover more than one step of the pipeline into the category that they are best known for. If they're known for multiple categories, I put them in the All-in-one category. I also add an Infrastructure category for companies that provide infrastructure for training and storage. Most of these are Cloud providers.

To continue reading this article click here.

Go here to see the original:
What I Learned From Looking at 200 Machine Learning Tools - Machine Learning Times - machine learning & data science news - The Predictive...

Letters to the editor – The Economist

Jul 4th 2020

Artificial intelligence is an oxymoron (Technology quarterly, June 13th). Intelligence is an attribute of living things, and can best be defined as the use of information to further survival and reproduction. When a computer resists being switched off, or a robot worries about the future for its children, then, and only then, may intelligence flow.

I acknowledge Richard Sutton's bitter lesson, that attempts to build human understanding into computers rarely work, although there is nothing new here. I was aware of the folly of anthropomorphism as an AI researcher in the mid-1980s. We learned to fly when we stopped emulating birds and studied lift. Meaning and knowledge don't result from symbolic representation; they relate directly to the visceral motives of survival and reproduction.

Great strides have been made in widening the applicability of algorithms, but as Mr Sutton says, this progress has been fuelled by Moore's law. What we call AI is simply pattern discovery. Brilliant, transformative, and powerful, but just pattern discovery. Further progress is dependent on recognising this simple fact, and abandoning the fancy that intelligence can be disembodied from a living host.

ROB MACDONALD
Richmond, North Yorkshire

I agree that machine learning is overhyped. Indeed, your claim that such techniques are loosely based on the structure of neurons in the brain is true of neural networks, but these are just one type among a wide array of different machine-learning methods. In fact, machine learning in some cases is no more than a rebranding of existing processes. If by machine learning we simply mean building a model using large amounts of data, then good old ordinary least squares (line of best fit) is a form of machine learning.

TOM ARMSTRONG
Toronto

The scope of your research into green investing was too narrow to condemn all financial services for their woolly thinking (Hotting up, June 20th). You restricted your analysis to microeconomic factors and to the ability of investors to engage with companies. It overlooked the bigger picture: investors can also shape the macro environment by structured engagement with the system itself.

For example, the data you used largely originated from the investor-led Carbon Disclosure Project (for which we hosted the first ever meeting, nearly two decades ago). In addition, investors have also helped shape sustainable-finance plans in Britain, the EU and UN. Investors also sit on the industry-led Taskforce on Climate-related Financial Disclosure, convened by the Financial Stability Board, which has proved effective.

It is critical that governments apply a meaningful carbon price. But if we are to move money at the pace and scale required to deal with climate risk, governments need to reconsider the entire architecture of markets. This means focusing a wide-angled climate lens on prudential regulation, listing rules, accounting standards, investor disclosure standards, valuation conventions and stewardship codes, as well as building on new interpretations of legal fiduciary duty. This work is done most effectively in partnership with market participants. Green-thinking investors can help.

STEVE WAYGOOD
Chief responsible investment officer
Aviva Investors
London

Estimating indirectly observable GDP in real time is indeed a hard job for macro-econometricians, or wonks, as you call us (Crisis measures, May 30th). Most of the components are either highly lagged, as your article mentioned, or altogether unobservable. But the textbook definition of GDP and its components won't be changing any time soon, as the reader is led to believe. Instead what has always and will continue to change are the proxy indicators used to estimate the estimate of GDP.

MICHAEL BOERMAN
Washington, DC

Reading Lexington's account of his garden adventures (June 20th) brought back memories of my own experience with neighbours in Twinsburg, Ohio, in the late 1970s. They also objected to vegetables growing in our front yard (the only available space). We were doing it for the same reasons as Lexington: pleasure, fresh food to eat, and a learning experience for our young children. The neighbours, recently arrived into the suburban middle class, saw it as an affront. They no longer had to grow food for their table. They could buy it at the store and keep it in the deep freeze. Our garden, in their face every day, reminded them of their roots in Appalachian poverty. They called us hillbillies.

Arthur C. Clarke once wrote: "Any sufficiently advanced technology is indistinguishable from magic." Our version read, "Any sufficiently advanced lifestyle is indistinguishable from hillbillies."

PHILIP RAKITA
Philadelphia

Bartleby (May 30th) thinks the benefits of working from home will mean that employees will not want to return to the office. I am not sure that is the case for many people. My husband is lucky. He works for a company that already expected its staff to work remotely, so had the systems and habits in place. He has a spacious room to work in, with an adjustable chair, large monitor and a nice view. I do not work so he is not responsible for child care or home schooling.

Many people are working at makeshift workspaces which would make an occupational therapist cringe. Few will have a dedicated room for their home office, so their work invades their mental and physical space.

My husband has noticed that meetings are being set up both earlier and later in the day because there is an assumption that, as people are not commuting, it is fine to extend their work day. Colleagues book a half-hour meeting instead of dropping by someone's desk to ask a quick question. Any benefit of not commuting is lost. My husband still struggles to finish in time to have dinner with our children. People with especially long commutes now have more time, but even that was a change of scenery and offered some incidental exercise.

JENNIFER ALLEN
London

As Bartleby pointed out, the impact of pandemic working conditions won't be limited to the current generation. By exacerbating these divides, will covid-19 completely guarantee a future dominated by the baby-Zoomers?

MALCOLM BEGG
Tokyo

The transition away from the physical office engenders a lackadaisical approach to the work day for many workers. It brings to mind Ignatius Reilly's reasoning for his late start at the office from A Confederacy of Dunces:

I avoid that bleak first hour of the working day during which my still sluggish senses and body make every chore a penance. I find that in arriving later, the work which I do perform is of a much higher quality.

ROBERT MOGIELNICKI
Arlington, Virginia

This article appeared in the Letters section of the print edition under the headline "On artificial intelligence, green investing, GDP, gardens, working from home"

Original post:
Letters to the editor - The Economist

Machine learning finds use in creating sharper maps of ‘ecosystem’ lines in the ocean – Firstpost

EOS | Jul 01, 2020 14:54:08 IST

On land, it's easy for us to see divisions between ecosystems: A rain forest's fan palms and vines stand in stark relief to the cacti of a high desert. Without detailed data or scientific measurements, we can tell a distinct difference in the ecosystems' flora and fauna.

But how do scientists draw those divisions in the ocean? A new paper proposes a tool to redraw the lines that define an ocean's ecosystems, lines originally penned by the seagoing oceanographer Alan Longhurst in the 1990s. The paper uses unsupervised learning, a machine learning method, to analyze the complex interplay between plankton species and nutrient fluxes. As a result, the tool could give researchers a more flexible definition of ecosystem regions.

Using the tool on global modeling output suggests that the ocean's surface has more than 100 different regions, or as few as 12 if aggregated, simplifying the 56 Longhurst regions. The research could complement ongoing efforts to improve fisheries management and satellite detection of shifting plankton under climate change. It could also direct researchers to more precise locations for field sampling.

A sea turtle in the aqua blue waters of Hawaii. Image: Rohit Tandon/Unsplash

Coccolithophores, diatoms, zooplankton, and other planktonic life-forms float on much of the ocean's sunlit surface. Scientists monitor plankton with long-term sampling stations and peer at their colors by satellite from above, but they don't have detailed maps of where plankton lives worldwide.

Models help fill the gaps in scientists' knowledge, and the latest research relies on an ocean model to simulate where 51 types of plankton amass in surface oceans worldwide. The latest research then applies the new classification tool, called the systematic aggregated ecoprovince (SAGE) method, to discern where neighborhoods of like-minded plankton and nutrients appear.

SAGE relies, in part, on a type of machine learning algorithm called unsupervised learning. The algorithm's strength is that it searches for patterns unprompted by researchers.

To compare the tool to a simple example, if scientists told an algorithm to identify shapes in photographs like circles and squares, the researchers could supervise the process by telling the computer what a square and circle looked like before it began. But in unsupervised learning, the algorithm has no prior knowledge of shapes and will sift through many images to identify patterns of similar shapes itself.

Using an unsupervised approach gives SAGE the freedom to let patterns emerge that the scientists might not otherwise see.
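The principle is easy to demonstrate on toy data. The sketch below (a simple k-means example, not the SAGE method itself) hands an algorithm unlabeled points and lets it discover the groupings on its own:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# 300 unlabeled points drawn from three hidden groups.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# No labels are provided; the algorithm discovers the clusters itself.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])      # cluster assignments for the first 10 points
print(kmeans.cluster_centers_)  # discovered group centers
```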

"While my human eyes can't see these different regions that stand out, the machine can," first author and physical oceanographer Maike Sonnewald at Princeton University said. "And that's where the power of this method comes in." This method could be used more broadly by geoscientists in other fields to make sense of nonlinear data, said Sonnewald.

A machine-learning technique developed at MIT combs through global ocean data to find commonalities between marine locations, based on how phytoplankton species interact with each other. Using this approach, researchers have determined that the ocean can be split into over 100 types of provinces, and 12 megaprovinces, that are distinct in their ecological makeup.

Applying SAGE to model data, the tool noted 115 distinct ecological provinces, which can then be boiled down into 12 overarching regions.

One region appears in the center of nutrient-poor ocean gyres, whereas other regions show productive ecosystems along the coast and equator.

"You have regions that are kind of like the regions you'd see on land," Sonnewald said. One area in the heart of a desert-like region of the ocean is characterized by very small cells. There's just not a lot of plankton biomass. The region that includes Peru's fertile coast, however, has a huge amount of stuff.

If scientists want more distinctions between communities, they can adjust the tool to see the full 115 regions. But having only 12 regions can be powerful too, said Sonnewald, because it demonstrates the similarities between the different [ocean] basins. The tool was published in a recent paper in the journal Science Advances.

Oceanographer Francois Ribalet at the University of Washington, who was not involved in the study, hopes to apply the tool to field data when he takes measurements on research cruises. He said identifying unique provinces gives scientists a hint of how ecosystems could react to changing ocean conditions.

"If we identify that an organism is very sensitive to temperature, then we can start to actually make some predictions," Ribalet said. Using the tool will help him tease out an ecosystem's key drivers and how it may react to future ocean warming.

Jenessa Duncombe. Text © 2020. AGU.

This story has been republished from Eos under the Creative Commons 3.0 license. Read the original story.


Read more:
Machine learning finds use in creating sharper maps of 'ecosystem' lines in the ocean - Firstpost

Pharmacogenomics Market: Predictable To Witness Sustainable Evolution over 2020-2030 – Cole of Duty

Prophecy Market Insights' Pharmacogenomics market research report provides a comprehensive, 360-degree analysis of the targeted market, which helps stakeholders identify opportunities as well as challenges during the COVID-19 pandemic across the globe.

Pharmacogenomics Devices Market reports provide in-depth analysis of Top Players, Geography, End users, Applications, Competitor analysis, Revenue, Financial Analysis, Market Share, COVID-19 Analysis, Trends and Forecast 2020-2029. It incorporates market evolution study, involving the current scenario, growth rate, and capacity inflation prospects, based on Porter's Five Forces and DROT analyses.

Get Sample Copy of This Report @ https://www.prophecymarketinsights.com/market_insight/Insight/request-sample/244

An executive summary provides the market's definition, application, overview, classifications, product specifications, manufacturing processes, raw materials, and cost structures.

Market Dynamics offers the drivers, restraints, challenges, trends, and opportunities of the Pharmacogenomics market.

A detailed analysis of the COVID-19 impact is given in the report, as our analysts and research associates are working hard to understand the impact of the COVID-19 disaster on many corporations and sectors and to help our clients make well-informed business decisions. We acknowledge everyone who is doing their part in this financial and healthcare crisis.

Segment-level analysis in terms of types, products, geography, demography, etc., along with market size forecasts

Segmentation Overview:

The Pharmacogenomics research study comprises 100+ market data tables, graphs, figures, and pie charts to support a detailed analysis of the market. The predictions in the market report were produced using proven research techniques, methodologies, and assumptions. This Pharmacogenomics market report states the market overview and historical data, along with the size, growth, share, demand, and revenue of the global industry.

Request Discount @ https://www.prophecymarketinsights.com/market_insight/Insight/request-discount/244

Regional and country-level analysis: different geographical areas are studied deeply, and an economic scenario is offered to support new entrants, leading market players, and investors in evaluating emerging economies. The top producers and consumers focus on production, product capacity, value, consumption, growth opportunity, and market share in these key regions.

The report includes a comprehensive list of key market players along with their market overview, product portfolio, key highlights, key financial issues, SWOT analysis, and business strategies. It dedicatedly offers helpful solutions for players to grow their client base on a global scale and expand their reach significantly over the forecast period. The report also serves strategic decision-making solutions for the clients.

Competitive landscape Analysis provides mergers and acquisitions, collaborations along with new product launches, heat map analysis, and market presence and specificity analysis.

PharmacogenomicsMarket Key Players:

Thermo Fisher Scientific Inc., Qiagen N.V., F. Hoffmann-La Roche AG, Abbott Laboratories, Diatech Pharmacogenetics, and Assurex Health Inc.

The study analyses the manufacturing and processing requirements, project funding, project cost, project economics, profit margins, predicted returns on investment, etc. With the tables and figures, the report provides key statistics on the state of the industry and is a valuable source of guidance and direction for companies and individuals interested in the market.

Stakeholders Benefit:

Get In-depth TOC @ https://www.prophecymarketinsights.com/market_insight/Global-Pharmacogenomics-Market-By-Technology-244

About us:

Prophecy Market Insights is a specialized market research, analytics, marketing/business strategy, and solutions company that offers strategic and tactical support to clients for making well-informed business decisions and for identifying and achieving high-value opportunities in the target business area. We also help our clients to address business challenges and provide the best possible solutions to overcome them and transform their business.

Contact Us:

Mr Alex (Sales Manager)

Prophecy Market Insights

Phone: +1 860 531 2701

Email: [emailprotected]

VISIT MY BLOG:- https://prophecyconsumerelectronics.blogspot.com/

The rest is here:
Pharmacogenomics Market: Predictable To Witness Sustainable Evolution over 2020-2030 - Cole of Duty