Greens Powders Are The Secret To Getting More Veggies: Here’s How To Pick One – mindbodygreen.com

Even though greens powders are made of nutrient-dense foods, that doesn't mean they're a good idea for everyone. Since greens powders contain vegetables in a condensed form, they're also high in specific vitamins, like vitamin K, that can interfere with blood-clotting medications. If you're on blood thinners or other medications or you have chronic health problems, always check in with your doctor before taking a new supplement, greens powders included.

Even if you don't fall into these categories, it's important to make sure you're getting your greens powders from a trusted source. When it comes to regulation, the supplement industry is a bit of a gray area. Make sure the supplement manufacturer can verify that the greens have been tested for contamination and passed with acceptable levels. It's best to buy from companies whose greens powders have been certified through a third-party testing laboratory, like NSF International. That way, you know you're getting exactly what's listed on the label and nothing else.

And make sure you're sticking to the recommended serving size. More isn't necessarily better, since some vitamins can build up in the system and lead to toxicity. A scoop or two per day, along with a healthy, vegetable-rich diet, is all you need.

Overview of causal inference in machine learning – Ericsson

In a major operator's network control center, complaints are flooding in. The network is down across a large US city; calls are getting dropped and critical infrastructure is slow to respond. Pulling up the system's event history, the manager sees that new 5G towers were installed in the affected area today.

Did installing those towers cause the outage, or was it merely a coincidence? In circumstances such as these, being able to answer this question accurately is crucial for Ericsson.

Most machine learning-based data science focuses on predicting outcomes, not understanding causality. However, some of the biggest names in the field agree it's important to start incorporating causality into our AI and machine learning systems.

Yoshua Bengio, one of the world's most highly recognized AI experts, explained in a recent Wired interview: "It's a big thing to integrate [causality] into AI. Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life it is often not the case."

Yann LeCun, a recent Turing Award winner, shares the same view, tweeting: "Lots of people in ML/DL [deep learning] know that causal inference is an important way to improve generalization."

Causal inference and machine learning can address one of the biggest problems facing machine learning today: that a lot of real-world data is not generated in the same way as the data that we use to train AI models. This means that machine learning models often aren't robust enough to handle changes in the input data type, and can't always generalize well. By contrast, causal inference explicitly overcomes this problem by considering what might have happened when faced with a lack of information. Ultimately, this means we can utilize causal inference to make our ML models more robust and generalizable.

When humans rationalize the world, we often think in terms of cause and effect: if we understand why something happened, we can change our behavior to improve future outcomes. Causal inference is a statistical tool that enables our AI and machine learning algorithms to reason in similar ways.

Let's say we're looking at data from a network of servers. We're interested in understanding how changes in our network settings affect latency, so we use causal inference to proactively choose our settings based on this knowledge.

The gold standard for inferring causal effects is randomized controlled trials (RCTs) or A/B tests. In RCTs, we can split a population of individuals into two groups: treatment and control, administering treatment to one group and nothing (or a placebo) to the other and measuring the outcome of both groups. Assuming that the treatment and control groups aren't too dissimilar, we can infer whether the treatment was effective based on the difference in outcome between the two groups.

However, we can't always run such experiments. Flooding half of our servers with lots of requests might be a great way to find out how response time is affected, but if they're mission-critical servers, we can't go around performing DDoS attacks on them. Instead, we rely on observational data: studying the differences between servers that naturally get a lot of requests and those with very few requests.

There are many ways of answering this question. One of the most popular approaches is Judea Pearl's technique for using statistics to make causal inferences. In this approach, we'd take a model or graph that includes measurable variables that can affect one another, as shown below.

To use this graph, we must assume the Causal Markov Condition. Formally, it says that, conditional on the set of all its direct causes, a node is independent of all the variables which are not direct causes or direct effects of that node. Simply put, it is the assumption that this graph captures all the real relationships between the variables.

Another popular method for inferring causes from observational data is Donald Rubin's potential outcomes framework. This method does not explicitly rely on a causal graph, but still assumes a lot about the data, for example, that there are no additional causes besides the ones we are considering.

For simplicity, our data contains three variables: a treatment x (whether a server receives a high number of requests), an outcome y (the server's response time), and a covariate z (the server's memory value). We want to know if having a high number of server requests affects the response time of a server.

In our example, the number of server requests is determined by the memory value: a higher memory usage means the server is less likely to get fed requests. More precisely, the probability of having a high number of requests is equal to 1 minus the memory value (i.e., P(x=1) = 1 - z, where P(x=1) is the probability that x is equal to 1). The response time of our system is determined by the equation (or hypothetical model):

y = 1·x + 5·z + ε    (1)

Where ε is the error, that is, the deviation of y from its expected value given the values of x and z, which depends on other factors not included in the model. Our goal is to understand the effect of x on y via observations of the memory value, number of requests, and response times of a number of servers, with no access to this equation.
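
To make this concrete, here is a minimal sketch (not Ericsson's code; it assumes z is drawn uniformly from [0, 1], which the article does not state) that simulates servers from this model and shows how a naive comparison of treated and untreated servers misestimates the effect of x:

```python
# Minimal sketch of the data-generating process described above.
# Assumption: z ~ Uniform(0, 1); the article only says z lies in [0, 1].
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.uniform(0, 1, n)        # covariate: memory value in [0, 1]
x = rng.binomial(1, 1 - z)      # treatment: P(x = 1) = 1 - z
eps = rng.normal(0, 0.1, n)     # error term
y = 1 * x + 5 * z + eps         # outcome: response time; the true effect of x is 1

# Naive estimate: difference in mean response time between treated and control.
# High-memory servers are both less likely to be treated and slower, so this
# comparison is badly biased relative to the true effect of 1.
naive_ate = y[x == 1].mean() - y[x == 0].mean()
print(f"Naive ATE estimate: {naive_ate:.3f}")
```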

There are two possible assignments (treatment and control) and an outcome. Given a random group of subjects and a treatment, each subject i has a pair of potential outcomes: Y_i(0) and Y_i(1), the outcomes under control and treatment respectively. However, only one outcome is observed for each subject, the outcome under the actual treatment received: Y_i = x·Y_i(1) + (1 - x)·Y_i(0). The opposite potential outcome is unobserved for each subject and is therefore referred to as a counterfactual.

For each subject, the effect of treatment is defined to be Y_i(1) - Y_i(0). The average treatment effect (ATE) is defined as the average difference in outcomes between the treatment and control groups:

E[Y_i(1) - Y_i(0)]

Here, E denotes an expectation over the values of Y_i(1) - Y_i(0) for each subject i, which is the average value across all subjects. In our network example, a correct estimate of the average treatment effect would lead us to the coefficient in front of x in equation (1).

If we try to estimate this by directly subtracting the average response time of servers with x=0 from the average response time of our hypothetical servers with x=1, we get an ATE estimate of 0.177. This happens because our treatment and control groups are not directly comparable. In an RCT, we know that the two groups are similar because we chose them ourselves. When we have only observational data, the other variables (such as the memory value in our case) may affect whether or not one unit is placed in the treatment or control group. We need to account for this difference in the memory value between the treatment and control groups before estimating the ATE.

One way to correct this bias is to compare individual units in the treatment and control groups with similar covariates. In other words, we want to match subjects that are equally likely to receive treatment.

The propensity score e_i for subject i is defined as:

e_i = P(x = 1 | z = z_i),  z_i ∈ [0, 1]

or the probability that x is equal to 1 (the unit receives treatment) given that we know its covariate is equal to the value z_i. Creating matches based on the probability that a subject will receive treatment is called propensity score matching. To find the propensity score of a subject, we need to predict how likely the subject is to receive treatment based on their covariates.

The most common way to calculate propensity scores is through logistic regression:
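
For instance, a minimal sketch of this step, continuing from the simulated z and x above and using scikit-learn's LogisticRegression (one common choice, not necessarily what Ericsson uses), looks like this:

```python
# Estimate the propensity score e_i = P(x = 1 | z = z_i) with logistic regression.
from sklearn.linear_model import LogisticRegression

propensity_model = LogisticRegression()
propensity_model.fit(z.reshape(-1, 1), x)    # z is the only covariate in this example
e = propensity_model.predict_proba(z.reshape(-1, 1))[:, 1]    # estimated propensity scores
```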

Now that we have calculated propensity scores for each subject, we can do basic matching on the propensity score and calculate the ATE exactly as before. Running propensity score matching on the example network data gets us an estimate of 1.008!
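
On the simulated data above, a basic one-to-one matching sketch (a simplification, not the article's actual pipeline) recovers an estimate close to the true coefficient of 1: each treated server is paired with the control server whose estimated propensity score is closest, and the average matched difference in outcomes is taken as the effect estimate.

```python
# One-to-one nearest-neighbor matching on the estimated propensity scores `e`.
import numpy as np
from sklearn.neighbors import NearestNeighbors

treated = np.where(x == 1)[0]
control = np.where(x == 0)[0]

# For each treated unit, find the control unit with the closest propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(e[control].reshape(-1, 1))
_, idx = nn.kneighbors(e[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# Average outcome difference across matched pairs. Strictly this estimates the
# effect on the treated, but it equals the ATE here because the effect is constant.
matched_ate = (y[treated] - y[matched_control]).mean()
print(f"Matched ATE estimate: {matched_ate:.3f}")   # close to the true effect of 1
```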

We were interested in understanding the causal effect of the binary treatment variable x on the outcome y. If we find that the ATE is positive, this means an increase in x results in an increase in y. Similarly, a negative ATE says that an increase in x will result in a decrease in y.

This could help us understand the root cause of an issue or build more robust machine learning models. Causal inference gives us tools to understand what it means for some variables to affect others. In the future, we could use causal inference models to address a wider scope of problems both in and out of telecommunications so that our models of the world become more intelligent.

Special thanks to the other team members of GAIA working on causality analysis: Wenting Sun, Nikita Butakov, Paul Mclachlan, Fuyu Zou, Chenhua Shi, Lule Yu and Sheyda Kiani Mehr.

If you're interested in advancing this field with us, join our worldwide team of data scientists and AI specialists at GAIA.

In this Wired article, Turing Award winner Yoshua Bengio shares why deep learning must begin to understand the why before it can replicate true human intelligence.

In this technical overview of causal inference in statistics, find out what's needed to evolve AI from traditional statistical analysis to causal analysis of multivariate data.

This journal essay from 1999 offers an introduction to the Causal Markov Condition.

The 17 Best AI and Machine Learning TED Talks for Practitioners – Solutions Review

The editors at Solutions Review curated this list of the best AI and machine learning TED talks for practitioners in the field.

TED Talks are influential videos from expert speakers in a variety of verticals. TED began in 1984 as a conference where Technology, Entertainment and Design converged, and today covers almost all topics from business to technology to global issues in more than 110 languages. TED is building a clearinghouse of free knowledge from the world's top thinkers, and their library of videos is expansive and rapidly growing.

Solutions Review has curated this list of AI and machine learning TED talks to watch if you are a practitioner in the field. Talks were selected based on relevance, ability to add business value, and individual speaker expertise. We've also curated TED talk lists for topics like data visualization and big data.

Erik Brynjolfsson is the director of the MIT Center for Digital Business and a research associate at the National Bureau of Economic Research. He asks how IT affects organizations, markets and the economy. His books include Wired for Innovation and Race Against the Machine. Brynjolfsson was among the first researchers to measure the productivity contributions of information and communication technology (ICT) and the complementary role of organizational capital and other intangibles.

In this talk, Brynjolfsson argues that machine learning and intelligence are not the end of growth; it's simply the growing pains of a radically reorganized economy. A riveting case for why big innovations are ahead of us if we think of computers as our teammates. Be sure to watch the opposing viewpoint from Robert Gordon.

Jeremy Howard is the CEO of Enlitic, an advanced machine learning company in San Francisco. Previously, he was the president and chief scientist at Kaggle, a community and competition platform of over 200,000 data scientists. Howard is a faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on Jobs for the Machines.

Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis.

Nick Bostrom is a professor at the University of Oxford, where he heads the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists tasked with investigating the big picture for the human condition and its future. Bostrom was honored as one of Foreign Policy's 2015 Global Thinkers. His book Superintelligence advances the ominous idea that the first ultraintelligent machine is the last invention that man need ever make.

In this talk, Nick Bostrom calls machine intelligence the last invention that humanity will ever need to make. Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values, or will they have values of their own?

Li's work with neural networks and computer vision (with Stanford's Vision Lab) marks a significant step forward for AI research, and could lead to applications ranging from more intuitive image searches to robots able to make autonomous decisions in unfamiliar situations. Fei-Fei was honored as one of Foreign Policy's 2015 Global Thinkers.

This talk digs into how computers are getting smart enough to identify simple elements. Computer vision expert Fei-Fei Li describes the state of the art, including the database of 15 million photos her team built to teach a computer to understand pictures, and the key insights yet to come.

Anthony Goldbloom is the co-founder and CEO of Kaggle. Kaggle hosts machine learning competitions, where data scientists download data and upload solutions to difficult problems. Kaggle has a community of over 600,000 data scientists. In 2011 and 2012, Forbes named Anthony one of the 30 under 30 in technology; in 2013 the MIT Tech Review named him one of the top 35 innovators under the age of 35, and the University of Melbourne awarded him an Alumni of Distinction Award.

This talk by Anthony Goldbloom describes some of the current use cases for machine learning, far beyond simple tasks like assessing credit risk and sorting mail.

Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society. Her book, Twitter and Tear Gas, was published in 2017 by Yale University Press.

Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns, and in ways we won't expect or be prepared for.

In his book The Business Romantic, Tim Leberecht invites us to rediscover romance, beauty and serendipity by designing products, experiences, and organizations that make us fall back in love with our work and our life. The book inspired the creation of the Business Romantic Society, a global collective of artists, developers, designers and researchers who share the mission of bringing beauty to business.

In this talk, Tim Leberecht makes the case for a new radical humanism in a time of artificial intelligence and machine learning. For the self-described business romantic, this means designing organizations and workplaces that celebrate authenticity instead of efficiency and questions instead of answers. Leberecht proposes four principles for building beautiful organizations.

Grady Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.

Grady Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how we'll teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.

Tom Gruber is a product designer, entrepreneur, and AI thought leader who uses technology to augment human intelligence. He was co-founder, CTO, and head of design for the team that created the Siri virtual assistant. At Apple for over 8 years, Tom led the Advanced Development Group that designed and prototyped new capabilities for products that bring intelligence to the interface.

This talk introduces the idea of Humanistic AI. He shares his vision for a future where AI helps us achieve superhuman performance in perception, creativity and cognitive function, from turbocharging our design skills to helping us remember everything we've ever read. The idea of an AI-powered personal memory also extends to relationships, with the machine helping us reflect on our interactions with people over time.

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at the University of California at Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty.

His talk centers around the question of whether we can harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover. As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.

Dr. Pratik Shah's research creates novel intersections between engineering, medical imaging, machine learning, and medicine to improve health and diagnose and cure diseases. Research topics include: medical imaging technologies using unorthodox artificial intelligence for early disease diagnoses; novel ethical, secure and explainable artificial intelligence based digital medicines and treatments; and point-of-care medical technologies for real world data and evidence generation to improve public health.

TED Fellow Pratik Shah is working on a clever system to do just that. Using an unorthodox AI approach, Shah has developed a technology that requires as few as 50 images to develop a working algorithm and can even use photos taken on doctors' cell phones to provide a diagnosis. Learn more about how this new way to analyze medical information could lead to earlier detection of life-threatening illnesses and bring AI-assisted diagnosis to more health care settings worldwide.

Margaret Mitchell's research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. Her work combines computer vision, natural language processing, social media, as well as many statistical methods and insights from cognitive science. Before Google, Mitchell was a founding member of Microsoft Research's Cognition group, focused on advancing artificial intelligence, and a researcher in Microsoft Research's Natural Language Processing group.

Margaret Mitchell helps develop computers that can communicate about what they see and understand. She tells a cautionary tale about the gaps, blind spots and biases we subconsciously encode into AI and asks us to consider what the technology we create today will mean for tomorrow.

Kriti Sharma is the Founder of AI for Good, an organization focused on building scalable technology solutions for social good. Sharma was recently named in the Forbes 30 Under 30 list for advancements in AI. She was appointed a United Nations Young Leader in 2018 and is an advisor to both the United Nations Technology Innovation Labs and to the UK Government's Centre for Data Ethics and Innovation.

AI algorithms make important decisions about you all the time, like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.

Matt Beane does field research on work involving robots to help us understand the implications of intelligent machines for the broader world of work. Beane is an Assistant Professor in the Technology Management Program at the University of California, Santa Barbara and a Research Affiliate with MIT's Institute for the Digital Economy. He received his PhD from the MIT Sloan School of Management.

The path to skill around the globe has been the same for thousands of years: train under an expert and take on small, easy tasks before progressing to riskier, harder ones. But right now, we're handling AI in a way that blocks that path and sacrificing learning in our quest for productivity, says organizational ethnographer Matt Beane. Beane shares a vision that flips the current story into one of distributed, machine-enhanced mentorship that takes full advantage of AI's amazing capabilities while enhancing our skills at the same time.

Leila Pirhaji is the founder of ReviveMed, an AI platform that can quickly and inexpensively characterize large numbers of metabolites from the blood, urine and tissues of patients. This allows for the detection of molecular mechanisms that lead to disease and the discovery of drugs that target these disease mechanisms.

Biotech entrepreneur and TED Fellow Leila Pirhaji shares her plan to build an AI-based network to characterize metabolite patterns, better understand how disease develops and discover more effective treatments.

Janelle Shane is the owner of AIweirdness.com. Her book, You Look Like a Thing and I Love You, uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining.

The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems, like creating new ice cream flavors or recognizing cars on the road, Shane shows why AI doesn't yet measure up to real brains.

Sylvain Duranton is the global leader of BCG GAMMA, a unit dedicated to applying data science and advanced analytics to business. He manages a team of more than 800 data scientists and has implemented more than 50 custom AI and analytics solutions for companies across the globe.

In this talk, business technologist Sylvain Duranton advocates for a "Human plus AI" approach, using AI systems alongside humans, not instead of them, and shares the specific formula companies can adopt to successfully employ AI while keeping humans in the loop.

For more AI and machine learning TED talks, browse TED's complete topic collection.

Timothy is Solutions Review's Senior Editor. He is a recognized thought leader and influencer in enterprise BI and data analytics. Timothy has been named a top global business journalist by Richtopia. Scoop? First initial, last name at solutionsreview dot com.

Machine Learning Patentability in 2019: 5 Cases Analyzed and Lessons Learned Part 1 – Lexology

Introduction

This article is the first of a five-part series of articles dealing with what patentability of machine learning looks like in 2019. This article begins the series by describing the USPTO's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) in the context of the U.S. patent system. Then, this article and the four following articles will describe one of five cases in which Examiners' rejections under Section 101 were reversed by the PTAB under this new 2019 PEG. Each of the five cases discussed deals with machine-learning patents, and may provide some insight into how the 2019 PEG affects the patentability of machine learning, as well as software more broadly.

Patent Eligibility Under the U.S. Patent System

The US patent laws are set out in Title 35 of the United States Code (35 U.S.C.). Section 101 of Title 35 focuses on several things, including whether the invention is classified as patent-eligible subject matter. As a general rule, an invention is considered to be patent-eligible subject matter if it falls within one of the four enumerated categories of patentable subject matter recited in 35 U.S.C. 101 (i.e., process, machine, manufacture, or composition of matter).[1] This, on its own, is an easy hurdle to overcome. However, there are exceptions (judicial exceptions). These include (1) laws of nature; (2) natural phenomena; and (3) abstract ideas. If the subject matter of the claimed invention fits into any of these judicial exceptions, it is not patent-eligible, and a patent cannot be obtained. The machine-learning and software aspects of a claim face 101 issues based on the abstract idea exception, and not the other two.

Section 101 is applied by Examiners at the USPTO in determining whether patents should be issued; by district courts in determining the validity of existing patents; by the Patent Trial and Appeal Board (PTAB) in appeals from Examiner rejections, in post-grant-review (PGR) proceedings, and in covered-business-method-review (CBM) proceedings; and by the Federal Circuit on appeals. The PTAB is part of the USPTO, and may hear an appeal of an Examiner's rejection of claims of a patent application when the claims have been rejected at least twice.

In determining whether a claim fits into the abstract idea category at the USPTO, the Examiners and the PTAB must apply the 2019 PEG, which is described in the following section of this paper. In determining whether a claim is patent-ineligible as an abstract idea in the district courts and the Federal Circuit, however, the courts apply the Alice/Mayo test, not the 2019 PEG. The definition of "abstract idea" was formulated by the Alice and Mayo Supreme Court cases. These two cases have been interpreted by a number of Federal Circuit opinions, which has led to a complicated legal framework that the USPTO and the district courts must follow.[2]

The 2019 PEG

The USPTO, which governs the issuance of patents, decided that it needed a more practical, predictable, and consistent method for its over 8,500 patent examiners to apply when determining whether a claim is patent-ineligible as an abstract idea.[3] Previously, the USPTO synthesized and organized, for its examiners to compare to an applicant's claims, the facts and holdings of each Federal Circuit case that deals with section 101. However, the large and still-growing number of cases, and the confusion arising from similar subject matter [being] described both as abstract and not abstract in different cases,[4] led to issues. Accordingly, the USPTO issued its 2019 Revised Patent Subject Matter Eligibility Guidance on January 7, 2019 (2019 PEG), which shifted from the case-comparison structure to a new examination structure.[5] The new examination structure, described below, is more patent-applicant friendly than the prior structure,[6] thereby having the potential to result in a higher rate of patent issuances. The 2019 PEG does not alter the federal statutory law or case law that make up the U.S. patent system.

The 2019 PEG has a structure consisting of four parts: Step 1, Step 2A Prong 1, Step 2A Prong 2, and Step 2B. Step 1 refers to the statutory categories of patent-eligible subject matter, while Step 2 refers to the judicial exceptions. In Step 1, the Examiners must determine whether the subject matter of the claim is a process, machine, manufacture, or composition of matter. If it is, the Examiner moves on to Step 2.

In Step 2A, Prong 1, the Examiners are to determine whether the claim recites a judicial exception, including laws of nature, natural phenomena, and abstract ideas. For abstract ideas, the Examiners must determine whether the claim falls into at least one of three enumerated categories: (1) mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations); (2) certain methods of organizing human activity (fundamental economic principles or practices, commercial or legal interactions, managing personal behavior or relationships or interactions between people); and (3) mental processes (concepts performed in the human mind, encompassing acts people can perform using their mind, or using pen and paper). These three enumerated categories are not mere examples, but are fully encompassing. The Examiners are directed that [i]n the rare circumstance in which they believe[] a claim limitation that does not fall within the enumerated groupings of abstract ideas should nonetheless be treated as reciting an abstract idea, they are to follow a particular procedure involving providing justifications and getting approval from the Technology Center Director.

Next, if the claim limitation recites one of the enumerated categories of abstract ideas under Prong 1 of Step 2A, the Examiner is instructed to proceed to Prong 2 of Step 2A. In Step 2A, Prong 2, the Examiners are to determine if the claim is directed to the recited abstract idea. In this step, the claim does not fall within the exception, despite reciting the exception, if the exception is integrated into a practical application. The 2019 PEG provides a non-exhaustive list of examples for this, including, among others: (1) an improvement in the functioning of a computer; (2) a particular treatment for a disease or medical condition; and (3) an application of the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

Finally, even if the claim recites a judicial exception under Step 2A Prong 1, and the claim is directed to the judicial exception under Step 2A Prong 2, it might still be patent-eligible if it satisfies the requirement of Step 2B. In Step 2B, the Examiner must determine if there is an inventive concept: that the additional elements recited in the claims provide[] significantly more than the recited judicial exception. This step attempts to distinguish between whether the elements combined to the judicial exception (1) add[] a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field; or alternatively (2) simply append[] well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality. Furthermore, the 2019 PEG indicates that where an additional element was insignificant extra-solution activity, [the Examiner] should reevaluate that conclusion in Step 2B. If such reevaluation indicates that the element is unconventional . . . this finding may indicate that an inventive concept is present and that the claim is thus eligible.

In summary, the 2019 PEG provides an approach for the Examiners to apply, involving steps and prongs, to determine if a claim is patent-ineligible based on being an abstract idea. Conceptually, the 2019-PEG method begins with categorizing the type of claim involved (process, machine, etc.); proceeds to determining if an exception applies (e.g., abstract idea); then, if an exception applies, proceeds to determining if an exclusion applies (i.e., practical application or inventive concept). Interestingly, the PTAB not only applies the 2019 PEG in appeals from Examiner rejections, but also applies the 2019 PEG in its other Section-101 decisions, including CBM review and PGRs.[7] However, the 2019 PEG only applies to the Examiners and PTAB (the Examiners and the PTAB are both part of the USPTO), and does not apply to district courts or to the Federal Circuit.

Case 1: Appeal 2018-007443[8] (Decided October 10, 2019)

This case involves the PTAB reversing the Examiner's Section 101 rejections of claims of the 14/815,940 patent application. This patent application relates to applying AI classification technologies and combinational logic to predict whether machines need to be serviced, and whether there is likely to be equipment failure in a system. The Examiner contended that the claims fit into the judicial exception of abstract idea because monitoring the operation of machines is a fundamental economic practice. The Examiner explained that the limitations in the claims that set forth the abstract idea are: a method for reading data; assessing data; presenting data; classifying data; collecting data; and tallying data. The PTAB disagreed with the Examiner. The PTAB stated:

Specifically, we do not find monitoring the operation of machines, as recited in the instant application, is a fundamental economic principle (such as hedging, insurance, or mitigating risk). Rather, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning.

As explained in the previous section of this paper, the 2019 PEG set forth three possible categories of abstract ideas: mathematical concepts, certain methods of organizing human activity, and mental processes. Here, the PTAB addressed the second of these categories. The PTAB found that the claims do not recite a fundamental economic principle (one method of organizing human activity) because the claims recite AI components like neural networks in the context of monitoring machines. Clearly, economic principles and AI components are not always mutually exclusive concepts.[9] For example, there may be situations where these algorithms are applied directly to mitigating business risks. Accordingly, the PTAB was likely focusing on the distinction between monitoring machines and mitigating risk; and not solely on the recitation of the AI components. However, the recitation of the AI components did not seem to hurt.

Then, moving on to another category of abstract ideas, the PTAB stated:

Claims 1 and 8 as recited are not practically performed in the human mind. As discussed above, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning. . . . [Also,] claim 8 recites an output device that transforms the composite prediction output into human-readable form.

. . . .

In other words, the classifying steps of claim 1 and modules of claim 8, when read in light of the Specification, recite a method and system difficult and challenging for non-experts due to their computational complexity. As such, we find that one of ordinary skill in the art would not find it practical to perform the aforementioned classifying steps recited in claim 1 and function of the modules recited in claim 8 mentally.

In the language above, the PTAB addressed the third category of abstract ideas: mental processes. The PTAB provided that the claim does not recite a mental process because the AI algorithms, based on the context in which they are applied, are computationally complex.

The PTAB also addressed the first of the three categories of abstract ideas (mathematical concepts), and found that it does not apply because the specific mathematical algorithm or formula is not explicitly recited in the claims. Requiring that a mathematical concept be explicitly recited seems to be a narrow interpretation of the 2019 PEG. The 2019 PEG does not require that the recitation be explicit, and leaves the math category open to relationships, equations, or calculations. From this, the PTAB might have meant that the claims list a mathematical concept (the AI algorithm) by its name, as a component of the process, rather than trying to claim the steps of the algorithm itself. Clearly, the names of the algorithms are explicitly recited; the steps of the AI algorithms, however, are not recited in the claims.

Notably, reciting only the name of an algorithm, rather than reciting the steps of the algorithm, seems to indicate that the claims are not directed to the algorithms (i.e., the claims have a practical application for the algorithms). It indicates that the claims include an algorithm, but that there is more going on in the claim than just the algorithm. However, instead of determining that there is a practical application of the algorithms, or an inventive concept, the PTAB determined that the claim does not even recite the mathematical concepts.

Additionally, the PTAB found that even if the claims had been classified as reciting an abstract idea, as the Examiner had contended, the claims are not directed to that abstract idea but are integrated into a practical application. The PTAB stated:

Appellant's claims address a problem specifically using several artificial intelligence classification technologies to monitor the operation of machines and to predict preventative maintenance needs and equipment failure.

The PTAB seems to say that because the claims solve a problem using the abstract idea, they are integrated into a practical application. The PTAB did not specify why the additional elements are sufficient to integrate the invention. The opinion actually does not even specifically mention that there are additional elements. Instead, the PTAB's conclusion might have been that, based on a totality of the circumstances, it believed that the claims are not directed to the algorithms, but actually just apply the algorithms in a meaningful way. The PTAB could have fit this reasoning into the 2019 PEG structure through one of the Step 2A, Prong 2 examples (e.g., that the claim applies additional elements in some other meaningful way), but did not expressly do so.

Conclusion

This case illustrates:

(1) the monitoring of machines was held not to be an abstract idea, in this context;
(2) the recitation of AI components such as neural networks in the claims did not seem to hurt for arguing any of the three categories of abstract ideas;
(3) the complexity of the algorithms implemented can help with the mental processes category of abstract ideas; and
(4) the PTAB might not always explicitly state how the rule for practical application applies, but seems to apply it consistently with the examples from the 2019 PEG.

The next four articles will build on this background, and will provide different examples of how the PTAB approaches reversing Examiner Section 101 rejections of machine-learning patents under the 2019 PEG. Stay tuned for the analysis and lessons of the next case, which includes methods for overcoming rejections based on the mental processes category of abstract ideas, on an application for a probabilistic programming compiler that performs the seemingly 101-vulnerable function of generat[ing] data-parallel inference code.

Artnome Wants to Predict the Price of a Masterpiece. The Problem? There’s Only One. – Built In

Buying a Picasso is like buying a mansion.

There's not that many of them, so it can be hard to know what a fair price should be. In real estate, if the house last sold in 2008, right before the lending crisis devastated the real estate market, basing today's price on the last sale doesn't make sense.

Paintings are also affected by market conditions and a lack of data. Kyle Waters, a data scientist at Artnome, explained to us how his Boston-area firm is addressing this dilemma, and in doing so, aims to do for the art world what Zillow did for real estate.

"If only 3 percent of houses are on the market at a time, we only see the prices for those 3 percent. But what about the rest of the market?" Waters said. "It's similar for art too. We want to price the entire market and give transparency."

Artnome is building the world's largest database of paintings by blue-chip artists like Georgia O'Keeffe, including her super famous works, lesser-known items, those privately held, and artworks publicly displayed. Waters is tinkering with the data to create a machine learning model that predicts how much people will pay for these works at auctions. Because this model includes an artist's entire collection, and not just those works that have been publicly sold before, Artnome claims its machine learning model will be more accurate than the auction industry's previous practice of simply basing current prices on previous sales.

The company's goal is to bring transparency to the auction house industry. But Artnome's new model faces the old problem: its machine learning system performs poorly on the works that typically sell for the most, the ones that people are most interested in, since it's hard to predict the price of a one-of-a-kind masterpiece.

"With a limited dataset, it's just harder to generalize," Waters said.

We talked to Waters about how he compiled, cleaned and created Artnome's machine learning model for predicting auction prices, which launched in late January.

Most of the information about artists included in Artnome's model comes from the dusty basement libraries of auction houses, where they store their catalogues raisonnés, which are books that serve as complete records of an artist's work. Artnome is compiling and digitizing these records, representing the first time these books have ever been brought online, Waters said.

Artnome's model currently includes information from about 5,000 artists whose works have been sold over the last 15 years. Prices in the dataset range from $100 at the low end to Leonardo da Vinci's record-breaking Salvator Mundi, a painting that sold for $450.3 million in 2017, making it the most expensive work of art ever sold.

How hard was it to predict what da Vinci's 500-year-old Mundi would sell for? Before the sale, Christie's auction house estimated his portrait of Jesus Christ was worth around $100 million, less than a quarter of the final price.

"It was unbelievable," Alex Rotter, chairman of Christie's postwar and contemporary art department, told The Art Newspaper after the sale. Rotter reported the winning phone bid.

"I tried to look casual up there, but it was very nerve-wracking. All I can say is, the buyer really wanted the painting and it was very adrenaline-driven."

A piece like Salvator Mundi could come to market in 2017 and then not go up for auction again for 50 years. And because a machine learning model is only as good as the quality and quantity of the data it is trained on, shifts in the market, a work's condition and changes in availability make it hard to predict a future price for a painting.

These variables are categorized into two types of data: structured and unstructured. And cleaning all of it represents a major challenge.

Structured data includes information like which artist painted which painting, on what medium, and in which year.

Waters intentionally limited the types of structured information he included in the model to keep the system from becoming too unruly to work with. But defining paintings as solely two-dimensional works on only certain mediums proved difficult, since there are so many different types of paintings (Salvador Dali famously painted on a cigar box, after all). Artnome's problem represents an issue of high cardinality, Waters said, since there are so many different categorical variables he could include in the machine learning system.

"You want the model to be narrow enough so that you can figure out the nuances between really specific mediums, but you also don't want it to be so narrow that you're going to overfit," Waters said, adding that large models also become more unruly to work with.

Other structured data focuses on the artist herself, denoting details like when the creator was born or if they were alive during the time of auction. Waters also built a natural language processing system that analyzes the type and frequency of the words an artist used in her paintings' titles, noting trends like Georgia O'Keeffe using the word "white" in many of her famous works.
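
As a rough illustration of that kind of title analysis, here is a hedged sketch (with made-up example titles, not Artnome's actual pipeline) that counts word usage across an artist's titles with scikit-learn:

```python
# Count how often each word appears across an artist's painting titles.
# The titles below are hypothetical examples, not Artnome data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

titles = [
    "White Flower No. 1",
    "Black Iris",
    "White Patio with Red Door",
]

vectorizer = CountVectorizer(lowercase=True)
counts = vectorizer.fit_transform(titles)

# Total occurrences of each word across all titles.
totals = np.asarray(counts.sum(axis=0)).ravel()
word_totals = dict(zip(vectorizer.get_feature_names_out(), totals))
print(word_totals.get("white", 0))   # how often "white" shows up in the titles
```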

Including information on market conditions, like current stock prices or real estate data, was important from a structured perspective too.

"How popular is an artist? Are they exhibiting right now? How many people are interested in this artist? What's the state of the market?" Waters said. "Really getting those trends and quantifying those could be just as important as more data."

Another type of data included in the model is unstructured data, which, as the name might suggest, is a little less concrete than the structured items. This type of data is mined from the actual painting, and includes information like the artwork's dominant color, number of corner points and whether faces are pictured.

Waters used a pre-trained convolutional neural network to look for these variables, modeling the project after the ResNet-50 model, which famously won the ImageNet Large Scale Visual Recognition Challenge in 2015 after it correctly identified and classified nearly all of the roughly 14 million images featured.
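
A sketch of how such image features might be extracted with an off-the-shelf pre-trained ResNet-50 (a generic torchvision recipe, not Artnome's actual model; the file path and helper function are hypothetical):

```python
# Extract a 2048-dimensional feature vector per painting with pre-trained ResNet-50.
import torch
from torchvision import models, transforms
from PIL import Image

# Load ResNet-50 pre-trained on ImageNet and drop its final classification layer.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def painting_features(path: str) -> torch.Tensor:
    """Return a 2048-d feature vector for the painting image at `path` (hypothetical)."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)            # shape: (1, 3, 224, 224)
    with torch.no_grad():
        return feature_extractor(batch).flatten()   # shape: (2048,)
```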

Including unstructured data helps quantify the complexity of an image, Waters said, giving it what he called an "edge score."

An edge score helps the machine learning system quantify the subjective points of a painting that seem intuitive to humans, Waters said. An example might be Vincent Van Gogh's series of paintings of red-haired men posing in front of a blue background. When you're looking at the painting, it's not hard to see you're looking at self-portraits of Van Gogh, by Van Gogh.

Including unstructured data in Artnomes system helps the machine spot visual cues that suggest images are part of a series, which has an impact on their value, Waters said.

"Knowing that that's a self-portrait would be important for that artist," Waters said. "When you start interacting with different variables, then you can start getting into more granular details that, for some paintings by different artists, might be more important than others."

Artnome's convolutional neural network is good at analyzing paintings for data that tells a deeper story about the work. But sometimes, there are holes in the story being told.

In its current iteration, Artnome's model includes paintings both with and without frames; it doesn't specify which work falls into which category. Not identifying the frame could affect the dominant color the system discovers, Waters said, adding an error to its results.

"That could maybe skew your results and say, like, the dominant color was yellow when really the painting was a landscape and it was green," Waters said.

The model also lacks information on the condition of the painting, which, again, could impact the artwork's price. If the model can't detect a crease in the painting, it might overestimate its value. Also missing is data on an artwork's provenance, or its ownership history. Some evidence suggests that paintings that have been displayed by prominent institutions sell for more. There's also the issue of popularity. Waters hasn't found a concrete way to tell the system that people like the work of Georgia O'Keeffe more than the paintings by artist and actor James Franco.

"I'm trying to think of a way to come up with a popularity score for these very popular artists," Waters said.

An auctioneer hits the hammer to indicate a sale has been made. But the last price the bidder shouts isn't what they actually pay.

Buyers also must pay the auction house a commission, which varies between auction houses and has changed over time. Waters has had to dig up the commission rates for these outlets over the years and add them to the sales price listed. He's also had to make sure all sales prices are listed in dollars, converting those listed in other currencies. Standardizing each sale ensures the predictions the model makes are accurate, Waters said.
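
A toy sketch of that standardization step (the premium rate and exchange rate below are made-up illustrations, not Artnome's figures): the hammer price is grossed up by the buyer's premium and then converted to US dollars.

```python
# Standardize a hammer price: add the buyer's premium, then convert to USD.
def standardized_price_usd(hammer_price: float, premium_rate: float, fx_to_usd: float) -> float:
    """Final price paid by the buyer, expressed in US dollars."""
    return hammer_price * (1 + premium_rate) * fx_to_usd

# Example: a 1,000,000 GBP hammer price with a 25% buyer's premium at 1.30 USD/GBP.
print(standardized_price_usd(1_000_000, premium_rate=0.25, fx_to_usd=1.30))  # 1625000.0
```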

"You'd introduce a lot of bias into the model if some things didn't have the commission, but some things did," Waters said. "It would be clearly wrong to start comparing the two."

Once Artnome's data has been gleaned and cleaned, information is input into the machine learning system, which Waters structured into a random forest model, an algorithm that builds and merges multiple decision trees to arrive at an accurate prediction. Waters said using a random forest model keeps the system from overfitting paintings into one category, and also offers a level of explainability through its permutation score, a metric that basically identifies the most important aspects of a painting.
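
A hedged sketch of that setup (synthetic data and invented feature names, not Artnome's dataset): fit a random forest on a mix of structured and unstructured features, then rank them with permutation importance.

```python
# Random forest price model plus permutation importance on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
X = pd.DataFrame({
    "sp500_level": rng.normal(3000, 300, n),      # market-condition feature
    "artist_living": rng.integers(0, 2, n),       # structured artist feature
    "edge_score": rng.uniform(0, 1, n),           # unstructured image feature
    "dominant_color_hue": rng.uniform(0, 360, n),
})
# Toy target: price driven mostly by the market level and the edge score.
y = 0.01 * X["sp500_level"] + 50 * X["edge_score"] + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much test performance drops when a feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```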

Waters doesn't weight the data he puts into the model. Instead, he lets the machine learning system tell him what's important, with the model weighing factors like today's S&P prices more heavily than the dominant color of a work.

"That's kind of one way to get the feature importance, for kind of a black box estimator," Waters said.

Although Artnome has been approached by private collectors, gallery owners and startups in the art tech world interested in its machine learning system, Waters said it's important this dataset and model remain open to the public.

His aim is for Artnome's machine learning model to eventually function like Zillow's Zestimate, which estimates real estate prices for homes on and off the market, and act as a general starting point for those interested in finding out the price of an artwork.

"We might not catch a specific genre, or era, or point in the art history movement," Waters said. "I don't think it'll ever be perfect. But when it gets to the point where people see it as a respectable starting point, then that's when I'll be really satisfied."

Twitter says AI tweet recommendations helped it add millions of users – The Verge

Twitter had 152 million daily users during the final months of 2019, and it says the latest spike was thanks in part to improved machine learning models that put more relevant tweets in people's timelines and notifications. The figure was released in Twitter's Q4 2019 earnings report this morning.

Daily users grew from 145 million the prior quarter and 126 million during the same period a year earlier. Twitter says this was primarily driven by product improvements, such as the increased relevance of what people are seeing in their main timeline and their notifications.

By default, Twitter shows users an algorithmic timeline that highlights what it thinks they'll be most interested in; for users following few accounts, it also surfaces likes and replies by the people they follow, giving them more to scroll through. Twitter's notifications will also highlight tweets that are being liked by people you follow, even if you missed that tweet on your timeline.

Twitter has continually been trying to reverse concerns about its user growth. The service's monthly user count shrank for a full year going into 2019, leading it to stop reporting that figure altogether. Instead, it now shares daily users, a metric that looks much rosier.

Compared to many of its peers, though, Twitter still has an enormous amount of room to grow. Snapchat, for comparison, reported 218 million daily users during its final quarter of 2019. Facebook reported 1.66 billion daily users over the same time period.

Twitter also announced a revenue milestone this quarter: it brought in more than $1 billion in quarterly revenue for the first time. The total was just over the milestone, at $1.01 billion during its final quarter, up from $909 million in the same quarter the prior year.

Last quarter, Twitter said that its ad revenue took a hit due to bugs that limited its ability to target ads and share advertising data with partners. At the time, the company said it had taken steps to remediate the issue, but it didn't say whether it was resolved. In this quarter's update, Twitter says it has since shipped remediations to those issues.

Putting the Humanity Back Into Technology: 10 Skills to Future Proof Your Career – HR Technologist

Dave Coplin believes that the key to success for all of our futures is how we rise to the challenge and unleash the potential that AI and machine learning will bring us. We all need to evolve, and fast. He looks at the 10 skills we can nurture to future-proof our careers.

I have been working with global technology companies for more than 30 years, helping people to truly understand the amazing potential on offer when humans work in harmony with machines.

I have written two books, I've worked with businesses and governments all over the world, and recently I've been inspiring and engaging kids and adults alike, all with one single goal in mind, which is simply to help everyone get the absolute best from technology.

The key to success for all of our futures is how we rise to the challenge and unleash the potential that AI and machine learning will bring us. We all need to evolve, and fast.

In an age of algorithms and robots, we need to find a way to combine the best of technological capability with the best of human ability and find that sweet spot where humans and machines complement each other perfectly. With this in mind, here are my top ten skills that will enable humans to rise, to achieve more than ever before, not just at work but across all aspects of our lives:

When it comes to creativity, I absolutely believe that technology is one of the most creative forces that we will ever get to enjoy. But creativity needs to be discovered and it needs to be nurtured. Our future will be filled with complex, challenging problems, the like of which we will never have encountered before. We're going to need a society of creative thinkers to help navigate it.

While the machines are busy crunching numbers, it will be the humans left to navigate the complicated world of emotions, motivation, and reason. In a world of the dark, cold logic of algorithms, the ability for individuals to understand and share the feelings of others is going to become a crucial skill. Along with creativity, empathy will be one of the most critical attributes that define the border between human and machine.

As well as teaching ourselves and our families to be confident with technology, we also need to be accountable for how we use it.

Just because the computer gives you an answer, it doesn't make it right. We all need to learn to take the computer's valuable input but, crucially, combine that with our own human intuition in order to discover the best course of action. Our future is all about being greater than the sum of our parts.

One of creativity's most important companions is curiosity: it is the gateway to being creative with technology. We now have at our fingertips access to every fact and opinion that has ever been expressed, but we take this for granted. And what do we choose to do with all that knowledge? Two words: cat videos. I'm being playful of course, but part of the solution is to help all of us, especially kids, be curious about the world around us and to use technology to explore it.

Critical thinking will be the 21st-century human's superpower. If we can help individuals both understand and apply it, we can, over time, unleash the full potential of our connected world. We should constantly question content we see, hear and read - and not just assume it's true.

One of digital technology's key purposes is to connect humans with each other. Communicating with others is as essential to our future survival as breathing, and yet we're often just not that good at it, especially when we're communicating with others who aren't in the same physical space.

Learning to communicate well (and that includes really effective listening), regardless of whether that is online or offline, is one of the basic literacies of our digital world.

Building on our communication skills, collaboration is much of the reason why we need to communicate well in the first place. Technology enables large numbers of people to come together, aligned around a common cause, but we can only harness the collective power of people if we can find the best way to work together to unleash our collective potential.

The future doesn't stand still, and now more than ever, that means neither can we. While we used to think about education as a single phase early on in most people's lives, the reality is that learning needs to be an everyday occurrence, regardless of our age or stage of life. Thanks to new technologies like artificial intelligence, skills that are new today will be automated tomorrow, and this means we can never afford to stand still.

The by-product of a rapidly changing world is that we need to help people learn to embrace the ambiguity such a world presents. More traditional mindsets of single domains of skills and single careers will have to give way to the much more nebulous world of multiple skillsets for multiple careers. In order to make the transition, people are going to need to find a way to preserve and develop enough energy to be able to embrace every new change and challenge, so that they can both offer value and be valued by the ever-changing society they are a part of.

Learn More: 5 Skills You Should Develop to Keep Your Job in 2020

As a technologist and an optimist, I believe we need to remember that the machines and algorithms are here to help. The success of our future will depend entirely on our ability to grasp the potential they offer us. Regardless of the career we choose, our and our children's lives will be better, more successful, happier and more rewarding if we are confident in how we can use technology to help us achieve more at work, in our relationships and in how we enjoy ourselves.

None of these skills were picked by chance. They were specifically picked because they are the very qualities that will complement the immensely powerful gift that technology brings us. Better still, these are the skills that will remain fundamentally human for decades to come.

Now is the time to think differently about our relationship with technology and its potential. We owe it to ourselves and our children to help ensure we don't just learn to survive in the 21st century but instead learn how to thrive. If we can get this right for ourselves and our kids, we are going to get some amazing rewards as a result.

The rise of the humans starts with us, and it starts now.

Learn More: The Rise of the Corporate Recruiter: Job Description, Salary Expectations, and Key Skills for 2020

The rest is here:
Putting the Humanity Back Into Technology: 10 Skills to Future Proof Your Career - HR Technologist

Insights Into Resveratrol Market 2020 With Top Players, Market Driving Forces and Complete Industry Estimated 2020-2026 – Jewish Life News

The latest report on the Global Resveratrol Market examines the various factors shaping the industry's growth trajectory. Primary and secondary research is used to determine development prospects and growth paths for the Resveratrol market at the global, regional, and country level. Historical, current, and forecast conditions, along with industry dynamics, competition, and growth constraints, are studied in detail. The report combines technological developments, market risks, opportunities, challenges, and niche industry segments.

The major companies profiled in the report are as follows:

Ci Yuan Biotechnology, JF-NATURAL, Xian Sinuote, Evolva, Jiangxi Hengxiang, Sabinsa, Naturex, Jiaxing Taixin Pharmaceutical, DSM, Maypro, Great Forest Biomedical, Xian Gaoyuan Bio-Chem, Kerui Nanhai, Chengdu Yazhong, InterHealth, Shanghai DND

Key market trends, prominent players, product portfolios, manufacturing cost analysis, product types, and pricing structures are presented. Crucial factors such as market dynamics, challenges, opportunities, and restraints are all examined.

Up-to-date market data lays out the competitive structure of the Resveratrol industry, helping players analyze that structure for growth and profitability. Notable features of the report include Resveratrol market share by product type, application, player, and region, along with profit estimates and consumption ratios for all market segments and sub-segments.

Key deliverables of the Global Resveratrol Research Report are listed below:

Revenue analysis for each application is covered.

Market share per Resveratrol application is forecast for 2020-2026, along with the corresponding consumption outlook.

Market drivers that will improve commercialization and expand the business landscape are explained.

Essential information on challenges, risks, SWOT analysis of top players, and market share is covered.

Consumption rates in the Resveratrol industry are covered for the major regions, namely North America, Europe, Asia-Pacific, MEA, South America, and the rest of the world.

Ask custom questions or request more information: https://reportscheck.biz/report/52848/global-resveratrol-industry-market-research-report-4/

Research Methodology of Resveratrol Market:

Primary and secondary research methods are used to gather information on the parent and peer Resveratrol markets. Industry experts across the value chain participate in validating the market size, revenue share, supply-demand situation, and other key findings. Top-down and bottom-up approaches are used to analyze the total market size and share. Key opinion leaders in the Resveratrol industry, such as marketing directors, VPs, CEOs, technology heads, and R&D managers, are interviewed to gather data on market demand.

For secondary data sources, information is gathered from company investor reports, annual reports, press releases, government and company databases, certified journals, publications, and various other third-party sources.

The table of contents is segmented as follows:

Report Overview: Product definition, overview, scope, and growth rate analysis by type, application, and region from 2020-2026 are covered.

Executive Summary: Essential information on industry trends, Resveratrol market size by region, and the corresponding growth rates is provided.

Profiling of Top Resveratrol Industry Players: All leading market players are analyzed based on gross margin, price, revenue, sales, and production, and their company details are covered.

Regional Analysis: Top regions and countries are analyzed to gauge Resveratrol business potential and presence based on market size by product type, application, and market forecast. The full analysis period runs from 2014-2026.

About Us:

ReportsCheck.biz strives to deliver a high-quality product by understanding client questions and providing accurate, thorough industry analysis. Our experienced research team studies each market in depth to deliver meaningful results, and we provide quality assurance for all market research and consulting needs.

Contact

ReportsCheck.biz

+1 831 679 3317

Email: [emailprotected]

Site: https://reportscheck.biz/

Continue reading here:
Insights Into Resveratrol Market 2020 With Top Players, Market Driving Forces and Complete Industry Estimated 2020-2026 - Jewish Life News

Global Trans Resveratrol Market to Take on Robust Growth by the End 2025 – TechNews.mobi

VertexMarketInsights.com has published new statistics on the market, titled Trans Resveratrol Market. To clarify its various aspects, the analysts study and elaborate on the market using qualitative and quantitative research techniques.

Graphs, tables, bar charts, and pie charts are presented in a polished manner so that clients can better understand the analysis. The report also looks at the techniques businesses in the Trans Resveratrol industry use to expand and rapidly grow their customer base.

Download an exclusive sample of the Trans Resveratrol Market premium report (#request-sample). Summary: ICRWorld's Trans Resveratrol market research report provides the newest industry data and future industry trends, allowing you to identify the products and end users driving revenue growth and profitability.

Leading Establishments (Key Companies): DSM, Evolva, InterHealth, Maypro, Laurus Labs, JF-NATURAL, Great Forest Biomedical, Shaanxi Ciyuan Biotech, Chengdu Yazhong, Sabinsa, Changsha Huir Biological-tech, Xian Gaoyuan Bio-Chem, Xian Sinuote

Different regions, such as the Americas, United States, Canada, Mexico, Brazil, APAC, China, Japan, Korea, Southeast Asia, India, Australia, Europe, Germany, France, UK, Italy, Russia, Spain, the Middle East & Africa, Egypt, South Africa, Israel, Turkey and the GCC countries, are covered to give summarized data on Trans Resveratrol production.

The global Trans Resveratrol market serves as a backbone for the growth of enterprises. To address the challenges, the report examines key factors such as drivers and opportunities, and restraints are considered in order to evaluate market risk.

Segments covered in the report

This report forecasts revenue growth at the global, regional, and country level, and analyses the market trends in each of the sub-segments from 2015 to 2025. For the purpose of this study, VertexMarketInsights has segmented the Trans Resveratrol market on the basis of type, end user, and region:

Type Outlook (Revenue in Million USD; 2015-2025): Synthetic, Plant Extract, Fermentation

End Use Outlook (Revenue in Million USD; 2015-2025): Dietary Supplement, Cosmetic, Food and Beverage

Trans Resveratrol Market Summary: This report includes estimates of market size by value (million US$) and volume. The estimation methodology validates the market size of the Trans Resveratrol industry and is used to estimate the size of various dependent submarkets in the overall market. Secondary research identifies the top players in the market, and their market shares have been determined through primary and secondary research. Each type is studied based on sales, Trans Resveratrol market share (%), revenue (million USD), price, and gross margin.

If you have any query, ask our experts (#inquiry-before-buying).

Report Objectives:

Target Audience:

Table of Content:

Global Trans Resveratrol Market Research Report 2020-2025

Chapter 1: Industry Overview

Chapter 2: Trans Resveratrol Market International and China Market Analysis

Chapter 3: Environment Analysis of Market.

Chapter 4: Analysis of Revenue by Classifications

Chapter 5: Analysis of Revenue by Regions and Applications

Chapter 6: Analysis of Trans Resveratrol Market Revenue Market Status.

Chapter 7: Analysis of Industry Key Manufacturers

Chapter 8: Conclusion of the Trans Resveratrol Market Industry 2025 Market Research Report.

Continued to TOC

For a more detailed PDF copy of the table of contents, describing the current value and volume of the market along with all other essential information, click the table-of-contents link (#table-of-contents).

Thanks for reading! You can also request custom information, such as a chapter-wise or region-specific study, as per your interest.

Tags: Global Trans Resveratrol Industry, Global Trans Resveratrol Market, Global Trans Resveratrol Market Growth, Global Trans Resveratrol Market Share, Trans Resveratrol

Continued here:
Global Trans Resveratrol Market to Take on Robust Growth by the End 2025 - TechNews.mobi

2019 Review: Resveratrol Market Growth Analysis and Market Sizing – The Market Journal

This intelligence report provides a comprehensive analysis of the Global Resveratrol Market, including an investigation of past progress, ongoing market scenarios, and future prospects. True-to-market data on the products, strategies, and market share of the leading companies in this particular market are presented. It's a 360-degree overview of the global market's competitive landscape. The report further predicts the size and valuation of the global market during the forecast period.

Some of the key players profiled in the study are:

DSM (Netherlands), Naturex (France), Evolva (Switzerland), Sabinsa (United States), InterHealth (United States), Maypro (United States), Chengdu Yazhong (China), JF-NATURAL (China), Jiangxi Hengxiang (China) and Great Forest Biomedical (China)

An increasing geriatric population is driving the resveratrol market. Resveratrol belongs to the stilbenoid group, which contains phenol rings connected to each other. It is found mostly in the skin or seeds of grapes and berries, which are used in the preparation of red wine. Resveratrol is an antioxidant that helps lower blood pressure. It also decreases cholesterol and increases HDL. In addition, it protects the brain from damage and is used to help prevent or treat different types of cancer.

Get Latest insights about acute features of the market (Free Sample Report + All Related Graphs & Charts) @ https://www.advancemarketanalytics.com/sample-report/64172-global-resveratrol-market

Market Drivers

Market Trend

Opportunities

Each segment and sub-segment is analyzed in the research report. The competitive landscape of the market has been elaborated by studying a number of factors such as the best manufacturers, prices, and revenues. The Global Resveratrol Market report is presented to readers in a logical, easy-to-follow format. Driving and restraining factors are listed in this study to help you understand the positive and negative aspects facing your business.

This study mainly helps companies understand which market segments, regions, or countries they should focus on in coming years to channel their efforts and investments and maximize growth and profitability. The report presents the market's competitive landscape and a consistent, in-depth analysis of the major vendors/key players in the market. Furthermore, the years considered for the study are as follows: historical years 2013-2017; base year 2018; forecast period 2019 to 2025 [unless otherwise stated].

Moreover, it will also include the opportunities available in micro-markets for stakeholders to invest in, a detailed analysis of the competitive landscape, and the product services of key players.

Enquire for customization in Report @ https://www.advancemarketanalytics.com/enquiry-before-buy/64172-global-resveratrol-market

The Global Resveratrol segments and market data breakdown are illuminated below. The study explores the product types of the Resveratrol market: Natural, Artificial

Key Applications/end-users of Global Resveratrol Market: Dietary supplements, Pharmaceutical, Personal care products, Others

Region Included are: North America, Europe, Asia Pacific, Oceania, South America, Middle East & Africa

Country Level Break-Up: United States, Canada, Mexico, Brazil, Argentina, Colombia, Chile, South Africa, Nigeria, Tunisia, Morocco, Germany, United Kingdom (UK), the Netherlands, Spain, Italy, Belgium, Austria, Turkey, Russia, France, Poland, Israel, United Arab Emirates, Qatar, Saudi Arabia, China, Japan, Taiwan, South Korea, Singapore, India, Australia and New Zealand etc.

Objectives of the Study

Read Detailed Index of full Research Study at @ https://www.advancemarketanalytics.com/reports/64172-global-resveratrol-market

Strategic Points Covered in Table of Content of Global Resveratrol Market:

Chapter 1: Introduction, market driving forces, objective of the study, and research scope of the Resveratrol market

Chapter 2: Exclusive Summary of the basic information on the Resveratrol Market.

Chapter 3: Displaying the Market Dynamics - Drivers, Trends and Challenges of Resveratrol

Chapter 4: Presenting the Resveratrol Market Factor Analysis - Porter's Five Forces, Supply/Value Chain, PESTEL Analysis, Market Entropy, Patent/Trademark Analysis.

Chapter 5: Displaying the market by Type, End User and Region, 2013-2018

Chapter 6: Evaluating the leading manufacturers of the Resveratrol market, covering the Competitive Landscape, Peer Group Analysis, BCG Matrix & Company Profiles

Chapter 7: Evaluating the market by segments, by countries and by manufacturers, with revenue share and sales by key countries in these various regions.

Chapters 8 & 9: Displaying the Appendix, Methodology and Data Sources

Buy the Latest Detailed Report @ https://www.advancemarketanalytics.com/buy-now?format=1&report=64172

Key questions answered

In short, this report gives you a clear perspective on every facet of the market without the need to refer to any other research report or data source. Our report gives you the facts about the past, present, and future of the market concerned.

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe, or Asia.

About Author:

Advance Market Analytics is a global leader in the market research industry, providing quantified B2B research to Fortune 500 companies on high-growth emerging opportunities that will impact more than 80% of worldwide companies' revenues.

Our analysts track high-growth markets with detailed statistical and in-depth analysis of market trends and dynamics that provide a complete overview of the industry. We follow an extensive research methodology coupled with critical insights into related industry factors and market forces to generate the best value for our clients. Using reliable primary and secondary data sources, our analysts and consultants derive informative and usable data suited to our clients' business needs. The research studies enable clients to meet varied market objectives, from global footprint expansion to supply chain optimization and from competitor profiling to M&As.

Contact Us:

Craig Francis (PR & Marketing Manager)
AMA Research & Media LLP
Unit No. 429, Parsonage Road, Edison, New Jersey, USA 08837
Phone: +1 (206) 317 1218
Email: sales@advancemarketanalytics.com

See the original post here:
2019 Review: Resveratrol Market Growth Analysis and Market Sizing - The Market Journal

BitCherry To Build a Trusted Distributed Business Ecosystem – Coinspeaker

Place/Date: February 5th, 2020 at 6:52 pm UTC. 3 min read. Contact: BitCherry. Source: BitCherry.

BitCherry is a blockchain infrastructure for commercial applications. It constructs the P2Plus point-to-point encrypted network protocol with new thinking about physical structures, and achieves highly extensible data architectures through hash graphs modified by relational graphs. BitCherry provides smart contracts, cross-chain consensus, and other operating mechanisms that reduce development costs and deliver an underlying public chain with high performance, high security, and high availability for blockchain business applications.

As the first scalable infrastructure worldwide based on IPv8 technology in the service of commercial applications, BitCherry focuses on comprehensive and in-depth development in the industry. With strong ecological resources, BitCherry pays great attention to business needs. By gathering research resources from the global blockchain technology community and industry, BitCherry will actively promote communication and collaboration, and accelerate the application of blockchain technology in various fields.

BitCherry, meanwhile, will realize value interaction via smart contracts that reflect real business activity on the blockchain.

In addition, BitCherry will build a new business model and a highly trusted distributed business ecosystem through cooperation across platforms, spaces, and fields, achieving high-speed circulation of ecological value and expanding the closed business loop.

In terms of technology, BitCherry's core data structure adopts a Hash Relationship Spectrum, which pushes the TPS of the public network as high as 100,000+, far exceeding the industry level. At the same time, BitCherry draws on years of internet engineering experience and builds its underlying encrypted communication architecture on the IPv8 protocol, realizing point-to-point encryption of information and guaranteeing the privacy and security of user data during interaction.

In addition, BitCherry focuses on the consensus layer. Combined with its original P2Plus network protocol, BitCherry was the first to propose the aBFT+PoUc consensus, which can effectively prevent network attacks while ensuring full equity among BitCherry nodes. In terms of development, BitCherry offers complete, developer-friendly tools that can be called from mainstream programming languages, along with support resources, allowing developers to reduce development costs and smooth the learning curve.

In terms of the upper-layer economy, BitCherry uses bit-u as the incentive mechanism to provide active users and user resources for DApps and the business ecology, and to fully guarantee the healthy and orderly development of business applications in the ecosystem.

Therefore, BitCherry is a mature public chain system with a deep technical background, strong privacy protections, and a smooth user experience. Because of this, BitCherry invested heavily in the security and enforceability of smart contracts to develop its online technology and development platform.

Thanks to this, BitCherry can fully integrate the technical resources, human resources, business resources, and various innovative resources of traditional enterprises, including blockchain technology coding and underlying architecture related to product traceability, supply chain finance, business consumption, asset digitalization, finance, e-commerce, cloud computing, and more.

In conclusion, BitCherry's P2Plus encrypted network protocol will greatly promote commercial value for society in the future, and any individual or organization can use the network to significantly improve their products and services.

Moreover, BitCherry has large communities in Singapore, South Korea, Russia, and other regions. Relying on a strong technical background and great support from communities at home and abroad, BitCherry has achieved rapid growth. BitCherry will continue to develop intently, explore business cooperation, accelerate the maturity of the underlying public chain, and greatly empower the zero-to-one process of enterprise blockchain adoption.

Originally posted here:

BitCherry To Build a Trusted Distributed Business Ecosystem - Coinspeaker

Building a Worker Co-op Ecosystem: Lessons from the Big Apple – Nonprofit Quarterly

What if a nonprofit wrote a report and city policy actually changed as a result?

Truth is, reports by nonprofits that advocate policy changes are written all the time, but most, even when full of good policy ideas, gather electronic dust.

But in New York City, in January 2014, the Federation of Protestant Welfare Agencies (FPWA) published a report, authored by its executive director Jennifer Jones Austin, titled Worker Cooperatives for New York City: A Vision for Addressing Income Inequality. There have been a lot of reports written on worker cooperatives over the past decade, including many by national nonprofits and one published by the Surdna Foundation. But none have been as influential as that 40-page report published in New York City in 2014.

The report, in its executive summary, made the following top-line policy recommendations:

By June, less than six months after the report was published, New York City, which had never before spent a dime to support worker cooperatives, had launched its Worker Cooperative Business Development Initiative, seeded with $1.2 million in city funds. In so doing, New York became, as Jake Blumgart wrote in Next City at the time, the first city in the United States with a line item in its budget specifically for the development and cultivation of worker cooperatives.

It was a big step in an astonishingly short period of time, even if the $1.2 million was a drop in the bucket of the $75 billion budget the city approved that year. Fortunately for worker co-op advocates, the program has been largely successful. City spending accordingly has increased over time to Fiscal Year 2020's allocation of $3.6 million.

Of course, it wasn't simply the report's policy logic that led City Council to respond so promptly. The who behind the report mattered, a lot. First, there was the FPWA itself, a large nonprofit which in 2014 had over $6 million in revenues. It mattered too that a leading city anti-poverty agency backed the campaign, rather than just worker co-ops themselves. Additionally, Jones Austin herself was an influential city political leader, whom then-mayor-elect Bill de Blasio tapped to co-chair his 60-person transition team. And FPWA was backed by its many coalition partners, who mobilized to support the budget allocation, testifying at a city council committee hearing held a month after the report came out. For example, at the hearing, Elizabeth Mendoza, a worker-owner of the Beyond Care childcare co-op incubated by the nonprofit Center for Family Life, located in Brooklyn's Sunset Park neighborhood, told the Council committee:

In 2008, I had the opportunity to begin working with the cooperative Beyond Care. My life changed completely: personally, professionally, and economically. The beginning of the cooperative was not easy. No one knew about our co-op; we did volunteer work at organizations and universities and often gave childcare in exchange for opportunities to market our group in the places we volunteered. I had basic English then. I have learned so much more. I have also learned to use computers. My salary is better. I work the amount of time I want to work. My first daughter will graduate from college in June. My youngest son is in third grade. The best benefit of all of this is giving my children the opportunity to have a better education.

Mendoza added that the co-op, which had begun with 25 members, had grown to 40 members, while wages had climbed from $10 an hour or less before the co-op formed to $16 an hour.

Beyond Care, of course, was one of a number of worker co-ops that had developed in New York City before 2014, even in the absence of policy support. Now that the city has supported worker co-op development for over five years, what can we say about the policys effectiveness?

The framework established largely followed the vision set forth in FPWA's report. The lead agency ended up being Small Business Services (SBS), one of the two city agencies named in the report. In its first-year report on what ended up being called the Worker Cooperative Business Development Initiative, SBS indicated that the initiative, which disbursed funds to ten organizations, aimed to share information with prospective entrepreneurs, support existing worker cooperatives, spur the creation of new worker cooperatives, and help small businesses transition into the worker cooperative model.

The initial ten nonprofits involved in product delivery included a couple of national nonprofits that support worker cooperatives (Democracy at Work Institute (DAWI) and ICA Group), a couple of local co-op-specific organizations (Bronx Cooperative Development Initiative and Green Worker Cooperatives), a worker co-op-focused loan fund (The Working World), the city co-op trade association (New York City Network of Worker Cooperatives), a nonprofit which had historically been the city's leading worker co-op developer (Center for Family Life), and three community-based nonprofits (FPWA, Make the Road New York, and Urban Justice Center).

Over time, the composition of the nonprofit partners has changed. Two of the three community-based nonprofits (FPWA and Make the Road New York) are no longer directly involved, and Urban Justice Center's role has been taken over by Takeroot Justice, which spun off from the organization. Meanwhile, five new nonprofits have been added to the mix, including two business-support nonprofits (CAMBA Small Business Services and Business Outreach Center Network), the City University of New York's community and economic development law clinic, and two anti-poverty nonprofits (Urban Upbound and Workers Justice Project).

Different nonprofits have played different roles. Some nonprofits focus on community outreach and education. For instance, over the past three years of the program, WCBDI-funded nonprofits have conducted educational workshops and trainings that have reached over 8,000 New Yorkers. On the WCBDI website, nonprofits' roles are set forth in five distinct categories: 1) groups that help with worker co-op startups; 2) groups that provide one-on-one support services (business plan development, accounting, etc.); 3) groups that provide worker co-ops with legal support; 4) groups that focus on converting existing businesses into worker co-ops (which, as NPQ has detailed, is an essential strategy for maintaining existing businesses as Baby Boomer owners retire); and 5) one group (The Working World) that helps finance worker co-ops.

While most of the direct project work has been done by and through the nonprofit partners, SBS has played an essential role in helping legitimize worker cooperatives in New York City as a respectable way of operating a business. For instance, in the first year of the program, SBS created and offered a course called Ten Steps to Starting a Worker Cooperative, with sessions held at NYC Business Solutions Centers in Lower Manhattan, Harlem, and the Bronx. In the first year of the program, the agency also created a two-page brochure on worker co-ops that remains available at the city's seven SBS offices and on its website.

The Big Apple's approach of working with multiple nonprofits reflected the coalition that had testified at City Council back in 2014 and had made passage of a city worker co-op support policy possible. At the same time, it is important to understand how the approach differed from past efforts. When the New York City policy was adopted, the leading philanthropic-backed US worker co-op development approach was the one pursued by the Cleveland Foundation, which had backed the launch of the Evergreen Cooperative network, a centralized model that aimed to launch a group of cooperatives from a single, well-funded incubator nonprofit.

New York City, perhaps by design but more likely due to the momentum created by the organizing coalition, took a different tack. Rather than make one big bet, it made a lot of small ones. This has led to far more rapid creation of cooperatives than what a centralized single-incubator model has been able to produce. The diverse network of nonprofits involved also likely helps reach the immigrant workforce that to date has been at the heart of New York City's worker co-op boom. According to the city's annual reports on the program, in the first year alone 21 new worker cooperatives were formed.

One challenge with the multiple-nonprofit approach, though, is that the co-ops by and large formed in New York City have been much smaller than the co-ops created through a centralized single-incubator model. While New York City's program in Fiscal Year 2015 supported 21 co-ops, these co-ops had a total of 141 worker-owners, an average of less than seven worker-owners per co-op. (By contrast, Evergreen has formed three co-ops over the course of a decade, but those three co-ops as of 2018 employed 220 people.)

Despite their small size, however, the New York City co-ops have displayed considerable staying power. As the city's Fiscal Year 2019 report indicates, "In its first year, the NYC Council distributed $1.2 million across 10 partner organizations who assisted in the creation of 21 worker-owned cooperatives. Fourteen (67 percent) of those businesses are still in operation, surpassing the national five-year survival rate for small businesses (about 50 percent)."

It is worth noting that New York City's model has had broad influence outside the Big Apple. A number of cities have, with tweaks, adopted what Michelle Camou and others have labeled the ecosystem approach. Madison, Wisconsin is one of the more prominent, allocating $3 million over five years to support worker co-op development through a process managed by the Madison Cooperative Development Coalition. Minneapolis, albeit at a smaller scale, has implemented a similar approach. In California, a policy passed in Berkeley last year aligns in a similar direction.

Meanwhile, efforts to strengthen the worker co-op ecosystem continue. Even as the effort in New York City has added more than 100 new worker-owners a year, these are still small numbers in the context of a city population that is close to 8.4 million people. A few years ago, a report by DAWI and the Oakland, California-based Project Equity highlighted three challenges:

For its part, the city's most recent report noted that the worker co-ops it had surveyed reported problems, despite city efforts, in winning city procurement contracts. In response, the city recommends stepped-up efforts to assist co-ops with language translation, with helping co-ops obtain certification as women- and/or people-of-color-owned businesses, and with accessing city bond and financing programs.

In thinking about policy, it is also important to consider that it often helps to focus co-op development in industries where co-ops have been most effective. Generally, as Melissa Hoover, who directs DAWI, pointed out last year, worker co-ops often do best in businesses that are value-driven and rely on teamwork. Co-ops in home healthcare and childcare are two prominent examples of this. As Hoover put it last year, "If there's something that says we should do this in a humane or human-centered way, [people] often come to cooperatives because it enacts their values across-the-board and it dovetails with the work they're trying to do."

Go here to see the original:

Building a Worker Co-op Ecosystem: Lessons from the Big Apple - Nonprofit Quarterly

Letters: Healthcare industry needs its own digital ecosystem – ModernHealthcare.com

The feature "Start me up" highlighted the investments that health systems are making in digital startups and apps. As a chief innovation officer for a large integrated system, I experience the value that these tools bring to providers, patients and members every day.

Yet, I also experience the difficulties of contracting, security reviews and integration with every new app or tool added to our system's digital ecosystem. Our investments in the newest apps and tools have resulted in an increasing digital debt due to a fragmented and cumbersome digital ecosystem that often increases provider workloads, fragments the patient experience and adds IT expense.

Despite these hurdles, health systems must digitally transform by investing in apps to improve care delivery, access and engagement. However, we need a more sophisticated digital ecosystem where apps can live, similar to an Android or iOS of healthcare.

A standardized ecosystem would create common requirements around security, privacy, data-sharing and contracting, which would dramatically alter the speed of integration and adoption of tools developed via venture investment. Simply put: health systems essentially are buying today's apps and putting them on a flip phone.

No single health system can viably fund its own iOS operating system and app store. Thus, we have a digital transformation problem that exceeds the capacity of any single system. It's a problem that feels about as insurmountable as high pharmaceutical costs. Yet, Civica Rx shows us what can happen when health systems come together to address a problem. What Civica is doing for prescription costs and manufacturing, like-minded systems can do for digital transformation.

Dr. Ries Robinson, Chief Innovation Officer, Presbyterian Healthcare Services, Albuquerque

Original post:

Letters: Healthcare industry needs its own digital ecosystem - ModernHealthcare.com

The new financial ecosystem of big banks and big tech – ValueWalk

Banks will be competing for market share against big technology companies over time if they do not create a shared ecosystem by partnering with a big tech company. Banks see creating ecosystems and partnerships with big tech as a natural fit and key to their long-term strategy. While traditional big tech companies such as Apple, and previously Amazon, have been hesitant to get involved in the financial industry because of regulations, they are now partnering with banks like Goldman Sachs on projects like the Apple Card. Banks will look to open architectures and leverage big tech for externalization and access to client market share.


At a macro level, banks need to re-platform their businesses and be more digital. Banks are very much in product silos and wedded to legacy technology, especially on the back end. By partnering with big tech companies and leveraging new technologies, they can deliver a better customer experience. The more financial services institutions create platforms, the more they will break out of their silos. Much like Amazon does, banks will use their brand to be customer-facing, and then create platforms and eco-systems of augmented services that utilize their platform and enable them to also contribute to revenue. Big Tech will white-label financial services to expand.

Amazon created a business of third-party providers that leverage Amazon's scale and technology platform of analytics and data to broaden their reach and serve a broader ecosystem under the Amazon brand. Amazon scaled by leveraging three things: user experience, data, and technology. Instead of focusing on growth in adjacencies, Amazon grew through customer demand management and a scalable platform, which allowed it to bolt on products and services that customers demanded. These three foundational elements of the banking platform - user experience, data, and technology - are all key to ensuring that efforts to develop such an ecosystem in financial services are sustainable.

Banks like Goldman have realized it's smarter, and more cost-effective in terms of customer acquisition costs, to partner with and leverage a customer platform like Amazon or Apple to sell financial services that their end clients will need, like loans, credit cards, and financial advice. It's unsurprising to see a forward-thinking bank like Goldman betting big on a banking-as-a-service business strategy. These firms believe they can offer the best behind-the-scenes solution to many non-financial companies as they start offering financial services directly to their clients. Additionally appealing to Goldman is the ability to co-mingle its data, AI, and liquidity with a big tech platform like Amazon to drive scale and personalization for end users.

Another example is where banks can partner with a travel agency so the agency becomes part of the bank's platform. This allows the bank to become more than a transaction facilitator and to have a personal interaction with the client when they are booking a trip.

Enabling banks to win in an open platform economy requires big data, APIs and business model invention, organizational reengineering, and marketing automation. Banks need to get out of their swim lane, whether through open banking (a European phenomenon enabled by APIs) or through bringing other types of partnerships and experiences to the ecosystem. Investing in ecosystem dynamics enables banks to play in non-banking industries. A platform approach can enable new value and revenue streams through partnerships, and opening up the bank's capabilities, driving real transformation.

About Publicis Sapient:

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled states, both in the way they work and the way they serve their customers. Publicis Sapient is the digital business transformation hub of Publicis Groupe.

The rest is here:

The new financial ecosystem of big banks and big tech - ValueWalk

To protect ecosystem, improve balance on critical regulatory panel – theday.com

Connecticut law requires the state to reduce greenhouse gas emissions by 80 percent from 2001 levels by the year 2050 and to do it without discouraging industry or weakening the state's economy. Intermediate goals, including a 45 percent reduction in the next 10 years, are just as ambitious.

The state's Comprehensive Energy Strategy wisely recognizes, however, that in an ecosystem nothing stands alone. Carrying out the mandate requires a string of different public and private tactics that will use energy more efficiently; generate it with fewer greenhouse gas emissions; and foster elements that balance out emissions.

Energy and the environment are naturally competing interests, but one environmental solution can also be the bane of another. It is perilously easy to undercut the balance while attempting to make progress in cutting emissions.

To provide expertise on what could happen to an ecosystem is why the Connecticut Siting Council is statutorily required to have two qualified ecologists on the board. The council's approval is needed for locating ("siting") electric generating, transmission and storage facilities.

Right now the board has two vacancies and one qualified ecologist. By law, the governor appoints five "public members" to the board, among them the two ecologists. Gov. Ned Lamont has yet to appoint at least one more. Energy production proposals are coming in thick and fast, however, and some may cause harm out of proportion to their benefits. The council needs all the expertise it can muster.

Solar panel field siting proposals, in particular, have become a significant subject for the council's agenda. The council has just received a request to reopen a proposal from Greenskies for solar paneling on Oil Mill Road in Waterford, which it denied in 2018. The citizen environmentalist group Save the River-Save the Hills has fought the proposal, which would clear 75 acres of woodland for 45,976 panels under the latest version.

An East Lyme property owner sued a Greenskies subsidiary over "virtual clearcutting" and siltation of his property and local streams. A member of the Niantic River Watershed Committee told The Day last fall that expertise was lacking in the review. Two more eastern Connecticut proposals are coming up. Quinebaug Solar LLC has asked to reopen its application to build a massive 50-megawatt solar voltaic field on 561 acres of 29 privately owned properties in Canterbury and Brooklyn. A much smaller, 1.95-megawatt proposal for 13 acres off Short Hills Road in Old Lyme has caught the attention of environmentalists, who want the siting council and the state Department of Energy and Environmental Protection to hear their viewpoints.

Michael W. Klemens, a former seven-year member of the siting council board, has been sounding alarms about the environmental impact of solar fields when there is clear-cutting, as in East Lyme and potentially in Old Lyme and Waterford, but even when the site is largely open fields. He asks why the state does not seek to put such developments along highways, for instance, or in other developed areas where the drainage and habitats are already artificial. It's a good question, and one that the siting council should be considering when asked for approvals.

When the council denied Greenskies' Waterford petition in 2018, it gave three reasons: impact on water quality, storm drainage and wildlife, including birds. What the council decides about the Oil Mill Road site should depend not only on what it can allow but also on what it should allow, in the big picture. And in a development as huge as the Quinebaug proposal, the effects would inevitably alter the ecology of a pristine part of Connecticut, a tiny state that can't afford to be giving pristine away.

Above all, don't make things worse. Governor Lamont, appoint one if not two more ecologists to the siting council, and hear their expertise along with that of the engineers and developers.

The Day editorial board meets regularly with political, business and community leaders and convenes weekly to formulate editorial viewpoints. It is composed of President and Publisher Tim Dwyer, Editorial Page Editor Paul Choiniere, Managing Editor Tim Cotter, Staff Writer Julia Bergman and retired deputy managing editor Lisa McGinley. However, only the publisher and editorial page editor are responsible for developing the editorial opinions. The board operates independently from the Day newsroom.

View post:

To protect ecosystem, improve balance on critical regulatory panel - theday.com

How to fall in love with the Apple ecosystem all over again — spend more money and buy new stuff – ZDNet

Regular readers will know that I am giving serious consideration to dumping the iPhone and making a switch to Android. Apple's ecosystem feels buggy and slow, and the Cupertino giant seems to be having trouble keeping up with the fixes. And then there's that constant fear that each update will bring some calamity related to performance or battery life or some other vital part of the system.


But I've discovered a way to fall in love with Apple again. It's easy. Get new hardware.

Must read: Apple's AirPods Pro are the best earbuds you can buy, but for all the wrong reasons

Over the past few weeks, I've been testing a lot of new Apple hardware -- the iPhone 11 Pro Max, the Apple Watch 5, and the AirPods Pro. And you know what, they're all good.

Really good.

So good that it feels like all my issues with the platform have evaporated.

Everything works, and everything feels tightly aligned, and like it was made to work together.

No, I'm not seriously suggesting that people do this, or that it makes sense to drop many dollars every year on hardware. Still, it's interesting to notice this side effect of Apple's aggressive upgrade cycles.

But I do know people who do this, and it is interesting to observe that they are much happier with their tech.

But then, if you are willing to spend thousands a year upgrading tech, you've likely successfully convinced yourself that this is a good move.

Is this a deliberate scheme on Apple's part? Sell us new shiny stuff, then gradually, over months, give us reasons to feel distressed with our once-loved devices.

No idea (although I have a hard time believing that this has escaped Apple's notice), but I have no doubt that this, combined with the fact that shifting platforms is not an easy endeavor, helps drive bountiful quarterly sales.

Apple's problem seems to be that it can't keep older hardware feeling good for long. Batteries wear out, and the silicon starts to groan under the weight of operating system updates and newer apps.

But I also know that come iOS 13, watchOS 7, and the slew of firmware updates that the AirPods Pro will undoubtedly get over the coming months, this hardware too would start to feel old, slow, and buggy, and the ecosystem would once again become fragmented and disjointed.

That would signal that it was, once again, time to get new hardware.

And so, the cycle of consumerism continues.

See the original post:

How to fall in love with the Apple ecosystem all over again -- spend more money and buy new stuff - ZDNet

Artificial intelligence requires trusted data, and a healthy DataOps ecosystem – ZDNet

Lately, we've seen many "x-Ops" management practices appear on the scene, all derived from DevOps, which seeks to coordinate the output of developers and operations teams into a smooth, consistent and rapid flow of software releases. Another emerging practice, DataOps, seeks to achieve a similarly smooth, consistent and rapid flow of data through enterprises. Like many things these days, DataOps is spilling over from the large internet companies, which process petabytes and exabytes of information on a daily basis.

Such an uninhibited data flow is increasingly vital to enterprises seeking to become more data-driven and scale artificial intelligence and machine learning to the point where these technologies can have strategic impact.

Awareness of DataOps is high. A recent survey of 300 companies by 451 Research finds 72 percent have active DataOps efforts underway, and the remaining 28 percent are planning to start over the coming year. A majority, 86 percent, are increasing their spend on DataOps projects over the next 12 months. Most of this spending will go to analytics, self-service data access, data virtualization, and data preparation efforts.

In the report, 451 Research analyst Matt Aslett defines DataOps as "The alignment of people, processes and technology to enable more agile and automated approaches to data management."

The catch is "most enterprises are unprepared, often because of behavioral norms -- like territorial data hoarding -- and because they lag in their technical capabilities -- often stuck with cumbersome extract, transform, and load (ETL) and master data management (MDM) systems," according to Andy Palmer and a team of co-authors in their latest report,Getting DataOps Right, published by O'Reilly. Across most enterprises, data is siloed, disconnected, and generally inaccessible. There is also an abundance of data that is completely undiscovered, of which decision-makers are not even aware.

Here are some of Palmer's recommendations for building and shaping a well-functioning DataOps ecosystem:

Keep it open: "The ecosystem in DataOps should resemble DevOps ecosystems in which there are many best-of-breed free and open source software and proprietary tools that are expected to interoperate via APIs." This also includes carefully evaluating and selecting from the raft of tools that have been developed by the large internet companies.

Automate it all: The collection, ingestion, organizing, storage, and surfacing of massive amounts of data at as close to a near-real-time pace as possible has become almost impossible for humans to manage. Let the machines do it, Palmer urges. Areas ripe for automation include "operations, repeatability, automated testing, and release of data." Look to the ways DevOps is facilitating the automation of the software build, test, and release process, he points out.
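
To make the idea of automated data testing concrete, here is a minimal sketch (not from Palmer's report) of a data quality check that could run automatically on every data release; the field names and rules are assumptions chosen purely for illustration.

```python
# Minimal sketch of an automated data quality gate, run on every data release.
# Field names ("customer_id", "email") and rules are illustrative assumptions.

def validate_records(records):
    """Return a list of human-readable problems found in a batch of records."""
    problems = []
    seen_ids = set()
    for i, rec in enumerate(records):
        if not rec.get("customer_id"):
            problems.append(f"row {i}: missing customer_id")
        elif rec["customer_id"] in seen_ids:
            problems.append(f"row {i}: duplicate customer_id {rec['customer_id']}")
        else:
            seen_ids.add(rec["customer_id"])
        if "@" not in rec.get("email", ""):
            problems.append(f"row {i}: malformed email")
    return problems

if __name__ == "__main__":
    sample = [
        {"customer_id": "c1", "email": "a@example.com"},
        {"customer_id": "c1", "email": "bad-email"},  # duplicate id, bad email
    ]
    # In a pipeline, a non-empty issue list would fail the release step.
    for issue in validate_records(sample):
        print(issue)
```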

Process data in both batch and streaming modes. While DataOps is about real-time delivery of data, there's still a place -- and reason -- for batch mode as well. "The success of Kafka and similar design patterns has validated that a healthy next-generation data ecosystem includes the ability to simultaneously process data from source to consumption in both batch and streaming modes," Palmer points out.
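
As a rough illustration of what "both batch and streaming" can look like in practice, the sketch below reads the same Kafka topic two ways using the kafka-python client; the topic name and broker address are assumptions, and error handling is omitted.

```python
# Sketch only: one Kafka topic consumed in streaming mode and in batch mode.
# Topic name and broker address are assumptions; requires the kafka-python package.
import json
from kafka import KafkaConsumer

BROKER = "localhost:9092"   # assumed broker address
TOPIC = "customer-events"   # assumed topic name

def stream_events():
    """Streaming mode: react to each event as it arrives."""
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKER,
        auto_offset_reset="latest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:  # blocks, yielding events in near real time
        print("stream:", message.value)

def batch_reprocess():
    """Batch mode: periodically replay the topic's history, e.g. for nightly aggregates."""
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKER,
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,  # stop iterating once the backlog is drained
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    events = [m.value for m in consumer]
    print("batch size:", len(events))
```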

Track data lineage: Trust in the data is the single most important element in a data-driven enterprise, and the enterprise may simply cease to function without it. That's why well-thought-out data governance and a metadata (data about data) layer are important. "A focus on data lineage and processing tracking across the data ecosystem results in reproducibility going up and confidence in data increasing," says Palmer.
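
As a toy illustration of lineage tracking (not taken from Palmer's report), the sketch below attaches a provenance trail to each record as it moves through processing steps; the source and step names are invented.

```python
# Toy sketch of record-level lineage: each processing step appends to a provenance trail.
# Source and step names are illustrative, not from the report.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Record:
    payload: dict
    lineage: list = field(default_factory=list)

    def stamp(self, step: str):
        """Append a lineage entry noting which step touched this record and when."""
        self.lineage.append({"step": step, "at": datetime.now(timezone.utc).isoformat()})

def ingest(raw: dict) -> Record:
    rec = Record(payload=raw)
    rec.stamp("ingest:crm_export")  # assumed source system name
    return rec

def cleanse(rec: Record) -> Record:
    rec.payload = {k: str(v).strip() for k, v in rec.payload.items()}
    rec.stamp("cleanse:trim_whitespace")
    return rec

record = cleanse(ingest({"name": "  Ada  "}))
print(record.payload)   # {'name': 'Ada'}
print(record.lineage)   # full trail of steps, supporting reproducibility audits
```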

Have layered interfaces: Everyone touches data in different ways. "Some power users need to access data in its raw form, whereas others just want to get responses to inquiries that are well formulated," Palmer says. That's why a layered set of services and design patterns is required for the different personas of users. Palmer outlines three approaches to meeting these multilayered requirements.

Business leaders are increasingly leaning on their technology leaders and teams to transform their organizations into data-driven digital entities that can react to events and opportunities almost instantaneously. The best way to accomplish this -- especially with the meager budgets and limited support that gets thrown out with this mandate -- is to align the way data flows from source to storage.

Go here to read the rest:

Artificial intelligence requires trusted data, and a healthy DataOps ecosystem - ZDNet

Preserve the Sacred Lands of the Greater Yellowstone Ecosystem – CounterPunch

Storm over the Gallatins. Photo: Jeffrey St. Clair.

Wilderness designation preserves many values. Designated wilderness is a storehouse for carbon and insurance against climate change. Wilderness preserves critical wildlife habitat and wildlife corridors. Wilderness provides for clean water and clean air. And, of course, designated wilderness protects the scenery and ecosystem integrity that support Montana's economy.

However, there is yet another value preserved and enhanced by wilderness designation. It demonstrates a commitment to the inherent reverence and spiritual significance of wildlands.

In every human culture, we find that wildlands are at the core of hallowed landscapes. Sacred lands are places where the usual activities of any society are limited, and people approach these places with respect, humility, and awe.

In every culture that I have reviewed, I have found that high mountains are revered terrain. Mount Olympus was the home of the gods to the ancient Greeks. The Zoroastrian culture revered Mount Damavand in Iran. Mount Fuji was venerated by the Shinto religion in Japan. Mount Sinai is central to Judaic traditions. The Incas of Peru thought mountains were portals to the gods. Machapuchare was a sublime Nepalese mountain worthy of a long pilgrimage to visit. Mount Kilimanjaro in Tanzania was fundamental to African tribal religious beliefs. The ancient Celts of the British Isles honored the forces of nature, and among their sacred mountains was Croagh Patrick in Ireland. The San Francisco Peaks were divine in the natural world of the Navajo. Closer to home is the Crow tribe's reverence for the Crazy Mountains by Livingston.

Every culture has a way of mountain worship. In American culture, we have hallowed landscapes as well. Designated wilderness, national parks, and such public spaces are our version of sacred lands.

A common denominator of these lands is that people generally did not live among the sacred lands, but they did visit. And when you visited sacred lands, you did so with respect.

In a sense, the Greater Yellowstone Ecosystem is one of our Nation's sacred places. The ecological integrity and the spiritual value of this ecosystem are still in jeopardy. As the population of Montana and the country continues to grow, these sacred places become even more critical to our society.

We have a chance to demonstrate our appreciation for the sacred places of the Greater Yellowstone Ecosystem by designating wild places like the Gallatin Range, Crazy Mountains, Pryor Mountains, Lionhead, and other roadless lands of the Custer Gallatin National Forest as wilderness.

Wilderness is our society's way of codifying self-restraint, humility, and appreciation for natural processes and landscapes.

The Custer Gallatin National Forest wildlands are essential to our culture, but they are also vital to the other creatures that reside on these lands, from grizzly bear, bighorn sheep, wolverine, elk, and trout on down to butterflies and other insects.

The opportunity may not come again. We, as a society, have an obligation and responsibility to preserve the sacred lands of the Greater Yellowstone Ecosystem. We can do this by supporting wilderness designation for the Custer Gallatin National Forest roadless lands.

Originally posted here:

Preserve the Sacred Lands of the Greater Yellowstone Ecosystem - CounterPunch

Organizations only protect 60% of their business ecosystem, Accenture finds – CIO Dive

Dive Brief:

Data privacy regulators consider the health of business cybersecurity programs when calculating fines. Companies face fines even if they have extensive cyber hygiene.

Regulators also consider how long it takes companies to recover when calculating fines. More than half of leaders experienced a breach lasting more than 24 hours, whereas 97% of non-leaders said the same, according to Accenture.

Any lag time in remediation increases a company's chance of fines under the General Data Protection Regulation or the California Consumer Privacy Act. While GDPR went into effect in 2018, most of its penalties are still in the "intent to fine" stage, leaving room for companies to negotiate with regulators.

Early detection is a company's best defense against a breach. However, fewer than one-fourth of non-leaders are able to detect a breach within a day, compared to 88% of leaders, according to Accenture.

Chart: Samantha Schwartz/CIO Dive, data from Accenture

Data lives in motion, flowing between business partners and security systems. Bad actors find holes in data aggregators, brokers, contractors, or other service providers that sit between customers and the companies they do business with.

Quest Diagnostics and LabCorp's data breach was caused by a weak link in their business ecosystem: their billing collector. The billing company was compromised for eight months and left the two companies answering to Congress. The companies' third-party risk management was in question; their internal security programs were not.

Only 15% of organizations have some degree of confidence in how they mitigate supply chain threats, according to Microsoft. Whitelisting, a mechanism for approving connections, is one solution for assessing third parties. With whitelisting, any transaction that is not explicitly approved is denied by default.
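Deny-by-default is conceptually simple; the sketch below, with made-up partner identifiers, shows the core idea of rejecting any connection that has not been explicitly approved.

```python
# Minimal sketch of a deny-by-default whitelist for third-party connections.
# Partner identifiers are made up for illustration.
APPROVED_PARTNERS = {"billing-vendor-01", "lab-courier-07"}


def is_allowed(partner_id: str) -> bool:
    # Anything not explicitly on the list is denied by default.
    return partner_id in APPROVED_PARTNERS


for partner in ("billing-vendor-01", "unknown-broker-99"):
    print(partner, "->", "allow" if is_allowed(partner) else "deny")
```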

More here:

Organizations only protect 60% of their business ecosystem, Accenture finds - CIO Dive

IBM: building a digital ecosystem to support the mine of the future – Mining Global – Mining News, Magazine and Website

By Daniel Brightmore. Feb 05, 2020, 6:46AM

"Technology, in its various stages of evolution, is our business at IBM," reflects Manish Chawla, Global Managing Director for Energy & Natural Resources. "We've been involved in mining for decades, and just like in any other industry we've re-invented our offerings to add services, software, data handling, cloud and AI capabilities. Our focus has progressed from IT and core functions to meet the needs of business transformation projects such as SAP implementations or process outsourcing, to support the mining industry in managing data as a strategic asset; helping the industry to capture, monetise and secure it."

IBM's portfolio features a set of offerings targeting enterprise & operations transformation, outsourcing, SAP implementations, and helping clients use their data to their specific strategic advantage. "Look at technologies such as blockchain for traceability in the supply chain," Chawla adds. "Today, we are a full-service partner focused on the employee experience, while using technology for transforming various functions across a mining organisation."

Chawla notes a recognition from the mining industry that technology can now solve specific problems from connectivity through to autonomous solutions. "Now we're able to harness the data, the C-suite can see the importance of digitisation and how it will drive the business in the future," he says. "A technology-savvy and enabled mining enterprise is critical for attracting and supporting the workforce of the future. How do you get people out of the unsafe conditions of underground mines in remote areas and make the industry attractive to a new generation? Technology holds the key."

A recent study by the World Economic Forum forecast that over the next decade the mining industry will create further value of $190bn from additional transformational measures. "When these strategies are executed in a more integrated fashion, inside-out and outside-in transformation, we believe businesses will be at a great advantage from humans and machines working together," explains Maxelino Nelson, Senior Executive for Industry Innovation, Global Solutions, & Business Strategy at IBM. "This will outperform humans or machines working on their own. It's a great opportunity for us and our mining clients to solve some of the societal challenges relating to sustainability while developing the mining sector's ecosystem to partner with IBM to truly transform the business in a more holistic way."

Nelson notes that over the past 5-10 years the digital transformation journeys IBM's clients have taken have been characterised by AI and experimentation with customer-facing apps; activities that have been driving the cloud during chapter one of a digital reinvention.

What will chapter two hold? "We believe industrial businesses are ready to move towards business reinvention: scaling digital and AI and embedding it in the business. It's about hybrid cloud, moving mission critical applications from experimentation to true end-to-end transformation. The key to winning is centred around what we at IBM call the Cognitive Enterprise."

With the initial trends of the first chapter maturing, Nelson maintains "we're on the cusp of the next big shift in the business architecture. It will be driven by the pervasive application of AI and cognitive technologies, combined with data, to the core processes and workflows across mining organisations alongside important functional areas such as finance, procurement, talent and supply chain. The results of this revolutionary change will be defined as the Cognitive Enterprise."

"Companies that get this journey right are on the way to being a Cognitive Enterprise," affirms Nelson. "In our experience, critical areas for natural resources industries to get right on this journey are openness and collaboration, integration, intelligent workflows and cultural skills. In a time of continued volatility and disruption, open innovation and co-creation are vital to be able to partner across ecosystems and learn from other industries to achieve fundamental transformation as 90% of the jobs in mining are changed, not necessarily replaced, through technology."

How is IBM helping companies embrace Mining 4.0 and support the move towards the digital mine of the future? "We've developed a data-driven productivity platform with Sandvik, a leading supplier of underground mining equipment. This partnership has seen us connect their assets, their equipment, to our cloud to be able to pull data off. The value proposition to a mining company is not only to get data from the Sandvik equipment, but also from other vendor feeds," explains Chawla. "Interoperability as well as the open data standard is critical for a mine operator. They get visibility to production information, help with equipment maintenance analytics and improved uptime."

Built on IBM technologies, Sandvik offers a platform for underground mine optimisation, both for production and data/maintenance related aspects. "We're also the primary data analytics platform and AI software services partner for Vale, where they have an AI centre of competence," Chawla reveals. "We're doing an extensive set of use cases with them, including route optimisation for trucks, testing safety use cases and optimisation of smelters." IBM is also working with Newmont Goldcorp to help them better understand their ore body, allowing them to reduce the time spent by geologists in analysis and data collection to determine where to guide the next drilling campaign. "We've reduced inaccuracy by 95% with the geology data platform that we call cognitive ore body discovery," says Chawla.

IBM is committed to supporting the sustainability efforts of mining operations across the globe. "By using intelligent workflows on the blockchain to address social sustainability in the context of the entire supply chain, miners can demonstrate social responsibility and also begin to build a culture of innovation," believes Chawla. "The work we are doing on the Responsible Sourcing Blockchain Network (RSBN) with RCS Global allows businesses to track cobalt from industrial and mining companies to ensure that they are working responsibly, whether it's in the Congo or other parts of the world, across the supply chain from mine to smelter to battery manufacturers and to automotive OEMs."

IBM is seeing many automotive OEMs joining the platform, along with key industrial-scale miners operating cobalt mines in Congo who wish to augment their use of OECD (Organisation for Economic Co-operation and Development) responsible supply chain guidelines. The company is looking to extend the network to other metals such as tin, tantalum, and gold, which are all important to the new economy emerging for minerals associated with electronics and EVs.

Chawla notes the demand, driven by the rise of EVs and electronics brands, for an active, working and open democratic network to ascertain responsible sourcing and to support artisanal miners in operating safely and in a fair-trade manner.

IBM is also working with MineHub, a mining and metals trading platform, helping it streamline operations with various business partners across the mining ecosystem. "The MineHub platform is not a market-maker; it allows buyers and sellers to agree on trade. It comes into play once the trade has been set and the terms have been agreed," explains Chawla. "This helps to improve the operational efficiencies, logistics and financing, while concentrating the supply chain from the mine to the buyer."

MineHub has been working in collaboration with IBM and other industry participants across a value chain that includes the likes of ING Bank, leading precious metals firm Ocean Partners and Capstone Mining. MineHub also features clearing houses, refiners, smelters, financiers and other providers like Kimura, along with royalty holders and streamers such as Lincoln. A tier one miner is also set to trial the platform. Chawla notes they are all benefiting from the efficiencies of the platform, each providing key pieces of information to these transactions.

When it comes to digital innovation across the mining panorama, Chawla says it's still a challenge to ensure all parties are aligned so that everyone benefits. "It's important to get centre-led IT and overall C-suite leadership both working towards the improvement of operating assets," he says. "With much of the work we do, we also think hard about the experience of frontline employees and incorporate this into the design of the solutions to ease adoption. We've taken this approach with Sandvik, where we've done design sessions at the mine site with shift supervisors, truck drivers and mine managers."

A key obstacle to overcome in order to successfully integrate digital innovation is access from a network perspective and being able to capture the data. "Many companies are upgrading their networks and 5G is exploding," notes Chawla. Allied to this, he believes measured interoperability is vital. "Mining companies operate differing fleets from a range of vendors with equipment right across the value chain. Each vendor is pitching their own solutions. Do they go with one of the vendors? Or do they go with developing their own platform? And then, will the vendors open up and share the data that the mine operators and ourselves can leverage in partnership? This complex ecosystem becomes a challenge. If it's approached in collaboration with interoperability in mind, then you can accelerate. But that is a continued two steps forward, three steps back kind of situation."

IBM is pushing forward in 2020 to meet its goals around driving innovative solutions for the energy and natural resources industries. "We want to infuse more data and AI capabilities into their operations to take them live," pledges Chawla. "We will be continuing our work on three specific new platforms to further enhance the idea of ecosystems coming together to drive tangible outcomes for our clients in all the vectors of mining." Chawla's team also plan to nurture and scale IBM's cybersecurity offering to secure operating technology and systems. "As more plants, more mines, and more equipment get connected, the cyber threat increases, so we're pleased with the tremendous progress we're already making to secure operations as they grow."

Nelson confirms IBM is currently working with a large oil and gas super-major on a potential partnership to co-create a digital mining services integrated platform. This platform is centred around developing a different model for how mining companies consume digital solutions and services, and how mining providers and solutions developers can make them available to the industry. "It is a game changer and it shows how upbeat and interesting the mining industry has become to wider industries."

Read more:

IBM: building a digital ecosystem to support the mine of the future - Mining Global - Mining News, Magazine and Website