Podcast of the Week: TWIML AI Podcast – 9to5Mac

During the COVID-19 pandemic, I decided that I wanted to use the time at home to invest in myself. One of the things that challenged me in a recent episode of Business Casual was Mark Cuban's discussion of the role artificial intelligence will play in the future, along with his recommendations for tools to learn more. He mentioned some Coursera courses, so I am currently working my way through some of their AI training, but he also mentioned an AI-focused podcast called the TWIML AI Podcast that I added to my podcast subscription list.

9to5Mac's Podcast of the Week is a weekly recommendation of a podcast you should add to your subscription list.

TWIML (This Week in Machine Learning and AI) is a perfect way to hear from industry experts about how machine learning and AI will change our world. I plan to work through the back catalog soon, but the newest episodes have been informative. I particularly enjoyed this episode with Cathy Wu, Gilbert W. Winslow Career Development Assistant Professor in the Department of Civil and Environmental Engineering at MIT, where they discussed simulating the future of traffic.

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. By sharing and amplifying the voices of a broad and diverse spectrum of machine learning and AI researchers, practitioners, and innovators, our programs help make ML and AI more accessible, and enhance the lives of our audience and their communities.

TWIML has its origins in This Week in Machine Learning & AI, a podcast Sam launched in mid-2016 to a small but enthusiastic reception. Fast forward three years, and the TWIML AI Podcast is now a leading voice in the field, with over five million downloads and a large and engaged community following. Our offerings now include online meetups and study groups, conferences, and a variety of educational content.

Subscribe to the TWIML AI Podcast on Apple Podcasts, Spotify, Castro, Overcast, Pocket Casts, and RSS.

Don't forget about the great lineup of podcasts on the 9to5 Network.

Read more here:
Podcast of the Week: TWIML AI Podcast - 9to5Mac

Machine Learning as a Service (MLaaS) Market | Outlook and Opportunities in Grooming Regions with Forecast to 2029 – Jewish Life News

Documenting the industry development of the Machine Learning as a Service (MLaaS) market, the report concentrates on the segments that hold a massive market share in 2020, both in terms of volume and value, with top-country data, manufacturers, suppliers, in-depth research on market dynamics, an export research report, and a forecast to 2029.

As per the report, the Machine Learning as a Service (MLaaS) market is anticipated to gain substantial returns while registering a profitable annual growth rate during the forecast period. The global MLaaS market research report takes a chapter-wise approach in explaining the dynamics and trends in the MLaaS industry. The report also provides the industry's growth as a CAGR over the forecast period to 2029.
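
For context, the CAGR such reports cite is simply the compound annual growth rate formula. A minimal sketch of the calculation, using made-up placeholder numbers rather than any figure from the report:

```python
# Illustrative only: compound annual growth rate (CAGR) between two market sizes.
# The start/end values are made-up placeholders, not figures from the report.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Return CAGR as a fraction, e.g. 0.25 for 25% per year."""
    return (end_value / start_value) ** (1 / years) - 1

if __name__ == "__main__":
    market_2020 = 1.0   # hypothetical market size in 2020 (e.g. $1B)
    market_2029 = 8.0   # hypothetical market size in 2029 (e.g. $8B)
    print(f"CAGR 2020-2029: {cagr(market_2020, market_2029, 9):.1%}")
```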

A deep analysis of the microeconomic and macroeconomic factors affecting the growth of the market is also discussed in this report. The report includes information on ongoing demand and supply forecasts. It offers a broad platform presenting numerous opportunities for different businesses, firms, associations, and start-ups, and contains authenticated estimates for growing globally by competing with one another and providing better, more agreeable services to clients. It also presents in-depth future innovations of the MLaaS market, with a SWOT analysis by type, application, and region to understand the strengths, weaknesses, opportunities, and threats facing these businesses.

Get a Sample Report for More Insightful Information (use an official email ID to get higher priority): https://market.us/report/machine-learning-as-a-service-mlaas-market/request-sample/

***[Note: Our complimentary sample report includes a brief introduction to the synopsis, TOC, list of tables and figures, competitive landscape and geographic segmentation, plus innovation and future developments based on the research methodology.]

An Evaluation of the Machine Learning as a Service (MLaaS) Market:

The report is a detailed competitive outlook including Machine Learning as a Service (MLaaS) market updates, future growth, business prospects, forthcoming developments, and future investments in the forecast to 2029. The region-wise analysis of the MLaaS market is done in the report, covering revenue, volume, size, value, and other valuable data. The report mentions a brief overview of the manufacturer base of this industry, which comprises companies such as Google, IBM Corporation, Microsoft Corporation, Amazon Web Services, BigML, FICO, Yottamine Analytics, Ersatz Labs, Predictron Labs, H2O.ai, AT&T, and Sift Science.

Segmentation Overview:

Product Type Segmentation:

Software Tools, Cloud and Web-based Application Programming Interface (APIs), Other

Application Segmentation:

Manufacturing, Retail, Healthcare and Life Sciences, Telecom, BFSI, Other (Energy and Utilities, Education, Government)

To know more about how the report uncovers exhaustive insights | Enquire here: https://market.us/report/machine-learning-as-a-service-mlaas-market/#inquiry

Key Highlights of the Machine Learning as a Service (MLaaS) Market:

The fundamental details of the Machine Learning as a Service (MLaaS) industry, such as the product definition, product segmentation, pricing, demand and supply statistics, and a variety of related statements, are covered in this report.

The comprehensive study of the MLaaS market, based on development opportunities, growth-restraining factors, and the probability of investment, helps anticipate market growth.

The study of emerging MLaaS market segments alongside existing segments will help readers prepare their marketing strategies.

The study presents the major market drivers that will expand the MLaaS market's commercialization landscape.

The study provides a complete analysis of these growth propellers and how they will positively impact the profit matrix of the industry.

The study presents information about the pivotal challenges restraining market expansion.

The market review for the global market covers region, share, and size.

The study outlines the important tactics of top players in the market.

Other points covered in the MLaaS report are driving factors, limiting factors, new upcoming opportunities, encountered challenges, technological advancements, flourishing segments, and major trends of the market.

Check the Table of Contents of This Report @ https://market.us/report/machine-learning-as-a-service-mlaas-market//#toc

Get in Touch with Us:

Mr. Benni Johnson

Market.us (Powered By Prudour Pvt. Ltd.)

Send Email: [emailprotected]

Address: 420 Lexington Avenue, Suite 300, New York City, NY 10170, United States

Tel: +1 718 618 4351

Website: https://market.us

Our Trending Blog: https://foodnbeveragesmarket.com/

Link:
Machine Learning as a Service (MLaaS) Market | Outlook and Opportunities in Grooming Regions with Forecast to 2029 - Jewish Life News

AQR’s former machine-learning head says quant funds should start ‘nowcasting’ to react to real-time data instead of trying to predict the future – One…

Go here to see the original:
AQR's former machine-learning head says quant funds should start 'nowcasting' to react to real-time data instead of trying to predict the future - One...

Alex Garland on ‘Devs,’ free will and quantum computing – Engadget

Garland views Amaya as a typical Silicon Valley success story. In the world of Devs, it's the first company that manages to mass produce quantum computers, allowing them to corner that market. (Think of what happened to search engines after Google debuted.) Quantum computing has been positioned as a potentially revolutionary technology for things like healthcare and encryption, since it can tackle complex scenarios and data sets more effectively than traditional binary computers. Instead of just processing inputs one at a time, a quantum machine would theoretically be able to tackle an input in multiple states, or superpositions, at once.
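
As a rough illustration of that idea (a textbook toy, not anything from the show): a qubit's state is a vector of complex amplitudes, and a Hadamard gate puts a qubit that starts in |0> into an equal superposition of |0> and |1>. A minimal NumPy sketch, assuming only standard definitions:

```python
import numpy as np

# Textbook single-qubit example: |0> represented as a vector of complex amplitudes.
ket0 = np.array([1.0, 0.0], dtype=complex)

# Hadamard gate: maps |0> to the equal superposition (|0> + |1>) / sqrt(2).
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0
probabilities = np.abs(state) ** 2  # Born rule: probability = |amplitude|^2

print("amplitudes:", state)          # [0.707..., 0.707...]
print("P(0), P(1):", probabilities)  # [0.5, 0.5]
```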

By mastering this technology, Amaya unlocks a completely new view of reality: The world is a system that can be decoded and predicted. It proves to them that the world is deterministic. Our choices don't matter; we're all just moving along predetermined paths until the end of time. Garland is quick to point out that you don't need anything high-tech to start asking questions about determinism. Indeed, it's something that's been explored since Plato's allegory of the cave.

"What I did think, though, was that if a quantum computer was as good at modeling quantum reality as it might be, then it would be able to prove in a definitive way whether we lived in a deterministic state," Garland said. "[Proving that] would completely change the way we look at ourselves, the way we look at society, the way society functions, the way relationships unfold and develop. And it would change the world in some ways, but then it would restructure itself quickly."

The sheer difficulty of coming up with something -- anything -- that's truly spontaneous and isn't causally related to something else in the universe is the strongest argument in favor of determinism. And it's something Garland aligns with personally -- though that doesn't change how he perceives the world.

"Whether or not you or I have free will, both of us could identify lots of things that we care about," he said. "There are lots of things that we enjoy or don't enjoy. Or things that we're scared of, or we anticipate. And all of that remains. It's not remotely affected by whether we've got free will or not. What might be affected is, I think, our capacity to be forgiving in some respects. And so, certain kinds of anti-social or criminal behavior, you would start to think about in terms of rehabilitation, rather than punishment. Because then, in a way, there's no point punishing someone for something they didn't decide to do."

More here:
Alex Garland on 'Devs,' free will and quantum computing - Engadget

RAND report finds that, like fusion power and Half Life 3, quantum computing is still 15 years away – The Register

Quantum computers pose an "urgent but manageable" threat to the security of modern communications systems, according to a report published Thursday by the influential US RAND Corporation.

The non-profit think tank's report, "Securing Communications in the Quantum Computing Age: Managing the Risks to Encryption," urges the US government to act quickly because quantum code-breaking could be a thing in, say, 12-15 years.

"If adequate implementation of new security measures has not taken place by the time capable quantum computers are developed, it may become impossible to ensure secure authentication and communication privacy without major, disruptive changes," said Michael Vermeer, a RAND scientist and lead author of the report, in a statement.

Experts in the field of quantum computing like University of Texas at Austin computer scientist Scott Aaronson have proposed an even hazier timeline.

Noting that the quantum computers built by Google and IBM have been in the neighborhood of 50 to 100 quantum bits (qubits), and that running Shor's algorithm to break public-key RSA cryptosystems would probably take several thousand logical qubits (meaning millions of physical qubits, due to error correction), Aaronson recently opined, "I don't think anyone is close to that, and we have no idea how long it will take."
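
To make the scale of that gap concrete, here is a back-of-the-envelope sketch. The constants are illustrative assumptions (a few thousand logical qubits for RSA-2048 and roughly a thousand physical qubits per logical qubit), not figures taken from Aaronson or the RAND report:

```python
# Back-of-the-envelope estimate; every constant below is an illustrative assumption.
logical_qubits_for_rsa2048 = 4000   # "several thousand" logical qubits
physical_per_logical = 1000         # assumed error-correction overhead per logical qubit
todays_devices = 100                # roughly the 50-100 qubit machines from Google and IBM

physical_needed = logical_qubits_for_rsa2048 * physical_per_logical
print(f"Physical qubits needed: ~{physical_needed:,}")    # ~4,000,000
print(f"Gap vs. today's ~{todays_devices}-qubit devices: "
      f"{physical_needed // todays_devices:,}x")           # ~40,000x
```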

But other boffins, like University of Chicago computer science professor Diana Franklin, have suggested Shor's algorithm might be a possibility in a decade and a half.

So even though quantum computing poses a theoretical threat to most current public-key cryptography, and less risk to lattice-based, symmetric, private-key, post-quantum, and quantum cryptography, there's not much consensus about how and when this threat might manifest itself.

Nonetheless, the National Institute of Standards and Technology, the US government agency overseeing tech standards, has been pushing the development of quantum-resistant cryptography since at least 2016. Last year it winnowed a list of proposed post-quantum crypto (PQC) algorithms down to a field of 26 contenders.

The RAND report anticipates quantum computers capable of crypto-cracking will be functional by 2033, with the caveat that experts propose dates both before and after that. PQC algorithm standards should gel within the next five years, with adoption not expected until the mid-to-late 2030s, or later.

But the amount of time required for the US and the rest of the world to fully implement those protocols to mitigate the risk of quantum crypto cracking may take longer still. Note that the US government is still running COBOL applications on ancient mainframes.

"If adequate implementation of PQC has not taken place by the time capable quantum computers are developed, it may become impossible to ensure secure authentication and communication privacy without major, disruptive changes to our infrastructure," the report says.

RAND's report further notes that consumer lack of awareness and indifference to the issue means there will be no civic demand for change.

Hence, the report urges federal leadership to protect consumers, perhaps unaware that Congress is considering the EARN-IT Act, which critics characterize as an "all-out assault on encryption."

"If we act in time with appropriate policies, risk reduction measures, and a collective urgency to prepare for the threat, then we have an opportunity for a future communications infrastructure that is as safe as or more safe than the current status quo, despite overlapping cyber threats from conventional and quantum computers," the report concludes.

It's worth recalling that a 2017 National Academy of Sciences, Engineering, and Medicine report, "Global Health and the Future Role of the United States," urged the US to maintain its focus on global health security and to prepare for infectious disease threats.

That was the same year nonprofit PATH issued a pandemic prevention report urging the US government to "maintain its leadership position backed up by the necessary resources to ensure continued vigilance against emerging pandemic threats, both at home and abroad."

The federal government's reaction to COVID-19 is a testament to the impact of reports from external organizations. We can only hope that the threat of crypto-cracking quantum computers elicits a response that's at least as vigorous.

Read the original here:
RAND report finds that, like fusion power and Half Life 3, quantum computing is still 15 years away - The Register

Making Sense of the Science and Philosophy of Devs – The Ringer

Let me welcome you the same way Stewart welcomes Forest in Episode 7 of the Hulu miniseries Devs: with a lengthy, unattributed quote.

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future, just like the past, would be present before its eyes.

It's a passage that sounds as if it could have come from Forest himself. But it's not from Forest, or Katie, or even -- as Katie might guess, based on her response to Stewart's Philip Larkin quote -- Shakespeare. It's from the French scholar and scientist Pierre-Simon Laplace, who wrote the idea down at the end of the Age of Enlightenment, in 1814. When Laplace imagined an omniscient intellect -- which has come to be called Laplace's demon -- he wasn't even saying something original: Other thinkers beat him to the idea of a deterministic, perfectly predictable universe by decades and centuries (or maybe millennia).

All of which is to say that despite the futuristic setting and high-tech trappings of Devs -- the eight-part Alex Garland opus that will reach its finale next week -- the series' central tension is about as old as the abacus. But there's a reason the debate about determinism and free will keeps recurring: It's an existential question at the heart of human behavior. Devs doesn't answer it in a dramatically different way than the great minds of history have, but it does wrap up ancient, brain-breaking quandaries in a compelling (and occasionally kind of confusing) package. Garland has admitted as much, acknowledging, "None of the ideas contained here are really my ideas, and it's not that I am presenting my own insightful take. It's more I'm saying some very interesting people have come up with some very interesting ideas. Here they are in the form of a story."

Devs is a watchable blend of a few engaging ingredients. It's a spy thriller that pits Russian agents against ex-CIA operatives. It's a cautionary, sci-fi polemic about a potentially limitless technology and the hubris of big tech. Like Garland's previous directorial efforts, Annihilation and Ex Machina, it's also a striking aesthetic experience, a blend of brutalist compounds, sleek lines, lush nature, and an exciting, unsettling soundtrack. Most of all, though, it's a meditation on age-old philosophical conundrums, served with a garnish of science. Garland has cited scientists and philosophers as inspirations for the series, so to unravel the riddles of Devs, I sought out some experts whose day jobs deal with the dilemmas Lily and Co. confront in fiction: a computer science professor who specializes in quantum computing, and several professors of philosophy.

There are many questions about Devs that we won't be able to answer. How high is Kenton's health care premium? Is it distracting to work in a lab lit by a perpetually pulsing, unearthly golden glow? How do Devs programmers get any work done when they could be watching the world's most riveting reality TV? Devs doesn't disclose all of its inner workings, but by the end of Episode 7, it's pulled back the curtain almost as far as it can. The main mystery of the early episodes -- what does Devs do? -- is essentially solved for the viewer long before Lily learns everything via Katie's parable of the pen in Episode 6. As the series proceeds, the spy stuff starts to seem incidental, and the characters' motivations become clear. All that remains to be settled is the small matter of the intractable puzzles that have flummoxed philosophers for ages.

Here's what we know. Forest (Nick Offerman) is a tech genius obsessed with one goal: being reunited with his dead daughter, Amaya, who was killed in a car crash while her mother was driving and talking to Forest on the phone. (He'd probably blame himself for the accident if he believed in free will.) He doesn't disguise the fact that he hasn't moved on from Amaya emotionally: He names his company after her, uses her face for its logo, and, in case those tributes were too subtle, installs a giant statue of her at corporate HQ. (As a metaphor for the way Amaya continues to loom over his life, the statue is overly obvious, but at least it looks cool.) Together with a team of handpicked developers, Forest secretly constructs a quantum computer so powerful that, by the end of the penultimate episode, it can perfectly predict the future and reverse-project the past, allowing the denizens of Devs to tune in to any bygone event in lifelike clarity. It's Laplace's demon made real, except for the fact that its powers of perception fail past the point at which Lily is seemingly scheduled to do something that the computer can't predict.

I asked Dr. Scott Aaronson, a professor of computer science at the University of Texas at Austin (and the founding director of the school's Quantum Information Center), to assess Devs' depiction of quantum computing. Aaronson's website notes that his research concentrates on the capabilities and limits of quantum computers, so he'd probably be one of Forest's first recruits if Amaya were an actual company. Aaronson, whom I previously consulted about the plausibility of the time travel in Avengers: Endgame, humored me again and watched Devs despite having been burned before by Hollywood's crimes against quantum mechanics. His verdict, unsurprisingly, is that the quantum computing in Devs -- like that of Endgame, which cites one of the same physicists (David Deutsch) that Garland said inspired him -- is mostly hand-wavy window dressing.

"A quantum computer is a device that uses a central phenomenon of quantum mechanics -- namely, interference of amplitudes -- to solve certain problems with dramatically better scaling behavior than any known algorithm running on any existing computer could solve them," Aaronson says. If you're wondering what amplitudes are, you can read Aaronson's explanation in a New York Times op-ed he authored last October, shortly after Google claimed to have achieved a milestone called quantum supremacy -- the first use of a quantum computer to make a calculation far faster than any non-quantum computer could. According to Google's calculations, the task that its Sycamore microchip performed in a little more than three minutes would have taken 100,000 of the swiftest existing conventional computers 10,000 years to complete. That's a pretty impressive shortcut, and we're still only at the dawn of the quantum computing age.
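
For readers wondering what "interference of amplitudes" looks like in the simplest case, here is a toy sketch (not Google's experiment): two computational paths contribute complex amplitudes to the same outcome, and the probability comes from the squared magnitude of their sum, so paths can cancel instead of adding the way ordinary probabilities would:

```python
import numpy as np

# Toy interference: two "paths" contribute complex amplitudes to the same outcome.
path_a = (1 / np.sqrt(2)) * np.exp(1j * 0)        # amplitude via path A
path_b = (1 / np.sqrt(2)) * np.exp(1j * np.pi)    # amplitude via path B (opposite phase)

amplitude = path_a + path_b                        # amplitudes add first...
print("P(outcome) with interference:", abs(amplitude) ** 2)   # ~0.0 (destructive)
print("P(outcome) if probabilities simply added:",
      abs(path_a) ** 2 + abs(path_b) ** 2)                    # 1.0
```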

However, that stat comes with a caveat: Quantum computers aren't better across the board than conventional computers. "The applications where a quantum computer dramatically outperforms classical computers are relatively few and specialized," Aaronson says. "As far as we know today, they'd help a lot with prediction problems only in cases where the predictions heavily involve quantum-mechanical behavior." Potential applications of quantum computers include predicting the rate of a chemical reaction, factoring huge numbers and possibly cracking the encryption that currently protects the internet (using Shor's algorithm, which is briefly mentioned on Devs), and solving optimization and machine learning problems. "Notice that reconstructing what Christ looked like on the cross is not on this list," Aaronson says.

In other words, the objective that Forest is trying to achieve doesn't necessarily lie within the quantum computing wheelhouse. "To whatever extent computers can help forecast plausible scenarios for the past or future at all (as we already have them do for, e.g., weather forecasting), it's not at all clear to what extent a quantum computer even helps -- one might simply want more powerful classical computers," Aaronson says.

Then there's the problem that goes beyond the question of quantum vs. conventional: Either kind of computer would require data on which to base its calculations, and the data set that the predictions and retrodictions in Devs would demand is inconceivably detailed. "I doubt that reconstructing the remote past is really a computational problem at all, in the sense that even the most powerful science-fiction supercomputer still couldn't give you reliable answers if it lacked the appropriate input data," Aaronson says, adding, "As far as we know today, the best that any computer (classical or quantum) could possibly do, even in principle, with any data we could possibly collect, is to forecast a range of possible futures, and a range of possible pasts. The data that it would need to declare one of them the real future or the real past simply wouldn't be accessible to humankind, but rather would be lost in microscopic puffs of air, radiation flying away from the earth into space, etc."

In light of the unimaginably high hurdle of gathering enough data in the present to reconstruct what someone looked or sounded like during a distant, data-free age, Forest comes out looking like a ridiculously demanding boss. We get it, dude: You miss Amaya. But how about patting your employees on the back for pulling off the impossible? "The idea that chaos, the butterfly effect, sensitive dependence on initial conditions, exponential error growth, etc. mean that you run your simulation 2000 years into the past and you end up with only a blurry, staticky image of Jesus on the cross rather than a clear image, has to be, like, the wildest understatement in the history of understatements," Aaronson says. As for the future, he adds, "Predicting the weather three weeks from now might be forever impossible."

On top of all that, Aaronson says, "The Devs headquarters is sure a hell of a lot fancier (and cleaner) than any quantum computing lab that I've ever visited." (Does Kenton vacuum between torture sessions?) At least the computer more or less looks like a quantum computer.

OK, so maybe I didn't need to cajole a quantum computing savant into watching several hours of television to confirm that there's no way we can watch cavepeople paint. Garland isn't guilty of any science sins that previous storytellers haven't committed many times. Whenever Aaronson has advised scriptwriters, they've only asked him to tell them which sciencey words would make their preexisting implausible stories sound somewhat feasible. "It's probably incredibly rare that writers would let the actual possibilities and limits of a technology drive their story," he says.

Although the show name-checks real interpretations of quantum mechanics -- Penrose, pilot wave, many-worlds -- it doesn't deeply engage with them. The pilot wave interpretation holds that only one future is real, whereas many-worlds asserts that a vast number of futures are all equally real. But neither one would allow for the possibility of perfectly predicting the future, considering the difficulty of accounting for every variable. Garland is seemingly aware of how far-fetched his story is, because on multiple occasions, characters like Lily, Lyndon, and Stewart voice the audience's unspoken disbelief, stating that something or other isn't possible. Whenever they do, Katie or Forest is there to tell them that it is. Which, well, fine: Like Laplace's demon, Devs is intended as more of a thought experiment than a realistic scenario. As Katie says during her blue pill-red pill dialogue with Lily, "Go with it."

We might as well go along with Garland, because any scientific liberties he takes are in service of the series' deeper ideas. As Aaronson says, "My opinion is that the show isn't really talking about quantum computing at all -- it's just using it as a fancy-sounding buzzword. Really it's talking about the far more ancient questions of determinism vs. indeterminism and predictability vs. unpredictability." He concludes, "The plot of this series is one that would've been totally, 100 percent familiar to the ancient Greeks -- just swap out the quantum computer for the Delphic Oracle." Aaronson -- who says he sort of likes Devs in spite of its quantum technobabble -- would know: He wrote a book called Quantum Computing Since Democritus.

Speaking of Democritus, let's consult a few philosophers on the topic of free will. One of the most mind-bending aspects of Devs' adherence to hard determinism -- the theory that human behavior is wholly dictated by outside factors -- is its insistence that characters can't change their behavior even if they've seen the computer's prediction of what they're about to do. As Forest asks Katie, "What if one minute into the future we see you fold your arms, and you say, 'Fuck the future. I'm a magician. My magic breaks tram lines. I'm not going to fold my arms.' You put your hands in your pockets, and you keep them there until the clock runs out."

It seems as if she should be able to do what she wants with her hands, but Katie quickly shuts him down. "Cause precedes effect," she says. "Effect leads to cause. The future is fixed in exactly the same way as the past. The tram lines are real." Of course, Katie could be wrong: A character could defy the computer's prediction in the finale. (Perhaps that's the mysterious unforeseeable event.) But we've already seen some characters fail to exit the tram. In an Episode 7 scene -- which, as Aaronson notes, is highly reminiscent of the VHS scene in Spaceballs -- we see multiple members of the Devs team repeat the same statements that they've just heard the computer predict they would make a split second earlier. They can't help but make the prediction come true. Similarly, Lily ends up at Devs at the end of Episode 7, despite resolving not to.

Putting aside the implausibility of a perfect prediction existing at all, does it make sense that these characters couldn't deviate from their predicted course? Yes, according to five professors of philosophy I surveyed. Keep in mind what Garland has cited as a common criticism of his work: that "the ideas I talk about are sophomoric because they're the kinds of things that people talk about when they're getting stoned in their dorm rooms." We're about to enter the stoned zone.

"In this story, [the characters] are in a totally deterministic universe," says Ben Lennertz, an assistant professor of philosophy at Colgate University. "In particular, the watching of the video of the future itself has been determined by the original state of the universe and the laws. It's not as if things were going along and the person was going to cross their arms, but then a non-deterministic miracle occurred and they were shown a video of what they were going to do. The watching of the video and the person's reaction is part of the same progression as the scene the video is of." In essence, the computer would have already predicted its own predictions, as well as every character's reaction to them. Everything that happens was always part of the plan.

Ohio Wesleyan University's Erin Flynn echoes that interpretation. "The people in those scenes do what they do not despite being informed that they will do it, but (in part) because they have been informed that they will do it," Flynn says. (Think of Katie telling Lyndon that he's about to balance on the bridge railing.) "This is not to say they will be compelled to conform, only that their knowledge presumably forms an important part of the causal conditions leading to their actions. When the computer sees the future, the computer sees that what they will do is necessitated in part by this knowledge. The computer would presumably have made different predictions had people never heard them."

Furthermore, adds David Landy of San Francisco State University, the fact that we see something happen one way doesn't mean that it couldn't have happened otherwise. "Suppose we know that some guy is going to fold his arms," Landy says. "Does it follow that he lacks the ability to not fold his arms? Well, no, because what we usually mean by 'has the ability to not fold his arms' is that if things had gone differently, he wouldn't have folded his arms. But by stipulating at the start that he is going to fold his arms, we also stipulate that things aren't going to go differently. But it can remain true that if they did go differently, he would not have folded his arms. So, he might have that ability, even if we know he is not going to exercise it."

If your head has started spinning, you can see why the Greeks didn't settle this stuff long before Garland got to it. And if it still seems strange that Forest seemingly can't put his hands in his pockets, well, what doesn't seem strange in the world of Devs? "We should expect weird things to happen when we are talking about a very weird situation," Landy says. "That is, we are used to people reliably doing what they want to do. But we have become used to that by making observations in a certain environment: one without time travel or omniscient computers. Introducing those things changes the environment, so we shouldn't be surprised if our usual inferences no longer hold."

Here's where we really might want to mime a marijuana hit. Neal Tognazzini of Western Washington University points out that one could conceivably appear to predict the future by tapping into a future that already exists. "Many philosophers reject determinism but nevertheless accept that there are truths about what will happen in the future, because they accept a view in the philosophy of time called eternalism, which is (roughly) the block universe idea -- past, present, and future are all parts of reality," Tognazzini says. This theory says that the past and the future exist some temporal distance from the present -- we just haven't yet learned to travel between them. Thus, Tognazzini continues, "You can accept eternalism about time without accepting determinism, because the first is just a view about whether the future is real whereas the second is a view about how the future is connected to the past (i.e., whether there are tram lines)."

According to that school of thought, the future isn't what has to happen, it's simply what will happen. If we somehow got a glimpse of our futures from the present, it might appear as if our paths were fixed. But those futures actually would have been shaped by our freely chosen actions in the interim. As Tognazzini says, "It's a fate of our own making -- which is just to say, no fate at all."

If we accept that the members of Devs know what they're doing, though, then the computer's predictions are deterministic, and the past does dictate the future. That's disturbing, because it seemingly strips us of our agency. But, Tognazzini says, "Even then, it's still the case that what we do now helps to shape that future. We still make a difference to what the future looks like, even if it's the only difference we could have made, given the tram lines we happen to be on. Determinism isn't like some force that operates independently of what we want, making us marionettes. If it's true, then it would apply equally to our mental lives as well, so that the future that comes about might well be exactly the future we wanted."

This is akin to the compatibilist position espoused by David Hume, which seeks to reconcile the seemingly conflicting concepts of determinism and free will. As our final philosopher, Georgetown University's William Blattner, says, "If determinism is to be plausible, it must find a way to save the appearances, in this case, explain why we feel like we're choosing, even if at some level the choice is an illusion." The compatibilist perspective concedes that there may be only one possible future, but, Flynn says, insists "that there is a difference between being causally determined (necessitated) to act and being forced or compelled to act. As long as one who has seen their future does not do what has been predicted because they were forced to do it (against their will, so to speak), then they will still have done it freely."

In the finale, we'll find out whether the computer's predictions are as flawless and inviolable as Katie claims. We'll also likely learn one of Devs' most closely kept secrets: What Forest intends to do with his perfect model of Amaya. The show hasn't hinted that the computer can resurrect the dead in any physical fashion, so unless Forest is content to see his simulated daughter on a screen, he may try to enter the simulation himself. In Episode 7, Devs seemed to set the stage for such a step; as Stewart said, "That's the reality right there. It's not even a clone of reality. The box contains everything."

Would a simulated Forest, united with his simulated daughter, be happier inside the simulation than he was in real life, assuming he's aware he's inside the simulation? The philosopher Robert Nozick explored a similar question with his hypothetical experience machine. The experience machine would stimulate our brains in such a way that we could supply as much pleasure as we wanted, in any form. It sounds like a nice place to visit, and yet most of us wouldn't want to live there. That reluctance to enter the experience machine permanently seems to suggest that we see some value in an authentic connection to reality, however unpleasurable. "Thinking I'm hanging out with my family and friends is just different from actually hanging out with my family and friends," Tognazzini says. "And since I think relationships are key to happiness, I'm skeptical that we could be happy in a simulation."

If reality were painful enough, though, the relief from that pain might be worth the sacrifice. "Suppose, for instance, that the real world had become nearly uninhabitable or otherwise full of misery," Flynn says. "It seems to me that life in a simulation might be experienced as a sanctuary. Perhaps one's experience there would be tinged with sadness for the lost world, but I'm not sure knowing it's a simulation would necessarily keep one from being happy in it." Forest still seems miserable about Amaya IRL, so for him, that trade-off might make sense.

What's more, if real life is totally deterministic, then Forest may not draw a distinction between life inside and outside of his quantum computer. "If freedom is a critical component of fulfillment, then it's hard to see how we could be fulfilled in a simulation," Blattner says. But for Forest, freedom isn't an option anywhere. "Something about the situation seems sad, maybe pathetic, maybe even tragic," Flynn says. "But if the world is a true simulation in the manner described, why not just understand it as the ability to visit another real world in which his daughter exists?"

Those who subscribe to the simulation hypothesis believe that what we think of as real life -- including my experience of writing this sentence and your experience of reading it -- is itself a simulation created by some higher order of being. In our world, it may seem dubious that such a sophisticated creation could exist (or that anything or anyone would care to create it). But in Forest's world, a simulation just as sophisticated as real life already exists inside Devs -- which means that what Forest perceives as real life could be someone else's simulation. If he's possibly stuck inside a simulation either way, he might as well choose the one with Amaya (if he has a choice at all).

Garland chose to tell this story on TV because on the big screen, he said, it would have been "slightly too truncated." On the small screen, it's probably slightly too long: Because we've known more than Lily all along, what she's learned in later episodes has rehashed old info for us. Then again, Devs has felt familiar from the start. If Laplace got a pass for recycling Cicero and Leibniz, we'll give Garland a pass for channeling Laplace. What's one more presentation of a puzzle that's had humans flummoxed forever?

See the rest here:
Making Sense of the Science and Philosophy of Devs - The Ringer

Technology alliances will help shape our post-pandemic future – C4ISRNet

There's no question the post-corona world will be very different. How it will look depends on actions the world's leaders take. Decisions made in coming months will determine whether we see a renewed commitment to a rules-based international order, or a fragmented world increasingly dominated by authoritarianism. Whoever steps up to lead will drive the outcome.

China seeks the mantle of global leadership. Beijing is exploiting the global leadership vacuum, the fissures between the United States and its allies, and the growing strain on European unity. The Chinese Communist Party has aggressively pushed a narrative of acting swiftly and decisively to contain the virus, building goodwill through mask diplomacy, and sowing doubts about the virus origin to deflect blame for the magnitude of the crisis and to rewrite history. Even though the results so far are mixed, the absence of the United States on the global stage provides Beijing with good momentum.

Before the pandemic, the world's democracies already faced their gravest challenge in decades: the shift of economic power to illiberal states. By late 2019, autocratic regimes accounted for a larger share of global GDP than democracies for the first time since 1900. As former U.K. foreign secretary David Miliband recently observed, "liberal democracy is in retreat." How the United States and like-minded partners respond post-pandemic will determine if that trend holds.

There is urgency to act -- the problem is now even more acute. The countries that figure out how to quickly restart and rebuild their economies post-pandemic will set the course for the 21st century. It is not only economic heft that is of concern: political power and military might go hand in hand with economic dominance.

At the center of this geostrategic and economic competition are technologies -- artificial intelligence, quantum computing, biotechnology, and 5G -- that will be the backbone of the 21st century economy. Leadership and ongoing innovation in these areas will confer critical economic, political, and military power, and the opportunity to shape global norms and values. The pre-crisis trajectory of waning clout in technology development, standards-setting, and proliferation posed an unacceptable and avoidable challenge to the interests of the world's leading liberal-democratic states.

The current crisis accentuates this even more: it lays bare the need to rethink and restructure global supply chains; the imperative of ensuring telecommunication networks are secure, robust, and resilient; the ability to surge production of critical materiel; and the need to deter and counteract destructive disinformation. This is difficult and costly, and it is best done in concert.

Bold action is needed to set a new course that enhances the ability of the world's democracies to out-compete increasingly capable illiberal states. The growing clout of authoritarian regimes is not rooted in better strategy or more effective statecraft. Rather, it lies in the fractious and complacent nature of the world's democracies and leading technology powers.

In response, a new multilateral effort -- an alliance framework -- is needed to reverse these trends. The world's technology and democracy leaders -- the G7 members and countries like Australia, the Netherlands, and South Korea -- should join forces to tackle matters of technology policy. The purpose of this initiative is three-fold: one, regain the initiative in the global technology competition through strengthened cooperation between like-minded countries; two, protect and preserve key areas of competitive technological advantage; and three, promote collective norms and values around the use of emerging technologies.

Such cooperation is vital to effectively deal with the hardest geopolitical issues that increasingly center on technology, from competing economically to building deterrence to combating disinformation. This group should not be an exclusive club: it should also work with countries like Finland and Sweden to align policies on telecommunications; Estonia, Israel, and New Zealand for cyber issues; and states around the world to craft efforts to counter the proliferation of Chinese surveillance technology and offer sound alternatives to infrastructure development, raw material extraction, and loans from China that erode their sovereignty.

The spectrum of scale and ambition this alliance can tackle is broad. Better information sharing would yield benefits on matters like investment screening, counterespionage, and fighting disinformation. Investments in new semiconductor fabs could create more secure and diverse supply chains. A concerted effort to promote open architecture in 5G could usher in a paradigm shift for an entire industry. Collaboration will also be essential to avoiding another pandemic calamity.

Similar ideas are percolating among current and former government leaders in capitals such as Tokyo, Berlin, London, and Washington, with thought leaders like Jared Cohen and Anja Manuel, and in think tanks around the world. The task at hand is to collate these ideas, find the common ground, and devise an executable plan. This requires tackling issues like organizational structure, governance, and institutionalization. It also requires making sure that stakeholders from government, industry, and civil society from around the world provide input to make the alliance framework realistic and successful.

No one country can expect to achieve its full potential by going it alone, not even the United States. An alliance framework for technology policy is the best way to ensure that the world's democracies can effectively compete economically, politically, and militarily in the 21st century. The links between the world's leading democracies remain strong despite the challenges of the current crisis. These relationships are an enduring and critical advantage that no autocratic country can match. It is time to capitalize on these strengths, retake the initiative, and shape the post-corona world.

Martijn Rasser is a senior fellow at the Center for a New American Security.

Follow this link:
Technology alliances will help shape our post-pandemic future - C4ISRNet

Automated Machine Learning is the Future of Data Science – Analytics Insight

Data is the fuel that powers ongoing digital transformation efforts, and organizations everywhere are searching for ways to derive as much insight as possible from it. The resulting demand for advanced predictive and prescriptive analytics has, in turn, prompted a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

However, such highly skilled data scientists are costly and hard to find. In fact, they are such a valuable asset that the phenomenon of the citizen data scientist has recently emerged to help close the skills gap. A complementary role rather than a direct substitution, citizen data scientists lack deep data science expertise, but they are capable of producing models using state-of-the-art diagnostic and predictive analytics. This capability is partly due to the advent of accessible new technologies, such as automated machine learning (AutoML), which now automate many of the tasks once performed by data scientists.

The objective of AutoML is to shorten the cycle of trial and error and experimentation. It churns through an enormous number of models, and the hyperparameters used to configure those models, to determine the best model for the data presented. This is a dull and tedious activity for any human data scientist, no matter how talented. AutoML platforms can perform this repetitive task more quickly and thoroughly, arriving at a solution faster and more effectively.
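
As a rough sketch of the kind of search loop being described -- not any particular vendor's AutoML product -- the following scikit-learn snippet tries a couple of model families and hyperparameter settings and keeps the best by cross-validation (assumes scikit-learn is installed):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# A tiny "search space": model families plus the hyperparameters to try for each.
candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.01, 0.1, 1, 10]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200], "max_depth": [3, None]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5, scoring="accuracy")
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"Best model: {best_model}")
print(f"Cross-validated accuracy: {best_score:.3f}")
```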

The ultimate value of AutoML tools is not to replace data scientists but to offload their routine work and streamline their process, freeing them and their teams to focus their energy and attention on the parts of the process that require a higher level of reasoning and creativity. As their priorities change, it is important for data scientists to understand the full life cycle so they can shift their energy to higher-value tasks and sharpen the skills that further elevate their value to their companies.

Airbnb, for example, continually looks for ways to improve its data science workflow. A fair share of its data science projects involve machine learning, and many pieces of this workflow are tedious. Airbnb uses machine learning to build customer lifetime value (LTV) models for guests and hosts; these models allow the company to improve its decision making and its interactions with the community.

Likewise, the company has found AutoML tools most valuable for regression and classification problems involving tabular datasets, although the state of this area is progressing rapidly. In short, it is believed that in certain cases AutoML can immensely increase a data scientist's productivity, often by an order of magnitude, and Airbnb has used it in many ways.

Unbiased presentation of challenger models: AutoML can rapidly produce a plethora of challenger models using the same training set as your incumbent model, which helps the data scientist pick the best model family.

Identifying target leakage: Because AutoML builds candidate models extremely fast and in an automated way, data leakage can be spotted earlier in the modeling lifecycle.

Diagnostics: As mentioned earlier, canonical diagnostics can be generated automatically, such as learning curves, partial dependence plots, and feature importances.

Tasks like exploratory data analysis, pre-processing of data, hyper-parameter tuning, model selection, and putting models into production can be automated to some degree with an automated machine learning system.

Companies have moved toward enhancing predictive power by coupling big data with complex automated machine learning. AutoML, which uses machine learning to create better AI, is promoted as offering a way to democratize machine learning by allowing firms with limited data science expertise to create analytical pipelines capable of handling sophisticated business problems.

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline consists of the following: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyper-parameter tuning. The significant expertise and time required to execute these steps mean there is a high barrier to entry.
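
A minimal sketch of such a pipeline in scikit-learn (illustrative only, not an example from the article), chaining pre-processing, feature selection, and an algorithm so the whole sequence can be fit and evaluated as one object:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data; a real project would load its own tabular dataset.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# One object covering several of the steps listed above:
# pre-processing -> feature selection -> algorithm (with hyper-parameters set inline).
pipeline = Pipeline([
    ("scale", StandardScaler()),                           # data pre-processing
    ("select", SelectKBest(f_classif, k=10)),              # feature selection
    ("model", LogisticRegression(C=1.0, max_iter=5000)),   # algorithm choice
])

scores = cross_val_score(pipeline, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```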

In an article published on Forbes, Ryohei Fujimaki, the founder and CEO of dotData, contends that the discussion misses the point if the emphasis on AutoML systems is on replacing or diminishing the role of the data scientist. After all, the longest and most challenging part of a typical data science workflow revolves around feature engineering: connecting data sources to a list of desired features that are then evaluated against different machine learning algorithms.

Success with feature engineering requires a high level of domain expertise to identify the ideal features through a tedious, iterative process. Automation on this front allows even citizen data scientists to create streamlined use cases by leveraging their domain expertise. In short, this democratization of the data science process opens the way for new classes of developers, offering organizations a competitive advantage with minimal investment.

Here is the original post:
Automated Machine Learning is the Future of Data Science - Analytics Insight

Google's AutoML Zero lets the machines create algorithms to avoid human bias – The Next Web

It looks like Google's working on some major upgrades to its autonomous machine learning development tool AutoML. According to a pre-print research paper authored by several of the big G's AI researchers, AutoML Zero is coming, and it's bringing evolutionary algorithms with it.

AutoML is a tool from Google that automates the process of developing machine learning algorithms for various tasks. It's user-friendly, fairly simple to use, and completely open-source. Best of all, Google's always updating it.

In its current iteration, AutoML has a few drawbacks. You still have to manually create and tune several algorithms to act as building blocks for the machine to get started. This allows it to take your work and experiment with new parameters in an effort to optimize what you've done. Novices can get around this problem by using pre-made algorithm packages, but Google's working to automate this part too.

Per the Google team's pre-print paper:

It is possible today to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks. We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.

Despite the vastness of this space, evolutionary search can still discover two-layer neural networks trained by backpropagation. These simple neural networks can then be surpassed by evolving directly on tasks of interest, e.g. CIFAR-10 variants, where modern techniques emerge in the top algorithms, such as bilinear interactions, normalized gradients, and weight averaging.

Moreover, evolution adapts algorithms to different task types: e.g., dropout-like techniques appear when little data is available.

In other words: Google's figured out how to tap evolutionary algorithms for AutoML using nothing but basic math concepts. The developers created a learning paradigm in which the machine will spit out 100 randomly generated algorithms and then work to see which ones perform the best.

After several generations, the algorithms become better and better until the machine finds one that performs well enough to evolve. In order to generate novel algorithms that can solve new problems, the ones that survive the evolutionary process are tested against various standard AI problems, such as computer vision.
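
To make the evolutionary loop described above concrete, here is a deliberately tiny sketch in plain Java. It evolves a short program of basic math operations to approximate a target function, using random generation, selection of the fittest and mutation. It illustrates evolutionary search in general, under simplifying assumptions of my own; it is not the AutoML-Zero search space or Google's code.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Toy evolutionary search: evolve a short program of basic math operations
// so that it approximates the target function f(x) = x * x + 1.
// Illustrative only; this is not the AutoML-Zero search space.
public class TinyEvolutionSketch {

    enum Op { ADD_ONE, DOUBLE, SQUARE, NEGATE }

    static final Random RNG = new Random(7);

    // Apply a program (a sequence of ops) to an input value.
    static double run(List<Op> program, double x) {
        double v = x;
        for (Op op : program) {
            switch (op) {
                case ADD_ONE: v = v + 1; break;
                case DOUBLE:  v = v * 2; break;
                case SQUARE:  v = v * v; break;
                case NEGATE:  v = -v;    break;
            }
        }
        return v;
    }

    // Lower is better: squared error against the target on a few sample points.
    static double error(List<Op> program) {
        double err = 0;
        for (double x = -2; x <= 2; x += 0.5) {
            double diff = run(program, x) - (x * x + 1);
            err += diff * diff;
        }
        return err;
    }

    static List<Op> randomProgram() {
        List<Op> p = new ArrayList<>();
        int len = 1 + RNG.nextInt(4);
        for (int i = 0; i < len; i++) {
            p.add(Op.values()[RNG.nextInt(Op.values().length)]);
        }
        return p;
    }

    // Mutation: copy the parent and replace one operation at random.
    static List<Op> mutate(List<Op> parent) {
        List<Op> child = new ArrayList<>(parent);
        child.set(RNG.nextInt(child.size()), Op.values()[RNG.nextInt(Op.values().length)]);
        return child;
    }

    public static void main(String[] args) {
        // Start from a population of 100 randomly generated programs.
        List<List<Op>> population = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            population.add(randomProgram());
        }

        for (int generation = 0; generation < 30; generation++) {
            // Keep the fittest quarter and refill the population with mutants.
            population.sort(Comparator.comparingDouble(TinyEvolutionSketch::error));
            List<List<Op>> next = new ArrayList<>(population.subList(0, 25));
            while (next.size() < 100) {
                next.add(mutate(next.get(RNG.nextInt(25))));
            }
            population = next;
        }

        population.sort(Comparator.comparingDouble(TinyEvolutionSketch::error));
        System.out.println("Best program: " + population.get(0)
                + ", error " + error(population.get(0)));
    }
}

Selection here simply keeps the top quarter of each generation and refills the population with mutated copies; real systems use richer mutation operators and evaluate candidates on actual learning tasks rather than a fixed target function.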

Read: Why the quickest path to human-level AI may be letting it evolve on its own

Perhaps the most interesting byproduct of Google's quest to completely automate the act of generating algorithms and neural networks is the removal of human bias from our AI systems. Without us there to determine what the best starting point for development is, the machines are free to find things we'd never think of.

According to the researchers, AutoML Zero already outperforms its predecessor and similar state-of-the-art machine learning-generation tools. Future research will involve setting a more narrow scope for the AI and seeing how well it performs in more specific situations using a hybrid approach that creates algorithms with a combination of Zeros self-discovery techniques and human-curated starter libraries.

Published April 14, 2020 20:00 UTC

See more here:
Googles AutoML Zero lets the machines create algorithms to avoid human bias - The Next Web

Nothing to hide? Then add these to your ML repo, Papers with Code says DEVCLASS – DevClass

In a bid to make advancements in machine learning more reproducible, ML resource and Facebook AI Research (FAIR) appendage Papers With Code has introduced a code completeness checklist for machine learning papers.

It is based on the best practices the Papers with Code team has seen in popular research repositories and the Machine Learning Reproducibility Checklist which Joelle Pineau, FAIR Managing Director, introduced in 2019, as well as some additional work Pineau and other researchers did since then.

Papers with Code was started in 2018 as a hub for newly published machine learning papers that come with source code, offering researchers an easy-to-monitor platform to keep up with the current state of the art. In late 2019 it became part of FAIR "to further accelerate our growth", as founders Robert Stojnic and Ross Taylor put it back then.

As part of FAIR, the project will get a bit of a visibility push since the new checklist will also be used in the submission process for the 2020 edition of the popular NeurIPS conference on neural information processing systems.

The ML code completeness checklist is used to assess code repositories based on the scripts and artefacts that have been provided within it to enhance reproducibility and enable others to more easily build upon published work. It includes checks for dependencies (so that those looking to replicate a paper's results have some idea what is needed in order to succeed), training and evaluation scripts, pre-trained models, and results.
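
As a purely illustrative example (the file names below are hypothetical, not prescribed by Papers with Code), a repository that ticks all five boxes might be laid out roughly like this:

paper-code/
  README.md           - lists dependencies and reports the paper's results
  requirements.txt    - pinned dependencies for recreating the environment
  scripts/train.sh    - training script
  scripts/evaluate.sh - evaluation script
  checkpoints/        - pre-trained model weights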

While all of these seem like useful things to have, Papers with Code also tried using a somewhat scientific approach to make sure they really are indicators for a useful repository. To verify that, they looked for correlations between the number of fulfilled checklist items and the star-rating of a repository.

Their analysis showed that repositories that hit all the marks got higher ratings, implying that the checklist score is indicative of higher-quality submissions and should therefore encourage researchers to comply in order to produce useful resources. However, they simultaneously admitted that marketing and the state of documentation might also play into a repo's popularity.

They nevertheless went on to recommend laying out the five elements mentioned and linking to external resources, which is always a good idea. Additional tips for publishing research code can be found in the project's GitHub repository or the report on the NeurIPS reproducibility program.

More:
Nothing to hide? Then add these to your ML repo, Papers with Code says DEVCLASS - DevClass

Covid-19 Detection With Images Analysis And Machine Learning – Elemental

// Note: this is an excerpt from a larger Deeplearning4j (DL4J) example. It assumes the
// usual DL4J/ND4J/DataVec imports and that variables such as DATA_PATH, height, width,
// channels, batchSize, rngseed, randNumGen and log are defined earlier in the class.

// We have just two outputs, positive and negative, according to our directories
int outputNum = 2;
int numEpochs = 1;

/* The downloadData() method downloads the data and stores it in java's tmpdir.
   15MB download compressed; it will take 158MB of space when uncompressed.
   The data can be downloaded manually here */

// Define the file paths
File trainData = new File(DATA_PATH + "/covid-19/training");
File testData = new File(DATA_PATH + "/covid-19/testing");

// Define the FileSplit(PATH, ALLOWED FORMATS, random)
FileSplit train = new FileSplit(trainData, NativeImageLoader.ALLOWED_FORMATS, randNumGen);
FileSplit test = new FileSplit(testData, NativeImageLoader.ALLOWED_FORMATS, randNumGen);

// Extract the parent path as the image label
ParentPathLabelGenerator labelMaker = new ParentPathLabelGenerator();

ImageRecordReader recordReader = new ImageRecordReader(height, width, channels, labelMaker);

// Initialize the record reader
// add a listener, to extract the name
recordReader.initialize(train);
//recordReader.setListeners(new LogRecordListener());

// DataSet iterator
DataSetIterator dataIter = new RecordReaderDataSetIterator(recordReader, batchSize, 1, outputNum);

// Scale pixel values to 0-1
DataNormalization scaler = new ImagePreProcessingScaler(0, 1);
scaler.fit(dataIter);
dataIter.setPreProcessor(scaler);

// Build our neural network
log.info("BUILD MODEL");
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(rngseed)
        .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
        .updater(new Nesterovs(0.006, 0.9))
        .l2(1e-4)
        .list()
        .layer(0, new DenseLayer.Builder().nIn(height * width).nOut(100)
                .activation(Activation.RELU).weightInit(WeightInit.XAVIER).build())
        .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                .nIn(100).nOut(outputNum)
                .activation(Activation.SOFTMAX).weightInit(WeightInit.XAVIER).build())
        .setInputType(InputType.convolutional(height, width, channels))
        .build();

MultiLayerNetwork model = new MultiLayerNetwork(conf);

// The ScoreIterationListener will log output to show how well the network is training
model.setListeners(new ScoreIterationListener(10));

log.info("TRAIN MODEL");
for (int i = 0; i < numEpochs; i++) {
    model.fit(dataIter);
}

log.info("EVALUATE MODEL");
recordReader.reset();

// The model trained on the training dataset split;
// now that it has trained we evaluate against the
// test data of images the network has not seen

recordReader.initialize(test);
DataSetIterator testIter = new RecordReaderDataSetIterator(recordReader, batchSize, 1, outputNum);
scaler.fit(testIter);
testIter.setPreProcessor(scaler);

/* Log the order of the labels for later use.
   In previous versions the label order was consistent but random;
   in current versions label order is lexicographic, so preserving the
   RecordReader label order is no longer needed. Left in for demonstration purposes. */
log.info(recordReader.getLabels().toString());

// Create an Evaluation object with 2 possible classes (outputNum)
Evaluation eval = new Evaluation(outputNum);

// Evaluate the network
while (testIter.hasNext()) {
    DataSet next = testIter.next();
    INDArray output = model.output(next.getFeatures());
    // Compare the model's output with the labels from the RecordReader
    eval.eval(next.getLabels(), output);
}
// show the evaluation
log.info(eval.stats());
} // closing brace of the enclosing method (its opening is not part of this excerpt)

More here:
Covid-19 Detection With Images Analysis And Machine Learning - Elemental

Department Of Energy Announces $30 Million For Advanced AI & ML-Based Researches – Analytics India Magazine

The Department of Energy in the US has recently announced its initiative to provide up to $30 million for advanced research in artificial intelligence and machine learning. This fund can be used for both scientific investigation and the management of complex systems.

This initiative comprises a two-fold strategy.

Firstly, it focuses on the development of artificial intelligence and machine learning for predictive modelling and simulation in research across the physical sciences. ML and AI are considered to offer promising new alternatives to conventional programming methods for computer modelling and simulation. And, secondly, the fund will be used for foundational ML and AI research on decision support in addressing complex systems.

Eventually, the potential applications could include cybersecurity, power grid resilience, and other complex processes where these emerging technologies can make or aid in creating business decisions in real-time.

When asked, Under Secretary for Science Paul Dabbar stated that both these technologies, artificial intelligence and machine learning, "are among the most powerful tools we have today for both advancing scientific knowledge and managing our increasingly complex technological environment."

He further said, "This foundational research will help keep the United States in the forefront as applications for ML and AI rapidly expand, and as we utilise this evolving technology to solve the world's toughest challenges such as COVID-19."

Applications for this initiative will be open to DOE national laboratories, universities, nonprofits, and industry, and funding will be awarded on the basis of peer review.

According to DOE, the planned funding for the scientific machine learning for modelling and simulations topic will be up to $10 million in FY 2020 dollars for projects of two years in duration. On the other hand, the planned funding for the artificial intelligence and decision support for complex systems topic will be up to $20 million, with up to $7 million in FY 2020 dollars and out-year funding contingent on congressional appropriations.

More here:
Department Of Energy Announces $30 Million For Advanced AI & ML-Based Researches - Analytics India Magazine

How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls – VentureBeat

Last month, Microsoft announced that Teams, its competitor to Slack, Facebook's Workplace, and Google's Hangouts Chat, had passed 44 million daily active users. The milestone overshadowed its unveiling of a few new features coming later this year. Most were straightforward: a hand-raising feature to indicate you have something to say, offline and low-bandwidth support to read chat messages and write responses even if you have poor or no internet connection, and an option to pop chats out into a separate window. But one feature, real-time noise suppression, stood out: Microsoft demoed how the AI minimized distracting background noise during a call.

We've all been there. How many times have you asked someone to mute themselves or to relocate from a noisy area? Real-time noise suppression will filter out someone typing on their keyboard while in a meeting, the rustling of a bag of chips (as you can see in the video above), and a vacuum cleaner running in the background. AI will remove the background noise in real time so you can hear only speech on the call. But how exactly does it work? We talked to Robert Aichner, Microsoft Teams group program manager, to find out.

The use of collaboration and video conferencing tools is exploding as the coronavirus crisis forces millions to learn and work from home. Microsoft is pushing Teams as the solution for businesses and consumers as part of its Microsoft 365 subscription suite. The company is leaning on its machine learning expertise to ensure AI features are one of its big differentiators. When it finally arrives, real-time background noise suppression will be a boon for businesses and households full of distracting noises. Additionally, how Microsoft built the feature is also instructive to other companies tapping machine learning.

Of course, noise suppression has existed in the Microsoft Teams, Skype, and Skype for Business apps for years. Other communication tools and video conferencing apps have some form of noise suppression as well. But that noise suppression covers stationary noise, such as a computer fan or air conditioner running in the background. The traditional noise suppression method is to look for speech pauses, estimate the baseline of noise, assume that the continuous background noise doesn't change over time, and filter it out.
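
As a rough, hedged illustration of that traditional approach, here is a toy sketch in plain Java. It works purely in the time domain: it estimates a noise floor from the quietest frames (standing in for speech pauses) and attenuates frames whose energy sits near that floor. Real suppressors operate on frequency spectra (for example, spectral subtraction), so treat this as a simplified model of the idea rather than how Teams or any other product actually implements it.

import java.util.Arrays;
import java.util.Random;

// Toy stationary-noise gate: estimate a noise floor from the quietest frames
// and attenuate frames whose energy is close to it. Real noise suppressors
// work on frequency spectra; this is only a simplified illustration.
public class NoiseGateSketch {

    static double frameEnergy(double[] samples, int start, int length) {
        double e = 0;
        for (int i = start; i < start + length; i++) e += samples[i] * samples[i];
        return e / length;
    }

    // Attenuates low-energy frames in place. frameLen is roughly 10-20 ms of audio.
    static void suppress(double[] signal, int frameLen) {
        int frames = signal.length / frameLen;
        double[] energies = new double[frames];
        for (int f = 0; f < frames; f++) energies[f] = frameEnergy(signal, f * frameLen, frameLen);

        // Assume the quietest 10% of frames contain only background noise.
        double[] sorted = energies.clone();
        Arrays.sort(sorted);
        double noiseFloor = sorted[Math.max(0, frames / 10 - 1)];

        for (int f = 0; f < frames; f++) {
            // Frames barely above the noise floor are treated as noise only.
            if (energies[f] < 3 * noiseFloor) {
                for (int i = f * frameLen; i < (f + 1) * frameLen; i++) signal[i] *= 0.1;
            }
        }
    }

    public static void main(String[] args) {
        // Synthetic test signal: low-level noise with a louder "speech" burst in the middle.
        double[] signal = new double[16000];
        Random rng = new Random(1);
        for (int i = 0; i < signal.length; i++) {
            signal[i] = 0.01 * rng.nextGaussian();
            if (i > 6000 && i < 10000) signal[i] += 0.5 * Math.sin(2 * Math.PI * 220 * i / 16000.0);
        }
        suppress(signal, 160); // 10 ms frames at 16 kHz
        System.out.println("Processed " + signal.length / 160 + " frames");
    }
}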

Going forward, Microsoft Teams will suppress non-stationary noises like a dog barking or somebody shutting a door. "That is not stationary," Aichner explained. "You cannot estimate that in speech pauses. What machine learning now allows you to do is to create this big training set, with a lot of representative noises."

In fact, Microsoft open-sourced its training set earlier this year on GitHub to advance the research community in that field. While the first version is publicly available, Microsoft is actively working on extending the data sets. A company spokesperson confirmed that as part of the real-time noise suppression feature, certain categories of noises in the data sets will not be filtered out on calls, including musical instruments, laughter, and singing. (More on that here: ProBeat: Microsoft Teams video calls and the ethics of invisible AI.)

Microsoft can't simply isolate the sound of human voices because other noises also happen at the same frequencies. On a spectrogram of the speech signal, unwanted noise appears in the gaps between speech and overlapping with the speech. It's thus next to impossible to filter out the noise: if your speech and noise overlap, you can't distinguish the two. Instead, you need to train a neural network beforehand on what noise looks like and what speech looks like.

To get his points across, Aichner compared machine learning models for noise suppression to machine learning models for speech recognition. For speech recognition, you need to record a large corpus of users talking into the microphone and then have humans label that speech data by writing down what was said. Instead of mapping microphone input to written words, in noise suppression you're trying to get from noisy speech to clean speech.

"We train a model to understand the difference between noise and speech, and then the model is trying to just keep the speech," Aichner said. "We have training data sets. We took thousands of diverse speakers and more than 100 noise types. And then what we do is we mix the clean speech without noise with the noise. So we simulate a microphone signal. And then you also give the model the clean speech as the ground truth. So you're asking the model, 'From this noisy data, please extract this clean signal, and this is how it should look like.' That's how you train neural networks [in] supervised learning, where you basically have some ground truth."

For speech recognition, the ground truth is what was said into the microphone. For real-time noise suppression, the ground truth is the speech without noise. By feeding in a large enough data set (in this case, hundreds of hours of data) Microsoft can effectively train its model. "It's able to generalize and reduce the noise with my voice even though my voice wasn't part of the training data," Aichner said. "In real time, when I speak, there is noise that the model would be able to extract the clean speech [from] and just send that to the remote person."

Comparing the functionality to speech recognition makes noise suppression sound much more achievable, even though it's happening in real time. So why has it not been done before? Can Microsoft's competitors quickly recreate it? Aichner listed challenges for building real-time noise suppression, including finding representative data sets, building and shrinking the model, and leveraging machine learning expertise.

We already touched on the first challenge: representative data sets. The team spent a lot of time figuring out how to produce sound files that exemplify what happens on a typical call.

They used audio books for representing male and female voices, since speech characteristics do differ between male and female voices. They used YouTube data sets with labeled data that specify that a recording includes, say, typing and music. Aichner's team then combined the speech data and noises data using a synthesizer script at different signal-to-noise ratios. By amplifying the noise, they could imitate different realistic situations that can happen on a call.
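
A minimal sketch of that kind of synthesizer step is shown below: given a clean-speech clip and a noise clip (here just synthetic arrays), it scales the noise so that the mixture hits a requested signal-to-noise ratio and adds the two, keeping the clean signal as the ground truth. The method names and the synthetic data are my own illustration, not Microsoft's script.

import java.util.Random;

// Toy training-data synthesizer: mix clean speech with noise at a target SNR (in dB).
// The arrays here are synthetic stand-ins; a real pipeline would load audio files.
public class SnrMixerSketch {

    static double power(double[] x) {
        double p = 0;
        for (double v : x) p += v * v;
        return p / x.length;
    }

    // Returns clean + scaled noise such that 10 * log10(P_speech / P_noise) == snrDb.
    static double[] mixAtSnr(double[] clean, double[] noise, double snrDb) {
        double scale = Math.sqrt(power(clean) / (power(noise) * Math.pow(10.0, snrDb / 10.0)));
        double[] noisy = new double[clean.length];
        for (int i = 0; i < clean.length; i++) {
            noisy[i] = clean[i] + scale * noise[i % noise.length];
        }
        return noisy;
    }

    public static void main(String[] args) {
        Random rng = new Random(0);
        double[] clean = new double[16000];
        double[] noise = new double[16000];
        for (int i = 0; i < 16000; i++) {
            clean[i] = Math.sin(2 * Math.PI * 300 * i / 16000.0); // pretend speech
            noise[i] = rng.nextGaussian();                        // pretend background noise
        }
        // Generate training pairs at several SNRs; the clean signal is the ground truth.
        for (double snr : new double[]{0, 10, 20}) {
            double[] noisy = mixAtSnr(clean, noise, snr);
            System.out.printf("SNR %.0f dB -> mixture power %.3f%n", snr, power(noisy));
        }
    }
}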

But audiobooks are drastically different than conference calls. Would that not affect the model, and thus the noise suppression?

"That is a good point," Aichner conceded. "Our team did make some recordings as well to make sure that we are not just training on synthetic data we generate ourselves, but that it also works on actual data. But it's definitely harder to get those real recordings."

Aichner's team is not allowed to look at any customer data. Additionally, Microsoft has strict privacy guidelines internally. "I can't just simply say, 'Now I record every meeting.'"

So the team couldn't use Microsoft Teams calls. Even if they could (say, if some Microsoft employees opted in to have their meetings recorded), someone would still have to mark down when exactly distracting noises occurred.

"And so that's why we right now have some smaller-scale effort of making sure that we collect some of these real recordings with a variety of devices and speakers and so on," said Aichner. "What we then do is we make that part of the test set. So we have a test set which we believe is even more representative of real meetings. And then, we see if we use a certain training set, how well does that do on the test set? So ideally yes, I would love to have a training set, which is all Teams recordings and have all types of noises people are listening to. It's just that I can't easily get the same number of the same volume of data that I can by grabbing some other open source data set."

I pushed the point once more: How would an opt-in program to record Microsoft employees using Teams impact the feature?

"You could argue that it gets better," Aichner said. "If you have more representative data, it could get even better. So I think that's a good idea to potentially in the future see if we can improve even further. But I think what we are seeing so far is even with just taking public data, it works really well."

The next challenge is to figure out how to build the neural network, what the model architecture should be, and iterate. The machine learning model went through a lot of tuning. That required a lot of compute. Aichner's team was of course relying on Azure, using many GPUs. Even with all that compute, however, training a large model with a large data set could take multiple days.

"A lot of the machine learning happens in the cloud," Aichner said. "So, for speech recognition for example, you speak into the microphone, that's sent to the cloud. The cloud has huge compute, and then you run these large models to recognize your speech. For us, since it's real-time communication, I need to process every frame. Let's say it's 10 or 20 millisecond frames. I need to now process that within that time, so that I can send that immediately to you. I can't send it to the cloud, wait for some noise suppression, and send it back."

For speech recognition, leveraging the cloud may make sense. For real-time noise suppression, it's a nonstarter. Once you have the machine learning model, you then have to shrink it to fit on the client. You need to be able to run it on a typical phone or computer. A machine learning model only for people with high-end machines is useless.

There's another reason why the machine learning model should live on the edge rather than the cloud. Microsoft wants to limit server use. Sometimes, there isn't even a server in the equation to begin with. For one-to-one calls in Microsoft Teams, the call setup goes through a server, but the actual audio and video signal packets are sent directly between the two participants. For group calls or scheduled meetings, there is a server in the picture, but Microsoft minimizes the load on that server. Doing a lot of server processing for each call increases costs, and every additional network hop adds latency. It's more efficient from a cost and latency perspective to do the processing on the edge.

"You want to make sure that you push as much of the compute to the endpoint of the user because there isn't really any cost involved in that. You already have your laptop or your PC or your mobile phone, so now let's do some additional processing. As long as you're not overloading the CPU, that should be fine," Aichner said.

I pointed out there is a cost, especially on devices that aren't plugged in: battery life. "Yeah, battery life, we are obviously paying attention to that too," he said. "We don't want you now to have much lower battery life just because we added some noise suppression. That's definitely another requirement we have when we are shipping. We need to make sure that we are not regressing there."

It's not just regression that the team has to consider, but progression in the future as well. Because we're talking about a machine learning model, the work never ends.

"We are trying to build something which is flexible in the future because we are not going to stop investing in noise suppression after we release the first feature," Aichner said. "We want to make it better and better. Maybe for some noise tests we are not doing as good as we should. We definitely want to have the ability to improve that. The Teams client will be able to download new models and improve the quality over time whenever we think we have something better."

The model itself will clock in at a few megabytes, but it won't affect the size of the client itself. He said, "That's also another requirement we have. When users download the app on the phone or on the desktop or laptop, you want to minimize the download size. You want to help the people get going as fast as possible."

"Adding megabytes to that download just for some model isn't going to fly," Aichner said. "After you install Microsoft Teams, later in the background it will download that model. That's what also allows us to be flexible in the future that we could do even more, have different models."

All the above requires one final component: talent.

"You also need to have the machine learning expertise to know what you want to do with that data," Aichner said. "That's why we created this machine learning team in this intelligent communications group. You need experts to know what they should do with that data. What are the right models? Deep learning has a very broad meaning. There are many different types of models you can create. We have several centers around the world in Microsoft Research, and we have a lot of audio experts there too. We are working very closely with them because they have a lot of expertise in this deep learning space."

The data is open source and can be improved upon. A lot of compute is required, but any company can simply leverage a public cloud, including the leaders Amazon Web Services, Microsoft Azure, and Google Cloud. So if another company with a video chat tool had the right machine learners, could they pull this off?

"The answer is probably yes, similar to how several companies are getting speech recognition," Aichner said. "They have a speech recognizer where there's also lots of data involved. There's also lots of expertise needed to build a model. So the large companies are doing that."

Aichner believes Microsoft still has a heavy advantage because of its scale. "I think that the value is the data," he said. "What we want to do in the future is like what you said, have a program where Microsoft employees can give us more than enough real Teams calls so that we have an even better analysis of what our customers are really doing, what problems they are facing, and customize it more towards that."

Read the original post:
How Microsoft Teams will use AI to filter out typing, barking, and other noise from video calls - VentureBeat

‘Refreshingly honest’ dealer who ran Perth drug den will be sent home to Poland – The Courier

A dealer who moved from Poland to set up a drug den in a housing association flat has been ordered to hand over more than £5,000 before he is booted out of Scotland.

Kamil Morawski has been told to pay back £5,460 he made dealing drugs from the flat in Perth.

Morawski, who was described as "refreshingly honest" when a sheriff jailed him for four and a half years, will also be extradited back to Poland.

Morawski bluntly told police who raided his Perth home that he was a drug dealer and had been in business for months.

Perth Sheriff Court was told Morawski was given a housing association flat when he moved to Scotland and used it to set up a large-scale drug den peddling ecstasy, speed and cannabis.

Morawski, 31, was found with £40,000 worth of drugs after converting the flat into the centre of his drug dealing operation.

The father of one, who had served prison terms in his homeland for drug-related crimes, was caught with nearly two kilos of speed, worth nearly £20,000.

He had more than 1,000 ecstasy tablets and more than a kilo of cannabis in the McCallum Court flat.

Depute fiscal Charmaine Gilmartin told Perth Sheriff Court 1,120 ecstasy tablets were recovered with a potential value of £11,200, along with 1,945 grammes of amphetamine worth £19,450.

The total cannabis recovered weighed 1,309 grammes and had a maximum value of £13,090. Morawski had also stuffed more than £5,000 in cash under his mattress.

She said: "The accused gave full answers, stating that he was a drug dealer and sold cannabis, amphetamine and E.

"He stated he had been dealing for around six months for financial gain."

Morawski, a prisoner at Perth, admitted three charges of being concerned in the supply of cannabis, amphetamine and ecstasy between January 31 and July 31 last year.

Solicitor David Sinclair, defending, said: "He was seeking to improve his family's life and took a short way of doing so."

Sheriff Lindsay Foulis said: "You have held your hands up and accepted responsibility at the earliest stage.

"You do not shy away from taking responsibility.

"Nor do you try and mask your reasons for your actions in any way and such honesty, to put it bluntly, is refreshing."

Read more:
'Refreshingly honest' dealer who ran Perth drug den will be sent home to Poland - The Courier

Will Universities, Colleges, And Law School Campuses Be Open In Fall 2020? – Above the Law

Maybe? Maybe not?

The short answer is that there is no definitive answer yet. Universities and law schools aren't ready to make a decision because the pandemic is so fluid and there is so much uncertainty, nor do they have to yet. But the question is being discussed on a daily basis, and we have spent a good deal of time speaking with college presidents, provosts, and deans and trying our best to get the most recent and trustworthy epidemiological modeling and medical community input.

This podcast condenses those two perspectives, that of higher education and that of the medical community, into a prediction for the fall. Our prediction, based on speculation, and which we are going to devote continuous attention to over the next several months, is that it is likely many colleges and universities will not have on-campus classes this fall. This is particularly true for those schools that are able to take a semester or even year-long financial hit. We allude to Bill Gates's work that states this may be the once-in-100-years pathogen we have not been prepared for, and the infection rate of the virus plays into this prediction. Certainly there is a broad continuum where you could see some colleges entirely online, some with a hybrid online/on-campus model, and potentially some that are fully open.

What about law schools, the area our firm has the most expertise in? The dynamics are a little bit different here because law schools generally don't have, or don't have to have, student housing. Their student bodies are considerably smaller (there are roughly 112,000 total law school students in the country vs. 22 million college students), and thus with testing improvements and availability you could see law schools having a model where all faculty and students are tested, and those who test negative can be in the classroom, which would also be webcasted or recorded for those who can't be in the classroom for a variety of reasons.

In fact, we think some law schools will open and some will remain closed. They may start up, even independent of central university openings, or they may ride out a semester of online-only courses. Some may do the model described above and some may open fully with a hand on the button to shutter immediately if the spike comes back from the virus.

Pictured below the podcast is the model from Dr. David Sinclair, Ph.D., that we have been looking at, along with much of his other work, in formulating some of these speculations. As the summer progresses this will become less speculative, and we will provide updates at any juncture we learn new information.

Listen to the full podcast here:

(Graph via Dr. David Sinclair Ph.D.)

Mike Spivey is the founder of The Spivey Consulting Group and has been featured as an expert on law schools and law school admissions in many national media outlets, including The New York Times, The Economist, the ABA Journal, The Chronicle of Higher Education, U.S. News & World Report, CNN/Fortune, and Law. Prior to founding Spivey Consulting, Mike was a senior-level administrator at Vanderbilt, Washington University, and Colorado law schools. You can follow him on Twitter and Instagram or connect with him on LinkedIn.

Read the original here:
Will Universities, Colleges, And Law School Campuses Be Open In Fall 2020? - Above the Law

Is There Any Legitimate Health Advice in Gwyneth Paltrow’s "Goop Lab"? – InsideHook

Welcome toThe Workout From Home Diaries. Throughout our national self-isolation period, well be sharing single-exercise deep dives, offbeat belly-busters and general get-off-the-couch inspiration that doesnt require a visit to your (now-shuttered) local gym.

The word energy already had too many meanings.

Border Collie puppies barnstorming around a backyard are energetic. Cereal bars that contain fructose and maltodextrin give us the energy we need to get through an afternoon. Whirring wind turbines convert kinetic mechanical energy to electricity. Bitcoin mining uses as much energy (in terawatt-hours) year-over-year as the entire Czech Republic.

The word officially entered into overuse, though, on episodes five and six of The Goop Lab. The fifth episode, literally called The Energy Experience, features LA-based body worker and somatic energy practitioner John Amaral, who has patients lie down on massage beds as he plays with an undiscovered dimension in the air four to six feet away from their bodies. In the sixth episode, a medium named Laura Lynne Jackson beams energy into sitters, then interprets concepts the universe sends back to her to communicate with the dead.

There is little point in getting too worked up over the confounding definition of energy that Gwyneth Paltrow, her lifestyle empire, and this TV show a six-episode docuseries that dropped on Netflix about 10 weeks ago have entered into the words canon. During Goop Lab, discussions on energy feel somewhat reminiscent of a sophomore philosophy class where no one has actually done the reading. It is a cosmic entity, not necessarily meant to be understood, but apparently deserving of a wry smile, a bemused tilt of the head, a pat on the back for trying. This is the gray area though goop often colors it Easter pink where pseudoscience thrives, where phrases like food for thought and give it a go legitimize (if not blatantly prioritize) anecdotal accounts over blind clinical trials. Its why the internet was ready the second this series dropped in late January, its why the United Kingdoms National Health Service outright declared the show a considerable health risk to the public.

Of course, the NHS has had to battle far more considerable health risks than Paltrowism in recent weeks. And its alarmingly tempting, in the age of COVID-19, as we all turn into streaming service ultra-marathoners, to even seek out what is widely-agreed upon nonsense. Anything thats intertwined with personal health, too, at a time when were all thinking of our bodies how to boost our immune systems, how to sweat without doing the same run for the 13th time since shelter-in-place began is extra alluring.

With all of that out of the way, I have a confession. I quarantine-binged The Goop Lab in its entirety this weekend. I went in with an open mind, refusing to read a single scalding review beforehand (those came later and seriously, these writers had their takes ready, they might as well have been dropping obits) and actively searched for legitimate, usable strategies, practices or concepts I could apply to my personal fitness. Why take goop so seriously? It has a $250 million valuation, millions of followers across all relevant social platforms, hugely successful pop-up shops in cities across the States, and yearly wellness summits attended by high-profile actors, athletes and authors. It matters, whether you like it or not. And besides, I enjoy learning new ways to feel and perform better. An open, patient ear helps in that regard.

That ear had to hear the word energy exactly 4,000 times over the season, but there actually was, believe it or not, some relevant wellness knowledge hidden within all the Goopian muck. That muck generally followed the same format: Paltrow and her Chief Content Officer Elise Loehnen interview a couple special guests in a perfectly-lit office where people whod rather be drinking sangria pitchers under delicately hung Edison bulbs drink hemp tea while riding around on bubblegum pennyboards. The special guests, whove taken select goop staffers on some sort of retreat in the weeks or months preceding, get a chance to explain their research and dish on one staffer who had a particularly strong reaction to the experience. There are only three episodes worth watching within that framework, and just two special guests especially worth remembering. These episodes are the second, third and fourth: Cold Comfort The Pleasure Is Ours and The Health Span Plan.

This episode focuses on aging, and introduces the work of both Dr. Valter Longo, a cell biologist who works at the University of Southern California, and Dr. Morgan Levine, a professor in pathology at the Yale School of Medicine. Both are concerned with the concept of aging as disease; similar to the work of Harvard geneticist Dr. David Sinclair, who I had the chance to speak to just last month, Dr. Longo has shown that aging can be addressed at the molecular level with various lifestyle changes. For example, hes a big believer in calorie restriction (intermittent fasting) and a pescatarian diet.

Dr. Levine, meanwhile, employs blood work to test for inflammation, metabolism, kidney and liver function, and cardiovascular health to determine a person's biological age. Think of it as a true or internal age. When contrasted with your chronological age, a biological age is a more accurate arbiter for your risk of cancer, heart disease, or diabetes. Personal fitness, much like personal savings, thrives best on information. Sometimes, you have to look at your account balance to plan a way forward. Talk to your doctor about the best way to get your blood tested, or consider a biological age test. There are a few out there, including myDNAge, TeloYears, and InnerAge. If the number is above your chronological age, don't panic. That's where the work of Dr. Sinclair is so helpful; he's proven in Cambridge that fasting, exposure to cold temperatures and high-intensity interval training will literally put years on your life, and high-quality years at that.

Easily Goop Labs most celebrated episode, and for good reason. The star is Betty Dodson, a 90-year-old whos been teaching women how to orgasm since the 70s. Shes a gynecologist with Bourdain vibes, a patchy jacket and no hesitation in correcting Paltrows odd misunderstanding of female anatomy. (What GP considers the vagina at one point during the episode is actually the vulva.) If youre a woman, this is essential, long-overdue TV Goop Lab does well to show over a dozen different types of vulva, after a discussion on the disconcerting rise of labiaplasty surgery, which has long been a mainstay in the porn industry. Most courageously, though, the episode closes with the orgasm of Dodsons fellow sex educator Carlin Ross, whom she guides through the experience. How does any of this relate to wellness?

For women, improved sexual awareness and a reliable masturbation routine can help improve mood, aid in sleep, improve confidence and communication with partners, and even reduce pain from menstrual cramps. For men who have female partners, those are all good things. When supported, youll likely grow closer with your partner, and have a healthier relationship which, we all know by now, has massive affects on personal wellness. For those who dont (but hope/plan to, one day), its an important kick in the pants, a lesson that the orgasms performed through porn are just that, performances, and expectations need to be personalized and constantly reset or reconsidered depending on the women theyre seeing.

Ill admit, I was a little surprised to see Wim Hof himself on an episode of Goop Lab. He doesnt just look like Santa Claus hes borrowed a bit of his aura, a rare 21st-century legend. Upon hugging him, Paltrow even laughed Youre actually real! The Dutch extreme athlete made it over 7,000 feet up Mt. Everest wearing only a bathing suit and shoes. Hes run half-marathons completely barefoot, on the ice and snow. He once made full-body contact with ice for nearly two hours. Hes an absolute maniac, in other words, just sans David Blaine intensity. Hes far more mischievous, and able to tap into that secret sauce when needed, like the original, puppet-Yoda of Empire Strikes Back. On Goop Lab he has eight staffers practice the three pillars of the Wim Hof Method: cold therapy, breathing and meditation. They breathe into their bodies, aggressively, filling the rib cage with air, until their limbs start tingling and their minds go numb (Ive practiced this before, with a surf yogi in Hawaii; its nuts), allowing their thoughts to drift away. Then they practice 20 minutes of snow-ga barefoot flows in the snow along the banks of Lake Tahoe before actually jumping into the water.

It would all be so perfectly goop if only Wim Hof, and cold therapy werent actually scientifically corroborated. Exposure to cold water is dynamite for mental health, which is the main benefit Wim Hof touts in the episode; it encourages the release of neurotransmitters like dopamine, adrenaline, norepinephrine and serotonin, all of which have anti-depressive effects. But it isnt too shabby for the body in general, either; occasionally swimming around in a freezing cold lake, or committing to a bonkers-chilly shower has been shown to catalyze post-workout recovery, stave off injury, lower blood pressure, increase metabolic rate and stimulate the immune system.

Netflix captured one hell of a moment Wim Hof yelling into the California winter as editorial assistants jumped into 38F water and like all TV, I guess, thats what this show was ultimately about, good shots and better soundbites. The disclaimer at the beginning of each episode did indeed portend that Goop Lab wasnt meant to replace medical advice. But at least that scene meant something. It gave something to the world, a proven practice that might be worth introducing to our daily doldrums. And at the very least, it gave that tiny, mighty word, energy, a well-deserved break.

Subscribe here for our free daily newsletter.

See the original post here:
Is There Any Legitimate Health Advice in Gwyneth Paltrow's "Goop Lab"? - InsideHook

Can canine coronavirus drug indomethacin be used in treating humans suffering from COVID-19? – MEAWW

A drug helped dogs fight off a type of coronavirus common in canines. Now, scientists believe the same drug might come to the rescue of humans suffering from COVID-19.

Though promising, the preprint study is preliminary. The study evaluated the drug Indomethacin against canine coronavirus. This virus, which is known to cause a highly contagious intestinal disease in dogs, shares ancestry with the new coronavirus.

Doctors prescribe indomethacin to reduce fever, pain, stiffness and swelling from inflammation. "Definitely worth additional testing of Indomethacin (Indocin), an anti-inflammatory from the 1960s, shown to inhibit CoV-2 & #coronavirus in dogs," tweeted Dr David Sinclair, from Harvard Medical School.

The focus is on old drugs because new treatments could take years to develop. "Given the urgency of the SARS CoV-2 outbreak, we focus here on the potential to repurpose existing drugs approved for treating infections caused by RNA viruses," the authors wrote in their study.

In one study, scientists showed that indomethacin can keep the SARS coronavirus, a close relative of the new coronavirus, from replicating inside cells. Scientists will learn more about its efficacy against the new coronavirus and its side-effects as they conduct more studies.

In this study, which has not been peer-reviewed yet, the team divided dogs infected with the canine coronavirus into three groups. Of the nine dogs that received an antiviral drug called ribavirin, three died. The second group fared a lot better. They received antibodies against the virus (anti-canine coronavirus serum) and only one dog died.

None of the dogs belonging to the third group died. Receiving Indomethacin, all the nine dogs survived the disease. What's more, dogs belonging to groups 2 and 3 recovered quickly.

"The results show, indomethacin can achieve a similar efficacy as treatment with anti-canine coronavirus serum, and superior efficacy than the treatment with ribavirin," the authors said.

As for lab-grown cells infected with the new coronavirus, they saw that indomethacin is toxic against the virus. Aspirin, on the other hand, was ineffective.

Drugs such as indomethacin and aspirin belong to a category of drugs called nonsteroidal anti-inflammatory drugs (NSAID). Doctors use them to treat pneumonia, which is common among severe COVID-19 patients.

However, some experts have raised an alarm against the class of drugs, suggesting that the drug might worsen the disease. "However, current scientific evidence does not indicate that patients with mildly symptomatic COVID-19 could be harmed by using NSAIDs," wrote Petros Ioannou, a post-doctoral researcher at the University Hospital of Heraklion, Crete, Greece, in the BMJ.

Earlier, France's health minister issued a warning against other NSAID drugs such as ibuprofen and cortisone. Other experts have said there is not enough evidence against the use of ibuprofen. The US Food and Drug Administration (FDA) issued a statement saying: "At this time, FDA is not aware of scientific evidence connecting the use of NSAIDs, like ibuprofen, with worsening COVID-19 symptoms. The agency is investigating this issue further and will communicate publicly when more information is available."

Read more:
Can canine coronavirus drug indomethacin be used in treating humans suffering from COVID-19? - MEAWW

Machine Learning: Making Sense of Unstructured Data and Automation in Alt Investments – Traders Magazine

The following was written byHarald Collet, CEO at Alkymi andHugues Chabanis, Product Portfolio Manager,Alternative Investments at SimCorp

Institutional investors are buckling under the operational constraint of processing hundreds of data streams from unstructured data sources such as email, PDF documents, and spreadsheets. These data formats bury employees in low-value copy-paste workflows and block firms from capturing valuable data. Here, we explore how Machine Learning (ML), paired with a better operational workflow, can enable firms to more quickly extract insights for informed decision-making, and help govern the value of data.

According to McKinsey, the average professional spends 28% of the workday reading and answering an average of 120 emails, on top of the 19% spent on searching and processing data. The issue is even more pronounced in information-intensive industries such as financial services, as valuable employees are also required to spend needless hours every day processing and synthesizing unstructured data. Transformational change, however, is finally on the horizon. Gartner research estimates that by 2022, one in five workers engaged in mostly non-routine tasks will rely on artificial intelligence (AI) to do their jobs. And embracing ML will be a necessity for digital transformation demanded both by the market and the changing expectations of the workforce.

For institutional investors that are operating in an environment of ongoing volatility, tighter competition, and economic uncertainty, using ML to transform operations and back-office processes offers a unique opportunity. In fact, institutional investors can capture up to 15-30% efficiency gains by applying ML and intelligent process automation (Boston Consulting Group, 2019) in operations, which in turn creates operational alpha with improved customer service and redesigning agile processes front-to-back.

Operationalizing machine learning workflows

ML has finally reached the point of maturity where it can deliver on these promises. In fact, AI has flourished for decades, but the deep learning breakthroughs of the last decade have played a major role in the current AI boom. When it comes to understanding and processing unstructured data, deep learning solutions provide much higher levels of potential automation than traditional machine learning or rule-based solutions. Rapid advances in open source ML frameworks and tools, including natural language processing (NLP) and computer vision, have made ML solutions more widely available for data extraction.

Asset class deep-dive: Machine learning applied to Alternative investments

In a 2019 industry survey conducted by InvestOps, data collection (46%) and efficient processing of unstructured data (41%) were cited as the top two challenges European investment firms faced when supporting Alternatives.

This is no surprise as Alternatives assets present an acute data management challenge and are costly, difficult, and complex to manage, largely due to the unstructured nature of Alternatives data. This data is typically received by investment managers in the form of email with a variety of PDF documents or Excel templates that require significant operational effort and human understanding to interpret, capture, and utilize. For example, transaction data is typically received by investment managers as a PDF document via email or an online portal. In order to make use of this mission critical data, the investment firm has to manually retrieve, interpret, and process documents in a multi-level workflow involving 3-5 employees on average.

The exceptionally low straight-through-processing (STP) rates already suffered by investment managers working with alternative investments are a problem that will further deteriorate as Alternatives investments become an increasingly important asset class, predicted by Preqin to rise to $14 trillion AUM by 2023 from $10 trillion today.

Specific challenges faced by investment managers dealing with manual Alternatives workflows are:

Within the Alternatives industry, various attempts have been made to use templates or standardize the exchange of data. However, these attempts have so far failed, or are progressing very slowly.

Applying ML to process the unstructured data will enable workflow automation and real-time insights for institutional investment managers today, without needing to wait for a wholesale industry adoption of a standardized document type like the ILPA template.

To date, the lack of straight-through-processing (STP) in Alternatives has either resulted in investment firms putting in significant operational effort to build out an internal data processing function, or reluctantly going down the path of adopting an outsourcing workaround.

However, applying a digital approach, more specifically ML, to workflows in the front, middle and back office can drive a number of improved outcomes for investment managers, including:

Trust and control are critical when automating critical data processing workflows. This is achieved with a human-in-the-loop design that puts the employee squarely in the driver's seat, with features such as confidence scoring thresholds, randomized sampling of the output, and second-line verification of all STP data extractions. Validation rules on every data element can ensure that high quality output data is generated and normalized to a specific data taxonomy, making data immediately available for action. In addition, processing documents with computer vision can allow all extracted data to be traced to the exact source location in the document (such as a footnote in a long quarterly report).
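
The sketch below shows one plausible shape for that kind of human-in-the-loop routing: extracted fields below a confidence threshold go to manual review, and a random sample of high-confidence extractions is flagged for second-line verification. The class and field names are hypothetical; this is not SimCorp's or Alkymi's actual design.

import java.util.List;
import java.util.Random;

// Illustrative human-in-the-loop routing for extracted data fields:
// low-confidence extractions go to review, plus a random sample of
// high-confidence ones for second-line verification. Requires Java 16+ (records).
public class ReviewRouterSketch {

    record ExtractedField(String name, String value, double confidence) {}

    static final double CONFIDENCE_THRESHOLD = 0.90;
    static final double SAMPLING_RATE = 0.05; // audit 5% of confident extractions
    static final Random RNG = new Random(123);

    static String route(ExtractedField field) {
        if (field.confidence() < CONFIDENCE_THRESHOLD) return "HUMAN_REVIEW";
        if (RNG.nextDouble() < SAMPLING_RATE) return "RANDOM_AUDIT";
        return "STRAIGHT_THROUGH";
    }

    public static void main(String[] args) {
        // Hypothetical fields extracted from an Alternatives document.
        List<ExtractedField> fields = List.of(
                new ExtractedField("commitmentAmount", "5,000,000 USD", 0.97),
                new ExtractedField("valuationDate", "2020-03-31", 0.99),
                new ExtractedField("managementFee", "2%", 0.62));

        for (ExtractedField f : fields) {
            System.out.println(f.name() + " -> " + route(f));
        }
    }
}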

Reverse outsourcing to govern the value of your data

Big data is often considered the new oil or super power, and there are, of course, many third-party service providers standing at the ready, offering to help institutional investors extract and organize the ever-increasing amount of unstructured, big data which is not easily accessible, either because of the format (emails, PDFs, etc.) or location (web traffic, satellite images, etc.). To overcome this, some turn to outsourcing, but while this removes the heavy manual burden of data processing for investment firms, it generates other challenges, including governance and lack of control.

Embracing ML and unleashing its potential

Investment managers should think of ML as an in-house co-pilot that can help their employees in various ways: First, it is fast: documents are processed instantly and, when confidence levels are high, processed data only requires minimal review. Second, ML is used as an initial set of eyes, to initiate proper workflows based on documents that have been received. Third, instead of just collecting the minimum data required, ML can collect everything, providing users with options to further gather and reconcile data that may have been ignored and lost due to a lack of resources. Finally, ML will not forget the format of any historical document from yesterday or 10 years ago, safeguarding institutional knowledge that is commonly lost during cyclical employee turnover.

ML has reached the maturity where it can be applied to automate narrow and well-defined cognitive tasks and can help transform how employees work in financial services. However, many early adopters have paid a price for focusing too much on the ML technology and not enough on the end-to-end business process and workflow.

The critical gap has been in planning for how to operationalize ML for specific workflows. ML solutions should be designed collaboratively with business owners and target narrow and well-defined use cases that can successfully be put into production.

Alternatives assets are costly, difficult, and complex to manage, largely due to the unstructured nature of Alternatives data. Processing unstructured data with ML is a use case that generates high levels of STP through the automation of manual data extraction and data processing tasks in operations.

Using ML to automatically process unstructured data for institutional investors will generate operational alpha; a level of automation necessary to make data-driven decisions, reduce costs, and become more agile.

The views represented in this commentary are those of its author and do not reflect the opinion of Traders Magazine, Markets Media Group or its staff. Traders Magazine welcomes reader feedback on this column and on all issues relevant to the institutional trading community.

More:
Machine Learning: Making Sense of Unstructured Data and Automation in Alt Investments - Traders Magazine

Machine learning: the not-so-secret way of boosting the public sector – ITProPortal

Machine learning is by no means a new phenomenon. It has been used in various forms for decades, but it is very much a technology of the present due to the massive increase in the data upon which it thrives. It has been widely adopted by businesses, reducing the time and improving the value of the insight they can distil from large volumes of customer data.

However, in the public sector there is a different story. Despite being championed by some in government, machine learning has often faced a reaction of concern and confusion. This is not intended as general criticism and in many cases it reflects the greater value that civil servants place on being ethical and fair, than do some commercial sectors.

One fear is that, if the technology is used in place of humans, unfair judgements might not be noticed or costly mistakes in the process might occur. Furthermore, as many decisions being made by government can dramatically affect people's lives and livelihoods, decisions often become highly subjective and discretionary judgment is required. There are also those still scarred by films such as I, Robot, but that's a discussion for another time.

Fear of the unknown is human nature, so fear of unfamiliar technology is thus common. But fears are often unfounded and providing an understanding of what the technology does is an essential first step in overcoming this wariness. So for successful digital transformation not only do the civil servants who are considering such technologies need to become comfortable with its use but the general public need to be reassured that the technology is there to assist, not replace, human decisions affecting their future health and well-being.

There's a strong case to be made for greater adoption of machine learning across a diverse range of activities. The basic premise of machine learning is that a computer can derive a formula from looking at lots of historical data that enables the prediction of certain things the data describes. This formula is often termed an algorithm or a model. We use this algorithm with new data to make decisions for a specific task, or we use the additional insight that the algorithm provides to enrich our understanding and drive better decisions.
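
As a concrete, deliberately simple illustration of deriving a formula from historical data, the sketch below fits a straight line y = a + b * x to a handful of made-up observations using ordinary least squares, then uses the fitted formula to predict a new case. The data points are invented purely for illustration.

// Minimal example of "learning" a formula from historical data:
// ordinary least squares fit of y = a + b * x, then prediction on new input.
// The data points are made up purely for illustration.
public class LeastSquaresSketch {

    public static void main(String[] args) {
        // Historical observations, e.g. (hours of therapy received, recovery score).
        double[] x = {1, 2, 3, 4, 5, 6};
        double[] y = {2.1, 2.9, 3.9, 5.2, 5.8, 7.1};

        int n = x.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += x[i];
            sumY += y[i];
            sumXY += x[i] * y[i];
            sumXX += x[i] * x[i];
        }

        // Closed-form least-squares estimates for slope b and intercept a.
        double b = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double a = (sumY - b * sumX) / n;

        System.out.printf("Learned formula: y = %.2f + %.2f * x%n", a, b);
        System.out.printf("Prediction for x = 7: %.2f%n", a + b * 7);
    }
}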

For example, machine learning can analyse patients' interactions in the healthcare system and highlight which combinations of therapies in what sequence offer the highest success rates for patients; and maybe how this regime is different for different age ranges. When combined with some decisioning logic that incorporates resources (availability, effectiveness, budget, etc.) its possible to use the computers to model how scarce resources could be deployed with maximum efficiency to get the best tailored regime for patients.

When we then automate some of this, machine learning can even identify areas for improvement in real time and far faster than humans and it can do so without bias, ulterior motives or fatigue-driven error. So, rather than being a threat, it should perhaps be viewed as a reinforcement for human effort in creating fairer and more consistent service delivery.

Machine learning is an iterative process; as the machine is exposed to new data and information, it adapts through a continuous feedback loop, which in turn provides continuous improvement. As a result, it produces more reliable results over time and evermore finely tuned and improved decision-making. Ultimately, it's a tool for driving better outcomes.

The opportunities for AI to enhance service delivery are many. Another example in healthcare is Computer Vision (another branch of AI), which is being used in cancer screening and diagnosis. We're already at the stage where AI, trained from huge libraries of images of cancerous growths, is better at detecting cancer than human radiologists. This application of AI has numerous examples, such as work being done at Amsterdam UMC to increase the speed and accuracy of tumour evaluations.

But let's not get this picture wrong. Here, the true value is in giving the clinician more accurate insight, or a second opinion, that informs their diagnosis and, ultimately, the patient's final decision regarding treatment. The machine is there to do the legwork, but the decision to start a programme of cancer treatment remains with the humans.

Acting with this enhanced insight enables doctors to become more efficient as well as more effective. By combining the results of CT scans with advanced genomics, analytics can assess how patients will respond to certain treatments. This means clinicians avoid the stress, side effects and cost of putting patients through procedures with limited efficacy, while reducing waiting times for those patients whose condition would respond well. Yet full-scale automation could run the risk of creating a lot more VOMIT.

Victims Of Modern Imaging Technology (VOMIT) is a new phenomenon in which a condition such as a malignant tumour is detected by imaging, so at first glance it would seem wise to remove it. However, the procedure to remove it carries a morbidity risk that may be greater than the risk the tumour presents during the patient's likely lifespan. Here, ignorance could be bliss for the patient, and doctors would examine the patient holistically, including mental health, emotional state, family support and many other factors that remain well beyond the grasp of AI to assimilate into an ethical decision.

All decisions like these have a direct impact on people's health and wellbeing. With cancer, the faster and more accurate these decisions are, the better. However, whenever cost and effectiveness are weighed together there is an imperative for ethical judgement rather than purely financial arithmetic.

Healthcare is a rich seam for AI, but its application is far wider. For instance, machine learning could also support policymakers in planning housebuilding and social-housing allocation initiatives, both reducing the time taken to reach a decision and making that decision more robust. In infrastructure departments, AI could allow road-surface inspections to be continuously updated via cheap sensors or cameras fitted to all council vehicles (or crowd-sourced in some way). The AI could not only optimise repair work (human or robot) but also help identify causes, and then determine whether a strengthened roadway would cost less over its whole life than regular repairs, or whether a different road layout would reduce wear.
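The whole-life cost comparison at the end of that example is, at heart, simple discounted arithmetic. The sketch below uses entirely invented figures and an assumed discount rate to compare a one-off strengthened roadway against a rolling programme of repairs.

```python
# Hypothetical whole-life cost comparison: strengthen the road once,
# or keep resurfacing it. All figures and the discount rate are invented.
def npv(cashflows, rate=0.03):
    """Net present value of (year, cost) pairs at a given discount rate."""
    return sum(cost / (1 + rate) ** year for year, cost in cashflows)

strengthened = [(0, 500_000)]                              # one-off upfront build
regular_repairs = [(year, 60_000) for year in range(0, 30, 3)]  # resurface every 3 years

print("strengthened road:", round(npv(strengthened)))
print("regular repairs:  ", round(npv(regular_repairs)))
```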

In the US, government researchers are already using machine learning to help officials make quick and informed policy decisions on housing. Using analytics, they analyse the impact of housing programmes on millions of lower-income citizens, drilling down into factors such as quality of life, education, health and employment. This instantly generates insightful, accessible reports for the government officials making the decisions. Now they can enact policy decisions as soon as possible for the benefit of residents.

While some of the fears about AI are fanciful, there is genuine cause for concern about the ethical deployment of such technology. In our healthcare example, allocating resources based on gender, sexuality, race or income wouldn't be appropriate unless those factors specifically affected the prescribed treatment or its potential side effects. This is self-evident to a human, but a machine would need it to be explicitly defined. Left to itself, a machine would likely favour the groups whose historical data produced better outcomes, thus perpetuating any inequality already present in the training data.
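A minimal sketch of what "explicitly defined" can mean in practice is shown below: keep the sensitive attributes out of the features the model learns from, but retain them so outcomes can be audited by group afterwards. The library choice, column names and data are assumptions, and this is not a complete fairness solution, since proxies for sensitive attributes can still leak into the remaining features.

```python
# Sketch: exclude sensitive attributes from the model's inputs, keep them for auditing.
# Data and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "age": [34, 51, 29, 62, 45, 58],
    "severity": [2, 5, 1, 7, 3, 6],
    "gender": ["f", "m", "f", "m", "f", "m"],      # sensitive attribute
    "treated_successfully": [0, 1, 0, 1, 0, 1],
})

sensitive = ["gender"]
features = df.drop(columns=sensitive + ["treated_successfully"])
model = LogisticRegression().fit(features, df["treated_successfully"])

# Audit: compare predicted allocation rates across the sensitive groups.
df["predicted"] = model.predict(features)
print(df.groupby("gender")["predicted"].mean())
```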

The recent review by the Committee on Standards in Public Life into AI and its ethical use by government and other public bodies concluded that there are serious deficiencies in regulation relating to the issue, although it stopped short of recommending the establishment of a new regulator.

The review was chaired by crossbench peer Lord Jonathan Evans, who commented:

"Explaining AI decisions will be the key to accountability, but many have warned of the prevalence of Black Box AI. However, our review found that explainable AI is a realistic and attainable goal for the public sector, so long as government and private companies prioritise public standards when designing and building AI systems."

Fears that machine learning will replace all human decision-making need to be debunked: that is not the purpose of the technology. Instead, it should be used to augment human decision-making, unburdening people from the time-consuming job of managing and analysing huge volumes of data. Once its role is made clear to those responsible for implementing it, machine learning can be applied across the public sector, contributing to life-changing decisions in the process.

Find out more on the use of AI and machine learning in government.

Simon Dennis, Director of AI & Analytics Innovation, SAS UK

Here is the original post:
Machine learning: the not-so-secret way of boosting the public sector - ITProPortal

The impact of machine learning on the legal industry – ITProPortal

The legal profession, the technology industry and the relationship between the two are in a state of transition. Computer processing power has doubled roughly every two years for decades, leading to an explosion in corporate data and increasing pressure on the lawyers entrusted with reviewing all of this information.

Now, the legal industry is undergoing significant change, with the advent of machine learning technology fundamentally reshaping the way lawyers conduct their day-to-day practice. Indeed, whilst technological gains might once have had lawyers sighing at the ever-increasing stack of documents in the review pile, technology is now helping where it once hindered. For the first time, advanced algorithms allow lawyers to review entire document sets at a glance, releasing them from wading through documents and other repetitive tasks. Legal professionals can therefore conduct their review with more insight and speed than ever before, and return to the higher-value, more enjoyable aspects of their job: providing counsel to their clients.

In this article, we take a look at how this has been made possible.

Practising law has always been a document- and paper-heavy task, but manually reading huge volumes of documentation is no longer feasible, or even sustainable, for advisors. Even conservative estimates put the amount of data we create at 2.5 quintillion bytes every day, propelled by the spread of computers, the growth of the Internet of Things (IoT) and the digitisation of documents. Many lawyers have had no choice but to resort to sampling only 10 per cent of documents, or to rely on third-party outsourcing to meet tight deadlines and resource constraints. Whilst these were the most practical responses to such pressures, they risked jeopardising the quality of the legal advice lawyers could give their clients.

Legal technology was first developed in the early 1970s to take some of the pressure off lawyers. Most commonly, these platforms were built on Boolean search technology and required months, or even years, of building complex sets of rules. As well as being expensive and time-intensive, such systems were unable to cope with the unpredictable, complex and ever-changing nature of the profession, demanding significant time investment and bespoke configuration for every new challenge that arose. Not only were lawyers investing valuable time and resources in training a machine, but the rigidity of these systems limited the advice they could give to their clients. Trying to configure them to recognise bespoke clauses or subtle discrepancies in language, for instance, was a near impossibility; a short example of this brittleness follows.
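The sketch below shows the kind of hand-written Boolean-style rule such systems relied on; the keywords and example clauses are hypothetical. Any clause phrased slightly differently from the rule is simply not found.

```python
# Sketch of a brittle Boolean-style rule for finding change-of-control clauses.
# Keywords and example clauses are hypothetical.
def matches_change_of_control(text: str) -> bool:
    t = text.lower()
    return "change of control" in t and ("terminate" in t or "termination" in t)

clauses = [
    "Either party may terminate this agreement upon a change of control.",
    "If control of the Company passes to a third party, this agreement may be ended.",
]
for clause in clauses:
    print(matches_change_of_control(clause), "-", clause)
# The second clause expresses the same idea in different words, so the rule misses it.
```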

Today, machine learning has become advanced enough that it has many practical applications, a key one being legal document review.

Machine learning can be broadly categorised into two types: supervised and unsupervised. Supervised machine learning occurs when a human interacts with the system; in the legal profession, this might mean tagging a document or categorising certain types of documents. The machine builds on this human interaction to generate insights for the user.
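As an illustration of the supervised case (not Luminance's actual implementation), document review can be pictured as training a text classifier on the tags lawyers have already applied, then letting it suggest tags for unreviewed documents. The library choice and the example documents are assumptions.

```python
# Sketch of supervised document review: learn from lawyer-applied tags.
# Documents, tags and wording are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tagged_docs = [
    "This agreement may be terminated upon twelve months written notice.",
    "The employee shall not disclose confidential information.",
    "Termination for convenience requires ninety days notice.",
    "All confidential material must be returned on request.",
]
tags = ["termination", "confidentiality", "termination", "confidentiality"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(tagged_docs, tags)

# The model then suggests tags for documents the lawyer has not yet reviewed.
print(classifier.predict(["Either party may terminate with thirty days notice."]))
```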

Unsupervised machine learning is where the technology forms an understanding of a subject without any input from a human. For legal document review, unsupervised machine learning clusters similar documents and clauses and surfaces clear outliers from those standards. Because the machine requires no a priori knowledge of what the user is looking for, it can indicate anomalies or "unknown unknowns": data that no one had set out to identify because they didn't know what to look for. This allows lawyers to uncover critical hidden risks in real time.
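A rough sketch of that clustering-and-outliers idea (again, a generic pattern rather than any vendor's actual approach): vectorise the documents, cluster them, and flag those furthest from their cluster centre. The library choices, example clauses and outlier threshold are assumptions.

```python
# Sketch of unsupervised review: cluster similar clauses, flag outliers.
# Documents and the outlier threshold are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "This agreement may be terminated upon twelve months written notice.",
    "Termination for convenience requires ninety days written notice.",
    "The supplier shall maintain insurance of no less than one million pounds.",
    "The supplier shall maintain adequate insurance cover at all times.",
    "Liability under this clause is unlimited and survives termination.",  # unusual clause
]

X = TfidfVectorizer().fit_transform(docs)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Distance of each document to its own cluster centre; large distances suggest outliers.
distances = kmeans.transform(X)[np.arange(len(docs)), kmeans.labels_]
threshold = distances.mean() + distances.std()
for doc, dist in zip(docs, distances):
    flag = "OUTLIER" if dist > threshold else "ok"
    print(f"{flag:7s} {dist:.2f}  {doc}")
```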

It is the interplay between supervised and unsupervised machine learning that makes technology like Luminance so powerful. Whilst the unsupervised element gives lawyers immediate insight into huge document sets, these insights only deepen with every further interaction, as the technology becomes increasingly attuned to the nuances and specialities of a firm.

This goes far beyond more simplistic contract review platforms. Machine learning algorithms such as those developed by Luminance can identify patterns and anomalies in a matter of minutes, forming an understanding of documents both individually and in their relationship to one another. Gone are the days of implicit bias being built into search criteria: since the machine surfaces all relevant information, it remains the lawyer's responsibility to draw the all-important conclusions. Crucially, by using machine learning technology, lawyers can make decisions fully apprised of what is contained within their document sets; they no longer need to rely on methods such as sampling, where critical risk can lie undetected. Indeed, the technology is designed to complement lawyers' natural patterns of working, for example by returning results to a clause search within the document set rather than simply extracting lists of clauses out of context. This allows lawyers to deliver faster and more informed results to their clients while, crucially, remaining the ones driving the review.

With the right technology, lawyers can cut out the lower-value, repetitive work and focus on the complex, higher-value analysis that solves their clients' legal and business problems, with time savings of at least 50 per cent from day one of deployment. This redefines the scope of what lawyers and firms can achieve, allowing them to take on cases that would have been too time-consuming, or too expensive for the client, if conducted manually.

Machine learning is offering lawyers more insight, control and speed in their day-to-day work than ever before, surfacing key patterns and outliers in volumes of data that would be impossible for a single lawyer to review. Whether for a due diligence review, a regulatory compliance review, a contract negotiation or an eDiscovery exercise, machine learning can relieve lawyers of time-consuming, lower-value tasks and free them to spend more time solving the problems they have been extensively trained to solve.

In the years to come, we predict a real shift in these processes, with the latest machine learning technology advancing and growing exponentially, and lawyers spending more time providing valuable advice and building client relationships. Machine learning is bringing lawyers back to the purpose of their jobs, the reason they came into the profession and the reason their clients value their advice.

James Loxam, CTO, Luminance

Follow this link:
The impact of machine learning on the legal industry - ITProPortal