The Evolutionary Perspective
Category Archives: AI
Posted: December 3, 2019 at 12:48 am
AI's Impact in 2020: 3 Trends to Watch
The popularity of AI and ML has wide-reaching effects on your enterprise. Here are three important AI-driven trends to watch for next year.
[Editor's note: Upside asked executives from around the country to tell us what top three trends they believe data professionals should pay attention to in 2020. Ryohei Fujimaki, Ph.D., founder and CEO of dotData, focused on AI and ML.]
The Rise of AutoML 2.0 Platforms
As the need for additional AI applications grows, businesses will need to invest in technologies that help them accelerate the data science process. However, implementing and optimizing machine learning models is only part of the data science challenge. In fact, the vast majority of the work data scientists perform is associated with the tasks that precede the selection and optimization of ML models, such as feature engineering -- the heart of data science.
This means that organizations will need to look for new, more sophisticated automated machine learning platforms. These "AutoML 2.0" tools will need to provide end-to-end automation, from automatically creating and evaluating thousands of features (AI-based feature engineering) to the operationalization of ML and AI models -- and all the steps in between.
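The AI-based feature engineering described above can be pictured with a toy sketch: enumerate candidate features derived from the base columns, score each against the target, and keep the best-ranked ones. This is an illustrative simplification, not dotData's actual algorithm; the column names, candidate transforms, and correlation-based scoring are invented for the example.

```python
import itertools
import math

def generate_and_rank_features(rows, target):
    """Illustrative AutoML-style feature search: derive candidate
    features from pairs of base columns, then rank every candidate
    by absolute correlation with the target."""
    n_cols = len(rows[0])
    candidates = {}
    # Base columns plus simple pairwise combinations (sum, product).
    for i in range(n_cols):
        candidates[f"x{i}"] = [r[i] for r in rows]
    for i, j in itertools.combinations(range(n_cols), 2):
        candidates[f"x{i}+x{j}"] = [r[i] + r[j] for r in rows]
        candidates[f"x{i}*x{j}"] = [r[i] * r[j] for r in rows]

    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        vy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (vx * vy) if vx and vy else 0.0

    return sorted(candidates, key=lambda k: -abs(corr(candidates[k], target)))

# Toy data where the target is actually driven by the product of
# columns 0 and 1 -- the search should surface that feature first.
rows = [(1, 2, 5), (2, 3, 1), (3, 5, 2), (4, 4, 7), (5, 8, 3)]
target = [r[0] * r[1] for r in rows]
print(generate_and_rank_features(rows, target)[0])
```

A real AutoML 2.0 platform would search thousands of such candidates (including joins and time-window aggregations) rather than a handful, but the rank-and-keep loop is the same idea.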
The Shift to Automation Will Intensify Focus on Privacy and Regulations
As AI and ML models become easier to create using advanced "AutoML 2.0" platforms, data scientists and citizen data scientists will begin to scale ML and AI model production in record numbers. This means organizations will need to pay special attention to data collection, maintenance, and privacy oversight to ensure that the creation of new, sophisticated models does not violate privacy laws or cause privacy concerns for consumers.
As a result, in 2020 we will see the emergence of new tools that enable data scientists to have greater transparency without sacrificing accuracy. This shift to a more "white box" approach to data science will deliver more transparent and accurate models, thereby empowering businesses to make data-centric decisions and accelerating their digital transformations.
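The appeal of a "white box" model is that its output can be decomposed and audited term by term. A minimal sketch, using a hypothetical linear churn model whose feature names and coefficients are made up for illustration (this is not any particular vendor's tool):

```python
def explain_prediction(coefs, intercept, features, names):
    """For a linear ("white box") model, each feature's contribution
    to the prediction is simply coefficient * value, so the whole
    prediction can be itemized for review."""
    contributions = {n: c * v for n, c, v in zip(names, coefs, features)}
    prediction = intercept + sum(contributions.values())
    return prediction, contributions

# Hypothetical churn score: more support calls raise risk,
# longer tenure lowers it. Coefficients are illustrative only.
pred, contrib = explain_prediction(
    coefs=[0.8, -0.5], intercept=0.1,
    features=[2.0, 3.0], names=["support_calls", "tenure_years"])
for name, value in contrib.items():
    print(f"{name}: {value:+.2f}")
print(f"prediction: {pred:.2f}")
```

A black-box model can only report the final score; an itemized breakdown like this is what lets a business (or a regulator) ask why a particular decision was made.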
More Citizen Data Scientists Doing Data Science
Big data will continue to be on the upsurge in 2020 with a growing demand for skilled data scientists and a continued shortage of data science talent -- creating ongoing challenges for businesses implementing AI and ML initiatives. Although AutoML platforms have alleviated some of the pressure on data science teams, they have not resulted in the productivity gains organizations are seeking from their AI and ML initiatives. As such, companies need better solutions to help them leverage their data for business insights.
In 2020, we will see a swift adoption of new, broader, "full-cycle" data science platforms that will significantly simplify tasks that formerly could only be completed by data scientists and boost the productivity of citizen data scientists -- business analysts and other data experts who have domain expertise but are not necessarily skilled data scientists. This continued democratization will lead to new use cases that are closer to the needs of business users and will enable faster time-to-market for AI applications in the enterprise.
About the Author
Dr. Ryohei Fujimaki is the founder and CEO of dotData. He was the youngest research fellow ever in NEC Corporation's 119-year history, a title granted to only six individuals among more than 1,000 researchers. During his tenure at NEC, Ryohei was heavily involved in developing cutting-edge data science solutions with NEC's global business clients, and was instrumental in the successful delivery of several high-profile analytical solutions that are now widely used in industry. You can reach the author via email or LinkedIn.
On November 20, 2019, Andrew Yang, during a Democratic candidates' debate, stated that the US is losing the AI arms race to China. A little over a year ago, I argued the same thing. Yang is right (after Yang's comment, Pete Buttigieg agreed). A couple of other candidates on the stage also wanted to chime in about AI, but were cut off by a commercial break.
Let me ask again: what took so long? Artificial intelligence and machine learning are transformative technologies that level many playing fields, so many in fact that a tiny nation can militarily compete with a great military power like the US. How? The smartest weapon is not the largest weapon. It's code. It's smart code. It's smart bots. It's cyberwarfare. Software grounded the Boeing 737 MAX. The future of war is as much about AI and ML as payloads. The same goes for manufacturing, supply chain optimization and process automation: it's all about digital technology, especially AI/ML.
The war for global leadership in artificial intelligence and machine learning is well underway, and the US is losing.
Is the AI battlefield well understood? Not even close, at least by the leaders who develop national strategies or by the citizens of the United States, who all need to spend some time on the subject.
Since they're mostly unaware of the war, US leaders have no clear strategy to prevent a historic loss. Imagine the implications of electing politicians who have no idea a deadly war is underway, or who think the arms race is all about aircraft carriers and fighter squadrons.
The Chinese have a very public, very deep, extremely well-funded commitment to AI. Air Force General VeraLinn Jamieson says it plainly: "We estimate the total spending on artificial intelligence systems in China in 2017 was $12 billion. We also estimate that it will grow to at least $70 billion by 2020." According to the Obama White House report in 2016, China publishes more journal articles on deep learning than the US and has increased its number of AI patents by 200%. China is determined to be the world leader in AI by 2030.
Listen to what Tristan Greene, writing in TNW, concludes about the US's commitment to AI: "Unfortunately, despite congressional efforts to get the conversation started at the national level in the US, the White House's current leadership doesn't appear interested in coming up with a strategy to keep up with China." It gets worse: "China has allocated billions of dollars towards infrastructure to house hundreds of AI businesses in dedicated industrial parks. It has specific companies, the Chinese counterparts to US operations like Google and Amazon, working on different problems in the field of AI. And it's regulating education so that the nation produces more STEM workers. But perhaps most importantly, China makes it compulsory for businesses and private citizens to share their data with the government, something far more valuable than money in the world of AI."
Greene's scary bottom line? Meanwhile, in the US, the Trump administration has shown little interest in discussing its own country's AI, yet may soon have to talk to China's.
According to Iris Deng, China "ranks first in the quantity and citation of research papers, and holds the most AI patents, edging out the US and Japan," and "China has not been shy about its ambitions for AI dominance, with the State Council releasing a road map in July 2017 with a goal of creating a domestic industry worth 1 trillion yuan and becoming a global AI powerhouse by 2030."
It's obvious: "Without more leadership from Congress and the President, the U.S. is in serious danger of losing the economic and military rewards of artificial intelligence (AI) to China." That's the somber conclusion of a report published today (September 25) by the House Oversight and Reform IT subcommittee.
Jerry Bowles says it clearly: "The U.S. has traditionally led the world in the development and application of AI-driven technologies, due in part to the government's commitment to investing heavily in research and development. That has, in turn, helped support AI's growth and development. In 2015, the United States led the world in total gross domestic R&D expenditures, spending $497 billion. But, since then, neither Congress nor the Trump administration has paid much attention to AI, and government R&D investment has been essentially flat. Meanwhile, China has made AI a key part of its formal economic plans for the future."
The Trump administration finally said something about the most important technology in a generation. The Executive Order on Maintaining American Leadership in Artificial Intelligence, issued on February 11, 2019, is hopefully just the opening shot in the AI war, an implementation war the US is arguably already losing, especially in areas like robotics.
So what does the Trump administration want to do?
1. Invest in AI Research and Development (R&D)
The initiative focuses on maintaining our Nation's strong, long-term emphasis on high-reward, fundamental R&D in AI by directing Federal agencies to prioritize AI investments in their R&D missions.
2. Unleash AI Resources
The initiative directs agencies to make Federal data, models, and computing resources more available to America's AI R&D experts, researchers, and industries to foster public trust and increase the value of these resources to AI R&D experts, while maintaining the safety, security, civil liberties, privacy, and confidentiality protections we all expect.
3. Set AI Governance Standards
As part of the American AI Initiative, Federal agencies will foster public trust in AI systems by establishing guidance for AI development and use across different types of technology and industrial sectors.
4. Build the AI Workforce
To prepare our workforce with the skills needed to adapt and thrive in this new age of AI, the American AI Initiative calls for agencies to prioritize fellowship and training programs in computer science and other growing Science, Technology, Engineering, and Math (STEM) fields.
5. International Engagement and Protecting our AI Advantage
Federal agencies will also develop and implement an action plan to protect the advantage of the United States in AI and technology critical to United States national and economic security interests against strategic competitors and foreign adversaries.
While all of these initiatives are welcome, there's no new money targeted directly at the five initiatives above or at AI R&D generally. Instead, the American AI Initiative directs agencies to prioritize, open up, and share investments within existing research programs.
As described by Will Knight in the MIT Technology Review, the initiative "is designed to boost America's AI industry by reallocating funding, creating new resources, and devising ways for the country to shape the technology even as it becomes increasingly global"; however, "while the goals are lofty, the details are vague. And it will not include a big lump sum of funding for AI research." As Knight points out, other nations, including China, Canada, and France, have made bigger moves to back and benefit from the technology in recent years.
As reported by Cade Metz in the New York Times, General James Mattis, before he resigned his position as US Secretary of Defense, "implored (Trump) to create a national strategy for artificial intelligence." Mattis argued that the United States was not keeping pace with the ambitious plans of China and other countries.
Ben Brody reports in Bloomberg that Senator Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, praised some aspects of the order but criticized its tone. "The tone of this executive order reflects a laissez-faire approach to AI development that I worry will have the U.S. repeating the mistakes it has made in treating digital technologies as inherently positive forces, with insufficient consideration paid to their misapplication," he said.
But where's the funding? Why the vagueness about investments? Why is funding the responsibility of Congress? Why not White House-directed funding for the American AI Initiative? It would be hard to improve upon the observation made by William Carter of the Center for Strategic and International Studies, as reported by Kaveh Waddell in Axios: "If they can find $5 billion for a border wall, they should be able to find a few billion for the foundation of our future economic growth."
Waddell further reports that so far, U.S. funding for AI has been anemic. An analysis from Bloomberg Government found that the Pentagon's R&D spending on AI increased from $1.4 billion to about $1.9 billion between 2017 and 2019. DARPA, the Pentagon's research arm, has separately pledged $2 billion in AI funding over the next five years. It's hard to put a number on the entire federal government's AI spend, says Chris Cornillie, a Bloomberg Government analyst, because most civilian agencies don't mention AI in their 2019 budget requests, but these numbers pale in comparison to estimates of Chinese spending on AI. Exact numbers are hard to come by, but just two Chinese cities, Shanghai and Tianjin, have committed to spending about $15 billion each.

More recently, the proposed 2020 budget sees more increases in AI R&D. According to Cornillie at Bloomberg Government in Washington, "the 2020 budget has allocated almost $5B for AI R&D (for the Pentagon and all other US government agencies). From FY 2018 to 2020, the Pentagon's budget request for AI R&D rose from $2.7 billion to $4.0 billion ... (but) when you look at what Google or Apple alone are investing in AI, $5 billion doesn't seem that large of a figure. Especially if you put that in the context of the federal government's $1.37 trillion discretionary budget request. But if you look at year-over-year spending, it appears that agencies are getting the message that investments in AI will be critical in terms of economic competitiveness and national security." Finally! While Cornillie is hopefully right and the US is waking up, $5 billion is still dwarfed by Chinese investments in AI.
It's also unclear how this initiative builds significantly upon the Obama Administration's National Artificial Intelligence Research and Development Strategic Plan, issued in October 2016.
So now what? General Mattis is right: we need a national AI strategy (one that answers the Chinese plan) and we need significant, long-term dedicated funding.
A coordinated, heavily-funded American response is way overdue. But there's more:
These steps represent a good start to turn the tide of the AI war, a war the US cannot afford to lose.
I restate all this now because AI is finally getting the attention of lawmakers in Washington and across the country, even in presidential debates! Hopefully, calls for AI strategies and additional funding are more than just campaign promises.
Nvidia has for years made artificial intelligence (AI) and its various subsets, such as machine learning and deep learning, a foundation of future growth, and sees it as a competitive advantage against rival Intel and a growing crop of smaller chip makers and newcomers looking to gain traction in a rapidly evolving IT environment. And for any company looking to make its mark in the fast-moving AI space, the healthcare industry is an important vertical to focus on.
We at The Next Platform have talked about the various benefits AI and deep learning can bring to an industry that is steeped in data and in need of anything that can help it gain real-time information from all that data. AI will touch all segments of healthcare, from predictive diagnostics that will drive prevention programs and precision surgery for better outcomes to increasingly precise diagnoses that will help lead to more personalized treatments and systems that will create more efficient operations and drive down costs. It will impact not only hospitals and other healthcare facilities but also other segments of the industry, from pharmaceuticals and drug development to clinical research.
Nvidia has put a focus on healthcare. For years the company has worked with GE to support its medical devices, and a year ago announced it was working with the Scripps Research Translational Institute to develop tools and infrastructure that leverage deep learning to fuel the development of AI-based software to expand the use of AI beyond medical imaging, which in large part has been the primary focus of AI in medicine.
However, in a hyper-regulated environment like healthcare, where the privacy of patient data is paramount, any AI-based technology needs to ensure that the data is protected from cyber-criminals who want to steal or leverage it, as illustrated by the increasing targeting of healthcare facilities by ransomware campaigns, in which bad actors take a hospital's data hostage by using malware to encrypt it and hand over the decryption key only after the facility pays the ransom.
At the Radiological Society of North America (RSNA) conference this weekend, Nvidia laid out a plan aimed at allowing hospitals to train machine learning models on the mountains of confidential information they hold without the risk of exposing the data to outside parties. The company unveiled its Clara Federated Learning (FL) technique, which will enable organizations to leverage training while keeping the data housed within the healthcare facility's systems.
Nvidia describes Clara FL as a reference edge application that creates a distributed and collaborative AI training model, a key capability given that healthcare facilities house their data in disparate parts of their IT environments. It's based on Nvidia's EGX intelligent edge computing platform, a software-defined offering introduced in May that includes the EGX stack, which comes with support for Kubernetes and contains GPU monitoring tools and Nvidia drivers, all managed by Nvidia's GPU Operator tool. The distributed NGC-Ready for Edge servers are built by OEMs leveraging Nvidia technologies and can not only perform training locally but also collaborate to create more complete models of the data.
The Clara FL application is housed in a Helm chart to make it easier to deploy within a Kubernetes-based infrastructure, and the EGX platform provisions the federated server and the clients that it will collaborate with, according to Kimberly Powell, vice president of Nvidia's healthcare business.
The application containers, initial AI model and other tools needed to begin the federated learning are delivered by the platform. Radiologists use the Nvidia Clara AI-Assisted Annotation software development kit (SDK), which is integrated into medical viewers like 3D Slicer, Fovia and Philips IntelliSpace Discovery, to label their hospital's patient data. Hospitals participating in the training effort use the EGX servers in their facilities to train the global model on their local data, and the results are sent to the federated training server via a secure link. The data is kept private because only partial model weights are shared rather than patient records, with the model built through federated averaging.
The process is repeated until the model is as accurate as it needs to be.
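The federated averaging loop described above can be sketched in a few lines. This is a deliberately minimal illustration, not Clara FL's actual API: a one-parameter linear model stands in for a real network, each "hospital" fits the shared model on its own private data, and only the resulting weights travel back to the server to be averaged.

```python
import random

def local_update(weights, data, lr=0.1):
    """One round of local training at a site: plain gradient steps
    for a one-parameter model y = w * x, using only that site's
    private (x, y) pairs."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x  # derivative of squared error
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """FedAvg-style round: each site trains locally, then only the
    resulting weights (never the raw records) are averaged."""
    local_ws = [local_update(global_w, data) for data in sites]
    return sum(local_ws) / len(local_ws)

# Three sites, each holding private data drawn from y = 3x.
random.seed(0)
sites = [[(x, 3 * x) for x in (random.random() for _ in range(20))]
         for _ in range(3)]
w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
print(round(w, 2))  # converges toward 3
```

The privacy property rests on the same observation the article makes: the server only ever sees model weights, so the patient records never leave the site that produced them.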
As part of all this, Nvidia also announced Clara AGX, an embedded AI developer kit powered by the company's Xavier chips, which are also used in self-driving cars and can use as little as 10 watts of power. The idea is to offer lightweight and efficient AI computers that can quickly ingest huge amounts of data from large numbers of sensors to process images and video at high rates, bringing inference and 3D visualization capabilities to the picture.
The systems can be small enough to be embedded into a medical device, be small computers adjacent to the device or full-size servers, as pictured below:
"AI in edge computing isn't only for AI development," Powell said during a press conference before the show started. "AI wants to live with the device, and AI wants to be deployed very close, in some cases, to medical devices for several reasons. One: AI can deliver real-time applications. If you think about an ultrasound or an endoscopy, where the data is streaming from a sensor, you want to be able to apply algorithms, and many of them at once, in a livestream. They want to embed the AI. You may also want to put an AI computer right next to a device. This is a device that is already living inside the hospital. You can augment that with an AI-embedded computer. The third way that we can imagine edge computing, and how we're demonstrating that at RSNA today, is by connecting many devices and streaming devices to an EGX platform server that's living in the datacenter. We really see three models for edge and devices coming together. One is embedded into the instrument. The next is right next to the instrument for real-time, and the third is to augment many, many devices around the hospital, fleets of devices in the hospital, to have augmented AI in the datacenter."
At the show, Nvidia also showed off a point-of-care MRI system built by Hyperfine Research that leverages Clara AGX. The device is about three feet wide and five feet tall:
The Hyperfine system uses ordinary magnets that do not need power or cooling to produce an MRI image, can run in an open environment, and does not need to be shielded from electromagnetic fields around it. The company is still testing it, but early tests indicate that it can image brain disease and injury using scans that are about five to ten minutes long, according to Hyperfine.
The Hechinger Report is a national nonprofit newsroom that reports on one topic: education.
As the artificial intelligence revolution comes to education, teachers are rightly concerned that AI in schools will replace the human assessment of human learning.
However, if developers work in tandem with teachers on the ground, fields like assisted writing feedback can evolve to make instruction more effective, more personalized and more human.
I used to grade papers by hand, a black Bic my tool of choice, etching feedback in the margins, scrawling questions and circling errors. It took innumerable hours, and I accepted the drudgery as inevitable in the life of an English teacher.
It had its rewards, of course, as I saw students gaining skills, developing their voices and flashing brilliance. Less rewarding was grinding out the feedback on standard mechanics, figuring out who needed what remediation and wondering why my efforts weren't more effective.
Over the years, I learned to prioritize positive comments in every student's paper, in the margins and final notes, mindful of student affect and motivation. I stopped circling every flaw, and shifted to finding patterns to aid instruction. I focused comments and grades on the deeper content of student work, with less attention to surface-level issues in the writing.
I began teaching the skills of self-regulated learning and self-regulated writing. Occasionally, I had to catch, counsel and retrain the plagiarist, who sometimes changed every other word from an internet source. I could spend two hours tracking down the original text of one dishonest paper.
That was my life when I started teaching high school 20 years ago.
Today, with AI in schools, assisted writing feedback makes my job easier and more rewarding. Feedback platforms can help teachers do what they love doing: developing writers, engaging learners, and helping students meet their goals for college and careers.
Years ago, I took notes on which students made which errors, and photocopied remedial exercises to target instruction as best I could. Today, I open my Clever portal and click on Quill. My students are already loaded into the system, and I simply assign a diagnostic.
Quill figures out what each student needs, and then individualizes lessons. I can assign a diagnostic by level, targeting English language learners or advanced native speakers. Quill improves student writing and frees me up for more advanced instruction and personalized learning.
I first began using Turnitin.com for plagiarism prevention and detection, and that service has saved me hundreds of hours in tracking down borrowed material. Today, I use it more for its feedback tools. I can attach custom rubrics to various assignments or choose pre-built ones aligned with Common Core standards.
I author my own feedback codes, and when students mouse over them, they'll get my explanation of how to repair an error or replicate success. I use the platform's automated spelling and grammar checkers, but I massage the feedback to avoid spilling red ink all over students' papers, dismissing or deleting some categories completely, according to each student's needs.
I embed positive feedback in the margins, and type customized comments and questions more quickly than I could write by hand. For English language learners who need targeted feedback, I check the metrics and focus my corrections.
Today, my feedback is clearer, more accessible and completely paperless. I can't lose a graded paper anymore, and neither can my students. Feedback is more effective when it's more accessible and immediate.
At the moment, assisted writing platforms are getting better at seeing what's wrong with the surface of a student's writing, but not what a student is doing well on that surface, or anything below. Students need teachers to see what strengths they bring, new insights they offer and the deep understandings they forge of complex texts. Students also need time to write without judgment, to tell their own stories and express their thoughts and feelings, with a focus on the whole student, not the surface of their texts.
For their academic work, however, students need more reliable and immediate feedback with clear paths from remedial skills to advanced work. As more teachers use assisted feedback, and take part in its development, platforms will get better at helping students write well and grow in self-confidence. Right now, such platforms are often telling students they misspelled their own names.
One day soon, they might be picking up repetitious sentence structures or skillful rhythms in syntax. We might be crowd-sourcing positive reinforcements among teachers and sharing rubrics that push for excellence. Assisted feedback might communicate across platforms and automate the boilerplate instruction on standard grammar and mechanics. It could also make more transparent the biases embedded in standard English, for students and for teachers, while recognizing vernaculars and promoting both code-switching and code-meshing.
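Picking up repetitious sentence structure, one of the hypothetical capabilities above, doesn't necessarily require deep learning; even a crude frequency check over sentence openings can flag the pattern. A sketch, with the threshold and two-word tokenization chosen arbitrarily for illustration:

```python
from collections import Counter
import re

def flag_repetitive_openings(text, min_repeats=3):
    """Hypothetical feedback check: flag when several sentences start
    with the same first two words, a common sign of repetitive
    sentence structure in student writing."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    openings = Counter(" ".join(s.lower().split()[:2]) for s in sentences)
    return [op for op, n in openings.items() if n >= min_repeats]

essay = ("I think the book is good. I think the author is smart. "
         "The plot moves quickly. I think the ending works.")
print(flag_repetitive_openings(essay))
```

A production platform would go far beyond this (parsing syntax, not just counting words), but the example shows how such feedback could be surfaced to a student as a pattern rather than as individual red marks.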
Artificial intelligence cannot replace good teaching, nor can it provide high-quality feedback on its own. AI can assist our work, streamline processes and connect our efforts at improving the craft. It can help teachers and researchers develop more effective writing instruction, through qualitative evaluation, and in robust studies with big data sets, randomized controls and rapid analysis.
With more researchers and educators of color involved in its development, AI can help us better serve marginalized populations through more culturally competent writing instruction, to help close opportunity gaps. Platforms can help coach students for self-regulated writing strategies, and help teachers explicitly teach them. Assisted writing feedback can become a new tool in the teachers hand, as we encourage students to express their genius and author their own futures.
This story about the use of artificial intelligence in schools was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.
Robert Comeau teaches senior English at Another Course to College, a college-preparatory high school in the Boston Public Schools network.
During the Intel AI Summit earlier this month, where the company demonstrated its initial processors for artificial intelligence training and inference workloads, Naveen Rao, corporate vice president and general manager of the Artificial Intelligence Products Group at Intel, spoke about the rapid pace of evolution in the AI space, which also includes machine learning and deep learning. The Next Platform did an in-depth look at the technical details Rao shared about the products. But as noted in the story, Rao explained that the complexity of neural network models, measured by the number of parameters, is growing ten-fold every year, a rate that is unlike any other technology trend we have ever seen.
For Intel and the myriad other tech vendors making inroads into the space, AI and components like machine learning and deep learning already are a big business and promise to get bigger. Intel's AI products are expected to generate more than $3.5 billion in revenue for the chip maker this year, according to Rao. But one of the tricks to continuing the momentum is being able to keep pace with the fast-rising demand for innovations.
In a recent interview with The Next Platform before the AI Summit, two of the key executives overseeing the development of the company's first-generation Neural Network Processors, the NNP-T for training and the NNP-I for inference, echoed Rao's comments when talking about the speed of evolution in the AI space. The market is taking a turn, with more enterprises beginning to embrace the technologies and incorporate them into their business plans, and Intel is aiming to meet accelerating demand not only with its NNPs but also through its Xeon server chips and its low-power Movidius processors, which can help bring AI and deep learning capabilities out to the fast-growing edge.
Gadi Singer and Carey Kloss, vice presidents and general managers of AI architectures for inference and training, respectively, also talked about the challenges the industry faces at a time when model sizes continue to grow and organizations are starting to roll out proof-of-concepts (POCs) with an eye toward wider deployments. The landscape has changed even since Intel bought startup Nervana Systems three years ago to challenge Nvidia and its GPU accelerators in the AI and deep learning space.
"What we see happening in the transition to now and toward 2020 is what I call the coming of age of deep learning," Singer, pictured below with an NNP-I chip, tells The Next Platform. "This is where the capabilities have been better understood, where many companies are starting to understand how this might be applicable to their particular line of business. There's a whole new generation of data scientists and other professionals who understand the field, there's an environment for developing new algorithms and new topologies for the deep learning frameworks. All those frameworks like TensorFlow and MXNet were not really in existence in 2015. It was all hand-tooled and so on. Now there are environments, there is a large cadre of people who are trained on that, there's a better understanding of the mapping, there's a better understanding of the data, because it all depends on who is using the data and how to use the data."
The last five years were spent digesting what the new technology can do and building the foundation for using it. "Now there's a lot of experimentation and deployment emerging," he says. Hyperscale cloud services providers (CSPs) like Google, Facebook, Amazon, and others already are using deep learning and AI, and now enterprises are getting into the game. Intel sees an opportunity in 2020 and 2021 to create the capabilities and infrastructure to scale the technologies.
"Now that it has gotten to the stage where companies understand how they can use it for their line of business, it's going to be about total cost of ownership, ease of use, how to integrate purpose-built acceleration together with more general applications," Singer says. "In most of those cases, companies have deep expertise in their domain and they're bringing in deep learning capabilities. It's not so much AI applications as it is AI-enriched applications that are doing whatever it was those companies were doing before, but doing it in an AI-enriched way. What Intel is doing, both from the hardware side and the software side, is creating this foundation for scale."
That includes not only the AI, Xeon, and Movidius products, but also such technologies as Optane DC persistent memory, 3D XPoint non-volatile memory, and software, all of which work together and can help drive deep learning into a broader swath of the industry, Kloss says.
"It's still the early days," he says. "The big CSPs have deployed deep learning at scale. The rest of the industry is catching up. We're at the knee of the curve. Over the next three years, every company will start deploying deep learning however it serves their purpose. But they're not there just yet, so what we're trying to do is to make sure they have the options available and we can help them deploy deep learning in their businesses. More importantly, we offer a broad range of products because as you're first entering deep learning, you might do some amount of proof-of-concept work. You don't need a datacenter filled with NNP-Ts for that. You can just use Xeons, or maybe use some NNP-Ts in the cloud. Once you really get your data sorted out and you figure out your models, then maybe you deploy a bunch of NNP-Ts in order to train your models at lower power and lower cost, because it's part of your datacenter."
It will be a similar approach when deploying inference, Kloss says. That means initially relying on Xeons; once inference becomes an important part of the business and enterprises look to save money and scale their capabilities, they can add NNP-I chips into the mix.
Balance, Scale And Challenges
Intel took a clean-sheet approach to designing the NNPs, and balance and scale were key factors, he says. That includes balancing bandwidth on and off the die with memory bandwidth and the right amount of static RAM (SRAM). Getting that balance right can mean 1.5 to two times better performance.
For scaling, training runs no longer happen on a single GPU. At minimum, training now runs on a chassis with at least eight GPUs, and even though the NNP-T chip had been in the Intel labs for only a few months, the company was already running eight- and 16-node scaling runs. Intel also incorporated a feature to help future-proof the chip, enabling it to move data straight from the compute on one die to the compute on another die with low latency and high bandwidth, Kloss says.
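The article doesn't describe the training algorithm itself, but the usual reason multi-chip runs like these scale is synchronous data parallelism: each device computes a gradient on its own data shard, a collective operation averages the gradients, and every device applies the same update. A toy sketch of that pattern (hypothetical, not Intel's implementation; the model is a one-parameter least-squares fit):

```python
# Synchronous data-parallel training sketch: each "worker" computes a
# gradient on its own shard, then an all-reduce averages them so every
# worker applies the identical update. Illustrative toy example.

def local_gradient(w, shard):
    # Gradient of mean squared error for the model y = w * x on one shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(grads):
    # Stand-in for the cross-chip collective the hardware accelerates.
    return sum(grads) / len(grads)

def train_step(w, shards, lr=0.1):
    grads = [local_gradient(w, s) for s in shards]
    return w - lr * all_reduce_mean(grads)

# Two workers, data drawn from y = 3x; the weight converges toward 3.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
w = 0.0
for _ in range(50):
    w = train_step(w, shards)
print(round(w, 3))  # prints 3.0
```

The die-to-die links described above serve the `all_reduce_mean` step: as the number of devices grows, the cost of exchanging gradients, not the local compute, tends to become the bottleneck.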
Such capabilities are going to be critical in the rapidly evolving field, he says. When Nervana launched in 2014, single-chip performance was important and the focus was on single-digit teraflops on a die. Now there are hundreds of teraflops on the die, and even that isn't enough, according to Kloss. Full chassis are needed to keep up.
"Just five years ago, you were training a neural [network] on single-digit teraflops on a single card, and the space has moved so fast that now there are experiments where people are putting a thousand cards together to see how fast they can train something and it still takes an hour to train," Kloss says. "It's been an exponential growth. In general, it's going to get harder. It's going to be harder to continue scaling at this kind of pace."
Looking forward, there are key challenges Intel and the rest of the industry will have to address as parameter sizes increase and enterprise adoption expands, according to Kloss. At the same time, the trend will be toward continuous learning, Singer says, rather than separate inference and training. One of the challenges is the size of the die, an area that the Open Compute Project's OAM (OCP Accelerator Module) specification will help address, Kloss says. There are also growing limitations with SerDes (Serializer/Deserializer) connectivity and with power consumption.
"You can only fit a die in there that's a particular size," he says. "It's hard to go more than a 70x70mm package, which is enormous, by the way. The thing is the size of a coaster. You can't just keep adding die area to this thing. The power budgets are going up; we're seeing higher power budgets coming down the pike, so instead of adding die area, people will start adding power to the chassis. But fitting high-powered parts into the current racks is a problem."
Regarding high-speed SerDes, it basically fills the edges of the die, but you can't go more than two deep or so on SerDes unless it's a dedicated switch chip. "SerDes links everything: PCI-Express is made up of SerDes, 100 Gb/sec Ethernet is made up of SerDes. Each lane can only go so fast, and we're starting to hit the limits of how fast you can go over a reasonable link. So over the next couple of years we're going to start hitting limits in terms of what the SerDes can do on the cards and how fast they can go. We're going to hit limits on how big a chip we can create, even if we could technologically go further. And then we're going to hit power limits in terms of chassis power, rack power."
The next designs will have to take all of this into account, Kloss says, adding that the industry "is going to hit these kinds of limitations, [so] it is going to be more and more important for total cost of ownership and for performance-per-chip to optimize for power."
At the same time, such efficiency is going to have to come in future-proofed products that can address workloads now and those coming down the line, Singer says. That is where Intel's broad base of capabilities, not only with the NNPs but also with its Xeon CPUs, Movidius chips, software, and other technologies, will become important.
"By us creating a broader base and looking at the way to create the right engines, but connect them in a way that can support multiple usages, we're also creating headroom for usages that have not been invented yet but by 2021 might be the most common usages around," he says. "That's the pace of this industry."
Posted: at 12:48 am
In the ongoing discussion about how much artificial intelligence (AI) will impact the future of work, the outlook seems bleak. McKinsey & Company estimates that between 400 million and 800 million individuals around the world could be displaced by automation and need to find new jobs by 2030.
Yes, we're on the cusp of a changing workforce, thanks to a number of technological advancements anchored by AI. But I believe that the best outcome for marketers will be a business environment that complements human ability with technology rather than eliminates it. In the future I'm planning for, marketers are thriving because we've made the decision to let machines do what they do best so that marketers can do the same, and truly excel in their roles.
Currently, we're quite a long way off from that becoming a reality. The pace of innovation and change is rapid, and we've only begun to dip our toes into the water of what's possible. But it's never too early to prepare, which is why I believe there are three key strategies marketers can implement now to succeed through the impending changes AI will bring.
Technology like AI has the power to revitalize industries, but marketers can't plan for the business of the future without a willingness to embrace transformation. Looking back at history, we have never been able to stop the progression of change, nor have we wanted to. If there is an easier, better, and/or faster way to do something, we have found a way to do it. Yet, in business, when faced with something as potentially disruptive as AI, we continue to bristle at the idea of incorporating AI-centric solutions into the workplace, in spite of the fact that four out of 10 people surveyed by the University of Oxford's Future of Humanity Institute somewhat or strongly supported the development of AI.
AI is at the core of a new industrial revolution. So every person charged with continuing to deliver results for their business must take an objective look, analyze the impact of potential solutions to their business, and then take the plunge. Planning for the future always requires curiosity, a tolerance for measured risk, and a comfort level with playing the long game. Change takes time; no industry, category, or process can be uprooted overnight. And truly intelligent tools have a learning curve, too, which means the earlier marketers can begin to train an intelligent system, the more advantage they'll have.
In thinking critically about the capability AI has to change the way we work, there's a fundamental mind-shift that needs to be embraced in regard to how we define AI itself. Most of the AI-driven solutions we see in the world are assistive, yet the speed of the new industrial revolution is going to require machines to be autonomous.
If marketers are evaluating solutions that surface insights and other data that still need to be manually analyzed and acted upon, they are engaging with an assistive AI solution. If the technology allows marketers to collaborate with it rather than operate it (it makes recommendations and requests while running entire processes), it's autonomous. And Autonomous Intelligence is what has the power to bring humanity back to work, not eliminate it from the job.
With autonomous solutions, marketers can test every idea, make insight-backed decisions, and take action in near real-time. Essentially, they are free to be more creative while delivering stronger and more effective campaigns at the same time. By being freed from low-value, manual, and repetitive work tasks, marketers are able to focus on driving brand value to create more productive, long-term customer relationships.
At the end of the day, it's important to remember that machines are machines, and autonomous tools still need the human touch. Invest time in training AI and marketers on how to work together to the best of each other's ability. Machines are only as good as the input they receive. But once they're trained and able to learn, let them get to work, and let marketers focus their time on the initiatives that really matter.
I don't believe robots are going to displace marketers, but I believe we're going to have to make room at the table for Autonomous Intelligence. In order to empower marketers to thrive in this new digital era, it's high time we think about how to make technology work more strongly for us.
Read more: The Future of Artificial Intelligence Is Job Augmentation, Not Elimination
Posted: at 12:48 am
NEW YORK – Until recently, two big impediments limited what research economists could learn about the world with the powerful methods that mathematicians and statisticians, starting in the early nineteenth century, developed to recognize and interpret patterns in noisy data: data sets were small and costly, and computers were slow and expensive. So it is natural that, as gains in computing power have dramatically reduced these impediments, economists have rushed to use big data and artificial intelligence to help them spot patterns in all sorts of activities and outcomes.
Data summary and pattern recognition are big parts of the physical sciences as well. The physicist Richard Feynman once likened the natural world to a game played by the gods: "you don't know the rules of the game, but you're allowed to look at the board from time to time, in a little corner, perhaps. And from these observations, you try to figure out what the rules are."
Feynman's metaphor is a literal description of what many economists do. Like astrophysicists, we typically acquire non-experimental data generated by processes we want to understand. The mathematician John von Neumann defined a game as (1) a list of players; (2) a list of actions available to each player; (3) a list of how the payoffs accruing to each player depend on the actions of all players; and (4) a timing protocol that tells who chooses what when. This elegant definition includes what we mean by a constitution or an economic system: a social understanding about who chooses what when.
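Von Neumann's four-part definition maps directly onto a data structure. A minimal sketch for a two-player, one-shot game (so the timing protocol is trivial: both players move simultaneously), with illustrative prisoner's-dilemma payoffs that are my assumption, not an example from the column:

```python
# Von Neumann's definition of a game as a data structure: players,
# actions per player, and payoffs as a function of the action profile.
# Timing here is trivial: one simultaneous move. Payoffs are illustrative.

game = {
    "players": ["Row", "Column"],
    "actions": {"Row": ["cooperate", "defect"],
                "Column": ["cooperate", "defect"]},
    # payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
    "payoffs": {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    },
}

def best_response(game, player, other_action):
    """The action maximizing `player`'s payoff against a fixed opponent move."""
    idx = game["players"].index(player)
    def payoff(a):
        profile = (a, other_action) if idx == 0 else (other_action, a)
        return game["payoffs"][profile][idx]
    return max(game["actions"][player], key=payoff)

print(best_response(game, "Row", "cooperate"))  # prints defect
```

An economic system, in this view, is the same object at larger scale: the "payoffs" table encodes the rules that a regulator or constitution can change, which is exactly the counterfactual experiment structural modelers want to run.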
Like Feynman's metaphorical physicist, our task is to infer a game, which for economists is the structure of a market or system of markets, from observed data. But then we want to do something that physicists don't: think about how different games might produce improved outcomes. That is, we want to conduct experiments to study how a hypothetical change in the rules of the game, or in a pattern of observed behavior by some players (say, government regulators or a central bank), might affect patterns of behavior by the remaining players.
Thus, structural model builders in economics seek to infer from historical patterns of behavior a set of invariant parameters for hypothetical (often historically unprecedented) situations in which a government or regulator follows a new set of rules. "The government has strategies, and the people have counterstrategies," according to a Chinese proverb. Structural models seek such invariant parameters in order to help regulators and market designers understand and predict data patterns under historically unprecedented situations.
The challenging task of building structural models will benefit from rapidly developing branches of AI that involve more than pattern recognition. A great example is AlphaGo. The team of computer scientists that created the algorithm to play the Chinese game Go cleverly combined a suite of tools developed by specialists in the statistics, simulation, decision-theory, and game-theory communities. Many of the tools, used in just the right proportions to make an outstanding artificial Go player, are also economists' bread-and-butter tools for building structural models to study macroeconomics and industrial organization.
Of course, economics differs from physics in a crucial respect. Whereas Pierre-Simon Laplace regarded the present state of the universe as the effect of its past and the cause of its future, the reverse is true in economics: what we expect other people to do later causes what we do now. We typically use personal theories about what other people want to forecast what they will do. When we have good theories of other people, what they are likely to do determines what we expect them to do. This line of reasoning, sometimes called rational expectations, reflects a sense in which the future causes the present in economic systems. Taking this into account is at the core of building structural economic models.
For example, I will join a run on a bank if I expect that other people will. Without deposit insurance, customers have incentives to avoid banks vulnerable to runs. With deposit insurance, customers don't care and won't run. On the other hand, if governments insure deposits, bank owners will want their assets to become as big and as risky as possible, while depositors won't care. There are similar tradeoffs with unemployment and disability insurance (insuring people against bad luck may weaken their incentive to provide for themselves) and with official bailouts of governments and firms.
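The bank-run logic is a coordination game with two self-fulfilling equilibria: run if you expect others to run, stay if you expect them to stay. A toy sketch with two depositors and made-up payoff numbers (assumed for illustration, not taken from the column):

```python
# A bank run as a coordination game: my best response depends entirely
# on what I expect the other depositor to do. Payoffs are illustrative.

# payoff[(my_action, other_action)] = my payoff
payoff = {
    ("stay", "stay"): 2,   # bank survives; deposits earn interest
    ("stay", "run"):  -1,  # bank fails; my late claim loses value
    ("run",  "stay"): 1,   # I withdraw early; bank survives anyway
    ("run",  "run"):  0,   # bank fails; early withdrawers salvage some
}

def best_response(expected_other):
    """My payoff-maximizing action given what I expect the other to do."""
    return max(("stay", "run"), key=lambda a: payoff[(a, expected_other)])

# Expectations are self-fulfilling: each belief confirms itself,
# so both (stay, stay) and (run, run) are equilibria.
print(best_response("stay"))  # prints stay
print(best_response("run"))   # prints run
```

Deposit insurance, in this framing, changes the "stay"-against-"run" payoff so that staying is a best response regardless of expectations, which is why it eliminates the run equilibrium while creating the moral-hazard problem described above.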
More broadly, my reputation is what others expect me to do. I face choices about whether to confirm or disappoint those expectations. Those choices will affect how others behave in the future. Central bankers think about that a lot.
Like physicists, we economists use models and data to learn. We don't learn new things until we appreciate that our old models cannot explain new data. We then construct new models in light of how their predecessors failed. This explains how we have learned from past depressions and financial crises. And with big data, faster computers, and better algorithms, we might see patterns where once we heard only noise.
SPONSORED: Huawei on how it has boosted its AI development with new innovation – www.channelweb.co.uk
Posted: at 12:48 am
Artificial intelligence (AI) technology is again on the rise, after a series of booms and busts decades ago. Well before the current AI resurgence, Huawei was working to support the explosive growth of AI technology with its continuing stream of ideas and execution in the computing industry.
The core of successful AI applications lies in computing technology, and the potential is hence tremendous. It is estimated that AI computing will account for 80 per cent of total computing requirements by 2025, and that the value of the global computing industry will climb from $1.5tn in 2018 to $2tn in 2023.
Huawei has been investing more in intelligent computing in recent years. The investment has paid off with its core processors, which power both general-purpose and AI computing to unlock the potential of its computing businesses. General computing, in particular, is well supported by cooperation with Intel. With a robust foundation and vision, Huawei has positioned itself well ahead in the computing industry.
Huawei has come up with more cutting-edge computing solutions to serve a data-hungry world. After all, the tech giant aims to enable inclusive AI that is affordable, effective, and reliable, contributing data-efficient, energy-efficient, secure, trusted, and autonomous machine learning capabilities for computer vision, NLP, decision making, and inference.
The full-stack, all-scenario AI computing solution facilitates digital transformation in many industries, including smart city, finance, transportation, electric power, and other industries, involving UAVs, robots, smart stations and more.
The Atlas 900 AI cluster, which consists of thousands of Ascend 910 AI processors, has been a crucial launch. The cluster is currently the fastest in terms of AI training, taking only 59.8 seconds to train a ResNet-50 model, 10 seconds faster than the previous record.
It is capable of delivering 256 to 1024 PFLOPS at FP16, equivalent to the computing power of 500,000 PCs. The powerful Atlas 900 cluster also provides vast computing capabilities for large-scale dataset neural network training, as well as faster astronomical research, weather forecasting, and oil exploration, and faster time-to-market for autonomous driving, to name a few.
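The 500,000-PC equivalence can be sanity-checked with rough arithmetic. Taking the low end of the quoted range and dividing it across that many machines implies about half a teraflop per PC, a plausible desktop figure; the per-PC comparison point is an inference, not a number from the article:

```python
# Rough sanity check of the quoted equivalence: 256 PFLOPS at FP16
# (low end of the 256-1024 range) spread across 500,000 PCs.
# The cluster and PC-count figures are from the article.

cluster_flops = 256e15      # 256 PFLOPS
num_pcs = 500_000

per_pc = cluster_flops / num_pcs
print(per_pc / 1e9)  # prints 512.0  (GFLOPS per PC)
```

At the top of the range (1024 PFLOPS) the same division gives roughly 2 TFLOPS per PC, so the claim is consistent only for the lower figure under typical desktop performance.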
For execution, Huawei has followed up with a holistic strategic plan for intelligent computing, covering four major areas where all players in the technology world are striving. These include architecture innovation, where Huawei continues to invest in and develop its unique Da Vinci architecture, which supports tensor, vector, and scalar computing and takes the computing industry to yet another level. All-scenario processors are another key area of investment, keeping a diverse portfolio of innovative processors that cover a wide range of device-edge-cloud computing scenarios.
Along with various partners and through different channels, Huawei is working to develop an AI ecosystem, bringing more people and organisations into a digital world and creating a fully connected, intelligent world. As such, Huawei seeks to remove business boundaries by providing intelligent computing solutions and products to customers, and to partners in the form of components, while prioritising support for integrated solutions to help partners develop their technologies.
In Europe alone, Huawei will invest 100m over the next five years in its AI Ecosystem Program, helping industry organisations, 200,000 developers, 500 ISV partners, and 50 universities and research institutes to boost innovation. One of the first moves was a partnership with the up-and-coming Danish developer Jacob Knobel on the development of the AI ecosystem.
There will certainly be more work in shaping the AI industry in the region, including promoting partnerships across the EU to help boost AI research and vertical industry development, and nurturing the AI university ecosystem in Europe through academic platforms.
Huawei's new computing strategy will help partners in embracing a world where powerful computing is required, which reflects Huawei's vision for building a fully connected, intelligent world.
Posted: at 12:48 am
Servers inside a Facebook hyperscale data center in North Carolina. (Photo: Rich Miller)
Marc Cram, Director of Sales for Server Technology, explores how the data demands of the IoT, AI, and more are changing the tech and data center industry. Explore what's in store for 2020, right around the corner.
Marc Cram, Director of Sales, Server Technology
In the world of 2019, we are living on the cusp of a major uptake in advanced technologies that promise to bring new levels of prosperity and convenience to people around the globe. The buildout of 5G wireless and fiber networks will create new jobs and new business models while enabling the transport of the anticipated tsunami of data coming from the uptake of IoT sensors, 5G-enabled handsets, and smart city efforts. Artificial intelligence (AI) will be deployed in countless IoT applications and locations to selectively route data traffic, provide spoken language interfaces, and optimize energy consumption.
As we approach 2020, we are already seeing open source software stacks that run on open hardware platforms to deliver edge computing, cloud computing, open radio access networks (O-RAN), IoT applications and artificial intelligence. Kubernetes, Akraino, and TensorFlow are but a few examples of the software efforts deployed or under way. The Open Compute Project (OCP) has become the leading repository for open source hardware designs donated by companies such as Facebook, Microsoft and Google.
The number and variety of sensors and devices having onboard wired or wireless connectivity supporting IPv4 and IPv6 continues to grow at an accelerating pace. Semiconductor companies are enabling a wide variety of protocols for putting things onto a network, such as Zigbee, Z-Wave, Bluetooth, LowPAN, LoRa, LoRaWAN, NFC, Wi-Fi6, LTE, 5G, and 10/100/1000 ethernet.
As smart cities seek to provide greater amounts of data and convenience applications for their citizens, sensors will be deployed to monitor ever wider swaths of infrastructure (Image: Server Technology)
So far, every software application has to run on a power-consuming hardware device, and most data generating sensors consume power. Until the gene twisters of the world figure out how to embed Wi-Fi or some other low-power networking interface into our DNA, communicating with these devices and sensors will continue to require handheld or other mobile power-consuming electronics of some sort to deliver meaningful, actionable intelligence to the individual. Cell phones take DC power in, and consume DC power as they display email or stream video. Edge and cloud servers take AC or DC power in, and consume DC power at the CPU to serve web pages and play online games. Networking hardware takes in AC or DC power, and consumes DC power in order to route and switch data between ports and the internet.
As smart cities seek to provide greater amounts of data and convenience applications for their citizens, sensors will be deployed to monitor ever wider swaths of infrastructure: water, sewer, electric, lighting, bridges, road conditions and pothole locations, street parking, park occupancy, public restroom availability, escalator and elevator availability, door access, subway and light rail schedules, bus schedules, delivery drone locations, and active signage status will all be available to everyone, 24 hours a day.
5G-enabled smartphones will embed dedicated AI chips to improve the quality of spoken interfaces and optimize battery life. New sensors in the smartphone will provide more detailed health and wellness data, while new software applications will deliver heretofore unrivaled experiences through augmented and virtual reality.
Each new application, each piece of information, takes electrical power to be created, transported, stored, analyzed, and displayed. At Legrand, we are committed to providing sustainable power, access, and control solutions for the edge, cloud, core, mobile, and smart city infrastructure used daily by billions of people around the world.
Marc Cram is Director of Sales for Server Technology.
Posted: at 12:48 am
The government is blocking the full publication of an official report assessing the ways artificial intelligence could be deployed across Whitehall and the wider public sector, NS Tech can reveal.
Ministers commissioned a British consultancy firm to review potential applications of the technology in government earlier this year. The report was delivered to the Cabinet Office in April and is likely to play a key role in efforts to transform public sector processes in the coming years.
NS Tech requested a copy of the report, which was produced by Faculty, under transparency rules in August. The government complied with the request, but returned a heavily redacted version, on the grounds that ministers are yet to determine how it will inform policy.
The Government Digital Service and Office for AI published guidelines for using artificial intelligence in the public sector in June, before issuing further guidance on AI procurement in October. However, the government is yet to publish a comprehensive strategy for how it will invest in AI across central government, despite committing to a review in the 2018 autumn budget.
The chancellor, Sajid Javid, had been expected to outline plans for AI investment in this year's spending review, but the only mention of the technology was in the context of a 250m investment in its adoption in healthcare.
During the review, Faculty, which was formerly known as ASI Data Science, surveyed departments and carried out workshops with the Home Office, Ministry of Justice, Ministry of Defence, HMRC, the Department for Work and Pensions, and the Department for Transport. Its consultants also spoke to representatives from the Canadian, Swedish, and German governments, as well as academics at Stanford University.
The report Faculty produced reveals the processes behind its review, but most of the suggested deployments have been redacted. The report does, however, disclose the most common existing applications of AI and machine learning, which include inspections of schools, farms and borders, fraud detection in benefits claims and analysing satellite data for international development work.
The report also reveals reasons why uptake of AI across government has so far been hindered. "The lack of a widely-available and consistently-used modern data architecture means that AI model build and testing is time-consuming and labour-intensive and largely unsupported for production, and data assets cannot be joined," the report states.
The consultants describe cross-department data-sharing practices as "antediluvian", claiming that agreements "take six months to negotiate" while "not providing any transparency or accountability in how they are used". The report adds: "To address these data governance issues, a more consistent and rigorous set of operational responsibilities in relation to data could be applied to publicly funded IT systems within particular departments and discharged by the relevant departmental leader (e.g. the Chief Information Officer)." Low rates of employee retention are also cited as a barrier to adoption.
The report concludes: "Machine learning and artificial intelligence are in the process of disrupting/transforming almost every sector of the economy... However, there are barriers to adoption, some general, some specific to the public sector."
"The Cross Government AI Review has been an extremely useful exercise to understand the current state of departmental capability/readiness/appetite, and to identify the most promising use cases for transformative projects."