The Evolutionary Perspective
Category Archives: Singularity
Posted: February 13, 2017 at 9:37 am
By Debkumar Mitra, Gray Matters
In 2016, a driverless Tesla car crashed, killing the test driver. It was not the first vehicle to be involved in a fatal crash, but it was the first of its kind, and the tragedy opened a can of ethical dilemmas.
With autonomous systems such as driverless vehicles there are two main grey areas: responsibility and ethics. Widely discussed at various forums is a dilemma where a driverless car must choose between killing pedestrians or passengers.
Here, both responsibility and ethics are at play. The cold logic of numbers that define the mind of such systems can sway it either way and the fear is that passengers sitting inside the car have no control.
Any new technology brings a new set of challenges. But it appears that creating artificial intelligence-driven technology products is almost like unleashing Frankenstein's monster.
Artificial Intelligence (AI) is currently at the cutting edge of science and technology. Advances in technology, including aggregate technologies like deep learning and artificial neural networks, are behind many new developments, such as the machine that beat the world champion at Go.
However, though there is great positive potential for AI, many are afraid of what AI could do, and rightfully so. There is still the fear of a technological singularity, a circumstance in which AI machines would surpass the intelligence of humans and take over the world.
Researchers in genetic engineering also face a similar question. This dark side of technology, however, should not be used to decree closure of all AI or genetics research. We need to create a balance between human needs and technological aspirations.
Long before the current commotion over ethical AI technology, celebrated science-fiction author Isaac Asimov came up with his laws of robotics.
Exactly 75 years ago, in the 1942 short story "Runaround," Asimov unveiled an early version of his laws. The current forms of the laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
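The laws amount to a strict priority ordering: a violation of a higher law can never be outweighed by compliance with a lower one. A toy sketch makes the structure concrete (the `Action` fields, scenarios and scoring here are entirely hypothetical, invented only for illustration):

```python
# Toy illustration of Asimov's Three Laws as a strict priority ordering.
# Every candidate action is scored against the laws in order; comparing the
# resulting tuples lexicographically means the First Law always dominates
# the Second, and the Second always dominates the Third.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would violate the First Law
    disobeys_order: bool    # would violate the Second Law
    endangers_robot: bool   # would violate the Third Law

def choose(actions):
    """Pick the action with the least severe violation, laws checked in order."""
    def severity(a):
        # Lower tuple sorts first; False < True, so fewer/lesser violations win.
        return (a.harms_human, a.disobeys_order, a.endangers_robot)
    return min(actions, key=severity)

candidates = [
    Action("swerve into crowd", harms_human=True,  disobeys_order=False, endangers_robot=False),
    Action("ignore stop order", harms_human=False, disobeys_order=True,  endangers_robot=False),
    Action("drive into wall",   harms_human=False, disobeys_order=True,  endangers_robot=True),
]
print(choose(candidates).name)  # ignore stop order
```

Disobeying an order beats harming a human, and merely disobeying beats disobeying while also destroying the robot. The hard part, of course, is everything this sketch assumes away: deciding whether a real-world action "harms a human" at all.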
Given the pace at which AI systems are developing, there is an urgent need to put in some checks and balances so that things do not go out of hand.
There are many organisations now looking at the legal, technical, ethical and moral aspects of a society driven by AI technology. The Institute of Electrical and Electronics Engineers (IEEE) already has Ethically Aligned Design, an AI framework addressing these issues, in place. AI researchers are drawing up a laundry list similar to Asimov's laws to help people engage in a more fearless way with this beast of a technology.
In January 2017, the Future of Life Institute (FLI), a charity and outreach organisation, hosted its second Beneficial AI Conference. There, AI experts developed the Asilomar AI Principles, which aim to ensure that AI remains beneficial, not harmful, to the future of humankind.
The key questions that came out of the conference are: How can we make future AI systems robust, so that they do what we want without malfunctioning or getting hacked? How can we grow our prosperity through automation while maintaining people's resources and purpose? How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? What set of values should AI be aligned with, and what legal and ethical status should it have?
Ever since they unshackled the power of the atom, scientists and technologists have been at the forefront of the movement emphasising science for the betterment of humankind. This duty was forced upon them when the first atom bomb was manufactured in the US. Little did they realise that the search for the structure of the atom could give rise to such a nasty subplot. With AI we are in the same situation, or maybe a worse one.
No wonder that at the IEEE meeting that gave birth to the ethical AI framework, the dominant thought was that humans and all living beings must remain at the centre of all AI discussions. People must be informed at every level, right from the design stage to the development of AI-driven products for everyday use.
While it is a laudable effort to develop ethically aligned technologies, it raises another question, one heard at various AI conferences: Are humans ethical?
(The author is the CEO of Gray Matters. Views expressed above are his own)
Ready to Change the World? Apply Now for Singularity University’s 2017 Global Solutions Program – Singularity Hub
Posted: February 11, 2017 at 8:41 am
I'm putting out a call for brilliant entrepreneurs who want to enroll in Singularity University's Global Solutions Program (GSP).
The GSP is where you'll learn about exponentially growing technology, dig into humanity's Global Grand Challenges (GGCs) and then start a new company, product or service with the goal of positively impacting 1 billion people within 10 years.
We call this a ten-to-the-ninth (10^9) company.
This post is about who should apply, how to apply and the over $1.5 million in scholarships being provided by Google for entrepreneurs.
SU's GSP program runs from June 17, 2017 until August 19, 2017.
Applications are due: February 21, 2017.
Eight years ago, Ray Kurzweil and I cofounded Singularity University to search the world for the most brilliant, world-class problem-solvers, to bring them together, and to give them the resources to create companies that impact billions.
The GSP is an intensive 24/7 experience at the SU campus at the NASA Ames Research Center in Mountain View, CA, in the heart of Silicon Valley.
During the nine-week program, 90 entrepreneurs, engineers, scientists, lawyers, doctors and innovators from around the world learn from our expert faculty about infinite computing, AI, robotics, 3D printing, networks/sensors, synthetic biology, entrepreneurship, and more, and focus on building and developing companies to solve the global grand challenges (GGCs).
GSP participants form teams to develop unique solutions to GGCs, with the intent to form a company that, as I mentioned above, will positively impact the lives of a billion people in 10 years or less.
Over the course of the summer, participants listen to and interact with top Silicon Valley executive guest speakers, tour facilities like GoogleX, and spend hours getting their hands dirty in our highly advanced maker workshop.
At the end of the summer, the best of these startups will be asked to join SU Labs, where they will receive additional funding and support to take the company to the next level.
I am pleased to announce that thanks to a wonderful partnership with Google, all successful applicants will be fully subsidized by Google to participate in the program.
In other words, if accepted into the program, the GSP is free.
The Global Solutions Program (GSP) is SU's flagship program for innovators from a wide diversity of backgrounds, geographies, perspectives, and expertise. At GSP, you'll get the mindset, tools, and network to help you create moonshot innovations that will positively transform the future of humanity. If you're looking to create solutions to help billions of people, we can help you do just that.
Key program dates:
This program will be unlike any we've ever doneand unlike any you've ever seen.
If you feel like you meet the criteria, apply now (click here).
Applications close February 21st, 2017.
If you know of a friend or colleague who would be a good fit for this program, please share this post with them and ask that they fill out an application.
Posted: at 8:41 am
Accelerating technology has been creating a lot of worry over job loss to automation, especially as machines become capable of doing things they never could in the past. A recent report released by the McKinsey Global Institute estimated that 49 percent of job activities could currently be fully automated; that equates to 1.1 billion workers globally.
What gets less buzz is the other side of the coin: automation helping to create jobs. Believe it or not, it does happen, and we can look at one of the world's largest retailers to see that.
Thanks in part to more robots in its fulfillment centers, Amazon has been able to drive down shipping costs and pass those savings on to customers. Cheaper shipping made more people use Amazon, and the company hired more workers to meet this increased demand.
So what do the robots do, and what do the people do?
Tasks involving fine motor skills, judgment, or unpredictability are handled by people. They stock warehouse shelves with items that come off delivery trucks. A robot could do this, except that to maximize shelf space, employees are instructed to stack items according to how they fit on the shelf rather than grouping them by type.
Robots can only operate in a controlled environment, performing regular and predictable tasks. They've largely taken over heavy lifting, including moving pallets between shelves (good news for warehouse workers' backs) as well as shuttling goods from one end of a warehouse to another.
With current technology, building robots able to stock shelves based on available space would be more costly and less practical than hiring people to do it.
Similarly, for outgoing orders, robots do the lifting and transportation, but not the selecting or packing. A robot brings an entire shelf of goods to an employee's workstation, where the employee selects the correct item and puts it on a conveyor belt for another employee to package. By this time, the shelf-carrying robot is already returning the first shelf and retrieving another.
Since loading trucks also requires spatial judgment and can be unpredictable (space must be maximized here even more than on shelves), people take care of this too.
Ever since acquiring Boston-based robotics company Kiva Systems in March 2012, at a price tag of $775 million, Amazon has been ramping up its use of robots and is continuing to pour funds into automation research, both for robots and for delivery drones.
In 2016 the company grew its robot workforce by 50 percent, from 30,000 to 45,000. Far from laying off 15,000 people, though, Amazon increased human employment by around 50 percent in the same period of time.
Even better, the company's Q4 2016 earnings report included the announcement that it plans to create more than 100,000 new full-time, full-benefit jobs in the US over the next 18 months. New jobs will be based across the country and will include various types of experience, education, and skill levels.
So how tight is the link between robots and increased productivity? Would there be even more jobs if people were doing the robots work?
Well, picture an employee walking (or even running) around a massive warehouse, locating the right shelf, climbing a ladder to reach the item he's looking for, grabbing it, climbing back down the ladder (carefully, of course), and walking back to his work station to package it for shipping. Now multiply the time that whole process took by the hundreds of thousands of packages shipped from Amazon warehouses each day.
Lots more time. Lots less speed. Fewer packages shipped. Higher costs. Lower earnings. No growth.
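The thought experiment above can be put into back-of-envelope numbers. Every figure in this sketch is hypothetical, chosen only to show the shape of the trade-off, not Amazon's actual operating data:

```python
# Back-of-envelope throughput comparison: a worker walking the aisles
# versus a robot delivering shelves to a stationary picker.
# All timings are hypothetical, for illustration only.
SECONDS_PER_HOUR = 3600

def picks_per_hour(seconds_per_pick):
    """How many items one picker can process per hour at a given pace."""
    return SECONDS_PER_HOUR / seconds_per_pick

# Locate shelf, climb ladder, grab item, walk back: call it 5 minutes.
human_walking = picks_per_hour(seconds_per_pick=300)

# Robot brings the shelf to the workstation: call it 30 seconds per item.
robot_assisted = picks_per_hour(seconds_per_pick=30)

print(f"walking: {human_walking:.0f} picks/hour")          # walking: 12 picks/hour
print(f"robot-assisted: {robot_assisted:.0f} picks/hour")  # robot-assisted: 120 picks/hour
print(f"speedup: {robot_assisted / human_walking:.0f}x")   # speedup: 10x
```

Even with made-up numbers, the point survives: an order-of-magnitude gain per picker compounds across hundreds of thousands of daily packages, which is where the lower costs and extra hiring come from.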
Though it may not last forever, right now Amazon's robot-to-human balance is clearly in employees' favor. Automation can take jobs away, but sometimes it can create them too.
Image Credit: Tabletmonkeys/YouTube
Posted: February 10, 2017 at 3:33 am
On Saturday, Feb. 4, at The Grauer School in Encinitas, the three Rowe FTC robotics teams -- Singularity, Logitechies and Intergalactic Dragons -- ended up being the 1st, 2nd and 3rd place captains in the League Championship's exciting alliance rounds, which culminated a hard-fought event. David Warner, who heads the school's FTC robotics program, said, "I'm so proud of our students, parent mentors and coaches, who worked countless hours to achieve success. Being the youngest teams at the championship, this is truly remarkable and a testament to their hard work!"
The Logitechies and Intergalactic Dragons alliance teams faced off in an exciting third game to determine who would move on to face the Singularity alliance in the championship round. The Intergalactic Dragons won, but when they moved on to the final match to determine the champion, Singularity's 90-point autonomous program was the key to victory as their alliance put up well over 200 points in the two final games.
In addition to competing in the alliance matches, the Logitechies team was also a finalist in the judged Connect and PTC awards.
Singularity earned top honors of the day as they advance, along with the Intergalactic Dragons, to the San Diego Regionals at Francis Parker High School on Feb. 25.
Posted: at 3:33 am
Quantum computers promise to crack some of the world's most intractable problems by super-charging processing power. But the technical challenges involved in building these machines mean they've still achieved just a fraction of what they are theoretically capable of.
Now physicists from the UK have created a blueprint for a soccer-field-sized machine that they say could reach the blistering speeds needed to solve problems beyond the reach of today's most powerful supercomputers.
The system is based on a modular design interlinking multiple independent quantum computing units, which could be scaled up to almost any size. Modular approaches have been suggested before, but innovations such as a far simpler control system and inter-module connection speeds 100,000 times faster than the state of the art make this the first practical proposal for a large-scale quantum computer.
"For many years, people said that it was completely impossible to construct an actual quantum computer. With our work we have not only shown that it can be done, but now we are delivering a nuts and bolts construction plan to build an actual large-scale machine," professor Winfried Hensinger, head of the Ion Quantum Technology Group at the University of Sussex, who led the research, said in a press release.
The technology at the heart of the individual modules is already well-established and relies on trapping ions (charged atoms) in magnetic fields to act as qubits, the basic units of information in quantum computers.
While bits in conventional computers can have a value of either 1 or 0, qubits take advantage of the quantum mechanical phenomenon of superposition, which allows them to be both at the same time.
As Elizabeth Gibney explains in Nature, this is what makes quantum computers so incredibly fast. The set of qubits comprising the memory of a quantum computer could exist in every possible combination of 1s and 0s at once. Where a classical computer has to try each combination in turn, a quantum computer could process all those combinations simultaneously.
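The "every possible combination at once" claim can be made concrete with a purely classical sketch: an n-qubit register is described by 2^n complex amplitudes, one per combination of 1s and 0s. Simulating that classically costs memory exponential in n, which is exactly the resource a quantum computer holds natively (real hardware, of course, is not programmed this way, and measurement collapses the state to a single combination):

```python
# Classical sketch of a qubit register's statevector: 2**n amplitudes
# for n qubits. Applying a Hadamard gate to every qubit of |0...0> puts
# equal amplitude on all 2**n basis states, the uniform superposition.
import math

def uniform_superposition(n_qubits):
    """Statevector with equal amplitude on each of the 2**n basis states."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

state = uniform_superposition(10)
print(len(state))                            # 1024 basis states for just 10 qubits
print(sum(a * a for a in state))             # probabilities sum to 1.0
```

Ten qubits already mean 1,024 amplitudes; the paper's two billion qubits would mean 2 raised to the two-billionth power, which is why no classical memory can keep up.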
In a paper published in the journal Science Advances last week, researchers outline designs for modules containing roughly 2,500 qubits and suggest interlinking thousands of them together to create a machine containing two billion qubits. For comparison, Canadian firm D-Wave, the only commercial producer of quantum computers, just brought out its latest model featuring 2,000 qubits.
This is not the first time a modular system like this has been suggested, but previous approaches have recommended using light waves traveling through fiber optics to link the units. This results in interaction rates between modules far slower than the quantum operations happening within them, putting a handbrake on the system's overall speed. In the new design, the ions themselves are shuttled from one module to another using electric fields, which results in 100,000 times faster connection speeds.
The system also has a much simpler way of controlling qubits. Previous designs required lasers to be carefully targeted at each ion, an enormous engineering challenge when dealing with billions of qubits. Instead, the new system uses microwave fields and the careful application of voltages, which is much easier to scale up.
The researchers concede there are still considerable technical challenges to building a device on the scale they have suggested, not to mention the cost. But they have already announced plans to build a prototype based on the design at the university, at a cost of £1-2 million.
"While this proposal is incredibly challenging, I wish more in the quantum community would think big like this," Christopher Monroe, a physicist at the University of Maryland who has worked on trapped-ion quantum computing, told Nature.
In their paper, the researchers predict their two billion qubit system could find the prime factors of a 617-digit-long number in 110 days. This is significant because many state-of-the-art encryption systems rely on the fact that factoring large numbers can take conventional computers thousands of years. This is why many in the cybersecurity world are nervous about the advent of quantum computing.
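The link between factoring and encryption comes down to asymmetry of effort: multiplying two primes is instant, but recovering them from the product classically takes on the order of sqrt(n) steps by the naive method (and still super-polynomial time by the best known classical algorithms), while Shor's algorithm on a large quantum computer would be fast. A small sketch of the naive approach shows why the 617-digit case is hopeless classically:

```python
# Naive integer factorization by trial division: roughly sqrt(n) steps.
# Fine for toy numbers, utterly hopeless for the 617-digit moduli used
# in RSA-style encryption, which is the gap Shor's algorithm would close.
def trial_division(n):
    """Return the prime factors of n in ascending order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is prime
    return factors

print(trial_division(3 * 5 * 7))              # [3, 5, 7]
# A 13-digit semiprime already needs ~10**6 steps; a 617-digit one
# would need ~10**308 -- more steps than atoms in the observable universe.
print(trial_division(999983 * 1000003))       # [999983, 1000003]
```

The step count grows exponentially in the number of digits, which is why adding a few digits to a key costs defenders almost nothing but multiplies the classical attacker's work astronomically.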
These researchers aren't the only ones working on bringing quantum computing into the real world, though. Google, Microsoft and IBM are all developing their own systems, and D-Wave recently open-sourced a software tool that helps those without a background in quantum physics program its machines.
All that interest is due to the enormous potential of quantum computing to solve problems as diverse and complex as developing drugs for previously incurable diseases, devising new breeds of materials for high-performance superconductors, magnets and batteries, and even turbo-charging machine learning and artificial intelligence.
"The availability of a universal quantum computer may have a fundamental impact on society as a whole," said Hensinger. "Without doubt it is still challenging to build a large-scale machine, but now is the time to translate academic excellence into actual application, building on the UK's strengths in this ground-breaking technology."
Image Credit: University of Sussex/YouTube
Posted: at 3:33 am
Singularity Containers for Science, Reproducibility, and HPC
Explore how Singularity liberates non-privileged users and host resources (such as interconnects, resource managers, file systems, and accelerators), allowing users to take full control to set up and run in their native environments. This talk explores ...
Posted: February 9, 2017 at 6:30 am
Over the holidays, I went for a drive with a Tesla. With, not in, because the car was doing the driving.
Hearing about autonomous vehicles is one thing; experiencing it was something entirely different. When the parked Model S calmly drove itself out of the garage, I stood gaping in awe, completely mind-blown.
If this year's Consumer Electronics Show is any indication, self-driving cars are zooming into our lives, fast and furious. Aspects of automation are already in use: Tesla's Autopilot, for example, allows cars to control steering, braking and switching lanes. Elon Musk, CEO of Tesla, has gone so far as to pledge that by 2018, you will be able to summon your car from across the country, and it'll drive itself to you.
So far, the track record for autonomous vehicles has been fairly impressive. According to a report from the National Highway Traffic Safety Administration, Tesla's crash rate dropped by about 40 percent after the company turned on its first-generation Autopilot system. This week, with the introduction of gen two to newer cars equipped with the necessary hardware, Musk is aiming to cut the number of accidents by another whopping 50 percent.
But when self-driving cars mess up, we take note. Last year, a Tesla vehicle slammed into a white truck while Autopilot was engaged (apparently confusing it with the bright, white sky), resulting in the company's first fatality.
So think about this: would you entrust your life to a robotic machine?
For anyone to even start contemplating yes, the cars have to be remarkably safe: fully competent in day-to-day driving, and able to handle any emergency traffic throws their way.
Unfortunately, those edge cases also happen to be the hardest problems to solve.
To interact with the world, autonomous cars are equipped with a myriad of sensors. Google's button-nosed Waymo car, for example, relies on GPS to broadly map out its surroundings, then further captures details using its cameras, radar and laser sensors.
These data are then fed into software that figures out what actions to take next.
As with any kind of learning, the more scenarios the software is exposed to, the better the self-driving car learns.
Getting that data is a two-step process: first, the car has to drive thousands of hours to record its surroundings, which are used as raw data to build 3D maps. That's why Google has been steadily taking its cars out on field trips (some two million miles to date), with engineers babysitting the robocars to flag interesting data and potentially take over if needed.
This is followed by thousands of hours of labeling: manually annotating the maps to point out roads, vehicles, pedestrians and other subjects. Only then can researchers feed the dataset, so-called labeled data, into the software for it to start learning the basics of a traffic scene.
The strategy works, but it's agonizingly slow and tedious, and the amount of experience the cars get is limited. Since emergencies tend to fall into the category of unusual and unexpected, it may take millions of miles before the car encounters dangerous edge cases to test its software, and, of course, those encounters put both car and human at risk.
An alternative, increasingly popular approach is to bring the world to the car.
Recently, Princeton researchers Ari Seff and Jianxiong Xiao realized that instead of manually collecting maps, they could tap into a readily available repertoire of open-sourced 3D maps such as Google Street View and OpenStreetMap. Although these maps are messy and in some cases can have bizarre distortions, they offer a vast amount of raw data that could be used to construct datasets for training autonomous vehicles.
Manually labeling that data is out of the question, so the team built a system that can automatically extract road featuresfor example, how many lanes there are, if theres a bike lane, what the speed limit is and whether the road is a one-way street.
Using a powerful technique called deep learning, the team trained their AI on 150,000 Street View panoramas until it could confidently discard artifacts and correctly label any given street attribute. The AI performed so well that it matched humans on a variety of labeling tasks, but at a much faster speed.
"The automated labeling pipeline introduced here requires no human intervention, allowing it to scale with these large-scale databases and maps," concluded the authors.
With further improvement, the system could take over the labor-intensive job of labeling data. In turn, more data means more learning for autonomous cars and potentially much faster progress.
"This would be a big win for self-driving technology," says Dr. John Leonard, a professor specializing in mapping and automated driving at MIT.
Other researchers are eschewing the real world altogether, instead turning to hyper-realistic gaming worlds such as Grand Theft Auto V.
For those not in the know, GTA V lets gamers drive around the convoluted roads of a city roughly one-fifth the size of Los Angeles. It's an incredibly rich world: the game boasts 257 types of vehicles and 7 types of bikes, all based on real-world models. The game also simulates half a dozen kinds of weather conditions, in all giving players access to a huge range of scenarios.
It's a total data jackpot. And researchers are noticing.
In a study published in mid-2016, Intel Labs teamed up with German engineers to explore the possibility of mining GTA V for labeled data. By looking at any road scene in the game, their system learned to classify different objects in the road (cars, pedestrians, sidewalks and so on), thus generating huge amounts of labeled data that can then be fed to self-driving cars.
Of course, datasets extracted from games may not necessarily reflect the real world. So a team from the University of Michigan trained two algorithms to detect vehicles, one using data from GTA V and the other using real-world images, and pitted them against each other.
The result? The game-trained algorithm performed just as well as the one trained with real-life images, although it needed about 100 times more training data to reach the performance of the real-world algorithm. That's not a problem, since generating images in games is quick and easy.
But it's not just about datasets. GTA V and other hyper-realistic virtual worlds also allow engineers to test their cars in uncommon but highly dangerous scenarios that they may one day encounter.
In virtual worlds, AIs can tackle a variety of traffic hazards (sliding on ice, hitting a wall, avoiding a deer) without worry. And if the cars learn how to deal with these edge cases in simulations, they may have a higher chance of surviving one in real life.
So far, none of the above systems have been tested on physical self-driving cars.
But with the race towards full autonomy proceeding at breakneck speed, it's easy to see companies incorporating these systems to give themselves an edge.
Perhaps more significant is that these virtual worlds represent a subtle shift towards the democratization of self-driving technology. Most of them are open source, in that anyone can hop on board to create and test their own AI solutions for autonomous cars.
And who knows, maybe the next big step towards full autonomy wont be made inside Tesla, Waymo, or any other tech giant.
It could come from that smart kid next door.
Image Credit: Shutterstock
Posted: February 7, 2017 at 10:39 pm
Feeling run down? Have a case of the sniffles? Maybe you should have paid more attention to your smartwatch.
No, that's not the pitch line for a new commercial peddling wearable technology, though no doubt a few companies will be interested in the latest research published in PLOS Biology for their next advertising campaign. It turns out that some of the data logged by our personal tracking devices regarding health (heart rate, skin temperature, even oxygen saturation) appear useful for detecting the onset of illness.
"We think we can pick up the earliest stages when people get sick," says Michael Snyder, a professor and chair of genetics at Stanford University and senior author of the study, "Digital Health: Tracking Physiomes and Activity Using Wearable Biosensors Reveals Useful Health-Related Information."
Snyder said his team was surprised that the wearables were so effective in detecting the start of the flu, or even Lyme disease, but in hindsight the results make sense: wearables that track different parameters such as heart rate continuously monitor each vital sign, producing a dense set of data against which aberrations stand out, even in the least sensitive wearables.
"[Wearables are] pretty powerful because they're a continuous measurement of these things," notes Snyder during an interview with Singularity Hub.
The researchers collected data for up to 24 months on a small study group, which included Snyder himself. Known as Participant #1 in the paper, Snyder benefited from the study when the wearable devices detected marked changes in his heart rate and skin temperature from his normal baseline. A test about two weeks later confirmed he had contracted Lyme disease.
In fact, during the nearly two years while he was monitored, the wearables detected 11 periods with elevated heart rate, corresponding to each instance of illness Snyder experienced during that time. It also detected anomalies on four occasions when Snyder was not feeling ill.
An expert in genomics, Snyder said his team was interested in looking at the effectiveness of wearables technology to detect illness as part of a broader interest in personalized medicine.
"Everybody's baseline is different, and these devices are very good at characterizing individual baselines," Snyder says. "I think medicine is going to go from reactive (measuring people after they get sick) to proactive: predicting these risks."
That's essentially what genomics is all about: trying to catch disease early, he notes. "I think these devices are set up for that," Snyder says.
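The core idea — characterize an individual's baseline, then flag sustained deviations — can be sketched in a few lines. This is a minimal illustration in the spirit of the approach described above, not the study's actual software; the readings and the 3-sigma threshold are hypothetical:

```python
# Minimal per-person baseline anomaly detection: learn an individual's
# resting-heart-rate baseline, then flag readings that sit several
# standard deviations away from it. All numbers here are hypothetical.
import statistics

def fit_baseline(readings):
    """Characterize an individual's baseline as (mean, standard deviation)."""
    return statistics.mean(readings), statistics.stdev(readings)

def flag_anomalies(readings, baseline, z_threshold=3.0):
    """Return indices of readings deviating more than z_threshold sigmas."""
    mean, sd = baseline
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / sd > z_threshold]

# Two weeks of (hypothetical) daily resting heart rates while healthy...
healthy = [62, 64, 61, 63, 65, 62, 60, 63, 64, 62, 61, 63, 62, 64]
baseline = fit_baseline(healthy)

# ...then a new window in which days 3-5 show a sustained elevation.
window = [63, 62, 74, 78, 76, 64, 62]
print(flag_anomalies(window, baseline))  # [2, 3, 4]
```

Because everybody's baseline is fitted individually, a heart rate that is normal for one person can be an anomaly for another — which is the "proactive medicine" point Snyder is making.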
The cost savings could be substantial if a better preventive strategy for healthcare can be found. A landmark report in 2012 from the Cochrane Collaboration, an international group of medical researchers, analyzed 14 large trials with more than 182,000 people. The findings: Routine checkups are basically a waste of time. They did little to lower the risk of serious illness or premature death. A news story in Reuters estimated that the US spends about $8 billion a year in annual physicals.
The study also found that wearables have the potential to detect individuals at risk for Type 2 diabetes. Snyder and his co-authors argue that biosensors could be developed to detect variations in heart rate patterns, which tend to differ for those experiencing insulin resistance.
Finally, the researchers also noted that wearables capable of tracking blood oxygenation provided additional insights into physiological changes caused by flying. While a drop in blood oxygenation during flight due to changes in cabin pressure is a well-known medical fact, the wearables recorded a drop in levels during most of the flight, which was not known before. The paper also suggested that lower oxygen in the blood is associated with feelings of fatigue.
Speaking while en route to the airport for yet another fatigue-causing flight, Snyder is still tracking his vital signs today. He hopes to continue the project by improving on the software his team originally developed to detect deviations from baseline health and sense when people are becoming sick.
In addition, Snyder says his lab plans to make the software work on all smart wearable devices, and eventually develop an app for users.
"I think [wearables] will be the wave of the future for collecting a lot of health-related information. It's a very inexpensive way to get very dense data about your health that you can't get in other ways," he says. "I do see a world where you go to the doctor and they've downloaded your data. They'll be able to see if you've been exercising, for example."
"It will be very complementary to how healthcare currently works."
Image Credit: Shutterstock
Posted: at 10:39 pm
Greg Kurtzer, LBNL
Explore how Singularity liberates non-privileged users and host resources (such as interconnects, resource managers, file systems, and accelerators), allowing users to take full control to set up and run in their native environments. This talk explores how Singularity combines software packaging models with minimalistic containers to create very lightweight application bundles which can be simply executed and contained completely within their environment, or used to interact directly with the host file systems at native speeds. A Singularity application bundle can be as simple as a single binary application or as complicated as an entire workflow, and is as flexible as you will need.
Gregory M. Kurtzer is currently the IT HPC Systems Architect and Technology Developer at Lawrence Berkeley National Laboratory. His specialties include Linux (environment, services and deep system internals); open source and development (Perl, C, SQL, PHP, HTML, etc.); and HPC applications, administration, automation and provisioning of large-scale system architectures. Along with his solid reputation for sparking new trends, Kurtzer has created, founded, built and contributed to communities with install counts in the millions of users, and numerous breakthrough projects including CentOS Linux, Caos Linux, Perceus, Warewulf and, most recently, Singularity.
Posted: at 8:34 am
On Feb. 6, Jeremi Johnson, aka 10th Letter, dropped an unannounced new album, titled Nature In Singularity. The recording shifts 10th Letter's gears a bit, delving into a more abstract wash of ambient samples and electronic soundscapes than anything Johnson has previously released. As the title suggests, the album is a conceptual offering that examines nature in the time of the Singularity: a flash point in human evolution when behavior and civilization's rules become governed by advanced technology in ways that are not yet comprehensible.
The audio and video halves of Nature In Singularity give a glimpse into a day in the life of an artificially intelligent being taking a meditative stroll through various terrestrial terrains, happy that humans are no longer around to destroy the environment.
Nature In Singularity debuted live in a performance at Tech Square Labs on Jan. 28, during an evening of music and arts dedicated to exploring themes around the context of the Singularity. Johnson was tasked with tackling nature. The material was initially intended for a one-off performance, but the theme and the imagery weighed heavy on his mind. "Technology is in a place where some really crazy and really scary things are happening," Johnson says. "We're living in a time when human intelligence is under assault. Journalism is under assault. Facts are under assault. Technology has progressed so much that I don't think we can turn back. We're at the event horizon for the Singularity, and this is how it all begins."
Nature In Singularity will be released as a cassette, and possibly as a DVD, later this year. In the meantime, Johnson is wrapping up work on an album with Saira Raza, titled Bhadda Saya, which should arrive in late February or early March.
10th Letter plays Mammal Gallery on Thurs., Feb. 9, with CJ Boyd, Danny Bailey and Rasheeda Ali, and Dux. $5. 9 p.m. 91 Broad St. S.W. http://www.mammalgallery.com.