Singularity – Microsoft Research

Singularity was a multi-year research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We built a research operating system prototype (called Singularity), extended programming languages, and developed new techniques and tools for specifying and verifying program behavior.

Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype, SIPs are extremely cheap; they run in ring 0 in the kernel's address space.

Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.
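Because SIPs cannot share memory, all interaction between them happens by explicit message passing. As a loose analogy only (ordinary OS processes and a Python multiprocessing Pipe stand in here for SIPs and their channels; this is not Singularity code), the no-shared-memory discipline looks like this:

```python
# Analogy only: Python OS processes standing in for Singularity's SIPs.
# Like SIPs, the two processes below share no mutable memory; all
# communication happens explicitly over a channel (here, a Pipe).
from multiprocessing import Pipe, Process


def driver(conn) -> None:
    # The "driver" process owns its state privately; the parent cannot
    # reach into it, only exchange messages with it.
    request = conn.recv()
    conn.send("handled:" + request)
    conn.close()


def run_isolated(request: str) -> str:
    parent_end, child_end = Pipe()
    worker = Process(target=driver, args=(child_end,))
    worker.start()
    parent_end.send(request)
    reply = parent_end.recv()
    worker.join()
    return reply
```

Because every interaction is an explicit message rather than a shared pointer, the set of possible exchanges can be reasoned about statically, which is the kind of property the broader verification described above relies on.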


Singularity Group to Host SingularityU India Summit on November 14 and 15 in Bangalore – Devdiscourse

The conference will bring together hundreds of leaders from India, Southeast Asia, Australia, and New Zealand to discuss exponential technologies as a tool to shape the future and their applications to individuals, organizations, and society.

Bangalore, Karnataka, India (NewsVoir) – Singularity Group, a global impact organization that helps leaders leverage exponential technology to shape businesses and societies in the years ahead, today announced the SingularityU India Summit, "Re:Imagine the Future." Leaders from the worlds of technology, business, science, and entrepreneurship will attend the event on November 14 and 15, 2022, at the Conrad Hilton in Bangalore. Machani Robotics will serve as the Diamond Sponsor, with additional sponsorship opportunities still available for corporations, government organizations, and venture capital groups interested in exponential technologies as a tool to shape the future. The two-day event, powered by HeroVired and in association with INK Talks and Machani Robotics, will bring together innovative leaders and institutions and create a plan to pole-vault into the future. Over the course of two days, more than 20 experts will cover diverse topics, including the future of work, finance, education, and AI, empowering a network of globally connected changemakers and leaders across India. Attendees can participate in master classes, workshops, and networking sessions discussing the future of work, electric mobility, education, cleantech, and more.

Speakers include Rob Nail, serial entrepreneur, associate founder, faculty member, and former CEO of Singularity University; Shuo Chen, Singularity Expert: Entrepreneurship and Blockchain; Taddy Bletcher, Singularity Expert: Education; Lakshmi Pratury, co-founder and CEO of INK Talks; Prerna Jhunjhunwala, founder of Creative Galileo; and others. The Summit will host a second stage for Indian entrepreneurs to showcase their companies and the impact they are making on the Indian startup community and the country's larger ecosystem.

"In this post-pandemic world, Singularity will be your guide as we work together to accelerate our journey towards a more equitable and sustainable future for us all," said Dermot Mee, COO of Singularity Group. "If our ambition is to thrive, we need to collectively reimagine and design the future by shifting our mindset to think exponentially. We need to be the dreamers who learn to turn our dreams into reality."

For ticketing information and registration, please visit http://www.singularityuindia.com.

About Singularity Group

Singularity Group is a global impact organization that looks into the future to help leaders better understand how exponential technology will shape businesses and societies in the years ahead. Through a deeper understanding of the accelerated pace of change and the role that technology plays in it, these leaders create tremendous positive impact that improves the wellbeing of people and the health of the planet. Over the past decade, Singularity has worked with more than 75,000 leaders drawn from corporations, nonprofits, governments, investors, and academia. With 250,000 impact-minded innovators across the Singularity network, over 125 chapters and partners across six continents, and a strong digital presence, Singularity Group reaches millions of people each month. The organization has launched over 5,000 social impact initiatives, and its alumni have started more than 200 companies.

For more information, visit su.org.



We need to manage AI better as we are approaching the Creative Singularity – RedShark News

David Shapton on why we can't ignore AI anymore and how, without active management, AI will be a threat to artists and creators, not an opportunity.

If you've read my columns for the last ten years, you'll know I'm the opposite of a Luddite. I embrace new technology because I see it as a means to change the world for good. But whatever new tools technology brings us, it's how we use them that will determine their net effect on the level of happiness and well-being in our society.

And - to be perfectly clear - I see AI the same way. We're suddenly starting to see AI doing things for us that are supposed to be impossible: not just difficult, but properly impossible.

Like being able to "unmix" a musical recording. Want the dry, isolated lead vocal without the cacophony of the musical accompaniment? There's a web service for that.

Need to extract a person's portrait from a photograph with a distracting or unattractive background? A new version of the iPhone operating system will instantly do that for you, even with animals, objects, and human faces.

Can't read Welsh, Albanian or Icelandic? There's a translation app for that.

Need a background for your film that you're shooting on a virtual set? Just say the words "Mythical world populated by dragons and slightly scary looking tall people with mountains in the background and a spooky castle in the mid-distance", and... there's your background.

Amazing. What a giant leap forward. AI is doing things that shouldn't even be possible. The problem is that it's doing something that artists usually get paid to do. And that, at least, should provoke a reaction from us.

It will raise fundamental questions about who we are and what we can and should do. And it's not as clear-cut as you might think. We have to ask about skill, and not just skill as in expert-level muscle memory, but the talent involved in assessing a task or project, organising it, and delivering a pleasing outcome without being ridiculously expensive (for example).

Let's step back for a minute and look at why this is becoming such an issue.

After several false starts, AI is taking off, and it's happening at a pace that surprises people. The key to understanding this is that word: "surprise". That's because we're accustomed to a world where we can see things coming. So, even though nobody can predict the future, we can identify trends. If you keep up with the news, then the chances are that nothing much will surprise you, especially if it's news about your own field of expertise.

But imagine a world where we can no longer make predictions based on trends. Where not even experts can know what's coming next. That's the stage we're at with AI, and it is a potential problem, as well as being a breathtaking display of technical virtuosity.

It's beginning to feel like we're approaching some sort of Singularity.

For those unfamiliar with the technological Singularity, it's a concept that Ray Kurzweil brought to the surface in his 2005 book The Singularity Is Near. There are several mutually compatible definitions of Singularity. I think the most useful one says that the technological Singularity is when the rate of progress is so steep that it appears to be a vertical line from our perspective. In other words, you get infinite progress in absolutely no time at all.

That's not likely to happen yet, but we're already finding that the rate of progress is steeper than we can comprehend.

One effect of that is that we start to be surprised by the rate of progress. Even experts are beginning to be surprised by AI. I'm not an expert, but I'm reasonably well informed, and I am extremely, totally surprised by the leaps that AI is making.

If you take Moore's law in its prime, progress was around 40% per year. Effectively, that's like compound interest. Add that into the mix each year, and you arrive at the sort of progress in computers that we've seen over the last four decades or so. Remember that percentage while I tell you that last year, Nvidia - arguably the leading developer of co-processors for AI - said that AI is developing at the rate of 116% per year. That's enough to give us a million-fold increase in ten years. On top of that, AI is capable of improving itself - it's "intelligent", after all. (But let's not be too picky about the definition of "intelligence" here!).
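That compounding claim is easy to check for yourself. A minimal sketch (the 40% and 116% annual figures come from the article; the helper function name is mine):

```python
def cumulative_growth(annual_rate: float, years: int) -> float:
    """Total multiplier after compounding at annual_rate (0.40 = 40%/year)."""
    return (1.0 + annual_rate) ** years

moore_decade = cumulative_growth(0.40, 10)  # roughly 29x over a decade at 40%/year
ai_decade = cumulative_growth(1.16, 10)     # roughly 2,200x over a decade at 116%/year

# Note: a literal million-fold decade would need roughly 4x (~300%) per year,
# since 10 ** (6 / 10) is about 3.98; the article's million-fold figure
# presumably folds in gains beyond the 116%/year rate alone.
```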

I remember talking to some digital video engineers at JVC around the start of this century. I was suggesting an approach to video encoding that would be pretty radical. My engineer friends told me that you'd need a thousand-fold increase in technology to do that. It was a figure plucked out of the air but from an informed viewpoint. They meant, "it won't happen in our lifetimes".

But that thousand-fold increase has happened. Except that it's more than a million-fold if you include AI in the mix - and it would be negligent not to.

So, our handy instrument for detecting a Singularity is "surprised experts".

I was surprised by the quality of images from text-to-image programs like Stable Diffusion and by the AI's sheer virtuosity. But what surprised me even more - and I could have used the word "shocked" here - was that, quite spontaneously, friends of friends and colleagues started to use the AI images in places where they would previously have employed an artist or designer. Web pages, backdrops for virtual production, brochures, and probably loads more uses that I haven't heard about yet. It's happening. AI is taking our jobs. OMG!

But this isn't the end of it. Let's not go down the rabbit hole of arguments about sentient machines and AI "wanting" to take over the world. We're not quite there yet. But we have arrived at a critical point where we need to take a deep and measured look at how we manage AI in the creative sphere.

AI can automate tedious processes; it can speed up repetitive tasks. It can match colours in previously unmatched shots. It can up-res and down-res. It can create fantasy backgrounds and photorealistic foregrounds.

So we have to decide: what will our relationship with AI be like? And it won't be easy. With the AI landscape changing so quickly, there is no informed answer. So there isn't a definitive way forward.

But the future for artists and creators is different this year from how it looked last year. We can't ignore AI, or it might end up ignoring us. Or, more likely, our clients will use AI to bypass us.

But AI will never be us. The new techniques appear to be extraordinarily good at identifying the essence of styles and themes. But will they ever be creative? Or can they only be derivative?

There will be more questions than answers. Meanwhile, let's not be Luddites. If we can manage AI, it can do great work. It might become a new and expressive canvas that takes our imagination further than before. Without our input, AI might only ever be a soul-less facsimile of art: devoid of emotion and wonder.

We may not know how it will turn out, but one thing is certain: we can't ignore it.


New Bayonetta 3 Trailer Reveals An In-Universe Singularity, And Lots Of Witches – Gameranx

Neither angel nor demon, but a secret third thing.

PlatinumGames and Nintendo have shared a new trailer for Bayonetta 3, and it's quite a doozy.

While the sequence of events in the trailer is deliberately cropped and put together so that it doesn't make sense and viewers are left guessing, we can discuss some elements within it that PlatinumGames has dropped as small teasers of what we can expect.

For one, Cereza talks quite a bit about a singularity that she and the other Umbran Witches need to stop, or defeat. In scientific language, a gravitational singularity is a situation where gravity becomes so intense that it breaks down spacetime to a catastrophic level. Such a situation, which literally breaks spacetime, cannot be said to have a where or a when. When fiction we read, watch, or play brings up the idea of a singularity, it directly references or creates a variation of this scenario.

This matches what GameInformer has reportedly included in its Bayonetta 3 cover story. In that issue, the magazine confirms that this Bayonetta game takes place in a multiverse, something that was heavily hinted at in prior trailers as well.

The trailer also mentions an Arch-Eve falling. This is an entirely new character that hasn't been mentioned before, at least not by this name. Could this be another alter ego of antagonist Baldr? Notably, Baldr isn't seen or mentioned in this trailer either, but that doesn't mean he isn't in the game at all. Another character actually refers to Cereza as Arch-Eve Origin, which certainly deepens the mystery. Other things they name-drop without explaining include an Alphaverse, which is apparently where they can stop the singularity, and Chaos Gears, which are something you will need to collect in the game.

But now we should talk about the many unnamed characters appearing in this trailer. There's a spider-based Umbran Witch, who makes reference to having literal fish to fry. There's a black-skinned Witch, who seems to wear an Egyptian-inspired outfit. And there's a fun-looking masked Witch, who crosses her sword with Cereza. There are more familiar faces: a seemingly older Jeanne, who's dragging a mysterious doctor along with her, and Baal, the Empress of the Fathoms. This is the large toad demon that's been around since Bayonetta 2, and her fabulous self returns, seemingly to match up with the new Bayonetta, or joining her for the first time.

But most interesting is the prominence of the newest Witch in town, Viola. She apparently gets tasked with taking care of Luka for Cereza at some point, which also implies we get to play as her a lot somewhere in the game. Viola even sees Cereza die in battle against a mysterious new enemy. Neither an angel, nor a devil, but a secret third thing. Also not a human, so this character really is a genuine mystery.

All mysteries will definitely be revealed soon: Bayonetta 3 releases exclusively on the Nintendo Switch on October 28, 2022. You can watch the trailer and read more of our coverage of Bayonetta 3 below.

Bayonetta 3 Gets 7-Minute Gameplay Video Featuring Viola

Bayonetta 3 Gets New Story Details and Gameplay Trailer

Source: YouTube, Reddit


This Week’s Awesome Tech Stories From Around the Web (Through October 15) – Singularity Hub

9 Astonishing Ways That Living Standards Have Improved Around the World
Tony Morley | Big Think
"Over the last 200 years, the lives of average people in every country have been radically transformed and improved. In our modern day, we are living longer and are more prosperous than ever before, in both high-income and low-income countries. And while progress forward is by no means progress completed, nor a guarantee of progress to come, the remarkable improvements in global living standards serve not as a high-water mark or finish line, but rather as a source of inspiration and hope."

Human Brain Cells Transplanted Into Baby Rats' Brains Grow and Form Connections
Jessica Hamzelou | MIT Technology Review
"These animals could be used to learn more about human neuropsychiatric disorders, say the researchers behind the work. 'It's an important step forward in progress into [understanding and treating] brain diseases,' says Julian Savulescu, a bioethicist at the National University of Singapore, who was not involved in the study. But the development also raises ethical questions, he says, particularly surrounding what it means to humanize animals."

Fake Joe Rogan Interviews Fake Steve Jobs in an AI-powered Podcast
Benj Edwards | Ars Technica
"Whether it's legal to use Jobs' or Rogan's vocal likenesses in this manner, particularly to promote a commercial product, remains to be seen. And despite the PR-stunt nature of the podcast, the concept of entirely fictional celebrity podcasts got our attention. As voice synthesis becomes more widespread and potentially undetectable, we're looking at a future where media artifacts from any era will likely be completely fluid and malleable, shapable to fit any narrative."

Stoke Space Aims to Build Rapidly Reusable Rocket With a Completely Novel Design
Eric Berger | Ars Technica
"SpaceX had already shown the way on first-stage launch and recovery with the Falcon 9 and its vertical takeoff and landing, so Stoke started with the second stage. Last month, the company started to test-fire its upper-stage engines at a facility in Moses Lake, Washington. The images and video show an intriguing-looking ring with 15 discrete thrusters firing for several seconds. The circular structure is 13 feet in diameter, and this novel-looking design is Stoke's answer to one of the biggest challenges of getting a second stage back from orbit."

Microsoft Brings DALL-E 2 to the Masses With Designer and Image Creator
Kyle Wiggers | TechCrunch
"Seeking to bring OpenAI's tech to an even wider audience, Microsoft is launching Designer, a Canva-like web app that can generate designs for presentations, posters, digital postcards, invitations, graphics, and more to share on social media and other channels. Designer, whose announcement leaked repeatedly this spring and summer, leverages user-created content and DALL-E 2 to ideate designs, with drop-downs and text boxes for further customization and personalization."

Can Start-Ups Significantly Lower the Cost of Gene Sequencing?
Roy Furchgott | The New York Times
"'If someone drops the price of sequencing 10-fold, I can sequence 10 times as many people,' [Dr. Bruce D. Gelb] said. 'And you build up your statistical oomph to discover stuff.' The days of statistical oomph, meaning an explosion in the amount of data gleaned from lower-priced tests, appear imminent. Ultima Genomics, a biotech start-up, made news at the Advances in Genome Biology and Technology conference in June, unveiling a gene-sequencing machine that it claims can sequence a complete genome for $100."

Meta's VR Headset Harvests Personal Data Right Off Your Face
Khari Johnson | Wired
"Cameras inside the device that track eye and face movements can make an avatar's expressions more realistic, but they raise new privacy questions. Raw images and pictures used to power these features are stored on the headset, processed locally on the device, and deleted after processing, Meta says. Eye-tracking and facial-expression privacy notices the company published this week state that although raw images get deleted, insights gleaned from those images may be processed and stored on Meta servers."

The Case for and Against Cryptocurrency
Tyler Cowen | Big Think
"Cryptocurrency is truly a new idea, and it's rare for society to encounter fundamentally new ideas. Cryptocurrency is well positioned to serve a crucial financial and transactional role as a globalized internet grows to include more of our lives. Crypto enthusiasts espouse grand plans that do not sound realistic, while crypto skeptics fail to appreciate the revolutionary nature of the technology."

The Chinese Surveillance State Proves That the Idea of Privacy Is More Malleable Than You'd Expect
Zeyi Yang | MIT Technology Review
"How the world should respond to the rise of surveillance states might be one of the most important questions facing global politics at the moment, Chin says, because these technologies 'really do have the potential to completely alter the way governments interact with and control people.'"

Image Credit: Simone Hutsch / Unsplash


Absolutely Prefab-ulous: Why Luxury Buyers Are Moving Toward Modular – Barron’s

Set on a 7-acre vineyard in California's Napa Valley, a compound known as Yountvilla is a private second home designed for entertaining a large family.

In addition to the 14,000-square-foot main residence, built in what Oakland, California-based architect Toby Long calls "Napa-barn style," the estate includes a 2,000-square-foot pool house and a 2,000-square-foot party barn. The cinema, conservatory-style great room, swimming pool, hot tub, outdoor kitchen with two pizza ovens, large reflecting pool, six-car garage, tennis court, and two outdoor terraces bring the party home. But for all its singularity, the lavish estate is among a growing number of modern modular mansions springing up across the U.S. that feature prefab, factory-built components.


Ultra-high-net-worth individuals, some driven by the need to sequester safely during the pandemic era, have chosen to erect these houses, which can cost millions and even tens of millions of dollars, because they are more efficient to build, are of superior quality, and most significantly, they can be completed far more quickly than those built via traditional on-site construction methods.

Mr. Long, who has been building prefab houses for over two decades under the brand name Clever Homes, said that the genre is emerging from its slumber in the U.S. "When you mention prefab or modular, people think of high volume, low quality. But it's overcoming its legacy of cheapness; it's a sophisticated process."

Steve Glenn, CEO and founder of Plant Prefab, which is based in Rialto, California, has completed about 150 units, including 36 at the Lake Tahoe-area ski resort development the Palisades at Olympic Valley, where residences sell for $1.8 million to $5.2 million.

"Prefab is popular in Scandinavia, Japan, and parts of Europe but not in the U.S.," Mr. Glenn said. "We have had significant growth in orders over the last couple of years; some is Covid-related, because people have the flexibility to choose where they want to work and live."

Plant Prefab's building system "provided an efficient and predictable way to build high-quality homes in Lake Tahoe's short building season at a time when U.S. shortages of skilled labor are particularly acute on the West Coast," said Lindsay Brown, principal and owner of the Brown Studio, the Encinitas, California-based firm that designed the Palisades development. "Prefab mitigated the need for us to compromise on our designs," he added.

Although the first documented prefab house was recorded in 1624 (it was made of wood and shipped to Massachusetts from England), the concept wasn't employed on a mass scale until World War II, when there was a great need for cheap housing that could be built quickly, and it's only in the last decade or two that custom home builders have embraced it for high-end private estates and luxury residential developments.

It's not an inexpensive option. Prices for custom prefabricated houses average $500 to $600 per square foot, but often are much higher. When site planning, transportation, finishing, and landscaping are added in, the total finished cost can double or even triple.

On Mr. Long's Napa Valley project, for instance, the prefab budget alone was $1,000 per square foot.

"These modern modular mansions are unique," he said. "There are not a lot of people doing them. I build 40 to 50 prefab houses a year, and only two or three of them are mansions."

Prefab, he added, can be a practical option in luxury-resort areas such as the Colorado ski-and-golf resort Telluride, where the snowy Rocky Mountain winters can throw a monkey wrench into construction schedules.

"It's hard to build there," Mr. Long said. "It could take two to three years to get on a builder's schedule and two to three years to build the house, and there's a short build season because of the weather. All these factors spur people to explore other methods of building. You can shortcut and simplify the schedule by working with a factory partner."

Modular mansions, he added, can be completed in one-third to one-half the time of those built with traditional construction methods. "We can do a project in under a year, not the two to three years it takes in most towns," he said.

There are two main types of conventional prefab options on the market that builders of high-end houses employ: modular and panelized.

In the modular system, building-block-like units are constructed in a factory, shipped to the site, placed in position with a crane and finished by a general contractor and a construction crew.

In the conventional structural insulated panelized system, panels that sandwich an insulating foam core are manufactured in a factory, packed flat, shipped to the site, and assembled.

Most of Mr. Long's architectural designs are what he calls hybrids: they meld modular and panelized elements with traditional on-site construction and, depending on the prefab manufacturer, a proprietary brand-named system that incorporates various features of both.


In the case of the Napa Valley estate, for instance, the timber systems of the structures were prefabricated. The project has 20 modules: 16 for the main house and four for the pool house. The party barn, which is framed by prefab timbers, is being built from a repurposed barn that was dismantled and shipped to the site. The main living area of the residence, including the great glassed-in room, was the only portion of the project built on site.

"In projects with high-dollar investments and complex architecture and finishes, there are always elements that are built on site," Mr. Long said, adding that the amenities and special features of custom residences are what drive the cost up.

Architect Joseph Tanney, a partner in the New York-based firm RESOLUTION: 4 ARCHITECTURE, typically works on 10 to 20 luxury hybrid prefab projects a year, most of them in New York's Hamptons, Hudson Valley, and Catskills, and all of them are designed to meet LEED standards.

"We've found that the modular methodology provides the highest value proposition in terms of time and money relative to the overall quality of the entire project," said Mr. Tanney, co-author of "Modern Modular: The Prefab Houses by Resolution: 4 Architecture." "By leveraging the efficiency of conventional wood-framed modules, we're capable of building about 80% of the house in the factory. The more we can build in the factory, the higher the value proposition."

Since April 2020, a month into the pandemic, he said that inquiries for higher-end modern homes have spiked.

Brian Abramson, CEO and founder of Method Homes, a prefab builder based in the Seattle area that constructs houses whose finished prices range from $1.5 million to over $10 million, said that "we have seen a large increase in demand for our homes since the pandemic, with all the people moving and wanting to change their living situation with remote work."

He noted that the streamlined, predictable approach of prefab appeals to a lot of new clients who have built homes in the conventional way. "Additionally, labor is very limited in a lot of the markets we work in, and local contractors have multiple-year backlogs, so we provide a faster option," he said.

"Method Homes are finished in the factory in 16 to 22 weeks and are assembled on site in one to two days. Then they take four months to over a year to finish, depending on the scale and scope of the project and local labor availability," Mr. Abramson said.

At Plant Prefab, which uses its own proprietary Plant Building System composed of specialized panels and modules, business is so brisk that the company is building a third factory, this one fully automated, that will be capable of producing up to 800 units a year.

"Our system offers the design flexibility and portability of panels with the time and cost advantages of modular," Mr. Glenn says, adding that it's optimized for custom architectural homebuilding.

The company, which was founded in 2016 to focus on custom homes designed by its in-house studio and third-party architects, is "on a mission to make great, sustainable architecture more accessible," Mr. Glenn said. "To do that, we needed a building solution designed for custom, high-quality, sustainable home construction: a factory with the technology and systems to make the process faster, more reliable, more efficient, and less wasteful."

Prefab builder Dvele, which is based in the San Diego area, is experiencing similar growth. Founded five years ago, it ships to 49 states and has plans to expand to Canada and Mexico and ultimately roll out internationally.

"We make 200 modules a year, and by 2024, when we open a second factory, we will be able to do 2,000 a year," said Kellan Hannah, the company's director of growth. "The people who buy our homes have dual incomes and higher incomes, but we are moving away from customization."

Prefab isn't the only unconventional option that custom builders and their clients are embracing. Custom post-and-beam kits, such as those made by Seattle-based Lindal Cedar Homes, are being used to build turnkey residences that sell for $2 million to $3 million.

"There are no architectural compromises in our system," said operations manager Bret Knutson, adding that interest has increased 40% to 50% since the pandemic. "Clients have a very open-ended palette to choose from. They can design whatever size and style of home they want as long as they stay within the system."

He noted that clients like the variety of modern and classic home styles available and enjoy the custom design process and the flexibility of the system.

The kit doesn't include interior finishes, which he said can double or triple the total cost.


Lindal, the largest manufacturer of post-and-beam kit homes in North America, works mainly with clients in the United States, Canada, and Japan. It delivers the house kits, which take 12 to 18 months to complete and about as long to construct on site as conventional builds, by shipping container, a plus for secluded vacation spots or resort islands that cannot be accessed by car.

Lindal, which has an international network of dealers, recently collaborated with the Los Angeles-based architectural firm Marmol Radziner on a 3,500-square-foot residence and matching guest house in Hawaii.

"The quality of materials was absolutely premium," Mr. Knutson said. "There were completely clear fir beams and clear cedar siding throughout. Even the plywood was custom clear cedar that cost around $1,000 a sheet."

This article originally appeared on Mansion Global.

Visit link:

Absolutely Prefab-ulous: Why Luxury Buyers Are Moving Toward Modular - Barron's

800,000 Neurons in a Dish Learned to Play Pong in Just Five Minutes – Singularity Hub

Scientists just taught hundreds of thousands of neurons in a dish to play Pong. Using a series of strategically timed and placed electrical zaps, the neurons not only learned the game in a virtual environment, but played better over timewith longer rallies and fewer missesshowing a level of adaptation previously thought impossible.

Why? Picture literally taking a chunk of brain tissue, digesting it down to individual neurons and other brain cells, dumping them (gently) onto a plate, and now being able to teach them, outside a living host, to respond and adapt to a new task using electrical zaps alone.

It's not just fun and games. The biological neural network joins its artificial cousin, DeepMind's deep learning algorithms, in a growing pantheon of attempts at deconstructing, reconstructing, and one day mastering a sort of general intelligence based on the human brain.

The brainchild of Australian company Cortical Labs, the entire setup, dubbed DishBrain, is "the first real-time synthetic biological intelligence platform," according to the authors of a paper published this month in Neuron. The setup, smaller than a dessert plate, is extremely sleek. It hooks up isolated neurons with chips that can both record the cells' electrical activity and trigger precise zaps to alter those activities. Similar to brain-machine interfaces, the chips are controlled with sophisticated computer programs, without any human input.

The chips act as a bridge for neurons to link to a virtual world. As a translator for neural activity, they can unite biological electrical data with silicon bits, allowing neurons to respond to a digital game world.

DishBrain is set up to expand to further games and tests. Because the neurons can sense and adapt to the environment and output their results to a computer, they could be used as part of drug screening tests. They could also help neuroscientists better decipher how the brain organizes its activity and learns, and inspire new machine learning methods.

But the ultimate goal, explained Dr. Brett Kagan, chief scientific officer at Cortical Labs, is to help harness the inherent intelligence of living neurons for their superior computing power and low energy consumption. In other words, compared to neuromorphic hardware that mimics neural computation, why not just use the real thing?

"Theoretically, generalized SBI [synthetic biological intelligence] may arrive before artificial general intelligence (AGI) due to the inherent efficiency and evolutionary advantage of biological systems," the authors wrote in their paper.

The DishBrain project started with a simple idea: neurons are incredibly intelligent and adaptable computing machines. Recent studies suggest that each neuron is a supercomputer in itself, with branches once thought passive acting as independent mini-computers. Like people within a community, neurons also have an inherent ability to hook up to diverse neural networks, which dynamically shifts with their environment.

This level of parallel, low-energy computation has long been the inspiration for neuromorphic chips and machine learning algorithms that mimic the natural abilities of the brain. While both have made strides, neither has been able to recreate the complexity of a biological neural network.

"From worms to flies to humans, neurons are the starting block for generalized intelligence. So the question was, can we interact with neurons in a way to harness that inherent intelligence?" said Kagan.

Enter DishBrain. Despite its name, the plated neurons and other brain cells don't constitute an actual brain with consciousness. As for intelligence, the authors define it as the ability to gather information, collate the data, and adjust firing activity (that is, how neurons process the data) in a way that helps adapt towards a goal; for example, rapidly learning to place your hand on the handle of a piping hot pan without searing it on the rim.

The setup starts, true to its name, with a dish. The bottom of each one is covered with a computer chip (a high-density multielectrode array, or HD-MEA) that can both record the cells' electrical signals and stimulate them. Cells, either isolated from the cortex of mouse embryos or derived from human cells, are then laid on top. The dish is bathed in a nutritious fluid for the neurons to grow and thrive. As they mature, they grow from jiggly blobs into spindly shapes with vast networks of sinuous, interweaving branches.

Within two weeks, the neurons from mice self-organized into networks inside their tiny homes, bursting with spontaneous activity. Neurons from human originsskin cells or other brain cellstook a bit longer, establishing networks in roughly a month or two.

Then came the training. Each chip was controlled by commercially available software, linking it to a computer interface. Using the system to stimulate neurons is similar to providing sensory data, like those coming from your eyes as you focus on a moving ball. Recording the neurons' activity is the outcome, that is, how they would react (if inside a body) as you move your hand to hit the ball. DishBrain was designed so that the two parts integrated in real time: similar to humans playing Pong, the neurons could in theory learn from past misses and adapt their behavior to hit the virtual ball.

Here's how Pong goes. A ball bounces rapidly across the screen, and the player can slide a tiny vertical paddle (which looks like a bold line) up and down. Here, the ball is represented by electrical zaps based on its location on the screen. This essentially translates visual information into electrical data for the biological neural network to process.

The authors then defined distinct regions of the chip for sensation and movements. One region, for example, captures incoming data from the virtual ball movement. A part of the motor region then controls the virtual paddle to move up, whereas another causes it to move down. These assignments were arbitrary, the authors explained, meaning that the neurons within needed to adjust their firings to excel at a match.
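As a toy illustration of this sensory/motor split, the encoding and decoding could be sketched as follows. The electrode count, region layout, and spike-comparison rule here are all invented for illustration; they are not the paper's actual configuration.

```python
# Hypothetical sketch of a DishBrain-style sensory/motor mapping.
# Sensory side: the ball's position is encoded as stimulation on one
# of several sensory electrodes. Motor side: firing in two designated
# regions is compared to decide how the virtual paddle moves.

def encode_ball_position(ball_y, screen_height=1.0, n_sensory=8):
    """Map the ball's vertical position to a sensory electrode index."""
    idx = int(ball_y / screen_height * n_sensory)
    return min(idx, n_sensory - 1)

def decode_paddle_move(up_spikes, down_spikes):
    """Compare firing in the 'up' and 'down' motor regions."""
    if up_spikes > down_spikes:
        return 1    # move paddle up
    if down_spikes > up_spikes:
        return -1   # move paddle down
    return 0        # hold still

# A ball near the top of the screen maps to a high-index electrode,
# and stronger firing in the 'up' region moves the paddle up.
assert encode_ball_position(0.9) == 7
assert decode_paddle_move(12, 5) == 1
```

In the real system the "spike counts" come from live recordings and the stimulation is delivered as voltage pulses; the point of the sketch is only the arbitrary but fixed mapping between game state and electrode regions.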

So how do they learn? If the neurons hit the ball (that is, showed the corresponding type of electrical activity), the team zapped them at that location with the same frequency each time. It's a bit like establishing a habit for the neurons. If they missed the ball, they were zapped with electrical noise that disrupted the neural network.

The strategy is based on a learning theory called the free energy principle, explained Kagan. Basically, it supposes that neurons hold beliefs about their surroundings, and adjust and repeat their electrical activity so they can better predict the environment, either changing their beliefs or their behavior.
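A cartoon of this predictable-reward / noisy-punishment loop can be sketched in a few lines. This is a loose caricature of the free energy idea with made-up parameters, not the study's actual protocol:

```python
# Toy caricature of the zap-feedback rule, NOT the paper's model:
# a single parameter stands in for the network's "behavior." A hit
# brings predictable feedback (leave it alone); a miss brings noise
# (kick it randomly), so the system random-walks until its behavior
# lands in the rewarded band, where it then stays.
import random

def train(target=0.7, tolerance=0.05, steps=5000, seed=0):
    rng = random.Random(seed)
    param = rng.random()                   # initial "behavior"
    for _ in range(steps):
        hit = abs(param - target) < tolerance
        if not hit:                        # miss: disruptive noise
            param += rng.uniform(-0.1, 0.1)
            param = min(max(param, 0.0), 1.0)
        # hit: predictable stimulus, behavior left unchanged
    return param

final = train()
assert 0.0 <= final <= 1.0                 # behavior stays in bounds
```

In this toy version, noise keeps perturbing the behavior until it happens to fall in the rewarded band, after which predictable feedback leaves it alone, which is the gist of how unpredictable input can drive a system toward states that minimize surprise.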

The theory panned out. In just five minutes, both human and mouse neurons rapidly improved their gameplay, including better rallies, fewer aces (where the ball got past the paddle without a single hit), and longer gameplays with more than three consecutive hits. Surprisingly, mouse neurons learned faster, though eventually they were outperformed by human ones.

The stimulations were critical for their learning. Separate experiments with DishBrain without any electrical feedback performed far worse.

The study is a proof of concept that neurons in a dish can be a sophisticated learning machine, and even exhibit signs of sentience and intelligence, said Kagan. That's not to say they're conscious; rather, they have the ability to adapt to a goal when embodied in a virtual environment.

Cortical Labs isn't the first to test the boundaries of the data processing power of isolated neurons. Back in 2008, Dr. Steve Potter at the Georgia Institute of Technology and team found that with even just a few dozen electrodes, they could stimulate rat neurons to exhibit signs of learning in a dish.

DishBrain has a leg up with thousands of electrodes compacted in each setup, and the company hopes to tap into its biological power to aid drug development. The system, or its future derivations, could potentially act as a micro-brain surrogate for testing neurological drugs, or gaining insights into the neurocomputation powers of different species or brain regions.

But the long-term vision is a living bio-silicon computer hybrid. "Integrating neurons into digital systems may enable performance infeasible with silicon alone," the authors wrote. Kagan imagines developing biological processing units that weave together the best of both worlds for more efficient computation, and in the process, shed light on the inner workings of our own minds.

"This is the start of a new frontier in understanding intelligence," said Kagan. "It touches on the fundamental aspects of not only what it means to be human, but what it means to be alive and intelligent at all, to process information and be sentient in an ever-changing, dynamic world."

Image Credit: Cortical Labs

See the original post here:

800,000 Neurons in a Dish Learned to Play Pong in Just Five Minutes - Singularity Hub

The weird plan to hide a "backup copy" of life in lava tubes on the Moon – Big Think

Well-meaning futurists often take a simple technological idea, apply it to human life, and then stretch the conclusions to absurd lengths. You've probably heard a few of these pie-in-the-sky extrapolations. For several decades, computer chips evolved very rapidly: a technological singularity will end life as we know it in a few years! We've managed to synthesize some useful nano-sized things: nanobots will soon rebuild our entire bodies and eliminate illness! Simple tissues can sometimes be frozen and thawed again: cryonics will make death obsolete!

Such is the case with the Lunar Ark, a lunatic idea more at home in science fiction than science fact.

The Lunar Ark project has a simple purpose at least in the minds of its creators. Humans do bad and scary things. They have wars and build bombs and change the climate. Earth is a delicate place, one inevitable error in human judgment away from being destroyed forever.

Given our precarious situation, we need a backup copy of life so that we can start over again when everything goes south. It's a dark and strange way of looking at life, which has somehow flourished for billions of years, new species taking over from old, rolling merrily on and on through multiple mass extinctions and regenerations.

The Lunar Ark team's methods of achieving this goal are similarly strange. They want to extract the DNA, seeds, eggs, spores, and sperm from 6.7 million living species and cryogenically freeze all of it. Maybe they'll start with only the endangered species. The samples would be stored in floating, rotating cylindrical banks where magnetically levitated robots will place and retrieve them. This frozen life bank will sit, tended by the robots, ready to recolonize Earth with a life backup, or be carried along on some future space colonization mission. Where do you store something so that it could plausibly be safe from human and natural forces for centuries or more? This is where the Moon comes in.

Lunar lava tubes are underground cavities formed by volcanic processes in our satellite. A similar phenomenon is found beneath the surface of the Hawaiian Islands, which can help us understand the geological processes at work. The surface layer of a lava flow cools and eventually hardens. Underneath, molten magma may continue to flow; if the flow is on a mild incline, the magma can drain out partially or entirely, leaving an empty cavern under a thick, arched rock roof. The cavern may be very stable if it forms with the proper geometry.

Thurston Lava Tube at Hawaii Volcanoes National Park, Big Island, Hawaii. Frank Schulenburg, CC BY-SA 3.0, via Wikimedia Commons

On the Moon, a physically stable lava tube could also provide a protective cocoon for its contents. The mass of rock should shield the cavity beneath from cosmic radiation, as well as micrometeoroid strikes. It would moderate the temperature swings of about 300 K (540 degrees Fahrenheit!) between lunar day and night on the surface. These properties have made lunar lava tubes a perennial favorite location for a human lunar base.

They attracted attention from the Lunar Ark project as well. By comparison, any location near the surface of the Earth, with its thick atmosphere, tumultuous weather, constant erosion, active volcanism, and multitude of life forms, is extremely unstable. A lunar tube repository would still be vulnerable to an unlucky meteorite strike, or some future reawakening of lunar volcanism. Perhaps the greatest risk is that all the ills of the human world that the Ark project hopes to escape could also colonize the Moon in coming centuries.

This project exemplifies an ongoing problem with futurism, and technology in general: addressing human affairs and the complex trajectories of natural life as if they were software engineering problems. Partition the problem into logical subunits, address each logical block, and solve it systematically with computer concepts. Any problem that can't be solved today will be magically overcome tomorrow by explosive growth in technology and become cheap in a decade.

This is just what the Lunar Ark is doing. Worried about corrupting the program of life on Earth and losing lots of biodata? Just back it up to a hard disk so that we can install a fresh copy. What if the backup disk gets corrupted? We can store it in a limited-access, climate-controlled, safe location. How do we restore life from the backup? Just unfreeze it, assume that will work sometimes, and plan for future technology to come along and fix the rest of it. We'll figure it out later. There's always a kludge.


Perhaps we use futurism as something of a comfort blanket. Human endeavor, and indeed life itself, are strange and unknowable things. We can't even slightly predict the future of most complex natural systems. That's terrifying. Computer and software technologies are logical creatures, understood and bound by absolute and predictable systems of rules. They can give us the comfort of knowable logical certainty that the Universe can't otherwise provide. Rather than let the chaotic course of life wind its way forward, we hope to trump difficulty through technology, to outsmart chance by logical calculation, and to defeat death with hardware engineering.

It's a valiant effort, but in vain. The Lunar Ark could no more ensure biodiversity on Earth than an artificial intelligence could end the need for toil, or a nanobot swarm could eliminate physical suffering. Many problems can be overcome by computer technology paradigms, but the chaotic nature of life is not one of them.

See more here:

The weird plan to hide a "backup copy" of life in lava tubes on the Moon - Big Think

The Moon May Have Formed Just Hours After Earth Collided With a Protoplanet – Singularity Hub

Cast your mind back to when Earth was a baby. The solar system was a brutal nursery. Giant fragments of rock whirled chaotically around a fiery young sun, regularly bombarding infant planets. Earth formed during this period, aptly called the Hadean, and without this steady rain of fire building up the bones of our planet, we wouldn't be here at all.

And neither would the moon.

Towards the end of this period, about 4.5 billion years ago, a Mars-sized protoplanet called Theia smacked into Earth in a collision thought to have released 100 million times more energy than the asteroid that ended the dinosaurs. The impact destroyed Theia, threw a titanic plume of material into orbitand gave birth to our moon.

This giant impact scenario is the leading theory for how the moon formed because it fits much of what we observe about the Earth and moon today. But scientists are still debating the details. Early simulations of the impact, for example, suggested the moon would be mostly made of material from Theia, but analysis of lunar rocks shows the geochemical composition of the Earth and moon is nearly identical.

Now, however, a new high-resolution simulation, described in a recent paper by NASA Ames scientists and researchers at Durham University, may help resolve the discrepancy.

According to the paper, the outcomes across a number of possible impact scenarios more closely match observations, including the moon's orbit and composition. But perhaps most surprisingly, where prior work suggested the moon's formation would have taken months or years, the new simulation suggests our satellite formed and was slingshotted into orbit in mere hours.

In the simulation, shown in the video below, Theia strikes Earth with a glancing blow. An arc of material, originating from both Theia and Earth, whips into orbit and forms two bodies. The larger of these, doomed to fall back to Earth, launches the smaller one, the moon, into a stable orbit. If the initial collision took place at midnight, the moon would have formed by breakfast.

This isn't the first attempt at better fitting our observations to the moon's giant impact origin story.

Scientists have proposed and simulated a number of theories to explain the moon's geochemical composition. These include higher energy or multiple impacts, a hit-and-run, or the possibility of an earlier impact, when Earth was still covered by an ocean of magma. These are still possible, though each comes with its own set of challenges too.

Here, the team took a different approach, suggesting that perhaps the problem isn't the theory but our simulation of it. Older simulations used hundreds of thousands or millions of particles (you can think of these as idealized digital stand-ins for chunks of Earth and Theia, each following the laws of physics in the collision). The latest simulation, on the other hand, uses hundreds of millions of particles, each about 8.5 miles (14 kilometers) across.

It's the highest-resolution digital recreation of the moon's formation yet.
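To get a feel for what that jump in particle count buys, note that for a fixed simulated mass and volume, each particle's linear size shrinks with the cube root of the particle count. This is a back-of-the-envelope estimate with illustrative figures, not numbers from the paper:

```python
# Back-of-the-envelope resolution scaling: for a fixed simulated mass
# and volume, each particle's linear size shrinks as the cube root of
# the particle count. Figures are illustrative, not from the paper.

def relative_particle_size(n_old, n_new):
    """Ratio of new particle size to old when the count goes n_old -> n_new."""
    return (n_new / n_old) ** (-1.0 / 3.0)

# Going from ~1 million to ~100 million particles shrinks each
# particle's linear size by a factor of 100**(1/3), roughly 4.6.
shrink = 1.0 / relative_particle_size(10**6, 10**8)
assert 4.6 < shrink < 4.7
```

Resolving structure a few times finer in every dimension is what lets short-lived features, like the two transient bodies the simulation produced, show up at all.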

The resolution brought the mechanics of large impacts into focus in a way prior, less detailed simulations simply couldn't. And in the process, the work threw a new, potentially simpler theory into the hat: that the moon formed rapidly, in just one step. The team found this scenario could produce a moon much like ours, from orbit to composition.

However, while the new work is enticing, further reinforcing it will require more high-resolution simulations and, crucially, future missions collecting more samples from the moon itself.

Whatever scientists find, the story of the moon's formation has far-reaching implications. Its fate is tied closely to Earth's, from tides to plate tectonics and the rise and evolution of life itself. If we find our moon is an outlier (as it seems to be in our solar system, at least), perhaps the chances that life arises and survives the long haul elsewhere are lower. We just don't know yet.

That's why it's important to build and study simulations like this one.

"The more we learn about how the moon came to be, the more we discover about the evolution of our own Earth," said Vincent Eke, a researcher at Durham University and a co-author on the paper, in a statement. "Their histories are intertwined, and could be echoed in the stories of other planets changed by similar or very different collisions."

Image Credit: NASA Ames Research Center

View original post here:

The Moon May Have Formed Just Hours After Earth Collided With a Protoplanet - Singularity Hub

Alien Megastructures? Cosmic Thumbprint? Here’s What’s Behind This Spectacular James Webb Image – Singularity Hub

In July, a puzzling new image of a distant extreme star system surrounded by surreal concentric geometric rings had even astronomers scratching their heads. The picture, which looks like a kind of cosmic thumbprint, came from the James Webb Space Telescope, NASA's newest flagship observatory.

The internet immediately lit up with theories and speculation. Some on the wild fringe even claimed it as evidence for alien megastructures of unknown origin.

Luckily, our team at the University of Sydney had already been studying this very star, known as WR140, for more than 20 years, so we were in prime position to use physics to interpret what we were seeing.

Our model, published in Nature, explains the strange process by which the star produces the dazzling pattern of rings seen in the Webb image (itself now published in Nature Astronomy).

WR140 is what's called a Wolf-Rayet star. These are among the most extreme stars known. In a rare but beautiful display, they can sometimes emit a plume of dust into space stretching hundreds of times the size of our entire Solar System.

The radiation field around Wolf-Rayets is so intense, dust and wind are swept outwards at thousands of kilometers per second, or about 1 percent the speed of light. While all stars have stellar winds, these overachievers drive something more like a stellar hurricane.

Critically, this wind contains elements such as carbon that stream out to form dust.

WR140 is one of a few dusty Wolf-Rayet stars found in a binary system. It is in orbit with another star, which is itself a massive blue supergiant with a ferocious wind of its own.

The binary stars of the WR140 system. Image Credit: Amanda Smith / IoA / University of Cambridge/ author provided

Only a handful of systems like WR140 are known in our whole galaxy, yet these select few deliver the most unexpected and beautiful gift to astronomers. Dust doesn't simply stream out from the star to form a hazy ball as might be expected; instead it forms only in a cone-shaped area where the winds from the two stars collide.

Because the binary star is in constant orbital motion, this shock front must also rotate. The sooty plume then naturally gets wrapped into a spiral, in the same way as the jet from a rotating garden sprinkler.

WR140, however, has a few more tricks up its sleeve, layering more rich complexity into its showy display. The two stars are not on circular but elliptical orbits, and furthermore, dust production turns on and off episodically as the binary nears and departs the point of closest approach.

By modeling all these effects into the three-dimensional geometry of the dust plume, our team tracked the location of dust features in three-dimensional space.
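The sprinkler analogy can be sketched numerically: dust parcels leave the rotating wind-collision front radially and then coast at constant speed, so parcels launched earlier sit farther out, tracing a spiral. This minimal 2D sketch ignores the elliptical orbit and episodic dust production, and all numbers are illustrative rather than fitted to WR140:

```python
# Minimal 2D "garden sprinkler" model of a colliding-wind dust spiral:
# a parcel launched at time t_launch leaves the rotating shock front
# radially, then coasts at constant speed, so older parcels sit farther
# out and the plume traces a spiral. All numbers are illustrative.
import math

def plume_points(t_now, orbital_period=8.0, wind_speed=1.0, n=200):
    """Positions of parcels launched at evenly spaced times up to t_now."""
    points = []
    for i in range(n):
        t_launch = t_now * i / n
        theta = 2.0 * math.pi * t_launch / orbital_period  # launch direction
        r = wind_speed * (t_now - t_launch)                # coasted distance
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = plume_points(t_now=8.0)
# The oldest parcel (launched first) has coasted the farthest out.
assert math.hypot(*pts[0]) == max(math.hypot(*p) for p in pts)
```

Turning the launch direction with the orbit while each parcel moves in a straight line is all it takes to produce the spiral; the real model adds the third dimension, the elliptical orbit, and the on/off dust production.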

By carefully tagging images of the expanding flow taken at the Keck Observatory in Hawaii, one of the world's largest optical telescopes, we found our model of the expanding flow fit the data almost perfectly.

Except for one niggle. Close in, right near the star, the dust was not where it was supposed to be. Chasing that minor misfit led us to a phenomenon never before caught on camera.

We know that light carries momentum, which means it can exert a push on matter, known as radiation pressure. The outcome of this phenomenon, in the form of matter coasting at high speed around the cosmos, is evident everywhere.

But it has been a remarkably difficult process to catch in the act. The force fades quickly with distance, so to see material being accelerated you need to very accurately track the movement of matter in a strong radiation field.

This acceleration turned out to be the one missing element in the models for WR140. Our data did not fit because the expansion speed wasn't constant: the dust was getting a boost from radiation pressure.
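The shape of that misfit is easy to see in a toy model: treating the radiation-pressure boost as a small constant acceleration (a crude simplification, since the real force fades with distance), the gap between the accelerated and constant-speed positions grows quadratically with time. All numbers here are invented:

```python
# Toy comparison of uniform expansion vs. a radiation-pressure boost,
# modeled here as a small constant acceleration (a crude simplification;
# the real force fades with distance). All numbers are invented.

def r_constant(v0, t):
    """Distance traveled under uniform expansion."""
    return v0 * t

def r_accelerated(v0, a, t):
    """Distance traveled with an added constant acceleration a."""
    return v0 * t + 0.5 * a * t * t

v0, a, t = 1.0, 0.05, 8.0
gap = r_accelerated(v0, a, t) - r_constant(v0, t)
# The misfit between the two models grows quadratically with time.
assert abs(gap - 0.5 * a * t * t) < 1e-9
```

Schematically, fitting that extra half-a-t-squared term is what turned "the dust is not where it should be" into a measured acceleration.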

Catching that for the first time on camera was something new. In each orbit, it is as if the star unfurls a giant sail made of dust. When it catches the intense radiation streaming from the star, like a yacht catching a gust, the dusty sail makes a sudden leap forward.

The final outcome of all this physics is arrestingly beautiful. Like a clockwork toy, WR140 puffs out precisely sculpted smoke rings with every eight-year orbit.

Each ring is engraved with all this wonderful physics written in the detail of its form. All we have to do is wait, and the expanding wind inflates the dust shell like a balloon until it is big enough for our telescopes to image.

In each eight-year orbit, a new ring of dust forms around WR140. Image Credit: Yinuo Han / University of Cambridge / author provided

Then, eight years later, the binary returns in its orbit and another shell appears identical to the one before, growing inside the bubble of its predecessor. Shells keep accumulating like a ghostly set of giant nesting dolls.

However, the true extent to which we had hit on the right geometry to explain this intriguing star system was not brought home to us until the new Webb image arrived in June.

The image from the James Webb Space Telescope (left) confirmed in detail the predictions of the model (right). Image Credit: Yinuo Han / Peter Tuthill / Ryan Lau / author provided

Here were not one or two, but more than 17 exquisitely sculpted shells, each one a nearly exact replica nested within the one preceding it. That means the oldest, outermost shell visible in the Webb image must have been launched about 150 years before the newest shell, which is still in its infancy and accelerating away from the luminous pair of stars driving the physics at the heart of the system.

With their spectacular plumes and wild fireworks, the Wolf-Rayets have delivered one of the most intriguing and intricately patterned images to have been released by the new Webb telescope.

This was one of the first images taken by Webb. Astronomers are all on the edge of our seats, waiting for what new wonders this observatory will beam down to us.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA, ESA, CSA, STScI, NASA-JPL, Caltech

Excerpt from:

Alien Megastructures? Cosmic Thumbprint? Here's What's Behind This Spectacular James Webb Image - Singularity Hub

RisingWave Emerges to Tackle Tsunami of Real-Time Data – Datanami

(wanpatsorn/Shutterstock)

Only the most advanced companies have overcome the technical complexity involved with processing streaming data in real time. One of the vendors aiming to reduce this complexity and make stream data processing available to the masses is RisingWave Labs, which today announced $36 million in financing.

The early days of stream data processing brought us stand-alone systems that were capable of acting upon vast streams of data, and doing so with low latency and reliability. Stream processing frameworks like Apache Storm made headway in addressing these challenges and led the way to more sophisticated frameworks like Apache Flink and others.

Things got significantly more complex when companies realized they needed to know something about the past to take the best action on the newest data, which necessitated the integration of stream processing frameworks with databases or data lakes, where the historical record lived as persisted data. Architectural blueprints, such as the Lambda and Kappa architectures, were proposed to address this unique challenge, but the technical complexity of keeping these dual-path systems running is immense.

Today we're seeing the emergence of a new category of product, the streaming database, aimed at solving this problem. Instead of running data through a dedicated stream processing framework like Storm or Flink, the backers of streaming databases think that all the data processing, including the business end of a streaming big data pipeline like Kafka, Kinesis, or Pulsar, can be handled by the SQL query engine contained in a relational database.

RisingWave is a Postgres-compatible database developed to process data streams in the cloud (Image courtesy RisingWave Labs)

That's the approach taken with RisingWave, a new open source streaming database that emerged just over a year ago. Yingjun Wu, a former AWS and IBM engineer, created RisingWave as a cloud-native database with the goal of providing the benefits of stream processing without the technical complexity inherent in stream processing frameworks.

"Existing open-source systems are very costly to deploy, maintain, and use in the modern cloud environment," Wu, who is the CEO of RisingWave Labs, says in a press release today. "Our goal is not to build yet another streaming system that is 10X faster than existing systems, but to deliver a simple and cost-effective system that allows everyone to benefit from stream processing."

Developed in Rust, RisingWave is a Postgres-compatible database that can do many of the things stream processing frameworks do, but within the context and control of a familiar relational database running in the cloud and queried in SQL, according to Wu, who has a PhD from the National University of Singapore and was also a visiting PhD student at Carnegie Mellon University.

"[RisingWave] consumes streaming data, performs continuous queries, and maintains results dynamically in the form of a materialized view," Wu says in a blog post earlier this year. "Processing data streams inside a database is quite different from that inside a stream computation engine: streaming data are instantly ingested into data tables; queries over streaming and historical data are simply modeled as table joins; query results are directly maintained and updated inside the database, without pushing into a downstream system."
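The core trick Wu describes, keeping a query result continuously up to date as rows arrive instead of recomputing it on demand, can be caricatured in a few lines of Python. This is a conceptual sketch of incremental materialized-view maintenance only, not RisingWave's actual API:

```python
# Conceptual sketch of an incrementally maintained materialized view:
# a running count and sum per key are updated as each event arrives,
# so reading the view is a lookup rather than a rescan of history.
# This illustrates the idea only; it is not RisingWave's actual API.
from collections import defaultdict

class MaterializedAvgView:
    """Maintains the equivalent of AVG(value) GROUP BY key over a stream."""
    def __init__(self):
        self.count = defaultdict(int)
        self.total = defaultdict(float)

    def ingest(self, key, value):
        # Called once per incoming event; O(1) work per row.
        self.count[key] += 1
        self.total[key] += value

    def query(self, key):
        # Reading the view never recomputes over past events.
        return self.total[key] / self.count[key]

view = MaterializedAvgView()
for key, value in [("sensor-a", 10.0), ("sensor-a", 20.0), ("sensor-b", 5.0)]:
    view.ingest(key, value)
assert view.query("sensor-a") == 15.0
```

A real streaming database generalizes this bookkeeping to joins and arbitrary SQL over fault-tolerant distributed state, which is exactly the complexity it hides from the user.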

The open source project, which is available on GitHub under an Apache 2.0 license, is being adopted by organizations for a range of uses, including real-time analytics and alerting; IoT device tracking; monitoring user activity; and online application data serving. The company, which changed its name from Singularity Data three weeks ago, recently unveiled the beta of a hosted commercial version of RisingWave; it's slated to become generally available next year.

The $36 million in Series A funding announced today brings the San Francisco company's total funding to $40 million. That funding will help RisingWave tackle the real-time processing opportunities available in both legacy and green-field applications, says Yu Chen, a partner with Yunqi Partners, which was one of the venture firms that led the Series A.

"There is no lack of tools to process data streams," Chen states in a press release, "but RisingWave is one of the few designed as a database and can be easily plugged into a modern data stack to make real-time data intelligence a reality."

Related Items:

Is Real-Time Streaming Finally Taking Off?

Developing Kafka Data Pipelines Just Got Easier

Can Streaming Graphs Clean Up the Data Pipeline Mess?

The rest is here:

RisingWave Emerges to Tackle Tsunami of Real-Time Data - Datanami