NVIDIA GeForce RTX 3080 Ashes Of The Singularity 4K Benchmarks Leaked? – Wccftech

A single, measly data point for the upcoming NVIDIA GeForce RTX 3080's performance improvement over the RTX 2080 Ti leaked out earlier today, courtesy of _rogame (via Videocardz). Before we begin, a warning: this represents a single data point, from a particularly controversial title, running drivers that may or may not be optimized yet. Take it with a grain of salt and always wait for third-party reviews before making a purchasing decision. With that out of the way, let's dig into it.

The RTX 3080 features 100% more CUDA cores than the RTX 2080 Ti (8704 cores vs 4352 cores), which should translate into slightly less than double the gaming performance (assuming a slight dip in clocks and increased thermal limiting) in most titles with very little optimization. That, however, does not appear to be the case in this single data point. The RTX 3080, paired with a Core i9-9900K (a very capable CPU), produces just over 88 frames per second at 4K with a score of 8700.


I do wish to reiterate here that there could be multiple reasons why this is not even close to being indicative of final performance: for one, we don't know the clock speed this was run at. We don't even know for sure whether this is the actual RTX 3080 (AotS scores are not impossible to fake). In fact, this score is just about the same as a highly overclocked RTX 2080 Ti's, which leads us to believe that this may actually be an RTX 2080 Ti masquerading as an RTX 3080.

_rogame was also able to get some stock numbers to compare this score against:

As you can see, this particular RTX 3080 scores around 27% more than a bone-stock RTX 2080 Ti, a performance increase that can be achieved just by using very high clock rates and a closed-loop liquid cooler. The performance shown here absolutely does not make sense considering the RTX 3080 rocks 100% more CUDA cores. If this score is real, then something is very wrong in this test, either in the driver stack or in the software stack. Our money, however, is on this benchmark being fabricated.
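As a sanity check on the numbers above, here is a minimal Python sketch (not part of the leak) that computes the naive scaling expectation from the quoted core counts and the observed uplift from a pair of scores. The stock RTX 2080 Ti score below is a placeholder chosen for illustration; substitute _rogame's actual stock figures to reproduce the comparison.

```python
# Naive scaling expectation vs. observed uplift (illustrative numbers only).

def percent_uplift(new: float, old: float) -> float:
    """Return the percentage improvement of `new` over `old`."""
    return (new / old - 1) * 100

# Core counts quoted in the article.
rtx_3080_cores = 8704
rtx_2080_ti_cores = 4352
print(f"Naive expectation from core count: +{percent_uplift(rtx_3080_cores, rtx_2080_ti_cores):.0f}%")

# Leaked AotS score vs. a hypothetical stock RTX 2080 Ti baseline (placeholder value).
leaked_score = 8700
stock_2080_ti_score = 6850  # assumption for illustration only
print(f"Observed uplift in this leak: +{percent_uplift(leaked_score, stock_2080_ti_score):.0f}%")
```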

Read more:

NVIDIA GeForce RTX 3080 Ashes Of The Singularity 4K Benchmarks Leaked? - Wccftech

That’s all folks, the singularity is near. Elon Musk’s cyber pigs and brain computer tech – Toronto Star

Goodbye Dolly. Hello Gertrude and Dorothy.

Joining Dolly, the first sheep ever cloned, as a sign of our science-fact future, celebrity entrepreneur Elon Musk gave a presentation this past week about Neuralink, his company focused on creating technology that links with brains. As part of it, he introduced pigs that had prototype devices implanted in them. The internet dubbed them Cyber Pigs, and portions of readings from Gertrude's brain were played.

Brain computer technology is at a point where the potential medical implications are so exciting many players are pursuing different approaches to the field. The ethics of using this technology are sometimes best explained in science fiction like Black Mirror and The Matrix.

To discuss the latest in brain computer technology and the Neuralink presentation, we are joined by Graeme Moffat. He is a Senior Fellow at the Munk School of Global Affairs and Public Policy, and also the Chief Scientist and cofounder of System 2 Neurotechnology. He was formerly Chief Scientist and Vice President of Regulatory Affairs with Interaxon, a Toronto-based world leader in consumer neurotechnology.

Listen to this episode and more at This Matters or subscribe at Apple Podcasts, Spotify, Google Podcasts or wherever you listen to your favourite podcasts.

Read more:

That's all folks, the singularity is near. Elon Musk's cyber pigs and brain computer tech - Toronto Star

David Beckham's Esports Organization is Set to Participate in Rocket League – The Game Haus

Over the last few years, Rocket League has been one of the most popular esports titles. Having amassed a significant following since its release in July 2015, the vehicular soccer game is, according to Forbes, poised to be the next major title in competitive gaming. There are many reasons behind the title's unprecedented success; crucially, Rocket League's cross-platform compatibility and multi-console accessibility have undoubtedly played a pivotal role in its recent development.

Because of that, on an esports level, the title has become associated with world-renowned names in both mainstream entertainment and sport. As well as being a sponsor for pay-per-view WWE events, Rocket League has also piqued the interest of Champions League-winning footballer David Beckham. So, let's take a look at the 45-year-old's plans for competitive Rocket League.

According to a report by Front Office Sports, the recently created team is co-owned by Beckham, the former Manchester United and Real Madrid winger. Although new to competitive gaming, the organization looks set to explore some of the sector's most popular titles, such as FIFA, Fortnite, and Rocket League. From a general standpoint, Beckham's involvement in esports is a natural fit. Having spent the bulk of his career playing in football's most famous competitions, the six-time Premier League winner knows what it takes to succeed.

At the time of writing, Guild Esports are still very much in the development stage as they seek to acquire a team of high-caliber players. To date, Beckham's organization has two players on its roster: Joseph Kidd and Thomas Binkhorst, who both used to represent Team Singularity. During his time at Team Singularity, Kidd was part of the side that beat AS Monaco's esports team 3-0 in a competitive Rocket League event.

According to their profiles on Esports Earnings, Beckham's team has acquired two players with a lot of experience in Rocket League. As of August 26th, 2020, Binkhorst is ranked as the 58th-best player from the Netherlands in the Psyonix title, while Kidd is listed at 231st among UK players.

Although Guild Esports have yet to achieve anything on a competitive level, Beckham's involvement, albeit in an ownership capacity, is a testament to Rocket League's appeal. Despite having been released five years ago, the title continues to grow in popularity. TwitchTracker states that the Psyonix game averages 57,323 viewers on Twitch, the world's leading live streaming service for video games.


Both in-game and in the mainstream, Rocket League is enjoying a period of expansion. As well as more teams entering the fray, Psyonix's decision to expand the NA and EU leagues to ten teams means that organizations will need to be on form to achieve future success. This, combined with the emergence of well-backed brands, will only ensure that the title retains its relevancy and becomes even more significant to the esports industry in the next decade.

Furthermore, the involvement of Beckham may also be central to expanding the market's audience base. In recent years, esports has adapted to cater to growing interest, as evident from the rise of esports betting. Having taken the world by storm, several trusted bookmakers now offer in-depth markets on some of the industry's most popular titles, such as FIFA and Counter-Strike: Global Offensive. For example, operators like Betway and bet365 are considered to be two of the best esports betting sites currently on the market, as their extensive offerings are coupled with sign-up bonuses.

While it remains to be seen whether Beckham's Guild Esports can hit the heights on the competitive stage, their involvement is unquestionably positive for the sector. Ultimately, this expansion is symbolic of the appeal that Rocket League has been able to generate since its release. As a result, the future is undoubtedly bright for the Psyonix title.

See the original post:

David Beckham's Esports Organization is Set to Participate in Rocket League - The Game Haus

Early-stage VC StartupXseed hits first close of new fund – VCCircle

Early-stage venture capital firm StartupXseed Ventures LLP has hit the first close of its second fund that will focus on deep-technology startups, according to multiple media reports.

The fund has raised Rs 65 crore ($8.84 million at current exchange rates) and expects to mop up the full Rs 150 crore over the next six to nine months, The Economic Times reported.

This new fund comes almost four years after StartupXseed raised its first fund, which had a targeted corpus of Rs 100 crore.

The second vehicle has received commitments from several family offices and professionals from the information technology sector.

It will focus primarily on investments in the deep-technology segments, including semiconductors, cybersecurity, drones and aerospace. It will also look to invest in the artificial intelligence and machine learning sectors.

The fund will look to make around 15 investments with ticket sizes ranging between Rs 3 crore and Rs 10 crore, StartupXseed managing partner BV Naidu and co-founding partner Ravi Thakur said, per the report.

The firm was founded by Naidu, Thakur, TV Mohandas Pai (through Aarin Capital) and Ramakrishna V.

VCCircle has reached out to the venture capital firm on the details of this new fund and will update this report accordingly.

The vehicle will also look at potential investments in the financial- and health-technology sectors, and says it aims to deploy the entire corpus over the next 24-30 months.

"Our goal from the first fund was to stabilise the process, identify the thesis of our process and provide post-investment support," Naidu said. StartupXseed says it has made around 12 investments from Fund-I and has recorded three exits so far.

Two of these are complete exits from bot management solutions provider ShieldSquare and fabless semiconductor firm Siliconch Systems. It has also recorded a partial exit from the human resources-focussed software-as-a-service (SaaS) firm Darwinbox.

Last month, StartupXseed took part in a Rs 8 crore (around $1.07 million) pre-Series A funding round in SmarterBiz, an AI-based customer experience platform. Other portfolio companies include Bellatrix Aerospace and Singularity Dynamics.

Continue reading here:

Early-stage VC StartupXseed hits first close of new fund - VCCircle

Kiki’s Vacation Wins the Casual Gaming Weekly Vote at Game Development World Championship – Gamasutra

[This unedited press release is made available courtesy of Gamasutra and its partnership with notable game PR-related resource GamesPress.]

For immediate release on September 7th.

Kiki's Vacation by Mexican development team HyperBeard has won the weekly vote in the Game Development World Championship's Fan Favourite category for Casual Gaming Week. The game is available for iOS and Android devices.

"Kiki's Vacation sets you off to the breezy paradise of Kokoloko Island in a relaxing idle adventure! Join Kiki as she befriends the locals, explores the islands' secrets, finds romance (*wink*wink*) and discovers herself in the process!" -HyperBeard describes Kiki's Vacation.

The HyperBeard team will move on to the next round in the Fan Favourite category of the GDWC -Game Development World Championship. They will face the other weekly vote winners in the final voting event at the end of the GDWC 2020 season.

Runners up were:

2nd place: Fall Master by Sambrela from Georgia

3rd place: Wobbly Dot by Piron Games from the United Kingdom

The Rest of the Nominees in alphabetical order:

- Popcorn 3D by Jolly Llama Games

- Terminal Singularity by Moustache Cabal from Bulgaria

- World Wreckers by Lea Creative Industries from Seychelles

The GDWC team sends congratulations to the HyperBeard team and big thanks to all Nominees and voters. The weekly votes take place each week, from Monday to Saturday and there are always six new exciting games to check out and vote for. This week's vote is already live on the event website at thegdwc.com.

Game Development World Championship Website:

http://thegdwc.com

Kiki's Vacation Win Announcement:

https://thegdwc.com/blog/blog.php?blog_id=52

Kiki's Vacation GDWC page:

https://thegdwc.com/pages/game.php?game_guid=f55765e8-1966-4346-a05b-2257310748fc

Kiki's Vacation Trailer:

For more information contact:

Olli Mäntylä, The GDWC Manager

[email protected]

More here:

Kiki's Vacation Wins the Casual Gaming Weekly Vote at Game Development World Championship - Gamasutra

The world of Artificial… – The American Bazaar

Sophia. Source: https://www.hansonrobotics.com/press/

Humans are the most advanced form of Artificial Intelligence (AI), with an ability to reproduce.

Artificial Intelligence (AI) is no longer a theory but is part of our everyday life. Services like TikTok, Netflix, YouTube, Uber, Google Home Mini, and Amazon Echo are just a few instances of AI in our daily life.

This field of knowledge has always attracted me in strange ways. I have been an avid reader and I read a variety of subjects of a non-fiction nature. I love to watch movies, not particularly sci-fi, but I liked Innerspace, Flubber, Robocop, Terminator, Avatar, Ex Machina, and Chappie.

When I think of Artificial Intelligence, I see it from a lay perspective. I do not have an IT background. I am a researcher and a communicator, and I consider myself a happy person who loves to learn and solve problems through simple and creative ideas. My thoughts on AI may sound different, but I'm happy to discuss them.

Humans are the most advanced form of AI that we may know to exist. My understanding is that the only thing that differentiates humans and Artificial Intelligence is the capability to reproduce. While humans have the ability to multiply through male and female union and transfer their abilities through tiny cells, machines lack that function. The transfer of cells to a newborn is no different from the transfer of data to a machine. It's breathtaking how a tiny cell in a human body has all the necessary information of not only that particular individual but also their ancestry.

Allow me to give an introduction to the recorded history of AI. Before that, I would like to take a moment to share a recent achievement that I feel proud to have accomplished: I finished a course in AI from Algebra University in Croatia in July. I was able to attend this course through a generous initiative and bursary from Humber College (Toronto). Such initiatives help intellectually curious minds like me to learn. I would also like to note that the views expressed here are my own understanding and judgment.

What is AI?

AI is a branch of computer science that is based on computer programming, like several other coding disciplines. What differentiates Artificial Intelligence, however, is its aim, which is to mimic human behavior. And this is where things become fascinating, as we develop artificial beings.

Origins

I have divided the origins of AI into three phases so that I can explain it better and you don't miss the sequence of events that led to the step-by-step development of AI.

Phase 1

AI is not a recent concept. Scientists were already brainstorming about it and discussing the thinking capabilities of machines even before the term Artificial Intelligence was coined.

I would like to start in 1950 with Alan Turing, a British intellectual who helped bring WWII to an end by decoding German messages. In October 1950, Turing released a paper, "Computing Machinery and Intelligence," that can be considered among the first hints of thinking machines. Turing starts the paper thus: "I propose to consider the question, 'Can machines think?'" Turing's work was also the beginning of Natural Language Processing (NLP); 21st-century mortals can relate it to the invention of Apple's Siri. The A.M. Turing Award is considered the Nobel of computing. The life and death of Turing were unusual in their own way. I will leave it at that, but if you are interested in delving deeper, here is one article by The New York Times.

Five years later, in 1955, John McCarthy, an Assistant Professor of Mathematics at Dartmouth College, and his team proposed a research project in which they used the term "Artificial Intelligence" for the first time.

McCarthy explained the proposal, saying, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." He continued, "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

It started with a few simple logical thoughts that germinated into a whole new branch of computer science in the coming decades. AI can also be related to the concept of Associationism, which has been traced back to Aristotle around 300 BC. But discussing that in detail is outside the scope of this article.

It was in 1958 that we saw the first model replicating the brain's neuron system. This was the year psychologist Frank Rosenblatt developed a program called the Perceptron. Rosenblatt wrote in his article, "Stories about the creation of machines having human qualities have long been a fascinating province in the realm of science fiction. Yet we are now about to witness the birth of such a machine, a machine capable of perceiving, recognizing, and identifying its surroundings without any human training or control."

A New York Times article published in 1958 introduced the invention to the general public, saying, "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."

My investigation into one of Rosenblatt's papers hints that scientists were talking about artificial neurons even in the 1940s. Notice the reference section of Rosenblatt's paper published in 1958: it lists the 1943 paper by Warren S. McCulloch and Walter H. Pitts. If you are interested in more details, I would suggest an article published on Medium.
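For curious readers, here is a minimal single-layer perceptron in modern Python. It is not Rosenblatt's original implementation (which was custom hardware), just an illustrative sketch of the idea: a weighted sum of inputs, a threshold, and a small weight adjustment whenever the prediction is wrong.

```python
# Minimal single-layer perceptron learning the logical AND function (illustrative only).

def predict(weights, bias, x):
    """Fire (output 1) if the weighted sum of inputs crosses the threshold, else 0."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=10, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Nudge the weights toward the correct answer whenever we get it wrong.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # expected output: [0, 0, 0, 1]
```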

The first AI conference took place in 1959. However, by this time, the leading minds in Artificial Intelligence had already exhausted the computing capabilities of the time. It is, therefore, no surprise that not much could be achieved in AI in the next decade.

Thankfully, the IT industry was catching up quickly and preparing the ground for stronger computers. Gordon Moore, the co-founder of Intel, made a few predictions in a 1965 article. Moore predicted a huge growth of integrated circuits, more components per chip, and reduced costs. "Integrated circuits will lead to such wonders as home computers or at least terminals connected to a central computer, automatic controls for automobiles, and personal portable communications equipment," Moore predicted. Although scientists had been toiling hard to launch the Internet, it was not until the late 1960s that the invention started showing some promise. "On October 29, 1969, ARPAnet delivered its first message: a node-to-node communication from one computer to another," notes History.com.

With the Internet in the public domain, computer companies had a reason to accelerate their own developments. In 1971, Intel introduced its first microprocessor. It was a huge breakthrough. Intel impressively compared the size and computing abilities of the new hardware, saying, "This revolutionary microprocessor, the size of a little fingernail, delivered the same computing power as the first electronic computer built in 1946, which filled an entire room."

Around the 1970s, more popular programming languages came into use, for instance, C and SQL. I mention these two because I remember that when I did my Diploma in Network-Centered Computing in 2002, the advanced versions of these languages were still alive and kicking. Britannica has a list of computer programming languages if you care to read more on when the different languages came into being.

These advancements created a perfect amalgamation of resources to trigger the next phase in AI.

Phase 2

In the late 1970s, we see another AI enthusiast coming onto the scene with several research papers on AI. Geoffrey Hinton, a Canadian researcher, had confidence in Rosenblatt's work on the Perceptron. He resolved an inherent problem with Rosenblatt's model, which was made up of a single-layer perceptron. "To be fair to Rosenblatt, he was well aware of the limitations of this approach; he just didn't know how to learn multiple layers of features efficiently," Hinton noted in his paper in 2006.

This multi-layer approach can be referred to as a Deep Neural Network.

Another scientist, Yann LeCun, who studied under Hinton and worked with him, was making strides in AI, especially Deep Learning (DL, explained later in the article) and Backpropagation Learning (BL). BL can be referred to as machines learning from their mistakes or learning from trial and error.
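To make "multiple layers" and "learning from mistakes" concrete, here is an illustrative two-layer network trained with backpropagation on the XOR problem, a task a single-layer perceptron cannot solve. This is a textbook toy sketch, not the historical models themselves; the hidden-layer size, learning rate, and iteration count are arbitrary choices for demonstration.

```python
import numpy as np

# Tiny two-layer network trained with backpropagation on XOR (illustrative sketch).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer: 2 inputs -> 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer: 8 units -> 1 output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: matrix multiplications plus a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error back through the layers ("learning from mistakes").
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # typically converges toward [0, 1, 1, 0]
```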

As in Phase 1, the developments of Phase 2 ended here, due to very limited computing power and insufficient data. This was around the late 1990s. As the Internet was fairly recent, there was not much data available to feed the machines.

Phase 3

In the early 21st century, computer processing speed reached a new level. In 2011, IBM's Watson defeated its human competitors in the game of Jeopardy, and was quite impressive in its performance. On September 30, 2012, Hinton and his team released the object recognition program called AlexNet and tested it on ImageNet. The success rate was above 75 percent, which had not been achieved by any such machine before. This object recognition result sent ripples across the industry. By 2018, image recognition programs had become 97% accurate! In other words, computers were recognizing objects more accurately than humans.

In 2015, Tesla introduced its self-driving AI car. The company boasts of its Autopilot technology on its website, saying, "All new Tesla cars come standard with advanced hardware capable of providing Autopilot features today, and full self-driving capabilities in the future, through software updates designed to improve functionality over time."

Go enthusiasts will also remember the 2016 incident when Google-owned DeepMind's AlphaGo defeated the human Go world champion Lee Se-dol. This achievement came at least a decade sooner than expected. We know that Go is considered one of the most complex games in human history, and AI could learn it in just three days, to a level that beat a world champion who, I would assume, must have spent decades achieving that proficiency!

The next phase shall be to work on the singularity. The singularity can be understood as machines building better machines, all by themselves. In 1993, the scientist Vernor Vinge published an essay in which he wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Scientists are already working on the concept of technological singularity. If these achievements can be used in a controlled way, they can help several industries, for instance, healthcare, automobiles, and oil exploration.

I would also like to add here that Canadian universities are contributing significantly to developments in Artificial Intelligence. Along with Hinton and LeCun, I would like to mention Richard Sutton. Sutton, a professor at the University of Alberta, is of the view that advancements toward the singularity can be expected around 2040. This makes me feel that when AI no longer needs human help, it will be a kind of species in and of itself.

To get to the next phase, however, we will need more computing power to achieve the goals of tomorrow.

Now that we have some background on the genesis of AI and some information on the experts who nourished this advancement all these years, it is time to understand a few key terms of AI. By the way, if you ask me, every scientist who is behind these developments is a new topic in themselves. I have tried to put a good number of researched sources in the article to generate your interest and support your knowledge in AI.

Big Data

With the Internet of Things (IoT), we are saving tons of data every second from every corner of the world. Consider, for instance, Google. It seems to start tracking our intentions as soon as we type the first letter on our keyboard. Now think for a second how much data is generated by all the internet users all over the world. It's already making predictions of our likes, dislikes, actions, everything.

The concept of big data is important, as it forms the memory of Artificial Intelligence. It's like a parent sharing their experience with their child. If the child can learn from that experience, they develop cognitive abilities and venture into making their own judgments and decisions. Similarly, big data is the human experience that is shared with machines, and they develop on that experience. This can be supervised as well as unsupervised learning.

Symbolic Reasoning and Machine Learning

The basics of all processes are some mathematical patterns. I think that this is because math is something that is certain and easy to understand for all humans. 2 + 2 will always be 4 unless there is something we haven't figured out in the equation.

Symbolic reasoning is the traditional method of getting work done through machines. According to Pathmind, "to build a symbolic reasoning system, first humans must learn the rules by which two phenomena relate, and then hard-code those relationships into a static program." Symbolic reasoning in AI is also known as Good Old-Fashioned AI (GOFAI).

Machine Learning (ML) refers to the activity where we feed big data to machines and they identify patterns and understand the data by themselves. The outcomes are not predetermined, as the machines are not programmed toward specific outcomes. It's like a human brain, where we are free to develop our own thoughts. A video by ColdFusion explains ML thus: "ML systems analyze vast amounts of data and learn from their past mistakes. The result is an algorithm that completes its task effectively." ML works well with supervised learning.
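As a toy illustration of the contrast (my own example, not Pathmind's or ColdFusion's), consider flagging spam messages. The symbolic approach hard-codes the rule; the machine-learning approach infers word weights from a handful of labeled examples and can catch cases the hand-written rule misses.

```python
from collections import defaultdict

# Toy contrast: symbolic reasoning (hand-written rule) vs. machine learning
# (a rule inferred from labeled examples). Purely illustrative.

messages = [
    ("win a free prize now", 1), ("meeting moved to 3pm", 0),
    ("free money click here", 1), ("lunch tomorrow maybe", 0),
    ("claim your free reward", 1), ("quarterly report attached", 0),
]

# Symbolic / GOFAI: a human encodes the rule directly.
def symbolic_is_spam(text: str) -> int:
    return 1 if "free" in text else 0

# Machine learning: weight each word by how often it appears in spam vs. not-spam.
def train(examples):
    scores = defaultdict(float)
    for text, label in examples:
        for word in text.split():
            scores[word] += 1 if label else -1
    return scores

def learned_is_spam(scores, text: str) -> int:
    return 1 if sum(scores.get(w, 0.0) for w in text.split()) > 0 else 0

scores = train(messages)
print(symbolic_is_spam("claim your reward today"))          # 0: the hand-written rule misses it
print(learned_is_spam(scores, "claim your reward today"))   # 1: the learned weights catch it
```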

Here I would like to make a quick tangent for all those creative individuals who need some motivation. I feel that all inventions were born out of creativity. Of course, creativity comes with some basic understanding and knowledge. Out of more than 7 billion brains, somewhere someone is thinking outside the box, verifying their thoughts, and trying to communicate their ideas. Creativity is vital for success. This may also explain why some of the most important companies started in a garage (Google and Apple). Take, for instance, a small creative tool like a pizza cutter. Someone must have thought about it. Every time I use it, I marvel at how convenient and efficient it is to slice a pizza without disturbing the toppings with that rolling cutter. Always stay creative and avoid preconceived ideas and stereotypes.

Alright, back to the topic!

Deep Learning

Deep Learning (DL) is a subset of ML. "This technology attempts to mimic the activity of neurons in our brain using matrix mathematics," explains ColdFusion. I found this article, which describes DL well. With better computers and big data, it is now possible to venture into DL. Better computers provide the muscle, and big data provides the experience, to a neural network. Together, they help a machine think and execute tasks just like a human would. I would suggest reading the paper titled "Deep Learning" by LeCun, Bengio, and Hinton (2015) for a deeper perspective on DL.

The ability of DL makes it a perfect companion for unsupervised learning. As big data is mostly unlabelled, DL processes it to identify patterns and make predictions. This not only saves a lot of time but also generates results that are completely new to a human brain. DL offers another benefit: it can work offline, meaning that, for instance, a self-driving car can take instantaneous decisions while on the road.

What next?

I think that the most important future development will be AI coding AI to perfection, all by itself.

Neural nets designing neural nets have already started, and early signs of self-production are in sight. Google has already created programs that can produce their own code. This is called Automatic Machine Learning, or AutoML. Sundar Pichai, CEO of Google and Alphabet, shared the experiment on his blog in 2017: "Today, designing neural nets is extremely time intensive, and requires an expertise that limits its use to a smaller community of scientists and engineers. That's why we've created an approach called AutoML, showing that it's possible for neural nets to design neural nets," said Pichai.
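Google's AutoML relies on reinforcement learning and other sophisticated controllers, but the core loop can be sketched very simply: propose candidate network architectures, evaluate each one, and keep the best. The Python sketch below uses random search and a placeholder scoring function where real training and validation accuracy would go; it illustrates the idea only and is not Google's system.

```python
import random

# Illustrative "networks designing networks" loop: random search over architectures.
# In a real AutoML system, evaluate() would train each candidate and return validation accuracy.

random.seed(42)

def propose_architecture():
    depth = random.randint(1, 4)
    return [random.choice([16, 32, 64, 128]) for _ in range(depth)]  # hidden layer widths

def evaluate(arch):
    # Placeholder score: pretend accuracy grows with capacity but is penalized for size.
    capacity = sum(arch)
    return capacity / (capacity + 100) - 0.0005 * len(arch) * max(arch)

best_arch, best_score = None, float("-inf")
for _ in range(50):
    candidate = propose_architecture()
    score = evaluate(candidate)
    if score > best_score:
        best_arch, best_score = candidate, score

print("best architecture found:", best_arch, "score:", round(best_score, 3))
```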

Full AI capabilities will also enable several other applications, like fully automated self-driving cars and full-service assistance in sectors like health care and hospitality.

Among the several useful applications of AI, ColdFusion has identified the five most impressive ones in terms of image outputs. These are AI generating an image from text (Plug and Play Generative Networks: Conditional Iterative Generation of Images in Latent Space), AI reading lip movements from a video with 95% accuracy (LipNet), AI creating new images from just a few inputs (Pix2Pix), AI improving the pixels of an image (Google Brain's Pixel Recursive Super Resolution), and AI adding color to black-and-white photos and videos (Let There Be Color). In the future, these technologies could be used for more advanced functions, such as law enforcement.

AI can already generate images of non-existent humans and add sound and body movements to videos of individuals! In the coming years, these tools could be used for gaming purposes, or maybe for fully capable multi-dimensional assistance like the one we see in the movie Iron Man. Of course, all these developments would require new AI laws to avoid misuse; however, that is a topic for another discussion.

Humans are advanced AI

Artificial Intelligence is getting so good at mimicking humans that it seems humans themselves are some sort of AI. The way Artificial Intelligence learns from data, retains information, and then develops analytical, problem-solving, and judgment capabilities is no different from a parent nurturing their child with their experience (data) and the child then retaining that knowledge and using their own judgment to make decisions.

We may want to remember here that there are a lot of things that even humans have not figured out with all their technology. A lot of things are still hidden from us in plain sight. For instance, we still don't know about all the living species in the Amazon rainforest. Astrology and astronomy are two other fields where, I think, very little is known. Air, water, land, and celestial bodies influence human behavior, and science has evidence for this. All this hints that we as humans are not in total control of ourselves. This feels similar to AI, which so far requires external intervention, such as from humans, to develop it.

I think that our past has answers to a lot of questions that may unravel our future. Take, for example, the Great Pyramid at Giza, Egypt, which we still marvel at for its mathematical accuracy and alignment with the earth's equator as well as the movements of celestial bodies. By the way, we can compare the measurements only because we have already reached a level where we know the numbers relating to the equator.

Also, think of India's knowledge of astrology. It has so many diagrams of planetary movements that are believed to impact human behavior. These sketches have survived several thousand years. One of India's languages, Vedic Sanskrit, is considered more than 4,000 years old, perhaps one of the oldest in human history; this was actually a question asked of IBM Watson during the 2011 Jeopardy competition. Understanding the literature in this language might unlock a wealth of information.

I feel that with the kind of technology we have in AI, we should put some of it to work to unearth our wisdom from the past. It is a possibility that if we overlook it, we may waste resources by reinventing the wheel.

Read the original post:

The world of Artificial... - The American Bazaar

What’s the magic behind Matthew Stafford’s mastery of the Lions’ offense? – Detroit Lions Blog- ESPN – ESPN

ALLEN PARK, Mich. -- The ball looked like it could have been intercepted easily. Jeff Okudah was in perfect position in the end zone. He read everything right. He was where he was supposed to be. It didn't matter.

Not even close.

Matthew Stafford put the ball where only his receiver, Marvin Hall, could catch it. It was a window so small that, realistically, only the football could have fit through for the play to work. You could say this is only one play in training camp and might not be indicative of how Stafford played in practice throughout August.


Except this wasn't a singularity. It happened to Amani Oruwariye against Kenny Golladay. It happened to Jahlani Tavai and ended up in the hands of Marvin Jones. Combine that with Stafford's arm strength -- which remains among the best in the league -- and there's reason to think the 12-year veteran might be on the cusp of a season in which he fulfills the potential that's surrounded him since he was drafted, both in his physical abilities and his knowledge of exactly where to throw the ball and when.

"He's a wizard, man," said backup quarterback Chase Daniel, who has known Stafford since high school. "It's impressive. His recall of plays, a photographic memory, all that stuff you want in a quarterback. It's impressive and makes you want to work harder, and it's why he's been one of the best quarterbacks in the league going on 12 years now."

It isn't a practice thing, either. He's done it during games, too -- either with the help of Calvin Johnson earlier in his career or throws that make you wonder how he pulled it off the past few seasons, including a pass through three Kansas City defenders for a touchdown to Golladay in Week 4 last season.

"I wish more people could appreciate it," backup quarterback David Blough said.

At the time, Blough was still learning about his new teammate. A rookie out of Purdue who was traded to Detroit from Cleveland at the roster-cuts deadline, Blough had only watched Stafford from afar on television, and remembered what he saw of him while growing up just outside Dallas himself, when Stafford was in high school.

The next day, in the quarterback meeting room, Blough got to see a small bit of Stafford's personality. He almost shrugged it off as "he's just doing his job," although Blough said you might get a wink from him as he's saying it.

This always has been who Stafford is -- from top-rated high school recruit to top-rated college quarterback and then the No. 1 pick in the 2009 draft. He has thrown for 5,000 yards in a season and holds a bevy of fastest-to NFL passing records.

He's led 28 fourth-quarter comebacks, tied with Brett Favre for No. 11 in history. He's No. 18 in all-time passing yards, with 41,025, and if he has at least a 4,000-yard season he'll pass Dan Fouts and Drew Bledsoe to be No. 16 all-time. His 256 touchdowns are No. 19 all-time, and he's 35 touchdown passes away from moving into the top 15.

He is also, at age 32, perhaps playing better than he ever has. Before he suffered broken bones in his back last season, sending him to injured reserve, he was playing at a Pro Bowl level in the first year in Darrell Bevell's offense, completing 64.3 percent of his passes for 2,499 yards, 19 touchdowns and five interceptions.

Had he played a full season, he might have reached 5,000 yards for the second time. While he's played in other offenses before -- becoming prolific in Scott Linehan's Air Raid offense early in his career and then more efficient in the Jim Caldwell/Jim Bob Cooter system for five years after that -- it's possible Bevell's offense fits him better than the others.

It meshes a mix of play-action and focus on the run game with enough attempts at bigger, explosive plays that take advantage of Stafford's arm and the skills of Golladay and Jones to win contested catches.

"When we're out there at quarterback, we're empowered to throw," Blough said. "Take shots, take shots, take shots. [Bevell] keeps calling them and I think Matthew feels encouraged by that and confident."

While it appears he has mastery over Bevell's system, and Stafford is reaching a point in his career where almost any offense is going to be something he picks up quickly, Bevell has noticed some small, subtle changes entering another season with Stafford, something that could make a great quarterback even better.

"He might be even a little bit quicker on some of the decisions he's making," Bevell said. "We really have put an emphasis on his speed. Starting with last year when we got here and how your feet correspond to the plays, I think he's done a nice job with that.

"I mean, he's just a special talent in terms of throwing the football. It just looks so effortless. He can just flick it, and the ball's flying out of his hands. He's always been impressive that way."


It's something his teammates have known and his coaches have learned as they've worked with him. It's something the public has understood in fits and starts, but if Stafford can stay healthy in 2020 and manage his team through an abnormal season in a global pandemic, it's possible he might be able to do one thing that could get him more recognition.

Win the Lions' first division title since 1993, when Stafford was 5 years old.

The rest is here:

What's the magic behind Matthew Stafford's mastery of the Lions' offense? - Detroit Lions Blog- ESPN - ESPN

(G)I-DLE are the only major K-pop girl group writing their own music – i-D

(G)I-DLE, the six-member K-pop girl group, are weighing up what they love and admire about each other. "Shuhua's eccentric thoughts, Soojin's eye-catching dancing skills..." offers their leader and rapper Soyeon, who is herself whip-smart, elfish and, when she needs to be, steely. "Yuqi's confidence," adds Miyeon. "She really knows how to love herself." Describing what it is that makes Minnie, Shuhua, Soojin, Soyeon, Yuqi and Miyeon unique (charisma, beauty, humour, dreamlike auras, the double whammy of sexy-but-cute), their words take flight like tiny jewel-coloured birds, darting and uplifting.

That (G)I-DLE see each other not just as bandmates but as role models and muses comes through in every interview they do. As a multinational group (Thai, Taiwanese, Chinese, Korean) working through cultural differences and the hardships of being far from home, they've formed an affectionate, protective sisterhood. And as a self-producing girl group who write their own music (a very rare entity in K-pop) and have input into every part of their creative process, they utilise this closeness to their advantage; the tiniest details about each individual member serve as inspiration. All the while, their fandom (NEVERLAND) sprawls further across the world with every new record.

The tightness of their bond makes their onstage presence prismatic, boldly reflecting the dozens of shifting, individual elements that make up each young woman outwards as a complex singularity. At the very core of (G)I-DLE, however, the shared, primary foundation has always been self-belief. It's what led Soyeon to write their debut-securing first single, LATATA; what propelled Yuqi, Shuhua and Minnie to leave their countries to try their luck in South Korea's idol industry in the first place; and what gave them, the only rookie group competing alongside five senior acts, the confidence and skill to wind up in third place on the survival show Queendom, causing a stir with their regal, fearless finale performance of LION.

But when in conversation with the group, it becomes clear that they're also united by a shared ambition, vulnerability, and the inclination to push beyond traditional expectations of them as women in K-pop. In fact, they refuse to acknowledge that these boundaries even exist. "We haven't hit #1 on the digital charts yet," Yuqi says, "that's something I really want to achieve, and I hope we can be more acknowledged musically!" Miyeon, who recently revealed that (G)I-DLE are set on creating their own genre, says that they're "still working towards it... we're consistently trying to achieve that goal." Soyeon agrees, adamant that even when they do, it too will contain no barriers and have no end.

In 2020, the group have already celebrated their two-year anniversary and released three singles: the formidable Oh My God (from their third Korean EP, I Trust); I'm The Trend (written by Minnie and Yuqi); and their latest, DUMDi DUMDi, a lighthearted summer bop. Casually highlighting (G)I-DLE's effortless duality, their second Japanese EP, consisting of translated, re-recorded versions of tracks including Oh My God and a brand new deep cut, Minnie's heartbreaking Tung Tung (Empty), dropped at the end of August, making (G)I-DLE's 2020 a creatively abundant one, despite the looming global pandemic.

We spoke to the group to discuss Tung-Tung and more

DUMDi DUMDi is definitely a change of pace for a (G)I-DLE lead single. Why was now the right time to drop something so upbeat and breezy? Minnie: We always try to do something new. The concept of this comeback was also interpreted in our own style, which I hope many people liked. (G)I-DLE turned bright and fresh for the summer!

The video shoot looks like it was a lot of fun. What do you remember most about it? Yuqi: We were soaked the whole time from shooting the pool scene and the bubble party scene. Shooting those scenes was a bit tough, but they turned out very nicely.

Soojin: The bubble party scene is the most memorable. Make sure to check out that scene from the music video!

Soyeon, on I'm The Trend you wrote: "I have everything that you want to resemble / My charms that endured through the tough Produce 101, Unpretty Rapstar, Queendom." How do you think these shows helped you become the leader you are today? Soyeon: I learned that if you survive through the struggles, you will eventually make it. I'm a goal pursuer, I am competitive, and I am not easily wavered. It's more appropriate to say that I'm the type of person who can enjoy competition shows, than to say that being on competition shows helped me. And as a leader, this personality of mine comes in handy.

You've described yourself as having been a quiet kid. When do you recall this other side of you, the fierce, competition-loving Soyeon, appearing? Soyeon: I am pretty quiet. Rather than wanting to win over someone, I just wanted to be the best since I was young. Honestly, I am not sure what kind of influence my parents had on me adopting this kind of mindset, but they always had faith in me!

Queendom was a tough show. Looking back, how has it impacted (G)I-DLE long-term? Miyeon: I was scared and concerned about being on a competitive show. But every time we prepared a performance and got on stage, the belief that I will be fine as long as I am with my teammates became stronger. Once Queendom was over, I started to think that the six of us can do anything together, and I became confident enough not to fear any adventures.

It's clear that (G)I-DLE have a true bond. What's something that helps keep you close as a team? Miyeon: We spend a lot of time talking. We don't need to consciously make time for it, we just talk amongst ourselves a lot, opening up even the trivial parts of our lives without discomfort. All six of us like to eat, so we get together to have good food, too.

Minnie, Yuqi and Soyeon, as all three of you are songwriters, how do your working styles differ? Minnie: I concentrate on my feelings at the moment. For You was the most challenging to write because it was actually my first time creating a song on my own. I made that song when I was very lonely and struggling; I felt so alone then that I wanted to tell somebody. The feelings got more intense as I wrote the lyrics, it was overwhelming. I hope listening to this song will relieve people's sad and lonesome hearts.

Soyeon: I consider the lyrics the most important. The line "You changed as if you took a drug" from HANN (Alone) is one I'm proud of. It illustrates your partner turning into a totally different person the moment your relationship comes to an end.

Minnie, tell me about your new song Tung-Tung (Empty)... I wrote the song utilising the Korean word "tung-tung," and the message of the song is "my heart which used to be full of you is now empty (tung-tung)." I hope the audience finds the loneliness relatable as it's a track I put my best efforts into. The harmonies and string instruments are key points also, so pay attention to them as you listen!

Soyeon, as the group's main songwriter and someone who always strives to be the best, have you ever had a fear around the possibility that a song might not be a success? How do you see past intrusive thoughts like that? I see failure as something that will not happen. And if it does, it will be just a moment, and I'm sure I'd be able to overcome it shortly.

And, finally, what would you like to say to NEVERLAND right now? Minnie: Dear NEVERLAND, I always thank you for all the love you have given us until now, and I love you. Miyeon: If NEVERLAND weren't here, (G)I-DLE wouldn't be here either. Thanks to NEVERLAND for being by our side and supporting us all the time. We will do our best for them!

Visit link:

(G)I-DLE are the only major K-pop girl group writing their own music - i-D

‘The World To Come’: Review | Reviews – Screen International

Dir. Mona Fastvold. US. 2020. 98 mins.

It would be easy to sell The World to Come as the female Brokeback Mountain, but that would be to traduce the richness, singularity and command of Mona Fastvold's beautifully executed and acted drama. The story of female friendship blossoming into passionate love in a severe 1850s American rural setting, this is an austere but lyrical piece underwritten by a complex grasp of emotional and psychological nuance, and a second feature of striking command by the Norwegian-born director, following up her 2014 debut The Sleepwalker (she has also collaborated as writer on Brady Corbet's features).

Understatement and interiority are the watchwords for a film which uses suggestion and period language very subtly

Scripted with heightened literary cadences by Ron Hansen and Jim Shepard, the film is well crafted in every respect, and marks an acting career high for Katherine Waterston, as well as a fine showcase for the ever more impressive Vanessa Kirby. Fastvold's maturely satisfying piece, picked up internationally by Sony Pictures, should find acclaim on the festival circuit, and upmarket distributors will hopefully find a way to highlight its appeal to discerning audiences on the big screen, where its stark elegance will truly flourish.

The film is framed, with handwritten date captions, as a diary kept in the 1850s in rural upstate New York by Abigail (Waterston), the young wife of farmer Dyer (Casey Affleck). Their relationship lies under the shadow of the recent death of their young daughter, and grief, along with the normal rigours of life in the remote countryside, is keeping them emotionally apart, with the thoughtful Abigail and the gentle but taciturn Dyer unable to communicate their feelings, as seems par for the course in a rural marriage of this period. One day, however, Abigail exchanges glances with a new neighbour, Tallie (Kirby), in a subtle hint of what could be classified as love at first sight. When Tallie pays a neighbourly visit, the two instantly bond, exchanging confidences, with Abigail's reserve gradually conquered by Tallie's candour and ironic knowingness about women's domestic lot, something she is familiar with, being married to the possessive Finney (Christopher Abbott).

Working over the seasons, beginning with a descent into a harshly forbidding winter, Fastvold teases out the shifts in the characters' lives, at first establishing a tone of pensive reserve, then setting a note of heightened peril (mortality, after all, really means something in this environment), notably in an extraordinary blizzard sequence. As the action enters another year, warmth comes into the two women's lives; at last their slow-simmering romance catches fire in tentative declarations followed by a first kiss, and the fond words, "You smell like a biscuit." There are flashes of overt sexual content, but used extremely sparingly and telegraphically towards the end, while Fastvold shows the meaning of Abigail's passion in subtle touches, like a moment where she lies back on a table, fully dressed, in a quiet swoon of rapture.

Acted with finely calibrated subtlety, the film uses close-ups sparingly but to resonant effect, contrasting the cautiousness with which Abigail reveals herself and the warmer, more openly expressive face of Tallie. Waterston and Kirby pull off something very finely balanced, conveying the enormity of their characters' emotions while speaking a stylised, formal, sometimes playful language: the script will be music to lovers of 19th-century American writing (Hawthorne, Emily Dickinson, Edith Wharton). As the two husbands, Affleck and Abbott contrast sharply, both playing deeply enclosed, solemn men, but of different emotional literacy, one with a capacity for moral generosity, the other shockingly without.

Understatement and interiority are the watchwords for a film which uses suggestion and period language very subtly. Poetry plays a part in the central relationship, but there's a poetic ring to the prose too, both in the dialogue and in Abigail's journal (both screenwriters are novelists, Ron Hansen having explored this period in The Assassination of Jesse James by the Coward Robert Ford, the film of which starred Casey Affleck as Ford). This is also very much a film about the power and necessity of writing, as suggested by a line that compares ink to fire: "a good servant and a hard master."

Ink on paper is also sometimes suggested by the look of the winter sequences, colours bled to monochrome. Shot on 16mm by André Chemetoff, the film at once captures the look of period photography and establishes a feeling of contemporary realism, with no alienating sense of historical distance. The grainy texture of the images, combined with Jean Vincent Puzos's meticulous design, somewhat recalls the American period films (Meek's Cutoff, First Cow) of Kelly Reichardt, with something of the severe grace of Terence Davies's best work.

There is also a distinctive score by Daniel Blumberg, foregrounding woodwinds, notably in the blizzard sequence, which has a feel of free jazz without being incongruous for the period (improvising legend Peter Brötzmann is featured on bass clarinet). The closing song, featuring singer Josephine Foster, catches the period feel perfectly over manuscript-style end credits.

Production companies: Seachange Media, Killer Films, Hype Films

International sales: Charades, sales@charades.eu

Producers: Casey Affleck, Whitaker Lader, Pamela Koffler, David Hinojosa, Margarethe Baillou

Screenplay: Ron Hansen, Jim Shepard

Based on the story by Jim Shepard

Cinematography: Andre Chemetoff

Editor: David Jancso

Production design: Jean Vincent Puzos

Music: Daniel Blumberg

Main cast: Katherine Waterston, Vanessa Kirby, Casey Affleck, Christopher Abbott

See original here:

'The World To Come': Review | Reviews - Screen International

Before or After the Singularity – PRESSENZA International News Agency

Scientific theories developed by independent, unconnected groups have come to the following conclusion: something will happen around the world that will change human history in a special way. While the predictions may not agree on exact dates, they all have one thing in common: it will happen this century, within a few decades.

The event, or the sum of events, has been named the SINGULARITY and has unique characteristics: development does not simply keep accelerating within the scope of its existing properties, but changes abruptly, or collapses and starts again.

These predictions could be made on the basis of curves that encompass the development of natural ecosystems as well as the various significant milestones in the universal history of mankind from the beginning of time.

Researchers like Alexander Panov, Ray Kurzweil and many others were able to bring these considerations together by combining fundamentally different variables such as energy sources, automation, artificial intelligence, modes of production and consumption, and so on.

However, the majority of theories portray science and technology as the creator of this future and not as a by-product of the evolution of our species.

We are of the opinion that the change arises from humanity's own awareness, in its human and spiritual dimension, and that as a consequence of this inner change, external changes also occur; these do not exclude technology, artificial intelligence and genetic engineering, but instead put them in the foreground and make them the vehicle and support for this change.

In summary, the SINGULARITY is for us a wonderful tool for theoretical analysis, allowing us to imagine the world toward which we are striving and also to guard against the dangers that such a change could bring.

In what other way could we seriously speak of this chaotic future? It's as if we were on a ship being drawn toward the enormous gravity of a black hole, a zone where time and space warp. Would we be able to know at what point in time, or at what distance, we would reach the central vortex of the black hole? We're not trying to do futurology, even less under these conditions.

But analyzing things from this point of view, with a warning in mind, is an excellent way of imagining the world that we may expect in the future.

Our area of interest focuses on human existence, and this is the basis of our analysis, which of course does not claim scientific accuracy. We may also later be able to question current science with its alleged thoroughness and infallibility.

We strive for the evolution of mankind, we want a revolution in their consciousness and values. We reject the reification of the human being and the apocalyptic view of the future. We do not deny that machines are useful if they help to relieve people of work. We speak out against any kind of concentration of power and demand the expansion of human freedom, which can neither be restricted nor replaced by soulless algorithms.

As you can see, the future can hold many nuances. Our goal is to exchange ideas with those who are interested in these topics.

What is your vision of the future?

Translation from German by Lulith V. by the Pressenza volunteer translation team. We are looking for volunteers!

Carlos Santos is a teacher and has been active in a humanist movement all his life. For the last decade he has devoted himself to audiovisual implementations as a director, producer and screenwriter of documentaries and feature films within his production company Esencia Humana Films. Email: escenariosfuturos21@gmail.com; Blog: escenariosfuturos.org

Read the original post:

Before or After the Singularity - PRESSENZA International News Agency

Neuralink’s Wildly Anticipated New Brain Implant: the Hype vs. the Science – Singularity Hub

Neuralink's wildly anticipated demo last Friday left me with more questions than answers. With a presentation teeming with promises and vision but scant on data, the event nevertheless lived up to its main goal as a memorable recruitment session to further the growth of the mysterious brain implant company.

Launched four years ago with the backing of Elon Musk, Neuralink has been working on futuristic neural interfaces that seamlessly listen in on the brain's electrical signals and, at the same time, write into the brain with electrical pulses. Yet even by Silicon Valley standards, the company has kept a tight seal on its progress, conducting all manufacturing, research, and animal trials in-house.

A vision of marrying biological brains to artificial ones is hardly unique to Neuralink. The past decade has seen an explosion in brain-machine interfaces: some implanted into the brain, some into peripheral nerves, and some that sit outside the skull like a helmet. The main idea behind all these contraptions is simple: the brain mostly operates on electrical signals. If we can tap into these enigmatic neural codes, the brain's internal language, we could potentially become the architects of our own minds.

Let people with paralysis walk again? Check and done. Control robotic limbs with their minds? Yup. Rewriting neural signals to battle depression? In humans right now. Recording the electrical activity behind simple memories and playing it back? Human trials ongoing. Linking up human minds into a BrainNet to collaborate on a Tetris-like game through the internet? Possible.

Given this backdrop, perhaps the most impressive part of the demonstration isn't lofty predictions of what brain-machine interfaces could potentially do one day. In some sense, we're already there. Rather, what stood out was the redesigned Link device itself.

In Neuralink's coming-out party last year, the company envisioned a wireless neural implant with a sleek ivory processing unit worn at the back of the ear. The electrodes of the implant itself are sewn into the brain with automated robotic surgery, relying on brain imaging techniques to avoid blood vessels and reduce brain bleeding.

The problem with that design, Musk said, is that it had multiple pieces and was complex. You still wouldn't look totally normal because there's a thing coming out of your ear.

The prototype at last weeks event came in a vastly different physical shell. About the size of a large coin, the device replaces a small chunk of your skull and sits flush with the surrounding skull matter. The electrodes, implanted inside the brain, connect with this topical device. When covered by hair, the implant is invisible.

Musk envisions an outpatient therapy where a robot can simultaneously remove a piece of the skull, sew the electrodes in, and replace the missing skull piece with the device. According to the team, the Link has similar physical properties and thickness as the skull, making the replacement a sort of copy-and-paste. Once inserted, the Link is then sealed to the skull with superglue.

"I could have a Neuralink right now and you wouldn't know it," quipped Musk.

For a device that small, the team packed an admirable array of features into it. The Link device has over 1,000 channels, which can be individually activated. This is on par with Neuropixels, the crème de la crème of neural probes with 960 recording channels that's currently used widely in research, including by the Allen Institute for Brain Science.

Compared to the Utah Array, a legendary implant system used for brain stimulation in humans with only 256 electrodes, the Link has an obvious edge in terms of pure electrode density.

What's perhaps most impressive, however, is its onboard processing for neural spikes, the electrical patterns generated by neurons when they fire. Electrical signals are fairly chaotic in the brain, and filtering spikes from noise, as well as separating trains of electrical activity into spikes, normally requires quite a bit of processing power. This is why in the lab, neural spikes are usually recorded offline and processed using computers, rather than with on-board electronics.

The problem gets even more complicated when considering wireless data transfer from the implanted device to an external smartphone. Without accurate and efficient compression of those neural data, the transfer could lag tremendously, drain battery life, or heat up the device itself, something you don't want happening to a device stuck inside your skull.

To get around these problems, the team has been working on algorithms that use the characteristic shapes of electrical patterns that look like spikes to efficiently identify individual neural firings. The data is processed on the chip inside the skull device. Recordings from each channel are filtered to root out obvious noise, and the spikes are then detected in real time. Because different types of neurons have their characteristic ways of spiking, that is, the shapes of their spikes are diverse, the chip can also be configured to detect the particular spikes you're looking for. This means that in theory the chip could be programmed to only capture the type of neuron activity you're interested in, for example, to look at inhibitory neurons in the cortex and how they control neural information processing.
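To make the idea of real-time spike detection more concrete, here is a minimal sketch of a generic threshold-crossing detector of the kind long used in extracellular electrophysiology. It is not Neuralink's algorithm; the sampling rate, the noise-based threshold rule, and the refractory period are illustrative assumptions.

```python
# A minimal, generic sketch of threshold-based spike detection on one channel.
# This is NOT Neuralink's algorithm; sampling rate, threshold factor, and the
# refractory period are illustrative assumptions.
import numpy as np

def detect_spikes(signal, fs, threshold_factor=4.5, refractory_ms=1.0):
    """Return sample indices where the signal crosses a noise-based threshold."""
    # Estimate the noise level robustly from the median absolute deviation,
    # a common convention in extracellular spike detection.
    noise_sigma = np.median(np.abs(signal)) / 0.6745
    threshold = threshold_factor * noise_sigma

    # Find negative-going threshold crossings (extracellular spikes are usually negative).
    below = signal < -threshold
    crossings = np.flatnonzero(below[1:] & ~below[:-1]) + 1

    # Enforce a refractory period so one spike isn't counted multiple times.
    min_gap = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -min_gap
    for idx in crossings:
        if idx - last >= min_gap:
            spikes.append(idx)
            last = idx
    return np.array(spikes), threshold

if __name__ == "__main__":
    fs = 20_000                              # assumed sampling rate in Hz
    rng = np.random.default_rng(0)
    trace = rng.normal(0, 10, fs)            # 1 second of synthetic noise (microvolts)
    trace[5_000] -= 120                      # inject one artificial "spike"
    spikes, thr = detect_spikes(trace, fs)
    print(f"threshold = {thr:.1f} uV, spikes at samples: {spikes}")
```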

These processed spike data are then sent out to smartphones or other external devices through Bluetooth to enable wireless monitoring. Being able to do this efficiently has been a stumbling block in wireless brain implants: raw neural recordings are too massive for efficient transfer, and automated spike detection and compression of that data is difficult, but a necessary step to allow neural interfaces to finally cut the wire.
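Some rough, back-of-the-envelope arithmetic shows why on-implant spike detection is what makes a Bluetooth link plausible at all. Only the channel count comes from the article; the sampling rate, bit depth, firing rate, and event encoding below are assumptions chosen to illustrate the orders of magnitude.

```python
# Back-of-the-envelope arithmetic for why on-implant spike detection matters.
# All numbers except the channel count are illustrative assumptions.
channels = 1024          # article: "over 1,000 channels"
sample_rate_hz = 20_000  # assumed extracellular sampling rate
bits_per_sample = 10     # assumed ADC resolution

raw_bits_per_s = channels * sample_rate_hz * bits_per_sample
print(f"raw stream: {raw_bits_per_s / 1e6:.0f} Mbit/s")        # ~205 Mbit/s

# If instead only detected spike events are sent (timestamp + channel + shape class):
spikes_per_channel_per_s = 20   # assumed average firing rate
bytes_per_event = 8             # assumed compact event encoding
event_bits_per_s = channels * spikes_per_channel_per_s * bytes_per_event * 8
print(f"spike events only: {event_bits_per_s / 1e3:.0f} kbit/s")  # ~1,300 kbit/s

# Classic Bluetooth tops out at a few Mbit/s, so only the event stream fits.
```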

Link has other impressive features. For one, the battery life lasts all day, and the device can be charged at night using inductive charging. From my subsequent conversations with the team, it seems like there will be alignment lights to help track when the charger is aligned with the device. What's more, the Link itself also has an internal temperature sensor to monitor for overheating, and will automatically disconnect if the temperature rises above a certain threshold, a very necessary safety measure so it doesn't overheat the surrounding skull tissue.

From the get-go of the demonstration, there was an undercurrent of tension between what's possible in neuroengineering versus what's needed to understand the brain.

Since its founding, Neuralink has always been fascinated with electrode numbers: boosting channel numbers on its devices and increasing the number of neurons that can be recorded at the same time.

At the event, Musk said that his goal is to increase the number of recorded neurons by a factor of 100, then 1,000, then 10,000.

But here's the thing: as neuroscience increasingly understands the neural code behind our thought processes, it's clear that more electrodes or more stimulated neurons isn't always better. Most neural circuits employ what's called sparse coding, in that only a handful of neurons, when stimulated in a way that mimics natural firing, can artificially trigger visual or olfactory sensations. With optogenetics, the technique of stimulating neurons with light, scientists now know that it's possible to incept memories by targeting just a few key neurons in a circuit. Sticking a ton of wires into the brain, which inevitably causes scarring, and zapping hundreds of thousands of neurons isn't necessarily going to help.

Unlike engineering, the solution to the brain isn't more channels or more implants. Rather, it's deciphering the neural code: knowing what to stimulate, in what order, to produce what behavior. It's perhaps telling that despite claims of neural stimulation, the only data shown at the event were neurons firing from a section of a mouse brain, imaged with two-photon microscopy, after zapping brain tissue with an electrode. What information, if any, is really being written into the brain? Without an idea of how neural circuits work and in what sequences, zapping the brain with electricity, no matter how cool the device itself is, is akin to banging on all the keys of a piano at once, rather than composing a beautiful melody.

Of course, the problem is far larger than Neuralink itself. It's perhaps the next frontier in solving the brain's mysteries. To their credit, the Neuralink team has looked at potential damage to the brain from electrode insertion. A main problem with current electrodes is that the brain will eventually activate non-neuronal cells to form an insulating sheath around the electrode, sealing it off from the neurons it needs to record from. According to some employees I talked to, so far, for at least two months, the scarring around electrodes is minimal, although in the long run there may be scar tissue buildup at the scalp. This may make electrode threads difficult to remove, something that still needs to be optimized.

However, two months is only a fraction of what Musk is proposing: a decade-long implant, with hardware that can be updated.

The team may also have an answer there. Rather than removing the entire implant, it could potentially be useful to leave the threads inside the brain and only remove the top cap, the Link device that contains the processing chip. The team is now trying the idea out, while exploring the possibility of a full-on removal and re-implant.

As a demonstration of feasibility, the team trotted out three adorable pigs: one without an implant, one with a Link, and one with the Link implanted and then removed. Gertrude, the pig currently with an implant in areas related to her snout, had her inner neural firings broadcast as a series of electrical crackles as she roamed around her pen, sticking her snout into a variety of food and hay and bumping at her handler.

Pigs came as a surprise. Most reporters, myself included, were expecting non-human primates. However, pigs seem like a good choice. For one, their skulls have a similar density and thickness to human ones. For another, they're smart cookies, meaning they can be trained to walk on a treadmill while the implant records from their motor cortex to predict the movement of each joint. It's feasible that the pigs could be trained on more complicated tests and behaviors to show that the implant is affecting their movements, preferences, or judgment.

For now, the team doesn't yet have publicly available data showing that targeted stimulation of the pigs' cortex (say, motor cortex) can drive their muscles into action. (Part of this, I heard, is because of the higher stimulation intensity required, which is still being fine-tuned.)

Although pitched as a prototype, it's clear that the Link remains experimental. The team is working closely with the FDA and was granted a breakthrough device designation in July, which could pave the way for a human trial for treating people with paraplegia and tetraplegia. Whether the trials will come by the end of 2020, as Musk promised last year, however, remains to be seen.

Unlike other brain-machine interface companies, which generally focus on brain disorders, it's clear that Musk envisions Link as something that can augment perfectly healthy humans. Given the need for surgical removal of part of your skull, it's hard to say if it's a convincing sell for the average person, even with Musk's star power and his vision of augmenting natural sight, memory playback, or a third artificial layer of the brain that joins us with AI. And because the team only showed a highly condensed view of the pigs' neural firings, rather than actual spike traces, it's difficult to accurately gauge how sensitive the electrodes actually are.

Finally, for now the electrodes can only record from the cortex, the outermost layer of the brain. This leaves deeper brain circuits and their functions, including memory, addiction, emotion, and many types of mental illnesses, off the table. While the team is confident that the electrodes can be extended in length to reach those deeper brain regions, it's work for the future.

Neuralink has a long way to go. All that said, having someone with Musk's impact championing a rapidly evolving neurotechnology that could help people is priceless. One of the lasting conversations I had after the broadcast was someone asking me what it's like to drill through skulls and see a living brain during surgery. I shrugged and said it's just bone and tissue. He replied wistfully that it would still be so cool to be able to see it, though.

It's easy to forget the wonder that neuroscience brings to people when you've been in it for years or decades. It's easy to roll my eyes at Neuralink's data and think, well, neuroscientists have been listening in on live neurons firing inside animals and even humans for over a decade. As much as I'm still skeptical about how Link compares to state-of-the-art neural probes developed in academia, I'm impressed by how much a relatively small leadership team has accomplished in just the past year. Neuralink is only getting started, and aiming high. To quote Musk: "There's a tremendous amount of work to be done to go from here to a device that is widely available and affordable and reliable."

Image Credit: Neuralink

View original post here:

Neuralink's Wildly Anticipated New Brain Implant: the Hype vs. the Science - Singularity Hub

New Zealand Is About to Test Long-Range Wireless Power Transmission – Singularity Hub

A famous image of inventor Nikola Tesla shows him casually sitting on a chair, legs crossed, taking notes, oblivious to the profusion of artificial lightning rending the air meters away. By then, Tesla and raw electricity were like an old married couple.

The experiments, conducted in Colorado, led to one of Tesla's most audacious proposals: to power the world without wires. He made headlines with plans for a world wireless system, and won funding from JP Morgan to build the first of several huge transmission towers.

But Tesla's wireless energy dream died soon after. JP Morgan canceled additional funding. The tower was demolished. Later scientists were skeptical that Tesla's plans (which were a bit vague) would have worked.

Meanwhile, Tesla's peer Guglielmo Marconi pursued a parallel dream with far greater success: the wireless transmission of information on radio waves. Today's world is, of course, awash in wireless information.

Now, if New Zealand startup Emrod has its way, Tesla's and Marconi's dreams may merge. The company is building a system to wirelessly beam power over long distances. Earlier this month, Emrod received funding from Powerco, New Zealand's second-biggest utility, to conduct a test of its system at a grid-connected commercial power station.

The company hopes to bring energy to communities far from the grid or transmit power from remote renewable sources, like offshore wind farms.

The system consists of four components: A power source, a transmitting antenna, several (or more) transmitting relays, and a rectenna.

First, the transmitting antenna transforms electricity into microwave energy (an electromagnetic wave just like Marconi's radio waves, only a bit more energetic) and focuses it into a cylindrical beam. The microwave beam is sent through a series of relays until it hits the rectenna, which converts it back into electricity.

With safety in mind, Emrod is using energy in the industrial, scientific, and medical (ISM) band, and keeping the power density low. "It's not just how much power you deliver, it's how much power you deliver per square meter," Emrod founder Greg Kushnir told New Atlas. "The levels of density we're using are relatively low. At the moment, it's about the equivalent of standing outside at noon in the sun, about 1 kW per square meter."

But if it works as intended, the beam won't ever contact anything but empty air. The system uses a net of lasers surrounding the beam to detect obstructions, like a bird or person, and it automatically shuts off transmission until the obstruction has moved on.

The technology, power transmission via microwave energy, has been around for decades. But to make it commercially viable, you have to minimize energy losses. Kushnir said metamaterials developed in recent years are the difference-maker.

The company uses metamaterials to more efficiently convert the microwave beam back into electricity. The relays, which are like lenses extending the beam beyond line-of-sight by refocusing it, are nearly lossless. According to Kushnir, most of the losses happen at the other end, where electricity is converted into microwave energy. Overall, he said the system's efficiency is around 70%, which falls short of copper wires but is economically viable in some areas. And it's those areas the company is aiming for.
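As a rough sketch of how stage-by-stage losses compound into the roughly 70% end-to-end figure Kushnir cites, consider the following back-of-the-envelope calculation. The per-stage efficiencies, relay count, and input power are assumptions made up for illustration; none of these numbers come from Emrod.

```python
# A rough sketch of the end-to-end power budget the article describes, using
# assumed per-stage efficiencies that happen to multiply out to roughly the ~70%
# figure Kushnir quotes. None of the stage numbers come from Emrod.
transmit_kw = 10.0      # hypothetical power fed into the transmitting antenna
eta_tx = 0.80           # electricity -> microwave conversion (assumed, the lossiest step)
eta_relay = 0.99        # per relay "lens" (article: "nearly lossless")
n_relays = 3            # assumed number of relays on the route
eta_rectenna = 0.90     # microwave -> electricity at the rectenna (assumed)

eta_total = eta_tx * eta_relay ** n_relays * eta_rectenna
delivered_kw = transmit_kw * eta_total
print(f"end-to-end efficiency ~{eta_total:.0%}, delivered ~{delivered_kw:.1f} kW")
```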

"We don't foresee in the near future a situation where we could say all copper wire can be replaced by wireless," Kushnir said. "Inherently, it'll have lower efficiency levels. It's not about replacing the whole infrastructure but augmenting it in places where it makes sense."

The company's prototype can currently send a few watts of energy over a distance of about 130 feet. For the Powerco project, they're working on a larger version capable of beaming a few kilowatts. The plan is to deliver the new system to Powerco in October, test it in the lab for a few months, and then, if all goes to plan, try it out in the field. The tests will aim to validate how much power the system can transmit over what distance.

Though the current model is modest, Kushnir says it should scale.

"We can use the exact same technology to transmit 100 times more power over much longer distances," he said in a press release. "Wireless systems using Emrod technology can transmit any amount of power current wired solutions transmit."

Ray Simpkin, Emrod's chief scientific officer, told IEEE Spectrum that the company is also looking into whether it could beam power across 30 kilometers of water from the New Zealand mainland to Stewart Island. He said the system could cost as little as 60 percent of an undersea cable.

Ultimately, the technology may help power rural areas or transmit energy from offshore wind farms, both cases where it's expensive to build physical infrastructure to tap or feed the grid. In other cases, say in national parks, a mode of wireless transmission could have less impact on the environment and require less maintenance. Or it might be used to provide power after natural disasters in which physical infrastructure has been damaged.

It's not Tesla's world wireless system, but it just might make long-distance wireless power a commercial reality in the not-too-distant future.

Image source: Killian Eon / Pexels

Continue reading here:

New Zealand Is About to Test Long-Range Wireless Power Transmission - Singularity Hub

Baby Boomers 7 Possible Legacies, Guided by W.B. Yeats, Joan Didion and the Book of Revelations – Forbes

The center wasn't holding. It was a country of bankruptcy notices and public auction announcements and common-place reports of casual killings and misplaced children and abandoned homes and vandals who misspelled the four-letter words they scrawled.

-Slouching Towards Bethlehem, Joan Didion (1964)

Cycles of History: Post World War I, the 1960s and now

Joan Didion wrote her essay Slouching Towards Bethlehem in 1964 from Haight-Ashbury, ground zero for the cultural shockwave unleashed when the oldest Baby Boomers became eligible for the draft, found their parents' lifestyle stifling and discovered the pleasures, and pains, of drugs, sex and rock and roll.

Writer Joan Didion and novelist John Gregory Dunne sitting in the library of their Malibu, California, home. (Photo by Henry Clarke/Conde Nast via Getty Images)

Her reference to "the center wasn't holding" directly connects the 1960s in America to the instability after the end of World War I in 1918, which inspired W. B. Yeats to write his famous poem, The Second Coming:

Turning and turning in the widening gyre.

The falcon cannot hear the falconer;

Things fall apart; the centre cannot hold;

Mere anarchy is loosed upon the world,

The blood-dimmed tide is loosed, and everywhere

The ceremony of innocence is drowned;

The best lack all conviction, while the worst

Are full of passionate intensity.

"Widening gyre" refers to his theory of history as a series of cycles of order and disorder from the birth of Christ to his Second Coming. If Irish mysticism isn't your thing, think of Mark Twain's (maybe) words, "History doesn't repeat itself, it rhymes." Or Kondratiev's long-wave theory of cycles of 40-60 years.

The idea of history as a "gyre" is drawn from Irish mysticism and Hinduism.

In 1918, the war to end all wars led instead to a period of extended political and economic instability that laid the foundation for Yeats' torment. And the violent Easter Rising of 1916, leading to the independence of the Irish Republic in 1919, brought it home. The Pandemic of 1918 infected 500 million people and killed 50 million, and brought Yeats' young wife near death.

We Baby Boomers Pass Through Yeats' Widening Gyre

From Joan Didion's hippies to our ongoing pandemic, it feels like we Baby Boomers are moving through Yeats' widening gyre. Perhaps it is as simple as when many of our parents were born (mine 1911, 1914); when we were becoming adults in the 1960s and 1970s; and now we see the Great White Light in the lessening distance about 50 years later.

Or maybe we Baby Boomers are finally coming to terms with our lost innocence. Watch the movie documentary, Woodstock. What you will see and hear...almost smell, touch and taste...is beautiful, dirty, dizzy late adolescents who are smiling, high, frothy with the ooze of young sensuality, misinterpreted as some kind of new utopian societal order. We believed we were going to bring peace and love to the world. Great rock and roll though.

At our confident midlife, we thought the world was yielding to our generational intelligence, insight and will. Our working life witnessed an unprecedented period of economic expansion, riding the powers of globalization and innovation in technology and health care. If the alchemy of technical smarts and money could achieve something, mostly we achieved it.

Then, what initially seemed like an era of the spreading of our values of democracy and human rights shifted towards autocracy, the increasing economic hegemony of China, and America's turn towards President Trump's populism and America First. We can take credit, though, for major progress in women's and gay rights.

Assessing Our 7 Legacies: Past, Present, Future

We Baby Boomers are beginning to reflect on what our legacies might be. It's too early to know, of course. But it's not too early to speculate, to outline the first draft of possibilities. We will take our guidance from Yeats and Didion whenever we can.

War: The Vietnam war wasn't our idea. We can take credit for ending it and the military draft. America didn't initiate significant aggressive military action for 25 years after. But, after Al Qaeda's attacks on U.S. soil on 9/11/2001, we decided not only to retaliate against Afghanistan but to forget the sins of colonialism and imperialism and engage in nation-building in the graveyard of empires. Our unilateral attack against Iraq only proved that we had forgotten the lessons of Vietnam. Maybe if we had had a military draft in 2003, we might have hesitated long enough to realize that we were invigorating a new power with WMD (weapons of mass destruction) in Iran by destroying its worst enemy.

JFK and Mrs. Kennedy, in a pink outfit prepared for the motorcade into the city from the airport, Nov. 22. After a few speaking stops, the President was assassinated in the same car.

Violence: The assassinations of John F. Kennedy (November 22, 1963), Malcolm X (February 21, 1965), Martin Luther King (April 4, 1968), and Robert F. Kennedy (June 6, 1968) left us stunned and wounded and with a leadership vacuum we can sense even today.

The blood-dimmed tide is loosed, and everywhere

The ceremony of innocence is drowned

The violence of the 1960s also deepened a cultural rift between those Americans favoring various forms of gun control and those who hold that the Second Amendment provides expansive rights for individual citizens to own and carry guns. The National Rifle Association's turn to lobbying for gun rights in the late 1970s has thwarted many Congressional attempts at stronger gun control. Despite the Second Amendment's ambiguity in wording and intent ("A well-regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed"), in 2008 the Supreme Court affirmed an individual's right to guns.

Today, we remain the most heavily armed citizenry in the world, by a huge margin. We account for 393 million (about 46 percent) of the worldwide total of civilian-held firearms, on average over 1 gun per person, about 10 times the rate of most other countries.

Racism: We Baby Boomers inherited a long, mean history of racism, often enabled by violence, which has been part of America from our very beginning. Born in Revolution, we expanded with genocidal fervor against Native Americans who occupied land we coveted, and built the economy of our largely agricultural South with African slaves.

Even after we fought the Civil War, certainly because of slavery whatever other causes one might posit, racial segregation continued, most prominently due to Jim Crow laws. Until 1965, Jim Crow laws notoriously constrained any political and economic gains by Black people, enabled by the Supreme-Court-mandated "separate but equal" doctrine of 1896.

We Baby Boomers were born and growing up just as the Civil Rights Movement emerged in the late 1950s, enabled by multiple rulings by the Warren Court finding segregation unconstitutional. Some of us may remember dramatic events ranging from Emmett Till's murder and Rosa Parks and the Montgomery bus boycott in the mid 1950s to the Freedom Riders, voter registration drives and the March on Washington in the first half of the 1960s.

The Civil Rights Act became law in 1964, the Voting Rights Act in 1965. But the progress made was not enough to stop the Long, Hot Summer of race riots in over 100 cities throughout our country in 1967. The federally sponsored Kerner Commission later determined the causes of the riots included police brutality, white racism, and socioeconomic discrimination in multiple forms.

But as our gyre winds around again, the Supreme Court in 2013 struck down an essential clause of the Voting Rights Act because it "bears no logical relationship to the present day." And we live today in the sad wake of the police shooting of Jacob Blake in Kenosha, Wisconsin and the killings of George Floyd, Trayvon Martin and so many other Black men.

Participants in the March on Washington conclude their march from the Lincoln Memorial to the Martin Luther King Jr. Memorial August 28, 2020 in Washington, DC. (Photo by Drew Angerer/Getty Images)

Will Black Lives Matter be our platform to end our country's 400-year history of racism against Blacks?

Wealth: Despite great prosperity, we Baby Boomers have presided over dramatic growth in the socioeconomic gap that is now threatening our American social contract. Our wealthiest 0.1% take in almost 200 times the income of the bottom 90%, reflecting levels unseen since the Gilded Age before World War I. S&P 500 firm CEOs were paid 287 times as much as average U.S. workers in 2018, compared to 42 times as much as the average U.S. worker in 1980.

Through our new monetary policy of flooding the world with cash at zero interest rates and a fiscal policy spending unprecedented numbers of trillions of dollars, we are now spending our children's and grandchildren's money. But it's okay because, after all, The Giving Pledge still lets billionaires leave their kids 50%!

Drugs: We Baby Boomers have had a drug to enhance every period of our lives. Sex? Birth control pills, approved by the FDA in 1960. When we grew anxious and depressed with our adult responsibilities, we had Prozac and its progeny, anti-depressants too numerous to list. Our sex life dragging in midlife? ED drugs, starting with Viagra in 1998.

To get high? Pot and LSD for our college years; cocaine when we had money and got busy in our 20s and 30s. Remember how fun college was? Let's legalize pot as medicine as we age.

And, like our forebears in the gyre, from the Roaring Twenties and the Lost Generation after World War I (read Hemingway's A Moveable Feast, its life punctuated by alcohol), we loved to party, and alcohol for most remains our chosen drug.

Tragically, many of our poor, some returning soldiers and now even members of our working class have sought heroin in the 1960s and later, crack cocaine in the 1970s and 1980s, and now pain-killing legal opioids leading back to illegal heroin and fentanyl. We passed "Just Say No" years ago.

Climate Change:

Like Scrooge in Dickens' A Christmas Carol, we have been given a view of our future. In our case, it's looking warm. We know, and have known for a long time, that a global temperature change of more than 1.5°C would be catastrophic for human civilization. Its effects will range from the destabilization of agricultural systems to drastic sea-level rise, to tragic losses in biodiversity. In 2019, the Intergovernmental Panel on Climate Change (IPCC) reported that limiting global warming to 1.5°C would require "rapid, far-reaching and unprecedented changes in all aspects of society." Global net human-caused emissions of carbon dioxide (CO2) would need to fall by about 45 percent from 2010 levels by 2030, reaching net zero around 2050. Statements like these are fairly banal to read these days, but even so, it has become abundantly clear that a world without emissions is beyond the reach of the Boomer imagination.

A gas flare from the Shell Chemical LP petroleum refinery illuminates the sky on August 21, 2019 in Norco, Louisiana. (Photo by Drew Angerer/Getty Images)

Artificial Intelligence:

Artificial intelligence is, in simplest terms, computer technology that emulates human intelligence. What we have now is called specialized artificial intelligence, or machine learning. We have not yet achieved the much more human-like AI that would be needed to reach the Singularity, defined as when technology develops beyond human control; or when human and machine are fully integrated; or when artificial intelligence surpasses human intelligence.

We need to return now to Joan Didion's essay, Slouching Towards Bethlehem, whose title leads us to the second stanza of Yeats' The Second Coming, which we will need to help us decide where we stand on AI and the Singularity.

Surely some revelation is at hand;

Surely the Second Coming is at hand.

The Second Coming! Hardly are those words out

When a vast image out of Spiritus Mundi

Troubles my sight: somewhere in sands of the desert

A shape with lion body and the head of a man,

A gaze blank and pitiless as the sun,

Is moving its slow thighs, while all about it

Reel shadows of the indignant desert birds.

The darkness drops again; but now I know

That twenty centuries of stony sleep

Were vexed to nightmare by a rocking cradle,

And what rough beast, its hour come round at last,

Slouches towards Bethlehem to be born?

The Second Coming, by W. B. Yeats (1919)

Yeats' rough beast refers to the Anti-Christ in the last book of the New Testament, Revelations, who is set for the final showdown and the Second Coming of Christ. You can think of the rough beast as Evil or your worst nightmare come to life right before your eyes.

Is the Singularity the work of the Anti-Christ?

Ray Kurzweil speaks onstage at 'Ray and Amy Kurzweil on Collaboration and the Future' during the 2017 SXSW Conference and Festivals (Photo by Katrina Barber/Getty Images for SXSW)

Ray Kurzweil, one of the Singularity's leading theorists, tells us we shouldn't fear its arrival around 2045. The late Stephen Hawking thought AI posed a fundamental risk to the existence of human civilization. Elon Musk worries that AI might overtake human intelligence by 2025.

Who do you think is the Anti-Christ?

The Antichrist, miniature from the Latin manuscript III 177 folio 44, 12th Century. (Photo by DeAgostini/Getty Images)

Wall Street? Well, it's our gyre's Whore of Babylon (Revelation 17:1-8), whose infinite appetite for cash (its share of profits has risen from 10% to close to half in the United States while we Boomers watched) will certainly lead it to charge tolls on the road to the Singularity.

Biomedicine? A Chinese scientist cloned a human embryo in recent years. Won't the Singularity need a post-modern Dr. Frankenstein to plant the circuit in our brains?

Perhaps the FAANGs (Facebook, Amazon, Apple, Netflix and Alphabet), well positioned with technology, expertise, cash, self-interest and brands, although their Chinese competitors are strong and determined to win. The social media segment gives loud voice to the worst of us but seems quite blind to that bare truth. Facebook founder and CEO Mark Zuckerberg views concerns about AI as irresponsible. People who don't trust Facebook, he says, don't understand it.

In the moments just before the Singularity, a Wall Street mergers banker (can't miss the biggest deal ever!) approaches you with a digital device and asks, "Please press the Accept button. You are only agreeing to give us your mind. You get to keep your soul."

Press Disagree, Boomers. If we save humanity twice, we will be the Greatest Generation!

Read the rest here:

Baby Boomers 7 Possible Legacies, Guided by W.B. Yeats, Joan Didion and the Book of Revelations - Forbes

Could Quantum Computing Progress Be Halted by Background Radiation? – Singularity Hub

Doing calculations with a quantum computer is a race against time, thanks to the fragility of the quantum states at their heart. And new research suggests we may soon hit a wall in how long we can hold them together thanks to interference from natural background radiation.

While quantum computing could one day enable us to carry out calculations beyond even the most powerful supercomputer imaginable, we're still a long way from that point. And a big reason for that is a phenomenon known as decoherence.

The superpowers of quantum computers rely on holding the qubits (quantum bits) that make them up in exotic quantum states like superposition and entanglement. Decoherence is the process by which interference from the environment causes them to gradually lose their quantum behavior and any information that was encoded in them.

It can be caused by heat, vibrations, magnetic fluctuations, or any host of environmental factors that are hard to control. Currently we can keep superconducting qubits (the technology favored by the field's leaders like Google and IBM) stable for up to 200 microseconds in the best devices, which is still far too short to do any truly meaningful computations.

But new research from scientists at Massachusetts Institute of Technology (MIT) and Pacific Northwest National Laboratory (PNNL), published last week in Nature, suggests we may struggle to get much further. They found that background radiation from cosmic rays and more prosaic sources like trace elements in concrete walls is enough to put a hard four-millisecond limit on the coherence time of superconducting qubits.

"These decoherence mechanisms are like an onion, and we've been peeling back the layers for the past 20 years, but there's another layer that, left unabated, is going to limit us in a couple years, which is environmental radiation," William Oliver from MIT said in a press release. "This is an exciting result, because it motivates us to think of other ways to design qubits to get around this problem."

Superconducting qubits rely on pairs of electrons flowing through a resistance-free circuit. But radiation can knock these pairs out of alignment, causing them to split apart, which is what eventually results in the qubit decohering.

To determine how significant of an impact background levels of radiation could have on qubits, the researchers first tried to work out the relationship between coherence times and radiation levels. They exposed qubits to irradiated copper whose emissions dropped over time in a predictable way, which showed them that coherence times rose as radiation levels fell up to a maximum of four milliseconds, after which background effects kicked in.
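One way to see why the four-millisecond figure behaves like a ceiling: independent decoherence channels combine as rates, so the total coherence time can never exceed the radiation-limited one no matter how much the other sources improve. The sketch below assumes that simple rate-addition model; only the 4 ms value comes from the reported result, while the "other sources" numbers are hypothetical.

```python
# A sketch of why a radiation-limited coherence time acts as a ceiling: independent
# decoherence channels add as rates, 1/T_total = 1/T_radiation + 1/T_other.
# The 4 ms figure is from the paper; the "other sources" values are assumed.
def total_coherence_ms(t_other_ms, t_radiation_ms=4.0):
    return 1.0 / (1.0 / t_other_ms + 1.0 / t_radiation_ms)

# From today's ~0.2 ms devices up to hypothetical much better future qubits:
for t_other in (0.2, 1.0, 4.0, 100.0):
    print(f"other sources: {t_other:6.1f} ms -> total: {total_coherence_ms(t_other):.3f} ms")
```

Even with every other decoherence source pushed out to 100 ms, the combined coherence time in this simple model stays below 4 ms, which is the sense in which unmitigated background radiation becomes the dominant wall.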

To check if this coherence time was really caused by the natural radiation, they built a giant shield out of lead brick that could block background radiation to see what happened when the qubits were isolated. The experiments clearly showed that blocking the background emissions could boost coherence times further.

At the minute, a host of other problems like material impurities and electronic disturbances cause qubits to decohere before these effects kick in, but given the rate at which the technology has been improving, we may hit this new wall in just a few years.

"Without mitigation, radiation will limit the coherence time of superconducting qubits to a few milliseconds, which is insufficient for practical quantum computing," Brent VanDevender from PNNL said in a press release.

Potential solutions to the problem include building radiation shielding around quantum computers or locating them underground, where cosmic rays aren't able to penetrate so easily. But if you need a few tons of lead or a large cavern in order to install a quantum computer, that's going to make it considerably harder to roll them out widely.

It's important to remember, though, that this problem has only been observed in superconducting qubits so far. In July, researchers showed they could get a spin-orbit qubit implemented in silicon to last for about 10 milliseconds, while trapped ion qubits can stay stable for as long as 10 minutes. And MIT's Oliver says there's still plenty of room for building more robust superconducting qubits.

"We can think about designing qubits in a way that makes them rad-hard," he said. "So it's definitely not game over, it's just the next layer of the onion we need to address."

Image Credit: Shutterstock

Read more:

Could Quantum Computing Progress Be Halted by Background Radiation? - Singularity Hub

Diversity As A CEO Priority During This Singular Time In Our History – Forbes

As I speak with CEOs every day, so many are truly pained and deeply want racial harmony. In considering the state of diversity today, I thought it would make sense to talk with one leader who has a history of building true movements: Edie Fraser, CEO, Women Business Collaborative. Edie has already built Million Women Mentors (MWM) with 2.5 million commitments.

Robert Reiss: Talk about diversity today.

Edie Fraser: Robert, diversity is a number one issue for the private sector, right up there with return on investments and CEO leadership. This moment is singular and provides an opportunity to create sustainable change. The time is NOW! Platitudes are no longer acceptable. Talent is key and so, too, are investments in diverse suppliers and our communities. I was engaged in the civil rights movement early and have spent my career working to accelerate the position of women and minorities in business. It has been only 17 months since we founded Women Business Collaborative (WBC) together as a non-profit, focusing on increasing parity and power, and with it 25% advancement of diversity changes in every action initiative taken. The private sector's awareness of the disparities in corporate America has only heightened in 2020. It is business that is showing courage to take action, and the private sector must ACT NOW!

WBC Edie Fraser team

Focus on the importance of Diversity, Equity & Inclusion (DE&I) to our economy and our national wellbeing. COVID-19 and the recession, combined with tensions over continued racism in America, have created an unprecedented economic and human crisis and highlighted inequities, further fueling unrest. In corporate America, our CEOs, Chief Diversity Officers (CDOs) and CHROs are crucial to successfully navigating the current social challenges along with the others in the executive suite. Bottom up and top down, all must work together to change what has been the status quo. We want results.

Reiss: You mentioned the singularity of this moment in time. What strikes you as different from preceding periods of political and social unrest?

Fraser: 2020 is different because we are moving beyond statements of support into real actions, investments, and accountability. Platitudes are no longer acceptable. This moment is for change. Consumers and employees are demanding companies look beyond shareholder interests and prioritize people. Financial results will be better because they do. The pandemic, recession and racism aren't just impacting a small number of people; impacts are widespread, and great leaders see that real investments in change are the next essential step.

This summer BlackRock CEO Larry Fink committed to insisting on board diversity and senior talent and investments; Oprah replaced her likeness on the cover of Oprah Magazine for the first time. Different examples in different industries with various needs and approaches, but all are examples of real, actionable leadership that aligns with their brand.

As business leaders, we know that if something is critical to our success, we measure it. DE&I is no different. Consumers have shown they are looking for real action such as more diversified hiring and promotion practices, diversifying suppliers, and insisting on internal reviews of cultures holding back people of color. The companies that prioritize all aspects of diversity will come out ahead. McKinsey & Company's 2019 Diversity Matters: How Inclusion Wins report found that gender-diverse teams are 25 percent more likely to financially outperform their competitors and ethnically diverse teams are 36 percent more likely.

Reiss: What does CEO activism look like?

Fraser: As protests erupted, most CEOs took that time to listen and reflect. They heard about long-term racism. To repeat: it is imperative we move from statements of support and listening to action. CEO activism starts with some key tenets: a clear connection between the statement, the brand, and actions. Activism includes transparency and accountability. CEOs and their teams set bold goals and lead by example.

Reiss: Which CEOs do you see as leaders for DE&I efforts right now?

Fraser: Many are committing to action. Satya Nadella at Microsoft continually comes out on top of DE&I rankings. As one of the fastest-growing sectors, technology companies are under pressure to create a more substantial presence of women and people of color. Since 2014, Microsoft has been among the 3 percent of Fortune 500 companies to report full workforce demographic data and supplier data. CEOs who are making commitments then hold themselves accountable by reporting the metrics. Robert, WBC, with you and Becky Shambaugh, just interviewed Kaiser Permanente CEO Greg Adams, and he shared, "Diversity is holistic in talent and community," and now Kaiser has made a commitment of $1.7 billion to Black and Hispanic suppliers.

Diversity is a most pressing issue right now. Companies are tracking and reporting employee satisfaction. DiversityInc released its annual Top 50 Companies for Diversity list in May. At the top was Marriott International and CEO Arne Sorenson, a leader for diversity in the hospitality industry. Eli Lilly and Company was number three on the list, with CEO Dave Ricks leading the way for inclusion in healthcare. Mastercard's Ajay Banga made the number six spot, leading the financial service industry. (5/8/2020) Goldman Sachs' David Solomon, another leader in the financial service industry, recently released a list of aspirational goals emphasizing women and Black professionals in V.P. roles (8/6/2020).

The latest appointment of Linda Rendle as CEO at Clorox takes the number of women leading Fortune 500 companies to 38 and growing. Yet we lack women of color both at the CEO level and in boardrooms. The momentum for DE&I across the board makes us focused. CEOs and CDOs need to work together with their teams to keep bringing people to the table, move pipeline talent upward, and insist that those who historically have not had the opportunity to be there get the new seats. Candidates have the skills.

Reiss: You mentioned the growing number of women leading Fortune 500 firms. As you are so dedicated to accelerating the position of women in business, how do you believe we continue that progress?

Fraser: We cannot let the conversation off the table. We created WBC because there were many women business organizations working in silos. To change the numbers, we needed collaboration. In many instances, organizations were competing. WBC is thrilled to have 41+ women business organizations today aligned to drive change: more women and women of color as CEOs, on boards and at capital firms, and more entrepreneurs succeeding with capital support.

We are seeing progress: in the past months, more women were appointed to corporate boards, 130 Black women are running for office, and we had a record number of women leading Fortune 500 firms, and still there is so much work to be done. The Boardlist recently announced that it is extending its services to men of color (8/11/20). That type of expansion is so imperative. We must see the progress, celebrate it, and then ask ourselves how we can take what we know works and use it to help more people.

This slow pace of change has prompted calls to action. The current social unrest in our country calls for proactive, vigilant leadership working toward achieving results. It is THE TIME for accountability and impact. Together we will work tirelessly to inspire companies to share clear results. We must succeed.

To listen to CEO Interviews go to The CEO Forum Group

See the original post here:

Diversity As A CEO Priority During This Singular Time In Our History - Forbes

If you flew your spaceship through a wormhole, could you make it out alive? Maybe… – SYFY WIRE

Can you already hear Morgan Freeman's sonorous voice, as if this were another episode of Through the Wormhole?

Astrophysicists have figured out a way to traverse a (hypothetical) wormhole that defies the usual thinking that wormholes (if they exist) would either take longer to get through than the rest of space or be microscopic. These wormholes just have to warp the rules of physics, which is totally fine since they would exist in the realm of quantum physics. Freaky things could happen when you go quantum. If wormholes do exist, some of them might be large enough for a spacecraft to not only fit through, but get from this part of the universe to wherever else in the universe in one piece.

"Larger wormholes are possible with a special type of dark sector, a type of matter that interacts only gravitationally with our own matter. The usual dark matter is an example. However, the one we assumed involves a dark sector that consists of an extra-dimensional geometry," Princeton astrophysicist Juan Maldacena and grad student Alexey Milekhin told SYFY WIRE. They recently performed a new study that reads like a scientific dissection of what exactly happened to John Crichton's spaceship when it zoomed through a wormhole in Farscape.

"This type of larger wormhole isbased on therealization that a five-dimensional spacetime could be describing physics at lowerenergies than the ones we usually explore, but that it would have escaped detection because it couples with our matter only through gravity," Maldacena and Milekhinsaid."In fact, its physics issimilar to adding many strongly interacting massless fields to the known physics,and for this reason it can give rise to the required negative energy."

While the existence of wormholes has never been proven, you could defend theories that they could exist deep in the quantum realm. The problem is, even if they do exist, they are thought to be infinitesimal. Hypothetical wormholes would also take so long to get across that youd basically be a space fossil by the time you got to the other end. Maldacena and Milekhin have found a theoretical way for a wormhole thatcould get you across the universe in seconds and manage not to crush your spacecraft. At least it would seem like seconds to you. To everyone else on Earth, it could be ten thousand years. Scary thought.

"Usually whenpeople discuss wormholes, they have in mind 'short'wormholes: the ones forwhich the travel time would be almost instantaneous even for a distant observer.We think that such wormholes are inconsistent with the basic principles of relativity," the scientists said. "The ones we considered are 'long': for a distant observed the path alongnormal space-time is shorter than through the wormhole.There is a time-dilation factor because the extreme gravity makes travel time very short for the traveller. For an outsider, the time it takes is much longer, so we have consistency with the principles of relativity, which forbid travel faster than the speed of light."

For traversable wormholes to exist, the vacuum of space would have to be cold and flat to actually allow for what they theorize. Space is already cold. Just pretend that it's flat for the sake of imagining Maldacena and Milekhin's brainchild of a wormhole.

"These wormholes are big, the gravitational forces will be rather small. So, if they were in empty flat space, they would not be hazardous. We chose their size to be big enough so that they would be safe from large gravitational forces," they said.

Negative energy would also have to exist in a traversable wormhole. Classical physics forbids such a thing from being a reality. In quantum physics, the concept of this exotic energy is explained by Stephen Hawking as the absence of energy when two pieces of matter are close together as opposed to far apart, because energy needs to be burned to separate them as gravitational force struggles to pull them back together. Fermions, which include subatomic particles such as electrons, protons, and neutrons (with the exception that they would need to be massless), would enter one end and travel in circles. They would come out exactly where they went in, which suggests that the modification of energy in the vacuum can make it negative.

"Early theorized wormholes were not traversable; an observer going through a wormhole encounters a singularity before reaching the other side, which is related to the fact that positive energy tends to attract matter and light," the scientists said. "This is why spacetime shrinks at the singularity of a black hole. Negative energy prevents this. The main problem is that the particular type of negative energy that is needed is not possible in classical physics, and in quantum physics it is only possible in some limited amounts and for special circumstances."

Say you make it to a gaping wormhole ready to take you...nobody knows where. What would it feel like to travel through it? Probably not unlike Space Mountain, if you ask Maldacena and Milekhin. In their study, they described these wormholes as "the ultimate roller coaster."

The only thing a spaceship pilot would need to do, unlike Farscape's Crichton, who totally lost control, is get the ship in sync with the tidal forces of the wormhole so it is in the right position to take off. These are the forces that push and pull an object away from another object depending on the difference in the objects' strength of gravity, and that gravity would power the spaceship through. This is why it would basically end up flying itself. But there are still obstacles.

"The problem is that every object which enters the wormhole will be accelerated to very high energies," the scientists said. "It means that a wormhole must be kept extremely clean to be safe for human travel. In particular, even the pervasive cosmic microwave radiation, which has very low energy, would be boosted to high energies and become dangerous for the wormhole traveler."

So maybe this will never happen. Wormholes may never actually be proven to exist. Even if they don't, it's wild to think about the way quantum physics could even allow for a wormhole that you could coast right through.

Visit link:

If you flew your spaceship through a wormhole, could you make it out alive? Maybe... - SYFY WIRE

Managing Complexity in the New Era of HPC – insideHPC

By Bill Wagner, CEO Bright Computing

Until recently, High Performance Computing (HPC) was a fairly mature and predictable area of information technology. It was characterized by a narrow category of applications used by a largely fixed set of industries running on predominantly Intel-based on-premise systems. But over the last few years, all of that has begun to change. New technologies, cloud, edge, and a broadening set of commercial use cases in the areas of data analytics and machine learning have set in motion a tsunami of change for HPC. This is no longer a tool for rocket scientists and the research elite. HPC is quickly becoming a strategic necessity for all industries that want to gain a competitive advantage in their markets, or at least keep pace with their industry peers in order to survive.

While HPC has given commercial users a powerful set of new tools that drive innovation, it has also introduced a variety of challenges to those organizations, including increased infrastructure costs, complexities associated with new technologies, and a lack of HPC know-how to take advantage of it. The challenges introduced by this new era of HPC have given rise to new implications for how companies execute their HPC strategies, with most embarking on a steep and risky learning curve to the detriment of their IT staff and budget.

On the technology side, options have never been more prevalent. With a wide range of choices in hardware, software, and even consumption models, organizations are now faced with an array of choices. New processing elements (Intel, AMD, ARM, GPUs, FPGAs, IPUs), container and orchestration technologies (Kubernetes, Docker, and Singularity), and cloud options (hybrid and multi-cloud) have disrupted the HPC industry, challenging organizations to pick infrastructure solutions (both hardware and software) that will be able to tackle their diversifying workloads while seamlessly working together.

In the past, HPC clusters were built with a fairly static mindset. The notion of combining x86 and ARM architectures in the same cluster was not even a consideration. Furthermore, extending your HPC cluster to the public cloud for additional capacity was something you planned to do down the road. Hosting containerized machine learning applications and data analytics applications on your HPC cluster harmoniously alongside traditional MPI-based modeling and simulation applications was on the wishlist. Offering end users bare metal, VMs, and containers on the same cluster was unheard of, and deploying edge compute as an integral part of your core HPC infrastructure fell under the category of "maybe someday." However, in today's new world of HPC, IT managers and infrastructure architects are feeling the pressure to make all these things happen right now. The availability of new, highly specialized hardware and software is both enticing and intimidating. If organizations don't take advantage of all that HPC offers, someone else will, and losing the race for competitive advantage can deal a devastating blow to businesses vying for market share.

In the days of traditional HPC, you built a static cluster and focused your energy on keeping it up and running for its lifespan. As such, research institutions and commercial HPC practitioners alike were able to get by with building custom scripts to integrate a collection of different open-source tools to manage their clusters. But integrating tools for server provisioning, monitoring, alerts, and change management is difficult, labor-intensive, and an ongoing maintenance burden, though possible nonetheless for organizations with the human resources and skill to do so. In the emerging new era of HPC, clusters are far from static and far more complex as a result. The need to leverage new types of processors and accelerators and servers from different manufacturers, to integrate with the cloud, to extend to the edge, to host machine learning and data analytics applications, and to offer end users VMs and containers alongside bare metal servers raises the bar exponentially for organizations that contemplate a do-it-yourself approach to building a cluster management solution.

Now more than ever before, there is an increasing need for a professional, supported cluster management tool that spans hardware, software, and consumption models for the new era in HPC. Bright Cluster Manager is a perfect example of a commercial tool with the features and built-in know-how to build and manage heterogeneous high-performance Linux clusters for HPC, machine learning, and analytics with ease. Bright Cluster Manager automatically builds your cluster from bare metal, setting up networking, user directories, security, DNS, and more, and sits across an organization's HPC resources, whether on-premise, in the cloud, or at the edge, and manages them across workloads. Bright can also react to increasing demand for different types of applications and instantly reassign resources within the cluster to service high-priority workloads based on the policies you set. Intersect360 states, "Fundamentally, Bright Computing helps address the big question in HPC: how to match diverse resources to diverse workloads in a way that is both efficient today and future-proof for tomorrow." [1]
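To illustrate the general idea of policy-driven resource reassignment, without suggesting this is Bright Cluster Manager's actual API, here is a toy sketch. Node names, workload types, and the priority policy are all invented for the example; the point is simply that a policy, rather than a human, decides which workloads get nodes when demand shifts.

```python
# A toy sketch of policy-driven resource reassignment in a mixed HPC/ML cluster.
# This is NOT Bright Cluster Manager's API; names, workloads, and the policy are invented.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    assigned_to: str  # e.g. "hpc-mpi", "ml-training", "analytics"

def rebalance(nodes, demand, priority):
    """Reassign nodes so higher-priority workloads meet their demand first."""
    for workload in sorted(demand, key=lambda w: priority[w]):
        needed = demand[workload] - sum(n.assigned_to == workload for n in nodes)
        for node in nodes:
            if needed <= 0:
                break
            # Only take nodes away from lower-priority workloads.
            if priority[node.assigned_to] > priority[workload]:
                node.assigned_to = workload
                needed -= 1
    return nodes

if __name__ == "__main__":
    cluster = [Node(f"node{i:02d}", "analytics") for i in range(6)]
    demand = {"ml-training": 3, "hpc-mpi": 2, "analytics": 6}
    priority = {"hpc-mpi": 0, "ml-training": 1, "analytics": 2}  # lower value = higher priority
    for n in rebalance(cluster, demand, priority):
        print(n.name, "->", n.assigned_to)
```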

Bright Computing highlights the transition that one organization made from their home-grown approach to Bright Cluster Manager. The Louisiana Optical Network Infrastructurea premier HPC and high-capacity middle-mile fiber-optic network provider for education and research entities in Louisianamade the switch from their do-it-yourself HPC management setup to Bright Cluster Manager software to provide consistency, ease-of-use, and the ability to easily extend resources to the cloud.

"LONI had previously used a homegrown cluster management system that presented a myriad of challenges, including lack of a graphical user interface (GUI), daunting complexity for new employees, and proneness to out-of-sync changes and configurations," said LONI Executive Director Lonnie Leger. "Likewise, the do-it-yourself infrastructure we had placed constraints on end-users due to a lack of knowledge continuity concerning cluster health, performance, and capability. By leveraging a commercial solution such as Bright Cluster Manager, we now have an enterprise-grade cluster management solution that embodies the required skills and expertise needed to effectively manage our HPC environment."

This decision to move from in-house, piecemeal open source to a fully supported commercial cluster management solution was born out of necessity for LONI. With a desire to diversify its services, the organization had quickly outgrown its DIY setup and in-house HPC expertise. Expansion wasn't impossible, but it had become a daunting task because internal personnel and HPC expertise were limited. This example is but one of many in the new world of HPC. As more organizations try to navigate the challenge of managing the interdependency between hardware and software, dealing with hardware problems, isolating performance degradations, and keeping up with a constant demand for changes, the need for commercially supported cluster management solutions has never been greater.

All of the change taking place in HPC, which both breaks and broadens how we think about it, makes it necessary to remind ourselves what HPC really is. Intersect360 Research defines HPC as "the use of servers, clusters, and supercomputers, plus associated software tools, components, storage, and services, for scientific, engineering, or analytical tasks that are particularly intensive in computation, memory usage, or data management." [2] This definition is important because it recognizes that HPC can be much broader than it has been traditionally, and with that broadening comes a whole new level of complexity. The harsh reality is that as organizations embrace a broader definition of HPC to propel their business, they must come to terms with the complexity that has to be overcome to realize it.

With Bright Cluster Manager software, complexity is automated away and replaced with flexibility. Bright builds and pre-tests a turnkey high-performance cluster from a wizard based on your specifications; instruments the cluster with health checks and monitoring; provides detailed insight into resource utilization; dynamically assigns resources to end-user workloads based on demand; extends your cluster to the public cloud for additional resources if desired; extends to the edge for centralized management of remote resources; supports mixed hardware environments; offers bare metal, VMs, or containers from the same cluster; and provides command-line, GUI, and API-based access to all functionality.
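
As a rough way to picture the breadth described above, the sketch below models a heterogeneous cluster, spanning on-premise, cloud, and edge node groups with mixed architectures and consumption models, as a plain Python data structure and tallies capacity per location. It is a conceptual illustration only and does not reflect Bright Cluster Manager's actual configuration format; every field name is hypothetical.

# Hypothetical, conceptual description of a heterogeneous cluster;
# this is not any vendor's real configuration format.
from collections import Counter

cluster_spec = {
    "name": "demo-hpc",
    "node_groups": [
        {"location": "on-premise", "arch": "x86_64", "count": 64,
         "consumption": "bare-metal", "accelerator": "gpu"},
        {"location": "cloud", "arch": "arm64", "count": 32,
         "consumption": "container", "accelerator": None},
        {"location": "edge", "arch": "x86_64", "count": 8,
         "consumption": "vm", "accelerator": None},
    ],
    "health_checks": ["disk", "memory", "network", "gpu"],
    "workload_types": ["mpi", "machine-learning", "analytics"],
}

# Tally node capacity per location -- the kind of utilization view
# a cluster manager would surface in its monitoring dashboards.
capacity = Counter()
for group in cluster_spec["node_groups"]:
    capacity[group["location"]] += group["count"]
print(dict(capacity))  # {'on-premise': 64, 'cloud': 32, 'edge': 8}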

As stated by Intersect360 Research, "Data science and machine learning? Intel or AMD? GPUs or FPGAs? Docker or Kubernetes? Cloud, on-premise, or edge? AWS or Azure? Bright Cluster Manager lets users decide individually how to incorporate all of these transitions, some or all, mix and match, now or later, in a single HPC cluster environment. With so many independent trends continuing to push HPC forward, Bright Computing is aiming to be the company that helps users pull them all together." [3]


For more information about Bright Computing solutions for HPC, visit http://www.brightcomputing.com or email us at info@brightcomputing.com

[1] Intersect360 Research Paper: Bright Computing: Managing Multiple Paths to Innovation

[2] Intersect360 Research Paper: Bright Computing: Managing Multiple Paths to Innovation

[3] Intersect360 Research Paper: Bright Computing: Managing Multiple Paths to Innovation

Bright Computing is the leading provider of platform-independent commercial cluster management software. Bright Cluster Manager, Bright Cluster Manager for Data Science, and Bright OpenStack automate the process of installing, provisioning, configuring, managing, and monitoring clusters for HPC, data analytics, machine learning, and OpenStack environments.

See the article here:

Managing Complexity in the New Era of HPC - insideHPC

Waymo Just Started Testing Its Driverless Trucks in Texas – Singularity Hub

It's been almost four years since Uber shipped 50,000 cans of beer across Colorado in a self-driving truck. It was the first-ever commercial shipment completed using self-driving technology. Now competitor Waymo is launching a much larger driverless trucking experiment.

With a new hub in Dallas, Waymo's heavy-duty trucks took to the Texas roads this week to start the company's road testing of its driverless fleet, which consists of 13 Peterbilt 18-wheelers complete with cameras, lidar, and on-board computers.

The trucks won't be running completely autonomously; they'll always have a safety driver on board, ready to take over at any moment. The company plans to hire local truckers for these jobs.

The trucks won't be carrying commercial goods yet, either, but they'll be loaded up with weights to mimic a commercial load. Waymo hasn't yet said how long the testing phase will last, or when it thinks its trucks will start operating fully autonomously.

From the sound of it, that's not likely to be soon, nor sudden; a company spokesperson told Trucks.com, "We will likely see fully driverless trucks begin to hit the road within the coming years, but it's not going to be a 'flip the switch' moment. Achieving fully driverless happens gradually, guided by a safe and responsible approach."

Waymo was planning to roll out its truck testing in the spring, but was delayed by the pandemic. The company started using its fleet of autonomous Chrysler Pacifica minivans to map Texas and New Mexico roads in January. They chose these states because of their expansive and high-quality highway systems, good weather, and large trucking industry. Waymo competitor TuSimple also has a hub in Dallas and is currently conducting testing on Texas roads (and in July announced plans for a cross-country network of driverless trucks).

Waymo started out as the Google Self-Driving Car Project in 2009. Though it's still held by Alphabet, the company just had its first outside funding round in March of this year, raising a whopping $2.5 billion. Interestingly, CEO John Krafcik insists Waymo is not, in fact, a self-driving car company, instead focusing on its aim to build "the world's most experienced driver" (though the fact that that driver is not a person necessarily implies it's a computer). In practice, this highly experienced driver will be a package of both hardware and software that could be installed in cars and trucks.

Though some fear that the advent of self-driving trucks could put thousands of people out of a job, proponents of the technology make the opposite argument, citing a shortage of drivers that's causing truckers to be overworked.

Industry insiders envision self-driving tech acting more as a copilot than a replacement; for example, when they know they're about to be on a highway for a good long stretch, drivers could switch into fully autonomous mode and take a nap, look at their phones, etc. Besides getting the rest they need, they'd also save time and get to their destinations faster.

The transition, as mentioned above, will be gradual; so though we don't know exactly when, we can be fairly certain that at some point in the not-too-distant future, driverless trucks will be transporting a lot more than beer, and not just in Texas.

Image Credit: Waymo

View original post here:

Waymo Just Started Testing Its Driverless Trucks in Texas - Singularity Hub

Baltimore Writer’s Club: Q&A with JHU prof and author Andrew H. Miller – Baltimore Fishbowl

It's hard to imagine a more opportune time to contemplate the lives we're not leading. After five months of quarantine, however, finding the motivation to do so might be even harder. Where to begin? How to proceed? Luckily, Johns Hopkins English professor Andrew H. Miller has written the perfect guidebook to accompany us on this journey.

On Not Being Someone Else: Tales of Our Unled Lives, Miller's third book and his first for a general audience, is a thought-provoking blend of criticism and memoir as well as a page-turning introduction to literature and philosophy. "Thoughts of contingency are viral," he writes:

"Each of us no doubt could make a list: if my parents hadn't moved . . . when I was young; if I had gone to a different college; if I hadn't taken that one class with that one teacher; if [that particular relationship hadn't ended]; . . . if I had taken another job . . . What would my life be like? What would I be like?"

Humans, Miller points out, are highly susceptible to the pathogenic tendency to ruminate. As a result, we go about our days haloed by evaporating, airborne unrealities, the specters of the people [we] might have become if we'd made different choices, followed different paths. Why, he asks, do we try to figure out who we are by focusing (fixating) on who we're not? What makes us so convinced that understanding the present requires looking back on a past that never existed in the first place? And why are certain fantasies about who we might have been so pertinacious?

Unled lives, Miller argues, are a largely modern preoccupation, a by-product of the post-Industrial drive to capitalize on resources and opportunities. Alas, how burdensome we've found this! Modern culture abounds with examples of the mental torment wrought by what Miller calls our singularity, the inescapable fact that each of us is limited to being a single self among many possible selves. He examines three classic twentieth-century variations on this theme: Robert Frost's The Road Not Taken (1916), Henry James's The Jolly Corner (1908), and Frank Capra's It's a Wonderful Life (1946). As he goes on to show, even those Victorian novels most celebrated for their realism (the triple-decker tomes of Anthony Trollope and George Eliot, for instance) devote countless pages to describing what did not happen to their fictional protagonists. He notices a similar phenomenon on screen: "Consider the number of films that depend on a character's sense of being misrecognized, taken as someone else, living the life of someone else, even while remaining him or herself."

As a work of criticism, On Not Being Someone Else is, fittingly, singular. Persuasive but never prescriptive, Miller distinguishes himself through his willingness to explore the affective dimensions of personal narrative. Urging us to examine our strivings, failings, and longings, On Not Being Someone Else at once invites us to reflect and encourages us to be ourselves. Even if we're just sitting at home, imagining the summer vacations we didn't take, and wondering when, perhaps whether, we'll ever be able to resume our wanderings.

BFB: In 2015, in the middle of working on this project, you went through a major life change. After more than two decades at Indiana University in Bloomington, you moved to Baltimore and started teaching at JHU. I'm curious . . . Did the experience of moving shape your thinking on the subject of life's crossroads?

Andrew H. Miller: That's a good question. I hadn't thought about the book and the move as being conjoined in a meaningful way, but you're right to point out that both were happening at the same time. And journeys play a big role in this book about paths untaken. But I'd begun the book long before the move; the most important way that the move and the book were connected wasn't in the content. Coming to Johns Hopkins, where faculty are given more time to focus on their research and writing, allowed me to finish the book. That's the straightforward, logistical answer. It was a very difficult move: My wife [Mary Favret, also an English professor at Hopkins] and I were very attached to our colleagues at IU, and our son was starting his sophomore year of high school. Earlier in our careers we'd had an opportunity to go to another school. That time we chose to stay at Indiana University; this time we moved.

BFB: What you're describing reminds me of the "one traveler, two roads" scenario in Frost's The Road Not Taken. Is it fair to say that at some level you were reenacting that pivotal moment of decision-making, only this time you took the other path?

Andrew H. Miller: That's interesting . . . On some level, yes, but it's important to keep in mind that conceiving of life in this way, as a forking path, is not without limitations. In this case, I think comparing the two decisions risks making the whole situation appear more deliberate than it was. We never had any regrets about staying at IU; it was not at all as though we felt we had made a bad choice; on the contrary, actually. Time passed. Then the chance to come to JHU presented itself, and we realized that circumstances had changed: we were in our fifties; our children were getting ready to leave home; suddenly, the idea of having everything around us be new seemed like good fortune. So here we are.

BFB: The book itself represents another kind of departure. Whereas your two previous works were scholarly monographs, here you address an audience of general readers in a somewhat unconventional format. What prompted those shifts?

Andrew H. Miller: I knew pretty early on that I wanted to write for a wider audience. In part, that desire was born of my perceiving how others responded to the topic. I could tell this was a subject that resonated deeply and broadly. This made me think about what I myself respond to in criticism. Several of the writers I find most engaging (Roland Barthes, James Wood, Maggie Nelson) use short, essayistic modes to great effect. I decided to try my hand at it.

BFB: In the preface, you outline some of the reasons unlived lives are hard to write about. For me, some of the most arresting moments in On Not Being Someone Else were the glimpses of your own writing process and its attendant struggles. Beyond the difficulties inherent in the topic, what did you find most challenging about this book?

Andrew H. Miller: Part of the appeal of this project was that it forced me to confront a new set of demands. I wanted to write about my academic work and my personal experience, and I wanted to do so in my own natural voice, in a conversational way. At the same time, I felt obligated not to betray my discipline. I also felt obligated not to betray my audience. Sometimes these imperatives were in conflict, and I had to struggle to find what seemed like the right balance.

Let me clarify what I mean. One of the main things that I wanted to do was leave readers with some work to do, something to keep thinking about after they close the book. I also wanted to give them plenty of space to disagree with my ideas and to arrive at their own interpretations. In that way, the book has the potential to become theirs (each individual reader's, I mean) and to remain alive in their thoughts . . .

That vitality was important to me, and at a certain point I could see that in order to foster it, I had to hold back. I had to remind myself that the point was not to be exhaustive, not to have the last word. And this meant resisting some conventions of academic discourse, and restraining my own impulse to split philosophical hairs.

BFB: What you're saying reminds me of something you wrote in the introduction: "only if we acknowledge what an author has not done can we appreciate what he has." That's in reference to Henry James's uncanny knack for letting the reader see simultaneously the achieved work of art and its unrealized possibilities. Was that one of your goals?

Andrew H. Miller: Well . . . no and yes! On one hand, no, I didn't consciously set out to write something with that sort of duality. On the other, I can appreciate that I was drawn to unled lives in part because it's such a paradoxical topic; it has that duality built in, and that appeals to me. So, in that spirit, yes, one of my goals was to show how thinking about the lives you haven't led can, by a kind of counter-motion, lead you to think about the life you're leading. I find consolation in that, and I hope my readers do, too.

Andrew H. Miller will discuss On Not Being Someone Else: Tales of Our Unled Lives with William Egginton in a virtual event on Monday, August 31, at 6:30 p.m. Click here to register. This Zoom event is part of the Humanities in the Village series sponsored by The Ivy Bookshop and Bird in Hand.

Jennie Hann received her PhD in English from Johns Hopkins. The recipient of an Emerging Critics fellowship from the National Book Critics Circle, she's writing a biography of the poet Mark Strand.

More here:

Baltimore Writer's Club: Q&A with JHU prof and author Andrew H. Miller - - Baltimore Fishbowl

Jompame raises almost 400 thousand pesos to help a Dominican boy who fished at curfew for dinner – Dominican Today

Jompame, an online fundraising platform for social assistance, had raised almost 400 thousand pesos before noon today to help feed 13-year-old Alexander de León, who was stopped by several policemen at night, at the beginning of curfew, as he returned from fishing for his dinner.

By 11:27 a.m., 306 people had donated RD$375,590.75. The goal, according to the platform's page, is RD$400,000, enough to cover a year of food for the boy.

The initiative on behalf of the minor, who was found by the four policemen near Los Negros beach in Azua with his sack of crabs, was launched alongside a video on Instagram in which the child explains that, despite the financial hardship in the home he shares with his father and grandmother, whatever comes in is shared.

He says he is proud of his father, for whom he asks for help.

The four policemen who stopped him made a financial contribution that night and took him to his home.

Alexanders mother passed away when he was three years old.

Jompame was founded by Katherine Motyka. Motyka is the second Dominican woman to be accepted at Singularity University, based at NASA Research Park.

Katherine graduated first in her class in Industrial Engineering and earned a scholarship to study Materials and Manufacturing Science at Jönköping University in Sweden. It was after completing her studies that she discovered the world of entrepreneurship, winning the competitive Startup Weekend Santo Domingo twice within a very short time.

Link:

Jompame raises almost 400 thousand pesos to help a Dominican boy who fished at curfew for dinner - Dominican Today