
Category Archives: Singularity

Essay: How to read in a restless world – Hindustan Times

Posted: January 7, 2021 at 5:39 am

Did your New Year's resolution include reading more? And do you sometimes find that you have trouble focusing on your reading, especially on books in their entirety, in ways you did not before? Given the times we live in, I have to say this is quite natural. We can still be passionate readers, but our expectations from the practice of reading might just have to shift a little to get attuned to our reality.

I started thinking about this while chatting with one of my students. A first-year in college, born in the 21st century, she is a digital native, someone who has literally not seen a time when the internet wasn't around. She told me that she used to love reading, but in the last couple of years, she's found it hard to focus on her reading. The distractions always steal her away.

Once in a while, digital natives and digital immigrants have the same kinds of problems, especially as we move through different phases in our relationship with technology.

We're still trying to read like we used to in the old times; this is true even of the digital natives who are beginning their reading lives in this new reality. There is an existing mode and discourse of reading, the only one we have, and everybody who reads, especially so-called serious readers, must get initiated into that mode. But what is that mode?

***

Reading: Transforming any space into a private place. (Shutterstock)

The social phenomenon of private reading, as we know it, is fairly recent in the history of humanity. While printed texts played a role in ancient China, mainly as scripture and SparkNotes for their version of UPSC exams, ancient and medieval Europe only saw books as rare and precious handwritten manuscripts owned by churches, royalty, and wealthy aristocrats. Sustained reading on a mass scale would have to wait for the popular spread of printed books in the 18th century. Capitalism, by then, would expand to create the modern middle class, who had the literacy, leisure, and purchasing power to buy books on a large enough scale to create and sustain a publishing industry. Hence was born a modern practice: sitting in isolation and reading quietly for a long time, to finish a whole book.

Reading as entertainment reached its peak during the Victorian age, especially of long books such as the novel, which eventually went from being a popular genre to the status of high art, unfortunately (but understandably) losing its popularity in the process. Excitement over novels reached a stage where people crowded the New York harbour to pounce on people arriving from England with questions about the next instalment of the serialised Dickens novel that had not yet reached America: Is Little Nell dead? It was the kind of popularity enjoyed by the soap opera in the 20th century and the web series today. The first techno-generic challenge to the dominance of books as popular entertainment would come in the early 20th century, from the art form of cinema. Cinema's most direct threat was not to reading but to its performative precedent, theatre. But just as the newer art form of photography, with its superior capture of reality, drove the older art of painting to Impressionism, cinema inspired theatre to experimental forms such as the epic and expressionism, drawing attention to the flesh-and-blood presence that made it unique.

The 20th-century challenge to the primacy of reading was different in one important way from what would come in the 21st. The former needed full attention, especially back in the days when the only way to watch a film was to enter a dark hall, leaving everything else behind. Real and metaphoric equivalence was offered by the single-screen cinema hall. Multiplex theatres inside shopping malls made movie watching one possibility among many, offering several movies from which one could choose.

That is, possibly, the defining character of culture in the digitized and disembodied 21st century: consuming multiple cultural experiences at the same time. I remember an editorial argument in n+1 magazine from a few years ago that said something similar: that we are now more likely to read something while also listening to music, or enjoy a joke or a debate on social media in between a movie streaming on our iPad. It's a technologically curated version of older pleasures such as enjoying fine wine with poetry, or mead with the minstrels. The print-era singularity of the artistic experience will be replaced by a more pluralized, fragmented, and differently fulfilling experience.

Author Saikat Majumdar (Tribuvan Tiwari)

That is what I told my student: that I, too, have lost the old-world concentration I had from the time when multisensory digital distractions were, in my case, unavailable (as opposed to her situation of usage-restriction). My body is too restless. It's used to multiple activities, multiple buttons, multiple screens. But my personal solution to the problem has been lasting and effective. I walk around the house with my book or e-reader. The restless energy gets channelled into my movement while I keep my mind drowned in language. For those who are able to do this, I highly recommend it!

I think we need to accept that the way we read in an older world, with long, undivided attention, as a singular activity, won't be available to us most of the time. It's okay to read while finishing a 45-minute lap on your cross-trainer; it's okay to read with music in the background, our mind swimming between sound and sentence. It's okay to love the beauty of physical books while actually doing most of your reading on e-readers, with multiple options stored on their screens. All of this not because we're busier, because that's always an excuse, but because something has fundamentally shifted in our sensibility, and it needs to contain multiple energies even while it tends to a classic love, that of reading.

The modernity of print gifted us the singularity of a beautiful artistic and intellectual experience. To celebrate it in our restless present, we can turn that restlessness into multiple windows of experience, simultaneously enjoyable.

Saikat Majumdar's books include the novels The Firebird and The Scent of God, and the nonfiction College.


Posted in Singularity | Comments Off on Essay: How to read in a restless world – Hindustan Times

Music Critic Praises BTS V's Voice + Says He is a Vital Part of The Group's Musical Identity – Kpopstarz

Posted: at 5:39 am

BTS member V has one of the most distinguishable voices in the group and has been praised by critics for his deep, husky baritone. Recently, Kim Young Dae, a member of the Selection Committee for the Korean Music Awards, praised V for his amazing vocals.

(Photo: BTS Twitter)

In a detailed review that goes in-depth on the individual contributions of all seven members of BTS to their success, Kim Young Dae gushes about V's voice, claiming it is an integral part of the boy band's identity.

To start his comments on BTS member V, Kim Young Dae talks about wishing this period with the coronavirus would pass so that he could listen to V's solo song, "Inner Child", in a large stadium with many fans. He calls the song a hymn of youth, a beautiful confession, one that for now he knows he has to be satisfied listening to through his headphones.


He goes on to say that while BTS's songs are not remembered for one overpowering voice, V has something special about him. The idol has a gifted tone and vocal ability, solidifying his importance in BTS's musical identity. He adds that he, as a man, envies V's deep baritone. He described the voice as low but not too low, full of volume and texture, stunning in its crisp clarity, undeniably soulful, and impossible to mimic.

It is difficult to find the words to explain V's voice, but it only takes seconds to identify its tone. V's solo song, "Singularity," is both sensual and captivating. It is a song that cannot be heard anywhere else. It is unconventional neo-soul, eliciting a new interest in V as a vocalist.


V's emotional vocals in "Epilogue: Young Forever", a song that is said to embody BTS's identity itself, and his fervent voice in "Save Me" are some of his remarkable moments in BTS as a vocalist. In songs like "DNA", V's low, stable, and pure voice became an important element for listeners, as he perfectly carries the song's musical narration.

In a group where each member has limited space and needs to take on their own part, it is difficult to know V's charms just from BTS's music. That is why it is vital to listen to V's solo songs, which give us a sense of winter. Songs like "Scenery" or "Winter Bear" show V's charm perfectly; not only his vocals and his deep baritone, but also how effortlessly he conveys his feelings and his sensitivity.


Kim Young Dae praises V as having a voice that is perfect for a movie soundtrack, as he is able to calmly deliver the emotion of a song. "It is just like his personality", the music critic concludes.

Kim Young Dae is not the only critic who has praised V's vocals. Bianca Méndez praised V's solo song "Singularity", the opening track of BTS's "Love Yourself: Tear", saying that the song was a prominent "tone-setter" on the album. Katie Goh of VICE has also praised "Singularity", calling it one of V's best vocal performances to date.

Do you like V's voice? Tell us in the comments below!

For more K-Pop news and updates, always keep your tabs open here on KpopStarz.


Written by Alexa Lewis



That’s all folks, the singularity is near. Elon Musk’s cyber pigs and brain computer tech – Toronto Star

Posted: September 7, 2020 at 2:26 am

Goodbye Dolly. Hello Gertrude and Dorothy.

Joining the first sheep that was ever cloned as a sign of our science-fact future, this past week celebrity entrepreneur Elon Musk gave a presentation about Neuralink, his company that is focusing on creating technology that links with brains. As part of it, he introduced pigs that had the prototype devices implanted in them. The internet dubbed them Cyber Pigs, and portions of readings from Gertrude's brain were played.

Brain-computer technology is at a point where the potential medical implications are so exciting that many players are pursuing different approaches to the field. The ethics of using this technology are sometimes best explained in science fiction like Black Mirror and The Matrix.

To discuss the latest in brain computer technology and the Neuralink presentation, we are joined by Graeme Moffat. He is a Senior Fellow at the Munk School of Global Affairs and Public Policy, and also the Chief Scientist and cofounder of System 2 Neurotechnology. He was formerly Chief Scientist and Vice President of Regulatory Affairs with Interaxon, a Toronto-based world leader in consumer neurotechnology.

Listen to this episode and more at This Matters or subscribe at Apple Podcasts, Spotify, Google Podcasts or wherever you listen to your favourite podcasts.



Before or After the Singularity – PRESSENZA International News Agency

Posted: at 2:26 am

Scientific theories developed by independent, unconnected groups have come to the following conclusion: something will happen around the world that will change human history in a special way. While the predictions may not agree on exact dates, they all have one thing in common: it will happen this century, within a few decades.

The event, or the sum of events, has been named the SINGULARITY and has a unique characteristic: development does not simply keep accelerating along its established course, but changes abruptly, or collapses and starts again.

These predictions could be made on the basis of curves that encompass the development of natural ecosystems as well as the various significant milestones in the universal history of mankind from the beginning of time.

Researchers like Alexander Panov, Ray Kurzweil and many others were able to unify those considerations by bringing together fundamentally different variables such as energy sources, automation, artificial intelligence, and modes of production and consumption.

However, the majority of theories portray science and technology as the creator of this future and not as a by-product of the evolution of our species.

We are of the opinion that the change arises from humanity's own awareness of itself, in its human and spiritual dimension, and that as a consequence of this inner change, external changes also occur; these do not exclude technology, artificial intelligence, and genetic engineering, but instead put them in the foreground and make them the vehicle and support for this change.

In summary, the SINGULARITY is, for us, a wonderful tool of theoretical analysis to imagine the world toward which we are striving, and also to anticipate the dangers that such a change could bring.

In what other way could we seriously speak of this chaotic future? It's like we're on a ship being drawn toward the enormous gravity of a black hole, a zone where time and space warp. Would we be able to know at what point in time, or at what distance, we would reach the central vortex of the black hole? We're not trying to do futurology, least of all under these conditions.

But analyzing things from this point of view, with a warning in mind, is an excellent way of imagining the world that we may expect in the future.

Our area of interest focuses on human existence, and this is the basis of our analysis, which of course does not claim scientific accuracy. We may also later come to question current science, with its alleged thoroughness and infallibility.

We strive for the evolution of mankind; we want a revolution in its consciousness and values. We reject the reification of the human being and the apocalyptic view of the future. We do not deny that machines are useful if they help to relieve people of work. We speak out against any kind of concentration of power and demand the expansion of human freedom, which can neither be restricted nor replaced by soulless algorithms.

As you can see, the future can hold many nuances. Our goal is to exchange ideas with those who are interested in these topics.

What is your vision of the future?

Translation from German by Lulith V. by the Pressenza volunteer translation team. We are looking for volunteers!

Carlos Santos is a teacher and has been active in a humanist movement all his life. For the last decade he has devoted himself to audiovisual implementations as a director, producer and screenwriter of documentaries and feature films within his production company Esencia Humana Films. Email: escenariosfuturos21@gmail.com; Blog: escenariosfuturos.org



Neuralink’s Wildly Anticipated New Brain Implant: the Hype vs. the Science – Singularity Hub

Posted: at 2:26 am

Neuralink's wildly anticipated demo last Friday left me with more questions than answers. With a presentation teeming with promises and vision but scant on data, the event nevertheless lived up to its main goal as a memorable recruitment session to further the growth of the mysterious brain implant company.

Launched four years ago with the backing of Elon Musk, Neuralink has been working on futuristic neural interfaces that seamlessly listen in on the brain's electrical signals and, at the same time, write into the brain with electrical pulses. Yet even by Silicon Valley standards, the company has kept a tight seal on its progress, conducting all manufacturing, research, and animal trials in-house.

A vision of marrying biological brains to artificial ones is hardly unique to Neuralink. The past decade has seen an explosion in brain-machine interfaces: some implanted into the brain, some into peripheral nerves, and some that sit outside the skull like a helmet. The main idea behind all these contraptions is simple: the brain mostly operates on electrical signals. If we can tap into these enigmatic neural codes, the brain's internal language, we could potentially become the architects of our own minds.

Let people with paralysis walk again? Check and done. Control robotic limbs with their minds? Yup. Rewriting neural signals to battle depression? In humans right now. Recording the electrical activity behind simple memories and playing it back? Human trials ongoing. Linking up human minds into a BrainNet to collaborate on a Tetris-like game through the internet? Possible.

Given this backdrop, perhaps the most impressive part of the demonstration isn't the lofty predictions of what brain-machine interfaces could potentially do one day. In some sense, we're already there. Rather, what stood out was the redesigned Link device itself.

At Neuralink's coming-out party last year, the company envisioned a wireless neural implant with a sleek ivory processing unit worn at the back of the ear. The electrodes of the implant itself are sewn into the brain with automated robotic surgery, relying on brain imaging techniques to avoid blood vessels and reduce brain bleeding.

The problem with that design, Musk said, is that it had multiple pieces and was complex. You still wouldn't look totally normal because there's a thing coming out of your ear.

The prototype at last week's event came in a vastly different physical shell. About the size of a large coin, the device replaces a small chunk of your skull and sits flush with the surrounding skull matter. The electrodes, implanted inside the brain, connect with this topical device. When covered by hair, the implant is invisible.

Musk envisions an outpatient therapy in which a robot can simultaneously remove a piece of the skull, sew the electrodes in, and replace the missing skull piece with the device. According to the team, the Link has similar physical properties and thickness to the skull, making the replacement a sort of copy-and-paste. Once inserted, the Link is then sealed to the skull with superglue.

I could have a Neuralink right now and you wouldn't know it, quipped Musk.

For a device that small, the team packed an admirable array of features into it. The Link device has over 1,000 channels, which can be individually activated. This is on par with Neuropixel, the crème de la crème of neural probes with 960 recording channels that's currently used widely in research, including by the Allen Institute for Brain Science.

Compared to the Utah Array, a legendary implant system used for brain stimulation in humans with only 256 electrodes, the Link has an obvious edge in terms of pure electrode density.

What's perhaps most impressive, however, is its onboard processing for neural spikes, the electrical patterns generated by neurons when they fire. Electrical signals are fairly chaotic in the brain, and filtering spikes from noise, as well as separating trains of electrical activity into spikes, normally requires quite a bit of processing power. This is why in the lab, neural spikes are usually recorded offline and processed using computers, rather than with onboard electronics.

The problem gets even more complicated when considering wireless data transfer from the implanted device to an external smartphone. Without accurate and efficient compression of those neural data, the transfer could lag tremendously, drain battery life, or heat up the device itself, something you don't want happening to a device stuck inside your skull.

To get around these problems, the team has been working on algorithms that use characteristic shapes of electrical patterns that look like spikes to efficiently identify individual neural firings. The data is processed on the chip inside the skull device. Recordings from each channel are filtered to root out obvious noise, and the spikes are then detected in real time. Because different types of neurons have their characteristic ways of spiking, that is, the shapes of their spikes are diverse, the chip can also be configured to detect the particular spikes you're looking for. This means that in theory the chip could be programmed to only capture the type of neuron activity you're interested in, for example, to look at inhibitory neurons in the cortex and how they control neural information processing.
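Neuralink hasn't published its actual algorithm, but the classic baseline for real-time spike detection is amplitude thresholding: estimate the channel's noise floor, set a threshold at a few multiples of it, and flag downward crossings as spikes. Here is a minimal sketch of that baseline; the function name and all parameter values are illustrative assumptions, not Neuralink's implementation:

```python
import numpy as np

def detect_spikes(trace, fs=20000, threshold_sd=4.0, refractory_ms=1.0):
    """Naive amplitude-threshold spike detector (a sketch, not Neuralink's method).

    trace: 1-D array of voltage samples from one channel.
    fs: sampling rate in Hz.
    Returns sample indices where the signal crosses below the threshold.
    """
    # Robust noise estimate from the median absolute deviation; the median
    # is preferred here because large spikes inflate an ordinary std dev.
    noise = np.median(np.abs(trace)) / 0.6745
    threshold = -threshold_sd * noise
    # Indices where the trace crosses downward through the threshold.
    crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold)) + 1
    # Enforce a refractory period so one spike isn't counted twice.
    refractory = int(fs * refractory_ms / 1000)
    spikes, last = [], -refractory
    for idx in crossings:
        if idx - last >= refractory:
            spikes.append(idx)
            last = idx
    return np.array(spikes, dtype=int)
```

Shape-matching detectors like the ones described above go further, comparing each candidate crossing against template waveforms, which is what lets the chip single out particular neuron types rather than everything that crosses a threshold.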

These processed spike data are then sent out to smartphones or other external devices through Bluetooth to enable wireless monitoring. Being able to do this efficiently has been a stumbling block in wireless brain implants: raw neural recordings are too massive for efficient transfer, and automated spike detection and compression of that data is difficult, but a necessary step to allow neural interfaces to finally cut the wire.

The Link has other impressive features. For one, the battery life lasts all day, and the device can be charged at night using inductive charging. From my subsequent conversations with the team, it seems like there will be alignment lights to help track when the charger is aligned with the device. What's more, the Link itself also has an internal temperature sensor to monitor for overheating, and will automatically disconnect if the temperature rises above a certain threshold, a very necessary safety measure so it doesn't overheat the surrounding skull tissue.

From the get-go of the demonstration, there was an undercurrent of tension between what's possible in neuroengineering versus what's needed to understand the brain.

Since its founding, Neuralink has always been fascinated with electrode numbers: boosting channel numbers on its devices and increasing the number of neurons that can be recorded at the same time.

At the event, Musk said that his goal is to increase the number of recorded neurons by a factor of 100, then 1,000, then 10,000.

But here's the thing: as neuroscience increasingly comes to understand the neural code behind our thought processes, it's clear that more electrodes or more stimulated neurons isn't always better. Most neural circuits employ what's called sparse coding, in that only a handful of neurons, when stimulated in a way that mimics natural firing, can artificially trigger visual or olfactory sensations. With optogenetics (the technique of stimulating neurons with light), scientists now know that it's possible to incept memories by targeting just a few key neurons in a circuit. Sticking a ton of wires into the brain, which inevitably causes scarring, and zapping hundreds of thousands of neurons isn't necessarily going to help.

Unlike engineering, the solution to the brain isn't more channels or more implants. Rather, it's deciphering the neural code: knowing what to stimulate, in what order, to produce what behavior. It's perhaps telling that despite claims of neural stimulation, the only data shown at the event were neurons firing from a section of a mouse brain (using two-photon microscopy to image neural activation) after zapping brain tissue with an electrode. What information, if any, is really being written into the brain? Without an idea of how neural circuits work and in what sequences, zapping the brain with electricity, no matter how cool the device itself is, is akin to banging on all the keys of a piano at once, rather than composing a beautiful melody.

Of course, the problem is far larger than Neuralink itself. It's perhaps the next frontier in solving the brain's mysteries. To their credit, the Neuralink team has looked at potential damage to the brain from electrode insertion. A main problem with current electrodes is that the brain will eventually activate non-neuronal cells to form an insulating sheath around the electrode, sealing it off from the neurons it needs to record from. According to some employees I talked to, so far, for at least two months, the scarring around electrodes is minimal, although in the long run there may be scar tissue buildup at the scalp. This may make electrode threads difficult to remove, something that still needs to be optimized.

However, two months is only a fraction of what Musk is proposing: a decade-long implant, with hardware that can be updated.

The team may also have an answer there. Rather than removing the entire implant, it could potentially be useful to leave the threads inside the brain and only remove the top cap, the Link device that contains the processing chip. The team is now trying the idea out, while exploring the possibility of a full-on removal and re-implant.

As a demonstration of feasibility, the team trotted out three adorable pigs: one without an implant, one with a Link, and one with the Link implanted and then removed. Gertrude, the pig currently with an implant in areas related to her snout, had her inner neural firings broadcast as a series of electrical crackles as she roamed around her pen, sticking her snout into a variety of food and hay and bumping at her handler.

Pigs came as a surprise. Most reporters, myself included, were expecting non-human primates. However, pigs seem like a good choice. For one, their skulls have a similar density and thickness to human ones. For another, they're smart cookies, meaning they can be trained to walk on a treadmill while the implant records from their motor cortex to predict the movement of each joint. It's feasible that the pigs could be trained on more complicated tests and behaviors to show that the implant is affecting their movements, preferences, or judgment.

For now, the team doesn't yet have publicly available data showing that targeted stimulation of the pigs' cortex (say, motor cortex) can drive their muscles into action. (Part of this, I heard, is because of the higher stimulation intensity required, which is still being fine-tuned.)

Although pitched as a prototype, it's clear that the Link remains experimental. The team is working closely with the FDA and was granted a breakthrough device designation in July, which could pave the way for a human trial for treating people with paraplegia and tetraplegia. Whether the trials will come by the end of 2020, as Musk promised last year, however, remains to be seen.

Unlike other brain-machine interface companies, which generally focus on brain disorders, it's clear that Musk envisions the Link as something that can augment perfectly healthy humans. Given the need for surgical removal of part of your skull, it's hard to say if it's a convincing sell for the average person, even with Musk's star power and his vision of augmenting natural sight, memory playback, or a third artificial layer of the brain that joins us with AI. And because the team only showed a highly condensed view of the pigs' neural firings, rather than actual spike traces, it's difficult to accurately gauge how sensitive the electrodes actually are.

Finally, for now the electrodes can only record from the cortex, the outermost layer of the brain. This leaves deeper brain circuits and their functions, including memory, addiction, emotion, and many types of mental illness, off the table. While the team is confident that the electrodes can be extended in length to reach those deeper brain regions, that is work for the future.

Neuralink has a long way to go. All that said, having someone with Musk's impact championing a rapidly evolving neurotechnology that could help people is priceless. One of the lasting conversations I had after the broadcast was with someone asking me what it's like to drill through skulls and see a living brain during surgery. I shrugged and said it's just bone and tissue. He replied wistfully that it would still be so cool to be able to see it, though.

It's easy to forget the wonder that neuroscience brings to people when you've been in it for years or decades. It's easy to roll my eyes at Neuralink's data and think, well, neuroscientists have been listening in on live neurons firing inside animals and even humans for over a decade. As much as I'm still skeptical about how the Link compares to state-of-the-art neural probes developed in academia, I'm impressed by how much a relatively small leadership team has accomplished in just the past year. Neuralink is only getting started, and aiming high. To quote Musk: There's a tremendous amount of work to be done to go from here to a device that is widely available and affordable and reliable.

Image Credit: Neuralink



Microsoft’s New Deepfake Detector Puts Reality to the Test – Singularity Hub

Posted: at 2:26 am

The upcoming US presidential election seems set to be something of a mess, to put it lightly. Covid-19 will likely deter millions from voting in person, and mail-in voting isn't shaping up to be much more promising. This all comes at a time when political tensions are running higher than they have in decades, issues that shouldn't be political (like mask-wearing) have become highly politicized, and Americans are dramatically divided along party lines.

So the last thing we need right now is yet another wrench in the spokes of democracy, in the form of disinformation; we all saw how that played out in 2016, and it wasn't pretty. For the record, disinformation purposely misleads people, while misinformation is simply inaccurate, but without malicious intent. While there's not a ton tech can do to make people feel safe at crowded polling stations or up the Postal Service's budget, tech can help with disinformation, and Microsoft is trying to do so.

On Tuesday the company released two new tools designed to combat disinformation, described in a blog post by VP of Customer Security and Trust Tom Burt and Chief Scientific Officer Eric Horvitz.

The first is Microsoft Video Authenticator, which is made to detect deepfakes. In case you're not familiar with this wicked byproduct of AI progress, deepfakes refers to audio or visual files made using artificial intelligence that can manipulate people's voices or likenesses to make it look like they said things they didn't. Editing a video to string together words and form a sentence someone didn't say doesn't count as a deepfake; though there's manipulation involved, you don't need a neural network and you're not generating any original content or footage.

The Authenticator analyzes videos or images and tells users the percentage chance that they've been artificially manipulated. For videos, the tool can even analyze individual frames in real time.

Deepfake videos are made by feeding hundreds of hours of video of someone into a neural network, teaching the network the minutiae of the person's voice, pronunciation, mannerisms, gestures, etc. It's like when you do an imitation of your annoying coworker from accounting, complete with mimicking the way he makes every sentence sound like a question and the way his eyes widen when he talks about complex spreadsheets. You've spent hours (no, months) in his presence and have his personality quirks down pat. An AI algorithm that produces deepfakes needs to learn those same quirks, and more, about whoever the creator's target is.

Given enough real information and examples, the algorithm can then generate its own fake footage, with deepfake creators using computer graphics and manually tweaking the output to make it as realistic as possible.

The scariest part? To make a deepfake, you don't need a fancy computer or even a ton of knowledge about software. There are open-source programs people can access for free online, and as far as finding video footage of famous people goes, well, we've got YouTube to thank for how easy that is.

Microsoft's Video Authenticator can detect the blending boundary of a deepfake and subtle fading or greyscale elements that the human eye may not be able to see.

In the blog post, Burt and Horvitz point out that as time goes by, deepfakes are only going to get better and become harder to detect; after all, they're generated by neural networks that are continuously learning from and improving themselves.

Microsoft's counter-tactic is to come in from the opposite angle; that is, being able to confirm beyond doubt that a video, image, or piece of news is real (I mean, can McDonald's fries cure baldness? Did a seal slap a kayaker in the face with an octopus? Never has it been so imperative that the world know the truth).

A tool built into Microsoft Azure, the company's cloud computing service, lets content producers add digital hashes and certificates to their content, and a reader (which can be used as a browser extension) checks the certificates and matches the hashes to indicate the content is authentic.
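The hash-matching idea is simple enough to sketch. The snippet below is a minimal illustration of the concept, not Microsoft's actual Azure tooling; it stands in a plain SHA-256 hash for the signed certificate a real system would attach, and the `publish`/`verify` names are hypothetical:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Compute a SHA-256 hash that acts as the content's fingerprint."""
    return hashlib.sha256(content).hexdigest()

def publish(content: bytes) -> dict:
    """Producer side: bundle the content with its hash (a stand-in for
    the signed certificate a real system would attach)."""
    return {"content": content, "hash": fingerprint(content)}

def verify(package: dict) -> bool:
    """Reader side: recompute the hash and compare. Any change to the
    bytes changes the hash, so tampering makes the check fail."""
    return fingerprint(package["content"]) == package["hash"]

package = publish(b"original newsreel footage")
print(verify(package))                 # untouched content checks out: True
package["content"] = b"doctored footage"
print(verify(package))                 # tampering is detected: False
```

A real deployment would also sign the hash, so a forger could not simply recompute it after editing; the hash alone only proves the bytes match what the certificate describes.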

Finally, Microsoft also launched an interactive "Spot the Deepfake" quiz it developed in collaboration with the University of Washington's Center for an Informed Public, deepfake detection company Sensity, and USA Today. The quiz is intended to help people learn about synthetic media, develop critical media literacy skills, and gain awareness of the impact of synthetic media on democracy.

The impact Microsoft's new tools will have remains to be seen, but hey, we're glad they're trying. And they're not alone; Facebook, Twitter, and YouTube have all taken steps to ban and remove deepfakes from their sites. The AI Foundation's Reality Defender uses synthetic media detection algorithms to identify fake content. There's even a coalition of big tech companies teaming up to try to fight election interference.

One thing is for sure: between a global pandemic, widespread protests and riots, mass unemployment, a hobbled economy, and the disinformation that's remained rife through it all, we're going to need all the help we can get to make it through not just the election, but the rest of the conga-line-of-catastrophes year that is 2020.

Image Credit: Darius Bashar on Unsplash

View original post here:

Microsoft's New Deepfake Detector Puts Reality to the Test - Singularity Hub

Posted in Singularity | Comments Off on Microsoft's New Deepfake Detector Puts Reality to the Test – Singularity Hub

The world of Artificial… – The American Bazaar

Posted: at 2:26 am

Sophia. Source: https://www.hansonrobotics.com/press/

Humans are the most advanced form of Artificial Intelligence (AI), with an ability to reproduce.

Artificial Intelligence (AI) is no longer a theory but is part of our everyday life. Services like TikTok, Netflix, YouTube, Uber, Google Home Mini, and Amazon Echo are just a few instances of AI in our daily life.

This field of knowledge has always attracted me in strange ways. I have been an avid reader and I read a variety of subjects of a non-fiction nature. I love to watch movies, not particularly sci-fi, but I liked Innerspace, Flubber, Robocop, Terminator, Avatar, Ex Machina, and Chappie.

When I think of Artificial Intelligence, I see it from a lay perspective. I do not have an IT background. I am a researcher and a communicator, and I consider myself a happy person who loves to learn and solve problems through simple and creative ideas. My thoughts on AI may sound different, but I'm happy to discuss them.

Humans are the most advanced form of AI that we may know to exist. My understanding is that the only thing that differentiates humans and Artificial Intelligence is the capability to reproduce. While humans have this ability to multiply through male and female union and transfer their abilities through tiny cells, machines lack that function. Transfer of cells to a newborn is no different from the transfer of data to a machine. It's breathtaking how a tiny cell in a human body has all the necessary information of not only that particular individual but also their ancestry.

Allow me to give an introduction to the recorded history of AI. Before that, I would like to take a moment to share with you my recent achievement that I feel proud to have accomplished. I finished a course in AI from Algebra University in Croatia in July. I could attend this course through a generous initiative and bursary from Humber College (Toronto). Such initiatives help intellectually curious minds like me to learn. I would also like to express that the views expressed are my own understanding and judgment.

What is AI?

AI is a branch of computer science that is based on computer programming, like several other coding disciplines. What differentiates Artificial Intelligence, however, is its aim, which is to mimic human behavior. And this is where things become fascinating as we develop artificial beings.

Origins

I have divided the origins of AI into three phases so that I can explain it better and you don't miss the sequence of incidents that led to the step-by-step development of AI.

Phase 1

AI is not a recent concept. Scientists were already brainstorming about it and discussing the thinking capabilities of machines even before the term Artificial Intelligence was coined.

I would like to start from 1950 with Alan Turing, the British intellectual who helped bring WWII to an end by decoding German messages. In October 1950, Turing released a paper, "Computing Machinery and Intelligence," that can be considered among the first hints of thinking machines. Turing starts the paper thus: "I propose to consider the question, 'Can machines think?'" Turing's work was also the beginning of Natural Language Processing (NLP). The 21st-century mortal can relate it to the invention of Apple's Siri. The A.M. Turing Award is considered the Nobel of computing. The life and death of Turing are unusual in their own way. I will leave it at that, but if you are interested in delving deeper, here is one article by The New York Times.

Five years later, in 1955, John McCarthy, an Assistant Professor of Mathematics at Dartmouth College, and his team proposed a research project in which they used the term Artificial Intelligence, for the first time.

McCarthy explained the proposal, saying, "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." He continued, "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

It started with a few simple logical thoughts that germinated into a whole new branch of computer science in the coming decades. AI can also be related to the concept of Associationism that is traced back to Aristotle from 300 BC. But, discussing that in detail will be outside the scope of this article.

It was in 1958 that we saw the first model replicating the brain's neuron system. This was the year when psychologist Frank Rosenblatt developed a program called the Perceptron. Rosenblatt wrote in his article, "Stories about the creation of machines having human qualities have long been a fascinating province in the realm of science fiction. Yet we are now about to witness the birth of such a machine, a machine capable of perceiving, recognizing, and identifying its surroundings without any human training or control."
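Rosenblatt's learning rule is compact enough to write out. Here is a minimal modern sketch of a single-layer perceptron (my illustration in present-day Python, not Rosenblatt's original implementation) learning the logical AND function:

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Rosenblatt-style learning rule: nudge the weights whenever the
    prediction disagrees with the label."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The logical AND function is linearly separable, so a single-layer
# perceptron can learn it exactly.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # → [0, 0, 0, 1]
```

The single layer is also the model's famous weakness: a function like XOR is not linearly separable, which is exactly the limitation Hinton's multi-layer work later addressed.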

A New York Times article published in 1958 introduced the invention to the general public, saying, "The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."

My investigation into one of Rosenblatt's papers hints that even in the 1940s, scientists talked about artificial neurons. Notice the Reference section of Rosenblatt's paper published in 1958: it lists Warren S. McCulloch and Walter H. Pitts' paper of 1943. If you are interested in more details, I would suggest an article published in Medium.

The first AI conference took place in 1959. However, by this time, the leads in Artificial Intelligence had already exhausted the computing capabilities of the time. It is, therefore, no surprise that not much could be achieved in AI in the next decade.

Thankfully, the IT industry was catching up quickly and preparing the ground for stronger computers. Gordon Moore, the co-founder of Intel, made a few predictions in his article in 1965. Moore predicted a huge growth of integrated circuits, more components per chip, and reduced costs. "Integrated circuits will lead to such wonders as home computers or at least terminals connected to a central computer, automatic controls for automobiles, and personal portable communications equipment," Moore predicted. Although scientists had been toiling hard to launch the Internet, it was not until the late 1960s that the invention started showing some promise. "On October 29, 1969, ARPAnet delivered its first message: a node-to-node communication from one computer to another," notes History.com.

With the Internet in the public domain, computer companies had a reason to accelerate their own developments. In 1971, Intel introduced its first microprocessor. It was a huge breakthrough. Intel impressively compared the size and computing abilities of the new hardware, saying, "This revolutionary microprocessor, the size of a little fingernail, delivered the same computing power as the first electronic computer built in 1946, which filled an entire room."

Around the 1970s, more popular versions of languages came into use, for instance, C and SQL. I mention these two because I remember that when I did my Diploma in Network-Centered Computing in 2002, the advanced versions of these languages were still alive and kicking. Britannica has a list of computer programming languages if you care to read more on when the different languages came into being.

These advancements created a perfect amalgamation of resources to trigger the next phase in AI.

Phase 2

In the late 1970s, we see another AI enthusiast coming on the scene with several research papers on AI. Geoffrey Hinton, a Canadian researcher, had confidence in Rosenblatt's work on the Perceptron. He resolved an inherent problem with Rosenblatt's model, which was made up of a single-layer perceptron. "To be fair to Rosenblatt, he was well aware of the limitations of this approach; he just didn't know how to learn multiple layers of features efficiently," Hinton noted in his paper in 2006.

This multi-layer approach can be referred to as a Deep Neural Network.

Another scientist, Yann LeCun, who studied under Hinton and worked with him, was making strides in AI, especially Deep Learning (DL, explained later in the article) and Backpropagation Learning (BL). BL can be understood as machines learning from their mistakes, or learning from trial and error.
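The "learning from mistakes" idea can be shown in the simplest possible case: a single weight, repeatedly nudged against the gradient of its own error. This toy sketch is my illustration of that core step, not LeCun's code; real backpropagation applies the same nudge-against-the-error rule across many layers at once:

```python
def learn_scale_factor(pairs, steps=100, lr=0.05):
    """Trial-and-error learning in miniature: guess a weight, measure the
    error on each example, and step the weight against the error's
    gradient."""
    w = 0.0
    for _ in range(steps):
        for x, target in pairs:
            pred = w * x
            grad = 2 * (pred - target) * x   # derivative of (pred - target)**2
            w -= lr * grad
    return w

# Examples generated by the rule y = 3x; the learner should recover w ≈ 3.
w = learn_scale_factor([(1, 3), (2, 6), (3, 9)])
print(round(w, 2))  # → 3.0
```

Each pass shrinks the remaining error by a constant factor, so the guess converges quickly; with a learning rate set too high, the same update rule would overshoot and diverge.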

Similar to Phase 1, the developments of Phase 2 ended here due to very limited computing power and insufficient data. This was around the late 1990s. As the Internet was fairly recent, there was not much data available to feed the machines.

Phase 3

In the early 21st century, computer processing speed entered a new level. In 2011, IBM's Watson defeated its human competitors in the game of Jeopardy. Watson was quite impressive in its performance. On September 30, 2012, Hinton and his team released the object recognition program called AlexNet and tested it on ImageNet. The success rate was above 75 percent, which had not been achieved by any such machine before. This object recognition sent ripples across the industry. By 2018, image recognition programming became 97% accurate! In other words, computers were recognizing objects more accurately than humans.

In 2015, Tesla introduced its self-driving AI car. The company boasts of its Autopilot technology on its website, saying, "All new Tesla cars come standard with advanced hardware capable of providing Autopilot features today, and full self-driving capabilities in the future, through software updates designed to improve functionality over time."

Go enthusiasts will also remember the 2016 incident when Google-owned DeepMind's AlphaGo defeated the human Go world champion Lee Se-dol. This incident came at least a decade too soon. We know that Go is considered one of the most complex games in human history. And AI could learn it in just three days, to a level to beat a world champion who, I would assume, must have spent decades achieving that proficiency!

The next phase shall be to work on Singularity. Singularity can be understood as machines building better machines, all by themselves. In 1993, scientist Vernor Vinge published an essay in which he wrote, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Scientists are already working on the concept of technological singularity. If these achievements can be used in a controlled way, they can help several industries, for instance, healthcare, automobiles, and oil exploration.

I would also like to add here that Canadian universities are contributing significantly to developments in Artificial Intelligence. Along with Hinton and LeCun, I would like to mention Richard Sutton. Sutton, a Professor at the University of Alberta, is of the view that advancements in singularity can be expected around 2040. This makes me feel that when AI no longer needs human help, it will be a kind of species in and of itself.

To get to the next phase, however, we would need more computer power to achieve the goals of tomorrow.

Now that we have some background on the genesis of AI and some information on the experts who nourished this advancement all these years, it is time to understand a few key terms of AI. By the way, if you ask me, every scientist behind these developments is a new topic in themselves. I have tried to put a good number of researched sources in the article to generate your interest and support your knowledge of AI.

Big Data

With the Internet of Things (IoT), we are saving tons of data every second from every corner of the world. Consider, for instance, Google. It seems that it starts tracking our intentions as soon as we type the first letter on our keyboard. Now think for a second how much data is generated by all the internet users from all over the world. It's already making predictions of our likes, dislikes, actions, everything.

The concept of big data is important as it makes up the memory of Artificial Intelligence. It's like a parent sharing their experience with their child. If the child can learn from that experience, they develop cognitive abilities and venture into making their own judgments and decisions. Similarly, big data is the human experience that is shared with machines, and they develop on that experience. This can be supervised as well as unsupervised learning.

Symbolic Reasoning and Machine Learning

The basics of all processes are some mathematical patterns. I think that this is because math is something that is certain and easy to understand for all humans. 2 + 2 will always be 4 unless there is something we haven't figured out in the equation.

Symbolic reasoning is the traditional method of getting work done through machines. According to Pathmind, "to build a symbolic reasoning system, first humans must learn the rules by which two phenomena relate, and then hard-code those relationships into a static program." Symbolic reasoning in AI is also known as Good Old-Fashioned AI (GOFAI).
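A toy example makes the hard-coding point concrete. In the hypothetical rule base below (my illustration, not from any real GOFAI system), every relationship, including the exceptions, is written out by a human; nothing is learned from data:

```python
# Hard-coded rules, GOFAI-style: each fact and relationship is entered
# by hand as a (subject, relation) entry.
RULES = {
    ("bird", "can_fly"): True,
    ("penguin", "is_a"): "bird",
    ("penguin", "can_fly"): False,   # an explicit exception, also hand-coded
}

def can_fly(animal: str) -> bool:
    # Check for a specific rule first, then fall back to the parent category.
    if (animal, "can_fly") in RULES:
        return RULES[(animal, "can_fly")]
    parent = RULES.get((animal, "is_a"))
    return can_fly(parent) if parent else False

print(can_fly("bird"))     # → True
print(can_fly("penguin"))  # → False
```

The brittleness is visible immediately: an animal nobody thought to encode simply falls through the rules, whereas a machine learning system would generalize, well or badly, from examples.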

Machine Learning (ML) refers to the activity where we feed big data to machines and they identify patterns and understand the data by themselves. The outcomes are not predetermined, as the machines here are not programmed toward specific results. It's like a human brain, where we are free to develop our own thoughts. A video by ColdFusion explains ML thus: ML systems "analyze vast amounts of data and learn from their past mistakes. The result is an algorithm that completes its task effectively." ML works well with supervised learning.

Here I would like to make a quick tangent for all those creative individuals who need some motivation. I feel that all inventions were born out of creativity. Of course, creativity comes with some basic understanding and knowledge. Out of more than 7 billion brains, somewhere someone is thinking outside the box, verifying their thoughts, and trying to communicate their ideas. Creativity is vital for success. This may also explain why some of the most important inventions took place in a garage (Google and Microsoft). Take, for instance, a small creative tool like the pizza cutter. Someone must have thought about it. Every time I use it, I marvel at how convenient and efficient it is to slice a pizza without disturbing the toppings with that rolling cutter. Always stay creative and avoid preconceived ideas and stereotypes.

Alright, back to the topic!

Deep Learning

Deep Learning (DL) is a subset of ML. "This technology attempts to mimic the activity of neurons in our brain using matrix mathematics," explains ColdFusion. I found this article that describes DL well. With better computers and big data, it is now possible to venture into DL. Better computers provide the muscle, and big data provides the experience to a neural network. Together, they help a machine think and execute tasks just like a human would do. I would suggest reading the paper titled "Deep Learning" by LeCun, Bengio, and Hinton (2015) for a deeper perspective on DL.
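The "matrix mathematics" here is just repeated multiply-and-activate. Below is a minimal, pure-Python forward pass through a two-layer network; the weights are hand-picked and purely illustrative (a real network would learn them via backpropagation):

```python
def matvec(matrix, vec):
    """Multiply a matrix (a list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

def relu(vec):
    """Neuron activation: pass positive values through, clamp negatives to 0."""
    return [max(0.0, v) for v in vec]

def forward(x, layers):
    """A deep network is alternating matrix multiplies and nonlinearities,
    applied layer after layer."""
    for weights in layers:
        x = relu(matvec(weights, x))
    return x

# Two tiny hand-picked layers (hypothetical weights, not trained).
layers = [
    [[1.0, -1.0], [0.5, 0.5]],   # layer 1: 2 inputs -> 2 hidden units
    [[1.0, 1.0]],                # layer 2: 2 hidden units -> 1 output
]
print(forward([2.0, 1.0], layers))  # → [2.5]
```

Stacking more layers changes nothing structurally; "deep" just means the loop runs more times, with each layer transforming the previous layer's features.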

The ability of DL makes it a perfect companion for unsupervised learning. As big data is mostly unlabelled, DL processes it to identify patterns and make predictions. This not only saves a lot of time but also generates results that are completely new to a human brain. DL offers another benefit: it can work offline. This means, for instance, that a self-driving car can take instantaneous decisions while on the road.

What next?

I think that the most important future development will be AI coding AI to perfection, all by itself.

Neural nets designing neural nets have already started. Early signs of self-production are in sight. Google has already created programs that can produce their own code. This is called Automated Machine Learning, or AutoML. Sundar Pichai, CEO of Google and Alphabet, shared the experiment on his blog. "Today, designing neural nets is extremely time intensive, and requires an expertise that limits its use to a smaller community of scientists and engineers. That's why we've created an approach called AutoML, showing that it's possible for neural nets to design neural nets," said Pichai (2017).

Full AI capabilities will also trigger several other programs, like fully automated self-driving cars and full-service assistance in sectors like health care and hospitality.

Among the several useful programs of AI, ColdFusion has identified the five most impressive ones in terms of image outputs. These are AI generating an image from text (Plug and Play Generative Networks: Conditional Iterative Generation of Images in Latent Space), AI reading lip movements from a video with 95% accuracy (LipNet), AI creating new images from just a few inputs (Pix2Pix), AI improving the pixels of an image (Google Brain's Pixel Recursive Super Resolution), and AI adding color to black-and-white photos and videos (Let There Be Color). In the future, these technologies can be used for more advanced functions like law enforcement, et cetera.

AI can already generate images of non-existing humans and add sound and body movements to videos of individuals! In the coming years, these tools could be used for gaming purposes, or maybe fully capable multi-dimensional assistance like the one we see in the movie Iron Man. Of course, all these developments would require new AI laws to avoid misuse; however, that is a topic for another discussion.

Humans are advanced AI

Artificial Intelligence is getting so good at mimicking humans that it seems humans themselves are some sort of AI. The way Artificial Intelligence learns from data, retains information, and then develops analytical, problem-solving, and judgment capabilities is no different from a parent nurturing their child with their experience (data), and the child then remembering the knowledge and using their own judgment to make decisions.

We may want to remember here that there are a lot of things that even humans have not figured out with all their technology. A lot of things are still hidden from us in plain sight. For instance, we still don't know about all the living species in the Amazon rainforest. Astrology and astronomy are two other fields where, I think, very little is known. Air, water, land, and celestial bodies influence human behavior, and science has evidence for some of this. All this hints that we as humans are not in total control of ourselves. This feels similar to AI, which so far requires external intervention, like that of humans, to develop it.

I think that our past has answers to a lot of questions that may unravel our future. Take, for example, the Great Pyramid at Giza, Egypt, which we still marvel at for its mathematical accuracy and alignment with the earth's equator as well as the movements of celestial bodies. By the way, we could compare the measurements only because we have already reached a level to know the numbers relating to the equator.

Also, think of India's knowledge of astrology. It has so many diagrams of planetary movements that are believed to impact human behavior. These sketches have survived several thousand years. One of India's ancient languages, Vedic Sanskrit, is considered more than 4,000 years old, perhaps one of the oldest in human history. This was actually a question asked of IBM Watson during the 2011 Jeopardy competition. Understanding the literature in this language might unlock a wealth of information.

I feel that with the kind of technology we have in AI, we should put some of it to work to unearth our wisdom from the past. It is a possibility that if we overlook it, we may waste resources by reinventing the wheel.

See more here:

The world of Artificial... - The American Bazaar


'The World To Come': Review | Reviews – Screen International

Posted: at 2:26 am

Dir. Mona Fastvold. US. 2020. 98 mins.

It would be easy to sell The World to Come as the female Brokeback Mountain, but that would be to traduce the richness, singularity and command of Mona Fastvold's beautifully executed and acted drama. The story of female friendship blossoming into passionate love in a severe 1850s American rural setting, this is an austere but lyrical piece underwritten by a complex grasp of emotional and psychological nuance, and a second feature of striking command by the Norwegian-born director, following up her 2014 debut The Sleepwalker (she has also collaborated as a writer on Brady Corbet's features).

Understatement and interiority are the watchwords for a film which uses suggestion and period language very subtly

Scripted with heightened literary cadences by Ron Hansen and Jim Shepard, the film is well crafted in every respect, and marks an acting career high for Katherine Waterston, as well as a fine showcase for the ever more impressive Vanessa Kirby. Fastvold's maturely satisfying piece, picked up internationally by Sony Pictures, should find acclaim on the festival circuit, and upmarket distributors will hopefully find a way to highlight its appeal to discerning audiences on the big screen, where its stark elegance will truly flourish.

The film is framed with handwritten date captions as a diary kept in the 1850s in rural upstate New York by Abigail (Waterston), the young wife of farmer Dyer (Casey Affleck). Their relationship lies under the shadow of the recent death of their young daughter, and grief, along with the normal rigours of life in the remote countryside, is keeping them emotionally apart, with the thoughtful Abigail and the gentle but taciturn Dyer unable to communicate their feelings, as seems par for the course in a rural marriage of this period. One day, however, Abigail exchanges glances with a new neighbour, Tallie (Kirby), in a subtle hint of what could be classified as love at first sight. When Tallie pays a neighbourly visit, the two instantly bond, exchanging confidences, with Abigail's reserve gradually conquered by Tallie's candour and ironic knowingness about women's domestic lot, something she is familiar with, being married to the possessive Finney (Christopher Abbott).

Working over the seasons, beginning with a descent into a harshly forbidding winter, Fastvold teases out the shifts in the characters' lives, at first establishing a tone of pensive reserve, then setting a note of heightened peril (mortality, after all, really means something in this environment), notably in an extraordinary blizzard sequence. As the action enters another year, warmth comes into the two women's lives; at last their slow-simmering romance catches fire in tentative declarations followed by a first kiss, and the fond words, "You smell like a biscuit." There are flashes of overt sexual content, but used extremely sparingly and telegraphically towards the end, while Fastvold shows the meaning of Abigail's passion in subtle touches, like a moment where she lies back on a table, fully dressed, in a quiet swoon of rapture.

Acted with finely calibrated subtlety, the film uses close-ups sparingly but to resonant effect, contrasting the cautiousness with which Abigail reveals herself and the warmer, more openly expressive face of Tallie. Waterston and Kirby pull off something very finely balanced, conveying the enormity of their characters' emotions while speaking a stylised, formal, sometimes playful language: the script will be music to lovers of 19th-century American writing (Hawthorne, Emily Dickinson, Edith Wharton). As the two husbands, Affleck and Abbott contrast sharply, both playing deeply enclosed, solemn men, but of different emotional literacy, one with a capacity for moral generosity, the other shockingly without.

Understatement and interiority are the watchwords for a film which uses suggestion and period language very subtly. Poetry plays a part in the central relationship, but there's a poetic ring to the prose too, both in the dialogue and in Abigail's journal (both screenwriters are novelists, Ron Hansen having explored this period in The Assassination of Jesse James by the Coward Robert Ford, the film of which starred Casey Affleck as Ford). This is also very much a film about the power and necessity of writing, as suggested by a line that compares ink to fire: "a good servant and a hard master."

Ink on paper is also sometimes suggested by the look of the winter sequences, colours bled to monochrome. Shot on 16mm by André Chemetoff, the film at once captures the look of period photography and establishes a feeling of contemporary realism, with no alienating sense of historical distance. The grainy texture of the images, combined with Jean Vincent Puzos's meticulous design, somewhat recalls the American period films (Meek's Cutoff, First Cow) of Kelly Reichardt, with something of the severe grace of Terence Davies's best work.

There is also a distinctive score by Daniel Blumberg, foregrounding woodwinds, notably in the blizzard sequence, which has a feel of free jazz without being incongruous for the period (improvising legend Peter Brötzmann is featured on bass clarinet). The closing song, featuring singer Josephine Foster, catches the period feel perfectly over manuscript-style end credits.

Production companies: Seachange Media, Killer Films, Hype Films

International sales: Charades, sales@charades.eu

Producers: Casey Affleck, Whitaker Lader, Pamela Koffler, David Hinojosa, Margarethe Baillou

Screenplay: Ron Hansen, Jim Shepard

Based on the story by Jim Shepard

Cinematography: André Chemetoff

Editor: David Jancso

Production design: Jean Vincent Puzos

Music: Daniel Blumberg

Main cast: Katherine Waterston, Vanessa Kirby, Casey Affleck, Christopher Abbott

Read the original here:

'The World To Come': Review | Reviews – Screen International


What’s the magic behind Matthew Stafford’s mastery of the Lions’ offense? – Detroit Lions Blog- ESPN – ESPN

Posted: at 2:26 am

ALLEN PARK, Mich. -- The ball looked like it could have been intercepted easily. Jeff Okudah was in perfect position in the end zone. He read everything right. He was where he was supposed to be. It didn't matter.

Not even close.

Matthew Stafford put the ball where only his receiver, Marvin Hall, could catch it. It was a window so small that realistically only the football could have fit through for the play to work. You could say this is only one play in a training camp and might not be indicative of how Stafford played in practice throughout August.


Except this wasn't a singularity. It happened to Amani Oruwariye against Kenny Golladay. It happened to Jahlani Tavai and ended up in the hands of Marvin Jones. Combine that with Stafford's arm strength -- which remains among the best in the league -- and there's reason to think the 12-year veteran might be on the cusp of a season in which he fulfills the potential that's surrounded him since he was drafted, both in his physical abilities and his knowledge of exactly where to throw the ball and when.

"He's a wizard, man," said backup quarterback Chase Daniel, who has known Stafford since high school. "It's impressive. His recall of plays, a photographic memory, all that stuff you want in a quarterback. It's impressive and makes you want to work harder, and it's why he's been one of the best quarterbacks in the league going on 12 years now."

It isn't a practice thing, either. He's done it during games, too -- either with the help of Calvin Johnson earlier in his career or throws that make you wonder how he pulled it off the past few seasons, including a pass through three Kansas City defenders for a touchdown to Golladay in Week 4 last season.

"I wish more people could appreciate it," backup quarterback David Blough said.

At the time, Blough still was learning about his new teammate. A rookie out of Purdue who was traded to Detroit from Cleveland at the roster cuts deadline, Blough had only watched Stafford from afar on television and remembered him from growing up just outside Dallas himself, when Stafford was in high school.

The next day, in the quarterback meeting room, Blough got to see a small bit of Stafford's personality. He almost shrugged it off as "he's just doing his job," although Blough said you might get a wink from him as he's saying it.

This always has been who Stafford is -- from top-rated high school recruit to top-rated college quarterback and then the No. 1 pick in the 2009 draft. He's thrown for a 5,000-yard season and holds a bevy of fastest-to NFL records.

He's led 28 fourth-quarter comebacks, tied with Brett Favre for No. 11 in history. He's No. 18 in all-time passing yards, with 41,025, and if he has at least a 4,000-yard season he'll pass Dan Fouts and Drew Bledsoe to be No. 16 all-time. His 256 touchdowns are No. 19 all-time, and he's 35 touchdown passes away from moving into the top 15.

He is also, at age 32, perhaps playing better than he ever has. Before he suffered broken bones in his back last season, sending him to injured reserve, he was playing at a Pro Bowl level in the first year in Darrell Bevell's offense, completing 64.3 percent of his passes for 2,499 yards, 19 touchdowns and five interceptions.

Had he played a full season, he might have reached 5,000 yards for the second time. While he's played in other offenses before -- becoming prolific in Scott Linehan's Air Raid offense early in his career and then more efficient in the Jim Caldwell/Jim Bob Cooter system for five years after that -- it's possible Bevell's offense fits him better than the others.

It meshes a mix of play-action and a focus on the run game with enough attempts at bigger, explosive plays to take advantage of Stafford's arm and the ability of Golladay and Jones to win contested catches.

"When we're out there at quarterback, we're empowered to throw," Blough said. "Take shots, take shots, take shots. [Bevell] keeps calling them, and I think Matthew feels encouraged by that and confident."

While it appears he has mastery over Bevell's system -- and Stafford is reaching a point in his career where almost any offense is going to be something he picks up quickly -- Bevell has noticed some small, subtle changes entering another season with Stafford, something that could make a great quarterback even better.

"He might be even a little bit quicker on some of the decisions he's making," Bevell said. "We really have put an emphasis on his speed. Starting with last year when we got here and how your feet correspond to the plays, I think he's done a nice job with that.

"I mean, he's just a special talent in terms of throwing the football. It just looks so effortless. He can just flick it, and the ball's flying out of his hands. He's always been impressive that way."


It's something his teammates have known and his coaches have learned as they've worked with him. It's something the public has understood in fits and starts, but if Stafford can stay healthy in 2020 and manage his team through an abnormal season in a global pandemic, it's possible he might be able to do one thing that could get him more recognition.

Win the Lions' first division title since 1993, when Stafford was 5 years old.

Excerpt from:

What's the magic behind Matthew Stafford's mastery of the Lions' offense? - Detroit Lions Blog- ESPN - ESPN

Posted in Singularity | Comments Off on What’s the magic behind Matthew Stafford’s mastery of the Lions’ offense? – Detroit Lions Blog- ESPN – ESPN

If you flew your spaceship through a wormhole, could you make it out alive? Maybe… – SYFY WIRE

Posted: at 2:26 am

Can you already hear Morgan Freeman's sonorous voice, as if this were another episode of Through the Wormhole?

Astrophysicists have figured out a way to traverse a (hypothetical) wormhole that defies the usual thinking that wormholes (if they exist) would either take longer to get through than the rest of space or be microscopic. These wormholes just have to warp the rules of physics, which is totally fine, since they would exist in the realm of quantum physics. Freaky things could happen when you go quantum. If wormholes do exist, some of them might be large enough for a spacecraft to not only fit through, but get from this part of the universe to wherever else in one piece.

"Larger wormholes are possible with a special type of dark sector, a type of matter that interacts only gravitationally with our own matter. The usual dark matter is an example. However, the one we assumed involves a dark sector that consists of an extra-dimensional geometry," Princeton astrophysicist Juan Maldacena and grad student Alexey Milekhin told SYFY WIRE. They recently performed a new study that reads like a scientific dissection of what exactly happened to John Crichton's spaceship when it zoomed through a wormhole in Farscape.

"This type of larger wormhole is based on the realization that a five-dimensional spacetime could be describing physics at lower energies than the ones we usually explore, but that it would have escaped detection because it couples with our matter only through gravity," Maldacena and Milekhin said. "In fact, its physics is similar to adding many strongly interacting massless fields to the known physics, and for this reason it can give rise to the required negative energy."

While the existence of wormholes has never been proven, you could defend theories that they exist deep in the quantum realm. The problem is, even if they do exist, they are thought to be infinitesimal. Hypothetical wormholes would also take so long to get across that you'd basically be a space fossil by the time you got to the other end. Maldacena and Milekhin have found a theoretical way for a wormhole that could get you across the universe in seconds and manage not to crush your spacecraft. At least, it would seem like seconds to you. To everyone else on Earth, it could be ten thousand years. Scary thought.

"Usually when people discuss wormholes, they have in mind 'short' wormholes: the ones for which the travel time would be almost instantaneous even for a distant observer. We think that such wormholes are inconsistent with the basic principles of relativity," the scientists said. "The ones we considered are 'long': for a distant observer, the path along normal space-time is shorter than through the wormhole. There is a time-dilation factor because the extreme gravity makes travel time very short for the traveler. For an outsider, the time it takes is much longer, so we have consistency with the principles of relativity, which forbid travel faster than the speed of light."
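To make that asymmetry concrete, here is a minimal sketch of proper time versus outside time. All numbers are hypothetical placeholders chosen for illustration; the dilation factor is not a value from the study.

```python
# Hypothetical illustration of the asymmetry the scientists describe:
# proper time for the traveler vs. elapsed time for a distant observer.
# The dilation factor below is an assumed number, not one from the study.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def outside_time_seconds(traveler_seconds: float, dilation_factor: float) -> float:
    """Time elapsed for a distant observer, given the traveler's proper
    time and an overall time-dilation factor."""
    return traveler_seconds * dilation_factor

# A trip that feels like 10 seconds, with an assumed factor of ~3.16e10
# (the ratio needed to stretch those seconds into about 10,000 years):
years_outside = outside_time_seconds(10.0, 3.156e10) / SECONDS_PER_YEAR
print(f"Outside observers wait roughly {years_outside:,.0f} years")
```

The point of the sketch is only that one enormous multiplicative factor turns a subjectively instant trip into millennia, which is how the "long" wormhole stays consistent with relativity.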

For traversable wormholes to exist, the vacuum of space would have to be cold and flat to allow for what they theorize. Space is already cold. Just pretend that it's flat for the sake of imagining Maldacena and Milekhin's brainchild of a wormhole.

"These wormholes are big, the gravitational forces will be rather small. So, if they were in empty flat space, they would not be hazardous. We chose their size to be big enough so that they would be safe from large gravitational forces," they said.

Negative energy would also have to exist in a traversable wormhole, and classical physics forbids such a thing from being a reality. In quantum physics, the concept of this exotic energy was explained by Stephen Hawking as the energy deficit that arises when two pieces of matter sit close together rather than far apart, because energy must be spent to separate them against the gravitational pull drawing them back together. In Maldacena and Milekhin's setup, fermions -- the class of particles that includes electrons, protons, and neutrons (though here they would need to be massless) -- would enter one end and travel in circles, coming out exactly where they went in, which suggests that this modification of the vacuum's energy can make it negative.

"Early theorized wormholes were not traversable; an observer going through a wormhole encounters a singularity before reaching the other side, which is related to the fact that positive energy tends to attract matter and light," the scientists said. "This is why spacetime shrinks at the singularity of a black hole. Negative energy prevents this. The main problem is that the particular type of negative energy that is needed is not possible in classical physics, and in quantum physics it is only possible in some limited amounts and for special circumstances."

Say you make it to a gaping wormhole ready to take you...nobody knows where. What would it feel like to travel through it? Probably not unlike Space Mountain, if you ask Maldacena and Milekhin. In their study, they described these wormholes as "the ultimate roller coaster."

The only thing a spaceship pilot would need to do, unlike Farscape's Crichton, who totally lost control, is get the ship in sync with the tidal forces of the wormhole so it would be in the right position to take off. These are the forces that push or pull one object relative to another depending on the difference in gravitational strength across it, and that gravity would power the spaceship through. This is why it would basically end up flying itself. But there are still obstacles.

"The problem is that every object which enters the wormhole will be accelerated to very high energies," the scientists said. "It means that a wormhole must be kept extremely clean to be safe for human travel. In particular, even the pervasive cosmic microwave radiation, which has very low energy, would be boosted to high energies and become dangerous for the wormhole traveler."
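The tidal forces mentioned above can also be made concrete with a short, hedged sketch: in plain Newtonian terms, the near end of a ship feels a slightly stronger pull than the far end, and that difference is the stretching force. Every number here is a hypothetical placeholder, not a value from the study.

```python
# Hedged sketch of a tidal (differential gravity) force: the difference in
# Newtonian gravitational acceleration between the near and far ends of a
# ship pointed at a mass. All numbers are hypothetical, not from the study.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tidal_acceleration(mass_kg: float, distance_m: float, ship_length_m: float) -> float:
    """Acceleration difference (m/s^2) across a ship of the given length,
    with its near end at distance_m from the mass."""
    near_end = G * mass_kg / distance_m ** 2
    far_end = G * mass_kg / (distance_m + ship_length_m) ** 2
    return near_end - far_end

# An Earth-mass object seen from 1,000 km away, across a 100 m ship:
print(tidal_acceleration(5.97e24, 1.0e6, 100.0))  # ~0.08 m/s^2
```

For ships much shorter than the distance to the mass, this difference matches the textbook approximation 2GML/r^3, which is why Maldacena and Milekhin could choose a wormhole "big enough" that the stretching stays gentle.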

So maybe this will never happen. Wormholes may never actually be proven to exist. Even if they don't, it's wild to think about the way quantum physics could even allow for a wormhole that you could coast right through.

Read more:

If you flew your spaceship through a wormhole, could you make it out alive? Maybe... - SYFY WIRE

Posted in Singularity | Comments Off on If you flew your spaceship through a wormhole, could you make it out alive? Maybe… – SYFY WIRE
