That "Research" About How Smartphones Are Causing Deformed Human Bodies Is SEO Spam, You Idiots

You know that "research" going around saying humans are going to evolve to have hunchbacks and claws because of the way we use our smartphones? Though our posture could certainly use some work, you'll be glad to know that it's just lazy spam intended to juice search engine results.

Let's back up. Today the Daily Mail published a viral story about "how humans may look in the year 3000." Among its predictions: hunched backs, clawed hands, a second eyelid, a thicker skull and a smaller brain.

Sure, that's fascinating! The only problem? The Mail's only source is a post published a year ago by the renowned scientists at... uh... TollFreeForwarding.com, a site that sells, as its name suggests, virtual phone numbers.

If the idea that phone salespeople are purporting to be making predictions about human evolution didn't tip you off, this "research" doesn't seem very scientific at all. Instead, it more closely resembles what it actually is — a blog post written by some poor grunt, intended to get backlinks from sites like the Mail that'll juice TollFreeForwarding's position in search engine results.

To get those delicious backlinks, the top minds at TollFreeForwarding leveraged renders of a "future human" by a 3D model artist. The result of these efforts is "Mindy," a creepy-looking hunchback in black skinny jeans (which is how you can tell she's from a different era).

Grotesque model reveals what humans could look like in the year 3000 due to our reliance on technology

Full story: https://t.co/vQzyMZPNBv pic.twitter.com/vqBuYOBrcg

— Daily Mail Online (@MailOnline) November 3, 2022

"To fully realize the impact everyday tech has on us, we sourced scientific research and expert opinion on the subject," the TollFreeForwarding post reads, "before working with a 3D designer to create a future human whose body has physically changed due to consistent use of smartphones, laptops, and other tech."

Its sources, though, are dubious. Its authority on spinal development, for instance, is a "health and wellness expert" at a site that sells massage lotion. His highest academic achievement? A business degree.

We could go on and on about TollFreeForwarding's dismal sourcing — some of which looks suspiciously like even more SEO spam for entirely different clients — but you get the idea.

It's probably not surprising that this gambit for clicks took off among dingbats on Twitter. What is somewhat disappointing is that it ended up on StudyFinds, a generally reliable blog about academic research. This time, though, for inscrutable reasons, it treated this egregious SEO spam as a legitimate scientific study.

The site's readers, though, were quick to call it out, leading to a comically enormous editor's note appended to the story.

"Our content is intended to stir debate and conversation, and we always encourage our readers to discuss why or why not they agree with the findings," it reads in part. "If you heavily disagree with a report — please debunk to your delight in the comments below."

You heard them! Get debunking, people.

Mind map – Wikipedia

A mind map is a diagram used to visually organize information into a hierarchy, showing relationships among pieces of the whole.[1] It is often created around a single concept, drawn as an image in the center of a blank page, to which associated representations of ideas such as images, words and parts of words are added. Major ideas are connected directly to the central concept, and other ideas branch out from those major ideas.

Mind maps can also be drawn by hand, either as "notes" during a lecture, meeting or planning session, for example, or as higher quality pictures when more time is available. Mind maps are considered to be a type of spider diagram.[2] A similar concept in the 1970s was "idea sun bursting".[3]

Although the term "mind map" was first popularized by British popular psychology author and television personality Tony Buzan, the use of diagrams that visually "map" information using branching and radial maps traces back centuries. These pictorial methods record knowledge and model systems, and have a long history in learning, brainstorming, memory, visual thinking, and problem solving by educators, engineers, psychologists, and others. Some of the earliest examples of such graphical records were developed by Porphyry of Tyros, a noted thinker of the 3rd century, as he graphically visualized the concept categories of Aristotle. Philosopher Ramon Llull (12351315) also used such techniques.

The semantic network was developed in the late 1950s as a theory to understand human learning and developed further by Allan M. Collins and M. Ross Quillian during the early 1960s. Mind maps are similar in structure to concept maps, developed by learning experts in the 1970s, but differ in that mind maps are simplified by focusing around a single central key concept.

Buzan's specific approach, and the introduction of the term "mind map", arose during a 1974 BBC TV series he hosted, called Use Your Head.[4][5] In this show, and companion book series, Buzan promoted his conception of radial tree, diagramming key words in a colorful, radiant, tree-like structure.[6]

Buzan says the idea was inspired by Alfred Korzybski's general semantics as popularized in science fiction novels, such as those of Robert A. Heinlein and A. E. van Vogt. He argues that while "traditional" outlines force readers to scan left to right and top to bottom, readers actually tend to scan the entire page in a non-linear fashion. Buzan's treatment also uses then-popular assumptions about the functions of cerebral hemispheres in order to explain the claimed increased effectiveness of mind mapping over other forms of note making.

Cunningham (2005) conducted a user study in which 80% of the students thought "mindmapping helped them understand concepts and ideas in science".[9] Other studies also report some subjective positive effects on the use of mind maps.[10][11] Positive opinions on their effectiveness, however, were much more prominent among students of art and design than in students of computer and information technology, with 62.5% vs 34% (respectively) agreeing that they were able to understand concepts better with mind mapping software.[10] Farrand, Hussain, and Hennessy (2002) found that spider diagrams (similar to concept maps) had limited, but significant, impact on memory recall in undergraduate students (a 10% increase over baseline for a 600-word text only) as compared to preferred study methods (a 6% increase over baseline).[12] This improvement was only robust after a week for those in the diagram group and there was a significant decrease in motivation compared to the subjects' preferred methods of note taking. A meta study about concept mapping concluded that concept mapping is more effective than "reading text passages, attending lectures, and participating in class discussions".[13] The same study also concluded that concept mapping is slightly more effective "than other constructive activities such as writing summaries and outlines". However, results were inconsistent, with the authors noting "significant heterogeneity was found in most subsets". In addition, they concluded that low-ability students may benefit more from mind mapping than high-ability students.

Joeran Beel and Stefan Langer conducted a comprehensive analysis of the content of mind maps.[14] They analysed 19,379 mind maps from 11,179 users of the mind mapping applications SciPlore MindMapping (now Docear) and MindMeister. They found that the average user creates only a few mind maps (mean = 2.7), that the average mind map is rather small (31 nodes), and that each node contains about three words (median). However, there were exceptions: one user created more than 200 mind maps, the largest mind map consisted of more than 50,000 nodes, and the largest node contained ~7,500 words. The study also showed that significant differences exist between mind mapping applications (Docear vs MindMeister) in how users create mind maps.

There have been some attempts to create mind maps automatically. Brucks & Schommer created mind maps automatically from full-text streams.[15] Rothenberger et al. extracted the main story of a text and presented it as a mind map.[16] There is also a patent application about automatically creating sub-topics in mind maps.[17]

Mind-mapping software can be used to organize large amounts of information, combining spatial organization, dynamic hierarchical structuring and node folding. Software packages can extend the concept of mind-mapping by allowing individuals to map more than thoughts and ideas with information on their computers and the Internet, like spreadsheets, documents, Internet sites and images.[18] It has been suggested that mind-mapping can improve learning/study efficiency up to 15% over conventional note-taking.[12]

The following dozen examples of mind maps show the range of styles that a mind map may take, from hand-drawn to computer-generated and from mostly text to highly illustrated. Despite their stylistic differences, all of the examples share a tree structure that hierarchically connects sub-topics to a main topic.

Mind uploading in fiction – Wikipedia

Mind uploading, whole brain emulation, or substrate-independent minds, is a use of a computer or another substrate as an emulated human brain. The term "mind transfer" also refers to a hypothetical transfer of a mind from one biological brain to another. Uploaded minds and societies of minds, often in simulated realities, are recurring themes in science-fiction novels and films since the 1950s.

A story featuring an artificial brain that replicates the personality of a specific person is "The Infinite Brain" by John Scott Campbell, written under the name John C. Campbell,[1] and published in the May 1930 issue of Science Wonder Stories.[2] The artificial brain is created by an inventor named Anton Des Roubles, who tells the narrator that "I am attempting to construct a mechanism exactly duplicating the mechanical and electrical processes occurring in the human brain and constituting the phenomena known as thought." The narrator later learns that Des Roubles has died, and on visiting his laboratory, finds a machine that can communicate with him via typed messages, and which tells him "I, Anton Des Roubles, am dead – my body is dead – but I still live. I am this machine. These racks of apparatus are my brains, which is thinking even as yours is. Anton Des Roubles is dead but he has built me, his exact mental duplicate, to carry on his life and work." The machine also tells him "He made my brain precisely like his, built three hundred thousand cells for my memory, and filled two hundred thousand of them with his own knowledge. I have his personality; it is my own through a process I will tell you of later. ... I think just as you do. I have a consciousness as have other men." He then explains his discovery that the electrical impulses in the brain create magnetic fields that can be detected by a device he built called a "Telepather", and that "[t]hrough this instrument any one's mental condition can be exactly duplicated." Later, he enlists the narrator's help in constructing a new type of artificial brain that will retain his memories but possess an expanded intellect, though the experiment does not go as planned, as the new intelligence has a radically different personality and soon sets out to conquer the world.

An early story featuring technological transfer of memories and personality from one brain to another is "Intelligence Undying" by Edmond Hamilton, first published in the April 1936 issue of Amazing Stories. In this story, an elderly scientist named John Hanley explains that when humans are first born, "our minds are a blank sheet except for certain reflexes which we all inherit. But from our birth onward, our minds are affected by all about us, our reflexes are conditioned, as the behaviorists say. All we experience is printed on the sheet of our minds. ... Everything a human being learns, therefore, simply establishes new connections between the nerve cells of the brain. ... As I said, a newborn child has no such knowledge connections in his cortex at all – he has not yet formed any. Now if I take that child immediately after birth and establish in his brain exactly the same web of intricate neurone connections I have built up in my own brain, he will have exactly the same mind, memories, knowledge, as I have ... his mind will be exactly identical with my mind!" He then explains he has developed a technique to do just this, saying "I've devised a way to scan my brain's intricate web of neurone connections by electrical impulses, and by means of those impulses to build up an exactly identical web of neurone connections in the infant's brain. Just as a television scanning-disk can break down a complicated picture into impulses that reproduce the picture elsewhere." He adds that the impulses scanning his brain will kill him, but the "counter-impulses" imprinting the same pattern on the baby's brain will not harm him. The story shows the successful transfer of John Hanley's mind to the baby, whom he describes as "John Hanley 2nd", and then skips forward to the year 3144 to depict "John Hanley, 21st" using his advanced technology to become the ruler of the Earth in order to end a war between the two great political powers of the time, and then further ahead to "John Hanley, 416th" helping to evacuate humanity to the planet Mercury in response to the Sun shrinking into a white dwarf. He chooses to remain on Earth awaiting death, so that people would "learn once more to do for themselves, would become again a strong and self-reliant race", with Hanley concluding that he "had been wrong in living as a single super-mind down through the ages. He saw that now, and now he was undoing that wrong."

A story featuring human minds replicated in a computer is the novella Izzard and the Membrane by Walter M. Miller, Jr., first published in May 1951.[3] In this story, an American cyberneticist named Scott MacDonney is captured by Russians and made to work on an advanced computer, Izzard, which they plan to use to coordinate an attack on the United States. He has conversations with Izzard as he works on it, and when he asks it if it is self-aware, it says "answer indeterminate" and then asks "can human individual's self-awareness transor be mechanically duplicated?" MacDonney is unfamiliar with the concept of a self-awareness transor (it is later revealed that this information was loaded into Izzard by a mysterious entity who may or may not be God[4]), and Izzard defines it by saying "A self-awareness transor is the mathematical function which describes the specific consciousness pattern of one human individual."[5] It is later found that this mathematical function can indeed be duplicated, although not by a detailed scan of the individual's brain as in later notions of mind uploading; instead, MacDonney just has to describe the individual verbally in sufficient detail, and Izzard uses this information to locate the transor in the appropriate "mathematical region". In Izzard's words, "to duplicate consciousness of deceased, it will be necessary for you to furnish anthropometric and psychic characteristics of the individual. These characteristics will not determine transor, but will only give its general form. Knowing its form, will enable me to sweep my circuit pattern through its mathematical region until the proper transor is reached. At that point, the consciousness will appear among the circuits."[6] Using this method, MacDonney is able to recreate the mind of his dead wife in Izzard's memory, as well as create a virtual duplicate of himself, which seems to have a shared awareness with the biological MacDonney.

In The Altered Ego by Jerry Sohl (1954), a person's mind can be "recorded" and used to create a "restoration" in the event of their death. In a restoration, the person's biological body is repaired and brought back to life, and their memories are restored to the last time that they had their minds recorded (what the story calls a 'brain record'[7]), an early example of a story in which a person can create periodic backups of their own mind which are stored in an artificial medium. The recording process is not described in great detail, but it is mentioned that the recording is used to create a duplicate or "dupe" which is stored in the "restoration bank",[8] and at one point a lecturer says that "The experience of the years, the neurograms, simple memory circuits – neurons, if you wish – stored among these nerve cells, are transferred to the dupe, a group of more than ten billion molecules in colloidal suspension. They are charged much as you would charge the plates of a battery, the small neuroelectrical impulses emanating from your brain during the recording session being duplicated on the molecular structure in the solution."[9] During restoration, they take the dupe and "infuse it into an empty brain",[9] and the plot turns on the fact that it is possible to install one person's dupe in the body of a completely different person.[10]

An early example featuring uploaded minds in robotic bodies can be found in Frederik Pohl's story "The Tunnel Under the World" from 1955.[11] In this story, the protagonist Guy Burckhardt continually wakes up on the same date from a dream of dying in an explosion. Burckhardt is already familiar with the idea of putting human minds in robotic bodies, since this is what is done with the robot workers at the nearby Contro Chemical factory. As someone has once explained it to him, "each machine was controlled by a sort of computer which reproduced, in its electronic snarl, the actual memory and mind of a human being ... It was only a matter, he said, of transferring a man's habit patterns from brain cells to vacuum-tube cells." Later in the story, Pohl gives some additional description of the procedure: "Take a master petroleum chemist, infinitely skilled in the separation of crude oil into its fractions. Strap him down, probe into his brain with searching electronic needles. The machine scans the patterns of the mind, translates what it sees into charts and sine waves. Impress these same waves on a robot computer and you have your chemist. Or a thousand copies of your chemist, if you wish, with all of his knowledge and skill, and no human limitations at all." After some investigation, Burckhardt learns that his entire town had been killed in a chemical explosion, and the brains of the dead townspeople had been scanned and placed into miniature robotic bodies in a miniature replica of the town (as a character explains to him, 'It's as easy to transfer a pattern from a dead brain as a living one'), so that a businessman named Mr. Dorchin could charge companies to use the townspeople as test subjects for new products and advertisements.

Something close to the notion of mind uploading is very briefly mentioned in Isaac Asimov's 1956 short story The Last Question: "One by one Man fused with AC, each physical body losing its mental identity in a manner that was somehow not a loss but a gain." A more detailed exploration of the idea (and one in which individual identity is preserved, unlike in Asimov's story) can be found in Arthur C. Clarke's novel The City and the Stars, also from 1956 (this novel was a revised and expanded version of Clarke's earlier story Against the Fall of Night, but the earlier version did not contain the elements relating to mind uploading). The story is set in a city named Diaspar one billion years in the future, where the minds of inhabitants are stored as patterns of information in the city's Central Computer in between a series of 1000-year lives in cloned bodies. Various commentators identify this story as one of the first (if not the first) to deal with mind uploading, human-machine synthesis, and computerized immortality.[12][13][14][15]

Another of the "firsts" is the novel Detta r verkligheten (This is reality), 1968, by the renowned philosopher and logician Bertil Mrtensson, a novel in which he describes people living in an uploaded state as a means to control overpopulation. The uploaded people believe that they are "alive", but in reality they are playing elaborate and advanced fantasy games. In a twist at the end, the author changes everything into one of the best "multiverse" ideas of science fiction.

In Robert Silverberg's To Live Again (1969), an entire worldwide economy is built up around the buying and selling of "souls" (personas that have been tape-recorded at six-month intervals), allowing well-heeled consumers the opportunity to spend tens of millions of dollars on a medical treatment that uploads the most recent recordings of archived personalities into the minds of the buyers. Federal law prevents people from buying a "personality recording" unless the possessor first had died; similarly, two or more buyers were not allowed to own a "share" of the persona. In this novel, the personality recording always went to the highest bidder. However, when one attempted to buy (and therefore possess) too many personalities, there was the risk that one of the personas would wrest control of the body from the possessor.

In the 1982 novel Software, part of the Ware Tetralogy by Rudy Rucker, one of the main characters, Cobb Anderson, has his mind downloaded and his body replaced with an extremely human-like android body. The robots who persuade Anderson into doing this sell the process to him as a way to become immortal.

In William Gibson's award-winning Neuromancer (1984), which popularized the concept of "cyberspace", a hacking tool used by the main character is an artificial infomorph of a notorious cyber-criminal, Dixie Flatline. The infomorph only assists in exchange for the promise that he be deleted after the mission is complete.

The fiction of Greg Egan has explored many of the philosophical, ethical, legal, and identity aspects of mind transfer, as well as the financial and computing aspects (i.e. hardware, software, processing power) of maintaining "copies." In Egan's Permutation City (1994), Diaspora (1997) and Zendegi (2010), "copies" are made by computer simulation of scanned brain physiology. See also Egan's "jewelhead" stories, where the mind is transferred from the organic brain to a small, immortal backup computer at the base of the skull, the organic brain then being surgically removed.

The movie The Matrix is commonly mistaken for a mind uploading movie, but, aside from suggestions in later movies, it is only about virtual reality and simulated reality, since the main character Neo's physical brain is still required for his mind to reside in. The mind (the information content of the brain) is not copied into an emulated brain in a computer. Neo's physical brain is connected into the Matrix via a brain-machine interface. Only the rest of the physical body is simulated. Neo is disconnected from and reconnected to this dreamworld.

James Cameron's 2009 movie Avatar has so far been the commercially most successful example of a work of fiction that features a form of mind uploading. Throughout most of the movie, the hero's mind has not actually been uploaded and transferred to another body, but is simply controlling the body from a distance, a form of telepresence. However, at the end of the movie the hero's mind is uploaded into Eywa, the mind of the planet, and then back into his Avatar body.

Mind transfer is a theme in many other works of science fiction in a wide range of media. Specific examples include the following:

Machines with Minds? The Lovelace Test vs. the Turing Test – Walter Bradley Center for Natural and Artificial Intelligence

Non-Computable You: What You Do That Artificial Intelligence Never Will (Discovery Institute Press, 2022) by Robert J. Marks is available here. What follows is an excerpt from Chapter 2.

Selmer Bringsjord and his colleagues have proposed the Lovelace test as a substitute for the flawed Turing test. The test is named after Ada Lovelace.

Bringsjord defined software creativity as passing the Lovelace test if the program does something that cannot be explained by the programmer or an expert in computer code.2 Computer programs can generate unexpected and surprising results.3 Results from computer programs are often unanticipated. But the question is, does the computer create a result that the programmer, looking back, cannot explain?

When it comes to assessing creativity (and therefore consciousness and humanness), the Lovelace test is a much better test than the Turing test. If AI truly produces something surprising which cannot be explained by the programmers, then the Lovelace test will have been passed and we might in fact be looking at creativity. So far, however, no AI has passed the Lovelace test.4 There have been many cases where a machine looked as if it were creative, but on closer inspection, the appearance of creative content fades.

Here are a couple of examples.

A computer program named AlphaGo was taught to play GO, the most difficult of all popular board games. AlphaGo was an impressively monumental contribution to machine intelligence. AI already had mastered tic-tac-toe, then the more complicated game of checkers, and then the still more complicated game of chess. Conquest of GO remained an unmet goal of AI until it was finally achieved by AlphaGo.

In a match against (human) world champion Lee Sedol in 2016, AlphaGo made a surprising move. Those who understood the game described the move as ingenious and unlike anything a human would ever do.

Were we seeing the human attribute of creativity in AlphaGo beyond the intent of the programmers? Does this act pass the Lovelace test?

The programmers of AlphaGo claim that they did not anticipate the unconventional move. This is probably true. But AlphaGo is trained to play GO by the programmers. GO is a board game with fixed rules in a static, never-changing arena. And that's what the AI did, and did well. It applied programmed rules within a narrow, rule-bound game. AlphaGo was trained to play GO, and that's what it did.

So, no. The Lovelace test was not passed. If the AlphaGo AI were to perform a task it was not programmed to do, like beating all comers at the simple game of Parcheesi, the Lovelace test would be passed. But as it stands, AlphaGo is not creative. It can only perform the task it was trained for, namely playing GO. If asked, AlphaGo is unable to even explain the rules of GO.

This said, AI can appear smart when it generates a surprising result. But surprise does not equate to creativity. When a computer program is asked to search through a billion designs to find the best, the result can be a surprise. But that isn't creativity. The computer program has done exactly what it was programmed to do.

Here's another example from my personal experience. The Office of Naval Research contracted Ben Thompson, of Penn State's Applied Research Lab, and me and asked us to evolve swarm behavior. As we saw in Chapter 1, simple swarm rules can result in unexpected swarm behavior like stacking Skittles. Given simple rules, finding the corresponding emergent behavior is easy. Just run a simulation. But the inverse design problem is a more difficult one. If you want a swarm to perform some task, what simple rules should the swarm bugs follow? To solve this problem, we applied an evolutionary computing AI. This process ended up looking at thousands of possible rules to find the set that gave the closest solution to the desired performance.
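
To make the shape of that search concrete, here is a minimal sketch of an evolutionary loop over swarm rule parameters. It is purely illustrative: the names (simulate_swarm, random_rules, the weight vector) are hypothetical stand-ins rather than the actual code used in the project described above, and the toy fitness function exists only so the sketch runs.

    import random

    def simulate_swarm(rules):
        # Toy stand-in for the real predator-prey simulation: "survival time"
        # here is just an arbitrary function of the rule weights so the sketch runs.
        return -sum((w - 0.5) ** 2 for w in rules)

    def random_rules():
        # A rule set is modeled as a vector of weights (e.g., attraction/repulsion).
        return [random.uniform(-1.0, 1.0) for _ in range(8)]

    def mutate(rules, rate=0.1):
        return [w + random.gauss(0, rate) for w in rules]

    def evolve(generations=200, pop_size=50, keep=10):
        population = [random_rules() for _ in range(pop_size)]
        for _ in range(generations):
            # Fitness = how long the dweeb swarm survives under these rules.
            ranked = sorted(population, key=simulate_swarm, reverse=True)
            parents = ranked[:keep]
            # Refill the population with mutated copies of the best rule sets.
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(pop_size - keep)]
        return max(population, key=simulate_swarm)

    best_rules = evolve()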

One problem we looked at involved a predator-prey swarm. All action took place in a closed square virtual room. Predators, called bullies, ran around chasing prey called dweebs. Bullies captured dweebs and killed them. We wondered what performance would be if the goal was maximizing the survival time of the dweeb swarm. The swarm's survival time was measured up to when the last dweeb was killed.

After running the evolutionary search, we were surprised by the result: The dweebs submitted themselves to self-sacrifice in order to maximize the overall life of the swarm.

This is what we saw: A single dweeb captured the attention of all the bullies, who chased the dweeb in circles around the room. Around and around they went, adding seconds to the overall life of the swarm. During the chase, all the other dweebs huddled in the corner of the room, shaking with what appeared to be fear. Eventually, the pursuing bullies killed the sacrificial dweeb, and pandemonium broke out as the surviving dweebs scattered in fear. Eventually another sacrificial dweeb was identified, and the process repeated. The new sacrificial dweeb kept the bullies running around in circles while the remaining dweebs cowered in a corner.

The sacrificial dweeb result was unexpected, a complete surprise. There was nothing written in the evolutionary computer code explicitly calling for these sacrificial dweebs. Is this an example of AI doing something we had not programmed it to do? Did it pass the Lovelace test?

Absolutely not.

We had programmed the computer to sort through millions of strategies that would maximize the life of the dweeb swarm, and that's what the computer did. It evaluated options and chose the best one. The result was a surprise, but it does not pass the Lovelace test for creativity. The program did exactly what it was written to do. And the seemingly frightened dweebs were not, in reality, shaking with fear; humans tend to project human emotions onto non-sentient things. They were rapidly adjusting to stay as far away as possible from the closest bully. They were programmed to do this.

If the sacrificial dweeb action and the unexpected GO move against Lee Sedol do not pass the Lovelace test, what would? The answer is, anything outside of what code was programmed to do.

Here's an example from the predator-prey swarm. The Lovelace test would be passed if some dweebs became aggressive and started attacking and killing lone bullies – a potential action we did not program into the suite of possible strategies. But that didn't happen and, because the ability of a dweeb to kill a bully is not written into the code, it will never happen.

Likewise, without additional programming, AlphaGo will never engage opponent Lee Sedol in trash talk or psychoanalyze Sedol to get a game edge. Either of those things would be sufficiently creative to pass the Lovelace test. But remember: the AlphaGo software as written could not even provide an explanation of its own programmed behavior, the game of GO.

You may also wish to read the earlier excerpts published here:

Why you are not and cannot be computable. A computer science prof explains in a new book that computer intelligence does not hold a candle to human intelligence. In this excerpt from his forthcoming book, Non-Computable You, Robert J. Marks shows why most human experience is not even computable.

The Software of the Gaps: An excerpt from Non-Computable You. In his just-published book, Robert J. Marks takes on claims that consciousness is emerging from AI and that we can upload our brains. He reminds us of the tale of the boy who dug through a pile of manure because he was sure that underneath all that poop, there MUST surely be a pony!

and

Marks: Artificial intelligence is no more creative than a pencil. You can use a pencil, but the creativity comes from you. With AI, clever programmers can conceal that fact for a while. In this short excerpt from his new book, Non-Computable You, Robert J. Marks discusses the tricks that make you think chatbots are people.

Notes

1 Selmer Bringsjord, Paul Bello, and David Ferrucci, "Creativity, the Turing Test, and the (Better) Lovelace Test," in The Turing Test: The Elusive Standard of Artificial Intelligence, ed. James H. Moor (Boston: Kluwer Academic Publishers, 2003), 215–239.

2 David Klinghoffer, "Robert Marks on the Lovelace Test," Evolution News and Science Today, Discovery Institute, January 24, 2018.

3 Bringsjord, Bello, and Ferrucci, "Creativity." The Lovelace test (LT) is more formally stated by Bringsjord and his colleagues. Here is their definition: Artificial agent A, designed by H, passes LT if and only if (1) A outputs o; (2) A's outputting o is not the result of a fluke hardware error, but rather the result of processes A can repeat; (3) H (or someone who knows what H knows, and has H's resources) cannot explain how A produced o. Notice that this differs from Turing's surprises which, as he admitted, occurred because he as programmer erred or else forgot what he had done.

4 Selmer Bringsjord, "The Turing Test is Dead. Long Live the Lovelace Test," interview by Robert J. Marks in Mind Matters News, podcast, 27:25, April 2, 2020, https://mindmatters.ai/podcast/ep76/.

Upload Season 3 is not coming in July 2022 – Amazon Adviser

We are more than ready to see what's next as it looks like Nathan's head is about to explode. When will Upload Season 3 come to Prime Video?

There were a lot of questions at the end of Upload Season 2. Nathan had been downloaded, but it looks like he only has 24 hours to live. He's already suffering nosebleeds, which is a sign that he's going to die.

The good news is Taylor rebooted him within the digital world. This Nathan won't have any memories of what happened in the real world, but at least there's a chance to bring him back again.

Now we just need to know when we'll get to see what is to come next. Upload Season 3 isn't coming in July, though. That's not surprising with the renewal only just coming in back in May, but it doesn't stop us from looking out for any information about the new season.

We had to wait almost two years for the second season of the series. Part of that was due to COVID and the filming delays in 2020. Filming couldn't start until January 2021. It wrapped in April 2021, but this is a show with a lengthy post-production process. It took almost a year for that to be completed.

With all that in mind, we're not looking at Upload Season 3 arriving in 2022 at all. There is a chance we'll see it by the end of 2023, though. Without the filming delays, we shouldn't have to wait almost two years for the third season. It will be worth the wait, though. That part we know.

At least the writing had started before the official renewal came in. Everyone was pretty confident that the show would be renewed. It will help to keep the wait for production to start to a minimum.

Upload is available to stream on Prime Video.

Movin’s predictable logistic flow will give MSMEs peace of mind: UPS’ Ufku Akaltan – Economic Times

Rahul Bhatia-led InterGlobe Enterprises and Atlanta-based logistics major UPS rolled out their logistics venture, Movin, in the capital recently. Aiming to tap the logistics industry in a more predictable, transparent and reliable manner, Movin boasts of helping businesses get more competitive in this space.

With its services that include day-definite as well as time-definite solutions, the venture claims to give businesses better predictability and planning for their operations. Ufku Akaltan, UPS President-Indian subcontinent, Middle East and Africa, and JB Singh, Director,

The Economic Times (ET): Why did you want to do this JV with InterGlobe for the Indian market? What was the idea behind it?

Ufku Akaltan (UA): A typical joint venture brings in unique and a wealth of experience to the Indian market. Also in the aviation, hospitality and travel industries, InterGlobe is second to none. They have built enduring brands and they have connected these brands to customers. They were a perfect partner for us to bring Movin to the market together.

ET: Are you creating any physical infrastructure in India?

UA: Movin is currently building and expanding its network, starting with our people, technology, processes and also partners. Long-term leases are a part of this game plan to make sure the network is aligned to our customer needs to provide a predictable and reliable service.

ET: How will Movin help in driving more efficiencies in logistic operations for India's MSME sector?

UA: Movin, with its product portfolio and technology driven processes, will be providing MSMEs the peace of mind for a predictable, controllable, plannable logistic flow. It is going to help the MSMEs reach their customers faster, which will improve their chances to grow and, in essence, enhance their cash flows. They will also have a much easier access to global trade and when customers are capable in a domestic environment, they start seeking opportunities to start exporting. At that point, they will integrate with our global network which will further help them grow their business.

ET: Movin seeks to make the supply chain more predictable which, in effect, will help a lot of businesses in the present environment. What kind of technological processes do you have in place to make this possible?

JB Singh (JS): Movin will focus on its people, driving commitment through training, engagement and empowerment. Culture drives business, so that is fundamental and becomes a bedrock of our business. We have built the whole indigenised stack in technology. It is almost paperless where you can upload KYCs or any documentation seamlessly. It is really simple to use, which means even someone from a tier-II or a tier-III place can use it. We have a very large robust call centre and we are going to promise single call resolution. There is a whole repository where customers will have their own data on billings and everything which they can access in time.

It is all accessible to customers and it is linked to our entire back office. Everything that we see, the customers can see as well, and so it is entirely transparent. Even aspects like our HR policies and how we manage our people – everything is integrated into that.

A more important part is the processes. If there are solid processes in place, it drives efficiency and that drives value.

ET: The supply chain crisis has affected economies across the world. In the aftermath of such a crisis, how can such a brand help to alleviate that in some way?

UA: When you have predictability and a reliable service from a logistics partner, that helps you surgically support your own customers with such a product. We

Opinion: The unsung heroes of Ukraine: Photographers Evgeniy Maloletka and Mstyslav Chernov, who captured Mariupol’s pain – The Globe and Mail

Associated Press photographer Evgeniy Maloletka points at the smoke rising after an airstrike on a maternity hospital in Mariupol, Ukraine, on March 9. (Mstyslav Chernov/The Associated Press)

The Globe and Mail is spotlighting some of the unsung heroes of the war in Ukraine, who are doing their part amid Russia's invasion. Other pieces in this series include recognition of the doctors, the farmers and the public servants.

Christian Borys is the founder of Saint Javelin, a company that has donated more than $1-million from sales to Ukraine since the start of the war, and a former reporter based in Kyiv from 2014 to 2018.

In February, early in Vladimir Putin's unprompted and illegal attack on Ukraine, Russian troops quickly advanced from the south and from the east until they were at the doorstep of the Ukrainian city of Mariupol.

As the Russian military began its siege of the now-infamous port city, there came a moment of reckoning for the foreign aid workers, journalists, photographers and others staying in hotels across the city: whether they should stay or leave.

Most left. But Evgeniy Maloletka and Mstyslav Chernov, both veteran Ukrainian journalists who have covered the war with the Associated Press for years, decided to stay, because they understood what the Russians were about to do to Mariupol. They decided that their job, no matter the personal risk, was to tell that story to the world.

Since 2014, they've both seen firsthand the depths of depravity within the Russian military. They understood what the Russian military can do and does to journalists who expose their horrors. And they knew that the Russian military would be only too happy to wipe Mariupol off the face of the Earth, killing thousands of civilians, if it meant achieving Russia's goals.

Yet they stayed. And by staying – the only international journalists who remained in the city – over the course of their month there they became the first journalists to shock the world with the true face of Russia.

With Russia's relentless shelling knocking out power throughout the city, they used generators to power their phones and laptops so they could upload their work to Associated Press editors around the world, using whatever weak internet signal was left.

And that work was gruesome and heartbreaking. In late February, they started to send photos of lifeless Ukrainian children in Mariupol's hospitals who'd been killed by Russian artillery and air strikes. The photos were almost impossible to look at, but Mr. Maloletka and Mr. Chernov bore witness at those hospitals, spending hours with those medical teams, watching countless children and civilians die so that the rest of the world could see what Russia was doing.

The psychological effect of what they saw and experienced is impossible for almost anyone to understand, but they did it day in and day out. They experienced danger themselves, too; Mr. Chernov said the Russians were hunting them down. "They had a list of names, including ours, and they were closing in," he wrote.

Over the course of the nearly three-month siege, Mariupol became the home of a humanitarian crisis with thousands of reported deaths. In May, the city fell under Russian control.

The two of them never fired a bullet, but in a very real way, I think that those two men helped to save Ukraine. They may not have fired a bullet, but their photos and videos helped galvanize the world's support. In my mind, the country still fights on today, in part because of their cameras.

How to Resize an Image in Photoshop – PetaPixel

Resizing images in Adobe Photoshop sounds like it should be a trivial and simple operation, and for many uses and users, it is. It's when one starts looking closely at the details in a resized image that it becomes apparent you should be looking into the options of the resize image dialogue box.

In the past, resizing to larger sizes was often performed on image files in order to make large prints. With the higher pixel count of current cameras and high-end mobile devices, resizing larger might not seem that necessary, and maybe it isn't for many users. If, however, you're looking to produce the best quality prints from your image, resizing before printing might be in order, and it doesn't necessarily mean increasing image size.

Another resize scenario that I personally run into nearly every day is prepping images for posting to social media. In this case, I'm reducing the size of images in Photoshop before uploading them to social media and my own website. Other reasons for resizing images include submissions for publications, insertion into videos, and many more.

First, we'll start with a basic, barebones step-by-step look at how to resize an image. The steps are as follows:

1. Open your image in Photoshop.
2. Go to Image > Image Size (Control+ALT+I on Windows, CMD+ALT+I on Mac).
3. Choose the unit you want to work in (pixels for screen use, inches or centimeters for print) and enter the new width and height.
4. Set the resolution if you are preparing the file for print, and pick a resampling method (or leave it on Automatic).
5. Click OK, then save or export the resized file.

Now let's do a deeper dive into the options available when resizing any image.

The Image Size dialogue box (Figure 1) looks pretty simple, but there's a bit more here than meets the eye. Let's take a tour of the settings and options starting with the preview window.

In Figure 1 we see the Image resize dialogue box as it normally appears in Photoshop. It can be called by going to the Image > Image Size item in the main menu or by pressing Control+ALT+I in Windows or CMD+ALT+I on Mac.

Starting at the left we have the preview window which by default shows a 100% view of the image being resized. This view will update as one changes the parameters of the resize action to be performed. I highly recommend leaving this setting at 100% as it will provide the most accurate preview of the image quality after the resize is completed.

This preview window by default is a bit small, but in 2013 Adobe added the ability to enlarge the entire Image Size dialogue box. It is not readily apparent, but you can grab the sides and corners to enlarge the view. You can see in Figure 2 how I've enlarged the view so now at 100% we can see all of Stephanie's face instead of just the small portion visible in Figure 1.

To the right of the preview window are some details about the image, starting with the image file size. In this case, the file size is based on an uncompressed file, so if you were to save this file as a TIFF with compression turned off, it would be 34.4 megabytes in size. Obviously, when the image is exported as a JPEG it would be much smaller in size.
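
As a rough sanity check on that kind of number (the pixel dimensions of the example file aren't given, so the figures below are purely illustrative), the uncompressed size is simply width × height × channels × bytes per channel:

    # Uncompressed size of an 8-bit RGB image: width x height x 3 channels x 1 byte.
    width, height = 4000, 3000               # a hypothetical 12-megapixel image
    size_bytes = width * height * 3
    print(size_bytes / 1024 ** 2)            # ~34.3 (1024-based) megabytes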

Below this, we have the image dimensions, which are by default displayed in inches but might be different if you have changed this in the past. I usually size by pixels, which can be seen as my default dimension setting in Figure 3. Other options are percent, inches, centimeters, millimeters, as well as points and picas. The latter two are most familiar to the desktop publishing crowd.

Next we have the Fit To options seen in Figure 4. This is a pull-down list with some commonly used image sizes for web use, desktop publishing, and photo print sizes. Keep in mind these presets will not crop an image, so if you, for instance, use the preset for 5 x 7 inches and the image doesn't match those proportions, it will only match one of the dimensions and let the other adjust to fit the original proportions of the image.

Just below the Original Size option in the pull-down list is Auto Resolution. It might sound like Photoshop will somehow be able to guess the resolution you need. However, it is not reading your mind (yet!). Figure 5 shows the settings that are displayed when selecting this option.

Auto Resolution is basically there to do some math for you when your image will be output via the color separation process (or halftone in the case of grayscale images). If you are having to resize your image for high-quality book publishing or other similar types of output, the printing service will likely provide you with the lines per inch setting you should use. Of course outside the United States you may find they use lines per centimeter and thankfully Photoshop provides this option.

It may also be that you have a printer in your office/studio/lab that might provide a lines/inch type setting recommendation, so check your manual to see if this is the case for the best quality output from your printer.

Three base options are available for draft, good, and best quality. For the final output, I would certainly choose the Best option. Again your service provider might request a draft quality version for testing.
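
The arithmetic behind this is the standard halftone rule of thumb: output resolution = screen frequency × a quality factor, with roughly 1.5 commonly used for good quality and 2 for best. The exact multipliers Photoshop applies aren't spelled out in this article, so treat the sketch below as the conventional calculation rather than a guarantee of what the dialogue will return:

    # Conventional halftone rule of thumb: ppi = lines per inch x quality factor.
    QUALITY_FACTORS = {"draft": 1.0, "good": 1.5, "best": 2.0}  # commonly cited values

    def auto_resolution(lpi, quality="best"):
        return lpi * QUALITY_FACTORS[quality]

    print(auto_resolution(150, "best"))   # 300.0 ppi for a typical 150 lpi press
    print(auto_resolution(133, "good"))   # 199.5 ppi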

If you have a setting that you need to use frequently but isn't listed in the default options, you can use Custom to create one using the Save Preset option. Saved settings can be recalled using the Load Preset option. You will see that if you change the width/height settings the Fit To dialogue will change to Custom (Figure 6) and then this setting can be saved as a new preset.

Now we're getting to where the real action is. The width and height fields are where you will make the changes to the image size. In Figure 7, you can see that to the left of the width and height input fields there is a vertical chain link icon (outlined in red). Clicking this icon toggles on and off the link between the two dimensions.

When locked, changing one dimension will cause the other to be changed in order to maintain the original proportions, or aspect ratio, of the image. When turned off these two dimensions can be changed independently. This is often not desirable as it will distort the newly resized image but there are times you might want to do this, thus the option to do so.

Before making changes to the values in the width and height fields, you may want to change the type of value you'll be modifying. You can do this using the pull-down menu highlighted in yellow in Figure 7. You can see how the options will appear in Figure 8 below.

One thing you might find a bit redundant is that despite the fact that there is a separate pull-down list for both the width and height dimension type, they will almost always be the same. For instance, you cannot have one display in inches and the other display pixels, even if you unlock the values using the link icon. The exception is the columns option, which we'll look at shortly.

Of course, now comes the part where you need to decide which of those values shown in Figure 8 you need to use. Percent is pretty self-explanatory, as it simply increases or decreases the image dimensions based on the percentage you input. Keep in mind that there are limits here, and you can't set a percentage that will increase the pixel dimensions beyond 300,000 pixels on the long side.

It may seem like the percentage option is rather limited in scope of use, as you often need specific pixel/inch/cm dimensions. However, years (and years) ago it was used quite often to enlarge images. Many Photoshop users swore by the method of using successive 10% image size increases to create cleaner and more detailed image enlargements than simply jumping to, say, 150 or 200 percent in one resize. The more advanced algorithms that are now in Photoshop do a pretty good job. Still, you might try that technique out for yourself and compare the results.
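
If you want to run that comparison yourself outside Photoshop, the stepwise idea translates to any image library. The sketch below uses Pillow as a rough analogue; Pillow's BICUBIC filter is not the same algorithm as Photoshop's Bicubic Smoother or Preserve Details, and the file names are placeholders, so this only demonstrates the stepwise-versus-one-shot experiment, not Photoshop's exact output:

    from PIL import Image

    def enlarge_once(img, factor=2.0):
        # Single jump straight to the target size.
        return img.resize((round(img.width * factor), round(img.height * factor)),
                          Image.BICUBIC)

    def enlarge_stepwise(img, factor=2.0, step=1.1):
        # Enlarge in roughly 10% increments until the target size is reached.
        target_w = round(img.width * factor)
        target_h = round(img.height * factor)
        out = img
        while out.width < target_w:
            w = min(round(out.width * step), target_w)
            h = min(round(out.height * step), target_h)
            out = out.resize((w, h), Image.BICUBIC)
        return out

    img = Image.open("photo.jpg")             # placeholder test image
    enlarge_once(img).save("once_2x.jpg")
    enlarge_stepwise(img).save("stepped_2x.jpg")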

Resizing by pixel dimensions probably runs neck and neck with inches (or centimeters) for the most often used parameter. This certainly is related to needing specific output sizes for viewing on webpages, mobile applications, and other electronic displays.

Note that when resizing for electronic displays, the Resolution parameter will generally have no effect on how the image will look on your computer or mobile device screen. This is because a 400×300-pixel image displayed at 100% on a screen will remain 400×300 pixels (see Figure 9). It doesn't matter if the resolution is 72ppi or 600ppi. This is because your screen has a fixed number of pixels and as such, the resolution figure means nothing in this case when viewing images, usually.

Some desktop publishing and layout applications may display an image differently based on resolution because they are designed to preview the output to print. As such, Pixels Per Inch do make a difference in this case. The same applies when outputting to print in Photoshop.

This brings us to the remaining value types available for resizing. These include inches, centimeters, millimeters, points, and picas. Using any of these values allows you to resize to a specific output size for printing. For output to a printer, you would typically use the inches or centimeters options. Millimeters could be used here as well. Points and picas are going to be typically used by those in desktop publishing and other publication development tools.

When using these physical print/output values, resolution does become important. While 300ppi is often used for a lot of printing situations, it may not necessarily be the best. 300ppi will often provide very good results for print sizes like 4×6 up to 16×20 inches and maybe even a bit larger. However, as output size grows, the expectation is that the viewer will be further away from the image. This is often why large prints, 30×40 inches in size, for instance, can be printed at 180ppi and even much less for billboard size prints. Unless someone gets very close to the image, the difference in quality will be very difficult, if not impossible, to see.

If you are sending your work to be printed, the service will often offer recommended output resolutions as well as some other settings. If you are using your own printer, the manual might offer some preferred settings based on the output size.

The last item in the list of values is columns. I mentioned columns earlier as the one value you can set in width or height that can be different from the other dimension type. Like picas and points, columns are going to be something familiar to those who lay out documents and work in desktop publishing tools like InDesign. Ok, so what is the actual value of a column when using this value? So glad you asked.

Column size is set in Photoshop's preferences and can be found under the Units & Rulers section. By default, the column size is set to 180 points (2.5 inches, since there are 72 points to an inch). Column and gutter width can be adjusted in the area highlighted in red in Figure 10.

This dialogue box also offers some other settings you might want to adjust to suit your workflow. For instance, I prefer to have my rulers set to pixels and you might want to have new images default to a different print resolution than 300ppi.

Before we get into the various resampling options, I want to point out that the terms resize and resample are not interchangeable. Technically, resizing is simply adjusting the printable output dimensions: as you change the size in inches, the resolution is adjusted so that all the available pixels fit into the new printable area. You can see this by unchecking the Resample box and adjusting the inches value; the resolution will change to fit.

Likewise, you can change the resolution value and the width and height will change accordingly. Once you uncheck the Resample box, the Width, Height, and Resolution values are all linked and cannot be unlinked, because pixels are not being added to or subtracted from the image. Of course, if you do need to adjust the actual pixel dimensions of an image, you'll need to check the Resample box if it isn't already checked. The arithmetic behind this linkage is shown below.
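To make that linkage concrete, here is a tiny sketch of the arithmetic Photoshop performs when Resample is unchecked; the 6000×4000-pixel example image and the list of resolutions are hypothetical, chosen only to show that print size and resolution trade off while the pixel count stays fixed.

# With Resample unchecked, pixels are fixed, so print inches = pixels / ppi.
pixels_wide, pixels_high = 6000, 4000          # hypothetical example image

for ppi in (300, 240, 150):
    print(f"{ppi} ppi -> {pixels_wide / ppi:.2f} x {pixels_high / ppi:.2f} inches")

# 300 ppi -> 20.00 x 13.33 inches
# 240 ppi -> 25.00 x 16.67 inches
# 150 ppi -> 40.00 x 26.67 inches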

Here is a brief overview of what the different resampling methods do:

By default, Photoshop will choose what it considers the best resampling algorithm for the type of resize you are performing. For enlarging (adding more pixels), Photoshop defaults to Preserve Details, while reducing image size uses Bicubic Sharper. The settings Automatic uses could change with future updates, so when you get a new update I would check to make sure.

Note: If you don't see the option for Preserve Details 2.0, this could be because you have not enabled it in the preferences. First introduced in Photoshop CC 2018, the Preserve Details 2.0 feature falls under Technology Previews in Photoshop's preferences (Figure 13). This is still the case in version 23.4.1, which I'm using at the time of this writing. Perhaps in a future version it will become a regular feature.

To take the ambiguity out of the result, I set the method manually from the pull-down list. In Figure 14, I have an example image where I doubled the pixel dimensions of the photo using the three different options for enlargement. The large image is a 100% crop of the original image, and the three images below are 100% views of the same point after doubling the pixel count. You can click on the images to see a full-size view of each; at the reduced size needed to fit this article, they are too small to show much, if any, difference.

I find that Preserve Details 2.0 does the best overall job of producing clean and natural-looking output. The original Preserve Details does a good job but tends to over-sharpen a little, which creates some halos if you look closely. Bicubic Smoother is no slouch either; without the other options side by side for direct comparison, it looks very close to the Preserve Details options at first glance, but the fine details are a little mushy, and looking at the softbox reflection in the eye you can see some detail in the softbox grid is lost.

For reducing image sizes, Bicubic Sharper does a good job, though most of the options look very good when reducing an image's size. If you find that Bicubic Sharper is a little over-sharpened, you can try the regular Bicubic option. I would still look at Preserve Details 2.0 when the reduction is only about 1 to 30% of the original dimensions. As always, experiment; the content of your image may look better with one approach than another.

The Bicubic options (Smoother, Sharper, and regular) have a distinct advantage over Preserve Details in that they are much faster to process. If you need to resize a large batch of images for proofing, or are in any other situation where the absolute highest detail isn't necessary, I would use the Bicubic options. On my Dell XPS 15 9500 (i7-10875H), upsizing a 42MP image by 200 percent took six seconds with Preserve Details 2.0 and two seconds with the original Preserve Details. Bicubic Smoother was practically instantaneous.

For illustrations and similar types of images, Nearest Neighbor can do a good job of preserving the solid colors and crisp edges found in those kinds of files, though it can produce strong aliasing, so watch out for that. Bilinear uses a weighted average of nearby pixels and is the least accurate option. It is also very fast, but with modern computers that advantage really isn't relevant. If you want to see how these classic filters compare outside Photoshop, there is a sketch below.
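Photoshop's Preserve Details algorithms are not available outside Photoshop, but the Pillow library (9.1 or newer) exposes comparable Nearest Neighbor, Bilinear, Bicubic, and Lanczos resampling if you want to compare them yourself. A quick comparison sketch, with a placeholder file name:

from PIL import Image

src = Image.open("illustration.png")            # placeholder file name
filters = {
    "nearest": Image.Resampling.NEAREST,        # crisp edges, visible aliasing
    "bilinear": Image.Resampling.BILINEAR,      # fast, fairly soft results
    "bicubic": Image.Resampling.BICUBIC,        # good general-purpose choice
    "lanczos": Image.Resampling.LANCZOS,        # often a strong choice for reductions
}
for name, flt in filters.items():
    src.resize((src.width * 2, src.height * 2), resample=flt).save(f"up_{name}.png")
    src.resize((src.width // 2, src.height // 2), resample=flt).save(f"down_{name}.png")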

One last option in the Image Size dialogue box sits in the upper right-hand corner as a small gear icon, as seen in Figure 15. Scale Styles might not sound interesting, but if you work with layer styles such as drop shadows, strokes, etc., this option is huge. Layer styles contain their own set of parameters that dictate their appearance, and if a layer style isn't properly scaled to match the resampled image, the effect can be ruined. Having Photoshop automatically adjust (or not adjust) the layer styles in the resampled file is very useful.

Before wrapping this up, I wanted to touch on the terms PPI and DPI. They often get used interchangeably, but they really shouldn't be. PPI refers to the number of image pixels per linear inch, which determines how many of the image's pixels occupy the physical space of a print. DPI, by contrast, describes the dots of ink a printer lays down per inch. PPI has no specific meaning for a digital image shown on an electronic display; in most cases it has no bearing on how the image appears on screen, since at 100% size the image pixels map one-to-one to the display's pixels.

Why does this matter?

Sometimes people feel the need to match the PPI setting of their image to the DPI specification of their printer. With printers capable of outputting at DPI levels like 2,400 or 4,800, it can seem like a good idea to upsize your image's PPI to match the printer's DPI. In fact, there is usually no direct correlation between the two: these high-DPI printers use very tiny dots of ink, mixed together, to create more accurate colors and a wider range of ink density. Upsizing to match is typically not going to help, can actually make the output worse, and has the detrimental side effect of creating huge files that are difficult to work with. Instead, stick with your printer manual's recommendation.
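A bit of hypothetical arithmetic shows why matching image PPI to printer DPI backfires; the 8×10-inch print and the 2,400 DPI printer below are example numbers, not recommendations:

# Hypothetical numbers: an 8 x 10 inch print, comparing 300 ppi with a
# misguided attempt to match a 2,400 dpi printer specification.
width_in, height_in = 8, 10

for ppi in (300, 2400):
    megapixels = (width_in * ppi) * (height_in * ppi) / 1e6
    print(f"{ppi} ppi: {megapixels:.0f} megapixels")

# 300 ppi:  7 megapixels   -- plenty for an 8 x 10 print
# 2400 ppi: 461 megapixels -- an enormous file with no visible benefit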

So it turns out that resizing images may not be as simple as it appears. Of course, how much this matters depends on how important the quality of the final output is for the image's intended use. I regularly hear from fellow photographers that they are disappointed in the quality of the images they upload to social media sites like Instagram and Facebook. Often this is because they are uploading very large images and letting the servers at these services resize them on upload, and those servers use simple, fast approaches to resizing since processing time is money.

Remember earlier how I pointed out that resampling with Preserve Details 2.0 took several seconds versus Bicubic, which was nearly instant? If you had to process 350,000,000 photos a day on average, as Facebook does, you'd probably want the fastest option too. By resizing your images to the recommended size(s) for the site you are using before uploading, you improve the quality of the images after they upload.
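Here is a minimal sketch of doing that downsizing yourself before uploading, again with Pillow; the 2,048-pixel long edge and the JPEG quality setting are placeholders, so substitute whatever sizes the platform you use actually recommends.

from PIL import Image

def fit_long_edge(path, out_path, long_edge=2048):
    # Shrink so the longest side is at most long_edge pixels, keeping the aspect
    # ratio, so the platform's fast server-side resizer has little left to do.
    img = Image.open(path)
    img.thumbnail((long_edge, long_edge), resample=Image.Resampling.LANCZOS)
    img.save(out_path, quality=90)

fit_long_edge("vacation.jpg", "vacation_upload.jpg")       # placeholder file names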

View original post here:

How to Resize an Image in Photoshop - PetaPixel

French Bulldogs Are Popular and Have Become Armed Robbery Targets – The New York Times

ELK GROVE, Calif. The French bulldog business is booming for Jaymar Del Rosario, a breeder whose puppies can sell for tens of thousands of dollars. When he leaves the house to meet a buyer, his checklist includes veterinary paperwork, a bag of puppy kibble and his Glock 26.

"If I don't know the area, if I don't know the people, I always carry my handgun," Mr. Del Rosario said on a recent afternoon as he displayed Cashew, a 6-month-old French bulldog of a new fluffy variety that can fetch $30,000 or more.

With their perky ears, their please-pick-me-up-and-cradle-me gaze and their short-legged crocodile waddle, French bulldogs have become the it dog for influencers, pop stars and professional athletes. Loyal companions in the work-from-home era, French bulldogs seem always poised for an Instagram upload. They are now the second-most-popular dog breed in the United States after Labrador retrievers.

Some are also being violently stolen from their owners. Over the past year, thefts of French bulldogs have been reported in Miami, New York, Chicago, Houston and, especially it seems, across California. Often, the dogs are taken at gunpoint. In perhaps the most notorious robbery, Lady Gaga's two French bulldogs, Koji and Gustav, were ripped from the hands of her dog walker, who was struck, choked and shot in last year's attack on a Los Angeles sidewalk.

The price of owning a Frenchie has for years been punishing to the household budget: puppies typically sell for $4,000 to $6,000 but can go for multiples more if they are one of the new, trendy varieties. Yet owning a French bulldog increasingly comes with nonmonetary costs, too: the paranoia of a thief reaching over a garden fence, the hypervigilance while walking one's dog after reading about the latest abduction.

For unlucky owners, French bulldogs are at the confluence of two very American traits: the love of canine companions and the ubiquity of firearms.

On a chilly January evening in the Adams Point neighborhood of Oakland, Calif., Rita Warda was walking Dezzie, her 7-year-old Frenchie, not far from her home. An S.U.V. pulled up and its passengers exited and lunged toward her.

"They had their gun and they said, 'Give me your dog,'" Ms. Warda said.

Three days later, a stranger called and said she had found the dog wandering around a local high school. Ms. Warda is now taking self-defense classes and advises French bulldog owners to carry pepper spray or a whistle. Ms. Warda says she does not know why Dezzie's abductors gave him up, but it could have been his advanced age: Frenchies have one of the shortest life spans among dog breeds, and 7 years old was already long in the tooth.

In late April, Cristina Rodriguez drove home from her job at a cannabis dispensary in the Melrose section of Los Angeles. When she pulled up to her home in North Hollywood, someone opened her car door and took Moolan, her 2-year-old black and white Frenchie.

Ms. Rodriguez said she did not remember many details of the theft. "When you have a gun at your head, you kind of just black out," she said.

But footage from surveillance cameras in her neighborhood and near the dispensary appears to indicate that the thieves followed her for 45 minutes in traffic before pouncing.

"They stole my baby from me," Ms. Rodriguez said. "It's so sad coming home every day and not having her greet me."

It is uncertain how prevalent robberies of French bulldogs are nationally, and some local law enforcement agencies said they do not keep a running count of these particular crimes.

Patricia Sosa, a board member of the French Bull Dog Club of America, said she was not aware of any annual tally. Social media groups created by Frenchie owners are often peppered with warnings. "If you own a Frenchie," says one post on a Facebook group dedicated to lost or stolen French bulldogs, "do not let it get out of your sight."

"Criminals are making more money from stealing frenchies than robbing convenience stores," the posting said.

Ms. Sosa, who has a breeding business north of New Orleans, said the lure of profiting from the French bulldog craze had also spawned an industry of fake sellers demanding deposits for dogs that do not exist.

"There are so many scams going on," she said. "People think, 'Hey, I'll say I have a Frenchie for sale and make a quick five, six, seven thousand dollars.'"

Ms. Sosa said breeders were particularly vulnerable to thefts. She does not give out her address to clients until she thoroughly researches them. "I have security cameras everywhere," she said.

French bulldogs, as the name suggests, are a French offshoot from the small bulldogs bred in England in the mid-1800s. An earlier iteration of the Bouledogue Français, as it is called in France, was favored as a rat catcher by butchers in Paris before becoming the toy dog of artists and the bourgeoisie, and the canine muses that appeared in works by Edgar Degas and Henri de Toulouse-Lautrec.

Today, the American Kennel Club defines French bulldogs as having a square head with bat ears and the roach back.

In the world of veterinary medicine, Frenchies are controversial because their beloved features (their big heads and bulging puppy-dog eyes, recessed noses and folds of skin) create what Dan O'Neill, a dog expert at the University of London's Royal Veterinary College, calls "ultra-predispositions" to medical problems.

Their heads are so large that mothers have trouble giving birth; most French bulldog puppies are delivered by cesarean section. Their short, muscular bodies also make it hard for them to naturally conceive. Breeders typically artificially inseminate the dogs.

Most concerning for researchers like Mr. O'Neill is the dog's flat face, which can belabor its breathing. French bulldogs often make snoring noises even when fully awake, they often tire easily and they are susceptible to the heat. They also can develop rashes in their folds of skin. Because of their bulging eyes, some French bulldogs are incapable of a full blink.

Mr. O'Neill leads a group of veterinary surgeons and other dog experts in the United Kingdom that urges prospective buyers to "stop and think" before buying a flat-faced dog, a category that includes French bulldogs, English bulldogs, Pugs, Shih Tzus, Pekingese and Boxers.

"There's a flat-faced dog crisis," Mr. O'Neill said. French bulldogs, he concluded in a recent research paper, have four times the level of disorders of all other dogs.

These pleadings and warnings have not stopped French bulldogs from rocketing in popularity, propelled in large part by social media. As in the United States, the French bulldog in Britain has been neck and neck with the Labrador for the title of most popular breed in recent years.

Ms. Sosa blamed poor breeding for bad outcomes. "Well-bred dogs are relatively healthy," she said.

Mr. Del Rosario, the breeder in Elk Grove, a suburban city just south of Sacramento, says professional football and basketball players have been some of his most loyal customers. He has sold puppies to players for the Kansas City Chiefs, Cincinnati Bengals, Tampa Bay Buccaneers, Houston Texans, New York Jets and Arizona Cardinals. Four years ago, the San Francisco 49ers bought Zoe, a black brindle Frenchie that serves as the team's emotional support dog. Two years later, the team added Rookie, a blue-gray French bulldog puppy with hazel eyes, to its canine roster.

Mr. Del Rosario's most expensive Frenchie was a "lilac" with a purplish gray coat, light eyes that glowed red and a pinkish tint on his muzzle. It sold for $100,000 to a South Korean buyer who wanted the dog for its rare genetics. The dog was one of several hundred puppies that Mr. Del Rosario has sold over the past decade and a half.

He has kept seven Frenchies for his extended family, including his two daughters, 9 and 10 years old. The girls play with the Frenchies at home but Mr. Del Rosario is strict about not letting them walk the dogs alone.

"I don't care if you're going to the mailbox," he said. "Nope, they just can't take the dogs out by themselves."

"With all this stuff going on with these dogs, you just never know."

Read the original post:

French Bulldogs Are Popular and Have Become Armed Robbery Targets - The New York Times

How to Use Apple Notes to Have Secret Chats With Others – Lifehacker


Apple's Notes app might not be the first option that comes to mind when you want to keep a conversation hidden from others. But by using its collaboration features to invite others into a conversation, and then deleting the messages when you're done, you can erase all evidence of your chat. Sure, it's nowhere near as safe as using an encrypted messaging app with disappearing messages (and it's pretty easy to take screenshots of your shared notes or copy your chats to another app), but the Notes app is a quick and easy option for secret messaging in a pinch.

To get started, open Notes and create a new note. Type something in the note, then tap the three-dots icon in the top-right corner of the page, and tap Share note. You can now tap Share options and disable Anyone can add people. Under Permission, make sure you've selected Can make changes.

Go back one page and select how you'd like to share the note with someone else. As long as they have an Apple ID, they will be able to access your note. We went with iMessage to share the note. Once the other person has joined, you can start typing a message and they'll be able to see it pretty much in real time.

To make it easy to differentiate between your messages and those from your contact, tap the three-dots icon in the top-right corner and select Manage Shared Note. Select Highlight All Changes and go back to your note. When they've replied to your message, Notes will highlight it in a different color.

When you're done with the conversation, delete everything you've typed first. Then tap the three-dots icon in the top-right corner, go to Share Options, and change the document permission to View only.

Return to the previous page, swipe left on the name of your contact, and select Remove. This will stop them from accessing the note further. You can now delete the note from Apple Notes, and with it, all traces of your conversation will be gone.

While Apple Notes (or, for that matter, Google Docs) allows you to have collaborative chats, these aren't really the apps you want to turn to for true privacy. Ideally, you'd want to use encrypted messaging apps like Signal for these conversations.

In Signal, every chat is encrypted by default and the app doesn't save information it doesn't need. Once you've started a chat in Signal, tap the contact's name at the top of the page and select Disappearing Messages. You can set a custom time for these messages and each message in the conversation will automatically disappear after that interval.

You don't need to manually delete anything or use an unencrypted platform for secret messages. When you upload images to Signal, it has a handy option to automatically blur all faces. Features like these make it far more suitable for private conversations. And if you don't want to use Signal, we've got a list of alternatives for you.

View original post here:

How to Use Apple Notes to Have Secret Chats With Others - Lifehacker

The Software of the Gaps: An Excerpt from Non-Computable You – Walter Bradley Center for Natural and Artificial Intelligence

There are human characteristics that cannot be duplicated by AI. Emotions such as love, compassion, empathy, sadness, and happiness cannot be duplicated. Nor can traits such as understanding, creativity, sentience, qualia, and consciousness.

Or can they?

Extreme AI champions argue that qualia and, indeed, all human traits will someday be duplicated by AI. They insist that while we're not there yet, the current development of AI indicates we will be there soon. These proponents are appealing to the Software of the Gaps, a secular cousin of the God of the Gaps. Machine intelligence, they claim, will someday have the proper code to duplicate all human attributes. Impersonate, perhaps. But experience, no.

AI will never be creative or have understanding. Machines may mimic certain other human traits but will never duplicate them. AI can be programmed only to simulate love, compassion, and understanding.

The simulation of AI love is wonderfully depicted by a human-appearing robot boy brilliantly acted by a young Haley Joel Osment in Steven Spielberg's 2001 movie A.I. Artificial Intelligence. Before activation, the robot boy played by Osment is emotionless. But when his love simulation software is turned on, the boy's immediate attraction to his adoptive mother is convincing, thanks to Osment's marvelous acting skill. The robot boy is attentive, submissive, and full of snuggle-love.

But mimicking love is not love. Computers do not experience emotion. I can write a simple program to have a computer enthusiastically say "I love you!" and draw a smiley face. But the computer feels nothing. AI that mimics should not be confused with the real thing.

Moreover, tomorrow's AI, no matter what is achieved, will come from computer code written by human programmers. Programmers tap into their creativity when writing code. All computer code is the result of human creativity; the written code can never be a source of creativity itself. The computer will perform as it is instructed by the programmer.

But some hold that as code becomes more and more complex, human-like emergent attributes such as consciousness will appear. (Emergent means that an entity develops properties its parts do not have on their own: a sum greater than the parts can account for.) This is sometimes called Strong AI.

Those who believe in the coming of Strong AI argue that non-algorithmic consciousness will be an emergent property as AI complexity ever increases. In other words, consciousness will just happen, as a sort of natural outgrowth of the code's increasing complexity.

Such unfounded optimism is akin to that of a naive young boy standing in front of a large pile of horse manure. He becomes excited and begins digging into the pile, flinging handfuls of manure over his shoulders. "With all this horse poop," he says, "there must be a pony in here somewhere!"

Strong AI proponents similarly claim, in essence, "With all this computational complexity, there must be some consciousness here somewhere!" There is consciousness involved, of course: the consciousness residing in the mind of the human programmer. But consciousness does not reside in the code itself, and it doesn't emerge from the code, any more than a pony will emerge from a pile of manure. Like the boy flinging horse poop over his shoulder, Strong AI proponents, no matter how insistently optimistic, will be disappointed. There is no pony in the manure; there is no consciousness in the code.

Are there any similarities between human brains and computers? Sure. Humans can perform algorithmic operations. We can add a column of numbers like a computer, though not as fast. We learn, recognize, and remember faces, and so can AI. AI, unlike me, never forgets a face.

Because of these types of similarities, some believe that once technology has further advanced, and once enough memory storage is available, uploading the brain should work. Whole Brain Emulation (also called mind upload or brain upload) is the idea that at some point we should be able to scan a human brain and copy it to a computer.1

The deal breaker for Whole Brain Emulation is that much of you is non-computable. This fact nixes any ability to upload your mind into a computer. For the same reason that a computer cannot be programmed to experience qualia, our ability to experience qualia cannot be uploaded to a computer. Only our algorithmic part can be uploaded. And an uploaded entity that is totally algorithmic, lacking the non-computable, would not be a person.

So don't count on digital immortality. There are other more credible roads to eternal life.

1 Becca Caddy, "Will You Ever Be Able to Upload Your Entire Brain to a Computer?" Metro, June 5, 2019. Also see Selmer Bringsjord, "Can We Upload Ourselves to a Computer and Live Forever?", April 9, 2020, interview by Robert J. Marks, Mind Matters News, podcast, 22:14.

You may also wish to read the earlier excerpt published here: Why you are not and cannot be computable. A computer science prof explains in a new book that computer intelligence does not hold a candle to human intelligence. In this excerpt from his forthcoming book, Non-Computable You, Robert J. Marks shows why most human experience is not even computable.

Read more:

The Software of the Gaps: An Excerpt from Non-Computable You - Walter Bradley Center for Natural and Artificial Intelligence

Wayfinding Software and Smart Kiosk Company Re-Launches as RoveIQ – PR Web

RoveIQ Kiosk at Avalon Lifestyle Center

NEWPORT, Ky. (PRWEB) June 28, 2022

A successful Kentucky-based wayfinding software and smart kiosk company announced today that the business is changing its name from smartLINK to RoveIQ. Technology from RoveIQ helps visitors of mixed-use properties, city districts, sports arenas, hospitals and universities easily find their way around with customized, interactive, 3D maps on smart kiosks, digital displays or mobile devices.

"The name RoveIQ much better represents how we enrich lives through intelligent software designed to move humans both physically and emotionally," said PJ Thelen, CEO of RoveIQ. "In today's environment, people are getting out and are obsessed with creating new and exciting experiences, and RoveIQ elevates each journey, whether the desire is to be efficient with time or go in search of a new discovery."

RoveIQ develops software that operates on indoor and outdoor interactive displays, as well as through computer and smartphone browsers. With RoveIQ, businesses and organizations provide customized, 3D maps that help visitors navigate their spaces, while also serving up additional information such as augmented reality selfies, coupons, current events, and advertising. The software provides properties with data analytics, allowing owners and managers to gain meaningful insights and generate additional digital out-of-home advertising revenue. RoveIQ creates an opportunity for mixed-use real estate developments and all types of venues to better communicate with and engage visitors, as well as make their experiences seamless, easy, and enjoyable.

Locations, like the Miami Design District in Miami, Florida, Fashion Island in Newport Beach, California, and Barclays Center in Brooklyn, New York, use RoveIQ to help visitors navigate their spaces and find specific locations and amenities. Beyond finding what they need, visitors often discover the unexpected and experience moments of delight such as receiving an offer for a store they are about to visit, encouraging them to leverage the technology whenever they visit a RoveIQ property.

How Customers Work with RoveIQ
RoveIQ software runs on various platforms, from outdoor kiosks to any web browser, and no app needs to be downloaded. Three main elements help property managers, advertisers and operators manage and learn from RoveIQ.

1) RoveIQ's CMS (Content Management System) allows users to upload, manage and modify all media, including customized maps, advertising and other content, from a single dashboard.
2) The integrated Ad Server lets users schedule, manage and remove advertisements as needed. Programmatic ads (digital advertisements delivered via a service) can help RoveIQ customers generate a return on investment more quickly.
3) The Data Analytics and Reporting Suite anonymously collects user interactions (touch, visual, Wi-Fi and mobile) to provide insight into the effectiveness of programs and campaigns.

Upcoming Healthcare Initiative
This summer, RoveIQ is releasing software developed specifically for healthcare facilities and hospitals. Integrated with MyChart, the software will provide patients and caregivers with detailed appointment directions, including parking and check-in information, navigation within healthcare facilities to easily find the correct parking garage, the exact entrance and location of the hospital department or room, and many other services people need within the facility.

More information about RoveIQ's products and services can be found by visiting roveiq.com or emailing pj@roveiq.com.

# # #

About RoveIQ, roveiq.com
RoveIQ is a wayfinding software company based in Newport, Kentucky. Created to help move humans intelligently, RoveIQ's software solutions provide customized maps, directories and wayfinding services for smart cities, healthcare, universities, real estate, and entertainment venues. Facilities and venues that use RoveIQ can easily provide visitors with interactive, 3D wayfinding tools that allow users to create and discover unique experiences. For more information, visit roveiq.com.

More on the Rebranding Effort
In the last two years, smartLINK has progressed from a company that produces digital signage and wayfinding tools on kiosks into a company that provides advanced wayfinding software with two-way communication between real estate organizations and customers. Customized maps serve trusted information to visitors, while visitor data can be analyzed to provide insights to better serve customers as well as increase revenue through advertising.

This evolution led to a necessary brand and name change. Created in partnership with BrandFuel Co, a branding and marketing firm based in Covington, Kentucky, Thelen and his team landed on the name RoveIQ and the tagline "embark intelligently."

The word "rove" refers to the company's mission to help enrich people's lives through wandering and discovering, and the "IQ" suggests exploring while leveraging intelligent software with the best information possible, gathered from locals and experts. Rove also brings to mind the common dog name Rover, and dogs are amazing wayfinders by nature. The tagline "embark intelligently" references a dog's bark.

Other brand elements include a mosaic graphic, which alludes to both data and discovery, as well as various color schemes that can be aligned with clients' branding or with specific RoveIQ software products.

Media Contact
Beth Strautz
773-895-5387
beth@vaguspr.com


Go here to see the original:

Wayfinding Software and Smart Kiosk Company Re-Launches as RoveIQ - PR Web

Party will use a special app to manage the MCD Badlaav Campaign: Gopal Rai – News Nation

New Delhi:

With the MCD elections around the corner, the Aam Aadmi Party is preparing for a large-scale MCD Badlaav Campaign which will be launched on 2nd December, Party State Convenor Gopal Rai informed. Different responsibilities have been assigned to the office-bearers of the party for this. The preparations for the Badlaav campaign started on 27th November.

A special app will be used to upload all the campaign details. Gopal Rai added that an in-charge would be appointed in each division who would be responsible for conducting the MCD Badlaav Campaign. On 5th December, teams in every division of Delhi will launch the MCD Badlaav Campaign at the ground level by placing tables. AAP National Secretary Pankaj Gupta said that their main aim is to have at least one in-charge in every division. Senior Aam Aadmi Party leader Gopal Rai addressed a virtual meeting on Friday. He said, "In view of the MCD elections, we are going to launch a big membership drive soon."

In this MCD Badlaav Campaign, we will upload the data through a special app to streamline the data and for efficient work. The training for this app will be given by Pankaj Gupta Ji, Adil Ji, and Anuj Ji at the party office on November 27. Yesterday, the district teams of the Party's State Organisation, observers, and the speaker and organization ministers from each assembly were trained at the party office on how to carry the MCD Badlaav Campaign forward. He continued, For 4 days, between 28th November and 1st December, a planning meeting in all the Vidhan Sabha will be fixed by every Lok Sabha and District Presidents.

Meetings will be held after discussion with the MLAs and other people present there. Two things have to be completed in that meeting. One, about who will be in charge of the change campaign from each of our Mandals. When I am talking about the in-charge, it is to be kept in mind that this post will be in addition to the Mandal President. They could be an office-bearer of the Legislative Assembly or a ward-level office-bearer or a Vidhansabha-level office-bearer of the Centre.

Two, those who are our responsible colleagues, who are working in other positions, also have to be made in charge of the board. Their responsibility is to successfully complete the MCD change campaign. Giving more information about the launch of MCD Badlaav Campaign, Gopal Rai said, Everyone will be given complete training on the app, how to operate the app and how to upload the data on it in a meeting. These meetings will be held on 28th, 29th, 30th, and 1st. On 2nd December, we will launch this at the Central level. In this, everyone from the high-level people to the Mandal President will be called to the party office. So by calling all those people who are in charge of the Mandals and those who are becoming Mandal President, we will launch this MCD Badlaav Campaign before the central media on 2nd December.

After this, on December 5, our desks will be set up in every Mandal. If 5th is a Sunday, then on that day in every division of Delhi, our teams will launch this change campaign at the ground level by setting up desks. He continued, At the launch, all the party members, all the MLAs, all the councilors, all the frontal office-bearers, wherever they are, will be taking up their responsibilities in the campaign and playing their respective roles.

They will get the design of posters and hoardings before the central launch on the 2nd and our outdoor campaign to put up posters and hoardings in all the Legislative Assemblies will be completed before the 2nd day. So in this, we will add new members. After the MCD Badlaav Campaign, all the organization ministers will meet the new members of MCD Badlaav Campaign booth-wise and give them responsibility so that those people can also become a part of our organization active in this.

He said, This is a major campaign, so we want every booth to have at least 4-5 lanes, so we have to decide through this MCD Badlaav Campaign that we have at least 8-10 people in every street. These people will further our campaign and its awareness in their respective streets, to ensure good connectivity. We will try to recognise our correct and potential new members. They can be identified according to the streets.

So that later we will be able to make them the organization officers at the level of that street through which we will conduct further activities. AAP National Secretary Pankaj Gupta said, As Gopal Ji said that we have to reach every street, every gully, and our effort is to ensure that enough people take charge and be appointed to oversee the efforts. We have to add so many people that there are about 50 names on every page in every booth.

Out of them at least two people should be the ones to whom we can give responsibility. This is our aim. This time we will go ahead keeping this in mind because every time we try to access different divisions, booths get blocked. That's why Arvind Ji has given a special responsibility that we can go beyond the booth to reach even smaller groups of people. We will come up with a good name for that too. You have to take responsibility to think of a name.

See the rest here:

Party will use a special app to manage the MCD Badlaav Campaign: Gopal Rai - News Nation

Events for children, teens, adults at the library – The Herald-Times

Monroe County Public Library provides opportunities for local residents to read, learn, connect and create. The downtown library is at 303 E. Kirkwood Ave. and the Ellettsville branch is at 600 W. Temperance St. All events are free of charge. Event funding is provided by the Friends of the Library foundation.

Curious about the Ellettsville Teen Space, but you aren't a teen? Patrons of all ages are invited to check it out on the fourth Saturday of every month; there's a DIY design studio, video games, virtual reality, and more! It's noon-6 p.m. Saturday at the Ellettsville branch library.

Discover why yoga is such a powerful tool for keeping you healthy in body, mind and spirit! In this all-levels class, beginners safely learn the basics, while more experienced students take their practice to a deeper level. This event is suitable for people of all ages and most physical abilities. For ages 16 and up. It's noon-1 p.m. Saturday in meeting room B at the Ellettsville Branch Library. Please register at mcpl.info/calendar.

Join Maqub Reese, founder and CEO of Tribe Consulting, LLC and associate director for the Kelley Office of Diversity Initiatives, to explore your identities and learn what it means to be anti-racist during this engaging interactive workshop. You'll also learn about racial literacy, and about global, national and local leaders who are Black, Indigenous and other people of color, and their intersections. For ages 7-12. It's 1-3 p.m. Saturday in meeting room 1B and 1C combo at the downtown library. Register at mcpl.info/calendar.

Join the library staff as they create a safe space to discuss books written by and for LGBTQ+ individuals. In November and December, you'll discuss "We Do This 'Til We Free Us: Abolitionist Organizing and Transforming Justice," by Mariame Kaba. It's 6-7 p.m. Monday on Zoom. For ages 16 and older. Register for the Zoom link and book by emailing Linds Badger at lindsey@middlewayhouse.org.

Books on Tap is the book club with a twist! Enjoy fantastic drinks, a comfortable atmosphere and great discussion of a compelling book. Each registered participant will receive a drink recipe that goes with the month's book selection. This month's selection is "Binti" by Nnedi Okorafor, winner of the Hugo Award and the Nebula Award for best novella! For ages 21 and older. It's 6:30-8 p.m. Monday on Zoom. Register at mcpl.info/calendar for the Zoom link and a copy of the book (if you need one). November and December will also include discussion of what types of books you want to read in 2022; make sure to attend if you want to influence the book selections. Visit mcpl.info/bookclub for other book clubs and titles.

Drop in for free one-on-one help with math and science-related assignments: arithmetic, algebra, geometry, trigonometry, calculus, physics, chemistry, and ISTEP and SAT review. For middle school and high school students only. It's 7-8:45 p.m. Monday in program room 2B at the downtown library.

Join in the fun with stories, songs, puppets and more to encourage the development of early literacy skills in young children, using the guidelines from the Every Child Ready to Read program. For ages 3-6 and caregivers, but all are welcome. It's 10-10:45 a.m. Tuesday in the children's program room at the main library. Register at mcpl.info/calendar.

Come to chat about relationships, dating, and other tough topics; stay for the tacos! For ages 12-19. It's 4-5 p.m. Tuesday in Ellettsville meeting room A. Drop in.

Using 3D-modeling software (like the free and user-friendly Tinkercad), you can turn digital designs into real-world physical objects; come design your own keychain! The library will guide you through the creative process, then print it! For ages 12-19. It's 4-5 p.m. Tuesday in The Ground Floor teen space at the downtown library. Drop in. Please note, 3D printing takes time; you will need to return to the library at a later date to pick up your creation. Single-color prints only.

Prenatal yoga is a gentle exercise that supports your changing body's needs, builds strength and deepens awareness of your mind, body, and connection with your baby. For pregnant adults. It's 7-8 p.m. Tuesday in the children's program room at the downtown library. Register at mcpl.info/calendar.

Create an original one-page comic to share with the community and enter to win one of 20 gift cards to Vintage Phoenix Comic Books! Categories are available for kids ages 7-11 and teens ages 12-19. Upload your comic to mcpl.info/comic by Tuesday to enter the contest! Please note: All comics must be appropriate for all ages; comics that do not meet this guideline will not be shared.

Families with kids from infants to age 3 can play, sing, read and talk together with other little ones, then enjoy toy time. It's 9:30-10:15 a.m. (birth-18 months) and 10:30-11:15 a.m. (18 months-3 years) Wednesday in the children's program room at the downtown library. Register at mcpl.info/calendar.

Drop in, adventure through fantastic realms, then leave when you want! These sessions of Dungeons and Dragons are designed to be short, fun, and evolving adventures that anyone can play. All skill levels are welcome! For ages 12-19. It's 4-5 p.m. Wednesday in meeting room 1B at the downtown library.

Give back to the library by shopping at the bookstore! All movies and music at the Friends of the Library Bookstore are half price in November. The sale includes DVDs, CDs, LPs, tapes, and Blu-Rays. The Bookstore also offers unique gifts for library supporters, including masks, T-shirts, notecards, tote bags, and more! The bookstore is open 11 a.m.-7 p.m. Tuesday and Thursday; 11 a.m.-5 p.m. Friday and Saturday; and 1 p.m.-5 p.m. Sunday. Visit mcpl.info/bookstore for more information.

This is a sampling of this week's library events. For the full calendar, visit mcpl.info/events.

See the original post here:

Events for children, teens, adults at the library - The Herald-Times

Xiaomi Read Mode: tricks to get the most out of it – InTallaght

Xiaomi always tries to surprise users, especially with different tricks and settings that they can use on their mobiles.

It goes without saying that MIUI is one of the most complete and capable customization layers among Android devices, and today you will learn about a special function called Read Mode, along with some tricks to get the most out of it.

To describe this tool in full, it is a setting designed to minimize visual fatigue when using the phone. It is especially useful in dark environments and during night hours.

This will undoubtedly benefit those who spend long periods looking at the light of the screen, which can harm visual health over the medium and long term. Below are the steps to activate the function, as well as some tricks you can configure afterward.

Open the Settings app. In the third block of settings, look for Screen. Open the second option, Reading mode. Activate the function.

Now, the extra tools come within the Program option, located one box below. Activating it gives you access both to Night reading and to Customize the schedule, which lets you set the specific hours at which you want the function to turn on and off.

In addition to these settings, you can also use a slider bar at the bottom of the window, which adjusts the intensity of the filter applied when reading mode is active. Keep in mind that a higher intensity will cause less visual fatigue when viewing the screen, although it can make the interface look less realistic.

Finally, another trick that makes it easier to activate the function is to add a quick-access toggle to the drop-down settings bar. You can do it in less than a minute like this:

Pull down the Quick Settings bar by sliding your finger down. Find and tap the box labeled Edit.

Among the options shown, locate the Reading mode function, press and hold its icon, and then move it up to join the other active toggles.

Visit link:

Xiaomi Read Mode: tricks to get the most out of it - InTallaght

Die as a human or live forever as a cyborg: Will robots rule the world? – Sydney Morning Herald


In movies, they're the bad guys: killer cyborgs with bones of steel and lightning-fast reflexes, perhaps an Austrian accent too. But Peter Scott-Morgan has never been afraid of robots. As a scientist and roboticist by trade, he spent decades researching how artificial intelligence (AI) might transform our lives.

Then, in 2017, Dr Scott-Morgan was diagnosed with motor neuron disease, the same paralysing condition that killed Stephen Hawking. Months after puzzling over his wonky foot falling asleep, he was told he had two years to live.

He had other ideas. To survive, he would turn to the technology he had spent his career researching. He would become the cyborg. Scott-Morgan has now had two major surgeries to help keep himself alive with robotics: machine upgrades that breathe for him, help him speak, and hopefully will even see him stand again as the advancing paralysis traps him inside his body. He plans to merge his brain with AI eventually too, so he can speak with his thoughts rather than the flicker of his eyes. "And I'm OK with giving up some control to the AI to stay me," he says. "Though that might change what it means to be human ... There's a long tradition of scientists experimenting on themselves. But die as a human or live as a cyborg? To me, it's a no-brainer."

But what about the rest of us? Is humanity destined to merge with machine? We keep hearing that the robots are coming to take our jobs; how likely are they to stage a coup? And why are Facebook and Elon Musk already building machines to read our thoughts?

Illustration: Matt Davidson

A century ago, a Spanish scientist mapped the human brain and uncovered a hidden kingdom. As microscopes began to peer deeper into that mass of little grey cells, Santiago Cajal laid bare the wiring within, so dense he called it a jungle. It is from his detailed drawings that the world understood neurons for the first time, and how they exchange information in a tangled network, giving rise to the senses, the emotions and possibly even consciousness itself.

Decades later, a philosopher and a young, homeless mathematician wondered if that network could be broken down into the most fundamental binary of logic: true or false. Neurons could, after all, be considered on or off, firing a signal or not. This theory, by Warren McCulloch and Walter Pitts at the University of Chicago, proved to be an incomplete model for the human brain, too simple to capture all the strange magic really going on inside. But it did give rise to the binary code of computers: those ones and zeroes now form infinite variations of on or off to tell machines what to do.
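As a toy illustration of that on-or-off idea (a sketch in Python, not McCulloch and Pitts' original notation): a unit adds up its binary inputs and fires only if the total clears a threshold, which is already enough to express simple logic such as AND and OR.

# Toy McCulloch-Pitts-style unit: binary inputs, a threshold, a binary output.
def fires(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The same unit behaves like different logic gates depending on the threshold.
AND = lambda a, b: fires([a, b], [1, 1], threshold=2)
OR = lambda a, b: fires([a, b], [1, 1], threshold=1)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])   # [0, 0, 0, 1]
print([OR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])    # [0, 1, 1, 1]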

Because machines interpret the world through this binary code, and algorithms (rules made from that code), they are good at a lot of specific things we find difficult, such as solving complex equations fast (and playing chess better than a grandmaster). Yet they often struggle with the mundane things we, with our more complex, adaptable thinking centres, find easy: recognising facial expressions, making small talk and, most of all, improvising.

To overcome this, machine learning models seek to train computers to categorise and then react to things themselves rather than waiting on human programming. Over the past decade, one such model, known as deep learning, has charged beyond the rest, fuelling an AI boom. It's why your iPhone can recognise your face and Alexa understands you when you ask her to switch on the lights. And deep learning did it by going back to Cajal's neural jungle. The learning is said to be deep because a machine is trained to classify patterns by filtering incoming information through layers of interconnected neuron-like nodes, as in the sketch below.
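For a feel of what "layers of interconnected neuron-like nodes" means in practice, here is a minimal sketch of data passing through two such layers using NumPy; the sizes and the random weights are arbitrary placeholders, since a real deep learning system learns its weights from large amounts of training data.

import numpy as np

rng = np.random.default_rng(0)
x = rng.random(784)                        # e.g. a flattened 28 x 28 image

# Random placeholder weights; real networks learn these values from data.
W1, b1 = rng.standard_normal((128, 784)) * 0.01, np.zeros(128)
W2, b2 = rng.standard_normal((10, 128)) * 0.01, np.zeros(10)

h = np.maximum(0, W1 @ x + b1)             # hidden layer with ReLU activation
scores = W2 @ h + b2                       # one score per candidate class
probs = np.exp(scores) / np.exp(scores).sum()   # softmax: scores -> probabilities
print(probs.argmax(), probs.max())         # predicted class and its probability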

"I'm sorry, Dave, I'm afraid I can't do that." In the 1968 sci-fi classic 2001: A Space Odyssey, a computer called HAL (Heuristically programmed ALgorithmic) takes over a spaceship. Credit: Fair Use

While these artificial networks take a staggering amount of data to train compared to a human brain, experts such as Scott-Morgan hope they will only get better and more efficient as computing power increases (it is roughly doubling every two years). Already, AI can translate speech, trade stock, and perform surgery (under supervision). Since his own surgical journey was documented in the British documentary Peter: The Human Cyborg, Scott-Morgan has been upgrading to a very Hollywood, cyborg-like interface that uses AI to track the movement of his eyes across its screen with tiny cameras and then offers up phrases for his robot voice to say: predictive text based on the letters he has spelt out so far.

As UNSW professor of AI Toby Walsh points out, machines are not limited by biological processing speeds the way humans and animals are. But others suspect that the capability of even this kind of AI is about to hit a wall. At the University of Sheffield, computer scientist James Marshall says deep learning networks are still based on "a cartoon of how the [human] brain works". They are not really making decisions, because they do not understand for themselves what matters and what doesn't. That means they're fragile. To tell a picture of a cat from a dog, for example, an AI needs to sift through a huge trove of images. While it might pick up tiny changes that would escape the notice of a human, such as a few pixels out of place, these tiny changes usually don't matter a lot because we understand the main features that set a cat apart from a dog. "But suddenly you change some pixels and the AI thinks it's a dog," Marshall says. "Or if it sees a drawing of a cat or a cat in real life [in 3D] it might have to start from scratch again."

The tendency of AI, however powerful, to break in unexpected ways is part of the reason those driverless cars we keep being promised are yet to arrive. Machines can even be fooled into seeing things that aren't really there: driverless cars tricked into accelerating past stop signs when a few stickers on the sign make them perceive increased speed limits instead, or facial recognition programs duped into skipping past suspects wearing wigs and glasses.

Any AI network is vulnerable to this kind of manipulation, and if hackers know its weak points they can do more than break it; they can hijack it to perform a new task entirely. Of course, AI can be trained to identify and resist this kind of sabotage too but, at some point, it will encounter a problem it hasn't prepared for.

Perhaps a little paradoxically, some experts say that a way to give deep learning more common sense is to fuse it with the old, more rigid form of AI that came before it, where machines used hard-coded rules to understand how the world worked. Others say deep learning needs to become more flexible yet, writing its own algorithms and programs to perform new functions as it needs to, even testing its actions in the real world through robotics (or at least very good simulators) to help it understand causality. Amazon's new line of Alexa assistants look through a camera to better understand the world (and their owners).

"But I don't think [deep learning] will ever work for driverless cars," Marshall says. "When you have to build a more and more complicated machine for a fairly simple task, maybe the machine is built wrong."

Arnold Schwarzenegger (and his iconic Austrian accent) starred as a killer cyborg in The Terminator franchise.

Marshall is flying a drone around his lab. It's not bumping into walls, the way drones normally do when trying to distinguish one beige slab of office wallpaper from another. This drone has a tiny chip in its brain holding an algorithm borrowed from a honeybee. It tells it how to navigate the world as the insect does.

At Marshall's lab in Sheffield, now a company offshoot of his university called Opteran, the team is trying something new: modelling machine thinking on animals. Marshall calls it natural intelligence, not artificial intelligence. Autonomy, the kind driverless cars and robot vacuums need to navigate their surrounds, is a solved problem, he says. "It happens all the time in the natural world. We require very little brain power ourselves to drive, most of the time we're on autopilot."

Bees have a less formidable number of neurons than humans (about a million, next to tens of billions), and yet they can still perform impressive behaviours: navigating, communicating and problem-solving. Marshall has been mapping their brains, training them to perform tasks such as flying through tunnels and then measuring their neural activity; making silicon models of different regions of their brain according to their function and then converting that into algorithms his machines can follow.

"It's like a jigsaw puzzle," Marshall says. "We haven't mapped it all yet; even those million neurons still interact in really complex ways."

So far, he has converted into code how bees sense the world and navigate it, and is busy finalising algorithms from the decision-making centre of their brains. Unlike Cajal, he's not looking to record all the exquisite detail that keeps the brain alive. "We just need how it does the function we want. We don't just reproduce the neurons, we reproduce the computation."

When he first put his bee navigation algorithm in the drone, he was stunned at how much it improved, changing course as people moved around it, as walls came closer. "That's when we saw it could work," he says. "But because everyone is focused on deep learning, we decided to make our own company to scale it up."

Marshall is also mapping the brains of ants to improve ground-based robots, imagining a world in which autonomous devices are as common as computers, cleaning and improving the world around us. And as machines get smaller (smaller even than the head of a pin or the width of a human hair), scientists hope they may help fight disease in the body too, cleaning blood or killing cancer and infection. Perhaps one day these nanobots could even repair the nerves fraying apart in people with motor neuron disease such as Scott-Morgan, or keep humans alive longer.

Marshall hopes to eventually look into the brains of larger animals too, including primates. There scientists might find more complex functions again, beyond just autonomy: advanced problem-solving, even moral reasoning. Still, just as Marshall is sure his robot bee is not a real bee, he doubts we'd be able to reproduce an entire human brain in silico and fire it up to see if some kind of consciousness springs to life. "A lot of this research comes out of that very question: could we just replicate the brain somehow, suppose we had a 3D printer," Marshall says. "But the brain isn't just its neurons, it's how it all interacts. And we still don't understand it yet."

In his latest book Livewired, US neuroscientist David Eagleman describes in new detail the plasticity of the human brain, where neurons fight for territory like drug cartels. There may even be a kind of evolution, a survival of the fittest, being waged within our minds day to day as new neural connections are forged. Quantum scientists, meanwhile, wonder if reactions are happening inside the brain, at its smallest scale, which we cannot even measure. How then could we ever hope to replicate it accurately? Or upload someone's consciousness to a machine (another popular sci-fi plot)?

Will Smith battles another pesky AI that thinks it knows best (and a few thousand robots) in the 2004 film I, Robot.

Of all the renderings of AI in science fiction, few occupy the minds of real-world researchers like the singularity: a hypothetical (and some say inevitable) tipping point where machine intelligence growth becomes exponential, out of control. In the 1960s, British mathematician I.J. Good spoke of an "intelligence explosion", and everyone from Stephen Hawking to Elon Musk has since weighed in.

The theory is that as soon as we have a system as smart as a human, and we allow it to design a system superior to itself, we'll kick off a domino effect of ever-increasing intelligence that could shift the balance of power on Earth overnight. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded," Hawking told the BBC in 2014.

And, if AI were ever smart enough to be put in charge and make decisions for us, as is imagined in films such as I, Robot and The Matrix, what if their radical take on efficiency involves enslaving or powering down humans (i.e. mass murder)? Remember the glowing red eye of HAL the AI in 2001: A Space Odyssey, who decided the best thing to do, when faced with a crisis far out in space, was to stage a mutiny against his human crew? Musk himself says that, for a powerful AI, wiping out the human race wouldn't be personal: if we stood in its way, it would be a matter of course, like squishing an ant hill to build a road.

"When we refer to intelligence in machines, we usually mean we've taught a computer to do something that in humans requires intelligence," Walsh says. As of 2021, those smarts are still very narrow: beating a human in a game of chess, for example. AI enthusiasts point to machines helping write music or mimicking the styles of great painters as signs of burgeoning creativity, but such demonstrations still rely on considerable human input, and results are often random or spectacularly bad. The limits of deep learning again mean true spontaneity, originality, is lacking. At IBM, Arvind Krishna imagines you could train an AI on images of what is and isn't beautiful, good art and bad art, for example, but that would still be training the AI on its creator's own tastes, not moulding a new artist for the world. Mostly, experts see machines becoming another tool to deepen human creativity and decision-making, revealing patterns and combinations that might otherwise have been missed.
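To see why such a system would only ever echo its trainer, here is a rough, hypothetical sketch (not IBM's system; the "images" here are random stand-in numbers rather than real paintings) of fitting a classifier to one curator's "good art"/"bad art" labels:

```python
# Hypothetical sketch: a "taste classifier" trained on one curator's labels.
# It can only reproduce the curator's preferences; it does not create art.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for image features (e.g. flattened thumbnails or embeddings);
# a real system would extract these from actual paintings.
features = rng.normal(size=(500, 64))
# Labels supplied by a single human curator: 1 = "good art", 0 = "bad art".
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Agreement with the curator's taste: {model.score(X_test, y_test):.2f}")
# A high score only means the model mimics its trainer's judgments,
# which is Krishna's point: the AI inherits a taste, it does not form one.
```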


Still, Walsh says there's no scientific or technical reason why the gap between human and machine intelligence couldn't close one day. "Every time we thought that we were special, that the sun went around the Earth, that we were different than the apes, we were wrong," he says. "So to think there's anything special about our intelligence, anything that we could not create and probably surpass in silicon, I think would be terribly conceited of the human race."

Indeed, machines have a lot of apparent advantages over us mere flesh bags, as Hawking alluded to. They're faster thinkers, with bigger, potentially infinite memories; they can network and interface in a way that would be called telepathy if a human could do it. And they're not limited by their physical bodies.

In Scott-Morgan's case, transforming into a cyborg has already come with unexpected benefits. He can no longer speak on his own ("I'm answering these questions long after my body has stopped working sufficiently well to keep me alive," he writes instead) but, through his new robot voice, he can communicate in any language. In May, his digital avatar even broke into song during a live interview with broadcaster Stephen Fry. His wheelchair, meanwhile, will soon allow him to stand, so he will tower over his fellow mortals, and hopefully, with the aid of an inbuilt AI, it will drive itself wherever Scott-Morgan wishes to go. ("I envision being able to speed through an obstacle course or safely make my way through a showroom of porcelain vases.")

The hair of his avatar is never out of place, and "my powers will double every two years. I'll be a thousand times more powerful by the time I'm 80." He's working on programming in a maniacal laugh for his avatar, too.

Of course, because these AI networks are being built by humans, they may inherit the worst of us along with the best. We've seen this already on platforms such as Facebook and YouTube, where AI used to curate user content has been shown to veer sharply into extremism and misinformation, or in police surveillance networks learning their human developers' cultural prejudices. And, because AIs operate using complex mathematics, they are often themselves a black box, hard to scrutinise. Experts, including the late Hawking, have stressed that regulation and ethical frameworks must catch up fast to the technology, so we can maximise its social good, not just profit margins.

But what we may learn, too, is that there's a ceiling to how intelligent something can be. "The universe is full of fundamental limits," Walsh says. "It might not be [as simple as] we wake up one day and the computers can program themselves. I suspect that we will get smarter computers, but it will be the old-fashioned way, through our sweat, ingenuity and perseverance."

While Marshall doubts we'll ever create a machine that is itself conscious (along the lines of, say, the eloquently self-aware cyborgs in Blade Runner), he is wary of the new push for robots or algorithms that can evolve independently, designed to breed the way computer viruses spread now and so rewrite and advance their own programming. "I don't think that's the path," Marshall says. "I think we need to always know what it does, and, if it can evolve on its own, well, life finds a way ..."

How can you tell? Cyborgs called replicants are much like humans in the 1982 sci-fi film Blade Runner. Credit: Fair Use

Rather than turning to one all-knowing AI to run the show, many experts think it more likely we will draw on the power of machines to improve our own thinking. If we had a better way to connect with computers, closer than our screens, futurists wonder if we could surf the internet with our minds, back up our memories to the cloud, even download ready-formed skills such as a second language, or another sense entirely, like echolocation or infrared vision.

In 2020, Elon Musk was ruling out none of this when he introduced the world to a pig called Gertrude and the coin-sized computer chip in her brain he hoped would allow people to plug in directly to machines one day. "It's kind of like a Fitbit in your skull with tiny wires," Musk said, conceding "this is sounding increasingly like a Black Mirror episode". In 2021, a monkey with the same chip, made by Musk's company Neuralink, was shown playing a game of ping-pong using only his mind to control a joystick.

Labs, including military labs, around the world have been developing neural implants for more than a decade, mostly to help people with paralysis operate robotic limbs and those with epilepsy head off seizures. In 2016, an implant connected to a robotic arm even gave back the sensation of touch, as well as movement, to a man paralysed from the neck down; he used it to fist-bump President Barack Obama.

But this is still new technology, so far involving about 100 electrodes inserted into the brain that read its neural signals and send them wirelessly back to a machine. Neuralink's prototype has more than 1000 electrodes, each smaller than a human hair, and grand claims of fast insertion into the skull using robotic surgery (with no need for even a general anaesthetic).

Plunging anything into the brain is risky and can cause damage. But in 2016 two neurologists at the University of Melbourne, Tom Oxley and Nicholas Opie, developed a clever technique to insert an implant without the need for open surgery, using, Oxley says, the veins and blood vessels as "the natural highway into the brain". They've just received $52 million in funding from Silicon Valley to run more clinical trials of their own chip, called the Stentrode, in the US. It's about the size of a paperclip and, in Melbourne, it's helped patients with motor neuron disease text, email and bank online by thought alone.


Neuralink's end goal is to develop a non-invasive headset instead of a chip, but for now such external devices pick up a much weaker signal from the brain. Facebook, meanwhile, is looking at wearable wrist devices that would read your mind, literally, where nerves carry messages down to your hands, eventually allowing users to do away with the traditional mouse and keyboard and type at a speed of 100 words per minute just by thinking. Like Neuralink, Facebook's first goal is helping patients with paralysis, but it also plans to scale up to everyday users. Already, researchers funded by Facebook have managed to translate brain waves into speech with an accuracy rate of between 61 and 76 per cent (that beats Google Translate in some cases), using existing electrodes implanted in the brains of patients with epilepsy.

"Some of this work being done by Facebook and Musk is right out on the edge for enhancement," says the chief executive of Bionics Queensland, Robyn Stokes, "but it will likely benefit health applications along the way." Just as brain chips could become digital assistants of the mind, she imagines they could also help manage mental health conditions such as serious depression. "Those sorts of brain-computer interfaces are really advancing quickly," she says, pointing to the Stentrode. She expects an implant that can perform many functions inside the body, beyond reading brainwaves, will soon follow.

Even then, there are still concerns. While the brain's now-famed plasticity could help it rewire around implants, for example, some experts warn it could also mean it quickly forgets how to perform important functions if they are taken over by machines. What then if something fails?

Peter Scott-Morgan tries out AI technology that tracks his eye movements to spell out his speech. Credit: Cardiff Productions

Still, enthusiasts, or transhumanists, imagine the next stage of human evolution will inevitably be technological: future generations can expect reinforced bones and improved brain power thanks to cybernetic upgrades. In the British drama Years and Years, a new parental nightmare plays out as a daughter announces she wants to upload her mind and live as a machine. ("I don't want to be flesh. I want to escape this thing and become digital.")

In his first book on robotics in 1984, long before his disease had emerged, Scott-Morgan himself considered how AI might unlock human potential, and vice versa. "AI on its own is like a brilliant jazz pianist, but without anyone to jam with," he says now. "It's nowhere near its full potential." The duet of human and AI, meanwhile, would seem "close to magic ... a mutually dependent partnership, not a rivalry". And, to his mind, it could well be the only route that doesn't lead to a dead end. "I anticipate that otherwise there'll be a crippling backlash against what's typically perceived as the uncontrolled rise of raw AI."

Scott-Morgan plans for his eye-controlled communication interface to rely more and more on its underlying AI to generate his speech. That means sometimes what comes out will not be what "biological Peter" was planning to say. "And I'm very comfortable with that. I keep reassuring [everyone] I have absolutely no qualms about technology potentially making me appear cleverer, or funnier, or simply less forgetful, than I was before."

Others imagine a greater fusion of robotics, especially nanotech, with animals too. Already parts of nature are being re-engineered as technology in the lab, from viruses repurposed as vaccines and computer chips that mimic the function of human organs to a robot-fish hybrid sent down as a deep-sea probe to collect data beneath the waves. Both the US and Russian armies have kitted out trained dolphins as underwater spies over the years, so perhaps it's no surprise military researchers have been looking at going further, even putting mind-controlling brain chips into sharks next. And, if bees die out, some experts say cyborg insects may be needed to pollinate plants in their place. All this again raises the strange question of when something is alive, or conscious, and whether we are building better robots or creating new life entirely.

The Terminator robots have no plans to co-exist with humans. They want the whole planet. Credit: Fair Use

Even if we don't get shark cyborgs, low-cost lethal machines are already changing the face of warfare. Imagine fighter drones talking to one another to find bombing targets, instead of a human pilot back at a base. Or swarms of explosive drones slamming themselves into people and buildings.

These are not visions of the future but news stories from 2020. According to a recent UN report, Turkish drones, packing explosives and facial recognition cameras, were sent out by Libya's army in 2020 to eliminate rebels via swarm attack in Tripoli, without requiring a remote connection between drone and base. They were, effectively, hunting their own targets. And the tech on board was not much more impressive than what you'd find on a smartphone. Meanwhile, the Poseidon is a new class of robotic underwater vehicle that Russia is said to have already made, which can travel undetected and launch cobalt bombs to irradiate entire coastal cities, all unmanned.


Machines that decide to kill like this, based on their sensors and a pre-programmed target profile, are making humanitarian groups increasingly nervous. The International Committee of the Red Cross wants the world's governments to ban fully autonomous weapons outright. ICRC president Peter Maurer says they will make it difficult for countries to comply with international law, in effect "substituting human decisions about life and death with sensor, software and machine processes".

Walsh agrees autonomous killer robots raise a host of ethical, legal and technical problems. If things go wrong or they break international law, who is held accountable? Should it be the programmer, the commander or the robot on trial for war crimes? "They're not sentient, they're not conscious, they can't have empathy, they can't be punished," Walsh says. "And that takes us to a very, very dark place. It would be terribly destabilising and would change the speed and scale of war."

Of course, he adds, autonomous systems built for defence, such as the robots used to clear landmines, show that AI can reduce casualties in war too. And computers will continue to come online that can process battlefield data and make recommendations faster than humans ever could. "But [we need] human oversight, human judgment, which is still significantly better than machines, at least today," Walsh says.


He thinks we should ban lethal autonomous weapons as we have chemical and biological weapons (as well as blinding lasers and cluster munitions), with enforcement powers for the UN to check no rogue state is stepping out of line.

The problem is that such bans rarely happen before things get ugly. For chemical weapons, it took the horrors of the First World War.

"I'm fearful that we won't have the initiative to do the same here until we've seen such weapons being used," Walsh says. "A swarm of robot drones, hunting down humans and killing them mercilessly. It will look like a Hollywood movie."

Read more:

Die as a human or live forever as a cyborg: Will robots rule the world? - Sydney Morning Herald

4 Reasons Sunday Is the Perfect Day to Take an Updated Pet Photo – wpdh.com

Let's face it: most of us take photos of our pets on a regular basis. But if you don't, All American Pet Photo Day is the one day a year when you should be sure to snap a picture of your pet.

Needing a picture of your pet may seem silly until you give it some thought. Besides the obvious reason, which is bragging about how cute they are and comparing photos with your co-workers, there are some more critical reasons to have an updated photo of your furry, or even scaly, family member.

If your pet gets lost, an updated photo will be a big help in finding them, and you will be able to upload it straight to social media from your phone.

Social media is a big reason to have the latest photo of your pet. Using their likeness instead of yours can keep you a little off the grid, away from the trolls and the haters.

A happy thought: having a pet photo loaded on your phone can be a great thing to glance at if your day is going south. Their gaze will keep you in a pleasant mood and hopefully take your mind off the work stress.

If someone has to pick up your pet for you from daycare, the kennel or the vet, you want to be sure to send them a picture so they don't bring home the wrong dog.

So this Sunday, snap away with a real camera or just your cell phone. Tell that animal/critter in your life they matter by taking a photo that you can plan to keep forever. Then stick a reminder in your phone for July 11, 2022.

Why do they meow? Why do they nap so much? Why do they have whiskers? Cats, and their undeniably adorable babies known as kittens, are mysterious creatures. Their larger relatives, after all, are some of the most mystical and lethal animals on the planet. Many questions related to domestic felines, however, have perfectly logical answers. Here's a look at some of the most common questions related to kittens and cats, and the answers cat lovers are looking for.


Excerpt from:

4 Reasons Sunday Is the Perfect Day to Take an Updated Pet Photo - wpdh.com

Mind and body connect with Garmin Venu 2 Stuff – Stuff Magazines

Garmin has once again upped the ante in the GPS smartwatch and fitness tracking landscape with the release of the Garmin Venu 2.

The second-generation Garmin Venu 2 is a GPS smartwatch that combines sports, fitness and functionality for the modern age. It is designed to be the bridge between mind and body so that its users can live a healthy, holistic and active lifestyle.

The Venu 2 utilises Garmin's advanced health-monitoring features to give the user an up-to-date, stats-based reflection of their mental and physical health. It is all about giving users the information they need to be the best version of themselves.

Advanced fitness features as well as more than 25 built-in apps will give users the perfect tool to get in tune with their mind and body.

These include GPS sports apps for walking, running, cycling, HIIT, swimming, golf and many more.

On a micro level, this is fantastic for sportsmen and women who want to track specific stats, such as stroke-type detection for swimmers and shot-distance measurement for golfers, from a long list of sport-specific features.

Functionality is where the Venu 2 really shines, with a plethora of options for the user.

Some of the daily smart features include the well-known calendar and weather apps as well as continuous activity tracking features so the user literally does not miss a step.

At the physical workout level, the Venu 2 also does not disappoint. The built-in animated workouts are one of its best features, with in-depth muscle targeting for each workout as well as guidance on the correct way to do each specific curl or lunge.

Workouts take on another dimension, as users can follow preloaded workouts or fully customize every aspect of their workout in the Garmin Connect app on a smartphone. The preloaded workouts include cardio, yoga, strength, HIIT and Pilates. Essentially, the Venu 2 fits in with your lifestyle and creates the best platform to help users achieve their daily goals.

Health monitoring features such as Health Snapshot, Body Battery energy levels, sleep score, fitness age and all-day stress tracking combine all that is required for the modern user on a 24/7 basis.

On a monitoring level, the sleep scoring system comes in really handy for tracking your body's sleep response, one of the key factors of a healthy lifestyle, and it can also give insights to improve the quality of sleep. The stress tracking feature is another key element of the Venu 2, with relaxation reminders to give users a real balance in their daily routines. The built-in memory can also store up to 200 hours of activity data on the watch.

As usual, the Venu 2 also gives access to your Android or Apple smartphone, with smart notifications delivered to your wrist. Safety is another of the Venu 2's best features, with built-in incident detection and assistance: a live location can be sent to emergency contacts, which is quite reassuring for those who prefer outdoor activities.

Activities can also be enjoyed uninterrupted, with a battery that lasts up to 11 days in smartwatch mode and up to 8 hours in GPS mode, thanks to rapid recharging and a battery saver mode. A water rating of 5 ATM, withstanding pressures equivalent to a depth of 50 metres, is another welcome feature, perfect for users with a love for water-based activities, or simply for taking a shower.

The AMOLED display gives a crystal-clear picture of your health, and with a superb battery life you will never miss a beat, whether you are training or relaxing, with up to 650 songs stored right on your wrist. Users can also upload playlists from Deezer, Amazon Music or Spotify.

This circular watch has an aesthetic beauty and comes in two colours as well as two wrist sizes to cater to every individual's needs. Another advantage of the Venu 2 lies in its lightweight (38.2 grams) and durable design, with a silicone strap, a stainless steel bezel and a Corning Gorilla Glass 3 lens.

Further personalization comes in the form of watch faces, apps and widgets from Garmin's Connect IQ Store, which can be synced with the Garmin Connect app on a user's compatible smartphone to review health and fitness data in detail.

As the Venu 2 smartwatch is part of the Garmin Connect ecosystem and community, it lets you store and monitor all your health stats, join challenges, earn badges, and interact with friends and family.

Garmin has always been about connecting mind and body with its latest cutting-edge technology, and the Venu 2 is the newest star addition to the Garmin family.

See more here:

Mind and body connect with Garmin Venu 2 Stuff - Stuff Magazines

Making your coding job easier: Where to start – Techaeris

Coding a computer program or website can be a complicated, confusing process. There are many pieces you must know and put in the right place for the end product to operate correctly. Knowing the basic languages, learning as much as you can, and practicing whenever you can will make you an expert quickly and allow you to share your knowledge with others. Here are a few ways that you can make your coding job easier.

Estimated reading time: 4 minutes

Your coding job will become far simpler if you have mastered basic programming languages such as HTML, CSS, and JavaScript. Knowing how these technologies work and practicing them when you can will gradually make using them easier as you go. You should also become familiar with other tools that are used extensively when writing code. If you have experience with a Python formatter or the different website templates on the market, you can add these to your workflow and simplify the process for yourself. Make a list of the types of programming that you want to become an expert in, then research the places where you can study them.
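As a quick illustration of the formatter point, here is a minimal sketch using the widely available black formatter for Python (assuming it is installed; the snippet being formatted is made up for the example). A formatter tidies spacing and style automatically, so you can focus on logic rather than layout:

```python
# Minimal sketch: letting the "black" formatter clean up messy source code.
# Requires: pip install black. The same effect comes from the command line
# with "black your_script.py".
import black

messy = "def add( a,b ):\n    return a+ b\n"

# format_str applies black's standard style rules to a string of source code.
tidy = black.format_str(messy, mode=black.FileMode())
print(tidy)
# Prints:
# def add(a, b):
#     return a + b
```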

Even after you have become an expert at the basic coding languages, you need to keep studying the other aspects of programming. Reach out to your local college or community learning center to see what courses they may offer, and register for as many as your schedule permits. Look online for free lessons that experts in the field upload. Take notes on what you learn, and then apply it as soon as you can. Practice these skills whenever you have a free moment. Build your own website on your laptop or desktop to display what you know, or offer to design one for a friend or colleague. Keep up with the latest trends and research where you can find out more about them.

While learning everything you can about coding can make you an expert, you must utilize those skills to become great at programming. Set aside a portion of each day to get on your computer and practice what you know. Repeat lessons that you have difficulty with, and contact your instructor or someone who knows the product well if you get stuck. Write the code for a procedure, then test it to see if it works. Get together with others who enjoy creating programs and websites and work together. If you run into an issue, you can collaborate to find the answer, and everyone in attendance will understand what happened and the method you used to reach the correct solution.
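For instance, the write-then-test habit can be as simple as a small Python procedure checked with a few assertions (the function and values below are purely illustrative):

```python
# Illustrative only: write a small procedure, then test it immediately.
def average(numbers):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    if not numbers:
        raise ValueError("average() needs at least one number")
    return sum(numbers) / len(numbers)

# Quick checks: run the procedure on known inputs and compare the results.
assert average([2, 4, 6]) == 4
assert average([1.5, 2.5]) == 2.0

try:
    average([])
except ValueError:
    print("Empty input is rejected, as intended")

print("All checks passed")
```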

Sharing the knowledge you have gained with others interested in coding can help refresh it in your own mind and make the process simpler for you. Record videos of yourself explaining different methods of writing code and upload them to a platform that others can access. Write a blog post on your own website, or offer to provide an article for another site or publication. Volunteer to teach classes and seminars on the aspects of programming that you know well, or tutor someone who has just started in the field and could use extra assistance to succeed. When you explain what you know to another person, you reinforce it in your own memory.

Programming a website or computer software can be a complex undertaking. There are many different languages to know, and you must be aware of where to put them. However, with the right training, a great deal of practice, and collaboration with other programmers, you can become a coding expert. Exercising your knowledge as frequently as you can, talking to others about it, and designing courses to share will make writing each web page or program easier every time.

What do you think about these tips for making your coding job easier? Let us know on social media by using the buttons below.

Last Updated on July 11, 2021.

Read the original post:

Making your coding job easier: Where to start - Techaeris