Monthly Archives: July 2021

1/ST BET AI Pick of the Week (7-24) – VSiN Exclusive News – News – VSiN

Posted: July 25, 2021 at 3:26 pm

We've been tinkering with the Artificial Intelligence programs at 1/ST BET, which give us more than 50 data points (such as speed, pace, class, jockey, trainer and pedigree stats) for every race, based on how you like to handicap.

Last Saturday, our pick on Arklow lost, so we dip to 8-of-18 overall since taking over this feature. Based on a $2 Win bet on each A.I. Pick of the Week, that's $36 in wagers and payoffs totaling $40.40, for a still-respectable return of $2.24 for every $2 wagered.
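The running tally above works out with quick arithmetic. A minimal sketch (the $40.40 payoff total is the figure quoted in the column; everything else follows from 18 flat $2 bets):

```python
# Sanity check of the column's running tally: 18 picks at a flat $2 Win
# bet each, with the payoff total quoted above.
bets = 18
stake = 2.00
total_wagered = bets * stake      # $36.00 in wagers
total_returned = 40.40            # payoffs to date
roi_per_2 = total_returned / total_wagered * stake
print(f"${total_wagered:.2f} wagered, ${total_returned:.2f} returned, "
      f"${roi_per_2:.2f} back per $2 bet")
```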

This week, I ran Saturday's plays from me and my handicapping friends in my Tuley's Thoroughbred Takes column at vsin.com/horses through the 1/ST BET programs and came up with our A.I. Pick of the Week:

Saturday, July 24

Del Mar Race No. 10 (9:36 p.m. ET/6:36 p.m. PT)

#2 Going Global (4-5 ML odds)

Going Global ranks 1st in 16 of the 52 factors used by 1/ST BET A.I.

This 3-year-old also ranks 1st in 5 of the Top 15 Factors at betmix.com, including best speed in last race, best average speed in last three races and best last turn time.

She's also ranked in the Top 5 in 8 of the other 10 categories, including No. 2 in average lifetime earnings and average turf earnings, plus No. 3 for the trainer/jockey combo of D'Amato and Prat.

1/ST BET has a special offer for new customers: Get an instant $10 free upon sign up and then earn $10 for every $1,000 wagered, up to $

Go here to read the rest:

1/ST BET AI Pick of the Week (7-24) - VSiN Exclusive News - News - VSiN

Posted in Ai | Comments Off on 1/ST BET AI Pick of the Week (7-24) – VSiN Exclusive News – News – VSiN

Why AI Is Pushing Marketing Professionals To Reinvent Themselves – Forbes

Posted: at 3:26 pm

Changing course

Today's marketers are challenged to adapt to new technologies, consumer habits and practices, channels, and methods of engagement, arguably faster than any other generation. One of the hottest areas of interest is artificial intelligence. How can AI be leveraged to understand, interact with, and generate loyalty with consumers? Raj Venkatesan (Darden Business School), co-author of The AI Marketing Canvas: A Five-Stage Road Map to Implementing Artificial Intelligence in Marketing with Jim Lecinski (Kellogg Business School), shares insight on how marketers must upskill to address the changing marketing landscape.

Kimberly Whitler: How has marketing evolved?

Raj Venkatesan: "The art challenges the technology, and the technology inspires the art." I think this quote from John Lasseter is very appropriate when considering the evolution of marketing. Like art, marketing has a symbiotic relationship with technology. In the old days, we had mass marketing via radio. When the new technology of TV came about, we saw the golden era of television commercials, but we still had one ad for all customers. Then came cable TV, and we started segmentation: a brand perhaps had three ad versions for its different segments, aired on the appropriate cable channels. With direct mail came the advent of one-to-one marketing, and customization started to increase. With digital marketing and the rise of the internet, we had still more customization. With AI, we are now at the stage where brands are personalizing their marketing to a segment of one.

All through this evolution, though, the fundamentals of marketing hold true: address customer needs, focus on benefits rather than features, develop emotional connections, be authentic, etc.

Whitler: Given these changes, how should marketing professionals reinvent themselves?

Venkatesan: Invest in yourself, take courses online about AI, attend conferences, try experiments. The key is to focus on your strengths as a marketer, i.e., a deep understanding of customers and their needs. AI and analytics provide you with the tools to obtain customer insights, and use these insights to develop personalized marketing.

Modern marketing professionals are connectors and collaborators. They are the key executives within organizations who advocate for customers. They need the ability to give data science professionals the right guidance and to ask the right questions that can be answered by AI/analytics. We are not at a stage, and perhaps never will be in the near future, where AI/analytics can manage a marketing campaign end to end without human intervention. There is uncertainty in any prediction, and the marketer's role is to combine data-driven predictions with their own heuristics about the customers.

Successful marketers will view data, and data science professionals, as their allies and key collaborators.

Whitler: What skillsets should marketing organizations be looking to add?

Venkatesan: Marketers need professionals who understand AI and can work with AI or data science specialists. They also need project managers who are adept at agile product development. Using AI requires a lot of experimentation and agility. Professionals who can manage multiple projects and build flexibility in organizations are critical. There is also a need to work with IT to understand the plethora of technology solutions available to collect and process customer data. Professionals who are good at understanding the output from analytics and developing customer stories that can provide insights to senior managers are very valuable. Finally, as marketing uses more data, there is a need for organizations to develop skills around privacy, responsible customer data management and cybersecurity.

Join the Discussion: @KimWhitler

In full disclosure, Venkatesan is a colleague at the Darden Business School.

More:

Why AI Is Pushing Marketing Professionals To Reinvent Themselves - Forbes

Posted in Ai | Comments Off on Why AI Is Pushing Marketing Professionals To Reinvent Themselves – Forbes

Not All AI Is Really AI: What You Need to Know – SHRM

Posted: at 3:26 pm

A wide range of technology solutions purport to be "driven by AI," or artificial intelligence. But are they really? Not everything labeled AI is truly artificial intelligence. The technology, in reality, has not advanced nearly far enough to actually be "intelligent."

"AI is often a sensationalized topic," said Neil Morelli, chief industrial and organizational psychologist for Codility, an assessment platform designed to identify the best tech talent. "That makes it easy to swing from one extreme reaction to another," he said.

"On the one hand, fear of AI's misuse, 'uncontrollability,' and 'black box' characteristics. And on the other hand, a gleeful, over-hyped optimism and adoption based on overpromising or misunderstanding AI's capabilities and limitations." Both can lead to negative outcomes, he said.

Much of the confusion that exists over what AI is, or isn't, is driven by the overly broad use of the term, fueled to a large degree by popular entertainment, the media and misinformation.

[Want to learn more about the future of work? Join us at the SHRM Annual Conference & Expo 2021, taking place Sept. 9-12 in Las Vegas and virtually.]

What Is AI, Really?

"Much of what is labeled as 'artificial intelligence' today is not," said Peter Scott, the founding director of Next Wave Institute, a technology training and coaching firm. "This mislabeling is so common we call it 'AI-washing.'"

The boundaries have often shifted when it comes to AI, he said. "AI has been described as 'what we can't do yet,' because as soon as we learn how to do it, we stop calling it AI."

The ultimate goal of AI, Scott said, "is to create a machine that thinks like a human, and many people feel that anything short of that doesn't deserve the name." That's one extreme.

On the other hand, most of those in the field "will say that if it uses machine learning, especially if it uses deep learning, then it is AI," he said. Officially, "AI is a superset of machine learning, which leaves enough wiggle room for legions of advertisers to ply their trade, because the difference between the two is not well-defined."

Jeff Kiske, director of engineering, machine learning at Ripcord, agrees. Most of what is called AI today could better be referred to as "machine learning," he said. This, he added, is how he prefers to refer to "cutting-edge, data-driven technology." The term machine learning, noted Kiske, "implies that the computer has learned to model a phenomenon based on data. When companies tout their products as 'driven by machine learning,' I would expect a significantly higher level of sophistication."

Joshua A. Gerlick, a Fowler Fellow at Case Western Reserve University in Cleveland, said that AI "is an incredibly broad field of study that encompasses many technologies." At the risk of oversimplification, he said, "a common theme that differentiates a 'true' from a 'misleading' AI system is whether it learns from patterns and features in the data that it is analyzing."

Many machine learning use cases in HR promise this kind of learning but don't actually rise to the level of true artificial intelligence.

Implications for HR

For example, Gerlick said: "Imagine a human resources department acquiring software that is 'powered by AI' to match newly hired employees with an experienced mentor within the organization. The software is programmed to find common keywords in both the profiles of the mentees and potential mentors, and a selection is obtained based upon the highest mutual match." While an algorithm is certainly facilitating the matching process within the software, Gerlick said, "it is absolutely not an AI-powered algorithm. This algorithm is simply replicating a process that any human could accomplish, and although it is fast, it does not make the matchmaking process more effective."

A truly AI-powered software platform, he said, would require some initial data, like profiles of previous mentee-mentor pairs and whether the outcomes were successful. It would then learn the factors that led to a successful pairing. "In fact, the software would be so sensitive that it might only be applicable to identifying successful mentee-mentor pairs at this one specific organization," Gerlick said. "In a roundabout way, it has 'learned' how to understand the unique culture of the organization and the archetypes of individuals who work within it. A human resources executive should find that the AI-powered software platform improves its effectiveness over time, and hopefully exceeds the success of its human counterparts, leaving them the time to undertake more complex initiatives."
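Gerlick's contrast can be sketched in a few lines of illustrative Python. Everything here (the keywords, the pairing history, the scoring functions) is invented for the example; the point is only that the first matcher applies a fixed rule, while the second derives its weights from past outcomes:

```python
from collections import defaultdict

def keyword_score(mentee: set, mentor: set) -> int:
    """Rule-based matching: count shared keywords. Nothing is learned."""
    return len(mentee & mentor)

def learn_weights(history):
    """Estimate how predictive each shared keyword was of a successful
    pairing, from (mentee_keywords, mentor_keywords, succeeded) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for mentee, mentor, succeeded in history:
        for kw in mentee & mentor:
            totals[kw] += 1
            hits[kw] += succeeded
    return {kw: hits[kw] / totals[kw] for kw in totals}

def learned_score(mentee: set, mentor: set, weights) -> float:
    """Data-driven matching: a shared keyword counts only as much as it
    predicted success at this particular organization."""
    return sum(weights.get(kw, 0.0) for kw in mentee & mentor)

# Invented history: 'golf' was often shared but only sometimes helped,
# while 'python' pairings always worked out.
history = [
    ({"python", "golf"}, {"python", "golf"}, True),
    ({"golf"}, {"golf"}, False),
    ({"python"}, {"python"}, True),
]
w = learn_weights(history)
# keyword_score rates a 'golf' match and a 'python' match identically;
# learned_score does not, because its weights came from the data.
```

Because the weights are fit to one organization's history, the learned matcher exhibits exactly the sensitivity Gerlick describes: it may not transfer anywhere else.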

Christen da Costa, founder of Gadgetreview.com, said he thinks the term "AI" is thrown around far too readily. "Most automation tools, for example, are not what I would call AI," he noted. "They take in information fed to them by the user and look for cases that match it. Over time they learn the user's preferences and become better, but that's algorithmic learning. While it can be an aspect of AI, it does not an AI make."

Does it matter? It can. When HR professionals are considering adopting new technology, it's important to not be confused, or swayed, by lofty tech terms that tend to be thrown around far too frequently.

It's also important to not be overly enamored of, or potentially misled by, the lure of "artificial intelligence."

"Thoughtful readers and observers of AI in HR would be wise to remember that AI systems help perform manual, repetitious and laborious tasks in HR," Codility's Morelli said. "However, the range and scope of these tasks are probably narrower than some vendors and providers lead people to believe."

There is no AI system that understands, perceives, learns, pattern-matches or adapts on its own, he said. "Instead, it needs human-labeled and curated data as a starting point. For this reason, users and evaluators should apply more scrutiny to the training data used to teach AI systems," he said, "especially the data's origin, development and characteristics."

"When skeptical over whether a technology is truly 'powered by AI,' consider asking a few simple questions," Gerlick suggested:

If the answers to those questions are yes, he said, "then artificial intelligence might be lending a helping hand."

Lin Grensing-Pophal is a freelance writer in Chippewa Falls, Wis.

See the original post here:

Not All AI Is Really AI: What You Need to Know - SHRM

Posted in Ai | Comments Off on Not All AI Is Really AI: What You Need to Know – SHRM

A man used AI to bring back his deceased fiancée. But the creators of the tech warn it could be dangerous and used to spread misinformation. – Yahoo…

Posted: at 3:26 pm

GPT-3 is a computer program that attempts to write like humans. Fabrizio Bensch/ Reuters

A man used artificial intelligence (AI) to create a chatbot that mimicked his late fiancée.

The groundbreaking AI technology was designed by Elon Musk's research group OpenAI.

OpenAI has long warned that the technology could be used for mass information campaigns.


After Joshua Barbeau's fiancée passed away, he spoke to her for months. Or, rather, he spoke to a chatbot programmed to sound exactly like her.

In a story for the San Francisco Chronicle, Barbeau detailed how Project December, software that uses artificial intelligence technology to create hyper-realistic chatbots, recreated the experience of speaking with his late fiancée. All he had to do was plug in old messages and give some background information, and suddenly the model could emulate his partner with stunning accuracy.

It may sound like a miracle (or a Black Mirror episode), but the AI creators warn that the same technology could be used to fuel mass misinformation campaigns.

Project December is powered by GPT-3, an AI model designed by the Elon Musk-backed research group OpenAI. By consuming massive datasets of human-created text (Reddit threads were particularly helpful), GPT-3 can imitate human writing, producing everything from academic papers to letters from former lovers.

It's some of the most sophisticated - and dangerous - language-based AI programming to date.

When OpenAI released GPT-2, the predecessor to GPT-3, the group wrote that it can potentially be used in "malicious ways." The organization anticipated bad actors using the technology could automate "abusive or faked content on social media," "generate misleading news articles," or "impersonate others online."

GPT-2 could be used to "unlock new as-yet-unanticipated capabilities for these actors," the group wrote.

OpenAI staggered the release of GPT-2, and still restricts access to the superior GPT-3, in order to "give people time" to learn the "societal implications" of such technology.


Misinformation is already rampant on social media, even with GPT-3 not widely available. A new study found that YouTube's algorithm still pushes misinformation, and the nonprofit Center for Countering Digital Hate recently identified 12 people responsible for sharing 65 percent of COVID-19 conspiracy theories on social media. Dubbed the "Disinformation Dozen," they have millions of followers.

As AI continues to develop, Oren Etzioni, CEO of the nonprofit bioscience research group the Allen Institute, previously told Insider it will only become harder to tell what's real.

"The question 'Is this text or image or video or email authentic?' is going to become increasingly difficult to answer just based on the content alone," he said.

Read the original article on Business Insider

Read the original post:

A man used AI to bring back his deceased fiancée. But the creators of the tech warn it could be dangerous and used to spread misinformation. - Yahoo...

Posted in Ai | Comments Off on A man used AI to bring back his deceased fiancée. But the creators of the tech warn it could be dangerous and used to spread misinformation. – Yahoo…

Inside the complicated, messy world of pet cloning – Massive Science

Posted: at 3:26 pm

Between 15,000 and 25,000 years ago, humans started hanging out with a particularly friendly breed of wolves: dogs. Scientists assume that these ancient pooches were first drawn to our settlements by the smell of human food and poop. We began sharing our scraps with them, hunting with them, eventually breeding the ones we liked best. This is how dogs evolved to mirror us, and how we became obsessed with them. While monkeys and pigs have more right-brain strategic grit, the dog possesses a human-facing emotional intelligence. More than man's best friend, we engineered a creature that would be a more reliable friend to us than we are to each other. And so it has accompanied us in this long game of civilization. At this point in our evolutionary friendship, we get a hit of oxytocin, the love hormone, when we look into our dog's eyes, the same burst that occurs when we first look into the eyes of our newborn baby: an evolutionary trick that, in both cases, prevents us from throwing them out.

Our obsession with dogs has come a long way since our early days of hunting and gathering together. Where birth rates decline, canines increasingly replace children. Rich cosmopolitan types source them from all over the world, emblems of their uniqueness, after reading up about the characteristics of each breed. They read books to understand their designer dog's behavior, a flourishing subgenre of science writing. They identify with their dogs, psychoanalyze them, create Instagram accounts in their name, spend more and more on their accessories, feed them ethically, even medicate them if necessary. But after 7 to 15 years, on average, their beloved pet still dies. And then the superrich dog-lover faces a painful choice: buy another dog of the same breed, or reincarnate their beloved friend for $50,000.

Commercial pet cloning has been a growing industry since Texas-based biotech company ViaGen first started offering it to Americans in 2015, but many of its customers still prefer to remain anonymous. Commissioning a clone is a deeply personal choice, one often made by a rich person mourning a recently deceased pet, and it is still far from being a widely accepted practice. There are spurious reasons for this (a perceived affront against nature), but there are also well-founded bioethical concerns. The process is relatively inefficient and usually requires impregnating multiple host dogs to produce a single clone, which means many traumatic pregnancies and many dead clones.

Our obsession with dogs has come a long way since our early days of hunting and gathering together

Photo by Marc Pell on Unsplash

But the love of a wealthy pet widow, who refuses to settle for another dog, can override all those concerns. One of the key selling points of cloning is the customer's devotion to their dog's personality, and an underlying belief that behavior is heritable, a logical continuation of the popular idea that different breeds have a particular habitus, the kind of genetic determinism that still seems acceptable when it comes to our pets. Pet cloning companies encourage this notion. They can't promise that a clone will act like its predecessor, but they do their best to insinuate as much with supportive case studies. This, their critics allege, is how they fleece grieving pet owners. Buying and selling hope, the customer and the company embark on a small-scale genetic experiment. Together, they are spawning new evolutionary strains of man's best friend. But the question remains: to what end?

"We drove to a tax-free state to receive it, so that we wouldn't have to pay taxes on the clone," says Jordan (name changed). "[Clones] aren't cheap."

The LA-based artist rented a room in a desert motel and waited there with a cocktail of feelings: determination, doubt, elation, grief. In the room next door, a ViaGen employee was waiting with a litter of four puppies, all exact genetic copies of Jordan's dearly departed Boxer. Many cloning attempts yield no clones whatsoever; some produce several. If you're lucky, like Jordan, you get four clones for the price of one.

Jordan took turns receiving each of the puppies and evaluating them. "I sat like a queen on a throne in an empty hotel room, while each one was brought in to me, so that I could spend 10 minutes with it and sort of get to know its temperament, to see, like, were they the same? Or were they different?" Ten minutes with each puppy turned out not to be enough, so Jordan took the entire litter home with him. "[I] rotated each one out for an hour on my lap. It was quite amazing. Because the very first one that I put on my lap turns out to be [the One], upside down like this with his mouth open, sleeping, snoring, just like sitting in my lap exactly like [my old dog] would do. It was the same face, the same mannerism. It was wild."

Jordan's original dog was beautiful, in his words "the Kendall Jenner of Boxers", and the clone looked almost identical. But having a gorgeous clone wasn't enough; the copy would also have to be that one soulmate special dog. Jordan was looking for that heritable behavior. He had done research on the topic. "For monozygotic twins separated at birth and raised completely differently, the correlation coefficient is about .75," he says. Behavioral geneticists refer to this question as the heritability coefficient: the proportion of variance in a specific temperamental trait in the population that is due to genetic differences.
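The twin-study logic Jordan is citing can be illustrated with a toy calculation. For identical twins raised apart, the correlation of a trait across twin pairs serves as a rough estimate of that trait's heritability; the "sociability" scores below are invented purely to show the arithmetic:

```python
# Illustrative only: estimating heritability from identical twins raised
# apart. A high cross-pair correlation suggests the trait tracks genes.
def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Made-up trait scores: twin A and twin B of each of five pairs.
twin_a = [4.0, 7.0, 5.5, 8.0, 3.0]
twin_b = [4.5, 6.5, 5.0, 7.5, 3.5]
r = pearson_r(twin_a, twin_b)  # close to 1 here, by construction
```

With real twin data the coefficient lands nearer the .75 Jordan quotes; these numbers are only a demonstration of the calculation, not of the finding.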

ViaGen's representatives and copywriters take great pains not to promise that your clone will match the original on the character front, but they do suggest that it's a possibility. On their website, they answer the question of heritable behavior with observations and customer testimonials: tales of cloned cows who roll their tongues just like their previous iterations, cats who roll their Rs the same way as their predecessors. Embryologist Dennis Milutinovich, ViaGen's cloning lab manager, offers anecdotal evidence in the same vein. "I, personally, am convinced that behavior is probably 75% genetic, and everything else, nurture, is 25%," he says to me on Zoom. "That's just based off my own viewing and experience with the clones." His colleague, Chief Science Officer Shawn Walker, is more measured. "It's the same genetic makeup, [and] genetics makes up all the characteristics of the animal, but we don't know how it affects behavior. But what I can say is I have been overwhelmingly surprised at how much it appears behavior is controlled by genetics, based on the feedback we get from the clients."

Jordan (who gave the three other clones away to friends) has taken the supposed 25% nurture that forms a dog's personality into his own hands. "If you're doing this because it's such a fantastic dog, then I feel like you should try and make as many parameters the same." He has done his best to replicate the conditions and routines of his old dog's upbringing as precisely as possible. But no amount of effort could have made the replication perfect, and there are some differences between the dogs. The new one is more confident and less scared of trucks. He's cuddlier than the original, and, Jordan admits, also a little bit naughtier. He almost sounds...better.

RePet, the fictional pet-cloning company in Roger Spottiswoode's campy techno-thriller The 6th Day (2000), has a more forward sales pitch than ViaGen. "Your RePet Oliver will be exactly the same dog. He'll know all the same tricks you taught him, he'll remember where all the bones are buried, he won't even know he's a clone." Arnold Schwarzenegger's Thanksgiving box office bomb gets a lot right about the future pet cloning industry, and American capitalism generally: the more controversial the industry, the cheesier the brand identity. "ViaGen Pets. Love that lasts forever," the company's website promises.

Katy, a ViaGen customer, says that after her clone was born, the company would send her weekly progress reports that included photos of the puppy in miniature potemkin villages. ("It was this strange kind of space that's made to look like a whole little town [with] fake grass and fake little streets and stuff... They created a simulacrum of a kind of Pleasantville-style town, with all fuzzy items, like fuzzy little cars, little fuzzy dog toys.") Her friend Garrett chimes in, "You have to see the reports because it's literally Philip K. Dick. Like, it is Total Recall. It's like, ViaGen Pets: For a better tomorrow, or, So the past never leaves, or something." Cute pet pics and inspirational copy are ViaGen's marketing bread and butter. Just as many animal products feature a smiling cow somewhere in their artwork, a controversial industry like pet cloning requires a lot of cutesy gloss. ViaGen, which also clones cats (for $35,000) and livestock, does its best not to trouble customers with the slightly messy process enabling their pets' rebirth.

Milutinovich gives me the rundown of a standard embryo implantation process. "We have a vet on staff, she works exclusively with us doing surgeries and a lot of animal care and stuff like that. She'll just make a small midline incision and exteriorize the ovary and just take my embryos into a little catheter, go right into the oviduct of the ovary, and plunge those in. Whole procedure takes ten minutes maybe. Tuck that back in, stitch it up, and hopefully you have a pregnant surrogate." ("That's a pretty good basic overview of it," Walker confirms in his distinctive Virginia drawl. "The only difference between species, between cats and dogs and horses, would be that for horses we'd do a nonsurgical transfer.")

The process hasn't evolved much since Dolly the Sheep was born in 1996. Milutinovich and his staff use a process called enucleation to prepare a surrogate's oocyte, the mother's egg cell, to accept another dog's DNA. First, they stain the surrogate's DNA in the oocyte's nucleus. Then UV light is flashed on it so that it glows in the dark, and a tiny little needle is used to suck out the DNA. Next, the nucleus of a somatic cell from the original animal is injected into the enucleated oocyte with the tiny needle. Milutinovich and team then zap the oocyte with electricity until the inserted cell fuses into its ooplasm, an egg's cytoplasm. The next step is a process called activation, in which the newly fused cell is pulsed with electricity until it kicks into gear and begins to function like a naturally fertilized oocyte; in other words, it begins to develop. After that, it's only a matter of implanting it and hoping for the best.

Articles critical of the pet cloning industry dwell on a number of ethically ambiguous aspects of the embryo development and implantation processes. A 2018 article in Smithsonian Magazine describes the cloning of a dog named Snuppy: "Many cloned pregnancies don't take hold in the uterus or die shortly after birth, as was the case with Snuppy's twin. Snuppy and his twin were two of only three pregnancies that resulted from more than 1,000 embryos implanted into 123 surrogates." An advisor to a South Korean dog cloning company is quoted as saying: "You need a good number of dogs to do this type of cloning. I would say it's about 20 percent. Very high." That means four female dogs enduring a traumatic pregnancy, and four clone babies dying, to reincarnate one rich person's dog. Bio-ethicist Jessica Pierce, writing in the New York Times, went as far as to say that the pet cloning industry was creating "a whole canine underclass that remains largely invisible to us" but whose bodies serve as a biological substrate.

Walker acknowledges that he and his team encounter non-viable embryos and birth defects, but he defends ViaGen's processes by referring me to the horror of traditional breeding practices. "Obviously we don't like to get into that discussion a whole lot, but if you talk to any dog or cat breeder, pig breeder, there's a number of animals that basically do have congenital defects, and so we do encounter those just like everybody else does in the breeding world."

Breeding clones of our pets is the easy part, but replicating their personalities is far more difficult

Photo by FLOUFFY on Unsplash

ViaGen ("The worldwide leader in cloning the animals we love") are emphatic about their love for animals. "I think the one thing you'll find is, as a general group, the company is big-time animal lovers," Walker tells me. ViaGen's Chief Science Officer peppers his cutest anecdotes, like the one about a bucking horse who liked to have his tongue scratched, or a bull named Chance whose clone was christened Second Chance, with folksy turns of phrase. "The foal's in there just lovin' on the girls," he says. "The guy would have it alongside the road and set kids on it, just a big ol' bull, ya know?" Lab manager Milutinovich is an animal lover too. "I have a surrogate cat that had our second litter of cloned cats. I brought her home and she's the best cat ever," he told me, in an effort to allay my concerns about the fate of surrogate animals. "I have a toy poodle," Walker jumps in, "and now I got an eight-month-old Great Dane that's grown like mad."

Asked about the specific number of surgeries that surrogates endure before retirement, Milutinovich demurs, then tries to reassure me. "We keep it very reasonable, because obviously, we don't want any more work for these animals than they absolutely need to [do]. After a few litters, they're feeling pretty good and we tend to give them a good home." How ViaGen determines when surrogates are "feeling pretty good" Milutinovich did not explain.

Their way of selecting surrogates made for slightly more pleasant conversation. According to Walker, the key is to find a dog with great maternal instincts. Additionally, they should be docile and easy to work with. One thing that customers sometimes find surprising, as in Katy's case, is that the surrogates don't have to be the same breed as the clones they'll give birth to. "I actually asked too many questions, kind of breaking the fourth wall of the experience," she told me. When she asked about the mother's breed, she was told it was a beagle, but no other details about the surrogate's experiences were made available.

Arnold Schwarzenegger walks into a RePet outlet in a shopping mall. On a screen embedded in the wall, a smarmy infomercial host intones, "Your pet doesn't want to break your heart. Thanks to RePet, he doesn't have to." An eager sales associate sidles up to Arnold.

You lost a dog, right?

Yes, my daughter's.

Oh, what a heartbreak. What'd you say his name was again?

Oliver.

Well, Oliver's in luck, because we're having a special this week: 20% off. When did Oliver die?

Sometime this morning.

Oh, that's perfect. We can still do a post-mortem syncording. But you gotta act fast, because there's only a 12-hour window on deceased brains.

I have a problem with that whole idea. I mean, suppose the clones have no soul, or they're dangerous?

Cloned pets are every bit as safe as real pets. Plus, they're insured.

Arnold seriously considers the proposition and says: "Let me think about that. I might be back." The associate responds with another nod to Schwarzenegger's most beloved franchise: "You'll be back." ViaGen has a way of pushing this same sales pitch a little bit harder. A pet's DNA is only useful for cloning in the immediate aftermath of its death, and ViaGen offers desperate pet widows the opportunity to store that DNA in liquid nitrogen for $1,600, just in case they may want to clone it one day. Only 10% of customers end up going forward with the whole procedure, but it's nice to think that a bit of your old friend (usually a four-millimeter punch of abdominal tissue) is still alive somewhere, in case you get rich. Though some customers have the foresight to clone their dog while it is still alive, ViaGen's sales pitch is aimed squarely at mourners.

In January 2020, Katy and her fiancé Scott adopted a Basset hound named Jenny, who died two weeks later in a gruesome elevator accident. "It was horrific," her friend Garrett says, "something that no one should ever have to go through." The pain gave birth to a plan. "I don't quite remember because I was in such a state of grief," says Katy, "but I believe some kind of idea started percolating between Scott and Garrett, both of whom are kind of futurists in their own right, and also great, execution-oriented people who solve problems. And that's kind of how Scott, I would say, grieves."

Katy had some misgivings. As soon as she and Scott had retained ViaGen and gotten this clone situation on track, she decided to pursue what might be a slightly healthier solution. While ViaGen's embryologists in Rochester were plunging oocytes into surrogates' oviducts, Katy hired a private investigator in Dallas, an ex-Navy SEAL, to find Jenny's original mother. When the PI found the owners, there was one puppy left from the original litter, but they were using him as a breeder dog. The PI tried to negotiate but couldn't make a deal ("As soon as we wanted to offer them money they started to think that they'd gotten the golden goose with these dogs") and returned to Dallas empty-handed. Katy and Scott were starting to lose hope. But then, one December day, they got a call from ViaGen saying they had a viable pregnancy and a new Jenny was on the way. Katy was elated, sensing the possibility of imminent relief from her grief and guilt.

The day she got the call, a severe winter storm hit the tri-state area. It would receive nicknames like the "Groundhog Day nor'easter" and dump a foot of snow on New York City and over three feet on some parts of the Eastern Seaboard. But, after a year of waiting, a super-blizzard wasn't enough to stop Katy. She ploughed from Tribeca to Rochester, with Garrett and her other pet Basset hound Lucy in tow. They pulled into a snowed-in Chick-fil-A parking lot next to a lone Subaru. A ViaGen vet tech emerged with a small crate, which he safely stowed in her back seat, before handing her a water bowl and some puppy food and driving away. Garrett says the whole experience was "very weird and kind of impersonal." This didn't bother Katy. "Puppy was so cute!" she recalls. Puppy was also shivering. But not everyone was pleased. Lucy, the other Basset hound, did not receive Jenny 2.0 well.

Katy's new dog would turn out to be quite different (much more sinister) than Jenny. "It just seems like a little bit of a satanic version of the original dog," she told me. "The puppy likes to kind of bite your face. Not in, like, a bite-bite-bite way, but in a kind-of like gnaw-gnaw-gnaw way. And that's a little maniacal." Garrett put a finer point on it by comparing the young clone to Damien, the boy Antichrist from the 1976 supernatural thriller The Omen. But after a year of uncertainty and regret, Katy and Scott are happy to make a go of it. After shelling out the equivalent of a year's college tuition, they don't really have a choice. Though sad for Katy, Jenny 2.0's sinister streak is a victory for nurture over nature, particularity over pure biology, a happy ending if you have the right politics.

After all these cogitations, the same question remains: was it worth all the money and, more importantly, all the animal sacrifice? Unlike space-bound dogs, surrogate pets and their spawn do not go down in history as pioneers. The outcome of their labor is at best a scientific curiosity. For now, there are few of them, but that may soon change. The price of commercial pet cloning has already halved since its inception and is expected to decrease further. Technological innovations could help boost its popularity, particularly our continued efforts to tap into the animal mind.

In The 6th Day, a pet's personality is transferred to its clone via a process called "syncording." That could be a possibility soon, according to Milutinovich. "We're getting there," he says, drawing hope from Elon Musk's Neuralink and a Chinese company's questionable efforts to copy a cat's memory. This is still a long shot, luckily, a bubble in a bubble. Our pets' uniquely mysterious minds are the only aspect of their lives still beyond our control, their last bastions of privacy from our relentless affection.

Read the rest here:

Inside the complicated, messy world of pet cloning - Massive Science

Posted in Cloning | Comments Off on Inside the complicated, messy world of pet cloning – Massive Science

How AI Will Help Keep Time at the Tokyo Olympics – WIRED

Posted: at 3:26 pm

"In volleyball, we're now using cameras with computer vision technologies to track not only athletes, but also the ball," says Alain Zobrist, head of Omega Timing. "So it's a combination where we use camera technology and artificial intelligence to do this."

Omega Timing's R&D department comprises 180 engineers, and according to Zobrist the development process started in-house in 2012 with positioning systems and motion sensor systems. The goal was to get to a point where, for multiple sports at the 500-plus sports events it works on each year, Omega could provide detailed live data on athlete performance. That data would also have to take less than a tenth of a second to be measured, processed, and transmitted during events, so that the information matches what viewers are seeing on screen.

With beach volleyball, this meant taking this positioning and motion technology and training an AI to recognize myriad shot types (from smashes to blocks to spikes and variations thereof) and pass types, as well as the ball's flight path, then combining this data with information gleaned from gyroscope sensors in the players' clothing. These motion sensors let the system know the direction of movement of the athletes, as well as height of jumps, speed, etc. Once processed, this is all then fed live to broadcasters for use in commentary or on-screen graphics.

According to Zobrist, one of the hardest lessons for the AI to learn was accurately tracking the ball in play when the cameras could no longer see it. "Sometimes, it's covered by an athlete's body part. Sometimes it's out of the TV frame," he says. "So, the challenge was to track the ball when you have lost it. To have the software predict where the ball goes, and then, when it appears again, recalculate the gap from when it lost the object and got it back, and fill in the [missing] data and then continue automatically. That was one of the biggest issues."
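The gap-filling Zobrist describes can be sketched in a few lines. This is a toy constant-velocity interpolation, not Omega's actual predictive model; the function name and data shapes are invented for illustration:

```python
def track_with_occlusion(observations):
    """Fill gaps in a 2D ball track.

    observations: list of (x, y) positions, with None where the ball was
    occluded or out of frame. Each gap is filled by interpolating between
    the last seen position and the next reacquired one, which is the
    "predict, then recalculate the gap once the ball reappears" idea.
    """
    track = list(observations)
    i = 0
    while i < len(track):
        if track[i] is None:
            start = i - 1  # last frame where the ball was seen
            j = i
            while j < len(track) and track[j] is None:
                j += 1     # first frame where the ball reappears
            if start >= 0 and j < len(track):
                x0, y0 = track[start]
                x1, y1 = track[j]
                n = j - start
                for k in range(i, j):
                    t = (k - start) / n
                    track[k] = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
            i = j
        else:
            i += 1
    return track
```

A real system would predict forward in real time (e.g., with a motion model) rather than wait for reacquisition, but the reconciliation step once the ball is seen again works the same way.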

It's this tracking of the ball that is crucial for the AI to determine what is happening during play. "When you can track the ball, you will know where it was located and when it changed direction. And with the combination of the sensors on the athletes, the algorithm will then recognize the shot," Zobrist says. "Whether it was a block or a smash. You will know which team and which player it was. So it's this combination of both technologies that allows us to be accurate in the measurement of the data."

Omega Timing claims its beach volleyball system is 99 percent accurate, thanks to the sensors and multiple cameras running at 250 frames a second. Toby Breckon, professor in computer vision and image processing at Durham University, however, is interested to see if this stands up during the Games, and, crucially, if the system is fooled by differences in race and gender.

"What has been done is reasonably impressive. And you would need a large data set to train an AI on all the different moves," Breckon says. "But one of the things is accuracy. How often does it get it wrong in terms of those different moves? How often does it lose track of the ball? And also if it works uniformly over all races and genders. Is that 99 percent accuracy on, say, the USA women's team and 99 percent accuracy on the Ghanaian women's team?"

Zobrist is confident, and explains that while it may have been easier to call in Google or IBM to supply the AI expertise needed, this was not an option for Omega. "What is extremely important, whether it's for a scoring sport, or timing sport, is that we can't have discrepancies between the explanation of the performance and the ultimate result," he says. "So to protect the integrity of the result, we cannot rely on another company. We need to have the expertise to be able to explain the result and how the athletes got there."

As for future timing and tracking upgrades, Zobrist is tight-lipped, but says the Paris Games in 2024 will be key. "You will see a whole new set of innovations. Of course, it will remain around timekeeping, scoring, and certainly also around motion sensors and positioning systems. And certainly also Los Angeles in 2028. We've got some really interesting projects for there that actually we've only just started."


Read the original:

How AI Will Help Keep Time at the Tokyo Olympics - WIRED

Posted in Ai | Comments Off on How AI Will Help Keep Time at the Tokyo Olympics – WIRED

This AI can spot sunken ships from the damn sky – The Next Web

Posted: at 3:26 pm

The Research Brief is a short take about interesting academic work.

In collaboration with the United States Navy's Underwater Archaeology Branch, I taught a computer how to recognize shipwrecks on the ocean floor from scans taken by aircraft and ships on the surface. The computer model we created is 92% accurate in finding known shipwrecks. The project focused on the coasts of the mainland U.S. and Puerto Rico. It is now ready to be used to find unknown or unmapped shipwrecks.

The first step in creating the shipwreck model was to teach the computer what a shipwreck looks like. It was also important to teach the computer how to tell the difference between wrecks and the topography of the seafloor. To do this, I needed lots of examples of shipwrecks. I also needed to teach the model what the natural ocean floor looks like.

Conveniently, the National Oceanic and Atmospheric Administration keeps a public database of shipwrecks. It also has a large public database of different types of imagery collected from around the world, including sonar and lidar imagery of the seafloor. The imagery I used extends to a little over 14 miles (23 kilometers) from the coast and to a depth of 279 feet (85 meters). This imagery contains huge areas with no shipwrecks, as well as the occasional shipwreck.
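The training setup described above (labeled seafloor imagery, wreck vs. no wreck) can be sketched with a toy classifier. This is not the author's actual model: the synthetic "sonar" tiles, the plain logistic regression standing in for a deep network, and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_tile(has_wreck):
    """Synthetic 16x16 'sonar' tile: smooth seafloor noise, plus a
    bright ship-like linear ridge when a wreck is present."""
    tile = rng.normal(0.0, 0.1, (16, 16))
    if has_wreck:
        row = rng.integers(4, 12)   # ridge position varies per tile
        tile[row, 3:13] += 1.0      # elongated anomaly, like a hull
    return tile

# Labeled training set, as one would assemble from NOAA's wreck database
X = np.stack([make_tile(i % 2 == 1).ravel() for i in range(400)])
y = np.array([i % 2 for i in range(400)], dtype=float)

# Logistic regression trained by gradient descent (a stand-in for the
# deep network a real project would use on much larger imagery)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted wreck probability
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean(((X @ w + b) > 0) == (y == 1))
```

The real work lies in the data, not the model: negative examples of plain seafloor matter as much as the wrecks themselves, which is exactly the point made above.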

Finding shipwrecks is important for understanding the human past (think trade, migration, war), but underwater archaeology is expensive and dangerous. A model that automatically maps all shipwrecks over a large area can reduce the time and cost needed to look for wrecks, either with underwater drones or human divers.

The Navy's Underwater Archaeology Branch is interested in this work because it could help the unit find unmapped or unknown naval shipwrecks. More broadly, this is a new method in the field of underwater archaeology that can be expanded to look for various types of submerged archaeological features, including buildings, statues and airplanes.

This project is the first archaeology-focused model that was built to automatically identify shipwrecks over a large area, in this case the entire coast of the mainland U.S. There are a few related projects that are focused on finding shipwrecks using deep learning and imagery collected by an underwater drone. These projects are able to find a handful of shipwrecks that are in the area immediately surrounding the drone.

We'd like to include more shipwreck and imagery data from all over the world in the model. This will help the model get really good at recognizing many different types of shipwrecks. We also hope that the Navy's Underwater Archaeology Branch will dive to some of the places where the model detected shipwrecks. This will allow us to check the model's accuracy more carefully.

I'm also working on a few other archaeological machine learning projects, and they all build on each other. The overall goal of my work is to build a customizable archaeological machine learning model. The model would be able to quickly and easily switch between predicting different types of archaeological features, on land as well as underwater, in different parts of the world. To this end, I'm also working on projects focused on finding ancient Maya archaeological structures, caves at a Maya archaeological site and Romanian burial mounds.

This article by Leila Character, doctoral student in geography, The University of Texas at Austin College of Liberal Arts, is republished from The Conversation under a Creative Commons license. Read the original article.

See the rest here:

This AI can spot sunken ships from the damn sky - The Next Web

Posted in Ai | Comments Off on This AI can spot sunken ships from the damn sky – The Next Web

PODCAST: Twa Teams, One Street – Cloning Logan Chalmers and Sheridan Satisfaction – Evening Telegraph

Posted: at 3:26 pm


View post:

PODCAST: Twa Teams, One Street - Cloning Logan Chalmers and Sheridan Satisfaction - Evening Telegraph

Posted in Cloning | Comments Off on PODCAST: Twa Teams, One Street – Cloning Logan Chalmers and Sheridan Satisfaction – Evening Telegraph

Deadline 2024: Why you only have 3 years left to adopt AI – VentureBeat

Posted: at 3:26 pm

All the sessions from Transform 2021 are available on-demand now. Watch now.

If your company has yet to embrace AI, you're in a race against the clock. And by my calculations, you have just three years left.

How did I arrive at 2024 as the deadline for AI adoption? My prediction, formulated with KUNGFU.AI advisor Paco Nathan, is rooted in us noticing that many futurists' J curves show innovations typically have a 12-to-15-year window of opportunity, a period between when a technology emerges and when it reaches the point of widespread adoption.

While AI can be traced to the mid-1950s and machine learning dates back to the late 1970s, the concept of deep learning was popularized by the AlexNet paper published in 2012. Of course, it's not just machine learning that started the clock ticking.

Though cloud computing was initially introduced in 2006, it didn't take off until 2010 or so. The rise of data engineering can also be traced to the same year. The original paper for Apache Spark was published in 2010, and it became foundational for so much of today's distributed data infrastructure.

Additionally, the concept of data science has a widely reported inception date of 2009. That's when Jeff Hammerbacher, DJ Patil and others began getting recognized for leading data science teams and helping define the practice.

If you do the math, those 2009-2012 dates put us within that 12-to-15-year window. And that makes 2024 the cutoff for companies hoping to gain a competitive advantage from AI.
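The arithmetic behind that cutoff fits in a few lines. The foundation years and the 12-to-15-year window are the article's own figures; only the variable names are invented:

```python
# The article's own figures: foundational technologies and the
# 12-to-15-year window of opportunity suggested by futurists' J curves.
foundation_years = {
    "data science teams": 2009,
    "cloud / Spark / data engineering": 2010,
    "AlexNet deep learning paper": 2012,
}

# The most generous reading: the earliest foundation plus the longest
# (15-year) runway gives the last year to secure an advantage.
cutoff = min(foundation_years.values()) + 15  # 2009 + 15 = 2024
```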

If you look at the graph below from Everett Rogers' Diffusion of Innovations, you'll get a sense of how those who wait to put AI into production will miss out on cornering the market. Here the red line shows successive groups adopting new technology, while the purple line shows how market share eventually reaches a saturation level.

Source: Everett Rogers, Diffusion of Innovations
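That saturating S-curve can be sketched with a simple logistic function. The growth rate and midpoint below are illustrative values, not fitted to any survey:

```python
import math

def logistic_adoption(t, k=0.6, t_mid=7.0):
    """Cumulative adoption share t years after a technology emerges:
    slow uptake among pioneers, a steep mainstream ramp around t_mid,
    then saturation (the purple line in Rogers' figure)."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

# Share of the eventual market already captured, year by year,
# across a 15-year window like the one discussed above
shares = [logistic_adoption(t) for t in range(15)]
```

By this toy curve, a company entering near the end of the window arrives after most of the eventual market has already been claimed.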

A 2019 survey conducted by the MIT Sloan Management Review and Boston Consulting Group explicitly shows how the Diffusion of Innovations theory applies to AI. Their research was based on a global survey of more than 3,000 executives, managers, and analysts across various industries.

Once the responses to questions around AI understanding and adoption were analyzed, survey respondents were assigned to one of four distinct categories:

Pioneers (20%): These organizations possess a deep knowledge of AI and incorporate it into their offerings and internal processes. They're the trailblazers.

Investigators (30%): These organizations understand AI but aren't deploying it beyond the pilot stage. They're taking more of a "look before you leap" approach.

Experimenters (18%): These organizations are piloting AI without truly understanding it. Their strategy is "fake it until you make it."

Passives (32%): These organizations have little-to-no understanding of AI and will likely miss out on the opportunity to profit from it.

The 2020 survey, which uses the same questions and methodology, gives even greater insight into how executives embrace AI. 87% believe AI will offer their companies an advantage over others. Just 59% of companies, however, have an AI strategy.

Comparing the MIT and BCG 2020 survey responses to those collected since the survey's inception in 2017 shows a growing number of execs recognize that competitors are using AI. Yet only one in 10 companies uses AI to generate significant financial benefits.

I anticipate this gap between leaders and laggards will continue widening, making this your company's last chance to take action before 2024 (if it hasn't already).

MIT and BCG's 2020 data reveals that companies focused on the initial steps of AI adoption (ensuring data, talent, and a strategy are in place) will have a 21% chance of becoming a market leader. When companies begin to iterate on AI solutions with their organizational users (effectively adopting AI and applying it across multiple use cases) that chance rises to 39%. And those that can orchestrate the macro and micro interactions between humans and machines (sharing knowledge amongst both and smartly structuring those interactions) will have a 73% chance of market leadership.

Building upon MIT and BCG's success predictions, McKinsey & Company has specifically broken down how AI integration impacts revenue in this 2020 chart.

Source: McKinsey & Company Global Survey, 2020

While the ROI for AI integration can be immediate, that's not typically the case. According to MIT and BCG's 2019 data, only two out of three companies that have made some investment in AI (Investigators and Experimenters) report gains within three years. This stat improves to three out of five when companies that have made significant investments in AI (Pioneers) are included.

The 2020 MIT/BCG data builds upon this, claiming companies that use AI to make extensive changes to many business processes are 5x more likely to realize a major financial benefit than those making small or no changes to a few business processes.

So where will you be in 2024? On your way to reaping the rewards of AI, or lamenting that you missed an opportunity for market advantage?

Steve Meier is a co-founder and Head of Growth at AI services firm KUNGFU.AI.

Read more:

Deadline 2024: Why you only have 3 years left to adopt AI - VentureBeat

Posted in Ai | Comments Off on Deadline 2024: Why you only have 3 years left to adopt AI – VentureBeat

AI Weekly: OpenAIs pivot from robotics acknowledges the power of simulation – VentureBeat

Posted: at 3:26 pm


Late last week, OpenAI confirmed it shuttered its robotics division in part due to difficulties in collecting the data necessary to break through technical barriers. After years of research into machines that can learn to perform tasks like solving a Rubik's Cube, company cofounder Wojciech Zaremba said it makes sense for OpenAI to shift its focus to other domains, where training data is more readily available.

Beyond the commercial motivations for eschewing robotics in favor of media synthesis and natural language processing, OpenAI's decision reflects a growing philosophical debate in AI and robotics research. Some experts believe training systems in simulation will be sufficient to build robots that can complete complex tasks, like assembling electronics. Others emphasize the importance of collecting real-world data, which can provide a stronger baseline.

A longstanding challenge in simulations involving real data is that every scene must respond to a robot's movements, even movements that might not have been recorded by the original sensor. Whatever angle or viewpoint isn't captured by a photo or video has to be rendered or simulated using predictive models, which is why simulation has historically relied on computer-generated graphics and physics-based rendering that somewhat crudely represents the world.

But Julian Togelius, an AI and games researcher and associate professor at New York University, notes that robots pose challenges that don't exist within the confines of simulation. Batteries deplete, tires behave differently when warm, and sensors regularly need to be recalibrated. Moreover, robots break and tend to be slow and cost a pretty penny. The Shadow Dexterous Hand, the machine that OpenAI used in its Rubik's Cube experiments, has a starting price in the thousands. And OpenAI had to improve the hand's robustness by reducing its tendon stress.

"Robotics is an admirable endeavor, and I very much respect those who try to tame the mechanical beasts," Togelius wrote in a tweet. "But they're not a reasonable way to do reinforcement learning, or any other episode-hungry type of learning. In my humble opinion, the future belongs to simulations."

Gideon Kowadlo, the cofounder of Cerenaut, an independent research group developing AI to improve decision making, argues that no matter how much data is available in the real world, there's more data in simulation, and that data is ultimately easier to control. Simulators can synthesize different environments and scenarios to test algorithms under rare conditions. Moreover, they can randomize variables to create diverse training sets with varying objects and environment properties.
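A minimal sketch of the randomization idea Kowadlo describes. The parameter names and ranges are invented for illustration, not drawn from any particular simulator:

```python
import random

def randomized_dynamics(base_mass=1.0, base_friction=0.5, spread=0.3):
    """Sample one training episode's physics parameters around nominal
    values. Each episode sees a slightly different 'world', so a policy
    trained across many episodes cannot overfit to any single simulator
    configuration: the core trick behind domain randomization."""
    return {
        "mass": base_mass * random.uniform(1 - spread, 1 + spread),
        "friction": base_friction * random.uniform(1 - spread, 1 + spread),
        "sensor_noise_std": abs(random.gauss(0.0, 0.01)),
    }

# One freshly randomized environment per training episode
episodes = [randomized_dynamics() for _ in range(1000)]
```

In a real pipeline these samples would configure the physics engine before each rollout, as in the mass-and-friction randomization mentioned below in the Google animal-motion work.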

Indeed, Ted Xiao, a scientist at Google's robotics division, says that OpenAI's move away from work with physical machines doesn't have to signal the end of the lab's research along this direction. By applying techniques including reinforcement learning to tasks like language and code understanding, OpenAI might be able to develop more capable systems that can then be applied back to robotics. For example, many robotics labs use humans holding controllers to generate data to train robots. But a general AI system that understands controllers (i.e., video games) and the video feeds from camera-equipped robots might learn to teleoperate quickly.

Recent studies hint at how a simulation-first approach to robotics might work. In 2020, Nvidia and Stanford developed a technique that decomposes vision and control tasks into machine learning models that can be trained separately. Microsoft has created an AI drone navigation system that can reason out the correct actions to take from camera images. Scientists at DeepMind trained a cube-stacking system to learn from observation in a simulated environment. And a team at Google detailed a framework that takes a motion capture clip of an animal and uses reinforcement learning to train a control policy, employing an adaptation technique to randomize the dynamics in the simulation by, for example, varying mass and friction.

In a blog post in 2017, OpenAI researchers wrote that they believe general-purpose robots "can be built by training entirely in simulation, followed by a small amount of self-calibration in the real world." That increasingly appears to be the case.

For AI coverage, send news tips to Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

View original post here:

AI Weekly: OpenAIs pivot from robotics acknowledges the power of simulation - VentureBeat

Posted in Ai | Comments Off on AI Weekly: OpenAIs pivot from robotics acknowledges the power of simulation – VentureBeat