The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: March 5, 2020
I would clone my dog and I'm not ashamed to admit it – The Guardian
Posted: March 5, 2020 at 6:27 pm
You are at level one of crazy dog parent when you throw your pooch a birthday or bark mitzvah. You are at level two when your dog has more clothes than you do. And you are at level 100 when you store your dog's skin samples and spend $50,000 (£39,000) to clone it.
David and Alicia Tschirhart are level 100 pet parents. The Californian couple made headlines recently for cloning Marley, their pet Labrador. Marley was an extremely good boy; he once fought off a rattlesnake, possibly saving a pregnant Alicia's life. When Marley died from cancer, his owners had a hard time letting him go, so they didn't: they used a pet cloning company to create a genetically identical puppy called Ziggy. "They have the same personality, they play the same, they favour the same toys," Alicia told reporters.
I didn't grow up with pets, apart from a short-lived stick insect (regular readers may remember poor Fatima). So there was a time when I would have thought anyone who cloned a dog was barking mad. I would probably have been holier-than-thou about how irresponsible it was to spend a fortune genetically reproducing an animal. I would have wrung my hands over celebrities such as Barbra Streisand, who cloned her dog Samantha and then had the facsimiles pose at the grave of their mother.
That was in the years BC (before canine). Now that I have a dog, I understand completely. If money were no object I would 100% clone Rascal, my tiny rescue mutt. Rascal has never saved me from a rattlesnake and, to be honest, I doubt he would. He would probably just sniff the snake while it killed me. But he has saved me in other ways and I would do anything to extend my time with him. It would be a doggone shame not to.
Arwa Mahdawi is a Guardian columnist
Posted in Cloning
‘Star Wars: The Rise of Skywalker’ Novelization Reveals Rey’s Father is a Failed Palpatine Clone – /FILM
Posted: at 6:27 pm
One of the biggest surprises to come out of Star Wars: The Rise of Skywalker was its controversial revelation that Rey wasn't actually nobody. But less surprising is that the Star Wars: The Rise of Skywalker novelization continues to give us information about the movie that we never asked for. The latest is another whopper about Rey's identity, or rather, her father's identity.
The Star Wars: The Rise of Skywalker novelization reveals that Rey's father is actually a failed Palpatine clone, thus ending the months-long discussion over what woman would actually let a wrinkled, undead Palpatine father her child.
You've probably heard the revelation that the Palpatine we see in Star Wars: The Rise of Skywalker isn't the exact same Palpatine we saw in the prequel and original trilogies. Before his demise in Return of the Jedi, Palpatine transferred his consciousness into a clone body, which led to the decaying form we ultimately meet in The Rise of Skywalker. When Rey meets her grandfather for the first time, she passes by other unfinished clones, which means he had tried this process before. Little does she know, the last clone attempt was her own father.
During the scene when Rey pretends to take part in the Sith Ritual on Exegol to trick Palpatine, she has visions of her grandfather's past. In this vision, the novelization describes (via ScreenRant) how Palpatine thrust his consciousness "into a clone body," but the transfer was imperfect, and the members of the Sith Eternal worked to engineer a new vessel for Palpatine's essence. One of these attempts was labeled "a useless, powerless failure" who was "a not-quite-identical clone." That failed clone would still manage to live on, escape Exegol, meet Jodie Comer, and become Rey's father.
This isn't the first time that cloning has produced offspring in the galaxy far, far away. When Jango Fett agreed to be cloned for the Republic's army on Kamino, he received as part of his payment an unaltered clone who aged normally and became the iconic Boba Fett. And of course, there's the former Supreme Leader Snoke, who also ended up being a clone. No word yet on whether he is anyone's grandfather.
Send In The Clones – Escalon Times
Posted: at 6:27 pm
How is it that February had one extra day this year and yet it seems like I still lost a couple of weeks out of the month?
Honestly, it feels as though I missed out on some things because I felt they were still several days away when in reality, they had already happened.
It was Saturday, Feb. 29, which was full of events for me to cover in Escalon, when I realized that somehow, the Read Across America event to celebrate Dr. Seuss's birthday had been observed the day before, on Friday. I was driving by Dent Elementary School, which had sent me an invitation to be a guest reader, when that fact hit me. Having been a reader in the past, it would have been fun to do so again, and I always, always get to at least a couple of the Escalon schools to cover the event. But not this year.
So what happened? Just too much else going on, and not marking the date on my calendar as soon as I got the invitation to read. It just quietly slid right past me. We also were wrapping up two other outside publication projects on Friday (yes, we routinely work on more than just our three weekly newspapers) so there wasn't much time during the day for anything else.
It left me feeling sad, though, as I look forward to many of these school site events.
Saturday, however, was a day of celebration for Escalon, as the high school observed its 100th anniversary and there was plenty of other good stuff happening as well, including home baseball games and the fun color run first thing in the morning.
Sunday, as I picked up a few groceries, I ran into a friend who said she was surprised she hadn't seen me covering an event that occurred Saturday in Oakdale. As I shared that I was already busy with three events in Escalon but I would have gotten to Oakdale if I could have, she simply asked: "You mean you haven't been cloned yet?"
We both chuckled over that and I told her I would check with my boss to see if that is something we should look into. So far, there has been no cloning; I just keep up a pretty hectic pace.
Also on Sunday, I was able to take a couple of hours out of the day to be entertained by the version of "Toy Stories" put on at the Oakdale High School theater by the Drama Department, a sort of compilation of all four movies in the series. Their production kept those of us in the matinee audience laughing and there were lots of standout performances.
Definitely a whirlwind weekend but packed with plenty of fun and work that didn't necessarily seem like work.
Now we dive into March and personally, even though it would wreak havoc on spring sports schedules, I really hope we get some rainy weather to make up for what was nearly a precipitation-free month of February. There has already been talk of the drought word, so I'm not sure any of us would mind a few rainy days over the next few months if we could avoid that situation.
We are also now entering the zone where things seem to start happening fast. Once the holidays are over and we get into springtime, it is almost like time itself goes into overdrive, at least in our business. One day it's early March and the next, we are getting ready for high school graduation ceremonies and people are putting in time off requests for summer vacation.
Between now and then, though, there will be plenty to both keep us busy and to fill the pages of the paper, from the Oakdale Rodeo to Easter egg hunts to the Chocolate Festival and Relay For Life.
And, I am actually already thinking of getting away for a bit of a summer vacation this year myself.
Well, if that cloning thing comes through. Stay tuned.
Marg Jackson is editor of The Escalon Times, The Oakdale Leader and The Riverbank News. She may be reached at mjackson@oakdaleleader.com or by calling 847-3021.
Cheque cloning in Kochi: How north Indian racket scammed banks of Rs 2.6 cr – THE WEEK
Posted: at 6:27 pm
Even as banks are bolstering security at multiple levels, they are yet to plug significant loopholes, especially in securing deposits and preventing fraud. In a major scam unearthed in Kochi, unidentified fraudsters siphoned off crores of rupees from various nationalised banks using cloned cheques last year.
More than Rs 2.6 crore was siphoned off in five transactions from Punjab National Bank, Central Bank of India, Union Bank of India and Canara Bank. It is suspected that a racket based out of north India could be behind the fraud, and that more banks may have fallen prey to fraud in this manner.
Modus operandi
Preliminary assessments revealed the fraud was committed by printing exact copies of the cheque leaves issued to account holders. Their signatures could also have been forged.
The money was withdrawn from the accounts in bank branches in Uttar Pradesh and Maharashtra by people from these same regions. However, the cheque was given for clearing in Kochi and nearby areas.
When money is withdrawn from an account, the account owner gets a message. However, none of the affected accounts in this case belonged to individuals; rather, they are owned by institutions such as colleges, schools and societies, and thus the messages likely went unnoticed. These institutions usually check their account statements only periodically.
From Faridabad to Kadavanthra
The fraud happened during August-September 2019.
The cheque issued in the name of a company for Rs 43,10,119 from the account of a consumer forum at the Punjab National Bank's Faridabad branch in Haryana was deposited in the drop box of the Central Bank's Kadavanthra branch.
The cheque was passed by the clearing house in Chennai. The money transferred was withdrawn from the account in the following days. This account owner is yet to be traced.
The consumer forum approached the bank after a month, pointing out that no cheque was issued for Rs 43 lakh. The cheque book was also presented, taking the lid off the cheque leaf cloning fraud.
In a similar manner, three cheques of the Canara Bank were used to withdraw Rs 30 lakh, Rs 40 lakh and Rs 40 lakh on various days.
Rs 1.10 crore was siphoned off from the Central Bank. The scam was unearthed after a cheque of Rs 31 lakh from the Union Bank was detected to be fake at Canara Bank's Aluva branch. Then the Canara Bank general manager sent a circular on the fraud to various branches on December 23.
Whodunnit?
It is not known whether any bank staff had colluded with the fraudsters and given details of the money in the accounts and the signatures of the account holders. Nor is it clear whether the banks have apprised the Reserve Bank of India of the fraud. The role of institutions that print cheque books on outsourcing basis is also under the scanner.
(This story was originally published in onmanorama)
Spy vs Spy: cloned phones, break-ins and rogue agents all in a day's work at the State Security central – Daily Maverick
Posted: at 6:27 pm
Minister of State Security Ayanda Dlodlo. (Photo: Moeletsi Mabe / Sunday Times)
So thoroughly have South Africa's security and intelligence services been corrupted and repurposed that, if they were an ailing person in need of psychological diagnosis, one would immediately offer "pathological liar".
And so we must treat the information that the cellphones of State Security Minister Ayanda Dlodlo and her deputy, Zizi Kodwa, as well as those of several other ministry officials, had been cloned with the requisite caution.
So too reports that an undisclosed sum of money, as well as classified documents, had been stolen from the safe of the State Security Offices in Pretoria in January in what is believed to have been an inside job.
That is what unfortunately happens when state institutions are unconstitutionally repurposed, plundered, corrupted and weaponised as political tools in factional battles.
The security needs of the South African Republic play second fiddle to those who have climbed the greasy pole of politics and personal power, and this is especially so in the murky and unaccountable world that passes for South African intelligence.
The phone cloning story first surfaced on 26 February 2020, in Independent Group titles. The incident was later confirmed by department spokesperson Mava Scott, who told News24 that the matter had been reported to the Gauteng SAPS, which had allocated high-profile investigators to look into the matter.
On the surface, this would appear to be perfectly reasonable until one pauses to grasp for logic and to ask the question: Why take it to the SAPS? Surely the State Security Agency has its own counter-intelligence capacity?
Until you realise that its own counter-intelligence capacity might not be on the books at all, and that the Special Operations Unit (SOU), headed by Zuma's spy Thulani Dlomo, and its 186 rogue agents have still not been rounded up by law enforcement agencies.
This is despite evidence to the Zondo Commission of Inquiry into State Capture and the findings of the High-Level Panel Review report on state security which found the SOU was being run as a private, parallel and unconstitutional unit reporting only to Jacob Zuma while paid for by the citizens.
In fact, it was the inspector-general of intelligence himself who revealed, during a conversation with the public protector, that about 186 rogue SSA agents were still out there somewhere, operating merrily, unaccountable to the SSA or anyone else. They had access to safe houses stuffed to the rafters with cash.
So, what do we know about the phone cloning?
The lore so far is that a message appeared to have been sent from Kodwa's mobile phone to other staffers in the department. It is an important detail, as these are ministry and not SSA staff.
Kodwa has insisted he did not send the message, and it was this that led to the discovery that the phones of the ministers and officials had been cloned. Of course, it would shed a tiny shard of light should Deputy Minister Kodwa share the message sent from his phone. It would provide a vital clue as to the intention of the cloners.
When the message was sent, and how the cloning of the ministers' phones, as well as those of other officials, was discovered, would also help to unravel the tight knot of smoke and mirrors.
But perhaps something is being pre-empted here?
Perhaps, in time, some sort of message will surface and when it does Kodwa will be able to respond whether this is/was the message sent from his cloned phone.
The most baffling question is, why hand the matter to the SAPS when the SSA should surely have some sort of relatively reliable counter-intelligence capacity despite the deep rot? Are there no ethical members who can be relied on?
Maybe not.
Experts in the field are in agreement that once suspicions have been raised that a phone or device has been cloned the best and most logical course would be to keep it quiet, set a trap and bust the snooper.
Detective work 101.
One would imagine that the Ministry of State Security and its staff would have sophisticated encryption software on their government-issued phones. Whoever allegedly cloned the phones must themselves have been in possession of sophisticated systems.
Which points to the rogue SOU and Dlomo, with the SOU's immense capacity built up over years.
So, are Dlodlo and Kodwa the targets of a disinformation campaign?
While Dlodlo came out in full-throated support of ANC SG Ace Magashule in 2018, she is known to be flexible.
Dlodlo has been in reported conflict with SSA domestic branch head Mahlodi Muofhe. The SSA acting head, Loyiso Jafta, too is viewed as being a Thuma Mina acolyte.
There are those who are of the opinion that the High-Level Panel Review report did not go far enough. Sure, it exposed the whole shadow world of Zuma's private army of spies who blew billions in taxpayers' money, but the institution itself should have been disbanded.
It has been a long time coming.
We should not only blame Zuma for the mess which continues to place South Africas national security at risk every day.
As Professor Jane Duncan noted in her 2018 book Stopping the Spies: Constructing and Resisting the Surveillance State in South Africa, the rot is not episodic but systemic. The illegal spying on domestic political groupings, as well as anyone else in the republic, has virtually been normalised for more than 20 years.
Duncan notes that the downhill trajectory began around 2003, when the Thabo Mbeki presidency required an expansion of the NIA's mandate, resulting in a directive that included political and economic intelligence.
In the case of political intelligence, the NIA was to focus on "the strengths and the weaknesses of political formations, their constitutions and plans, political figures and their role in governance," said Duncan.
By 2004, the intelligence service had ballooned in size and personnel and accounted for an unsustainable 74% of the total domestic intelligence budget.
In 2005, signs emerged that intelligence operatives were becoming embroiled in the factional battles in the ANC: a problem that was proved to exist by a commission of inquiry which partly blamed the culture of secrecy in the intelligence services as the problem.
These are dangerous times.
A tipping point will soon be reached as the vast, deep and dangerous network of corrupt officials most of whom belong to the ruling party across all government departments are exposed and their pipelines to unlimited illicit funds are slowly being shut off in an attempt to bring South Africa back from the brink.
In the meantime, always remember, where there is smoke, there are mirrors, and often a raging fire. DM
How AI May Prevent The Next Coronavirus Outbreak – Forbes
Posted: at 6:24 pm
AI can be used for the early detection of virus outbreaks that might result in a pandemic. (Photo by Emanuele Cremaschi/Getty Images)
AI detected the coronavirus long before the world's population really knew what it was. On December 31st, a Toronto-based startup called BlueDot identified the outbreak in Wuhan, several hours after the first cases were diagnosed by local authorities. The BlueDot team confirmed the info its system had relayed and informed their clients that very day, nearly a week before Chinese and international health organisations made official announcements.
Thanks to the speed and scale of AI, BlueDot was able to get a head start over everyone else. If nothing else, this reveals that AI will be key in forestalling the next coronavirus-like outbreak.
BlueDot isn't the only startup harnessing AI and machine learning to combat the spread of contagious viruses. One Israel-based medtech company, Nanox, has developed a mobile digital X-ray system that uses AI cloud-based software to diagnose infections and help prevent epidemic outbreaks. Dubbed the Nanox System, it incorporates a vast image database, radiologist matching, diagnostic reviews and annotations, and also assistive artificial intelligence systems, which combine all of the above to arrive at an early diagnosis.
Nanox is currently building on this technology to develop a new standing X-ray machine that will supply tomographic images of the lungs. The company plans to market the machine so that it can be installed in public places, such as airports, train stations, seaports, or anywhere else where large groups of people rub shoulders.
Given that the new system, as well as the existing Nanox System, are lower cost mobile imaging devices, it's unsurprising to hear that Nanox has attracted investment from funds looking to capitalise on AI's potential for thwarting epidemics. This month, the company announced a $26 million strategic investment from Foxconn. It also signed an agreement this week to supply 1,000 of its Nanox Systems to medical imaging services across Australia, New Zealand and Norway. Coronavirus be warned.
Its CEO and co-founder, Ran Poliakine, explains that such deals are a testament to how the future of epidemic prevention lies with AI-based diagnostic tools. "Nanox has achieved a technological breakthrough by digitizing traditional X-rays, and now we are ready to take a giant leap forward in making it possible to provide one scan per person, per year, for preventative measures," he tells me.
Importantly, the key feature of AI in terms of preventing epidemics is its speed and scale. As Poliakine explains, "AI can detect conditions instantly which makes it a great source of power when trying to prevent epidemics. If we talk about 1,000 systems scanning 60 people a day on average, this translates to 60,000 scans that need to be processed daily by the professional teams."
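Poliakine's throughput figure is easy to reproduce, and a rough comparison against human reading capacity shows why he argues no human force could keep up. A short Python sketch; the per-radiologist reading rate is an invented assumption for illustration, not a figure from the article:

```python
# Daily scan volume from the deployed fleet, per Poliakine's figures
systems = 1_000
scans_per_system = 60                      # average scans per system per day
daily_scans = systems * scans_per_system   # 60,000 scans per day

# Hypothetical human baseline: one scan read every 3 minutes over an
# 8-hour shift gives 160 scans per radiologist per day
reads_per_radiologist = (8 * 60) // 3
readers_needed = daily_scans / reads_per_radiologist

print(daily_scans, readers_needed)         # 60000 scans, 375.0 full-time readers
```

Even under this generous assumption, the fleet would demand hundreds of full-time radiologists every day, which is the gap automated triage is meant to fill.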
Poliakine also affirms that no human force available today can support this volume with the necessary speed and efficiency. Time and again, this is a point made forcefully by other individuals and companies working in this burgeoning sector.
"When it comes to detecting outbreaks, machines can be trained to process vast amounts of data in the same way that a human expert would," explains Dr Kamran Khan, the founder and CEO of BlueDot, as well as a professor at the University of Toronto. "But a machine can do this around the clock, tirelessly, and with incredible speed, making the process vastly more scalable, timely, and efficient. This complements human intelligence to interpret the data, assess its relevance, and consider how best to apply it with decision-making."
Basically, AI is set to become a giant firewall against infectious diseases and pandemics. And it won't only be because of AI-assisted screening and diagnostic techniques. Because as Sergey Young, a longevity expert and founder of the Longevity Vision Fund, tells me, artificial intelligence will also be pivotal in identifying potential vaccines and treatments against the next coronavirus, as well as COVID-19 itself.
"AI has the capacity to quickly search enormous databases for an existing drug that can fight coronavirus or develop a new one in literally months," he says. "For example, Longevity Vision Fund's portfolio company Insilico Medicine, which specializes in AI in the area of drug discovery and development, used its AI-based system to identify thousands of new molecules that could serve as potential medications for coronavirus in just four days. The speed and scalability of AI is essential to fast-tracking drug trials and the development of vaccines."
This kind of treatment-discovery will prove vitally important in the future. And in conjunction with screening, it suggests that artificial intelligence will become one of the primary ingredients in ensuring that another coronavirus won't have an outsized impact on the global economy. Already, the COVID-19 coronavirus is likely to cut global GDP growth by $1.1 trillion this year, in addition to having already wiped around $5 trillion off the value of global stock markets. Clearly, avoiding such financial destruction in the future would be more than welcome, and artificial intelligence will prove indispensable in this respect. Especially as the scale of potential pandemics increases with an increasingly populated and globalised world.
Sergey Young also explains that AI could play a substantial role in the area of impact management and treatment, at least if we accept their increasing encroachment into society. He notes that, in China, robots are being used in hospitals to alleviate the stresses currently being piled on medical staff, while ambulances in the city of Hangzhou are assisted by navigational AI to help them reach patients faster. Robots have even been dispatched to a public plaza in Guangzhou in order to warn passersby who aren't wearing face-masks. Even more dystopian, China is also allegedly using drones to ensure residents are staying at home and reducing the risk of the coronavirus spreading further.
Even if we don't reach that strange point in human history where AI and robots police our behaviour during possible health crises, artificial intelligence will still become massively important in detecting outbreaks before they spread and in identifying possible treatments. Companies such as BlueDot, Nanox, and Insilico Medicine will prove increasingly essential in warding off future coronavirus-style pandemics, and with it they'll provide one very strong example of AI being a force for good.
Posted in Ai
How AI and Neuroscience Can Help Each Other Progress? – Analytics Insight
Posted: at 6:24 pm
Artificial Intelligence has progressed immensely in the past few years. From being just a fiction concept to penetrating the regular lives of people, AI has brought transformation in several ways. Such advancements are an output of various factors that include the application of new statistical approaches and enhanced computing power. However, a 2017 report by DeepMind, a Perspective in the journal Neuron, argues that people often discount the contribution and use of ideas from experimental and theoretical neuroscience.
The DeepMind report's researchers believe that drawing inspiration from neuroscience in AI research is important for two reasons. First, neuroscience can help validate AI techniques that already exist. They said, "Put simply, if we discover one of our artificial algorithms mimics a function within the brain, it suggests our approach may be on the right track." Second, neuroscience can provide a rich source of inspiration for new types of algorithms and architectures to employ when building artificial brains. Traditional approaches to AI have historically been dominated by logic-based methods and theoretical mathematical models.
Moreover, in a recent blog post, DeepMind suggests that the human brain and AI learning methods are closely linked when it comes to learning through reward.
Computer scientists have developed algorithms for reinforcement learning in artificial systems. These algorithms enable AI systems to learn complex strategies without external instruction, guided instead by reward predictions.
As noted in the post, a recent development in computer science which yields significant improvements in performance on reinforcement learning problems may provide a deep, parsimonious explanation for several previously unexplained features of reward learning in the brain, and opens up new avenues of research into the brain's dopamine system, with potential implications for learning and motivation disorders.
DeepMind found that dopamine neurons in the brain were each tuned to different levels of pessimism or optimism. If they were a choir, they wouldn't all be singing the same note, but harmonizing, each with a consistent vocal register, like bass and soprano singers. In artificial reinforcement learning systems, this diverse tuning creates a richer training signal that greatly speeds learning in neural networks, and researchers speculate that the brain might use it for the same reason.
The existence of distributional reinforcement learning in the brain has interesting implications both for AI and neuroscience. Firstly, this discovery validates distributional reinforcement learning, giving researchers increased confidence that AI research is on the right track, since this algorithm is already being used in the most intelligent entity they are aware of: the brain.
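The diverse-tuning idea can be sketched in a few lines of Python. This is a toy illustration of distributional learning with asymmetrically scaled prediction errors, not DeepMind's implementation; the reward distribution, learning rate, and tau values are invented for the example:

```python
import random

# A population of value predictors, each with its own degree of optimism (tau).
# Optimistic units scale up positive prediction errors; pessimistic units scale
# up negative ones. Their estimates settle at different points of the reward
# distribution, so together they capture its shape, not just its mean.

taus = [0.1, 0.25, 0.5, 0.75, 0.9]   # pessimistic ... optimistic
values = [0.0] * len(taus)

def update(values, reward, lr=0.05):
    for i, tau in enumerate(taus):
        delta = reward - values[i]             # prediction error
        scale = tau if delta > 0 else 1 - tau  # asymmetric scaling
        values[i] += lr * scale * delta

random.seed(0)
for _ in range(20000):
    # a bimodal reward source: the mean alone would hide the two outcomes
    reward = random.choice([1.0, 10.0])
    update(values, reward)

# Pessimistic units settle near the low outcome, optimistic units near the
# high one, and the tau = 0.5 unit near the mean of 5.5.
print([round(v, 2) for v in values])
```

Because each predictor converges to a different point, reading off the whole population recovers information about the spread of rewards that a single averaged estimate would throw away, which is the "richer training signal" described above.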
Therefore, a shared framework for intelligence across artificial intelligence and neuroscience will allow scientists to build smarter machines and enable them to understand humankind better. This collaborative drive to propel both fields could expand human cognitive capabilities while bridging the gap between humans and machines.
Smriti is a Content Analyst at Analytics Insight. She writes Tech/Business articles for Analytics Insight. Her creative work can be confirmed @analyticsinsight.net. She adores crushing over books, crafts, creative works and people, movies and music from eternity!!
The rest is here:
How AI and Neuroscience Can Help Each Other Progress? - Analytics Insight
Creating a Curious, Ethical, and Diverse AI Workforce – War on the Rocks
Posted: at 6:24 pm
Does the law of war apply to artificially intelligent systems? The U.S. Department of Defense is taking this question seriously: in February 2020 it adopted ethical principles for artificial intelligence (AI) based on the set of AI ethical guidelines the Defense Innovation Board proposed last year. However, just as defense organizations must abide by the law of war and other norms and values, individuals are responsible for the systems they create and use.
Ensuring that AI systems are as committed as we are to responsible and lawful behavior requires changes to engineering practices. Considerations of ethical, moral, and legal implications are not new to defense organizations, but they are starting to become more common in AI engineering teams. AI systems are revolutionizing many commercial products and services, and they are applicable to many military uses, from institutional processes and logistics to systems that inform warfighters in the field. As AI becomes ubiquitous, changes are necessary to integrate ethics into AI development now, before it is too late.
The United States needs a curious, ethical AI workforce working collaboratively to make trustworthy AI systems. Members of AI development teams must have deep discussions about the implications of their work for the warfighters who will use these systems. This work does not come easily. To develop AI systems effectively and ethically, defense organizations should foster an ethical, inclusive work environment and hire a diverse workforce. This workforce should include curiosity experts (people who focus on human needs and behaviors), who are more likely to imagine the potential unwanted and unintended consequences associated with a system's use and misuse, and to ask tough questions about those consequences.
Create an Ethical, Inclusive Environment
People with similar concepts of the world and a similar education are more likely to miss the same issues because of their shared bias. The data used by AI systems are similarly biased, and the people collecting the data may not be aware of how their bias is conveyed through the data they create. An organization's bias will be pervasive in the data that organization provides, and the AI systems developed with that data will perpetuate the bias.
Bias can be mitigated by ethical workforces that value diverse human intelligence and the wide set of possible life experiences. Diversity doesn't just mean making sure that there is a mix of genders on the project team, or that people look different, though those attributes are important. A project team should span a wide set of life experiences, disability statuses, social statuses, and experiences of being "the other." Diversity also means including a mix of people in uniform, civilians, academic partners, and contractors, along with individuals who have diverse life experiences. This diversity does not mean lowering the bar of experience or talent, but rather extending it. To be successful, all of these individuals need to be engaged as full members of the project team in an inclusive environment.
Individuals coming from different backgrounds will be more capable of imagining a broad set of uses and, more importantly, misuses of these systems. Assembling a diverse workforce that brings talented, experienced people together will reinforce technology ethics. Imbuing the workforce with curiosity, empathy, and understanding for the warfighters who use and are affected by the systems will further support the work.
Diverse and inclusive leadership is key to an organization's success. When an organization's leadership isn't diverse, the organization is less likely to attract and, more importantly, retain talent, chiefly because talented individuals may assume that the organization is not inclusive or that it holds no future for them. If leadership is lacking in diversity, an organization can promote someone early or hire from the outside if necessary.
Adopting a set of technology ethics is a first step toward supporting project teams in making better, more confident decisions that are ethical. Technology ethics are ethics designed specifically for the development of software and emerging technologies. They help align diverse project teams and assist them in setting appropriate norms for AI systems. Much as physicians adhere to a version of the American Medical Association's Code of Medical Ethics, technology ethics help guide a project team working on AI systems that have the potential for harm (most AI systems do). These project teams need to have early, difficult conversations about how they will manage a variety of situations.
A shared set of technology ethics serves as a central point to guide decision-making. We are all unique, yet many of us have shared knowledge and experiences. These are what naturally draw people together, making it feel like a bigger challenge to work with people who have completely different experiences. However, the experience of working with people who are significantly different builds the capacity for innovation and creative thinking. Using ethics as a bridge between differences strengthens the team by creating shared knowledge and common ground. Technology ethics must be woven into the work at a very early stage, and the AI workforce must continue to advocate technology ethics as the AI system matures. Human involvement (a human in the loop) is required throughout the life cycle of AI systems: an AI system cannot simply be turned on and left to run. Technology ethics should be considered throughout the entire life cycle.
Without technology ethics, it is harder for project teams to align, and important discussions may be inadvertently skipped. Technology ethics bring into focus the obligation for the project team to take its work and its implications seriously, and they can also empower individuals to ask tough questions about the unwanted and unintended consequences they imagine for a system's use and misuse. By aligning on a set of technology ethics, the development team can define clear directives with regard to system functionality.
Identifying a set of technology ethics is an intimidating task and one that should be approached carefully. Some project teams will want to start from existing guidance, such as the Association for Computing Machinery's Code of Ethics and Professional Conduct or the Montreal Declaration for a Responsible Development of Artificial Intelligence, while organizations like IBM and Microsoft are developing their own. The Defense Department's newly adopted five AI ethics principles are: responsible, equitable, traceable, reliable, and governable. The original Defense Innovation Board recommendation is described in detail in its supporting document.
In the past, ethics have only been referenced in, and not directly part of, software development efforts. The knowledge that AI systems can cause much broader harm more quickly than software technologies could in the past raises new ethical questions that need to be addressed by the AI workforce. A skilled and diverse workforce, bursting with curiosity and engaged with the AI system, will result in AI systems that are accountable to humans, de-risked, respectful, secure, honest, and usable.
Value Curiosity
AI systems will be created and used by a wide range of individuals, and misuse will come from potentially unexpected sources: individuals and organizations with completely different experiences and with potentially unlimited resources. Adversaries are already using techniques that are very difficult to anticipate. The adoption of technology ethics isn't enough to make AI systems safe. Making sure that the teams building these systems are able to imagine and then mitigate issues is profoundly important.
The term "curiosity experts" is shorthand for people with a broad range of skills and job titles, including cognitive psychologists, digital anthropologists, human-machine and human-computer interaction professionals, and user experience researchers and designers. Curiosity experts' core responsibility is to be curious and speculative within the ethical, inclusive environment an organization has created. Curiosity experts will partner with defense experts, and may already be part of your team, doing research and helping to make interactions more usable.
Curiosity experts help connect the human needs, the initial problem to be solved, and the solution to an engineering problem. Working with defense experts (and ideally the warfighters themselves), they enable a project team to uncover potential issues before they arise by focusing on understanding how the system will be used, the situation and constraints for using it, and the abilities of the people who will use it. Curiosity experts can apply a variety of proven qualitative and quantitative methods, and once they have a solid understanding, they share that information with the project team in easy-to-consume formats such as stories. The research they conduct is necessary to understand the needs being addressed, so that the team builds the right thing. This may sound familiar: wargaming uses very similar tactics, and storytelling is an important component.
It's important for curiosity experts to lead (and then teach others to lead) co-design activities such as abusability testing and other speculative exercises, in which the project team imagines the misuse of the AI system it is considering building. AI systems need to be interpretable and usable by warfighters, and this has been recognized as a priority by the Defense Advanced Research Projects Agency, which is working on the Explainable AI program. Curiosity experts with interaction design experience can contribute materially to this effort: they help keep the people using these systems in mind, and call out the AI workforce when necessary. When the project team asks, "Why don't they get it?" curiosity experts can nudge the team to pivot instead to "What can we do better to meet the warfighters' needs?" As individuals on the team become more comfortable with this mindset, they become curiosity experts at heart, even when their primary responsibility is something else.
Hire a Diverse Workforce
Building diverse project teams helps to increase each individual's creativity and effectiveness. Diversity in this sense relates to skill sets, education (with regard to school and program), and problem-framing approach. Coming together with different ways of looking at the world will help teams and organizations solve challenging problems faster.
Building a diverse project team to advance this ethical framework will take time and effort. Organizations that represent minority groups, such as the National Society of Black Engineers, and technical conferences that embrace diversity, such as the Grace Hopper Celebration, can be a great resource. Prospective candidates should ask hard questions about the organization, including about its ethics, diversity, and inclusion; such questions are indicative of the curious individuals you want on your team. Once you recruit more diverse individuals, you can set progress goals. For example, Atlassian introduced a new approach to diversity reporting in 2016 that focused on team dynamics and shared how people from underrepresented backgrounds were spread across the company's teams.
It is common in technology, and in AI specifically, to value specific degrees and learning styles. Some employers have staffed their organizations with class after class of graduates from particular degree programs at particular universities. These organizations benefit from the ability of these graduates to easily bond and rely on shared knowledge. However, those same benefits can become weaknesses for the project team. The peril of creating high-risk products and services with a homogeneous team is that its members may all miss the same critical piece of information, have the same gaps in technical knowledge, assume the same things about the process, or be unable to think differently enough to imagine unintended consequences. They won't even realize their mistake until it is too late.
In many organizations this risk is disguised by adding one or two individuals to a group who differ significantly from the majority in an aspect such as gender, race, or culture. Unfortunately, their presence isn't enough to significantly reduce the risk of groupthink, and if the group does not contain enough socially distinct individuals, their different experience will be dismissed. Eventually, because of these factors, retention becomes a significant concern. Project teams need to be built with diversity from the start, or be quickly adjusted.
A diverse team of thoughtful and talented machine learning experts, programmers, and curiosity experts (among others) is not yet complete. The AI workforce needs direct access to experts in the military or defense industry who are familiar with the situations and organizations the AI system is being designed for, and who can spot assumptions and issues early. These individuals, be they in uniform, civilians, or consultants, may also be able to act as liaisons to the warfighters so that more direct contact can be made with those closest to the work.
Rethinking the Workforce
Encouraging project teams to be curious and speculative in imagining scenarios at the edges of AI will help prepare for actual system use. As the AI workforce considers how to manage a variety of use cases, framing conversations with technology ethics will provoke serious and contentious discussions. These conversations are invaluable for aligning the team before it faces a difficult situation. A clear understanding of the expectations in specific situations helps the team create mitigation plans for how it will respond, both during the creation of the AI system and once it is in production.
The AI sector needs to think about the workforce in different ways. As Prof. Hannah Fry suggests in The Guardian, diversity and inclusion in the workforce are just as important as a technology ethics pledge (if not more so) for reducing unwanted bias and unintended consequences. Creating an ethical, inclusive environment, valuing curiosity, and hiring a diverse workforce are necessary steps toward ethical AI. Clear communication and alignment on ethics are the best way to bring disparate groups of people into a shared understanding and to create AI systems that are accountable to humans, de-risked, respectful, secure, honest, and usable.
Over the next several years, my organization, Carnegie Mellon University's Software Engineering Institute, is advancing a professional discipline of AI engineering to help the defense and national security communities develop, deploy, operate, and evolve game-changing mission capabilities that leverage rapidly evolving artificial intelligence and machine learning technologies. At the core of this effort is supporting the AI workforce in designing trustworthy AI systems by successfully integrating ethics into a diverse workforce.
Carol Smith (@carologic) is a senior research scientist in human-machine interaction at Carnegie Mellon University's Software Engineering Institute and an adjunct instructor at CMU's Human-Computer Interaction Institute. She has been conducting user experience research to improve the human experience across industries for 19 years and has worked to improve AI systems since 2015. Carol is recognized globally as a leader in user experience; she has presented over 140 talks and workshops in over 40 cities around the world, served two terms on the User Experience Professionals Association international board, and is currently an editor for the Journal of Usability Studies and for the Special Issue on Human-Machine Teaming of the upcoming Association for Computing Machinery journal Digital Threats: Research and Practice. She holds an M.S. in Human-Computer Interaction from DePaul University.
This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center.
The views, opinions, and/or findings contained in this material are those of the author(s) and should not be construed as an official government position, policy, or decision, unless designated by other documentation.
Image: U.S. Air Force (Photo by J.M. Eddins Jr.)
See the article here:
Creating a Curious, Ethical, and Diverse AI Workforce - War on the Rocks
Is Artificial Intelligence (AI) A Threat To Humans? – Forbes
Posted: at 6:24 pm
Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever happen to humankind? The question has been with us since the 1940s, when computer scientist Alan Turing began to believe that there would come a time when machines could have an unlimited impact on humanity through a process that mimicked evolution.
When Oxford University Professor Nick Bostrom's New York Times best-seller, Superintelligence: Paths, Dangers, Strategies, was first published in 2014, it struck a nerve at the heart of this debate with its focus on all the things that could go wrong. However, in my recent conversation with Bostrom, he also acknowledged there's an enormous upside to artificial intelligence technology.
Since Bostrom's book was written in 2014, progress in artificial intelligence, machine learning, and deep learning has been very rapid. Artificial intelligence is now in the public discourse, and most governments have some sort of strategy or road map to address AI. In his book, Bostrom likened AI to children playing with a bomb that could go off at any time.
Bostrom explained, "There's a mismatch between our level of maturity in terms of our wisdom, our ability to cooperate as a species on the one hand and on the other hand our instrumental ability to use technology to make big changes in the world. It seems like we've grown stronger faster than we've grown wiser."
There are all kinds of exciting AI tools and applications beginning to affect the economy in many ways. These shouldn't be overshadowed by the hype around a hypothetical future point where AIs acquire the same general learning and planning abilities that humans have, or become superintelligent machines. These are two different contexts that each require attention.
Today, the more imminent threat isn't from a superintelligence, but from the useful, yet potentially dangerous, applications AI is used for presently.
How is AI dangerous?
If we focus on what's possible today with AI, here are some of the potential negative impacts of artificial intelligence that we should consider and plan for:
Change the jobs humans do/job automation: AI will change the workplace and the jobs that humans do. Some jobs will be lost to AI technology, so humans will need to embrace the change and find new activities that provide the social and mental benefits their jobs once provided.
Political, legal, and social ramifications: As Bostrom advises, rather than avoid pursuing AI innovation, "Our focus should be on putting ourselves in the best possible position so that when all the pieces fall into place, we've done our homework. We've developed scalable AI control methods, we've thought hard about the ethics and the governments, etc. And then proceed further and then hopefully have an extremely good outcome from that." If our governments and business institutions don't spend time now formulating rules, regulations, and responsibilities, there could be significant negative ramifications as AI continues to mature.
AI-enabled terrorism: Artificial intelligence will change the way conflicts are fought, through autonomous drones, robotic swarms, and remote and nanorobot attacks. In addition to being concerned with a nuclear arms race, we'll need to monitor the global autonomous weapons race.
Social manipulation and AI bias: So far, AI is still at risk of being biased by the humans who build it. If there is bias in the data sets the AI is trained on, that bias will affect AI behavior. In the wrong hands, AI can be used, as it was in the 2016 U.S. presidential election, for social manipulation and to amplify misinformation.
AI surveillance: AI's face recognition capabilities give us conveniences such as unlocking phones and entering buildings without keys, but they have also launched what many civil liberties groups consider alarming surveillance of the public. In China and other countries, police and governments are invading public privacy by using face recognition technology. Bostrom explains that AI's ability to monitor global information systems, drawing on surveillance data, cameras, and mined social network communication, has great potential for good and for bad.
Deepfakes: AI technology makes it very easy to create "fake" videos of real people. These can be used without an individual's permission to spread fake news, create pornography in the likeness of a person who never actually appeared in it, and more, damaging not only an individual's reputation but also their livelihood. The technology is getting so good that the chance people will be duped by it is high.
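The bias point in the list above can be made concrete: a model inherits the skew of its training labels, so a simple pre-training audit of per-group label rates can surface the problem early. A minimal sketch (the dataset, group names, and function here are hypothetical, invented purely for illustration):

```python
from collections import Counter

# Minimal sketch (hypothetical data): compare how often each group in a
# labeled dataset receives the positive label, since a model trained on
# skewed labels will tend to reproduce that skew in its decisions.

def positive_rate_by_group(rows):
    """rows: iterable of (group, label) pairs with label 0 or 1."""
    counts, positives = Counter(), Counter()
    for group, label in rows:
        counts[group] += 1
        positives[group] += label
    return {g: positives[g] / counts[g] for g in counts}

data = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
rates = positive_rate_by_group(data)
print(rates)  # group A is labeled positive at twice the rate of group B
```

A large gap between groups does not by itself prove unfairness, but it flags exactly the kind of data-set skew the article warns will be perpetuated by the trained system.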
As Nick Bostrom explained, "The biggest threat is the longer-term problem: introducing something radical that's superintelligent and failing to align it with human values and intentions. This is a big technical problem. We'd succeed at solving the capability problem before we succeed at solving the safety and alignment problem."
Today, Bostrom describes himself as a "frightful optimist" who is very excited about what AI can do if we get it right. He said, "The near-term effects are just overwhelmingly positive. The longer-term effect is more of an open question and is very hard to predict. If we do our homework, and the more we get our act together as a world and a species in whatever time we have available, the better we are prepared for this, the better the odds for a favorable outcome. In that case, it could be extremely favorable."
For more on AI and other technology trends, see Bernard Marr's new book, Tech Trends in Practice: The 25 Technologies That Are Driving the 4th Industrial Revolution, which is available to pre-order now.
See the article here:
Is Artificial Intelligence (AI) A Threat To Humans? - Forbes
The Pentagon’s AI Shop Takes A Venture Capital Approach to Funding Tech – Defense One
Posted: at 6:24 pm
The Joint Artificial Intelligence Center will take a "Series A, B" approach to building tech for customers, with product managers and mission teams.
By Patrick Tucker
Military leaders who long to copy the way Silicon Valley funds projects should know: the Valley isn't the hit machine people think it is, says Nand Mulchandani, chief technical officer of the Pentagon's Joint Artificial Intelligence Center. The key is to follow the right venture capital model.
Mulchandani, a veteran of several successful startups, aims to ensure that JAIC's investments in AI software and tools actually work out. So he is bringing a very specific venture capital approach to the Pentagon.
Here's the plan: when a DoD agency or military branch asks JAIC for help with some mission or activity, the Center will assign a "mission team" of, essentially, customer representatives to figure out what agency data might be relevant to the problem.
Next, the JAIC will assign a product manager: not DoD's customary program manager, but a role imported from the tech industry.
He or she handles the actual building of the product, not the administrative logistics of running a program. "The product manager will gather customer needs, make those into product features, work with the program manager, ask, 'What does the product do? How is it priced?'" Mulchandani told Defense One in a phone conversation on Thursday.
The mission team and product manager will take a small part of the agency's data to the software vendors or programs hired to solve the problem. These vendors will need to prove their solution works before scaling up to take on all available data.
"We're going to have a Series A, a seed amount of money. You [the vendor] get half a million bucks to curate the data, which tends to be the problem. Do the problem x in a very tiny way, taking sample data, seeing if an algorithm applies to it, and then scale it," Mulchandani said on Wednesday at an event hosted by the Intelligence and National Security Alliance, or INSA.
"In the venture capital industry, you take a large project, identify core risk factors, like team risk, customer risk, etc. You fund enough to take care of these risks and see if you can overcome the risks through a prototype or simulation, before you try to scale," he added later.
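The staged, venture-style gating Mulchandani describes can be sketched in a few lines. Everything here (the function names, the sample size, the 0.8 threshold, and the toy vendor model) is illustrative and not JAIC's actual process: the point is simply that a vendor must clear a measurable bar on a small curated slice of data before any money goes toward scaling.

```python
# Minimal sketch of a seed-stage gate: run the prototype on a small
# curated sample of the agency's data and only fund scaling if the
# result clears a pre-agreed threshold.

def staged_pilot(train_and_score, full_data, sample_size=500, threshold=0.8):
    """Run the seed-stage experiment; return whether to fund scaling."""
    sample = full_data[:sample_size]   # the small curated slice of data
    score = train_and_score(sample)    # prototype's result on that slice
    return score >= threshold          # gate: scale only if it works

# Toy vendor model whose "score" grows with the amount of data it sees.
fake_vendor = lambda data: min(1.0, len(data) / 400)
print(staged_pilot(fake_vendor, list(range(10000))))  # → True
```

Shrinking the sample below what the toy vendor needs (for example, `sample_size=100`) makes the gate return `False`, which is the "fail fast, before scaling" behavior the venture model is meant to enforce.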
The customer must also plan to turn the product into a program of record or give it some other life outside of the JAIC.
That's very different from the way the Defense Department pays for tech today, he said. "The unit of currency in the DoD seems to be, 'Well, this was a great idea; let's stick a couple million bucks on it, see what happens.' We're not doing it that way anymore," he said on Wednesday.
The JAIC is working with the General Services Administration's Centers of Excellence to create product manager roles in DoD and to figure out how to scale small solutions up. Recently, members of the JAIC and the Centers of Excellence participated in a series of human-centered design workshops to determine essential roles and responsibilities for managing data assets across the areas in which the JAIC will be developing products, such as cybersecurity, healthcare, predictive maintenance, and business automation, according to the statement.
Mulchandani urges the Pentagon not to make a fetish of Silicon Valley. Without the right business and funding processes, many venture-backed startups fail just as badly as poorly conceived government projects. You just don't hear about them.
"When you end up in a situation where there's too much capital chasing too few good ideas that are real, you end up in a situation where you are funding a lot of junk. What ends up happening [in Silicon Valley] is many of those companies just fail," he said Wednesday. "The problem in DoD is similar. How do you apply discipline up front, on a venture model, to fund the good stuff as opposed to funding a lot of junk and then seeing two or three products that become successful?"
Read more:
The Pentagon's AI Shop Takes A Venture Capital Approach to Funding Tech - Defense One