The Prometheus League
Breaking News and Updates
Monthly Archives: September 2021
Behind the News: More than just a constitution – Fiji Times
Posted: September 12, 2021 at 9:30 am
This week, President Jioji Konrote praised our eight-year-old Constitution, saying it was one of the most progressive in the world.
Speaking on Constitution Day, he added that it enshrined political and civil rights, among them the right to vote and freedom of speech.
Such a public proclamation was encouraging.
It should be.
But having a written Constitution with provisions that protect, promote and support human rights is one thing.
Enjoying them in real life is another issue altogether.
On Tuesday, the Constitution Day holiday was supposed to, apart from allowing us a much-needed rest, remind us of the importance of this document in our everyday lives.
The history of constitutions shows they were not enacted for politicians, political leaders and rulers.
Neither were they created to protect those in government.
Instead, they were developed by citizens who were concerned about the power their leaders had and how this power had been used arbitrarily against them.
These same concerns continue to disturb us, all around the world, and in Fiji in particular.
Having a written constitution is not everything in a democracy.
There are other things that need to be considered and in place before it can work.
Our constitution is important.
Yes, because it is like a rule book that tells us how the game of democracy is played.
It not only gives those who lead us the authority to make decisions on our behalf, but it also shows how those who exercise power over us may be held accountable.
It sets out the government's powers and demarcates where those powers end by guaranteeing individuals' rights and freedoms, which ensure the protection and promotion of human liberty, equality and dignity.
A truly democratic constitution is one that is agreed to by the people and not imposed.
This is because, when citizens participate in developing their own constitution and give it their consent, it becomes homemade, just like the guava jam or babakau that mums make for the family's enjoyment.
This means citizens will own it and will protect it.
They will never remove it because they know it belongs to them.
Furthermore, a people's constitution is legitimate because it represents the aspirations of its people and contains all the rules that dictate how they want to be ruled.
That is why a country whose constitution is forced on the people and not unanimously accepted tends to be opposed and may face regular disruptions.
Because of the constitution, citizens who may not agree with their rulers still accept their legitimacy and obey laws, because they trust that the democratic system and processes the constitution allows will choose another government, with their consent, in another election.
Recently, we have had several cases where opposition political figures were questioned by police, not for making hate speech, but for simply expressing an opinion.
We have also had cases where journalists were attacked, not for making a libellous accusation, but for simply asking a question.
These incidents have happened even while we have one of the most progressive constitutions in place.
They were allowed to happen because some people did not tolerate different views expressed by others.
They failed to grasp the concepts of plurality, dialogue and inclusiveness that a democratic society demands.
This indicates that our commitment to freedom of speech and expression has been endangered, and we may be at a point where the values behind these freedoms are being challenged.
Let's be real.
People can be held liable for types of speech that use erroneous statements to harm others' reputations, or for rhetoric that instigates disorder.
This is because the government has the authority to deter expressions that connote and encourage violence and danger.
It has a duty to prohibit hateful speech.
But in doing so, authorities must ensure that the ideals of free speech, assembly and expression are not unnecessarily and conveniently suppressed.
They should not overstep the line and abuse their authority by silencing peaceful and law-abiding dissent or punishing citizens for freely expressing themselves.
Generally, all over the world, participatory rights and freedoms pertaining to expression, protest, speech and assembly have been curtailed by curfews and strict safety protocols put in place to combat COVID-19.
As a result, we may come across laws that have been introduced without proper consultation or bulldozed through the corridors of Parliament without undergoing the necessary legislative oversight.
We may find new policies that stop us from effectively holding authorities accountable or from making a peaceful protest.
Freedom of speech and expression, as well as a host of other rights, are protected in our 2013 Constitution, under Chapter Two, the Bill of Rights.
On paper this seems admirable.
We know we have the right to share information and say what we think, as long as this is done responsibly.
We know we can demand services from our government.
We know we have the right to agree or disagree with those in power, or others for that matter, and to express opinions peacefully.
However, whether we are able to exercise these fundamental rights without fear or favour, which is a critical aspect of living in an open and fair society, is something we must ask ourselves.
If we cannot genuinely access and enjoy these interconnected rights, or are being restrained despite assurances through our constitution, then something doesn't seem right.
This is exactly why the United States Country Reports on Human Rights Practices continues to highlight matters in Fiji every year.
In its 2020 report, for instance, it noted that laws on media prohibited irresponsible reporting and provided for government censorship.
As a result, journalists practised self-censorship on sensitive political or communal topics.
The report also raised concerns that the opposition and other critics had accused the government of using state power to silence critics.
People are discouraged from speaking their minds, even though the country's constitution endorses the value of free speech.
The human rights movement Amnesty International says that when governments tolerate unfavourable views or critical voices, this is often a good indication of how they will treat human rights generally.
Amnesty International supports people who speak out peacefully for themselves and for others, whether a journalist reporting on violence by security forces, a trade unionist exposing poor working conditions or an indigenous leader defending their land rights.
For us Fijians, we must not stop at simply having a constitution.
We must not just declare we have one of the best in the world.
We must live it and it must show!
We must go a step further and demand that its provisions allow for checks and balances and limit the power of those who lead us.
We must demand that human rights are institutionalised, internalised and mainstreamed so that we build a human rights culture that our children and their children's children can embrace and enjoy.
It is up to us to preserve, protect and defend human rights.
It is more than just having a constitution.
Before I leave you, please remember all the COVID-19 safety and hygiene protocols: wear your mask when you leave home, wash your hands regularly with soap or an alcohol-based hand sanitiser, and practise social-distancing rules.
Until we meet on this same page at the same time next week, stay blessed, stay healthy and stay safe!
See the article here:
Behind the News: More than just a constitution - Fiji Times
Posted in Freedom of Speech
Comments Off on Behind the News: More than just a constitution – Fiji Times
Artificial Intelligence & Autopilot | Tesla
Posted: at 9:29 am
Hardware
Build silicon chips that power our full self-driving software from the ground up, taking every small architectural and micro-architectural improvement into account while pushing hard to squeeze maximum silicon performance-per-watt. Perform floor-planning, timing and power analyses on the design. Write robust, randomized tests and scoreboards to verify functionality and performance. Implement compilers and drivers to program and communicate with the chip, with a strong focus on performance optimization and power savings. Finally, validate the silicon chip and bring it to mass production.
Apply cutting-edge research to train deep neural networks on problems ranging from perception to control. Our per-camera networks analyze raw images to perform semantic segmentation, object detection and monocular depth estimation. Our bird's-eye-view networks take video from all cameras to output the road layout, static infrastructure and 3D objects directly in the top-down view. Our networks learn from the most complicated and diverse scenarios in the world, iteratively sourced from our fleet of nearly 1M vehicles in real time. A full build of Autopilot neural networks involves 48 networks that take 70,000 GPU hours to train. Together, they output 1,000 distinct tensors (predictions) at each timestep.
Develop the core algorithms that drive the car by creating a high-fidelity representation of the world and planning trajectories in that space. In order to train the neural networks to predict such representations, algorithmically create accurate and large-scale ground truth data by combining information from the car's sensors across space and time. Use state-of-the-art techniques to build a robust planning and decision-making system that operates in complicated real-world situations under uncertainty. Evaluate your algorithms at the scale of the entire Tesla fleet.
Throughput, latency, correctness and determinism are the main metrics we optimize our code for. Build the Autopilot software foundations up from the lowest levels of the stack, tightly integrating with our custom hardware. Implement super-reliable bootloaders with support for over-the-air updates and bring up customized Linux kernels. Write fast, memory-efficient low-level code to capture high-frequency, high-volume data from our sensors, and to share it with multiple consumer processes without impacting central memory access latency or starving critical functional code from CPU cycles. Squeeze and pipeline compute across a variety of hardware processing units, distributed across multiple system-on-chips.
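The single-writer, multiple-reader pattern described above can be sketched in miniature. This is only an illustrative toy, not Tesla's implementation: real systems place the buffer in shared memory with careful synchronization, while this sketch keeps everything in one process.

```python
class RingBuffer:
    """Toy single-writer buffer: frames are written once, and each consumer
    tracks its own read cursor, so no per-consumer copies are made."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0  # total frames ever written

    def write(self, frame):
        self.buf[self.head % self.capacity] = frame  # overwrite the oldest slot
        self.head += 1

    def read_new(self, cursor):
        """Return frames this consumer has not seen yet, plus its new cursor.
        A consumer that falls too far behind silently loses overwritten frames."""
        start = max(cursor, self.head - self.capacity)
        return [self.buf[i % self.capacity] for i in range(start, self.head)], self.head

rb = RingBuffer(capacity=4)
for frame in range(6):           # stand-in for high-frequency sensor frames
    rb.write(frame)

frames, cursor = rb.read_new(0)  # a consumer that started at frame 0
print(frames)                    # the oldest two frames were already overwritten: [2, 3, 4, 5]
```

A fixed-capacity buffer like this is what makes the latency and determinism goals tractable: writers never block on slow readers, at the cost of dropping data for consumers that lag.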
Build open- and closed-loop, hardware-in-the-loop evaluation tools and infrastructure at scale, to accelerate the pace of innovation, track performance improvements and prevent regressions. Leverage anonymized characteristic clips from our fleet and integrate them into large suites of test cases. Write code simulating our real-world environment, producing highly realistic graphics and other sensor data that feed our Autopilot software for live debugging or automated testing.
Develop the next generation of automation, including a general purpose, bi-pedal, humanoid robot capable of performing tasks that are unsafe, repetitive or boring. We're seeking mechanical, electrical, controls and software engineers to help us leverage our AI expertise beyond our vehicle fleet.
The Top Five Trends In AI: How To Prepare For AI Success – Forbes
Posted: at 9:29 am
The strategic importance of AI is growing at an accelerating pace. Many companies are reaping the rewards of AI now and will increase their investments as a result.
Every board member and every senior executive must understand the key trends in AI that will impact their businesses.
The Top 5 Trends in AI are as follows:
1) Increasing investments
2) Rapid response
3) Risk management
4) Job changes
5) Organizational transformation
According to ResearchAndMarkets.com, the global artificial intelligence market is expected to grow from $40 billion in 2020 to $51 billion in 2021 at a compound annual growth rate (CAGR) of 28%. The market is expected to reach $171 billion in 2025 at a CAGR of 35%.
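The quoted growth figures can be sanity-checked with a short calculation; only the dollar values and years given in the forecast above are used.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two market sizes over a number of years."""
    return (end / start) ** (1 / years) - 1

# Dollar figures (in $bn) quoted from the ResearchAndMarkets.com forecast above.
print(f"2020 -> 2021 growth: {cagr(40, 51, 1):.1%}")   # roughly the quoted 28%
print(f"2021 -> 2025 CAGR:   {cagr(51, 171, 4):.1%}")  # roughly the quoted 35%
```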
Companies that are AI leaders are building an AI flywheel that will enable them to strengthen the lead they already have over their competitors. The flywheel effect comes primarily from AI systems that perform well and then produce more data, helping the system continually improve its performance. Eventually, a competitor will never be able to catch up. (See The AI Threat: Winner-Takes-All)
Another flywheel effect comes from the ability to attract AI talent by building an organization that can enable growth opportunities in AI for that talent.
The companies that have fully embraced AI are focused primarily on:
Creating better customer experiences
Improving decision-making and
Innovating on products and services
(See McKinsey: Winning Companies Are Increasing Their Investment In AI During Covid-19. What Do They Know That You Don't?)
COVID-19 prompted many companies to accelerate their investments in AI. According to a PWC survey, AI is used in strategic decisions around workforce planning, supply chain resilience, scenario planning, and demand projections.
Most companies engage in an annual scenario/strategic planning process. AI can make the strategic planning process an ongoing one. By creating AI models, the strategic plan can be continually updated based on changes in supply, demand, operations, competitive moves, and more.
AI can help sense new threats and opportunities and help a company move away from historical reporting to insightful forecasting.
Companies want to address AI risks but are slow to take action. The top issues include improving privacy, explainability, bias reduction, and improving defenses against cyber threats.
AI lives on data, and data privacy and consumer protection are paramount. (See Consumer Protection and AI: 7 Expert Tips To Stay Out Of Trouble)
AI is sometimes a black box. In some cases, you need to know how and why AI makes certain decisions. In some cases, it's not that important. However, you need to know when it's essential and when it's not.
AI can have many different types of bias risks. The board and senior management need to understand how to mitigate these risks and ensure that action is taken. (See How AI Can Go Terribly Wrong: 5 Biases That Create Failure)
Cyber threats can become more serious when state actors use AI. It's an arms race. How are you playing the cyber security game with AI? (See If Microsoft Can Be Hacked, What About Your Company? How AI Is Transforming Cybersecurity).
AI will replace some jobs. More important, however, is that AI will replace many tasks. Suppose a job consists of many tasks that AI can do more effectively or efficiently than a human. In that case, that job is likely to be replaced. (See Covid Has Changed How We Work. With The Rise Of AI, Is Your Job At Risk?)
If a job has some tasks that are better done by AI, and some that a human does better, then that human's work contribution can be augmented and improved by AI.
In some cases, new jobs will be created related to the development, management, and ongoing maintenance of AI-based systems. Creating these jobs may be challenging, and each company needs to determine the best approach and the type of people they will need to succeed.
Importantly, workers at all levels will need to understand the implications of AI on their jobs. Some will need to be trained in entirely new skills, and some will need to learn that AI is not a threat but an opportunity. All will be concerned about how AI will impact their future.
Management will need to over-communicate the impacts of AI on jobs and on the organization as a whole.
For a company to fully benefit from AI, it requires a cultural shift. The organization needs to become data-driven. It needs to learn how to share data, subject matter expertise, and AI models across the organization, breaking down traditional silos.
Automating routine tasks is important and is an excellent way to get a quick return on investment, but it isn't a top priority for companies that have adopted AI.
According to PWC, the top-ranked AI apps for 2021 include:
Managing risk, fraud, and cybersecurity threats
Improving AI ethics, explainability, and bias detection
Helping employees make better decisions
These applications are wide-reaching and strategic and will significantly benefit from organizational transformation as they are designed, built, and rolled out.
Whether you choose to buy or build your AI-based solutions, you'll need continuous collaboration between the board, senior management, and project leaders. Further collaboration will be required between line managers, data scientists, AI engineers, and the users of the solution.
Please let me know if you see additional trends that can impact your corporate success in AI. I'd love to hear from you.
AI in next-generation connected systems – Ericsson
Posted: at 9:29 am
We see that hybrid approaches will be useful in next-generation intelligent systems where robust learning of complex models is combined with symbolic logic that provides knowledge representation, reasoning, and explanation facilities. The knowledge could be, for example, universal laws of physics or the best-known methods in a specific domain.
Intelligent systems must be endowed with the ability to make decisions autonomously to fulfill given objectives, robustness to be able to solve a problem in several different ways, and flexibility in decision-making by utilizing various pieces of both prepopulated and learned knowledge.
In a large distributed system, decisions are made at different locations and levels. Some decisions are based on local data and governed by tight control loops with low latency. Other decisions are more strategic, affect the system globally, and are made based on data collected from many different sources. Decisions made at a higher global level may also require real-time responses in critical cases such as power-grid failures, cascading node failures, and so on. The intelligence that automates such large and complex systems must reflect their distributed nature and support the management topology.
Data generated at the edge, in a device or a network edge node, will at times need to be processed in place. It may not always be feasible to transfer data to a centralized cloud; there may be laws governing where data can reside, as well as privacy or security implications for data transfer. The scale of decisions in these cases is restricted to a small domain, so the algorithms and computing power necessary are usually fast and light. However, local models could be based on incomplete and biased statistics, which may lead to loss of performance. There is a need to leverage the scale of distribution, make appropriate abstractions of local models and transfer the insights gained to other local models.
Learning about global data patterns from multiple networked devices or nodes without having access to the actual data is also possible. Federated learning has paved the way on this front and more distributed training patterns such as vertical federated learning or split learning have emerged. These new architectures allow machine-learning models to adapt their deployments to the requirements they are to fulfill in terms of data transfer or compute, as well as memory and network resource consumption while maintaining excellent performance guarantees. However, more research is needed, in particular, to cater to different kinds of models and model combinations and stronger privacy guarantees.
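As an illustration of the federated idea described above, here is a toy sketch in which each node fits a trivial "model" (a mean) on its private data, and only the fitted parameters leave the node. Real federated learning aggregates the weights or gradient updates of far richer models; the datasets here are invented.

```python
def local_update(data):
    """Train a 'model' (here, just a mean) on data that never leaves the node."""
    return sum(data) / len(data), len(data)

def federated_average(node_datasets):
    """Combine local models, weighting each by its local sample count,
    without ever seeing the raw data itself."""
    updates = [local_update(d) for d in node_datasets]
    total = sum(n for _, n in updates)
    return sum(param * n for param, n in updates) / total

nodes = [[1.0, 2.0, 3.0], [10.0], [4.0, 6.0]]  # private per-node datasets
print(federated_average(nodes))  # matches the global mean 26/6, ~4.33
```

Because the aggregation is sample-weighted, the combined model here equals what centralized training on the pooled data would give; with nonlinear models the two generally differ, which is part of why more research is needed.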
A common distributed and decentralized paradigm is required to make the best use of local and global data and models as well as determine how to distribute learning and reasoning across nodes to fulfill extreme latency requirements. Such paradigms themselves may be built using machine learning and other AI techniques to incorporate features of self-management, self-optimization, and self-evolution.
AI-based autonomous systems comprise complex models and algorithms; moreover, these models evolve over time with new data and knowledge without manual intervention. The dependence on data, the complexity of algorithms, and the possibility of unexpected emergent behavior of AI-based systems require new methodologies to guarantee transparency, explainability, technical robustness and safety, privacy and data governance, nondiscrimination and fairness, human agency and oversight, and societal and environmental wellbeing and accountability. These elements are crucial for ensuring that humans can understand and consequently establish calibrated trust in AI-based systems [5].
Explainable AI (XAI) is used to achieve transparency of AI-based systems, explaining to the stakeholder why and how the AI algorithm arrived at a specific decision. The methods are applicable to multiple AI techniques like supervised learning, reinforcement learning (RL), machine reasoning, and so on [5]. XAI is acknowledged as a crucial feature for the practical deployment of AI models in systems and for satisfying the fundamental rights of AI users related to AI decision-making, and it is essential in telecommunications, where standardization bodies such as ETSI and IEEE emphasize the need for XAI for the trustworthiness of intelligent communication systems.
The evolving nature of AI models requires either new approaches or extensions to the existing approaches to ensure the robustness and safety of AI models during both training and deployment in the real world. Along with statistical guarantees provided by adversarial robustness, formal verification techniques could be tailored to give deterministic guarantees for safety-critical AI-based systems. Security is one of the contributors to robustness, where both data and models are to be protected from malicious attacks. Privacy of the data, that is, the source and the intended use, must be preserved. The models themselves must not leak privacy information. Furthermore, data should be validated for fairness and domain expectations because of the bias it can introduce to AI decisions.
Since the stakeholders of AI systems are ultimately humans, methods such as those based on causal reasoning and data provenance need to be developed to provide accountability of decisions. The systems should be designed to continuously learn and refine the stakeholder requirements they are set to meet and escalate to a higher level of automated decision making or eventually to human level when they do not have sufficient confidence in certain decisions.
Connected, intelligent machines of varied types are becoming more present in our lives, ranging from virtual assistants to collaborative robots or cobots [6]. For a proper collaboration, it is essential that these machines can understand human needs and intents accurately. Furthermore, all data related to these machines should be available for situational awareness. AI is fundamental throughout this process to enhance the capabilities and collaboration of humans and machines.
Advances in natural language processing and computer vision have made it possible for machines to have a more accurate interpretation of human inputs. This is leveraged by considering nonverbal communication, such as body language and tone of voice. The accurate detection of emotions is now evolving and can support the identification of more complex behaviors, such as tiredness and distraction. In addition, progress in areas such as scene understanding and semantic-information extraction is crucial to having a complete knowledge representation of the environment (see Figure 3). All the perceptual information should be used by the machine to determine the optimum action that maximizes the collaboration. Reinforcement learning (RL), in which a policy is trained to take the best action given the current state and observation of the environment, is receiving increasingly more attention [6]. To avoid unsafe situations, strategies like safe AI are under investigation to ensure safety along the RL model life cycle. Details of RL are provided in the next section.
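For readers unfamiliar with RL, the train-a-policy-by-reward idea can be sketched with tabular Q-learning on a made-up toy environment: a five-cell corridor with a reward at the right end. None of this is from the article; it only illustrates the state-action-reward loop.

```python
import random

random.seed(0)

# Toy environment: states 0..4 in a corridor, reward 1.0 for reaching state 4.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(500):                      # episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the current Q-table, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update toward reward plus discounted best future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy steps right, toward the reward
```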
AI has also enabled a more complete understanding of how the machine operates with the aid of digital twins. Extended reality (XR) devices are becoming more present in mixed-reality setups to visualize detailed data of machines and interact with digital twins at the same time. This increases human understanding of how machines are operating and helps anticipate their actions. In combination with the XR interface, XAI can be applied to provide reasons for a certain decision taken by the machine.
To make collaboration happen, it is also important that the machines respond and interact with humans in a timely manner. As AI methods involved in the collaborative setup can have high computing complexity and machines might have constrained hardware resources, a distributed intelligence solution is required to achieve real-time responses. This means that the communication infrastructure plays a key role in the whole process by supporting ultra-reliable and low-latency communication networks.
The term ‘AI’ overpromises: Here’s how to make it work for humans instead – Big Think
Posted: at 9:29 am
One of the popular memes in literature, movies and tech journalism is that man's creation will rise up and destroy its creator.
Lately, this has taken the form of a fear of AI becoming omnipotent, rising up and annihilating mankind.
The economy has jumped on the AI bandwagon; for a certain period, if you did not have "AI" in your investor pitch, you could forget about funding. (Tip: If you are just using a Google service to tag some images, you are not doing AI.)
However, is there actually anything deserving of the term AI? I would like to make the point that there isn't, and that our current thinking is too focused on working on systems without thinking much about the humans using them, robbing us of the true benefits.
What companies currently employ in the wild are nearly exclusively statistical pattern recognition and replication engines. Basically, all those systems follow the "monkey see, monkey do" pattern: They get fed a certain amount of data and try to mimic some known (or fabricated) output as closely as possible.
When used to provide value, you give them some real-life input and read the predicted output. What if they encounter things never seen before? Well, you better hope that those "new" things are sufficiently similar to previous things, or your "intelligent" system will give quite stupid responses.
But there is not the slightest shred of understanding, reasoning and context in there, just simple re-creation of things seen before. An image recognition system trained to detect sheep in a picture does not have the slightest idea what "sheep" actually means. However, those systems have become so good at recreating the output, that they sometimes look like they know what they are doing.
Isn't that good enough, you may ask? Well, for some limited cases, it is. But it is not "intelligent", as it lacks any ability to reason and needs informed users to identify less obvious outliers with possibly harmful downstream effects.
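The article's "monkey see, monkey do" point can be made concrete with the simplest possible recognizer: a nearest-neighbour lookup that echoes the label of the most similar memorized example, with no notion of what the labels mean. The feature vectors and labels below are invented for the demo.

```python
def nearest_label(examples, query):
    """Return the label of the stored example closest to the query.
    Pure correlation matching: nothing here 'knows' what a label means."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: dist(ex[0], query))[1]

# Made-up 2D 'features' standing in for image embeddings.
training = [((1.0, 1.0), "sheep"), ((1.2, 0.9), "sheep"), ((5.0, 5.0), "wolf")]

print(nearest_label(training, (1.1, 1.0)))  # "sheep": close to examples seen before
print(nearest_label(training, (9.0, 9.0)))  # "wolf": the nearest memorized example,
                                            # however unlike anything seen before
```

The second query shows the failure mode described above: the system must answer with something it has seen, and can only hope new inputs resemble old ones.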
The ladder of thinking has three rungs (illustrated in the original article with a graphic by Notger Heinz):
Imitation: You imitate what you have been shown. For this, you do not need any understanding, just correlations. You are able to remember and replicate the past. Lab mice or current AI systems are on this rung.
Intervention: You understand causal connections and are able to figure out what would happen if you did this now, based on what you learned about the world in the past. This requires a mental model of the part of the world you want to influence and the most relevant of its downstream dependencies. You are able to imagine a different future. You meet dogs and small children on that rung, so it is not a bad place to be.
Counterfactual reasoning: The highest rung, where you wonder what would have happened, had you done this or that in the past. This requires a full world model and a way to simulate the world in your head. You are able to imagine multiple pasts and futures. You meet crows, dolphins and adult humans here.
In order to ascend from one rung to the next, you need to develop a completely new set of skills. You can't just make an imitation system larger and expect it to suddenly be able to reason. Yet this is what we are currently doing with our ever-increasing deep learning models: We think that by giving them more power to imitate, they will at some point magically develop the ability to think. Apart from self-delusional hope and selling nice stories to investors and newspapers, there is little reason to believe that.
And we haven't even touched the topic of computational complexity or the economic and ecological impact of ever-growing models. We might simply not be able to grow our models to the size needed, even if the method worked (which it doesn't, so far).
Whatever those systems create is the mere semblance of intelligence and in pursuing the goal of generating artificial intelligence by imitation, we are following a cargo cult.
Instead, we should get comfortable with the fact that the current ways will not achieve real AI, and we should stop calling it that. Machine learning (ML) is a perfectly fitting term for a tool with awesome capabilities in the narrow fields where it can be applied. And with any tool, you should not try to make the entire world your nail, but instead find out where to use it and where not.
Machines are strong when it comes to quickly and repeatedly performing a task with minimal uncertainty. They are the ruling class of the first rung.
Humans are strong when it comes to context, understanding and making sense of very little data under high uncertainty. They are the ruling class of the second and third rungs.
So what if we shifted our efforts away from the current obsession with removing the human element from everything and thought about combining both strengths? There is an enormous potential in giving machine learning systems the optimal, human-centric shape, in finding the right human-machine interface, so that both can shine. The ML system prepares the data, does some automatable tasks and then hands the results to the human, who further handles them according to context.
ML can become something like good staff to a CEO, a workhorse to a farmer or a good user interface to an app user: empowering, saving time, reducing mistakes.
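A minimal sketch of that division of labor, under the assumption that the model emits a confidence score: anything the model is sure about is handled automatically, and everything else is routed to a human reviewer. The threshold, item names and scores below are illustrative, not from the article.

```python
# Human-in-the-loop triage: the machine handles the confident cases,
# the human handles the ambiguous ones that need context.
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; tune per application

def triage(items, model_score):
    """Split items into auto-handled decisions and a human-review queue."""
    automated, for_human = [], []
    for item in items:
        score = model_score(item)
        if score >= CONFIDENCE_THRESHOLD or score <= 1 - CONFIDENCE_THRESHOLD:
            # confident accept (True) or reject (False)
            automated.append((item, score >= CONFIDENCE_THRESHOLD))
        else:
            for_human.append(item)  # ambiguous: needs human context
    return automated, for_human

# Hypothetical scores standing in for a trained classifier's output.
scores = {"invoice-1": 0.97, "invoice-2": 0.55, "invoice-3": 0.03}
auto, human = triage(list(scores), scores.get)
print(auto)   # [('invoice-1', True), ('invoice-3', False)]
print(human)  # ['invoice-2']
```

The interesting design work is in choosing the threshold and in presenting the ambiguous cases to the reviewer with enough context to decide quickly.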
Building an ML system for a given task is rather easy and will become ever easier. But finding a robust, working integration of the data and the pre-processed results of the data with the decision-maker (i.e., the human) is a hard task. There is a reason why most ML projects fail at the stage of adoption/integration with the organization seeking to use them.
Solving this is a creative task: It is about domain understanding, product design and communication. Instead of going ever bigger to serve, say, more targeted ads, the true prize is in connecting data and humans in clever ways to make better decisions and be able to solve tougher and more important problems.
Republished with permission of the World Economic Forum. Read the original article.
See the rest here:
The term 'AI' overpromises: Here's how to make it work for humans instead - Big Think
How AI and 5G will power the next wave of innovation – ZDNet
Posted: at 9:29 am
In the next 10 years, artificial intelligence is expected to transform every industry, and the catalyst for this transformation is 5G. Together, the two technologies will enable fast, secure, and cost-effective deployment of internet of things devices and smart networks.
AI-powered 5G networks will accelerate the "fourth industrial revolution and create unprecedented opportunities in business and society," Ronnie Vasishta, senior vice president of telecom at graphics chipmaker and software platform developer NVIDIA, said in a special address at the 2021 Mobile World Congress in Barcelona several weeks ago.
"Billions of things are located throughout the network and data centers. A ubiquitous 5G network will connect these data centers and intelligent things at the rate, latency, cost, and power required by the application," Vasishta said. "As this network morphs to adapt to 5G, not only will AI drive innovation, but it will also be required to manage, organize, and increase the efficiency of the network itself."
Unlike previous wireless tech generations, 5G was born in the cloud era and designed specifically for IoT. 5G can connect billions of sensors, such as video cameras, to edge data centers for AI processing.
Here are four real-world examples of where the combination of AI and 5G connectivity is reshaping industries:
Thousands of cameras monitoring automated vehicle assembly. Visual inspection software with deep learning algorithms is used to recognize defects in vehicles. This allows car manufacturers to analyze and identify quality issues on the assembly line.
Urban planning and traffic management for smart cities. In an environment where massive amounts of people and things interact with each other, AI-powered visual inspection software monitors all moving and non-moving elements to improve city safety, space management, and traffic.
Conversational AI and natural language processing enabling future services. Chatbots, voice assistants, and other messaging services are helping various industries automate customer support. Conversational AI is evolving to include new ways of communicating with humans using facial expression and contextual awareness.
Powerful edge computing for extended reality. Virtual reality and augmented reality are no longer tethered by cables to workstations. Thanks to advanced wireless technologies such as 5G, industry professionals can make real-time design changes in AR or be virtually present anywhere in VR.
NVIDIA has been developing AI solutions for more than a decade, working with an extensive ecosystem of independent software vendors and startups on the NVIDIA platform. The company recently partnered with Google Cloud to establish an AI-on-5G Innovation Lab, which network infrastructure and AI software providers will use to develop, test, and launch 5G/AI apps.
NVIDIA's AI-on-5G portfolio includes a unified platform, servers, software-defined 5G virtual radio area networks, enterprise AI apps, and software development kits such as Isaac and Metropolis. A commercial version of NVIDIA AI-on-5G will become available in the second half of this calendar year.
Back in April, NVIDIA launched Aerial A100, which, according to Vasishta, is a "new type of computing platform designed for the (network) edge, combining AI and 5G into EGX for the enterprise." NVIDIA EGX is an accelerated computing platform that allows continuous streaming of data between 5G base stations, warehouses, stores, and other locations. When implementing EGX with Aerial A100, organizations get a complete AI suite of capabilities.
5G and AI infrastructures today are inefficient because they're deployed and managed separately. For enterprises, running AI and 5G on the same computing platform reduces equipment, power, and space costs, while providing greater security for AI apps. For telcos, deploying AI apps over 5G opens up new use cases and revenue streams. They can convert every 5G base station to an edge data center to support both 5G workloads and AI services.
Telcos and enterprises can greatly benefit from converged platforms like NVIDIA's AI-on-5G, where 5G serves as a secure, ultra-reliable, and cost-effective communication fabric between sensors and AI apps.
Can we really rely on AI? – Times of Malta
Artificial Intelligence (AI) is a collection of systems, developments and techniques that enable machines to compute actions from data sets. It is a constellation of many different technologies that work together to enable machines to sense, understand, act, and learn at a human intelligence level. AI systems are becoming increasingly complex as they are used in more and more areas of our lives to create forecasts and prediction models.
Popular search engines now make recommendations based on the text users enter. The search engine uses AI to predict what they are trying to find to give them better information. When one uses maps apps on their phone to navigate, AI reads numerous data points and provides the user with updated traffic information in real time. Statistical machine translation methods are used to find patterns in billions of words of translated text, such as United Nations books and records, and then apply these patterns to new translations.
Several companies are using technological advancements in Machine Learning (ML), natural language processing and other forms of AI to make relevant and immediate recommendations for their customers. Modern technologies based on ML and AI are being adopted by the robotics industry to develop robots that can work autonomously and overcome all the challenges they face on the move.
By prioritising technology solutions that enable them to harness the power of AI, companies can instantly provide potential buyers with bespoke content and relevant information about them in a virtual world. Human concierges wearing augmented reality (AR) headsets that tell them what customers want before they ask are already a reality.
One company that has succeeded in this approach is a renowned tailor, who uses AI in partnership with human stylists to select clothes for his customers. In fact, it has been found that most people accept AI recommendations when it works in partnership with humans.
While Hollywood films and science-fiction novels often portray AI as humanlike robots conquering the world, the current evolution of AI technology is not scary, but intelligent. Given the scepticism of leaders in modern AI research and the diverse nature of modern narrow AI systems, there is little cause to worry that general artificial intelligence will disrupt society anytime soon.
Many see AI as increasing human capacity, but some predict the opposite: that as people become increasingly dependent on machine-controlled networks, their ability to think for themselves, act independently of automated systems, and interact with one another will be undermined.
As we gather more and more data, ML tools have improved. This ability to rapidly process enormous amounts of data, refine information and find connections is causing AI technology to proliferate. Scientists are using AI to manage data-intensive terrain, refine climate science, make more accurate predictions, and enable society and nature to adapt to the future.
Analysts expect people to become increasingly reliant on connected AI and increasingly complex digital systems. Nevertheless, if we implement these systems wisely, we can continue the process of improving everyday life with positive results.
This article was prepared by collating various publicly available online sources.
Claude Calleja, Executive, eSkills Malta Foundation
AI And Advanced Analytics In Shipping And Logistics: An Interview With Gregory Brown And Laura Patel, UPS – Forbes
One of the most profound, and perhaps most unanticipated, impacts of the COVID pandemic is the dramatic change to the global supply chain, the global workforce, and the newfound pressures on delivery and logistics. Certainly no one would have imagined that a primarily health-related cause would have such profound economic, workforce, and basic materials impacts.
However, out of challenge comes opportunity. Organizations are re-examining their processes and technologies that deal with all aspects of producing and delivering goods and services to market, from the origination of raw materials to the delivery of finished products. Organizations focused on shipping and logistics are especially realizing the new reality, and certainly UPS, one of the largest delivery and logistics companies in the world is no exception.
Speaking at an upcoming Data for AI event on October 7, 2021, Gregory Brown, Vice President of Strategy and R&D, Advanced Technology Group at UPS and Laura Patel, Principal Data Scientist at UPS explain exactly these impacts and how UPS makes data-driven decisions for AI innovation. In this Forbes interview, Greg and Laura share some insights they will be diving into at the online event.
What are some innovative ways you're leveraging advanced data analytics to benefit UPS?
Greg: We are using advanced data analytics, AI and automation to process more volume, more efficiently, with the reliability that our customers have come to expect from us. These tools are critical to increasing the visibility and control over what packages are coming into our network, where those packages are going, and how soon they need to be delivered. We've also reduced the time in transit between millions of zip code combinations. That translates to more smiles from satisfied customers.
How do you identify which problem area(s) to start with for your data analytics and cognitive technology projects?
Greg: We're always looking for opportunities where our internal and external customer needs intersect. For example, our internal customers in operations may benefit from enhanced utilization of our facilities and vehicles, and our external customers would also benefit with enhanced reliability, visibility, or reduced time in transit. To find the most impactful opportunities, we use a technology-agnostic approach that gives us the flexibility to identify solutions that best fit our business.
What are some of the unique opportunities you have when it comes to data and AI?
Laura: Predicting how many packages, and what types, may enter our network is a unique opportunity to leverage advanced data analytics and AI. We are able to more efficiently determine how 24.7 million packages and documents flow through our network around the world every day. This increases our teams' opportunities to more quickly make data-driven decisions throughout our global network.
Can you share some of the challenges when it comes to AI and ML adoption?
Laura: Incorporating new technologies, even new applications of existing technologies, across a global enterprise that operates around the clock every day requires an extraordinary amount of planning and preparation. We take extra care to ensure that the adoption of new technologies is done seamlessly, reliably, and securely, and is understood and usable by our employees and customers.
How do analytics, automation, and AI work together at UPS?
Laura: We start with a solid foundation in analytics, building up to AI, which enables us to build to automation. By using a building-block approach, we can identify problems that need to be addressed and build solutions that are robust, replicable and scalable.
What are you doing to develop a data literate and AI ready workforce?
Greg: We attract master's- and Ph.D.-level talent to our company, and also utilize our promote-from-within culture to train our employees to be adept with these technologies. We also make it more accessible for people who are not specialists in AI and data analytics to understand the concepts and incorporate the technologies into their day-to-day responsibilities. This inclusive, team-based approach enables us to apply data analytics and AI across our enterprise.
What AI technologies are you most looking forward to in the coming years?
Greg: We're excited about how technologies will continue to create opportunities to solve complex problems in innovative ways. Technology is increasingly being built into the tools that our employees use, our vehicles, and our facilities. The next step would be making these technologies more portable by utilizing smart infrastructure. A seamless smart infrastructure would unlock opportunities to serve our customers in ways that we only could have dreamed of a few years ago.
Greg and Laura both share that they will be diving into these details in greater depth at the online Data for AI event coming up on October 7, 2021.
In the US, the AI Industry Risks Becoming Winner-Take-Most – WIRED
A new study warns that the American AI industry is highly concentrated in the San Francisco Bay Area and that this could prove to be a weakness in the long run. The Bay leads all other regions of the country in AI research and investment activity, accounting for about one-quarter of AI conference papers, patents, and companies in the US. Bay Area metro areas see levels of AI activity four times higher than other top cities for AI development.
"When you have a high percentage of all AI activity in Bay Area metros, you may be overconcentrating, losing diversity, and getting groupthink in the algorithmic economy. It locks in a winner-take-most dimension to this sector, and that's where we hope that federal policy will begin to invest in new and different AI clusters in new and different places to provide a balance or counter," Mark Muro, policy director at the Brookings Institution and the study's coauthor, told WIRED.
The study, titled "The geography of AI," ranks nearly 400 US metro areas based on their capabilities in AI, using metrics like AI job listings, early-stage company creation data from Crunchbase, published research, and federal research and development funding. It found that two-thirds of AI activity is in just 15 metro areas, largely along coastlines: the two superstars of San Francisco and San Jose, plus 13 other early adopter locales like Austin and Seattle. Meanwhile, more than half of the metro areas together account for just 5 percent of AI activity.
The impact of AI on people's everyday lives is expected to grow as more businesses and governments adopt the technology. While automation can grow productivity (PwC predicts it will add $3.7 trillion to North American economies by 2030), some economists and ethicists fear AI will also accelerate inequality and give more wealth and power to people who are already wealthy and powerful. Cities with the ability to support early-stage AI development and forge talent pipelines for local businesses will reap the benefits as the AI industry continues to grow. Those that don't could potentially get left behind, although increased AI adoption can have downsides, too, like job loss from automation.
Muro says the US would be wise to invest in other parts of the country before AI's regional overconcentration becomes even more entrenched.
The study identifies nearly 90 cities in the United States that have the potential to bring more AI-related jobs and resources to their communities, including major cities like Atlanta, Chicago, Detroit, and Houston, as well as some college towns like Bloomington, Indiana, and Athens, Georgia.
Cognizant of the fact that it's difficult to drive AI development without deep research and investment capabilities, Muro and his coauthor Sifan Liu urge cities to develop highly realistic plans to support local AI. They suggest regions focus on plans to attract and retain AI talent, expand education opportunities at high schools and community colleges, and consider tax breaks for businesses in the AI space.
Muro also advocates policy that focuses on AI use cases valuable to local businesses or industries, awards government contracts to local AI companies, and differentiates a city from others. There is plenty of room for AI growth beyond tech companies, as suggested by cities in the study's early adopter category. In Lincoln, Nebraska, American Express is the biggest AI employer, according to an analysis of Burning Glass data performed as part of the study. In Los Angeles and Santa Cruz, California, the cybersecurity company CrowdStrike leads the way. In Washington, DC, Capital One and Booz Allen Hamilton are major employers.
Regional leaders in places like San Diego and Louisville, Kentucky, have taken steps to assess the AI needs of their respective regions.
A 2019 Brookings report predicted that Kentucky would be one of the US states most heavily impacted by job loss due to automation. In April, Brookings Metro laid out a strategy for Louisville, one of the states largest cities, to adapt. That report suggests a partnership with other Midwest and Southeast US cities on AI and data solutions for health care.
Leading MLOps Tools Are The Next Frontier Of Scaling AI In The Enterprise – Forbes
Machine Learning Operations (MLOps) is on the rise as a critical technology for scaling machine learning in the enterprise. According to McKinsey, by 2030 ML could add up to $13 trillion to the global economy by enabling workers in all sectors to improve their output. Furthermore, MarketWatch projects substantial growth in the global MLOps market between 2021 and 2027. According to IBM, by 2023, 70% of AI workloads will use application containers or be built using a serverless programming model, necessitating a DevOps culture. What's more, according to Algorithmia, 85% of machine learning models never make it to production. For businesses, creating machine learning applications, managing those models and putting them into action is challenging. Different companies, such as DataRobot, have emerged as top machine learning operations tool enablers to help the industry handle these challenges.
Processing, implementing and deploying machine learning models requires specific tools that can solve challenges in the process. The challenge of getting from raw data to decisions is made more accessible by applying various operations on-device or in the cloud as needed. To do this at scale, businesses need a platform that adds support for new ML frameworks through open interfaces and offers several ways to add or remove models and processes.
The leading machine learning operations tools for enterprise are:
DataRobot specializes in automated machine learning for businesses, which eases the process of model development and upkeep within an app or platform. DataRobot's suite of products also gives users access to a pre-trained model store. DataRobot offers several features that help businesses get started with ML data pipelines and operations, including a visual debugger for debugging machine learning code.
DataRobot's competitive advantage is its ease of use for non-technical users. DataRobot's user interface enables ML beginners to input data and build a model without in-depth coding knowledge or background. Some unique solutions include the ability to run models in a web browser, prototyping tools to test data pipelines and algorithms before launching them in production, and the ability of DataRobot's AutoML suite to choose between hundreds of machine learning algorithms automatically. The model store supports more than 200 open-source frameworks, including TensorFlow, SciKit-Learn, XGBoost, PyTorch, and TensorRT.
Some of DataRobot's top customers are Deloitte, Panasonic, US Bank, Lenovo, among others. An example success story is a cross-functional team at Panasonic that used DataRobot to build predictive maintenance models that identified and repaired equipment problems up to 9 days earlier than their previous method. This reduced the number of machine failures and increased productivity by 5%.
H2O is a complete platform for data science and machine learning that enables companies to implement end-to-end workflows from data preparation to model building with one consistent SDK. The company also offers support in developing, deploying and managing models.
H2O's automation engine enables businesses to create, deploy and manage machine learning applications in a visual environment. These environments offer pre-configured workflows for common machine learning tasks like feature engineering, model training and deployment. This is where its competitive advantage lies: it speeds up results for non-technical users, who can run experiments from one interface that includes data preparation with automated feature engineering and model training with XGBoost. H2O's platform supports any data type, scales to large clusters of GPUs and integrates with Spark, Python, R and other languages.
Some companies using H2O include global leaders in retail, banking, telecommunications and insurance. An example success story is a telecom company that wanted to analyze customer experience data to predict potential churners. The telecom company reduced churn by 10% and increased the number of customers contacted per month from 30,000 to 100,000.
Amazon SageMaker is a platform for data scientists. It was built to address businesses' challenges in getting from raw data to production-ready machine learning models. Amazon's cloud software enables enterprises to implement end-to-end workflows and create, train, deploy and manage machine learning applications, reducing the need for companies to maintain large internal data science teams.
Amazon SageMaker's competitive advantage is that it offers pre-configured templates for deep learning, reinforcement learning and multi-cloud training across multiple frameworks, like Apache MXNet, TensorFlow and others. Amazon also provides custom configurations for businesses that need a more specific type of model or tool. With support for feature engineering and automatic hyperparameter tuning, Amazon SageMaker speeds up building a model and reduces time spent debugging.
Amazon SageMaker's biggest customers range from Toyota to Nielsen, ExxonMobil to Epic Games. An example success story is Nielsen, which migrated its National Television Audience Measurement platform to AWS and built a new, cloud-native television rating platform that allowed the company to grow its measurement capabilities from measuring 40,000 households daily to more than 30 million households each day.
MLflow is a machine learning platform that enables collaborative experimentation and tracking. This speeds up the entire process of building, training and deploying models across data teams. MLflow has an open-source lightweight library for Python developers who want to track experiments on TensorFlow, SciKit-Learn and PyTorch via one API. The company also offers a server product that allows teams to track experiments on Spark via one API.
MLflow's main competitive advantage is allowing employees outside of the data science team to collaborate on building, training and deploying models. The platform also shortens the time to deploy models and makes it easier to track experiments across tools.
Some companies that use MLflow include Microsoft, Zillow, Facebook, Booking.com and Genpact. For example, Microsoft supports open-source MLflow in Azure Machine Learning to provide its customers with maximum flexibility. This means developers can use the standard MLflow tracking API to track runs and deploy models directly into the Azure Machine Learning service.
IBM Watson Machine Learning allows businesses to deploy self-learning models at scale, bringing AI into applications. It is available for free or priced based on workload.
The main competitive advantage of IBM Watson Machine Learning is that it provides the possibility to train, deploy and manage models according to a company's specific requirements. The platform supports the deployment of models on any infrastructure (cloud or on-premises) for many businesses.
IBM Watson Studio is the ideal platform for companies to build a multi-cloud ModelOps practice. It provides an integrated development environment that allows developers to use the latest cognitive computing tools from within a single package, also part of IBM Machine Learning. This means businesses can develop, build and train models in one place and deploy them on any framework like TensorFlow, SparkML or H2O.
An interesting case study is American Airlines. American Airlines needed a new technological platform and a different method of development that would help it provide digital self-service functionality and customer value more swiftly throughout its business. By providing the airline with a common platform, IBM assists it in moving some of its critical applications to the IBM Cloud and using new methods to develop creative apps quickly while improving customer experiences.
Algorithmia is a single platform that covers all aspects of machine learning operations (MLOps). It allows for collaboration between data experts and engineers on complicated applications. Some 100,000 people use the service, including UN staff members and Fortune 500 businesses.
The company's main competitive advantages include the ability to ramp up speed and productivity by streamlining data science operations and reducing costs by bringing data science operations in-house. The platform also allows developers to automate data science tasks with code. It enables the creation of workflows for predictive apps using standard tools like Jupyter Notebooks, RStudio, Apache Spark and TensorFlow via a simple drag-and-drop interface.
Customers of Algorithmia include Tevec, EY and GitHub. According to EY Partner Carl Case, EY successfully used Algorithmia's MLOps solution: "We've reduced false positives in institutional systems by 40-60%, sometimes more, and the real benefit of working with Algorithmia has been taking deployment timelines down and getting models to production."
MLOps tools are essential for enterprises that want to turn their valuable datasets into actionable insights at the pace of digital transformation. These tools focus on model management and deployment, both to the cloud and device. In addition, there is also support for new frameworks as they are released to enable businesses to handle ongoing machine learning operations. The significance of these tools is only expected to grow as enterprises apply machine learning at scale. Lastly, MLOps should leave businesses feeling empowered to test and run their models, eliminating errors and misfires.







