Breaking News and Updates
- Abolition Of Work
- Alternative Medicine
- Artificial Intelligence
- Atlas Shrugged
- Ayn Rand
- Basic Income Guarantee
- CBD Oil
- Chess Engines
- Cloud Computing
- Conscious Evolution
- Cosmic Heaven
- Designer Babies
- Donald Trump
- Ethical Egoism
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom of Speech
- Gene Medicine
- Genetic Engineering
- Germ Warfare
- Golden Rule
- Government Oppression
- High Seas
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Longevity
- Immortality Medicine
- Intentional Communities
- Jordan Peterson
- Life Extension
- Mars Colonization
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- National Vanguard
- New Utopia
- Online Casino
- Personal Empowerment
- Political Correctness
- Politically Incorrect
- Post Human
- Post Humanism
- Private Islands
- Quantum Computing
- Quantum Physics
- Resource Based Economy
- Ron Paul
- Second Amendment
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Teilhard De Chardin
- The Singularity
- Tor Browser
- Transhuman News
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Zeitgeist Movement
The Evolutionary Perspective
Category Archives: AI
Posted: December 18, 2019 at 8:44 pm
Last year, Finland launched a free online crash course in artificial intelligence with the aim of educating its citizens about the new technology. Now, as a Christmas present to the world, the European nation is making the six-week program available for anyone to take.
Strictly speaking, it's a present for the European Union. Finland is relinquishing the EU's rotating presidency at the end of the year, and decided to translate its course into every EU language as a gift to citizens. But there aren't any geographical restrictions on who can take the course, so really it's to the world's benefit.
The course certainly proved itself in Finland, with more than 1 percent of the Nordic nation's 5.5 million citizens signing up. The course, named Elements of AI, is currently available in English, Swedish, Estonian, Finnish, and German.
There are already quite a few sites for people looking to learn the basics of AI, but Finland's offering seems worth your time if you're interested in such a thing. It's nicely designed, offers short tests at the end of each section, and covers a range of topics from the philosophical implications of AI to technical subjects like Bayesian probability. It's supposed to take about six weeks to finish, with each section taking between five and 10 hours.
The Finnish government said it originally designed the course to give its citizens an advantage in AI. Finland has always punched above its weight in tech and education, so it seems sensible to marry the two strengths. Megan Schaible of the tech consultancy Reaktor, which helped design the course, said the motivation was to prove that "AI should not be left in the hands of a few elite coders."
'Tis the end of the year, when pundits typically dust off the crystal ball and take a stab at what tech, and its impact on consumers, will look like over the next 12 months.
But we're also on the doorstep of a brand-new decade, which this time around promises further advances in 5G networks, artificial intelligence, quantum computing, self-driving vehicles and more, all of which will dramatically alter the way we live, work and play.
So what tech advances can we look forward to in the new year? Here's what we can expect to see in 2020, and in some cases beyond.
The next generation of wireless has shown up on lists like this for years now. But in 2020, 5G really will finally begin to make its mark in the U.S., with all four major national carriers (three if the T-Mobile-Sprint merger finally goes through) continuing to build out their 5G networks across the country.
We've been hearing about the promise of 5G on the global stage for what seems like forever, and the carriers recently launched in select markets. Still, the rollout in most places will continue to take time, as will the payoff: blistering-fast wireless speeds and network responsiveness on our phones, improved self-driving cars and augmented reality, remote surgery, and entire smart cities.
As 2019 winds down, only a few phones can exploit the latest networks, not to mention all the remaining holes in 5G coverage. But you'll see a whole lot more 5G phone introductions in the new year, including what many of us expect will be a 5G iPhone come September.
When those holes are filled, roughly two-thirds of consumers said they'd be more willing to buy a 5G-capable smartphone, according to a mobile trends survey by Deloitte.
But Deloitte executive Kevin Westcott also said that telcos will need to manage consumer expectations about what 5G can deliver and determine what the killer apps for 5G will be.
The Deloitte survey also found that a combination of economic barriers (pricing, affordability) and a sense that current phones are good enough will continue to slow the smartphone refresh cycle.
Are you ready for all the tech around you to disappear? No, not right away. The trend toward so-called ambient computing is not going to happen overnight, nor is anyone suggesting that screens and keyboards are going to go away entirely, or that you'll stop reaching for a smartphone. But as more tiny sensors are built into walls, TVs, household appliances, fixtures, what you're wearing, and eventually even your own body, you'll be able to gesture or speak to a concealed assistant to get things done.
Steve Koenig, vice president of research at the Consumer Technology Association, likens ambient computing to Star Trek, and suggests that at some point we won't need to place Amazon Echo Dots or other smart speakers in every room of the house, since we'll just speak out loud to whatever, wherever.
Self-driving cars have been getting most of the attention. But it's not just cars that are going autonomous: try planes and boats.
Cirrus Aircraft, for example, is in the final stages of getting Federal Aviation Administration approval for a self-landing system for one of its private jets, and the tech, which I recently got to test, has real potential to save lives.
How so? If the pilot becomes incapacitated, a passenger can press a single button on the roof of the main cabin. At that moment, the plane starts acting as if the pilot were still doing things. It factors in real-time weather, wind, the terrain, how much fuel remains, and all the nearby airports where an emergency landing is possible, including the lengths of all runways, and automatically broadcasts its whereabouts to air traffic control. From there the system safely lands the plane.
Or consider the 2020 version of the Mayflower: not a Pilgrim ship, but rather a marine research vessel from IBM and a marine exploration non-profit known as Promare. The plan is to have the unmanned ship cross the Atlantic in September from Plymouth, England, to Plymouth, Massachusetts. The ship will be powered by a hybrid propulsion system, utilizing wind, solar, state-of-the-art batteries, and a diesel generator. It plans to follow the 3,220-mile route the original Mayflower took 400 years ago.
Two of America's biggest passions come together. Esports is one of the fastest-growing spectator sports around the world, and the Supreme Court cleared a path last year for legalized gambling across the states. The betting community is licking its chops at the prospect of exploiting this mostly untapped market. You'll be able to bet on esports in more places, whether at a sportsbook inside a casino or through an app on your phone.
One of the scary prospects about artificial intelligence is that it is going to eliminate jobs. Research out of MIT and IBM Watson suggests that while AI will certainly impact the workplace, it won't lead to a huge loss of jobs.
That's a somewhat optimistic take, given the alternate view that AI-driven automation is going to displace workers. The research suggests that AI will increasingly help us with tasks that can be automated, but will have a less direct impact on jobs that require skills such as design expertise and industrial strategy. The onus will be on bosses and employees to start adapting to new roles and to try to expand their skills, efforts the researchers say will begin in the new year.
The scary signs are still out there, however. For instance, McDonald's is already testing AI-powered drive-thrus that can recognize voice, which could reduce the need for human order-takers.
Perhaps it's more wishful thinking than a flat-out prediction, but as Westcott puts it, "I'm hoping what goes away are the 17 power cords in my briefcase." Presumably a slight exaggeration.
But the thing we all want to see is batteries that don't prematurely peter out, and more seamless charging solutions.
We're still far off from the day when you'll be able to get ample power to last all day on your phone or other devices just by walking into a room. But over-the-air wireless charging is slowly but surely progressing. This past June, for example, Seattle company Ossia received FCC certification for a first-of-its-kind system to deliver over-the-air power at a distance. Devices with Ossia's tech built in should start appearing in the new year.
The Samsung Galaxy Fold smartphone featuring a foldable OLED display.
We know how the nascent market for foldable phones unfolded in 2019: things were kind of messy. Samsung's Galaxy Fold was delayed for months following screen problems, and even when the phone finally did arrive, it cost nearly $2,000. But that doesn't mean the idea behind flexible screen technologies goes away.
Samsung is still at it, and so is Lenovo-owned Motorola with its new retro Razr. The promise remains the same: let a device fold or bend in such a way that you can take a smartphone-like form factor and morph it into a small tablet or computer. The ultimate success of such efforts will boil down to at least three of the factors that are always critical in tech: cost, simplicity, and utility.
Data scandals and privacy breaches have placed Facebook, Google and others under the government's crosshairs, and ordinary citizens are concerned. Expect some sort of reckoning, though it isn't obvious at this stage what that reckoning will look like.
Pew recently put out a report that says roughly 6 in 10 Americans believe it is not possible to go about their daily lives without having their data collected.
Open question: Will there be national privacy regulations, perhaps ones modeled after the California law that is set to go into effect in the new year?
It isn't easy to explain quantum computing or the field it harnesses, quantum mechanics. In the simplest terms, think of something exponentially more powerful than what we consider conventional computing, which is expressed in the 1s and 0s of bits. Quantum computing takes a quantum leap with what are known as "qubits."
And while IBM, Intel, Google, Microsoft and others are all fighting for quantum supremacy, the takeaway over the next decade is that the tech may help solve problems far faster than before, from diagnosing disease to cracking forms of encryption, raising the stakes in data security.
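To make the bit-versus-qubit contrast a little more concrete, here is a tiny single-qubit simulation in plain Python. This is purely illustrative (the Hadamard gate and the |0> state are standard quantum-computing notions; the function names are our own): a bit holds 0 or 1, while a qubit holds two amplitudes whose squared magnitudes give the measurement probabilities.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a0, a1)."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def probabilities(state):
    """Probability of measuring 0 or 1: the squared amplitude magnitudes."""
    return (abs(state[0]) ** 2, abs(state[1]) ** 2)

qubit = (1.0, 0.0)            # starts as a definite |0>, like a classical bit
qubit = hadamard(qubit)       # now an equal superposition of 0 and 1
p0, p1 = probabilities(qubit)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

A classical bit can never be "half 0 and half 1" like this; the superposition is where the extra computational power comes from.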
"Civilization advances by extending the number of important operations we can perform without thinking about them." Alfred North Whitehead, British mathematician, 1919
Hailed as a truly transformational technology, artificial intelligence (AI) is positioned to disrupt businesses either by enabling new approaches to solving complex problems, or threatening the status quo for whole business sectors or types of jobs. Whether you understand what the excitement is all about and how it will be applied to your market, or you struggle to understand how you might take advantage of the technology, having some basic understanding of artificial intelligence and its potential applications has to be part of your strategic planning process.
Despite the hype, it is sobering to remember that artificial intelligence is not a magic trick that can do anything; it's a tool with which a magician can do a few tricks. In this article, I discuss the current landscape and outline some considerations for how artificial intelligence may be applied to embedded systems, with a focus on how to plan for deployment in these more constrained environments.
Definitions and basic principles
AI is a computer science discipline looking at how computers can be used to mimic human intelligence. AI has existed since the dawn of computing in the 20th century, when pioneers such as Alan Turing foresaw the possibility of computers solving problems in ways similar to how humans might do so.
Classical computer programming solves problems by encoding algorithms explicitly in code, guiding computers to execute logic to process data and compute an output. In contrast, Machine Learning (ML) is an AI approach that seeks to find patterns in data, effectively learning based on the data. There are many ways in which this can be implemented, including pre-labeling data (or not), reinforcement learning to guide algorithm development, extracting features through statistical analysis (or some other means), and then classifying input data against this trained data set to determine an output with a stated degree of confidence.
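A toy contrast may help here. In the classical approach the rule is written by hand; in the ML approach the rule is derived from labeled data. The data and names below are invented for illustration (a nearest-centroid classifier stands in for "finding patterns in data"):

```python
# Classical programming: the decision rule is encoded explicitly.
def classify_by_rule(x):
    return "large" if x >= 10 else "small"

# Machine Learning: the "rule" is learned from labeled examples.
def train_centroids(examples):
    """examples: list of (value, label); returns label -> mean value."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify_by_model(model, x):
    # Pick the label whose learned centroid is closest to x.
    return min(model, key=lambda label: abs(model[label] - x))

data = [(1, "small"), (2, "small"), (3, "small"), (20, "large"), (25, "large")]
model = train_centroids(data)
print(classify_by_model(model, 18))  # large: no threshold was ever written down
```

Real ML systems also attach a confidence to each classification, as described above, but the structural difference is the same: the program's behavior comes from the training data rather than from hand-written logic.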
Deep Learning (DL) is a subset of ML that uses multiple layers of neural networks to iteratively train a model from large data sets. Once trained, a model can look at new data sets to make an inference about the new data. This approach has gained a lot of recent attention, and has been applied to problems as varied as image processing, speech recognition, and financial asset modeling. We see this approach also having a significant impact in future critical infrastructure and devices.
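The "multiple layers" idea can be sketched in a few lines. Below is a two-layer forward pass (inference) in plain Python; the weights are made up for illustration, whereas in practice they would be the result of cloud-side training as described later:

```python
def relu(v):
    """A common activation function: clip negatives to zero."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: output_j = sum_i inputs_i * w[i][j] + b_j."""
    return [
        sum(inputs[i] * weights[i][j] for i in range(len(inputs))) + biases[j]
        for j in range(len(biases))
    ]

# Illustrative weights; a trained model would supply these values.
w1 = [[0.5, -0.2], [0.1, 0.4]]
b1 = [0.0, 0.1]
w2 = [[1.0], [-1.0]]
b2 = [0.0]

x = [1.0, 2.0]
hidden = relu(dense(x, w1, b1))   # layer 1
output = dense(hidden, w2, b2)    # layer 2: the inference result
print(output)  # [0.0]
```

Training is the expensive part: it searches for weight values that make outputs match labeled data across millions of examples, which is why it happens on big hardware while this kind of cheap forward pass can run on a device.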
Applying ML/DL in embedded systems
Due to the large data sets required to create accurate models, and the large amount of computing power required to train models, training is usually performed in the cloud or in high-performance computing environments. In contrast, inference is often applied in devices close to the source of data. Whereas distributed or edge training is a topic of great interest, it is not the way in which most ML systems are deployed today. For the sake of simplicity, let's assume that training takes place in the cloud, and inference will take place at the edge or in-device.
As we've described, ML and DL are data-centric disciplines. As such, creating and training models requires access to large data sets, along with tools and environments that make data manipulation easy. Frameworks and languages that ease the manipulation of data, and implement complex math libraries and statistical analysis, are used. Often these are language frameworks such as Python, on which ML frameworks are then built. There are many such frameworks, but some common ones include TensorFlow, Caffe, and PyTorch.
ML frameworks can be used for model development and training, and can also be used to run inference engines using trained models at the edge. A simple deployment scenario is therefore to deploy a framework such as TensorFlow in a device. As these require rich runtime environments, such as Python, they are best suited to general-purpose compute workloads on Linux. Due to the need to run ML in mobile devices, we're seeing a number of lighter-weight inference engines (TensorFlow Lite, PyTorch Mobile) starting to be developed that will require fewer resources, but these are not yet as widely available or as mature as their full-featured parents.
ML is highly computationally intensive, and early deployments (such as in autonomous vehicles) rely on specialized hardware accelerators such as GPUs, FPGAs or specialized neural networks. As these accelerators become more prevalent in SoCs, we can anticipate seeing highly efficient engines to run DL models in constrained devices. When that happens, another deployment option will be to compile trained models for optimized deployment on DNN accelerators. Some such tools already exist, and require modern compiler frameworks such as LLVM to target the model front-ends, and the hardware accelerator back-ends.
Implications for embedded development
Embedded development is often driven by the need to deploy highly optimized and efficient systems. The classical development approach is to start with very constrained hardware and software environments, and add capability only as needed. This has been the typical realm of RTOS applications.
With rapidly changing technologies, we see that the development approach starts with making complex systems work, and then optimizing for deployment at a later stage. As with many major advances in software, open source communities are a large driver of the pace and scale of innovation that we see in ML. Embracing tools and frameworks that originate in open source, and often start with development on Linux, is rapidly becoming the primary innovation path. Using both a real-time operating system (RTOS) and Linux, or migrating open source from Linux to an RTOS, are therefore important developer journeys that must be supported.
Latest AI That Learns On-The-Fly Is Raising Serious Concerns, Including For Self-Driving Cars – Forbes
AI Machine Learning is being debated due to the "update problem" of adaptiveness.
Humans typically learn new things on-the-fly.
Let's use jigsaw puzzles to explore the learning process.
Imagine that you are asked to solve a jigsaw puzzle and you've not previously had the time or inclination to solve jigsaw puzzles (yes, there are some people who swear they will never do a jigsaw puzzle, as though it is beneath them or otherwise a useless use of their mind).
Upon dumping out onto the table all the pieces from the box, you likely turn all the pieces right side up and do a quick visual scan of the pieces and the picture shown on the box of what you are trying to solve for.
Most people are self-motivated to try and put all the pieces together as efficiently as they can, meaning that it would be unusual for someone to purposely find pieces that fit together and yet not put them together. Reasonable people would be aiming to increasingly build toward solving the jigsaw puzzle and strive to do so in a relatively efficient manner.
A young child is bound to just jump into the task and pick pieces at random, trying to fit them together, even if the colors don't match and even if the shapes don't connect with each other. After a bit of time doing this, most children gradually realize that they ought to be looking to connect pieces that fit together and also match in color as depicted on the overall picture being solved for.
All right, you've had a while to solve the jigsaw puzzle, and let's assume you were able to do so.
Did you learn anything in the process of solving the jigsaw puzzle, especially something that might be applied to doing additional jigsaw puzzles later on?
Perhaps you figured out that there are some pieces that are at the edge of the puzzle. Those pieces are easy to find since they have a square edge. Furthermore, you might also divine that if you put together all the edges first, you'll have an outline of the solved puzzle and can build within that outline.
It seems like a smart idea.
To recap, you cleverly noticed a pattern among the pieces, namely that there were some with a straight or squared edge. Based on that pattern, you took an additional mental step and decided that you could likely do the edge of the puzzle with less effort than the rest of the puzzle; plus, by completing the overall edge, it would seem to further your efforts toward completing the rest of the puzzle.
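The edge-first heuristic is simple enough to write down. In this sketch (entirely illustrative, not from the article) each piece is modeled only by how many flat sides it has: 0 for interior, 1 for edge, 2 for corner; the heuristic just works on the flattest pieces first:

```python
def edge_first_order(pieces):
    """Return pieces sorted so corner/edge pieces come before interior ones."""
    return sorted(pieces, key=lambda p: -p["flat_sides"])

puzzle = [
    {"id": "A", "flat_sides": 0},   # interior piece
    {"id": "B", "flat_sides": 2},   # corner piece
    {"id": "C", "flat_sides": 1},   # edge piece
    {"id": "D", "flat_sides": 0},   # interior piece
]
order = [p["id"] for p in edge_first_order(puzzle)]
print(order)  # ['B', 'C', 'A', 'D'] -- corners, then edges, then interior
```

Notice what the heuristic silently assumes: that flat sides mark the border. On an edgeless puzzle that assumption fails, which is exactly the trap described next.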
Maybe you figured this out while doing the puzzle and opted to try the approach right away, rather than simply mentally filing the discovered technique away to use on a later occasion.
I next give you a second jigsaw puzzle.
What do you do?
You might decide to use your newfound technique and proceed ahead by doing the edges first.
Suppose, though, that I've played a bit of a trick and given you a so-called edgeless jigsaw puzzle. An edgeless version is one that doesn't have a straight or square edge to the puzzle; instead, the edges are merely everyday pieces that appear to be perpetually unconnected.
If you are insistent on trying to first find all the straight or square-edged pieces, you'll be quite disappointed and frustrated, having to then abandon the edge-first algorithm that you've devised.
Some edgeless puzzles go further by having some pieces that are within the body of the puzzle that have square or straight edges, thereby possibly fooling you into believing that those pieces are for the true edge of the jigsaw.
Overall, here's what happened as you learned to do jigsaw puzzles.
You likely started by doing things in a somewhat random way, especially for the first jigsaw, finding pieces that fit together and assembling portions or chunks of the jigsaw. While doing so, you noticed that some pieces appeared to be the edges, and so you came up with the notion that doing the edges was a keen way to more efficiently solve the puzzle. You might have employed this discovery right away, while in the act of solving the puzzle.
When you were given the second jigsaw, you tried to apply your lesson learned from the first one, but it didn't hold true.
Turns out that the edge approach doesn't always work, though you did not perhaps realize this limitation upon initial discovery of the tactic.
As this quick example showcases, learning can occur in the act of performing a task and might well be helpful for future performances of the task.
Meanwhile, what you've learned during a given task won't necessarily be applicable in future tasks, and could at times confuse you or make you less efficient, since you might be determined to apply something that you've learned even though it no longer applies in other situations.
Adaptive Versus Lock-down While Learning
Learning that occurs on-the-fly is considered adaptive, implying that you are adapting as you go along.
In contrast, if you aren't aiming to learn on-the-fly, you can try to lock out the learning process and seek to proceed without doing any learning. This kind of lock-down of the learning process involves inhibiting any learning and making use of only what has previously been learned.
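The adaptive-versus-lock-down distinction can be sketched in code. Here is a toy "learner" (invented for illustration) that estimates a running average: in adaptive mode each observation changes its behavior immediately, while in lock-down mode observations are only queued for later review:

```python
class Learner:
    """Toy on-the-fly learner: tracks a running average of observations."""

    def __init__(self, estimate, locked=False):
        self.estimate = estimate   # the initial estimate counts as one sample
        self.locked = locked
        self.pending = []          # observations held back while locked
        self.count = 1

    def observe(self, value):
        if self.locked:
            self.pending.append(value)   # learn nothing on-the-fly
        else:
            self.count += 1
            self.estimate += (value - self.estimate) / self.count

adaptive = Learner(10.0)
adaptive.observe(20.0)
print(adaptive.estimate)                # 15.0 -- drifted with the new data

locked = Learner(10.0, locked=True)
locked.observe(20.0)
print(locked.estimate, locked.pending)  # 10.0 [20.0] -- unchanged, queued
```

The locked learner behaves identically on every run, which is easy to test and certify; the adaptive one is a moving target, which is the crux of the debate that follows.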
Voila, now it's time to discuss Artificial Intelligence (AI).
Today's AI systems have seemingly gotten pretty good at a number of human-like tasks (though quite constrained tasks), partially as a result of advances in Machine Learning (ML).
Machine Learning involves the computer system seeking to find patterns and then leveraging those patterns for boosting the performance of the AI.
An AI developer usually opts to try out different kinds of Machine Learning methods when they are putting together an AI system (see my piece on ensemble ML) and typically settles on a specific ML that they will then embed into their AI system.
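As a rough illustration of the ensemble idea mentioned above, here is a minimal majority-vote sketch in Python. The threshold "models" are invented stand-ins for real ML methods; the point is only the structure, in which several models vote and the majority label wins:

```python
from collections import Counter

def make_threshold_model(t):
    """A trivial stand-in for a trained model: a single learned threshold."""
    return lambda x: "high" if x > t else "low"

# Three different "methods" the developer tried, kept as an ensemble.
models = [make_threshold_model(t) for t in (3, 5, 7)]

def ensemble_predict(models, x):
    """Classify x by majority vote across the ensemble members."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

print(ensemble_predict(models, 6))  # high: two of the three models vote high
```

In practice the developer often measures each candidate method on held-out data and either embeds the single best one, as the text says, or keeps several and combines their votes like this.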
A looming issue that society is gradually uncovering involves whether AI Machine Learning should be adaptive as it performs its efforts, or whether it is better to lock-down the adaptiveness while the ML is undertaking a task.
Let's consider why this is an important point.
Such a concern has been especially raised in the MedTech space, involving AI-based medical devices and systems that are being used in medicine and healthcare.
Suppose that an inventor creates a new medical device that examines blood samples, and the device, using AI, tries to make predictions about the health of the patient that provided the blood.
Usually, such a device would require federal regulatory approval before it could be placed into the marketplace for usage.
If this medical device is making use of AI Machine Learning, it implies that the system could be using adaptive techniques and therefore will try to improve its predictive capability while examining blood samples.
Any federal agency that initially tested the medical device to try and ensure that it was reliable and accurate prior to it being released would have done so at a point in time prior to those adaptive acts that are going to occur while the AI ML is in everyday use.
Thus, the medical device using AI ML is going to inevitably change what it does, likely veering outside the realm of what the agency thought it was approving.
On the downside, the ML is potentially going to learn things that aren't necessarily applicable, and yet not realize that those aspects are not always relevant, and proceed to falsely assess a given blood sample (recall the story of believing that the edge of a jigsaw can be done by simply finding the straight or squared pieces, which didn't turn out to be a valid approach in all cases).
On the upside, the ML might be identifying valuable nuances by being adaptive and self-improve itself toward assessing blood samples, boosting what it does and enhancing patient care.
Yes, some argue, there is that chance of the upside, but when making potentially life-or-death assessments, do we want an AI Machine Learning algorithm being unleashed such that it could adapt in ways that aren't desirable and might, in fact, be downright dangerous?
That's the rub.
Some assert that the adaptive aspects should not be allowed to adjust what the AI system does on-the-fly, and that instead, in a lock-down mode, the system should merely collect and identify potential changes that would then be inspected and approved by a human, such as the AI developers who put together the system.
Furthermore, in a regulatory situation, the AI developers would need to go back to the regulatory agency and propose that the AI system is now a newly proposed updated version and get agency approval before those adaptations were used in the real-world acts of the system.
This thorny question about adaptiveness running free or being locked down is often called the "update problem" and is raising quite a debate.
In case you think the answer is simply "always lock down," unfortunately, life is not always so easy.
Those who don't want the lock-down are apt to say that doing so will hamstring the AI Machine Learning, which presumably has the advantage of being able to self-adjust and get better as it undertakes its efforts.
If you force the AI ML to perform in a lock-down manner, you might as well toss out the AI ML since it no longer is free to adjust and enhance what it does.
Trying to find a suitable middle ground, some suggest that there could be guardrails that serve to keep the AI ML from going too far astray.
By putting boundaries or limits on the kinds of adjustments or adaptiveness, you could maybe get the best of both worlds, namely a form of adaptive capability that furthers the system and yet keeps it within a suitable range that won't cause the system to become unsavory.
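One very simple form such a guardrail could take is a clamp: the system may adapt a parameter on-the-fly, but never outside a pre-approved safe range. The parameter name and numbers below are invented for illustration:

```python
def guarded_update(proposed, low, high):
    """Clamp a self-adjusted parameter to a pre-approved safe range."""
    return max(low, min(high, proposed))

braking_gain = 1.0                       # pre-approved starting value
for proposed in (1.1, 1.4, 3.0):         # the system's on-the-fly adjustments
    braking_gain = guarded_update(proposed, low=0.5, high=1.5)
print(braking_gain)  # 1.5: the 3.0 adjustment was capped at the guardrail
```

The regulator then approves the range rather than every individual update: adaptation continues, but its worst case is bounded.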
The U.S. Food and Drug Administration (FDA) has sketched a regulatory framework for AI ML and medical devices (see link here) and is seeking input on this update problem debate.
Overall, this element of AI ML is still up for debate across all areas of application, not just the medical domain, and brings to the forefront the trade-offs involved in deploying AI ML systems.
Here's an interesting question: Do we want true self-driving cars to be able to utilize AI Machine Learning in an adaptive manner or in a lock-down manner?
It's kind of a trick question, or at least a tricky question.
Let's unpack the matter.
The Levels Of Self-Driving Cars
It is important to clarify what I mean when referring to true self-driving cars.
True self-driving cars are ones in which the AI drives the car entirely on its own, with no human assistance during the driving task.
These driverless vehicles are considered a Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at a Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5; we don't yet even know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so I'm not going to include them in this discussion about AI ML (though, for clarification, Level 2 and Level 3 could indeed have AI ML involved in their systems, and thus this discussion overall is relevant even to semi-autonomous cars).
For semi-autonomous cars, it is equally important that I mention a disturbing aspect that's been arising: namely, that in spite of those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And Update Problem
For Level 4 and Level 5 true self-driving vehicles, there wont be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
The AI driving software is developed, tested, and loaded into the on-board computer processors that are in the driverless car. To allow for the AI software to be updated over time, the driverless car has an OTA (Over-The-Air) electronic communication capability.
When the AI developers decide it's time to do an update, they will push out the latest version of the AI driving software to the vehicle. Usually, this happens while the self-driving car is parked, say in your garage, perhaps charging up if it's an EV, and the OTA then takes place.
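That parked-only constraint is easy to sketch. In this illustrative model (class and state names are our own, not any automaker's API), an update can be downloaded at any time but is installed only when the vehicle reports that it is parked:

```python
class Vehicle:
    """Toy OTA model: stage updates any time, apply only while parked."""

    def __init__(self, version):
        self.version = version
        self.staged = None        # downloaded but not yet installed
        self.state = "driving"

    def receive_ota(self, new_version):
        self.staged = new_version        # download can happen mid-journey

    def try_apply(self):
        if self.staged and self.state == "parked":
            self.version = self.staged   # install only while parked
            self.staged = None

car = Vehicle("1.0")
car.receive_ota("1.1")
car.try_apply()
print(car.version)   # 1.0 -- still driving, so nothing is installed
car.state = "parked"
car.try_apply()
print(car.version)   # 1.1 -- installed once safely parked
```

Allowing installs while in motion would amount to deleting the `state == "parked"` check, which is precisely what makes that idea controversial.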
Right now, it is rare for the OTA updating to occur while the car is in motion, though there are efforts underway for enabling OTA of that nature (there is controversy about doing so, see link here).
Not only can updates be pushed into the driverless car; the OTA capability can also be used to pull data from the self-driving car. For example, the sensors on the self-driving car will have collected lots of images, video, and radar and LIDAR data during a driving journey. This data could be sent up to the cloud being used by the automaker or self-driving tech firm.
We are ready now to discuss the AI Machine Learning topic as it relates to adaptiveness versus lock-down in the use case of self-driving cars.
Should the AI ML that's on-board the driverless car be allowed to update itself, being adaptive, or should the updates only be performed via OTA from the cloud, based on presumably the latest updates instituted and approved by the AI developers?
This might seem rather abstract, so let's use a simple example to illuminate the matter.
Consider the instance of a driverless car that encounters a dog in the roadway.
Perhaps the AI ML on board the self-driving car detects the dog and opts to honk the horn of the car to try to prod the dog to get out of the way. Let's pretend that the horn honking succeeds and the dog scampers away.
In an adaptive mode, the AI ML might adjust to now include that honking the horn is successful at prompting an animal to get off the road.
Suppose a while later, there's a cat in the road. The AI system opts to honk the horn, and the cat scurries away (though that cat is mighty steamed!).
So far, this horn honking seems to be working out well.
The next day, there's a moose in the roadway.
The AI system honks the horn, since doing so worked previously, and the AI assumes that the moose is going to run away.
Oops: startled by the horn, the moose instead decides to charge at the menacing mechanical beast.
Now, I realize this example is a bit contrived, but I'm trying to quickly illustrate that an adaptive AI ML could adjust in a manner that won't necessarily be right in all cases (again, recall the earlier jigsaw story).
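The over-generalization in this story can be sketched as a toy policy that reuses any action that "worked before." All names and behaviors below are illustrative, not drawn from any real driving stack.

```python
# Toy model of an adaptive on-board learner that over-generalizes:
# after honking clears one animal, it trusts honking for all animals.

SCATTERED_BY_HORN = {"dog", "cat"}  # in this toy world, only these flee

class OnboardML:
    def __init__(self, adaptive):
        self.adaptive = adaptive
        self.rules = {}  # animal-on-road -> action

    def decide(self, animal):
        # A locked-down system only uses rules shipped via OTA; an
        # adaptive one also generalizes from its own past successes.
        if animal in self.rules:
            return self.rules[animal]
        if self.adaptive and "honk" in self.rules.values():
            return "honk"          # "it worked on some animal before"
        return "slow_and_stop"     # conservative default

    def observe(self, animal, action, cleared):
        if self.adaptive and cleared:
            self.rules[animal] = action  # success reinforces the rule

def encounter(ml, animal):
    action = ml.decide(animal)
    cleared = action == "honk" and animal in SCATTERED_BY_HORN
    ml.observe(animal, action, cleared)
    return action, cleared

ml = OnboardML(adaptive=True)
ml.rules["dog"] = "honk"          # suppose the dog episode taught this
print(encounter(ml, "cat"))       # ('honk', True)  -- generalizes, works
print(encounter(ml, "moose"))     # ('honk', False) -- generalizes, backfires
```

The same `OnboardML` with `adaptive=False` would fall back to its conservative default for the moose, which is the lock-down trade-off discussed below.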
Rather than the on-board AI ML adjusting, perhaps it would be safer to keep it in lock-down.
But, you say, the on-board AI will be forever in a static state and not be improving.
Well, recall that there's the OTA capability for updating.
Presumably, the driverless car could have provided the data about the initial instance of the dog and the horn honking up to the cloud, and the AI developers might have studied the matter. Then, upon carefully adjusting the AI system, the AI developers might, later on, push the latest animal avoidance routine down into the driverless car.
The point is that there is an open question about whether we want multi-ton, life-or-death cars on our roadways run by AI that is able to adjust itself, or whether we want the on-board AI to be on lock-down and only allow updates via OTA (which presumably would be explicitly derived and approved by human hands and minds).
That's the crux of the update problem for driverless cars.
There is a plethora of trade-offs involved in the self-driving car adaptiveness dilemma.
If a self-driving car isn't adjusting on the fly, it might not cope well with new situations that crop up and will perhaps fail to make an urgent choice appropriately. Having to wait hours, days, or weeks for an OTA update prolongs the time that the AI remains unable to adequately handle certain roadway situations.
Human drivers adapt on the fly, and if we are seeking to have the AI driving system be as good as or better than human drivers, wouldn't we want and need the AI ML to be adaptive on the fly?
Can suitable system-related guardrails be put in place to keep the AI ML from adapting in some kind of wild or untoward manner?
Though we commonly deride human drivers for their flaws and foibles, the ability of humans to learn and adjust their behavior is quite a marvel, one that continues to be somewhat elusive when it comes to achieving the same in AI and Machine Learning.
Some believe that we need to solve the jigsaw puzzle of the human mind and how it works before we'll have AI ML of any real sophistication.
This isn't a mere edge problem; it sits at the core of achieving true AI.
Artificial intelligence is one of the fastest moving and least predictable industries. Just think about all the things that were inconceivable a few years back: deepfakes, AI-powered machine translation, bots that can master the most complicated games, etc.
But it never hurts to try our chances at predicting the future of AI. We asked scientists and AI thought leaders about what they think will happen in the AI space in the year to come. Here's what you need to know.
As Jeroen Tas, Philips' Chief Innovation & Strategy Officer, told TNW: AI's main impact in 2020 will be transforming healthcare workflows to the benefit of patients and healthcare professionals alike, while at the same time reducing costs. Its ability to acquire data in real time from multiple hospital information flows (electronic health records, emergency department admissions, equipment utilization, staffing levels, etc.) and to interpret and analyze it in meaningful ways will enable a wide range of efficiency- and care-enhancing capabilities.
This will come in the form of optimized scheduling, automated reporting, and automatic initialization of equipment settings, Tas explained, customized to an individual clinician's way of working and an individual patient's condition: features that improve the patient and staff experience, result in better outcomes, and contribute to lower costs.
There is tremendous waste in many healthcare systems related to complex administration processes, lack of preventative care, and over- and under-diagnosis and treatment. These are areas where AI could really start to make a difference, Tas told TNW. Further out, one of the most promising applications of AI will be in the area of Command Centers, which will optimize patient flow and resource allocation.
Philips is a key player in the development of the necessary AI-enabled apps, seamlessly integrated into existing healthcare workflows. Currently, one in every two researchers at Philips worldwide works with data science and AI, pioneering new ways to apply this tech to revolutionize healthcare.
For example, Tas explained how combining AI with expert clinical and domain knowledge will begin to speed up routine and simple yes/no diagnoses, not replacing clinicians but freeing up more time for them to focus on the difficult, often complex decisions surrounding an individual patient's care: AI-enabled systems will track, predict, and support the allocation of patient acuity and availability of medical staff, ICU beds, operating rooms, and diagnostic and therapeutic equipment.
2020 will be the year of AI trustability, Karthik Ramakrishnan, Head of Advisory and AI Enablement at Element AI, told TNW. 2019 saw the emergence of early principles for AI ethics and risk management, and there have been early attempts at operationalizing these principles in toolkits and other research approaches. The concept of explainability (being able to explain the forces behind AI-based decisions) is also becoming increasingly well known.
There has certainly been a growing focus on AI ethics in 2019. Early in the year, the European Commission published a set of seven guidelines for developing ethical AI. In October, Element AI, which was co-founded by Yoshua Bengio, one of the pioneers of deep learning, partnered with the Mozilla Foundation to create data trusts and push for the ethical use of AI. Big tech companies such as Microsoft and Google have also taken steps toward making their AI development conformant to ethical norms.
The growing interest in ethical AI comes after some visible failures around trust and AI in the marketplace, Ramakrishnan reminded us, such as the Apple Pay rollout, or the recent surge in interest regarding the Cambridge Analytica scandal.
In 2020, enterprises will pay closer attention to AI trust, whether they're ready to or not. Expect to see VCs pay attention, too, with new startups emerging to help with solutions, Ramakrishnan said.
We'll see a rise of data synthesis methodologies to combat data challenges in AI, Rana el Kaliouby, CEO and co-founder of Affectiva, told TNW. Deep learning techniques are data-hungry, meaning that AI algorithms built on deep learning can only work accurately when they're trained and validated on massive amounts of data. But companies developing AI often find it challenging to get access to the right kinds of data, and the necessary volumes of data.
Many researchers in the AI space are beginning to test and use emerging data synthesis methodologies to overcome the limitations of real-world data available to them. With these methodologies, companies can take data that has already been collected and synthesize it to create new data, el Kaliouby said.
Take the automotive industry, for example. There's a lot of interest in understanding what's happening with people inside a vehicle as the industry works to develop advanced driver safety features and to personalize the transportation experience. However, it's difficult, expensive, and time-consuming to collect real-world driver data. Data synthesis is helping address that: for example, if you have a video of me driving in my car, you can use that data to create new scenarios, i.e., to simulate me turning my head, or wearing a hat or sunglasses, el Kaliouby added.
Thanks to advances in areas such as generative adversarial networks (GANs), many areas of AI research can now synthesize their own training data. Data synthesis, however, doesn't eliminate the need for collecting real-world data, el Kaliouby reminds us: [Real data] will always be critical to the development of accurate AI algorithms. However, [data synthesis] can augment those data sets.
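The simplest form of data synthesis is generating new training samples by transforming ones you already have. Real pipelines use GANs and 3D simulation to synthesize head turns or sunglasses; the horizontal flips and lighting shifts below are just a minimal stand-in for the idea.

```python
# Minimal data-augmentation sketch: create several perturbed copies of
# one "real" frame (mirroring and brightness noise), as a toy stand-in
# for the far richer synthesis methods described above.

import numpy as np

def synthesize(image, n_variants=4, seed=0):
    """Return n_variants perturbed copies of an HxW grayscale image."""
    rng = np.random.default_rng(seed)
    variants = []
    for _ in range(n_variants):
        v = image.copy()
        if rng.random() < 0.5:
            v = v[:, ::-1]  # mirror: a crude "turning the head"
        # lighting change: additive noise, clipped to valid pixel range
        v = np.clip(v + rng.normal(0, 5, v.shape), 0, 255)
        variants.append(v)
    return variants

frame = np.zeros((4, 4)) + 128   # stand-in for one real driver frame
augmented = synthesize(frame)
print(len(augmented))            # 4 synthetic frames from one real one
```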
Neural network architectures will continue to grow in size and depth and produce more accurate results and become better at mimicking human performance on tasks that involve data analysis, Kate Saenko, Associate Professor at the Department of Computer Science at Boston University, told TNW. At the same time, methods for improving the efficiency of neural networks will also improve, and we will see more real-time and power-efficient networks running on small devices.
Saenko predicts that neural generation methods such as deepfakes will also continue to improve and create ever more realistic manipulations of text, photos, videos, audio, and other multimedia that are undetectable to humans. The creation and detection of deepfakes has already become a cat-and-mouse chase.
As AI enters more and more fields, new issues and concerns will arise. There will be more scrutiny of the reliability and bias behind these AI methods as they become more widely deployed in society, for example, more local governments considering a ban on AI-powered surveillance because of privacy and fairness concerns, Saenko said.
Saenko, who is also the director of BU's Computer Vision and Learning Group, has a long history of researching visual AI algorithms. In 2018, she helped develop RISE, a method for scrutinizing the decisions made by computer vision algorithms.
In 2020, expect to see significant new innovations in the area of what IBM calls AI for AI: using AI to help automate the steps and processes involved in the life cycle of creating, deploying, managing, and operating AI models to help scale AI more widely into the enterprise, said Sriram Raghavan, VP of IBM Research AI.
Automating AI has become a growing area of research and development in the past few years. One example is Googles AutoML, a tool that simplifies the process of creating machine learning models and makes the technology accessible to a wider audience. Earlier this year, IBM launched AutoAI, a platform for automating data preparation, model development, feature engineering, and hyperparameter optimization.
In addition, we will begin to see more examples of the use of neurosymbolic AI, which combines statistical data-driven approaches with powerful knowledge representation and reasoning techniques to yield more explainable and robust AI that can learn from less data, Raghavan told TNW.
An example is the Neurosymbolic Concept Learner, a hybrid AI model developed by researchers at IBM and MIT. NSCL combines classical rule-based AI and neural networks and shows promise in solving some of the endemic problems of current AI models, including large data requirements and a lack of explainability.
2020 will be the year that the manufacturing industry embraces AI to modernize the production line, said Massimiliano Versace, CEO and co-founder of Neurala. For the manufacturing industry, one of the biggest challenges is quality control. Product managers are struggling to inspect each individual product and component while also meeting deadlines for massive orders.
By integrating AI solutions as a part of workflows, AI will be able to augment and address this challenge, Versace believes: In the same way that the power drill changed the way we use screwdrivers, AI will augment existing processes in the manufacturing industry by reducing the burden of mundane and potentially dangerous tasks, freeing up workers time to focus on innovative product development that will push the industry forward.
Manufacturers will move towards the edge, Versace adds. With AI and data becoming centralized, manufacturers are forced to pay massive fees to top cloud providers to access the data that keeps their systems up and running. The challenges of cloud-based AI have spurred a slate of innovations toward creating edge AI: software and hardware that can run AI algorithms without needing a link to the cloud.
New routes to training AI that can be deployed and refined at the edge will become more prevalent. As we move into the new year, more and more manufacturers will begin to turn to the edge to generate data, minimize latency problems and reduce massive cloud fees. By running AI where it is needed (at the edge), manufacturers can maintain ownership of their data, Versace told TNW.
AI will remain a top national military and economic security issue in 2020 and beyond, said Ishan Manaktala, CEO of Symphony AyasdiAI. Already, governments are investing heavily in AI as a possible next competitive front. China has invested over $140 billion, while the UK, France, and the rest of Europe have plowed more than $25 billion into AI programs. The U.S., starting late, spent roughly $2 billion on AI in 2019 and will spend more than $4 billion in 2020.
Manaktala added, But experts urge more investment, warning that the U.S. is still behind. A recent National Security Commission on Artificial Intelligence report noted that China is likely to overtake U.S. research and development spending in the next decade. The NSCAI outlined five points in its preliminary report: invest in AI R&D, apply AI to national security missions, train and recruit AI talent, protect U.S. technology advantages, and marshal global coordination.
We predict drug discovery will be vastly improved in 2020 as manual visual processes are automated because visual AI will be able to monitor and detect cellular drug interactions on a massive scale, Emrah Gultekin, CEO at Chooch, told TNW. Currently, years are wasted in clinical trials because drug researchers are taking notes, then entering those notes in spreadsheets and submitting them to the FDA for approval. Instead, highly accurate analysis driven by AI can lead to radically faster drug discoveries.
Drug development is a tedious process that can take up to 12 years and involve the collective efforts of thousands of researchers. The costs of developing new drugs can easily exceed $1 billion. But theres hope that AI algorithms can speed up the process of experimentation and data gathering in drug discovery.
Additionally, cell counting is a massive problem in biological research, not just in drug discovery. People are hunched over microscopes or sitting in front of screens with clickers in their hands counting cells. There are expensive machines that attempt to count, inaccurately. But visual AI platforms can perform this task with 99% accuracy in just moments, Gultekin added.
This post is brought to you by Philips.
AI transforms the nature of work, but doesn't change the jobs to be done.
From voice-activated smart speakers like Google Home to the spam filter on our work emails, AI has infiltrated our daily lives. Depending on who you talk to, AI will either enable us to do our jobs better or make them completely redundant. The reality is that AI transforms the nature of work, but doesn't change the jobs to be done. The aspects that make us inherently human (critical reasoning, communication, and empathy) will still be vital attributes in the future of work.
If you give a computer a problem, it learns from its interactions with the problem to identify a solution faster than humans can. But, if you ask a computer to look at two paintings and say which is more interesting, it cannot. Unlike people, artificial intelligence is not able to think abstractly and emotionally.
By supplementing human intelligence and creativity with technology that reduces menial processes, there is a great opportunity to enable recruiters, not replace them. McKinsey research shows that over two thirds of businesses (69%) believe AI brings value to their Human Resources function.
Here are three ways AI improves recruitment practices:
1. Reducing unconscious bias
People have an unintentional tendency to make decisions based on their underlying beliefs, experiences, and feelings; it's how we make sense of the world around us. And recruiting is no different. In fact, there's bias in something as straightforward as the words we choose.
Research shows that job descriptions that use descriptive words like support and understanding are biased towards female applicants, whereas competitive and lead are biased towards males. When we use these loaded words, were limiting the pool of candidates who will apply for an open role, making the recruiting process biased and affecting hiring outcomes. AI-enabled tools such as Textio can support recruiters to identify the use of bias in role description wording. Removing these words and making descriptions neutral and inclusive can lead to 42% more applications.
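The word-level check described above can be sketched in a few lines. The coded-word lists here are tiny illustrative samples; Textio's real lexicons and scoring models are proprietary and far larger.

```python
# Toy bias scanner: flag gender-coded words in a job description.
# Word lists are small illustrative samples, not a real lexicon.

FEMININE_CODED = {"support", "understanding", "collaborative"}
MASCULINE_CODED = {"competitive", "lead", "dominant"}

def bias_report(job_description):
    # Normalize: lowercase and strip trailing punctuation from each word.
    words = {w.strip(".,").lower() for w in job_description.split()}
    return {
        "feminine_coded": sorted(words & FEMININE_CODED),
        "masculine_coded": sorted(words & MASCULINE_CODED),
    }

report = bias_report("We want a competitive self-starter to lead our sales team.")
print(report)  # {'feminine_coded': [], 'masculine_coded': ['competitive', 'lead']}
```

A real tool would go on to suggest neutral replacements for the flagged words rather than just reporting them.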
Unconscious bias can extend beyond our choice of words to the decisions we make about candidates. Unintentionally, recruiters and hiring managers can decide to interview someone based on the university they attended or even where they are from, or view them as a cultural fit based on their answers. But decisions based on these familiarities disregard important factors like a candidate's previous work experience and skills. When AI is used to select the shortlist for interviews, it can circumvent bias that would otherwise be introduced by manually scanning resumes.
While AI can reduce bias, this is only true if the programs themselves are designed carefully. Machine learning algorithms are subject to the potentially biased programming choice of the people who build them and the data set theyre given. While development of this technology is still being fine tuned, we need to focus on finding the balance between artificial intelligence and human intelligence. We shouldnt rely solely on one or the other, but instead use them to complement each other.
2. Improving recruitment for hiring managers, recruiters and candidates
It goes without saying that recruitment is a people-first function. Candidates want to speak to a recruiter or hiring manager and form an authentic connection, which they wont be able to get from interacting with a machine.
Using AI, recruiters can remove tedious and time-consuming processes, leaving more time to focus on engaging candidates as part of the assessment process.
XOR is a good example of this. The platform enables pre-screening of applications, qualifications and automatic interview scheduling. By taking out these tedious administrative tasks from a recruiters day, they can optimize their time to focus on finding the best fit for the role.
AI also helps create an engaging and personalized candidate experience. AI can be leveraged to nurture talent pools by serving relevant content to candidates based on their previous applications. At different stages of the process, AI can ask candidates qualifying questions, learn what types of roles they would be interested in, and serve them content that assists in their application.
But AI does have a different impact on the candidate experience depending on what stage it is implemented in the recruitment process. Some candidates prefer interacting with a chatbot at the start of the application process, as they feel more comfortable to ask general questions such as salary and job location. For delivery firm Yodel, implementing chatbots at the initial stage of the application process resulted in a decrease in applicant drop-off rates. Now only 8% of applicants choose not to proceed with their application, compared to a previous drop-off rate of 50-60%.
When it comes to more meaningful discussions, such as how the role aligns with a candidate's career goals and how they can progress within the company, human interaction is highly valued. Considering when and how you use AI to enhance the recruitment experience is key to getting the best results.
3. Identifying the best candidate for a role
At its core, recruitment is about finding the best person for a role. During the screening process, recruiters can use AI to identify key candidates by mapping the traits and characteristics of previous high-performing employees in the same role to find a match. This means recruiters are able to fill open roles more quickly and ensure that new hires are prepared to contribute to their new workplace.
PredictiveHire is one of these tools. It uses AI to run initial screening of job applications, making the process faster and more objective by pulling data and trends from a companys previous high-performing employees and scanning against candidate applications. With 88% accuracy, PredictiveHire identifies the traits inherent to a companys high performers so recruiters can progress them to the interview stage.
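One plausible way to implement the "map traits of past high performers, then score candidates against them" idea is cosine similarity over trait vectors. This is purely an illustrative sketch, not PredictiveHire's actual (proprietary) model; all trait names and numbers are invented.

```python
# Score candidates by cosine similarity to an averaged profile of past
# high performers, then shortlist the closest matches.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical trait vectors: [conscientiousness, communication, domain_skill]
high_performer_profile = [0.9, 0.8, 0.7]  # averaged from past top employees

candidates = {
    "cand_a": [0.85, 0.75, 0.8],
    "cand_b": [0.2, 0.9, 0.1],
}

scores = {name: cosine(v, high_performer_profile) for name, v in candidates.items()}
shortlist = sorted(scores, key=scores.get, reverse=True)
print(shortlist[0])  # cand_a -- closest to the high-performer profile
```

Note the caveat from earlier in the article: if the "high performer" data reflects a biased hiring history, this matching simply reproduces that bias.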
Undoubtedly, we will continue to see more exciting applications of AI in the next few years. The talent search process can certainly be streamlined and improved by incorporating AI. For recruiters, it is about finding the right balance in marrying AI application and human intelligence to make the hiring process what it should be: seamless and engaging.
Wally Kankowski owns a pool repair business in Florida and likes 12 creams in his McDonald's coffee each morning. What he doesn't like is the way the company is pushing him to place his order via a touchscreen kiosk instead of talking with counter staff, some of whom he has known for years. The thing is knocking someone out of a job, he says.
Wally is one of several humans who discuss the present and future of workplace automation in the seventh installment of the Sleepwalkers podcast, which offers an atlas of the artificial intelligence revolution. The episode explores how work and society will change as AI begins to take over more tasks that people currently do, whether in apple orchards or psychiatry offices.
Some of the portents are scary. Kai-Fu Lee, an AI investor and formerly Google's top executive in China, warns that AI advances will be much more disruptive to workers than other recent technologies. He predicts that 40 percent of the world's jobs will be lost to automation in the next 15 years.
AI will make phenomenal companies and tycoons faster, and it will also displace jobs faster, than computers and the internet, Lee says. He advises governments to start thinking now about how to support the large numbers of people who will be unable to work because automation has made their skills obsolete. It's going to be a serious matter for social stability, he adds.
The episode also looks at how automation could be designed to assist humans, not replace them, and to narrow divisions in society, not widen them.
Toyota is working on autonomous driving technology designed to make driving safer and more fun, not replace the need for a driver altogether.
George Kantor from Carnegie Mellon University describes his hope that plant-breeding robots will help develop better crop varieties, easing the impacts of climate change and heading off humanitarian crises. Better seeds means better crops, and that could ultimately lead to a more well-nourished world, he says.
Kantor and Lee both argue that thinking about the positive outcomes of automation is necessary to fend off the bad ones. Whether we point at a future that is utopia or dystopia, if everybody believes in it, then it becomes a self-fulfilling prophecy, Lee says. I'd like to be part of that force which points toward a utopian direction, even though I fully recognize the possibility and risks of the negative ending.
The World Is On The Cusp Of Revolutionizing Many Sectors Through AI and Data Analytics: Yogesh Mudras – Entrepreneur
Some trends observed by the security and surveillance sector are Artificial Intelligence, Cloud Computing, Cybersecurity, and Sensor integration, according to Yogesh Mudras, Managing Director, Informa Markets.
December 18, 2019
3 min read
Opinions expressed by Entrepreneur contributors are their own.
You're reading Entrepreneur India, an international franchise of Entrepreneur Media.
The number of cybersecurity incidents, including denial-of-service attacks, has increased disproportionately, according to annual reports from the Ministry of Electronics and Information Technology. In 2016-17, the reported incidents stood at 35,418; in 2017-18 there were 69,539 incidents, rising to 274,465 in 2018-19. India reported slightly more than 313,000 cybersecurity incidents in the ten months to October. This is a jump of more than 55 per cent from the 208,456 such incidents reported in 2018, says Yogesh Mudras, Managing Director, Informa Markets. Informa Markets creates platforms for international markets to trade, innovate, and grow. This year it is organising the 13th edition of the International Fire and Security Exhibition and Conference (IFSEC) India Expo from December 19 to 21 at the India Expo Mart in Greater Noida.
Mudras hails artificial intelligence not as a futuristic vision but as a need of the hour. AI is being integrated and deployed in various sectors, including finance, national security, health care, criminal justice, transportation, and smart cities, points out Mudras. He further adds, AI is a major driver to provide security, combat terrorism, and improve speech recognition programs.
Some trends observed in the security and surveillance sector are Artificial Intelligence, Cloud Computing, Cybersecurity, and Sensor integration. Internet Protocol (IP)-based surveillance technology, touted as the future of surveillance systems, has replaced closed-circuit analogue systems. Trends such as sensors, biometrics, real-time connectivity, advanced processing software, and analytics have also propelled the industry's growth.
What Could the Government Do To Improve the Security Sector?
India's ambition of sustaining high growth in the safety domain depends on one important factor: e-infrastructure. E-infrastructure comprises the tools, facilities, and resources needed for advanced collaboration, and includes the integration of various technologies such as the Internet, computing power, bandwidth provisioning, data storage, etc.
The country is plagued by weak e-infrastructure that is not capable of meeting the needs of a growing economy and its population. Mudras claims that corporate growth and investments can be hampered if the government fails to close the e-infrastructure deficit.
To attract otherwise muted private sector investments, Mudras claims, Sustained policy improvements, confidence in the sustainability of economic growth, and infrastructure development are essential.
Does the Security and Surveillance Sector Need Improvement In India?
Mudras believes that ongoing threats to homeland security, rising urbanisation, proliferating crimes against women and low people-to-police ratio are major factors that underline the need to augment safety and security in the country. This (the demand) has opened up a huge market for leading players in the security and surveillance industry, with global revenue spends on security hardware, software and services projected to reach $103 billion this year, says Mudras.
The 13th edition of IFSEC 2019 will see participation from over 15 countries such as China, Taiwan, South Korea, Malaysia, Lithuania, Czech Republic, UK, Russia, US and Japan to name a few. It will bring together over 300 domestic and globally renowned brands, key government officials, consultants and business experts.
Talking about the exhibition, Mudras says, The Indian security market is experiencing unprecedented boom due to huge demand. The growing awareness in the retail and enterprise segment is giving security solutions a cult status. A new phase of the consolidation process is on in the Indian security market.
The past decade, and particularly the past few years, has been transformative for artificial intelligence, not so much in terms of what we can do with this technology as what we are doing with it. Some date the advent of this era to 2007, with the introduction of smartphones. At its most essential, intelligence is just intelligence, whether artifact or animal. It is a form of computation, and as such, a transformation of information. The cornucopia of deeply personal information that resulted from the willful tethering of a huge portion of society to the internet has allowed us to pass immense explicit and implicit knowledge from human culture, via human brains, into digital form. Here we can not only use it to operate with human-like competence but also produce further knowledge and behavior by means of machine-based computation.
Joanna J. Bryson is an associate professor of computer science at the University of Bath.
For decades (even prior to the inception of the term), AI has aroused both fear and excitement as humanity contemplates creating machines in our image. This expectation that intelligent artifacts should by necessity be human-like artifacts blinded most of us to the important fact that we have been achieving AI for some time. While breakthroughs in surpassing human ability at human pursuits, such as chess, make headlines, AI has been a standard part of the industrial repertoire since at least the 1980s, when production-rule or expert systems became a standard technology for checking circuit boards and detecting credit card fraud. Similarly, machine-learning (ML) strategies like genetic algorithms have long been used for intractable computational problems, such as scheduling, and neural networks have been used not only to model and understand human learning, but also for basic industrial control and monitoring.
In the 1990s, probabilistic and Bayesian methods revolutionized ML and opened the door to some of the most pervasive AI technologies now available: searching through massive troves of data. This search capacity included the ability to do semantic analysis of raw text, astonishingly enabling web users to find the documents they seek out of trillions of webpages just by typing a few words.
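The core intuition behind that search capacity can be sketched in miniature: score each document by the query terms it contains, weighting rarer terms more heavily (the idea underlying TF-IDF). Production engines layer link analysis and semantic models on top; the documents and query below are invented.

```python
# Toy keyword search: rank documents by query-term counts weighted by
# inverse document frequency, so rare terms count more than common ones.

import math

docs = {
    "d1": "bayesian methods for machine learning",
    "d2": "cooking methods for pasta",
    "d3": "bayesian inference and probabilistic machine learning models",
}

def idf(term):
    # Rarer terms (appearing in fewer documents) get a higher weight.
    n = sum(term in text.split() for text in docs.values())
    return math.log(len(docs) / n) if n else 0.0

def score(query, text):
    words = text.split()
    return sum(words.count(t) * idf(t) for t in query.split())

query = "probabilistic machine learning"
ranked = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)
print(ranked[0])  # d3 -- the only document containing all three query terms
```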
AI is core to some of the most successful companies in history in terms of market capitalization: Apple, Alphabet, Microsoft, and Amazon. Along with information and communication technology (ICT) more generally, AI has revolutionized the ease with which people from all over the world can access knowledge, credit, and other benefits of contemporary global society. Such access has helped lead to a massive reduction of global inequality and extreme poverty, for example by allowing farmers to know fair prices and the best crops, and by giving them access to accurate weather predictions.
Having said this, academics, technologists, and the general public have raised a number of concerns that may indicate a need for down-regulation or constraint. As Brad Smith, the president of Microsoft, recently asserted: Information technology raises issues that go to the heart of fundamental human-rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses.
Artificial intelligence is already changing society at a faster pace than we realize, but at the same time it is not as novel or unique in human experience as we are often led to imagine. Other artifactual entities, such as language and writing, corporations and governments, telecommunications and oil, have previously extended our capacities, altered our economies, and disrupted our social order, generally though not universally for the better. Ironically, the well-evidenced assumption that we are on average better off for our progress is perhaps the greatest hurdle to confronting the challenges we now face: sustainable living and reversing the collapse of biodiversity.
AI and ICT more generally may well require radical innovations in the way we govern, and particularly in the way we raise revenue for redistribution. We are faced with transnational wealth transfers through business innovations that have outstripped our capacity to measure or even identify the level of income generated. Further, this new currency of unknowable value is often personal data, and personal data gives those who hold it the immense power of prediction over the individuals it references.
But beyond the economic and governance challenges, we need to remember that AI first and foremost extends and enhances what it means to be human, and in particular our problem-solving capacities. Given ongoing global challenges such as security, sustainability, and reversing the collapse of biodiversity, such enhancements promise to continue to be of significant benefit, assuming we can establish good mechanisms for their regulation. Through a sensible portfolio of regulatory policies and agencies, we should continue to expand, and also to limit as appropriate, the scope of potential AI applications.
Posted: at 8:44 pm
The "zoom and enhance" trope is a TV cliché, but advances in AI are slowly making it a reality. Researchers have shown that machine learning can enlarge low-resolution images, restoring sharpness that wasn't there before. Now this technology is making its way to consumers, with image editor Pixelmator among the first to offer such a feature.
The Photoshop competitor today announced what it calls ML Super Resolution for the $60 Pro version of its software: a function that the company says can scale an image up to three times its original resolution without image defects like pixelation or blurriness.
After our tests, we would say this claim needs a few caveats, but overall the performance of Pixelmator's super resolution feature is impressive.
Pixelation is smoothed away in a range of images, from illustration to photography to text. The results are better than those delivered by traditional upscaling algorithms, and although the process is not instantaneous (it took around eight seconds per image on our 2017 MacBook Pro), it's fast enough to be a boon to designers and image editors of all stripes. Pixelmator provides example comparisons, with a zoomed-in low-resolution image on the left and the processed ML Super Resolution image on the right.
You can see more images over on Pixelmator's blog, including comparisons with traditional upscaling techniques like the bilinear, Lanczos, and nearest-neighbor algorithms. While ML Super Resolution isn't a magic wand, it does deliver consistently impressive results.
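For context, the traditional techniques mentioned here are simple interpolation rules rather than learned models. A minimal sketch of nearest-neighbor and bilinear upscaling on a grayscale image (represented as a list of pixel rows; the function names are our own) might look like:

```python
def upscale_nearest(img, factor):
    """Enlarge by repeating each source pixel factor times in each direction."""
    h, w = len(img), len(img[0])
    return [[img[y // factor][x // factor]
             for x in range(w * factor)]
            for y in range(h * factor)]

def upscale_bilinear(img, factor):
    """Enlarge by linearly blending the four nearest source pixels."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h * factor):
        # Map the output coordinate back onto the source grid.
        sy = min(y / factor, h - 1)
        y0, fy = int(sy), min(y / factor, h - 1) - int(sy)
        y1 = min(y0 + 1, h - 1)
        row = []
        for x in range(w * factor):
            sx = min(x / factor, w - 1)
            x0, fx = int(sx), sx - int(sx)
            x1 = min(x0 + 1, w - 1)
            # Blend horizontally on the two bracketing rows, then vertically.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Nearest-neighbor preserves hard edges but produces blocky pixelation, while bilinear trades the blockiness for blur; Lanczos uses a wider windowed-sinc kernel for sharper results. None of these can invent plausible detail, which is the gap ML-based upscaling aims to fill.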
Research into super resolution has been ongoing for some time now, with tech companies like Google and Nvidia creating their own algorithms in the past few years. In each case, the software is trained on a dataset containing pairs of low-resolution and high-resolution images. The algorithm compares this data and creates rules for how the pixels change from image to image. Then, when it's shown a low-resolution picture it has never seen before, it predicts what extra pixels are needed and inserts them.
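The training pairs described above are typically manufactured rather than collected: each high-resolution original is degraded to produce its own low-resolution input. A simple sketch of that pairing step (block-averaging as the degradation; real pipelines use proper resampling kernels, and the helper names here are our own):

```python
def downsample(img, factor=2):
    """Shrink a grayscale image by averaging each factor x factor block."""
    h, w = len(img), len(img[0])
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w // factor)]
            for y in range(h // factor)]

def make_training_pairs(high_res_images, factor=2):
    """Pair each degraded input with its original as the prediction target."""
    return [(downsample(img, factor), img) for img in high_res_images]
```

A model trained on such pairs learns to map the degraded input back toward the original, which is why it can hallucinate plausible detail for unseen low-resolution images.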
Pixelmator's creators told The Verge that their algorithm was made from scratch in order to be lightweight enough to run on users' devices. It's just 5MB in size, compared to research algorithms that are often 50 times larger. It's trained on a range of images in order to anticipate users' different needs, but the training dataset is surprisingly small: just 15,000 samples were needed to create Pixelmator's ML Super Resolution tool.
The company isn't the first to offer this technology commercially. There are a number of single-use super resolution tools online, including BigJPG.com and LetsEnhance.io. In our tests, the output from these sites was of more mixed quality than Pixelmator's (though it was generally good), and free users can only process a small number of images. Adobe has also released a super resolution feature, but the results are, again, less dramatic.
Overall, Pixelmator seems to be offering the best commercial super resolution tool we've seen (let us know in the comments if you know of a better one), and every day, "zoom and enhance" becomes less of a joke.
Correction: An earlier version of this story included comparisons between images that had been non-destructively downsized and then upscaled using Pixelmator's ML Super Resolution, resulting in unrealistically improved results. These have been removed. We regret the error.