Around the Region: Ponte Vedra Library's Forum series presents "The Maritime Zone" and more

BEACHES Friend of the Beaches Award nomination

Beaches Watch is seeking nominations by 5 p.m. Friday, Feb. 13, for its annual Friend of the Beaches Award. Nominations may be submitted to info@beacheswatch.com or mailed to Beaches Watch Inc., P.O. Box 50311, Jacksonville Beach, FL 32240. For more, beacheswatch.com.

BEACHES Ritz Chamber Players concert

The Ritz Chamber Players will perform at 6:30 p.m. Tuesday, Feb. 17, at the Beaches Chapel at Beaches Museum and History Park, 381 Beach Blvd., Jacksonville Beach. For more, beachesmuseum.org.

CLAY Junior high student wins county spelling bee

Wilkinson Junior High School eighth grader Joshua Brown, 14, won the Clay County Spelling Bee with the word "roughhewn" in the 18th round. Brown and the other area county bee winners will compete Saturday, Feb. 21, in the Florida Times-Union Regional Spelling Bee at the Jacksonville Main Public Library. The winner advances to May's Scripps National Spelling Bee in Washington, D.C.

CLAY Celebrate Clay nominations

The Paul E. and Klare N. Reinhold Foundation Inc. is accepting applications through Friday, Feb. 6, for the 2015 Celebrate Clay awards, recognizing individuals and organizations for their community service. For more, reinhold.net.

FERNANDINA BEACH City commission holds Wednesday workshop

The City Commission holds a workshop from 9 a.m.-5 p.m. on Wednesday, Feb. 4, at the Fernandina Beach Golf Course Clubhouse, 2800 Bill Melton Road. The workshop will include a year-in-review report from the city manager, a presentation on the city's proposed accounting software upgrade and a goal-setting exercise. The public is invited.

Beaches resort start delayed

Chairman of Sandals Resorts International Gordon "Butch" Stewart. Below, one of the pools at Sandals Barbados. (Pictures by Lennox Devonish.)

Sandals Resorts International's proposed Beaches hotel in St Peter will not be starting this year.

Fresh from opening the new Sandals Barbados Resort & Spa, hotel magnate Gordon "Butch" Stewart said the "complex project" was still at the design stage. This means work is likely to start next year.

But speaking to the media following a tour of the refurbished property at Dover, Christ Church, Stewart said planning for the northern property was proceeding "at great speed" and that the new Sandals would be a major boost for Barbados, especially in the traditionally slow summer months.

At the same time, the veteran hotelier called suggestions that his company wanted a private beach in Barbados "the most unfair accusation I have ever had".

Stewart said Sandals was now 80 per cent occupied, and that "if we end up at 80 per cent for the rest of the year I would be very unhappy".

He also praised the quality of staff the company recruited in Barbados, and thanked residents in districts neighbouring Sandals for their patience and understanding while the hotel was being refurbished. (SC)

Astronomy – Ch. 4: History of Astronomy (4 of 16) Ancient Structures: Ajanta Caves, India – Video

Visit http://ilectureonline.com for more math and science lectures! In this video I will explain how the Ajanta Caves in India honor the summer and winter s...

By: Michel van Biezen

Astronomy – Ch. 4: History of Astronomy (5 of 16) Ancient Structures: Chaco Canyon, New Mexico – Video

Visit http://ilectureonline.com for more math and science lectures! In this video I will explain how the Pueblo Indians observed the summer and winter solsti...

By: Michel van Biezen

Filmmaker James Barrat discusses the future of artificial intelligence – Video

CCTV America's Elaine Reyes interviewed James Barrat, documentary filmmaker and author of "Our Final Invention: Artificial Intelligence and the End of the...

By: CCTV America

How Science Leads to Altruism, and Why The Robot Apocalypse Will Be a Good Thing – Video

Currently, what is best for humans is also what is best for science, but what happens when artificial intelligence begins to exceed biological intelligence? In this video, I contend that the...

By: Mike Gashler

Artificial general intelligence – Wikipedia, the free …

Artificial general intelligence (AGI) is the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as "strong AI",[1] "full AI"[2] or as the ability to perform "general intelligent action".[3]

Some references emphasize a distinction between strong AI and "applied AI"[4] (also called "narrow AI"[1] or "weak AI"[5]): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.

Many different definitions of intelligence have been proposed (such as being able to pass the Turing test) but there is to date no definition that satisfies everyone.[6] However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following:[7]

Other important capabilities include the ability to sense (e.g. see) and the ability to act (e.g. move and manipulate objects) in the world where intelligent behaviour is to be observed.[8] This would include an ability to detect and respond to hazard.[9] Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in)[10] and autonomy.[11] Computer-based systems that exhibit many of these capabilities do exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent), but not yet at human levels.

Scientists have varying ideas of what kinds of tests a machine would need to pass to satisfy an operational definition of artificial general intelligence. These scientists include the late Alan Turing, Ben Goertzel, and Nils Nilsson. A few of the tests they have proposed are:

1. The Turing Test (Turing)

2. The Coffee Test (Goertzel)

3. The Robot College Student Test (Goertzel)

4. The Employment Test (Nilsson)

These are a few tests that cover a variety of qualities that a machine might need to have to be considered AGI, including the ability to reason and learn, as well as being conscious and self-aware.[12]

The Upside of Artificial Intelligence Development | WIRED

In "Practical Artificial Intelligence Is Already Changing the World," I promised to write a follow-on article that discussed why Kevin Kelly (@kevin2kelly), the founding executive editor of Wired magazine, and Irving Wladawsky-Berger, a former IBM employee and strategic advisor to Citigroup, are optimistic about the future of artificial intelligence (AI). In that article I noted that some pundits believe that AI poses a grave threat to humanity while other pundits believe that AI systems are going to be tools that humans can use to improve conditions around them. I also wrote that it would be foolish to predict which school of thought is correct this early in the game.

In the near-term, however, I predicted that those who believe that AI systems are tools to be used by humans are going to be proven correct. Irving Wladawsky-Berger is firmly in that camp and he believes that Kevin Kelly is as well. "What should we expect from this new generation of AI machines and applications?" asks Wladawsky-Berger. "Are they basically the next generation of sophisticated tools enhancing our human capabilities, as was previously the case with electricity, cars, airplanes, computers and the Internet? Or are they radically different from our previous tools because they embody something as fundamentally human as intelligence? Kevin Kelly, as am I, is firmly in the AI-as-a-tool camp." ["The Future of AI: An Ubiquitous, Invisible, Smart Utility," The Wall Street Journal, 21 November 2014]

Wladawsky-Berger bases his conclusion about Kevin Kelly's beliefs about artificial intelligence (AI) on what Kelly wrote in an article in Wired magazine. ["The Three Breakthroughs That Have Finally Unleashed AI on the World," Wired, 27 October 2014] In that article, Kelly writes about IBM's Watson system, how it is transforming as it learns, and all of the good things that cognitive computing systems can do now and will do in the future. He continues:

Amid all this activity, a picture of our AI future is coming into view, and it is not the HAL 9000, a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness, or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services: cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it's here.

Unlike the dire warnings that have filled news outlets over the past year, Kelly's view of the future of AI is not only optimistic, it's almost joyous. Wladawsky-Berger and Kelly are not alone in their optimism about AI's future. Timothy B. Lee, senior editor at @voxdotcom, also believes that the upside of artificial intelligence will far outweigh the risks of developing it further. ["Will artificial intelligence destroy humanity? Here are 5 reasons not to worry." Vox, 15 January 2015] Lee believes the naysayers "overestimate the likelihood that we'll have computers as smart as human beings and exaggerate the danger that such computers would pose to the human race. In reality, the development of intelligent machines is likely to be a slow and gradual process, and computers with superhuman intelligence, if they ever exist, will need us at least as much as we need them." Even though Kelly is optimistic about the future of AI, he doesn't dismiss the cautions being raised about how it's developed. He writes, "As AIs develop, we might have to engineer ways to prevent consciousness in them; our most premium AI services will be advertised as consciousness-free." Kelly's big concern about AI's future is who will control the systems we use. He explains:

Cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing obeys the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger, and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people use it. The more people that use it, the smarter it gets. Once a company enters this virtuous cycle, it tends to grow so big, so fast, that it overwhelms any upstart competitors. As a result, our AI future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
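
Kelly's virtuous cycle can be made concrete with a toy simulation. The growth rates below are invented for illustration; neither Kelly nor the article gives an actual model:

```python
# Toy model of the network effect Kelly describes: more users make the AI
# smarter, and a smarter AI attracts more users. The "attract" and "learn"
# coefficients are hypothetical, chosen only to show the feedback loop.

def simulate(users, smartness, rounds, attract=0.001, learn=0.0001):
    for _ in range(rounds):
        users += attract * smartness * users   # smarter AI draws new users
        smartness += learn * users             # more users improve the AI
    return users, smartness

u, s = simulate(users=1000.0, smartness=100.0, rounds=50)
print(u, s)  # both grow super-linearly as each quantity feeds the other
```

Because each variable multiplies the other's growth, adoption accelerates round over round, which is why Kelly expects early leaders to overwhelm upstart competitors.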

That concern aside, Kelly believes that AI will help make humans smarter and more effective. He notes, for example, that AI chess programs have helped make human chess players much better. He adds, "If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers." In other words, Kelly sees AI as a tool that can help mankind get better, not a threat that is going to destroy mankind. He continues:

Most of the commercial work completed by AI will be done by special-purpose, narrowly focused software brains that can, for example, translate any language into any other language, but do little else. Drive a car, but not converse. Or recall every pixel of every video on YouTube but not anticipate your work routines. In the next 10 years, 99 percent of the artificial intelligence that you will interact with, directly or indirectly, will be nerdily autistic, supersmart specialists. In fact, this won't really be intelligence, at least not as we've come to think of it. Indeed, intelligence may be a liability, especially if by "intelligence" we mean our peculiar self-awareness, all our frantic loops of introspection and messy currents of self-consciousness.

I agree with that assessment. Derrick Harris, a senior writer at Gigaom, asserts that "the fact of the matter is that artificial intelligence (at least the narrow kind) is here, is real, and is getting better." ["Artificial intelligence is real now, and it's just getting started," Gigaom, 9 January 2015] He explains:

Programming safety into self-driving cars: Better AI algorithms for semi-autonomous vehicles

For decades, researchers in artificial intelligence, or AI, worked on specialized problems, developing theoretical concepts and workable algorithms for various aspects of the field. Computer vision, planning and reasoning experts all struggled independently in areas that many thought would be easy to solve, but which proved incredibly difficult.

However, in recent years, as the individual aspects of artificial intelligence matured, researchers began bringing the pieces together, leading to amazing displays of high-level intelligence: from IBM's Watson to the recent poker-playing champion to the ability of AI to recognize cats on the internet.

These advances were on display this week at the 29th conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Austin, Texas, where interdisciplinary and applied research were prevalent, according to Shlomo Zilberstein, the conference committee chair and co-author on three papers at the conference.

Zilberstein studies the way artificial agents plan their future actions, particularly when working semi-autonomously--that is to say in conjunction with people or other devices.

Examples of semi-autonomous systems include co-robots working with humans in manufacturing, search-and-rescue robots that can be managed by humans working remotely and "driverless" cars. It is the latter topic that has particularly piqued Zilberstein's interest in recent years.

The marketing campaigns of leading auto manufacturers have presented a vision of the future where the passenger (formerly known as the driver) can check his or her email, chat with friends or even sleep while shuttling between home and the office. Some prototype vehicles included seats that swivel back to create an interior living room, or as in the case of Google's driverless car, a design with no steering wheel or brakes.

Except in rare cases, it's not clear to Zilberstein that this vision for the vehicles of the near future is a realistic one.

"In many areas, there are lots of barriers to full autonomy," Zilberstein said. "These barriers are not only technological, but also relate to legal and ethical issues and economic concerns."

In his talk at the "Blue Sky" session at AAAI, Zilberstein argued that in many areas, including driving, we will go through a long period where humans act as co-pilots or supervisors, passing off responsibility to the vehicle when possible and taking the wheel when the driving gets tricky, before the technology reaches full autonomy (if it ever does).

In such a scenario, the car would need to communicate with drivers to alert them when they need to take over control. In cases where the driver is non-responsive, the car must be able to autonomously make the decision to safely move to the side of the road and stop.
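
The handoff behavior described here can be sketched as a small state machine. The state names and the timeout value are illustrative assumptions, not taken from Zilberstein's work:

```python
# Sketch of the takeover logic described above: the car alerts the driver
# when conditions get tricky, waits for a response, and pulls over safely
# if the driver is non-responsive. States and timeout are hypothetical.

AUTONOMOUS, ALERTING, MANUAL, PULLING_OVER = (
    "autonomous", "alerting", "manual", "pulling_over")

def next_state(state, tricky_ahead, driver_responded, seconds_alerting,
               timeout=10):
    if state == AUTONOMOUS and tricky_ahead:
        return ALERTING               # ask the driver to take over
    if state == ALERTING:
        if driver_responded:
            return MANUAL             # driver takes the wheel
        if seconds_alerting >= timeout:
            return PULLING_OVER       # non-responsive: stop at the roadside
    return state

print(next_state(AUTONOMOUS, True, False, 0))    # alerting
print(next_state(ALERTING, True, False, 12))     # pulling_over
```

The point of the sketch is that the safety-critical branch (pulling over) never depends on the human responding, which is exactly the requirement the paragraph above describes.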

Beyond AI: artificial compassion

When we talk about artificial intelligence, we often make an unexamined assumption: that intelligence, understood as rational thought, is the same thing as mind. We use metaphors like "the brain's operating system" or "thinking machines," without always noticing their implicit bias.

But if what we are trying to build is artificial minds, we need only look at a map of the brain to see that in the domain we're tackling, intelligence might be the smaller, easier part.

Maybe that's why we started with it.

After all, the rational part of our brain is a relatively recent add-on. Setting aside unconscious processes, most of our gray matter is devoted not to thinking, but to feeling.

There was a time when we deprecated this larger part of the mind, as something we should either ignore or, if it got unruly, control.

But now we understand that, as troublesome as they may sometimes be, emotions are essential to being fully conscious. For one thing, as neurologist Antonio Damasio has demonstrated, we need them in order to make decisions. A certain kind of brain damage leaves the intellect unharmed, but removes the emotions. People with this affliction tend to analyze options endlessly, never settling on a final choice.

But that's far from all: feelings condition our ability to do just about anything. Like an engine needs fuel or a computer needs electricity, humans need love, respect, a sense of purpose.

Consider that feeling unloved can cause crippling depression. Feeling disrespected is a leading trigger of anger or even violence. And one of the toughest forms of punishment is being made to feel lonely, through solitary confinement; too much of it can cause people to go insane.

All this by way of saying that while we're working on AI, we need to remember to include AC: artificial compassion.

To some extent, we already do. Consider the recommendation engine, as deployed by Amazon.com, Pandora, and others ("If you like this, you might like that"). It's easy to see it as an intelligence feature, simplifying our searches. But it's also a compassion feature: if you feel a recommendation engine "gets" you, you're likely to bond with it, which may be irrational, but it's no less valuable for that.
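
A recommendation engine of the "if you like this, you might like that" kind can be sketched with simple item co-occurrence counts. This is a minimal illustration, not the method Amazon or Pandora actually use:

```python
from collections import Counter
from itertools import combinations

# Minimal "if you like this, you might like that" engine: items that often
# appear together in users' histories get recommended to each other's fans.
# The listening histories below are invented sample data.

histories = [
    ["jazz", "blues", "soul"],
    ["jazz", "blues"],
    ["rock", "metal"],
]

co_counts = Counter()
for items in histories:
    for a, b in combinations(sorted(set(items)), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(liked, k=2):
    # Rank other items by how often they co-occur with the liked item.
    scores = Counter()
    for (a, b), n in co_counts.items():
        if a == liked and b != liked:
            scores[b] += n
    return [item for item, _ in scores.most_common(k)]

print(recommend("jazz"))  # ['blues', 'soul']
```

Even a scheme this crude produces the "it gets me" effect the author describes, because the suggestions reflect the tastes of people whose histories overlap with yours.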

Ford drives scheduling with artificial intelligence

At Ford Motor Co., managers were struggling to work out a schedule for the growing number of people in a three-year program for new hires fresh out of college.

They were caught in a quagmire of employee requests, the need for rotational assignments and a growing number of participants and jobs. Even with a group of people working on the schedule, the project was taking too much time and effort -- and getting worse.

For Ford, the answer came from a new hire in the very program that needed fixing: artificial intelligence.

While many people think of artificial intelligence, or AI, as something out of a sci-fi movie that has robots waging war on people, AI can be the answer for complex and weighty problems like scheduling.

"It was [a problem] that was taking away time from people who didn't have time," said Leonard Kinnaird-Heether, an AI researcher at Ford who built the program. "There was a way to solve this. AI was a good idea for this because this problem represents a core function that AI can take care of.... We developed a tool to automate it so we can give that time back."

Stephen Smith, a research professor focused on AI at Carnegie Mellon University, said this kind of scheduling issue is a classic problem for AI to handle, and Ford was smart to use it.

"We need to rethink AI," he said. "It can be a powerful amplifier of human decision making without taking over the decision making.... It's a time saver.... In the area of planning and scheduling, that's one of the advantages that the AI model brings -- this flexibility, this tool that you can adapt to solve different problems pretty quickly."
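
The article does not describe the algorithm Kinnaird-Heether built. As a purely hypothetical sketch of the problem's shape, a greedy pass that honors ranked preferences within job capacities might look like this:

```python
# Hypothetical sketch of the rotational-assignment problem described above:
# give each participant their highest-ranked job that still has an open slot.
# Ford's actual tool is not public; all names and logic here are illustrative.

def assign(preferences, slots_per_job):
    # preferences: {person: [jobs ranked best-first]}
    # slots_per_job: {job: number of open slots}
    remaining = dict(slots_per_job)
    schedule = {}
    for person, ranked in preferences.items():
        for job in ranked:
            if remaining.get(job, 0) > 0:
                schedule[person] = job    # first open choice wins
                remaining[job] -= 1
                break
    return schedule

prefs = {"ann": ["web", "data"], "bo": ["web", "ops"], "cy": ["data", "ops"]}
print(assign(prefs, {"web": 1, "data": 1, "ops": 1}))
# {'ann': 'web', 'bo': 'ops', 'cy': 'data'}
```

A real scheduler would also handle rotations over time and unsatisfiable requests, which is where search-based AI techniques earn their keep over a single greedy pass.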

Getting the grads

In the 1960s, the automaker created what is known as the Ford College Graduate Program to help recent college grads acclimate to the corporate environment at Ford. Most organizations inside the company -- like IT -- have their own implementation of the program.
