Rocket Fuel President Richard Frankel to Speak at J.D. Power Automotive Marketing Roundtable

REDWOOD CITY, CA--(Marketwire - Oct 22, 2012) - Rocket Fuel, the leading provider of artificial intelligence advertising solutions for digital marketers, today announced that company President Richard Frankel will speak at the upcoming J.D. Power Automotive Marketing Roundtable, the premier event for global automotive brand marketers set to take place October 23-25, 2012 in Las Vegas.

About Rocket Fuel Inc.:

Rocket Fuel is the leading provider of artificial intelligence advertising solutions that transform digital media campaigns into self-optimizing engines that learn and adapt in real time, delivering outstanding results from awareness to sales. Ranked #22 on Forbes' list of America's Most Promising Companies, Rocket Fuel is trusted by over 700 of the world's most successful marketers to power their advertising across display, video, mobile, and social media. Founded by online advertising veterans and rocket scientists from NASA, DoubleClick, IBM, and Salesforce.com, Rocket Fuel is based in Redwood Shores, California, and has offices in fifteen cities worldwide, including New York, London, Toronto, and Hamburg.

Rocket Fuel Enters Japanese Market via Strategic Alliance With cyber communications inc., a Subsidiary of Dentsu

REDWOOD CITY, CA--(Marketwire - Oct 18, 2012) - Rocket Fuel, the leading provider of artificial intelligence advertising solutions for digital marketers, today announced it will expand into Japan via a strategic alliance with cyber communications inc. (cci), Japan's largest digital marketing company and a wholly-owned subsidiary of Dentsu. Fueled by explosive demand for its artificial intelligence advertising technology among global brands and agencies, Rocket Fuel already has offices in 15 cities worldwide including New York, London, Toronto, and Hamburg.

About cci: cci is a full-service interactive marketing company headquartered in Japan, now expanding into other Asian markets with a branch in Singapore. We are committed to helping publishers, advertising agencies, and clients gain access to a full range of digital communication while remaining on the leading edge of the digital frontier. We serve more than 500 advertising agencies and more than 500 publishers in Japan.

By planning and marketing advertising products and services for cutting-edge media and devices, we have continued to offer "one-stop" marketing services in the interactive domain, based on our sophisticated technology: an ad network and ad exchange for publishers, and an ad platform for advertising agencies and advertisers.

For more information, please visit: http://www.cci.co.jp/en/overview/

MEDIA ALERT: Peter Bardwick of Rocket Fuel to Participate in AGC Partners' Boston 2012 Conference on Wednesday October …

REDWOOD SHORES, CA--(Marketwire - Oct 16, 2012) - Rocket Fuel, the leading provider of artificial intelligence advertising solutions for digital marketers, today announced that CFO Peter Bardwick will join a panel at AGC Partners' 9th Annual East Coast Conference on October 16-17 at the Westin Copley Place in Boston, Massachusetts.

© 2012 Rocket Fuel Inc. All rights reserved. Rocket Fuel is a registered trademark of Rocket Fuel Inc. in the U.S. and/or other countries. All other trademarks are the property of their respective owners.

Paging Dr. Watson: Artificial Intelligence As a Prescription for Health Care

Everyone agrees health care in the United States is a colossal mess, and IBM is betting that artificially intelligent supercomputers are just what the doctor ordered. But some health professionals say robodoctors are just flashy toys.

Such are the deep questions raised by the medical incarnation of Watson, the language-processing, information-hunting AI that debuted in 2011 on the quiz show Jeopardy!, annihilating the best human player ever and inspiring geek dreams of where its awesome computational power might be focused next.

IBM has promised a Watson that will in microseconds trawl the world's medical knowledge and advise doctors. It sounds great in principle, but the project hasn't yet produced peer-reviewed clinical results, and the journey from laboratory to bedside is long. Still, some doctors say Watson will be fantastically useful.

"It's not humanly possible to practice the best possible medicine. We need machines," said Herbert Chase, a professor of clinical medicine at Columbia University and member of IBM's Watson Healthcare Advisory Board. "A machine like that, with massively parallel processing, is like 500,000 of me sitting at Google and Pubmed, trying to find the right information."

Others, including physician Mark Graber, a former chief of the Veterans Administration hospital in Northport, New York, are less enthused. "Doctors have enough knowledge," said Graber, who now heads the Society to Improve Diagnosis in Medicine. "In medicine, that's not the problem we face."

Chase and Graber embody the essential tensions of applying Watson to healthcare, even if the machine is inarguably a wonder of artificial intelligence. Winning Jeopardy! might seem like a trivial, so to speak, accomplishment, but it was an enormous computational achievement.

Watson wasn't programmed with the information it needed, but given the cognitive tools necessary to acquire the knowledge itself, teasing out answers to complicated questions from vast amounts of electronic information. And it did this not in response to computer-language queries posed through an arcane interface, but with everyday conversational English.

After all, doctors make mistakes. Lots of mistakes. Enough to kill about 200,000 Americans annually. Experts put misdiagnosis rates around 10 percent, a number that varies widely by condition but in some situations, such as complicated cancers, goes far higher. Watson's programmers say the machine might prevent many of those mistakes. It would constantly be updated with the latest medical knowledge, bringing to every doctor insights that often take years to filter out of academia, and merging those insights with each patient's own data.

"We have all these different dimensions of data about an individual. How do we match the different characteristics they have, personal and medical, with a set of knowledge, of information, that is going to define what the best thing for them to do is?" said Basit Chaudhry, lead research clinician for Watson, at the Wired Health Conference on Oct. 16.

BioShock Infinite devs jump ship to Microsoft

Following reports in August that two of Irrational Games' top developers--director of product development Tim Gerritsen and 13-year studio veteran and former art director Nate Wells--had left the company, it now appears that the BioShock Infinite studio may have lost another two of its team members.

According to Superannuation and Kotaku, Irrational Games' combat design director Clint Bundrick and artificial intelligence lead Don Norbury both left the studio earlier this month to join Microsoft, as detailed by their individual LinkedIn accounts (here and here).

Following the departure of Gerritsen and Wells in August, Irrational announced that the original BioShock art director, Scott Sinclair, would fill Wells' role amid reports of multiplayer cancellations. Irrational later brought in Epic Games production director Rod Fergusson to help complete the game.

BioShock Infinite was announced in 2010 and was originally scheduled to launch this month. In the run-up to the 2012 Electronic Entertainment Expo--at which it was not present--it was announced that the game would be delayed to February 2013.

When BioShock Infinite does ship, it will do so with great sales expectations. In August 2011, one analyst suggested that the game would be a significant financial boon for Take-Two, saying that it could ship 4.9 million copies.

BioShock Infinite is set in a chaos-plagued airborne metropolis called Columbia. Gamers assume the role of Booker DeWitt, a former member of the feared Pinkerton National Detective Agency, which was the nation's largest security company in the late 19th century.

For more on BioShock Infinite, check out GameSpot's previous coverage.

Vigilent Optimizes Data Center Uptime with Next Generation of Dynamic Cooling Control

EL CERRITO, Calif.--(BUSINESS WIRE)--

Vigilent, the leader in intelligent energy management systems, advanced the state of dynamic cooling control today with the latest release of its sophisticated cooling platform for data centers, telcos and buildings. The new release, Version 5, expands automatic cooling control with a new generation of artificial intelligence-based technology that contributes to risk mitigation, delivers even greater energy savings, and adds extensive, automated reporting and visualization capabilities.

Built on a foundation of self-learning, patented artificial intelligence, the Vigilent system uses Intelligent Analytics technology to enhance the resiliency of a facility's cooling infrastructure while optimizing it for maximum energy efficiency, minute-by-minute. Fortune 500 companies worldwide are incorporating Vigilent systems as an essential component of their Data Center Infrastructure Management (DCIM) strategy. The new release's expanded control and operational insights directly reduce cooling costs and offer extended ride-through and warning time should any unusual events occur, essential to protecting uptime and availability.

"The size and complexity of data centers have grown beyond the capacity of humans to effectively manage them for maximum uptime, safety, and efficiency," said Christopher Kryzan, Vice President of Marketing for Vigilent. "Even the newest, most energy-efficient data centers can only operate at maximum cooling efficiency until something changes. The Vigilent system automatically and intuitively responds to changes in cooling requirements, predicts potential issues, and acts to mitigate risk, giving operators much-needed warning and time to respond to larger risk events."

Enhancements to the Vigilent artificial intelligence engine and resulting controls are based on the year-over-year acquired knowledge and best practices of millions of square feet of data centers already controlled by Vigilent systems. Users gain more granular monitoring and control of all cooling resources, and an improved user interface for comprehensive and instantaneous visibility into cooling and airflow status. The advantage of such precise monitoring is that the amount of cooling delivered in a given moment is the actual amount of cooling required at that moment.

"By matching cooling capacity to actual IT load in real time, you not only reduce energy costs, you gain a better understanding of how much available cooling capacity you have at any given time," said Andrew Lawrence, Vice President of Research for Datacenter Technologies and Eco-Efficient IT at 451 Research. "This is a much more intelligent operation than just blasting cold air all the time; it is a dynamically changing system that produces a lot of data, which can be fed back to help improve availability as well as efficiency."

New features include: Advanced Cooling Controls, User-configurable Built-in Reports, and Enhanced Trending.

Futurist's Cheat Sheet: Artificial Intelligence

There is no more powerful concept in futurist writings than the notion of artificial intelligence. The ability of humans to create machine-based life that thinks on its own and acts on its own has the potential to make our lives dramatically better - or worse, depending on what kind of science fiction you read. But getting there won't be easy.

Artificial intelligence has long been a pipe dream of scientists and science fiction writers. In reality, though, we are nowhere near the practical application of artificial intelligence. True artificial intelligence implies a conscious machine with subjective experiences and thoughts: one that is self-aware, sentient (able to feel) and sapient (possessing the capacity for wisdom).

Apple's Siri voice-activated personal assistant and Google's search algorithms are examples of the current state of artificial intelligence. Neither acts on its own nor perceives intentions. You can have a conversation with Siri by interacting with a collection of pre-loaded answers, but there is no intelligence behind it. Siri merely uses a set of rules to select the most appropriate canned answer to your question.
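The rule-plus-canned-answer approach described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Siri's actual implementation; every rule and response below is invented for the example:

```python
# Toy rule-based "canned answer" selection: each rule pairs trigger
# keywords with a pre-written response, and the rule whose keywords
# overlap the question the most wins. No understanding is involved.

RULES = [
    ({"weather", "rain", "forecast"}, "Here's the forecast for today."),
    ({"call", "phone", "dial"}, "Whom would you like to call?"),
    ({"time", "clock"}, "Checking the clock for you now."),
]

FALLBACK = "Sorry, I don't understand."

def respond(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Score each rule by keyword overlap; pick the best match, if any.
    best_score, best_answer = 0, FALLBACK
    for keywords, answer in RULES:
        score = len(words & keywords)
        if score > best_score:
            best_score, best_answer = score, answer
    return best_answer

print(respond("will it rain today"))    # matches the weather rule
print(respond("explain your feelings"))  # no rule matches, so fallback
```

Anything outside the rule set falls straight through to the fallback, which is exactly the "no intelligence behind it" point.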

Siri and Google search are examples of what is called weak artificial intelligence - or machine intelligence not intended to match the capabilities of human beings. A weak AI engine could recognize characters, play chess or drive a car. But a machine performing intelligent actions is not necessarily acting intelligently. There is a difference between a smart machine (one that can take various inputs and act accordingly) and one that has its own cognitive capabilities. A smartphone can know many things about its surroundings, but does it know to call Mom when your fiance dumps you?

Strong AI lies on the other end of the spectrum. Strong AI presupposes that a machine can match or exceed the intelligence of a human. It can think on its own and perform intelligent calculations as well or better than a human could. Strong AI, as defined by engineering researchers and philosophers, does not currently exist. To find strong AI you need to turn to the science fiction realm of The Terminator, The Matrix or Isaac Asimov's I, Robot.

AI combines the theoretical with the philosophical before even getting into the nuts and bolts of how it can be achieved. How do you quantify the theoretical capabilities of a sentient computer when one does not yet exist? To even think about achieving artificial intelligence, one must first answer a very old and still very confusing question: exactly what is intelligence?

Humans consider themselves intelligent because they have the capacity to make sense of the world through a series of brain functions. The human mind integrates many different kinds of sensory information and performs computations to create assertions and judgments.

Take a look at the person closest to you. What do you see?

In your mind you see Dick or Jane - because your brain tells you that the person is Dick or Jane. What you are actually seeing is a variety of agents and individual components that your mind associates with Dick or Jane. Your brain makes instant, complicated computations that define what you see - and then more calculations to decide how to react to that object, perhaps to communicate with it. The neural network that is the human brain works in a complicated web to determine the world around it.

In the realm of artificial intelligence, the classic way to determine intelligent behavior is via the Turing test. Developed by early AI pioneer Alan Turing, the Turing test is designed to see if a machine's capability for intelligent behavior makes it indistinguishable from that of a human: If you were having a conversation with an entity behind a curtain, could you tell if it was a machine or a human?
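The blinded setup Turing proposed can be sketched as a toy simulation. Everything here is hypothetical: the "machine" is a trivial canned-reply bot and the "judge" is a single crude heuristic, but the structure (questions posed to a hidden respondent, a verdict from answers alone) mirrors the test's design:

```python
# A toy imitation game: the judge never sees who is behind the curtain,
# only the answers, and must label the respondent "machine" or "human".

def machine(question: str) -> str:
    # A trivial bot: the same reply no matter what is asked.
    return "That is an interesting question."

def human(question: str) -> str:
    # Stand-in for a person: answers vary with the question.
    return f"Hmm, about '{question}'? Let me think..."

def run_trial(judge_questions, respondent) -> str:
    answers = [respondent(q) for q in judge_questions]
    # Crude judge heuristic: identical answers to different questions
    # suggest a machine behind the curtain.
    return "machine" if len(set(answers)) == 1 else "human"

questions = ["What is love?", "Describe your childhood.", "Why is the sky blue?"]
print(run_trial(questions, machine))  # this bot is easily unmasked
print(run_trial(questions, human))
```

A bot this simple fails instantly; Turing's point is that a machine passing against a determined judge would be behaviorally indistinguishable from a person.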

World’s First Strong Artificial Intelligence Engine for Mobile Unveiled by Kimera Systems

PORTLAND, Ore.--(BUSINESS WIRE)--

Kimera Systems (www.kimerasystems.com) today unveiled the world's first strong Artificial Intelligence engine for devices. Kimera's technology goes beyond context awareness to give the network, devices and apps awareness of user intent and goals, delivering unprecedented improvement in user experience. Kimera has focused its technology on smartphones, and will soon make its software developer kit (SDK) available to app developers.

Weak AI, the only AI available today, was developed to solve a specific problem; the iPhone's Siri and Google's algorithms are examples. Strong AI, the technology Kimera is bringing to market, works without a specific problem to solve. Instead, it learns to understand the user and environment to discover problems and then solve them.

"Kimera makes smartphones smarter," said Mounir Shita, CEO of Kimera Systems. "Beyond context awareness, Kimera enables your smartphone to not only understand what you're doing but also why you're doing it, then helps it to adapt to your immediate needs. Though Kimera's technology sounds like science fiction, it's already built and working, and our SDK is almost ready for app developers to create smart software agents for your device to adapt to every situation. Kimera's AI service applies to your daily personal and work lives, vastly improving the efficiency in your day so you can focus on what's truly important."

Kimera's strong AI engine continuously models the world, providing any app or device anywhere on the Internet with useful intelligence that lets it holistically adapt to the individual. With Kimera, a smartphone not only knows which app a person needs and when to launch it, it can also adapt that app to specific situations, giving it new, personalized functionality.

Kimera's AI transforms how services are built and delivered to devices. It opens new revenue opportunities for handset manufacturers, wireless operators, developers, and other businesses. Kimera uses smart agents that interface and collaborate in real time with each other and other apps to achieve specific goals for you.

"Imagine you've entered a supermarket and the phone in your pocket knows why you're there and what you're shopping for," added Shita. "Several smart agents would be invoked, collaborate and present a custom service appropriate for just that day. It might launch a shopping app and create a grocery list specific to that supermarket that not only automatically includes the items you need, but takes into account your family's food allergies, handles prescription interactions, and adjusts for the guests you've invited to dinner. Kimera gives smartphones the capability to understand your intentions and be goal-aware. And trust me, this example only scratches the surface of what Kimera's technology can already do today."

Kickstarter Project

To bring its technology to app developers, Kimera also announced a Kickstarter project. Kickstarter is a funding platform for projects from films, games and music to art, design and technology. Since launching in 2009, over $350 million has been pledged by more than 2.5 million people, funding more than 30,000 projects.

Kimera seeks to raise $300,000 from people eager for a new revolution in mobile, where smart devices actually think. The Kickstarter project begins today and continues through November 18. For more on how you can be involved, visit http://kickstarter.kimerasystems.com/.

Google's Neural Networks Advance Artificial Intelligence [VIDEO]

Google has new inspiration for their software -- the human brain.

For months, the tech company has been developing "neural networking," a technique in which software learns patterns from data and applies them to other tasks, much as the neurons in the human brain do when learning something.

Now Google is ready for those networks to be used commercially.

The company has already successfully used neural networking data for computers to recognize cats in YouTube videos. The computer itself was able to decide which features of the videos -- patterns, colors etc. -- to give importance to and then identify what it thought was a feline.

Google's next step is using neural networks to advance speech recognition technology, especially in their Android devices. Much like Apple's Siri technology, the more people use voice control, the more the artificial intelligence software can gather data to make inferences on what people are saying in different situations.

In the future, Google hopes neural networks will develop to the point that their image search tool will be able to easily understand what is appearing in a photo without relying on the photo's surrounding text.

Google has developed software similar to what neuroscientists believe exists in the visual cortex of mammals, Yoshua Bengio, a professor at the University of Montreal, tells MIT's Technology Review.

"It turns out that the feature learning networks being used [by Google] are similar to the methods used by the brain that are able to discover objects that exist," says Bengio.

Readers Write: Can we teach robots to think ethically?

Letters to the Editor for the October 8, 2012 weekly print issue: When we create artificial intelligence, will we create artificial 'ethicators,' too? The potential for 'cognitive decision-making skills' in computers is both challenging and exciting.

Regarding the Sept. 17 cover story, "Man & Machine," on the development of artificial intelligence (AI): I don't wish to be an alarmist, but I'm glad we're still far from inventing self-reasoning machines. Humankind has a history of creating new technologies simply because they're possible, only thinking about their impact later. Ray Bradbury suggested that science fiction is the nursery of new possibilities for humanity. If so, it should also be considered a warning.

From Isaac Asimov's novel "I, Robot" to HAL in Stanley Kubrick's film "2001: A Space Odyssey," thinkers have long been asking: How can we be sure an artificial intelligence will be good? A machine has no moral sense or inner Jiminy Cricket to guide it. Will we create artificial "ethicators," too? If we can't even train dogs reliably, are we really capable of training machines with human-level reasoning?

AliCarmen Carico

Weed, Calif.

Extremely sophisticated, "smart" software could play a key role in reviving the US economy, just as highly capable computer-based systems may replace some human job functions. But this article doesn't really push to the most challenging frontier of AI.

Computer systems may develop to the point where they seem to possess cognitive decision-making skills and reach conclusions not foreseen by their creators. These themes have been touched on by "cyber prophets" like the computer pioneer Bill Joy, notably in his groundbreaking article "Why the future doesn't need us" (Wired Magazine, 2000).

One of the most vital aspects of this new world is the rapid proliferation of a vast variety of "networks" where "smart" machines and "smart" systems share information in an endless "ebb and flow." The flowing data are altered and improved in what some refer to as a kind of "collective intelligence." Our current Internet is a mild precursor of the potential involved in such a system.

Google Puts Its Virtual Brain Technology to Work

Platonic ideal: This composite image represents the ideal stimulus that Google's neural network recognizes as a cat face. Credit: Google

This summer Google set a new landmark in the field of artificial intelligence with software that learned how to recognize cats, people, and other things simply by watching YouTube videos (see "Self-Taught Software"). That technology, modeled on how brain cells operate, is now being put to work making Google's products smarter, with speech recognition being the first service to benefit.

Google's learning software is based on simulating groups of connected brain cells that communicate and influence one another. When such a neural network, as it's called, is exposed to data, the relationships between different neurons can change. That causes the network to develop the ability to react in certain ways to incoming data of a particular kind, and the network is said to have learned something.
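That learning principle, exposure to data changing the strengths of connections, can be illustrated with the simplest possible neural network: a single artificial neuron. The sketch below is a generic perceptron, far simpler than Google's networks and not its architecture, but the same weight-update idea; it learns the logical AND function by repeatedly adjusting its weights in response to errors:

```python
# A minimal perceptron: one artificial neuron whose connection weights
# change as it is repeatedly exposed to training data.

def step(x: int) -> int:
    """Fire (1) if the weighted input sum is positive, else stay quiet (0)."""
    return 1 if x > 0 else 0

# Training examples for logical AND; the leading 1 is a bias input.
data = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]

weights = [0, 0, 0]  # integer weights keep the arithmetic exact

for _ in range(20):  # repeated exposure to the same data
    for inputs, target in data:
        output = step(sum(w * x for w, x in zip(weights, inputs)))
        error = target - output
        # Perceptron rule: nudge each weight by its input times the error.
        weights = [w + error * x for w, x in zip(weights, inputs)]

predictions = [step(sum(w * x for w, x in zip(weights, i))) for i, _ in data]
print(predictions)  # the trained neuron reproduces AND: [0, 0, 0, 1]
```

No weight was programmed in by hand; every value emerged from the data, which is the sense in which the network "learned something."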

Neural networks have been used for decades in areas where machine learning is applied, such as chess-playing software or face detection. Google's engineers have found ways to put more computing power behind the approach than was previously possible, creating neural networks that can learn without human assistance and are robust enough to be used commercially, not just as research demonstrations.

The company's neural networks decide for themselves which features of data to pay attention to, and which patterns matter, rather than having humans decide that, say, colors and particular shapes are of interest to software trying to identify objects.

Google is now using these neural networks to recognize speech more accurately, a technology increasingly important to Google's smartphone operating system, Android, as well as the search app it makes available for Apple devices (see "Google's Answer to Siri Thinks Ahead"). "We got between 20 and 25 percent improvement in terms of words that are wrong," says Vincent Vanhoucke, a leader of Google's speech-recognition efforts. "That means that many more people will have a perfect experience without errors." The neural net is so far only working on U.S. English, and Vanhoucke says similar improvements should be possible when it is introduced for other dialects and languages.
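The article does not name the metric behind "words that are wrong," but speech-recognition accuracy is conventionally reported as word error rate (WER): the minimum number of word substitutions, insertions, and deletions needed to turn the system's transcript into the reference, divided by the reference length. A minimal sketch (the example sentences are invented):

```python
# Word error rate via the classic dynamic-programming edit distance,
# computed over words rather than characters.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i  # deleting every reference word
    for j in range(len(hyp) + 1):
        dist[0][j] = j  # inserting every hypothesis word
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[len(ref)][len(hyp)] / len(ref)

before = wer("recognize speech with neural networks",
             "wreck a nice speech with neural networks")
after = wer("recognize speech with neural networks",
            "recognize speech with new networks")
print(before, after)  # the second transcript has fewer wrong words
```

On this scale, the quoted 20 to 25 percent improvement would mean roughly a fifth to a quarter fewer wrong words per transcript.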

Other Google products will likely improve over time with help from the new learning software. The company's image search tools, for example, could become better able to understand what's in a photo without relying on surrounding text. And Google's self-driving cars (see "Look, No Hands") and mobile computer built into a pair of glasses (see "You Will Want Google's Goggles") could benefit from software better able to make sense of more real-world data.

The new technology grabbed headlines back in June of this year, when Google engineers published results of an experiment that threw 10 million images grabbed from YouTube videos at their simulated brain cells, running 16,000 processors across a thousand computers for 10 days without pause.

Average features: This composite image represents the ideal stimulus for Google's software to recognize a human face. Credit: Google

Vestec, Inc. Secures up to $6.4 Million Funding

Company Aims to Leverage Artificial Intelligence Research to Provide Sophisticated Speech Recognition Products at a Fraction of the Cost of Traditional Vendors

Waterloo, Ontario (PRWEB) October 02, 2012

Unlike traditional speech technology providers, Vestec is focused on demystifying and popularizing speech technologies by leveraging unique Artificial Intelligence (AI) research tools. Its AI-based approach significantly reduces time-to-market and deployment costs of speech solutions as well as significantly increases their understanding accuracy and customer satisfaction. The company offers a full range of robust, standards-based products to speech enable a wide variety of applications in all key industries. The core product portfolio consists of ASR (Automatic Speech Recognition), NLU (Natural Language Understanding), and TTS (Text to Speech) engines. All major languages spoken around the world are supported and the core speech products are available in embedded, server, distributed and cloud configurations.

"Speech recognition is a major growth opportunity around the world and we intend to serve the market with competitively priced innovative products," said Dr. Fakhri Karray, Vestec's primary Founder and Chairman. "Recent advances in Artificial Intelligence give us the proverbial silver bullet to upend the industry's traditional practice of premium pricing and technology mystification. We are thrilled that Sansar shares our vision and we are looking forward to working with them to build shareholder value."

ABOUT VESTEC:

Vestec was founded by a distinguished group of Artificial Intelligence (AI) researchers from Canada's famed University of Waterloo to leverage AI advances to address the cost and performance issues of traditional speech products. We firmly believe recent advances in AI are creating a paradigm shift in speech recognition, semantic understanding, and spoken text technologies. We offer AI-based speech products and custom solutions for enabling sophisticated speech-based user interfaces in all major languages for a wide variety of business processes. Visit us at: http://www.vestec.com

ABOUT DR. FAKHRI KARRAY:

Dr. Karray is a world-renowned expert in computational intelligence and founded Vestec to commercialize language understanding technologies developed under his supervision at Canada's famed University of Waterloo. He is the University Research Chair Professor in the field of Intelligent Systems at the University of Waterloo as well as the co-Director of the Center for Pattern Analysis and Machine Intelligence (PAMI) Laboratories, a world-renowned research center in the field of Intelligent Systems. He has written over 300 scientific papers, holds 14 US patents, has supervised more than seventy Master's, Doctoral and post-Doctoral researchers, and authored a seminal textbook in the field of Soft Computing. He has also chaired numerous international scientific conferences, received several national and international awards for his work in the field of computational intelligence, and advised some of Canada's leading corporations and entrepreneurs on technology development. He holds a PhD in Systems and Control from the University of Illinois, Urbana-Champaign in the US as well as a BSc and MSc in Electrical Engineering from the University of Tunis in Tunisia. He is a fluent speaker of Arabic, French, and English.

ABOUT WATERLOO:

Waterloo Region is Canada's premier technology hub. It is home to nearly 1,000 technology companies, generating more than $25 billion in annual revenues and employing over 30,000 people. It hosts research centers of major international technology firms such as Google, Microsoft, Oracle, Intel, RIM, Open Text, Agfa, Sybase, McAfee, Desire2Learn and Electronic Arts. There are more than 550 startups and three business incubators for commercializing research. Taken together, the cluster represents competency in everything from software development, digital media, mobile and wireless, to advanced manufacturing, robotics, aerospace and defense, to clean and biotech, health, IT services and telecom.

Read more here:

Vestec, Inc. Secures up to $6.4 Million Funding

Eshwar Belani of Rocket Fuel to Speak at SMX East

REDWOOD SHORES, CA--(Marketwire - Oct 2, 2012) - Rocket Fuel, the leading provider of artificial intelligence advertising solutions for digital marketers, today announced that vice president of products and business development Eshwar Belani will join a panel discussion at SMX East, to be held on October 2-4, 2012 in New York City.

News Facts:

Resources:

About SMX East

About Rocket Fuel

Follow Rocket Fuel on Twitter

Follow Rocket Fuel on Facebook

Read the Rocket Fuel Blog

About Rocket Fuel:

Rocket Fuel is the leading provider of artificial intelligence advertising solutions that transform digital media campaigns into self-optimizing engines that learn and adapt in real time, and deliver outstanding results from awareness to sales. Recently ranked #22 on the Forbes Most Promising Companies in America list, Rocket Fuel is trusted by over 700 of the world's most successful marketers to power their advertising across display, video, mobile, and social media. Founded by online advertising veterans and rocket scientists from NASA, DoubleClick, IBM, and Salesforce.com, Rocket Fuel is based in Redwood Shores, California, and has offices in fifteen cities worldwide including New York, London, Toronto, and Hamburg.

Read more:

Eshwar Belani of Rocket Fuel to Speak at SMX East

The buzz: Flying robots may get bee brains

John Roach

Flying robots of the future may have the smarts of bees, a level of artificial intelligence with potential applications ranging from search and rescue missions to mechanical pollination of crops.

The first step in the Green Brain project underway at a pair of British universities is to develop accurate computer models of the neural systems that govern honey bee vision and sense of smell.

Such smarts would, eventually, allow the team to build a flying robot that can sense and act as autonomously as a bee instead of carrying out a set of pre-programmed instructions.

The $1.3 million project is led by James Marshall at the University of Sheffield in collaboration with the University of Sussex.

According to a news release, understanding the brain of the socially complex honey bee is an alternative approach to artificial intelligence research, where other teams have focused on the brains of rats, monkeys and humans.

The bee brain is smaller and more accessible than any vertebrate brain, Marshall explained.

In addition to building intelligent, autonomous robots, the research may lead to discoveries about what is causing wild honey bee populations around the world to plummet.

Or, at least, contribute to the development of mechanical pollinators such as the Robobees project at Harvard University.

Go here to read the rest:

The buzz: Flying robots may get bee brains

Artificial Medical Intelligence Announces Industry’s First Robotic Computer Automated Coding for Improved ROI at …

EATONTOWN, N.J.--(BUSINESS WIRE)--

Artificial Medical Intelligence (AMI) today announced the industry's first Robotic Computer Automated Coding solution, which is fully integrated into the company's primary Natural Language Processing Computer Assisted Coding (CAC) software, EMscribe.

Using the innovative Robotic Computer Automated Coding solution for certain outpatient records, including radiology, labs, and treat-and-release outpatient records, medical coders do not need to view the medical charts for coding. Instead, medical records go directly to the billing department, where they are automatically coded with 100% accuracy. No other Health Information Management tools have this capability. While other CAC vendors claim increased performance by automating certain functions within coding processes and workflow, coders are still required to verify and validate the CAC output. AMI's Robotic Computer Automated Coding requires no coder review and is always 100% accurate and complete. This means that fewer resources are needed for the overall processing of patient records within healthcare facilities.

Robotic Computer Automated Coding has a clear and direct impact on a healthcare facility's coding performance and on overall coding-department processing time and expenditures. For auditing, the Robotic Computer Automated Coding output can be reviewed at any time by pulling up a specific account or medical record number through the AMI interface or any other Graphical User Interface.

Since 2009, Butler Health System has been using EMscribe through one of AMI's partners. The hospital is currently processing more than 26,000 outpatient charts per month through the EMscribe Robotic Computer Automated Coding solution with great success. Of the charts processed, more than 60% use Robotic Coding, automatically bypassing manual coding and going instead directly to the billing department. The charts that do not use the Robotic Computer Automated Coding technology are still processed through CAC and are presented to a coder with codes suggested by the EMscribe coding engine.

"This exciting Robotic Computer Automated Coding approach is completely distinct from other Computer Assisted Coding offerings and helps hospitals improve bottom-line revenue cycle performance through increased speed, efficiency, and elimination of coder variability," said Stuart Covit, COO, Artificial Medical Intelligence Inc. "Workflow varies from site to site, and that presents the potential for exaggerated productivity-improvement claims by CAC vendors. With our innovative robotic technique, there is now a fundamental differentiation and shift in the offerings available to Health Information Management. Robotic Computer Automated Coding translates to less staff focused on the mundane task of coding certain record types. The Robotic Automation process is exclusive to AMI and has been refined and field-tested in live sites for 5 years. We are now moving aggressively to provide this standout capability to all hospital sites."

EMscribe Computer Assisted Coding software utilizes AMI's innovative Natural Language Processing (NLP) coding technology to read inpatient and outpatient records for appropriate diagnostic, procedure and CPT codes. It then pre-codes the records and presents them for coder validation, verification and review. Coders aided by EMscribe's results can easily approve or amend the automatic output and increase efficiency by as much as 80%. EMscribe has been deployed successfully in many hospitals throughout the United States.
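EMscribe's NLP engine is of course far more sophisticated than any short snippet, but the pre-code-then-validate workflow it describes can be illustrated with a deliberately simplified sketch. The phrase-to-code table and function name below are invented for illustration (the ICD-9 codes shown are real but chosen arbitrarily):

```python
# Toy stand-in for an NLP coding engine: a real CAC system parses full
# clinical context rather than matching keywords.
PHRASE_TO_CODE = {
    "pneumonia": "486",       # ICD-9: pneumonia, organism unspecified
    "hypertension": "401.9",  # ICD-9: essential hypertension, unspecified
}

def precode(record_text):
    """Suggest candidate codes for a coder to validate, amend, or approve."""
    text = record_text.lower()
    return sorted(
        code for phrase, code in PHRASE_TO_CODE.items() if phrase in text
    )

suggested = precode("Chest X-ray consistent with pneumonia; history of hypertension.")
print(suggested)  # ['401.9', '486']
```

In the workflow described above, this suggestion step is followed by human review; the "robotic" mode skips that review for record types where the engine's output needs no validation.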

In addition to hospitals, healthcare technology companies looking to address ICD-10 can benefit from EMscribe technology to interface with their existing applications. The EMscribe technology can be decoupled and bolted onto other systems that require advanced CAC/NLP for the future ICD-10 coding system. With its NLP capabilities, EMscribe can serve as a valuable solution for dealing with the new, much more specific and detailed coding system.

Robotic Computer Automated Coding is available immediately. The product can be seen at the upcoming AHIMA Conference October 1-3 in Chicago through some of AMI's partners, which are listed on the company's website at http://www.artificialmed.com

About AMI

Go here to read the rest:

Artificial Medical Intelligence Announces Industry’s First Robotic Computer Automated Coding for Improved ROI at ...

Artificial Intelligence Used to Home In on New Fossil Sites

FREIGHTER GAP, Wyo.--On blisteringly hot desert sands, researchers crawled on their hands and knees, avoiding the fist-size cacti littering the ground. Their goal: collecting bones and teeth of some of the earliest known primates to shed light on the adaptations at the root of the evolutionary lineage that led to humans. The fossils, though, are the size of a fingernail or smaller, and they are scattered over an area of about 10,000 square kilometers in the rocky desert of Wyoming's Great Divide Basin.

That's a lot of ground to cover, especially on all fours and in searing heat. So the scientists are relying on a tool never before tried in paleontology: artificial intelligence. Such an approach might pinpoint fossil troves in their giant needle-in-a-haystack quest and suggest new strategies for fossil hunting. All that remained was to head into the middle of the desert and see whether their innovation had led them on a wild goose chase.

Normally, discovering fossils depends largely on luck. Paleontologists can take educated guesses as to where to search (trekking down dry stream beds to look for bones that might have eroded off slopes, for instance), but they mostly depend on walking around to see what catches the eye. If they are lucky, they can cover ground in jeeps, bucking and bouncing down dirt roads cut by oil and gas companies. In any case, traditional approaches can be challenging, lengthy, and fruitless.

Increasingly, paleontologists are relying on technology to narrow their search for fossils. For instance, Google Earth has helped identify sites in South Africa containing fossils of the ancient hominid Australopithecus sediba.

But instead of inspecting satellite imagery by eye for potential sites, paleontologist Robert Anemone and remote-sensing specialist Jay Emerson of Western Michigan University and their colleagues have developed a way to automate the operation using an artificial neural network, a computer system that imitates how the brain learns. Their aim was to take advantage of how brains, both natural and artificial, quickly learn and recognize patterns, such as what fossils look like.

Training the artificial brain

Artificial neurons are components of computer programs that mimic real neurons in that each neuron can send, receive and process information. Researchers first train the networks by feeding data to the artificial neurons and letting them know when their computations solve a given problem, such as reading handwriting or recognizing speech. The networks then alter the patterns of connections among these neurons to change the way they communicate with one another and work together. With such practice, the networks figure out which arrangements among neurons are best at computing desired answers.

The neural network presented the promise of locating fossil-rich sites "without walking over miles and miles of barren outcrops," says paleontologist John Fleagle of Stony Brook University. "It could save lots and lots of time and expense in the field."

That's why Anemone and his colleagues were out in the Wyoming desert with a neural network running on a laptop computer. It analyzed visible- and infrared-light satellite and aerial images of the Great Divide Basin, which included 100 known fossil sites. They first let the network know that 75 of these areas were fossil-rich so it could learn what this kind of site looked like. When they had it search for the other 25 sites, it correctly spotted 20 of them, raising hopes that it could identify new candidates.
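The train-on-75, test-on-25 setup described above can be sketched with a generic classifier standing in for the team's neural network. Everything here is invented for illustration: the "band" features are random stand-ins for per-site satellite measurements, and the model is a single logistic-regression neuron rather than the actual network the researchers used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for per-site features derived from satellite imagery,
# e.g. mean reflectance in a few visible/infrared bands.
n_sites, n_bands = 100, 4
X = rng.normal(size=(n_sites, n_bands))
true_w = np.array([2.0, -1.5, 1.0, 0.5])           # hidden "fossil signature"
y = (X @ true_w + rng.normal(scale=0.5, size=n_sites) > 0).astype(float)

# Mirror the article's split: learn from 75 known sites, hold out 25.
X_train, y_train = X[:75], y[:75]
X_test, y_test = X[75:], y[75:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a single logistic-regression "neuron" by gradient descent.
w = np.zeros(n_bands)
for _ in range(2000):
    grad = X_train.T @ (sigmoid(X_train @ w) - y_train) / len(y_train)
    w -= 0.5 * grad

# Evaluate on the 25 held-out sites, as the team did (they recovered 20 of 25).
pred = sigmoid(X_test @ w) > 0.5
accuracy = (pred == y_test).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of holding out the 25 sites is exactly what the article describes: a model that only memorized the 75 training sites would score poorly on sites it never saw, so held-out accuracy is the evidence that the network learned a generalizable "fossil-rich" signature.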

Filling gaps in the primate record

The researchers were hunting fossils dating to the late Paleocene and early Eocene epochs, about 55 million to 50 million years ago, when the Rocky Mountains were first rising and the climate was significantly warmer and wetter on average than today. Back then a large freshwater lake dominated the dig region, with streams flowing to it from the surrounding mountains. The area was home to crocodiles, turtles, lizards, fish and lots of mammals, including very primitive rodents, horses and bats as well as primates similar to modern lemurs, tarsiers, lorises and galagos.

Today, the area is mostly dry sagebrush scarred and pocked with gullies, buttes and dunes. Pronghorn antelope run alongside cars and groups of elk occasionally dash in front of them. Roaming stallions greet campers in the morning with thunderous snorts, and falcons occasionally dive at the visitors to keep them away from nests. The area seems mercifully free of venomous snakes, but thunderstorms can destroy tents and clog trails with slippery mud that can trap a truck.

Follow this link:

Artificial Intelligence Used to Home In on New Fossil Sites

In Artificial Intelligence Competition, Two Bots Pass for Human | 80beats

In the 2012 Bot Prize competition, the true winner may be the one who makes the most mistakes. In this match, video game avatars directed by artificial intelligence compete to see which comes across as most human in a fight against real human players. This year, for the first time, human participants mistook the two bots for humans more than half the time, a feat researchers attribute to the fact that these bots were programmed to be less-than-perfect players.

During the game, the aptly named Unreal Tournament 2004, players, of course, try to kill each other, but they also categorize each character they meet as either bot or human. As they move through their virtual world, they use what's called a judgment gun to tag the figures they encounter, and these scores, or humanness ratings, determine the winners of the competition. Since the competition started in 2008, the goal for the programmers entering their artificial intelligences in the game has always been a bot that is mistaken for a human half the time. This time, two bots achieved it, with humanness ratings of 52 percent. Furthermore, the human players had humanness ratings of only 40 percent, meaning that 60 percent of the time, human players thought other human players were robots.
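The humanness rating described above is just a proportion of judgments. A minimal illustration (the judgment counts below are invented to reproduce the reported 52% and 40% figures):

```python
# Each entry is one judgment of a player: True if the judge tagged the
# player "human", False if tagged "bot". Counts invented for illustration.
bot_judgments = [True] * 52 + [False] * 48      # a winning bot: 52% humanness
human_judgments = [True] * 40 + [False] * 60    # a human player: 40% humanness

def humanness(judgments):
    """Fraction of judges who tagged this player as human."""
    return sum(judgments) / len(judgments)

print(humanness(bot_judgments))    # 0.52
print(humanness(human_judgments))  # 0.40
```

Under this scoring, a bot "passes" when its humanness rating reaches or exceeds that of the real human players, which is exactly what happened in 2012.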

The key to designing a convincing bot, researchers think, is to make sure it's not too perfect. An avatar directed by a human would not have perfect aim, something a computer could easily achieve, explained Jacob Schrum, a doctoral student from the University of Texas who collaborated on one of the winning bots, in a press release. Instead, a good bot would shoot worse at long distances and make illogical decisions about which characters to pursue.

Maybe now the focus will be on teaching humans to act less robotic.

Video game photo via Jacob Schrum

The rest is here:

In Artificial Intelligence Competition, Two Bots Pass for Human | 80beats

Eric Porres of Rocket Fuel to Speak at OMMA Global at Advertising Week 2012

REDWOOD SHORES, CA--(Marketwire - Sep 27, 2012) - Rocket Fuel, the leading provider of artificial intelligence advertising solutions for digital marketers, today announced that CMO Eric Porres will join a panel discussion at OMMA Global at Advertising Week, to be held on October 1, 2012 at the New York Marriott Marquis.

Key Facts:

Resources:

About OMMA Global at Advertising Week

About Rocket Fuel

Follow Rocket Fuel on Twitter

Follow Rocket Fuel on Facebook

Read the Rocket Fuel Blog


See more here:

Eric Porres of Rocket Fuel to Speak at OMMA Global at Advertising Week 2012

DIAGNOS Inc.: CODELCO, the Main Copper Producer in the World, Has Signed a Second Service Agreement for the Use of …

BROSSARD, QUEBEC, CANADA--(Marketwire - Sept. 27, 2012) - DIAGNOS inc. ("DIAGNOS" or "the Corporation") (ADK.V), a leader in the use of artificial intelligence and advanced knowledge-extraction techniques, announced today the signature of a second agreement for the use of its CARDS (Computer Aided Resource Detection System) technology, to generate potential exploration targets on Codelco's (Corporacion Nacional del Cobre de Chile) exploration programs in Chile.

DIAGNOS will assist Codelco in identifying targets by using its CARDS technology, which makes possible the identification of sites having the same signature as known mineralized occurrences. DIAGNOS uses its proprietary technology to analyze geological, geophysical, and geochemical data, enabling the identification of patterns hidden in the large amounts of data each customer owns.
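CARDS itself is proprietary, but the core idea described above (flagging locations whose multi-layer data signature resembles that of known mineralized occurrences) can be illustrated with a generic nearest-neighbour comparison. All feature layers and values below are invented:

```python
import numpy as np

# Invented signatures: each row is one location's values across a few
# normalized geological/geophysical/geochemical layers.
known_deposits = np.array([
    [0.90, 0.80, 0.70],
    [0.85, 0.90, 0.60],
])
candidates = np.array([
    [0.88, 0.85, 0.65],   # signature resembles the known deposits
    [0.10, 0.20, 0.90],   # signature does not
])

# Score each candidate by its distance to the nearest known occurrence;
# a smaller distance means a more similar signature, i.e. a higher-priority
# exploration target.
dists = np.linalg.norm(
    candidates[:, None, :] - known_deposits[None, :, :], axis=2
).min(axis=1)

ranking = np.argsort(dists)  # best targets first
print(ranking)
```

A production system would of course use many more data layers and a learned similarity measure rather than plain Euclidean distance; the sketch only shows the signature-matching shape of the problem.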

DIAGNOS can count on a multidisciplinary team that includes professionals in geophysics, geology, Artificial Intelligence, mathematics, as well as remote sensing and image interpretation.

About Codelco

Codelco, the world's largest copper producer, is headquartered in Chile. In 2011 Codelco produced 1,796,000 tons of copper, 11% of total world copper production (including its share in the El Abra mine). It is also one of the top molybdenum producers, with 23,098 metric tons during the period. Codelco is a 100% state-owned company and has the largest copper reserves and resources known in the world. Codelco's sales in 2011 were US$17,515 million.

About DIAGNOS

Founded in 1998, DIAGNOS is a publicly traded Canadian corporation (ADK.V) with a mission to commercialize technologies combining contextual imaging and traditional data mining, thereby improving decision-making processes. DIAGNOS offers products, services, and solutions to clients in a variety of fields including healthcare and natural resources.

The Corporation's objective is to develop a royalty stream by significantly enhancing, and participating in, the exploration success of mining companies.

For more information, please visit our website at http://www.diagnos.com or the SEDAR website at http://www.sedar.com.

The TSX Venture Exchange has not reviewed and does not accept responsibility for the adequacy or accuracy of this release.

The rest is here:

DIAGNOS Inc.: CODELCO, the Main Copper Producer in the World, Has Signed a Second Service Agreement for the Use of ...