The Meaning and Origin of Poggers – Twinfinite


Breaking down Poggers!

Published on June 22, 2022, by John Esposito


Twitch has seen a massive rise in popularity over the years, with gamers looking to become the next top streamer. The streaming platform's rise has also brought Twitch culture into everyday life, with everyone shouting and using Twitch lingo in their day-to-day speech. One of the most popular terms is poggers, and if you have no idea what it means, keep on reading as we're breaking down what poggers means and the origins of the word.

Poggers is used as a term to express excitement or celebration.

For example:

I just got a new car, bro.

That's poggers.

Essentially, poggers is a substitute for the word cool or any similar expression of excitement. Wherever you would say cool, poggers can take its place.

For example:

I just lost my car keys.

That's not poggers.

In Twitch chat, poggers is often depicted with an emote of Pepe the Frog expressing excitement or celebration. However, it has evolved beyond Pepe, as streamers create their own versions of the emote.

The earliest recorded use of the term dates back to February 2017, when a user uploaded the emote to BetterTTV, an emote platform that can be linked to a Twitch channel for additional chat emotes. After it was uploaded to BetterTTV, it saw a huge increase in usage among the League of Legends community, as players would use the term to describe a nice or exciting play. From there, the term took off in popularity, seeing heavy usage in Fortnite for the same reason.

That is everything we know about the meaning of poggers and its origin. As you look to become more versed in the universe of Twitch, be sure to check out our guides on how to create Twitch clips to make awesome memories or how to change your name on Twitch.


What is a Groyper? It’s a Combination of Nick Fuentes and Pepe the Frog

Behind Nick Fuentes, the host of America First, follows a new group of far-right conservatives. They call themselves the Groypers. And they've launched a war against the Republican party.

Charlie Kirk, the founder of Turning Point USA, and Donald Trump Jr. do not seem like likely targets for a rising right-wing faction. But much to their surprise, the Groyper army recently chose to descend upon events hosted by Kirk and Trump.

Groypers attended a Turning Point USA conference at Ohio State University to heckle speakers with loaded questions. Turning Point is a conservative nonprofit that mobilizes Republican students on college campuses.

Trump, meanwhile, was booed off the stage at a free speech event at the University of California Los Angeles. The Groypers are claiming these two events, along with seven others, as victories against the mainstream Republican party, which they now consider to be fake conservatism.

A Groyper is a member of Fuentes' movement and a follower of his brand of alt-right white nationalism. The alt-right is a loose collection of conservatives that harbors white nationalists. Fuentes is currently one of its most public faces.

As their chosen mascot, Groypers took hold of an exploitable illustration of Pepe the Frog. While iterations of Pepe are commonly used within the far-right, this version is of Pepe resting a conspicuous face against his two hands.

The meme appears in different forms on Groypers' Twitter pages to show their allegiance.

https://twitter.com/that_groyper/status/1203275552636973059

Fuentes' Twitter bio declares him the Groyper leader. His header image depicts Pepe soldiers holding up a flag that reads "Groyper War, Total Victory!"

The header also lists the names of the events, primarily on college campuses, and the dates when the Groypers heckled conservative events, all claimed as victories.

The Groypers galvanize around the idea that the current Republican party is fake conservatism. Basically, they try to push each conservative position farther to the right by supporting a white, male, heterosexual America. They embrace white nationalism in support of policies that, although they have foundations in conservatism, even some Republicans find go too far.

The group is extreme on immigration restrictionism, often calling for a total shutdown of immigration into America, and pushes anti-LGBTQ propaganda to continue to fight a culture war they believe the right gave in on.

Support of Israel is one major difference between the Groypers and what they call the traditional "Conservative Inc." While the Republican party remains firm in supporting its Israeli ally, the Groypers' extreme nationalism and anti-Semitism push them outside the bounds of normal right-wing discourse.

"America is NOT a propositional nation. We have NO ALLEGIANCE to Israel," Fuentes posted on his Telegram, according to Vox. "We are CHRISTIANS and we don't promote degeneracy. Demographic replacement is REAL and it will be CATASTROPHIC."

Fuentes is also known for casting doubt on the number of Jews that died in the Holocaust, using crude analogies relating to cookies and baking.

https://www.youtube.com/watch?v=F9aco6o5WBE

He described the above cookie comparison as his "hilarious and epic Holocaust joke" on his Telegram channel.

The Groypers are loyal to Fuentes and share this anti-Semitic perspective. For example, one Groyper page posted a tweet with a photo of a blimp that says "jews rape kids."

"your uber driver has arrived," @ayyyetone tweeted with the photo.

Fuentes, an extremely online pundit, thinks he can shape youth conservatism because he's better at catering to the internet culture that many Gen Z or "Zoomer" college students are inclined to consume.

"I think the generational style is so important," Fuentes posted on Telegram. "Idk if it's post modern or post ironic but the style and tone is very native to Zoomers which is i think why ppl like Shapiro or Kirk imagine they're check mating me with some of these controversies but in reality it's just turning young ppl onto my content."

Fuentes and the Groypers primarily target the bulk of the Republican party.

In the past, they have heckled speakers like right-wing talk show host Ben Shapiro, as well as Trump Jr. and Kirk.

Their goal is to expose high-profile Republicans by asking loaded questions, often about Israel and homosexuality, to prove their distance from the extreme or true right.

"There were a number of trolls who sabotaged the Q&A portion of tonight's @tpusa event," Turning Point's Benny Johnson tweeted following an event at Ohio State University. "Many of the questions were abhorrent and were not asked in good faith."

Fuentes responded to Johnson by calling out Turning Point's moderate stance. Turning Point has been scrutinized in the past for its own racist biases.

"Turning Point is now making a concerted effort to slander all critics of their bullshit fake conservatism as 'extremist trolls,'" @NickJFuentes tweeted. "We are America First and you are being exposed for the sellout frauds you are."

Fuentes has been promoting the Groypers' next event on Dec. 20. White nationalists Patrick Casey and Jacob Lloyd will join Fuentes in West Palm Beach for the Groyper Leadership Summit. The Groypers will also be mobilizing against Republicans again, as the event is set to overlap with another Turning Point conference.

"The GLS will feature speeches by myself, Patrick Casey, and Jacob Lloyd; we invite all Groypers to join us for a celebration of our Total Victory over Charlie Kirk in the Groyper Wars!" Fuentes posted on his Telegram board.


Deep Learning AI Needs Tools To Adapt To Changes In The Data Environment – Forbes


In the continuing theme of higher-level tools to improve the development of useful applications, today we'll visit feature engineering in a changing environment. Artificial intelligence (AI) is increasingly used to analyze data, and deep learning (DL) is one of the more complex aspects of AI. In multiple forums, I've discussed the need to move past heavy reliance not just on pure coding, but even past the basic frameworks discussed by DL programmers. One of the keys to the complexity is figuring out the right data attributes, or features, which matter to any system. It's even more important in DL, both because of larger data sets and because an inference engine is less transparent than procedural code. As tricky as that is the first time, it needs to be a repeatable process, as environments change and systems must change with them.

Defining the initial feature set is important, but it's not the end of the game. While many people focus on DL's ability to change results based on more data, that still means the use of the same features. For instance, the features are fairly well known in radiology. It's gaining more examples for training that matters, to see the variation in how those features appear. However, what if there's a new tumor? There might be a new feature that needs to be added to the mix. With supervised systems, that's easy to modify, because you can provide labeled images with the features and the system can be retrained.

However, what about consumer taste? Features are defined, then the deep learning system looks for relationships between the different defined features and provides analysis. However, fashion changes over time. Imagine, for instance, a system defined when all pants had pleats. The question of whether or not pants should have pleats isn't an issue, so the designers did not train the system to analyze the existence of pleats. While the feature might be defined in the full data set, for performance reasons the feature was not engineered into the engine.

Suddenly, there's a change. People start buying pants without pleats. That becomes something that consumers want. While that might be in the full dataset, the inference engine is not evaluating that variable because it is not a defined feature. The environment has changed. How can that be recognized, and the DL system changed?

SparkBeyond is a company working to address the problem. While the product works with initial feature engineering, the key advantage is that it helps with DevOps and other processes to keep DL-driven applications current in changing environments.

What the company's platform does is analyze the base data being used by the DL systems. It is not AI itself but leverages random forests (RF), a technique for running many tests with different parameters. This is helped by the advances of cloud technologies and the ability to scale out to multiple servers. Large numbers of decision trees can be analyzed, and new patterns can be seen. RF is one of the ways machine learning has moved past a pure AI definition, as it can create insight far faster than other methods, identifying new classifications and relationships in large data sets.
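To make the idea concrete, here is a minimal Python sketch (not SparkBeyond's platform) of how a random forest can flag an attribute that has become newly predictive as the data shifts; the data, feature indices, and threshold below are invented for illustration.

```python
# Minimal sketch: fit a random forest on an old and a new window of data and
# compare feature importances to flag attributes that became newly predictive.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def importances(X, y):
    """Fit a forest and return per-feature importance scores."""
    forest = RandomForestClassifier(n_estimators=200, random_state=0)
    forest.fit(X, y)
    return forest.feature_importances_

# Synthetic example: feature 2 (think "pleats") is irrelevant in the old
# window but drives purchases in the new one.
n, d = 2000, 5
X_old = rng.normal(size=(n, d))
y_old = (X_old[:, 0] > 0).astype(int)                       # only feature 0 matters
X_new = rng.normal(size=(n, d))
y_new = ((X_new[:, 0] + 2 * X_new[:, 2]) > 0).astype(int)   # feature 2 now matters too

drift = importances(X_new, y_new) - importances(X_old, y_old)
for i, delta in enumerate(drift):
    if delta > 0.1:   # hypothetical threshold for "newly important"
        print(f"feature {i} looks newly predictive (importance +{delta:.2f})")
```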

Consumer behavior, and that of financial and other markets, is far more complex than pleats versus no pleats, so it's important to recognize and adapt to change as fast as possible. "Changing environments are critical to analysis," said Mike Sterling, Director of Impact Management, SparkBeyond. "Generating large volumes of hypotheses and models, and then testing them, is critical to identifying those changes in order to adapt deep learning systems to remain accurate in those environments."

Artificial intelligence does not exist on its own. It is a technology that fits into a larger solution to address a business issue. No market is stagnant while remaining relevant. How and when to update deep learning systems, as they are used in more and more places, is important. The ability to analyze the data sets is critical, both for initial feature engineering and as an ongoing process to keep the systems relevant and accurate.

I see this as one feature, if you will, of what will eventually become development suites similar to 4GL development in the '90s. It will take a few more years, but this step to incorporate more tools into the deep learning environment is an important one.


What Opportunities are Appearing Thanks to AI, Artificial Intelligence? – We Heart

The AI sector is booming. Thanks to several leaps that have been made, we are closer than ever before to developing an AI that acts and reacts as a real human would. Opportunities in this sector are flourishing, and there is always a way for you to get involved.


Employees: If you are searching for a job in the tech sector, one of the most rewarding you could find is working with AI. It is a mistake to assume that all AI development is focussed on developing android technologies. There are many other applications for AI and each one needs experts at the helm to help bring it to fruition.

Whether you are a graduate or you are looking for a change in careers, there is always a job opening that you could look into. Even if you don't have a background in this tech, there are many other ways you could get involved, whether you are working on an AI's cognitive abilities or even just testing out the product. Whatever your background and skillset might be, there is always a way for you to get involved.

Investors: AI development is incredibly costly. Many of the smaller developers may have a great idea that could be world-changing if they could bring it to fruition; however, they often lack the finances to do so. This is where investors can come in.

Investors like Tej Kohli, James Wise, or Jonathan Goodwin may have little expertise in these areas from their own personal experience, but they know how to recognise a viable idea when presented with one. Whether you are looking to get into venture investment yourself or you are a tech company looking for financial backing, their activities should give you some idea about the paths you need to follow.


Consumers: The world of AI isn't just open to investors and tech gurus. There is now a vast range of AI-driven tech emerging onto the market. You, as a consumer, get to be an instrumental part of driving this new tech forward, as it means that the developers gain some insight into which features are popular and which aren't.

Just look at the boom in home assistants that has erupted in the past few years. We are now able to live in fully functioning smart homes with music playing and lights turning off with a simple voice command. By exploring what AI has to offer through the role of the consumer, this all feeds back to the developers and helps them create the next generation of products.

No matter how interested you are in this sector, there is always going to be something you can pursue that will help to develop AI overall. This is an incredibly exciting era to live in, and AI is just one of the pieces of tech that could transform the world as we know it. Take a look at some of the roles and opportunities and see where you could jump in today.


IoT And AI: Improving Customer Satisfaction – Forbes


True, the Internet of Things (IoT) and artificial intelligence (AI) hold huge promise in helping us better engage and satisfy our customers. But that promise still depends heavily on our ability to process and act on the data we're gathering in a way ...


When will AI be ready to really understand a conversation? – Fast Company

Imagine holding a meeting about a new product release, after which AI analyzes the discussion and creates a personalized list of action items for each participant. Or talking with your doctor about a diagnosis and then having an algorithm deliver a summary of your treatment plan based on the conversation. Tools like these can be a big boost given that people typically recall less than 20% of the ideas presented in a conversation just five minutes later. In healthcare, for instance, research shows that patients forget between 40% and 80% of what their doctors tell them very shortly after a visit.

You might think that AI is ready to step into the role of serving as secretary for your next important meeting. After all, Alexa, Siri, and other voice assistants can already schedule meetings, respond to requests, and set up reminders. Impressive as today's voice assistants and speech recognition software might be, however, developing AI that can track discussions between multiple people and understand their content and meaning presents a whole new level of challenge.

Free-flowing conversations involving multiple people are much messier than a command from a single person spoken directly to a voice assistant. In a conversation with Alexa, there is usually only one speaker for the AI to track and it receives instant feedback when it interprets something incorrectly. In natural human conversations, different accents, interruptions, overlapping speech, false starts, and filler words like umm and okay all make it harder for an algorithm to track the discussion correctly. These human speech habits and our tendency to bounce from topic to topic also make it significantly more difficult for an AI to understand the conversation and summarize it appropriately.

Say a meeting progresses from discussing a product launch to debating project roles, with an interlude about the meeting snacks provided by a restaurant that recently opened nearby. An AI must follow the wide-ranging conversation, accurately segment it into different topics, pick out the speech that's relevant to each of those topics, and understand what it all means. Otherwise, "Visit the restaurant next door" might be the first item on your post-meeting to-do list.

Another challenge is that even the best AI we currently have isn't particularly good at handling jargon, industry-speak, or context-specific terminology. At Abridge, a company I cofounded that uses AI to help patients follow through on conversations with their doctors, we've seen out-of-the-box speech-to-text algorithms make transcription mistakes such as substituting the word "tastemaker" for "pacemaker" or "Asian populations" for "atrial fibrillation." We found that providing the AI with information about a conversation's topic and context can help. In transcribing conversations with a cardiologist, for example, medical terms like "pacemaker" are assumed to be the go-to.
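As a rough illustration of what biasing toward domain vocabulary can look like, the Python sketch below post-corrects a generic transcript against a small medical lexicon; this is not Abridge's pipeline, and the lexicon, cutoff, and example sentence are made up.

```python
# Illustrative sketch only: nudge a generic speech-to-text transcript toward a
# domain lexicon by replacing tokens that are very close to a known term.
import difflib

MEDICAL_TERMS = ["pacemaker", "fibrillation", "hypertension", "stent"]

def bias_to_domain(transcript: str, lexicon=MEDICAL_TERMS, cutoff=0.7) -> str:
    corrected = []
    for word in transcript.split():
        # Find the closest lexicon entry above the similarity cutoff, if any.
        match = difflib.get_close_matches(word.lower(), lexicon, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else word)
    return " ".join(corrected)

print(bias_to_domain("the tastemaker battery looks fine"))
# -> "the pacemaker battery looks fine" (with this toy lexicon and cutoff)
```

Phrase-level slips like "Asian populations" for "atrial fibrillation" would need matching over word n-grams rather than single tokens, but the principle is the same: the domain context constrains which words are plausible.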

The structure of a conversation is also influenced by the relationship between participants. In a doctor-patient interaction, the discussion usually follows a specific template: the doctor asks questions, the patient shares their symptoms, then the doctor issues a diagnosis and treatment plan. Similarly, a customer service chat or a job interview follows a common structure and involves speakers with very different roles in the conversation. We've found that providing an algorithm with information about the speakers' roles and the typical trajectory of a conversation can help it better extract information from the discussion.

Finally, it's critical that any AI designed to understand human conversations represents the speakers fairly, especially given that the participants may have their own implicit biases. In the workplace, for instance, AI must account for the fact that there are often power imbalances between the speakers in a conversation that fall along lines of gender and race. At Abridge, we evaluated one of our AI systems across different sociodemographic groups and discovered that the system's performance depends heavily on the language used in the conversations, which varies across groups.

While today's AI is still learning to understand human conversations, there are several companies working on this problem. At Abridge, we are currently building AI that can transcribe, analyze, and summarize discussions between doctors and patients to help patients better manage their health and ultimately improve health outcomes. Microsoft recently made a big bet in this space by acquiring Nuance, a company that uses AI to help doctors transcribe medical notes, for $16 billion. Google and Amazon have also been building tools for medical conversation transcription and analysis, suggesting that this market is going to see more activity in the near future.

Giving AI a seat at the table in meetings and customer interactions could dramatically improve productivity at companies around the world. Otter.ai is using AI's language capabilities to transcribe and annotate meetings, something that will be increasingly valuable as remote work continues to grow. Chorus is building algorithms that can analyze how conversations with customers and clients drive companies' performance and make recommendations for improving interactions with customers.

Looking to the future, AI that can understand human conversations could lay the groundwork for applications with enormous societal benefits. Real-time, accurate transcription and summarization of ideas could make global companies more productive. At an individual level, having AI that can serve as your own personal secretary can help each of us focus on being present for the conversations we're having without worrying about note taking or something important slipping through the cracks. Down the line, AI that can not only document human conversations but also engage in them could revolutionize education, elder care, retail, and a host of other services.

The ability to fully understand human conversations lies just beyond the bounds of today's AI, even though most humans are able to more or less master it before middle school. However, the technology is progressing rapidly and algorithms are increasingly able to transcribe, analyze, and even summarize our discussions. It won't be long before you find a voice assistant at your next business meeting or doctor's appointment ready to share a summary of what was discussed and a list of next steps as soon as you walk out the door.

Sandeep Konam is a machine learning expert who trained in robotics at Carnegie Mellon University and has worked on numerous projects at the intersection of AI and healthcare. He is the cofounder and CTO of Abridge, a company that uses AI to help patients stay on top of their health.


A beginner's guide to the AI apocalypse: Artificial stupidity – The Next Web

Welcome to the latest article in TNW's guide to the AI apocalypse. In this series we'll examine some of the most popular doomsday scenarios prognosticated by modern AI experts.

In this edition we're going to flip the script and talk about something that might just save us from being destroyed by our robot overlords on September 23, 2029 (random date, but if it actually happens your mind is going to be blown), and that is: artificial stupidity.

But first, a few words about humans.

You won't find any comprehensive data on the subject outside of the testimonials at the Darwin Awards, but stupidity is surely the biggest threat to humans throughout all of history.

Luckily we're still the smartest species on the planet, so we've managed to remain in charge for a long time despite our shortcomings. Unfortunately a new challenger has entered the arena in the form of AI. And despite its relative infancy, artificial intelligence isn't as far from challenging our status as the apex intellects as you might think.

The experts will tell you that we're really far away from human-level AI (HLAI). But maybe that's because nobody's quite sure what the benchmark for that would be. What should a human be able to do? Can you play the guitar? I can. Can you play the piano? I can't.

Sure, you can argue that a human-level AI should be able to learn to play the guitar or the piano, just like a human can; many people play both. But the point is that measuring human ability isn't a cut-and-dried endeavor.

Computer scientist Roman Yampolskiy, of the University of Louisville, recently published a paper discussing this exact concept. He writes:

Imagine that tomorrow a prominent technology company announces that they have successfully created an Artificial Intelligence (AI) and offers for you to test it out.

You decide to start by testing developed AI for some very basic abilities such as multiplying 317 by 913, and memorizing your phone number. To your surprise, the system fails on both tasks.

When you question the system's creators, you are told that their AI is human-level artificial intelligence (HLAI) and, as most people cannot perform those tasks, neither can their AI. In fact, you are told, many people can't even compute 13 x 17, or remember the name of a person they just met, or recognize their coworker outside of the office, or name what they had for breakfast last Tuesday.

The list of such limitations is quite significant and is the subject of study in the field of Artificial Stupidity.

Trying to define what HLAI should and shouldn't be able to do is just as difficult as trying to define the same for an 18-year-old human. Change a tire? Run a business? Win at Jeopardy?

This line of reasoning usually swings the conversation to narrow intelligence versus general intelligence. But here we run into a problem as well. General AI is, hypothetically, a machine capable of learning any function in any domain that a human can. That means a single GAI should be capable of replacing any human in the entire world given proper training.

Humans don't work that way, however. There's no general human intelligence. The combined potential for human function is not achievable by an individual. If we build a machine capable of replacing any of us, it stands to reason it will.

And that's cause for concern. We don't consider which ants are most talented when we wreck an anthill to build a softball field, why should our intellectual superiors?

The good news is that most serious AI experts don't think GAI will happen anytime soon, so the most we'll have to deal with is whatever fuzzy definition of HLAI the person or company who claims it comes up with. Much like Google decided it had achieved quantum supremacy by coming up with an arbitrary (and disputed) benchmark, it'll surprise nobody in the industry if, for example, the AI crew at Facebook determines that a specific translation algorithm they've invented meets their self-imposed criteria for HLAI (or something like that). Maybe it'll be Amazon or OpenAI.

The bad news is that you also won't find many reputable scientists willing to rule GAI out. And that means we could be a eureka! or two away from someone like Ian Goodfellow oopsing up an algorithm that ties general intelligence to hardware. And when that happens, we could be looking at Bostrom's Paperclip Maximizer in full effect. In other words: the robots won't kill us out of spite, they'll just forget we exist and transform the world and its habitats to suit their needs, just as we did.

That's one theory anyway. And, as with any potential extinction scenario, it's important to have a plan to stop it. Based on the fact that we can't know exactly what's going to happen once a superintelligent artificial being emerges, we should probably just start hard-coding artificial stupidity into the mix.

The right dose of unwavering limitations (think Asimov's Laws of Robotics, but more specific to the number of parameters or the amount of compute a given model can use, and to what level of network integration can exist between disparate systems) could spell the difference between our existence and extinction.

So, rather than attempting to program advanced AI with a philosophical view on the sanctity of human life and what constitutes the greater good, we should just hamstring them with artificial stupidity from the start.

Published July 17, 2020 19:55 UTC


An AI algorithm inspired by how kids learn is harder to confuse – MIT Technology Review

Information firehose: The standard practice for teaching a machine-learning algorithm is to give it all the details at once. Say you're building an image classification system to recognize different species of animals. You show it examples of each species and label them accordingly: "German shepherd" and "poodle" for dogs, for example.

But when a parent is teaching a child, the approach is entirely different. They start with much broader labels: any species of dog is at first simply a dog. Only after the child has learned how to distinguish these simpler categories does the parent break each one down into more specifics.

Dispelled confusion: Drawing inspiration from this approach, researchers at Carnegie Mellon University created a new technique that teaches a neural network to classify things in stages. In each stage, the network sees the same training data. But the labels start simple and broad, becoming more specific over time.

To determine this progression of difficulty, the researchers first showed the neural network the training data with the final detailed labels. They then computed what's known as a confusion matrix, which shows the categories the model had the most difficulty telling apart. The researchers used this to determine the stages of training, grouping the least distinguishable categories together under one label in early stages and splitting them back up into finer labels with each iteration.
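A minimal sketch of that progression, assuming scikit-learn and a standard toy data set (this is not the CMU authors' code): train a quick probe model on the detailed labels, read the most-confused pair of classes off the confusion matrix, and merge that pair under one broad label for an early training stage.

```python
# Sketch of confusion-driven curriculum staging on a toy data set.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 0: quick model on the detailed labels, just to measure confusion.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
cm = confusion_matrix(y_te, probe.predict(X_te))
np.fill_diagonal(cm, 0)                          # ignore correct predictions
a, b = np.unravel_index(cm.argmax(), cm.shape)   # most-confused class pair
print(f"hardest to tell apart: {a} vs {b}")

# Stage 1: merge that pair under one broad label and train on the easier task.
y_coarse = np.where(np.isin(y_tr, [a, b]), min(a, b), y_tr)
stage1 = LogisticRegression(max_iter=1000).fit(X_tr, y_coarse)

# A later stage would restore the fine labels (splitting the merged class back
# up) and continue training, following the coarse-to-fine idea in the article.
```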

Better accuracy: In tests with several popular image-classification data sets, the approach almost always led to a final machine-learning model that outperformed one trained by the conventional method. In the best-case scenario, it increased classification accuracy by up to 7%.

Curriculum learning: While the approach is new, the idea behind it is not. The practice of training a neural network on increasing stages of difficulty is known as curriculum learning and has been around since the 1990s. But previous curriculum learning efforts focused on showing the neural network a different subset of data at each stage, rather than the same data with different labels. The latest approach was presented by the paper's coauthor Otilia Stretcu at the International Conference on Learning Representations last week.

Why it matters: The vast majority of deep-learning research today emphasizes the size of models: if an image-classification system has difficulty distinguishing between different objects, it means it hasn't been trained on enough examples. But by borrowing insight from the way humans learn, the researchers found a new method that allowed them to obtain better results with exactly the same training data. It suggests a way of creating more data-efficient learning algorithms.


Clara Labs nabs $7M Series A as it positions its AI assistant to meet … – TechCrunch

Clara Labs, creator of the Clara AI assistant, is announcing a $7 million Series A this morning led by Basis Set Ventures. Slack Fund also joined in the round, alongside existing investors Sequoia and First Round. The startup will be looking to further differentiate within the crowded field of email-centric personal assistants by building in features and integrations to address the needs of enterprise teams.

Founded in 2014, Clara Labs has spent much of the last three years trying to fix email. When CC-ed on emails, the Clara assistant can automatically schedule meetings, reasoning around preferences like location and time.

If this sounds familiar, it's because you've probably come across x.ai or Fin. But while all three startups look similar on paper, each has its own distinct ideology. Where Clara is running toward the needs of teams, Fin embraces the personal pains of travel planning and shopping. Meanwhile, x.ai opts for maximum automation and lower pricing.

That last point around automation needs some extra context. Clara Labs prides itself on its implementation of a learning strategy called human-in-the-loop. For machines to analyze emails, they have to make a lot of decisions: is that date when you want to grab coffee, or is it the start of your vacation, when you'll be unable to meet?

In the open world of natural language, incremental machine learning advances only get you so far. So instead, companies like Clara convert uncertainty into simple questions that can be sent to humans on demand (think of a proprietary version of Amazon Mechanical Turk). The approach has become a tech trope with the rise of all things AI, but Maran Nelson, CEO of Clara Labs, is adamant that there's still a meaningful way to implement agile AI.

The trick is ensuring that a feedback mechanism exists for these questions to serve as training materials for uncertain machine learning models. Three years later, Clara Labs is confident that its approach is working.
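The general pattern is easy to sketch. The toy Python below is not Clara's actual system; it just shows low-confidence predictions being routed to a human queue, with the human's answers folded back into the training data. The labels, threshold, and example sentences are invented.

```python
# Toy human-in-the-loop sketch: the model answers when confident, otherwise a
# human labels the example and that label feeds the next retraining pass.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["let's meet tuesday at 3pm", "coffee next friday morning",
               "I'm on vacation all of next week", "out of office until the 12th"]
train_labels = ["meeting_time", "meeting_time", "unavailable", "unavailable"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

human_queue = []     # items a person will label on demand
CONFIDENCE = 0.75    # hypothetical routing threshold

def route(text):
    probs = model.predict_proba([text])[0]
    if probs.max() >= CONFIDENCE:
        return model.classes_[probs.argmax()]   # machine handles it
    human_queue.append(text)                    # ask a human instead
    return "needs_human"

print(route("does thursday at 10 work for you?"))

def incorporate(text, human_label):
    """Human answers become new labeled examples; the model is retrained."""
    train_texts.append(text)
    train_labels.append(human_label)
    model.fit(train_texts, train_labels)
```

The important part is the feedback loop in `incorporate`: every question a human answers shrinks the set of cases the model is uncertain about next time.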

Bankrolling the human in human-in-the-loop does cost everyone more, but people are willing to pay for performance. After all, even a nosebleed-inducing $399 per month top-tier plan costs a fraction of a real human assistant.

Anyone who has ever experimented with adding new email tools into old workflows understands that Gmail and Outlook have tapped into the dark masochistic part of our brain that remains addicted to inefficiency. It's tough to switch, and the default of trying tools like Clara is often a slow return to the broken way of doing things. Nelson says she's keeping a keen eye on user engagement, and numbers are healthy for now; there's undoubtedly a connection between accuracy and engagement.

As Clara positions its services around the enterprise, it will need to take into account professional sales and recruiting workflows. Integrations with core systems like Slack, CRMs and job applicant tracking systems will help Clara keep engagement numbers high while feeding machine learning models new edge cases to improve the quality of the entire product.

"Scheduling is different if you're a sales person and your sales team is measured by the total number of meetings scheduled," Nelson told me in an interview.

Nelson is planning to make new hires in marketing and sales to push the Clara team beyond its current R&D comfort zone. Meanwhile the technical team will continue to add new features and integrations, like conference room booking, that increase the value-add of the Clara assistant.

Xuezhao Lan of Basis Set Ventures will be joining the Clara Labs board of directors as the company moves into its next phase of growth. Lan will bring both knowledge of machine learning and strategy to the board. Today's Clara deal is one of the first public deals to involve the recently formed $136 million AI-focused Basis Set fund.


Immervision uses AI for better wide-angle smartphone videos and photos – VentureBeat

Immervision has announced real-time video distortion correction software to help create professional-quality videos on smartphones. The Montreal company also revealed an off-the-shelf 125-degree wide-angle lens, enabling mobile phone makers to improve their next-generation smartphone cameras. The software algorithms are now available for mobile phone makers to license from Immervision's exclusive distribution partner Ceva and promise to enhance images through artificial intelligence and machine learning.

The wider field of view (FOV) in phones creates more apparent distortion than you would see with other cameras. But the software algorithms from Immervision help correct stretched bodies and can adjust the proportions of objects, lines, and faces in real time. "The AI can take a line that looks like a banana and straighten it out," said Alessandro Gasparini, executive vice president of operations and chief commercial officer at Immervision, in an interview with VentureBeat.

Whether the goal is to leave the preset as is, fully customize it, let end users decide, allow phone orientation to dictate, or leverage machine learning to control the result, Immervision said it can help phone makers differentiate their hardware. Gasparini said the algorithms offer real-time distortion correction in both videos and pictures, adjusting the perspective, capturing more of a scene with less distortion, and correcting line and object distortion.
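Immervision's algorithms are proprietary, but the underlying operation resembles classic lens-distortion correction. A minimal OpenCV sketch, assuming the lens has already been calibrated and using invented intrinsics, looks like this:

```python
# Generic lens-distortion correction with OpenCV (not Immervision's software):
# once a wide-angle lens has been calibrated, each frame can be remapped so
# straight lines stay straight. The intrinsics below are invented for the demo.
import cv2
import numpy as np

h, w = 720, 1280
frame = np.zeros((h, w, 3), dtype=np.uint8)
for x in range(0, w, 80):                       # synthetic grid "scene"
    cv2.line(frame, (x, 0), (x, h), (255, 255, 255), 1)

# Hypothetical intrinsics from a one-time calibration of the lens.
K = np.array([[900.0, 0.0, w / 2],
              [0.0, 900.0, h / 2],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3 (barrel distortion)

undistorted = cv2.undistort(frame, K, dist)
cv2.imwrite("corrected_frame.png", undistorted)
```

Doing this for every frame of a video, while also keeping faces and bodies at the edges from looking stretched, is where the real engineering effort lies.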

[Image: Immervision fixes curved building lines caused by smaller fields of view. Image credit: Immervision]

While the majority of tier-one phone makers have wide-angle lenses in their phones, tier-two and tier-three mobile brands have yet to adopt them. Immervision's technology has been preconfigured on popular sensors, including Sony, Omnivision, and Samsung, and has one lens with ready-to-use software, reducing camera customization and integration time. The lens is 6.4 millimeters high and ranges from eight megapixels to 20 megapixels in terms of image quality.

"We design lenses for the mobile industry, with action cameras and broadcast cameras of different sizes, different resolutions, and different fields of view," Gasparini said.

Immervision surveyed users to find out what kind of image quality and distortion issues mattered most. "Some lenses with low FOV numbers can make people on the edges of photos look fatter than they are, and that really makes people mad," Gasparini said. Most smartphones have lenses that are 100 to 130 degrees FOV. Immervision's competition in this market includes Apple and Samsung, which do their own work. But Immervision aims to arm the rest of the industry with the same kind of high-quality cameras.

[Image: Immervision helps a camera get a better view of a scene. Image credit: Immervision]

Gasparini said Immervision specializes in a combination of optical design and image processing, with different types of engineers under the same roof.

"We find ourselves to be one of the largest independent optical design firms in the world," he said. "If you look at some of the companies that manufacture optics today for smartphones, they might have one or two optical designers in their factory. Actually, we have more, and we have cross-pollination of different competencies in our company."

Immervision was founded in 2000 and employs around 30 people. Gasparini said the company has managed good profit margins as it works to help cameras better reproduce reality.

"Software can do certain magic on images. But there are limitations," Gasparini said. "And there are challenges the next generation has dealing with more video. The new smartphones are cinematographic, and more people will be shooting short films and movies with them. This will increase the challenge of processing them in real time."


No, Facebook did not shut down AI program for getting too smart – WTOP


WASHINGTON - Facebook artificial intelligence bots tasked with dividing items between them have been shut down after the bots started talking to each other in their own language.

But hold off on making comparisons to Terminator or The Matrix.

ForbesBooks Radio host and technology correspondent Gregg Stebben said that Facebook shut down the artificial intelligence program not because the company was afraid the bots were going to take over, but because the bots did not accomplish the task they were assigned to do: negotiate.

The bots are not really robots in the physical sense, Stebben said, but chat bots: little servers or digital chips doing the responding. The bots were just discussing how to divide some items between them, according to Gizmodo.

The language the program created comprised English words with a syntax that would not be familiar to humans, Stebben said.

Below is a sample of the conversation between the bots, called Bob and Alice:

Bob: i can i i everything else

Alice: Balls have zero to me to me to me to me to me to me to me to me to

Though there is a method to the bots' language, FAIR scientist Mike Lewis told FastCo Design that the researchers' interest was having bots who could talk to people.

"If we're calling it AI, why are we surprised when it shows intelligence?" Stebben said. "Increasingly we are going to begin communicating with beings that are not humans at all."

So should there be fail-safes to prevent an apocalyptic future controlled by machines?

"What we will find is, we will never achieve a state where we have absolute control of machines," Stebben said. "They will continue to surprise us, we will have to do things to continue to control them, and I think there will always be a risk that they will do things that we didn't expect."

WTOP's Dimitri Sotis contributed to this report.



Widex Introduces My Sound: A New Portfolio of AI-enabled Features for Customization of Its Industry-leading Widex MOMENT Hearing Aids – PRNewswire

HAUPPAUGE, N.Y., June 9, 2021 /PRNewswire/ -- Building on the success of the revolutionary, artificial intelligence-based SoundSense Learn technology, Widex USA Inc. today announced Widex My Sound, a portfolio of AI features including a new solution that instantly enables intelligent customization of the company's cutting-edge Widex MOMENT hearing aids based on a user's activity and listening intent.

Widex was the first company to enable user-driven sound personalization by leveraging artificial intelligence in hearing aids. Now, within My Sound, Widex launches the third generation of its AI technology, vastly improving the usability of the AI solution based on the extensive data the company has gathered from the previous two generations.

This new AI solution further combines the capacity of artificial intelligence with users' personal real-world experience to deliver another level of automated customization. Through AI modeling and clustering of data collected via the Widex SoundSense Learn AI engine, highly qualified sound profile recommendations for the individual user can now be made based on the intent, need, and preferences of thousands of users in similar real-world situations.

"Widex is leading the industry by combining artificial intelligence and human intelligence to create natural sound experiences and foster social participation through better hearing," said Jodi Sasaki-Miraglia, AuD, Widex's Director of Professional Training and Education. "Once Widex Moment is fit properly by a local licensed hearing care professional, the user can, if necessary, customize their hearing aids with ease, choosing from multiple AI features. Plus, our latest generation delivers results in just seconds, putting control and intelligent personalization into the hands of every user."

My Sound is integrated into the Widex MOMENT app and is the home for all the powerful AI personalization Widex offers. The latest generation of AI utilizes the cloud-based user data of Widex users worldwide to make sound profile recommendations based on an individual user's current activity and listening intent. Users launch My Sound from the app and begin by selecting their activity, such as dining, then choosing their intent, such as socializing, conversation, or enjoying music.

Based on the user's selections, Widex can draw on tens of thousands of real-life data points, reflecting the preferences and listening situations of other Widex users who have used the app previously. In seconds, the user is presented with two recommendations, which can both be listened to before selecting the settings that sound best. In the event neither recommendation meets the individual user's needs, they can launch SoundSense Learn from the same screen to further personalize their hearing experience through that solution's sophisticated A/B testing process.
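Widex has not published its models, but the described flow can be sketched generically: pool earlier users' preferred profiles by (activity, intent) and surface the two most popular ones for a new user's selection. The data and profile names below are invented.

```python
# Sketch only (not Widex's engine): recommend the two most popular sound
# profiles that earlier users chose for the same activity and intent.
from collections import Counter

# (activity, intent) -> sound profiles chosen by previous users (invented data)
HISTORY = {
    ("dining", "conversation"): ["speech_focus", "speech_focus", "balanced", "noise_reduction"],
    ("dining", "music"):        ["music_wide", "balanced", "music_wide"],
}

def recommend(activity, intent, k=2):
    profiles = HISTORY.get((activity, intent), [])
    return [name for name, _ in Counter(profiles).most_common(k)]

print(recommend("dining", "conversation"))   # e.g. ['speech_focus', 'balanced']
```

In the real product the "history" would be cloud-scale usage data and the profiles would be actual hearing-aid parameter sets, with the A/B flow described below as the fallback when neither recommendation fits.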

"Widex has created a radically different way of delivering hearing solutions for today's active hearing aid user," Sasaki-Miraglia continued. "Instead of having to program the hearing aid in a way that covers all situations the user might encounter, the hearing care professional ensures the best possible starting point for the user and My Sound then allows users to personalize their experience in real life, easily and instantly. In this way, the hearing solution adapts to the user's preferences and becomes even more personal."

The Widex MOMENT app, including My Sound with SoundSense Learn, is available for Apple and Android devices and is designed to work with Widex MOMENT Bluetooth hearing aids.


About Widex

At Widex we believe in a world where there are no barriers to communication; a world where people interact freely, effortlessly and confidently. With sixty years' experience developing state-of-the-art technology, we provide hearing solutions that are easy to use, seamlessly integrated in daily life and enable people to hear naturally. As one of the world's leading hearing aid producers, our products are sold in more than one hundred countries, and we employ 4,000 people worldwide.



A Facebook AI Unexpectedly Created Its Own Unique Language – Futurism

In Brief: While developing negotiating chatbot agents, Facebook researchers found that the bots spontaneously developed their own non-human language as they improved their techniques, highlighting how little we still know about how artificial intelligences learn.

The Future of Language

A recent Facebook report on the way chatbots converse with each other has given the world a glimpse into the future of language.

In the report, researchers from the Facebook Artificial Intelligence Research lab (FAIR) describe training their chatbot dialog agents to negotiate using machine learning. The chatbots were eager and successful dealmaking pupils, but the researchers eventually realized they needed to tweak their model because the bots were creating their own negotiation language, diverting from human languages.

To put it another way, when they used a model that allowed the chatbots to converse freely, using machine learning to incrementally improve their conversational negotiation strategies as they chatted, the bots eventually created and used their own non-human language.

The unique, spontaneous development of a non-human language was probably the most baffling and thrilling development for the researchers, but it wasn't the only one. The chatbots also proved to be smart about negotiating and used advanced strategies to improve their outcomes. For example, a bot might pretend to be interested in something that had no value to it in order to be able to sacrifice that thing later as part of a compromise.

Although Facebook's bargain-hunting bots aren't a sign of an imminent singularity (or anything even approaching that level of sophistication), they are significant, in part because they prove once again that an important realm we once assumed was solely the domain of humans, language, is definitely a shared space. This discovery also highlights how much we still don't know about the ways that artificial intelligences (AIs) think and learn, even when we create them and model them after ourselves.


Role of AI soars in tackling Covid-19 pandemic – BusinessLine

For the first time in a pandemic, Artificial Intelligence (AI) is playing a role like never before in areas ranging from diagnosing risk to doubt-clearing, from delivery of services to drug discovery in tackling the Covid-19 outbreak.

While BlueDoT, a Canadian health monitoring firm that crunches flight data and news reports using AI, is being credited by international reports to be the first to warn its clients of an impending outbreak on December 31, beating countries and international developmental agencies, the Indian tech space too is buzzing with coronavirus cracking activities.

CoRover, a start-up in the AI space that has earlier developed chatbots for the railways' ticketing platform, has now created a video-bot by collaborating with a doctor from Fortis Healthcare. On this platform, a real doctor from Fortis Healthcare (not a cartoon or an invisible knowledge bank) will take questions from people about Covid-19.

Apollo Hospitals has come up with a risk assessment scanner for Covid-19, which is available in six languages and guides people about the potential risk of having the virus. The Jaipur-based Sawai Man Singh Hospital is trying out a robot, made by robot maker Club First, to serve food and medicines to patients to lower the exposure of health workers to coronavirus patients.

"This is the first time in healthcare that Artificial Intelligence, Machine Learning, and Natural Language Processing are being used to create a Virtual Conversational AI platform, which assists anyone to be able to interact with doctors and have their queries answered, unlike other search engines, which do not guarantee the authenticity of information," CoRover's Ankush Sabharwal claimed, while talking of its video-bot, which is likely to be launched soon.

Sabharwal told BusinessLine that answers to numerous questions have been recorded by Pratik Yashavant Patil, a doctor from Fortis Healthcare. In his AI avatar, Doctor Patil will bust myths, chat with you and will probably have answers to a lot of your questions.

Another start-up, Innoplexus AG, headquartered in Germany but founded by Indians, is claiming that its AI-enabled drug discovery platform is helping to arrive at combinations of existing drugs that may prove more efficacious in treating Covid-19 cases.

Its AI platform, after scanning the entire universe of Covid-related data, has thrown up results showing that Hydroxychloroquine or Chloroquine, an anti-malaria drug that is being prescribed as a prophylactic for coronavirus under many protocols, works more effectively with some other existing drugs than when it is used alone, the company claims.

"Our analysis shows that Chloroquine works more effectively in combination with Pegasys (a drug used to treat Hepatitis C) or Tocilizumab (a rheumatoid arthritis drug) or Remdesivir (a yet-to-be-approved antiviral drug for Ebola) or Clarithromycin (an antibiotic). We are hoping to work with drug regulators and partners to test these in pre-clinical and clinical trials," said Gunjan Bhardwaj, CEO, Innoplexus.

To be sure, hundreds of clinical trials are currently under way with several cocktails of medicines for Covid-19 across the world, and some of these drugs were part of trials held in China and Taiwan. The World Health Organization (WHO) itself is monitoring a global mega clinical trial for testing drugs for Covid-19, called Solidarity, which India decided to join on Friday.


RSA: Eric Schmidt shares deep learning on AI – CIO

By David Needle

CIO | Feb 16, 2017 3:05 PM PT


SAN FRANCISCO - Alphabet chairman Eric Schmidt says artificial intelligence is key to advances in diverse areas such as healthcare and datacenter design, and that security concerns related to it are somewhat misguided. (Alphabet is the parent company of Google.)

In a wide-ranging on-stage conversation here at the RSA Security conference with Gideon Lewis-Kraus, author of The Great A.I. Awakening, Schmidt shared his insights from decades of work related to AI (he studied AI as a PhD student 40 years ago) and why the technology seems to finally be hitting its stride.

In fact, last year Google CEO Sundar Pichai said AI is what helps the search giant build better products over time. "We will move from a mobile-first to an AI-first world," he said.


Asked about that, Schmidt said that Google is still very much focused on mobile advances. "Going from mobile first to AI first doesn't mean you stop doing one of those," he said.

Google's approach to AI is to take the algorithms it develops and apply them to business problems. "AI works best when it has a lot of training data to learn from," he said. For example, Google used AI to develop picture search, using computer vision and training the system to recognize the difference between a gazelle and a lion after showing it thousands of pictures of each. "That same mechanism applies to many things," he said.

As for business problems, Schmidt said Google's top engineers work to make their data centers as efficient as possible. "But using AI we've been able to get a 15 percent improvement in power use."

In healthcare, Schmidt said machine learning can help with medical diagnosis and predict the best course of treatment. "We're at the point where if you have a numeric sequence, (AI software) can predict what the following number will be. That's healthcare. People go to the hospital to find out what's going to happen next, and we have small projects that I think show it can be done (using AI)."

Schmidt said that because computer vision technology is much better than human vision, it can review millions of pictures, far beyond what a human being could process, to better identify problem areas. Speech recognition systems are also capable of understanding far more than humans do. But these are tools, he said, for humans to leverage. "Computers have vision and speech, that's not the same as AI," he said.

Lewis-Kraus addressed fears that if AI systems become self-aware they could threaten humanity. "The work in AI going on now is doing pretty much what we think it's supposed to do. At what point can the system self-modify? That's worth a discussion, but we are nowhere near any of those stages, we're still in baby steps," said Schmidt. "You have to think in terms of ten, 20 or 30 years. We're not facing any danger now."

Schmidt also raised the concern that security fears and other factors could lead governments to limit access to the internet, as countries such as China already do. "I am extremely worried about the likelihood countries will block the openness and interconnectedness we have today. I wrote a book on it (The New Digital Age)," he said.

"I fear the security breaches and attacks on the internet will be used as a pretext to shut down access," Schmidt said, adding he would like to see governments come to an agreement and mechanisms to keep access to the Internet open. In the area of AI, he wants to see the industry push to make sure research stays out in the open and not controlled by military labs.

Addressing the hall packed with security professionals, Schmidt made the case for open research, noting that historically companies never want to share anything about their research. "We've taken the opposite view, to build a large ecosystem that is completely transparent, because it will get fixed faster," he said. "Maybe there are some weaknesses, but I would rather do it that way because there are thousands of you who will help plug it."

"Security is not one layer. Naïve engineers say they can build a better firewall, but that's not really how things work. If you build a system that is perfect and closed, you will find out it's neither perfect nor closed."



Curi Bio Dips into AI with Acquisition of Dana Solutions – Medical Device and Diagnostics Industry

Curi Bio said it has acquired Dana Solutions, a company that specializes in the application of artificial intelligence and machine learning to in vitro cell-based assays. The deal was for an undisclosed sum.

Seattle, WA-based Curi will gain access to Dana's AI/ML-based platforms including PhenoLearn, a deep learning platform for modeling cell and tissue phenotypes; Pulse, an automated platform for contractility analysis of beating cardiomyocytes; and PhenoTox, a deep learning platform for predictive safety pharmacology.

Curi's human iPSC-based platforms help drug developers build predictive and mature human iPSC tissues, especially for the discovery, safety testing, and efficacy testing of new therapeutics, with a focus on cardiac, skeletal muscle, and neuromuscular disease models. Curi seeks to de-risk and expedite the development of new drugs by providing human-relevant preclinical data and decreasing the industry's dependence on animal models, which often fail to translate to humans.

"Curi Bio is developing human-relevant platforms integrating human cells, systems, and data to accelerate the discovery of new medicines," said Curi CEO Michael Cho. "With the acquisition of Dana's AI/ML technologies for cell-based assays, Curi is now uniquely positioned to offer pharmaceutical companies an integrated platform leveraging predictive human iPSC-derived cells, tissue-specific biosystems, and AI/ML-enabled phenotypic data insights."

Go here to read the rest:

Curi Bio Dips into AI with Acquisition of Dana Solutions - Medical Device and Diagnostics Industry

FDA issues landmark clearance to AI-driven ICU predictive tool – Healthcare IT News

The U.S. Food and Drug Administration has authorized the use of CLEW Medical's artificial intelligence tool to predict hemodynamic instability in adult patients in intensive care units, the company announced on Wednesday.

The tool, CLEWICU, uses AI-based algorithms and machine learning models to identify the likelihood of occurrence of significant clinical events for ICU patients.

CLEW says the clearance is the FDA's first for such a device.

"AI can be a powerful force for change in healthcare, enabling assessment of time-critical patient information and predictive warning of deterioration that could enable better informed clinical decisions and improved outcomes in the ICU," said Dr. David Bates, medical director of clinical and quality analysis in information systems at Mass General Brigham and CLEW Advisory Board member, in a statement.

WHY IT MATTERS

Hemodynamic instability is a common COVID-19 complication, so CLEWICU's predictive capabilities could prove especially useful during the ongoing pandemic, particularly given ICUs' strained resources around the country.

By analyzing patient data from various sources, including electronic health records and medical devices, CLEWICU provides a picture of overall unit status and helps identify individuals whose conditions are likely to deteriorate.

According to the company, the system notifies users of clinical deterioration up to eight hours in advance, enabling early intervention. The system also identifies low-risk patients who are unlikely to deteriorate, thus potentially enabling better ICU resource management and optimization.
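CLEW has not published CLEWICU's internals, so the sketch below only illustrates the general shape of such an early-warning model: a classifier trained on patient snapshots that outputs a deterioration risk score. The feature set (heart rate, mean arterial pressure, lactate), the synthetic data, and the label definition are all assumptions made for the example.

```python
# Schematic ICU early-warning model, not CLEWICU: all features, data, and
# thresholds below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Synthetic vital-sign "snapshots"
X = np.column_stack([
    rng.normal(85, 15, n),    # heart rate (bpm)
    rng.normal(75, 12, n),    # mean arterial pressure (mmHg)
    rng.normal(1.8, 0.9, n),  # lactate (mmol/L)
])
# Synthetic label: hemodynamic instability within the next 8 hours
logits = 0.04 * (X[:, 0] - 85) - 0.08 * (X[:, 1] - 75) + 1.2 * (X[:, 2] - 1.8)
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
snapshot = np.array([[120, 55, 4.0]])       # tachycardic, hypotensive, high lactate
print(model.predict_proba(snapshot)[0, 1])  # estimated risk of deterioration
```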

"CLEW's AI-based solution is a huge leap forward in ICU patient care, providing preemptive and potentially lifesaving information that enables early intervention, reduces alarm fatigue and can potentially significantly improve clinical outcomes," said Dr. Craig Lilly of University of Massachusetts Medical School in a statement.

THE LARGER TREND

The FDA granted emergency use authorization to CLEWICU this past June. The tool was among several AI-powered technology innovations developed, or modified, in response to the ongoing pandemic.

Mayo Clinic Chief Information Officer Cris Ross said in December that AI has been crucial in understanding the pandemic. He noted the variety of COVID-19-specific use cases, while he also flagged the risk of algorithmic bias.

"We know that Black and Hispanic patients are infected and die at higher rates than other populations. So we need to be vigilant for the possibility that that fact about the genetic or other predisposition that might be present in those populations could cause us to develop triage algorithms that might cause us to reduce resources available to Black or Hispanic patients because of one of the biases introduced by algorithm development," said Ross.

ON THE RECORD

"We are proud to have received this landmark FDA clearance and deliver a first-of-its-kind product for the industry, giving healthcare providers the critical data that they need to prevent life-threatening situations," said Gal Salomon, CLEW CEO, in a statement.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.

Read the original here:

FDA issues landmark clearance to AI-driven ICU predictive tool - Healthcare IT News

Cylance is golden: BlackBerry's new cybersecurity R&D lab is all about AI and IoT – VentureBeat

BlackBerry has announced a new business unit dedicated entirely to cybersecurity research and development (R&D).

The BlackBerry Advanced Technology Development Labs (BlackBerry Labs) will operate at the forefront of cybersecurity R&D, according to BlackBerry. The unit will be spearheaded by BlackBerry chief technology officer (CTO) Charles Eagan, who will lead a team of 120 researchers, security experts, software developers, architects, and more.

Machine learning will be a major focus at the start, with BlackBerry exploring ways to leverage AI to improve security in cars and mobile devices, among other endpoints in the burgeoning internet of things (IoT) sphere.

"Primarily, the purpose of this new division is to integrate emerging technologies into the work we're currently accomplishing," Eagan told VentureBeat. "We're now looking at applying machine learning to our existing areas of application, including automotive, mobile security, and so on. As new technologies and threats emerge, BlackBerry Labs will allow us to take a proactive approach to cybersecurity, not only updating our existing solutions, but evaluating how we can branch out and provide a more comprehensive, data-based, and diverse portfolio to secure the internet of things."

Though the new cybersecurity R&D business unit is now operational, the lab space itself, which will be based at the company's operations center in Waterloo, Canada, is still being built.

BlackBerry's transition from phone maker to a company specializing in software and services is well documented, though its brand still lives on in some smartphones through a licensing deal. The company never quite recovered from the dawn of the modern smartphone era, when its shares spiked at nearly $150 in mid-2008 (when it was still known as Research in Motion) before dropping by around two-thirds in the space of six months. It's worth noting that this reversal of fortune roughly coincided with Apple's iOS and Google's Android starting to gain a foothold.

Over the past eight years, BlackBerry's shares have hovered at around the $10 mark, and last week they fell to a four-year low after the company missed its Q2 revenue estimates with a net loss of $44 million, due in part to weak enterprise software sales. Today, BlackBerry's focus is on the B2B realm, where it offers software systems for the automotive industry, including infotainment and autonomous vehicles, as well as medical devices, industrial automation, and more. Many of these applications seek to address security concerns, safeguarding connected devices in a world full of threats, and BlackBerry is looking to reinvent itself by leveraging AI and machine learning.

"The next generation of connected products [is] going to come online sooner than we think, and we're going to use machine learning to better understand and manage the policies and identities of these connected devices," Eagan explained, "to create a safe environment that will allow us to collaborate better, faster, and smarter across great distances and in all areas of application."

Above: BlackBerry CTO Charles Eagan

Last November, BlackBerry announced it was buying AI-powered cybersecurity startup Cylance for $1.4 billion, with the deal closing in February. In a nutshell, Cylance is an AI-powered endpoint protection platform designed to prevent advanced threats such as malware and ransomware.

The Cylance acquisition was entirely in line with BlackBerry's effort to become "the world's largest and most trusted AI-cybersecurity company," as CEO John Chen put it at the time. The deal was all about securing endpoints for enterprise customers and was specifically designed to boost BlackBerry's enterprise-focused IoT platform Spark and its UEM and QNX products.

The integration of Cylance into BlackBerry's core product is expected to be complete in early 2020. And the new cybersecurity unit is effectively setting a foundation on which Cylance, or BlackBerry Cylance as it's now known, can flourish.

"Primarily, my role [in BlackBerry Labs] is to make sure that we're making the most of the Cylance acquisition and that we have connectivity between all the different business units," Eagan said. "We're really focusing on the importance of integrating BlackBerry Cylance's machine learning technology into BlackBerry's product pipeline. However, it's not just about creating an ecosystem of machine learning-based solutions, but rather smartly and strategically adopting machine learning into the work we're accomplishing each day. My role is primarily helping to bridge the different teams and create this connectivity and cross-pollination between the various business units."

Above: Cylance dashboard

Barely a day goes by without some form of data breach, hack, or security lapse hitting the headlines, in part due to the growth of cloud computing and connected devices. And the growing threat presented by the sheer number of connected devices permeating homes and offices has created an opportunity for companies that offer tools to protect these various endpoints. The global cybersecurity market was reportedly worth around $152 billion in 2018, and it's expected to grow to $250 billion within a few years.

Endpoint protection is a hot area in cybersecurity, with the likes of CrowdStrike recently hitting the public markets with a bang, SentinelOne closing a $120 million funding round, and Shape Security raising $51 million at a $1 billion valuation as it prepares for its own IPO. There are a number of bigger players in the space too, of course, including Microsoft, Cisco, Intel, Trend Micro, and many others. And it's against this backdrop that BlackBerry is trying to reinvent itself by investing in new cybersecurity technologies.

"BlackBerry Labs is an intentional investment into the future of the company," Eagan said, noting that initial personnel estimates for BlackBerry Labs quickly escalated from 20 to 120. "The investment of the people we've put into BlackBerry Labs is significant, as we've handpicked the team to include experts in the embedded IoT space with diverse capabilities, including strong data science expertise."

Notably, BlackBerry is also setting up dedicated hardware labs at its offices in Waterloo and Ottawa, where BlackBerry Labs personnel can test new products. Eagan also said the company is looking to partner with six universities on some of its R&D efforts.

In the more immediate term, Eagan said BlackBerry Labs will focus on automotive-based applications for machine learning in cybersecurity, which is particularly relevant given the expected growth of connected cars in the coming years. The connected car market was pegged at $63 billion in 2017, a figure that could rise to more than $200 billion by 2025.

With CES 2020 on the horizon, Eagan said BlackBerry will be using the annual Las Vegas tech extravaganza to demonstrate how its machine learning smarts can improve security in connected cars.

"As vehicles become connected, we need to ensure a cybersecurity operations center is running diagnostics within the car at all times to facilitate a monitored environment," Eagan explained. "This is something BlackBerry Cylance does extremely well, and we're planning to tangibly bring it into the automotive sector in the upcoming months."

Above: A photo from the BlackBerry Network Operations Center in Waterloo, Canada, where BlackBerry Labs will be located

At a time when every company is effectively becoming a software company, the need to run a watertight ship is greater than ever. However, much has been written about the cybersecurity workforce shortfall and the fact that it isn't showing any signs of improving, which is why companies are investing in automated tools to reduce the need for physical hands on deck.

"As the threat landscape expands, enterprises cannot rely on the same incident reaction-based model that may have been effective in the past," Eagan said. "They need to scale quickly with solutions that leverage AI to help them prepare for attacks and address vulnerabilities in an automated and anticipatory way; it's the only way they'll be able to scale to meet their security needs."

That said, automation is only part of the solution. Skilled personnel are still very much required, which is one of the reasons BlackBerry shelled out north of $1 billion to acquire Cylance: it was as much a talent grab as a product acquisition. And this combination of cutting-edge technology and top talent could help BlackBerry lure others on board.

"The addition of the BlackBerry Cylance team has given us an influx of talent that has proven a real boon for our company's plans to better understand and adopt AI-based technology," Eagan continued. "Implementing and integrating AI-based solutions, like those pioneered by BlackBerry Cylance, is certainly a focus for our team moving forward, but we remain committed to growing and hiring talent that will work alongside automated processes to ensure the best result possible for all users and organizations."

Here is the original post:

Cylance is golden: BlackBerry's new cybersecurity R&D lab is all about AI and IoT - VentureBeat

AI is changing how we do science. Get a glimpse – Science Magazine

By Science News Staff, Jul. 5, 2017, 11:00 AM

Particle physicists began fiddling with artificial intelligence (AI) in the late 1980s, just as the term neural network captured the public's imagination. Their field lends itself to AI and machine-learning algorithms because nearly every experiment centers on finding subtle spatial patterns in the countless, similar readouts of complex particle detectors, just the sort of thing at which AI excels. "It took us several years to convince people that this is not just some magic, hocus-pocus, black box stuff," says Boaz Klima, of Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois, one of the first physicists to embrace the techniques. Now, AI techniques number among physicists' standard tools.

Neural networks search for fingerprints of new particles in the debris of collisions at the LHC.


Particle physicists strive to understand the inner workings of the universe by smashing subatomic particles together with enormous energies to blast out exotic new bits of matter. In 2012, for example, teams working with the world's largest proton collider, the Large Hadron Collider (LHC) in Switzerland, discovered the long-predicted Higgs boson, the fleeting particle that is the linchpin to physicists' explanation of how all other fundamental particles get their mass.

Such exotic particles don't come with labels, however. At the LHC, a Higgs boson emerges from roughly one out of every 1 billion proton collisions, and within a billionth of a picosecond it decays into other particles, such as a pair of photons or a quartet of particles called muons. To reconstruct the Higgs, physicists must spot all those more-common particles and see whether they fit together in a way that's consistent with them coming from the same parent, a job made far harder by the hordes of extraneous particles in a typical collision.

"Algorithms such as neural networks excel in sifting signal from background," says Pushpalatha Bhat, a physicist at Fermilab. In a particle detector, usually a huge barrel-shaped assemblage of various sensors, a photon typically creates a spray of particles, or shower, in a subsystem called an electromagnetic calorimeter. So do electrons and particles called hadrons, but their showers differ subtly from those of photons. Machine-learning algorithms can tell the difference by sniffing out correlations among the multiple variables that describe the showers. Such algorithms can also, for example, help distinguish the pairs of photons that originate from a Higgs decay from random pairs. "This is the proverbial needle-in-the-haystack problem," Bhat says. "That's why it's so important to extract the most information we can from the data."
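As a rough illustration of the signal-versus-background task Bhat describes, the toy sketch below trains a small neural network to separate "photon-like" from "hadron-like" showers using a few invented shower-shape variables. The features, distributions, and scikit-learn model are assumptions for the example, not an actual LHC analysis.

```python
# Toy shower classification: invented features, not real detector data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 5000

def make_showers(width_mean, depth_mean, label):
    width = rng.normal(width_mean, 0.3, n)   # lateral shower width
    depth = rng.normal(depth_mean, 0.5, n)   # longitudinal depth of shower maximum
    iso = rng.exponential(1.0, n) + (0.5 if label else 2.0)  # isolation energy
    return np.column_stack([width, depth, iso]), np.full(n, label)

X_sig, y_sig = make_showers(1.0, 4.0, 1)   # photon-like showers
X_bkg, y_bkg = make_showers(1.6, 5.5, 0)   # hadron-like showers
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([y_sig, y_bkg])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
print("photon probability:", clf.predict_proba([[1.0, 4.1, 0.6]])[0, 1])
```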

Machine learning hasn't taken over the field. Physicists still rely mainly on their understanding of the underlying physics to figure out how to search data for signs of new particles and phenomena. But AI is likely to become more important, says Paolo Calafiura, a computer scientist at Lawrence Berkeley National Laboratory in Berkeley, California. In 2024, researchers plan to upgrade the LHC to increase its collision rate by a factor of 10. At that point, Calafiura says, machine learning will be vital for keeping up with the torrent of data. Adrian Cho

With billions of users and hundreds of billions of tweets and posts every year, social media has brought big data to social science. It has also opened an unprecedented opportunity to use artificial intelligence (AI) to glean meaning from the mass of human communications, psychologist Martin Seligman has recognized. At the University of Pennsylvania's Positive Psychology Center, he and more than 20 psychologists, physicians, and computer scientists in the World Well-Being Project use machine learning and natural language processing to sift through gobs of data to gauge the public's emotional and physical health.

That's traditionally done with surveys. "But social media data are unobtrusive, it's very inexpensive, and the numbers you get are orders of magnitude greater," Seligman says. It is also messy, but AI offers a powerful way to reveal patterns.

In one recent study, Seligman and his colleagues looked at the Facebook updates of 29,000 users who had taken a self-assessment of depression. Using data from 28,000 of the users, a machine-learning algorithm found associations between words in the updates and depression levels. It could then successfully gauge depression in the other users based only on their updates.
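A minimal sketch of that train-on-some-users, score-the-rest workflow might look like the following: a linear model learns word associations with a depression score from a handful of fabricated posts and then scores a held-out post. The corpus, scores, and 0-to-1 scale are invented for illustration and are not the World Well-Being Project's actual pipeline.

```python
# Toy version of learning word associations with a depression score
# and scoring a held-out user; all data here are fabricated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

train_posts = [
    "feeling exhausted and alone again tonight",
    "great day hiking with friends, feeling grateful",
    "can't sleep, everything feels pointless",
    "excited about the new job, celebrating with family",
]
train_scores = [0.8, 0.1, 0.9, 0.05]   # assumed self-assessment scores on a 0-1 scale

vec = TfidfVectorizer()
X = vec.fit_transform(train_posts)
model = Ridge(alpha=1.0).fit(X, train_scores)

held_out = ["no energy lately, feeling so alone"]
print(model.predict(vec.transform(held_out)))   # estimated depression score
```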

In another study, the team predicted county-level heart disease mortality rates by analyzing 148 million tweets; words related to anger and negative relationships turned out to be risk factors. The predictions from social media matched actual mortality rates more closely than did predictions based on 10 leading risk factors, such as smoking and diabetes. The researchers have also used social media to predict personality, income, and political ideology, and to study hospital care, mystical experiences, and stereotypes. The team has even created a map coloring each U.S. county according to well-being, depression, trust, and five personality traits, as inferred from Twitter.

"There's a revolution going on in the analysis of language and its links to psychology," says James Pennebaker, a social psychologist at the University of Texas in Austin. He focuses not on content but style, and has found, for example, that the use of function words in a college admissions essay can predict grades. Articles and prepositions indicate analytical thinking and predict higher grades; pronouns and adverbs indicate narrative thinking and predict lower grades. He also found support for suggestions that much of the 1728 play Double Falsehood was likely written by William Shakespeare: Machine-learning algorithms matched it to Shakespeare's other works based on factors such as cognitive complexity and rare words. "Now, we can analyze everything that you've ever posted, ever written, and increasingly how you and Alexa talk," Pennebaker says. The result: richer and richer pictures of who people are. Matthew Hutson
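The style-based idea can be sketched very simply: count "analytic" markers (articles, prepositions) against "narrative" markers (pronouns, adverbs) and compare their rates. The tiny word lists below are abbreviated stand-ins, not Pennebaker's LIWC dictionaries, and the score is only a cartoon of his measure.

```python
# Cartoon of function-word style analysis; word lists are illustrative stand-ins.
ANALYTIC = {"the", "a", "an", "of", "in", "on", "for", "with", "by"}
NARRATIVE = {"i", "you", "he", "she", "we", "they", "really", "very", "just"}

def style_score(text):
    words = text.lower().split()
    analytic = sum(w in ANALYTIC for w in words)
    narrative = sum(w in NARRATIVE for w in words)
    # Positive: more analytic markers (associated with higher grades in Pennebaker's data).
    # Negative: more narrative markers.
    return (analytic - narrative) / max(len(words), 1)

print(style_score("The analysis of the data in the report supports the claim"))
print(style_score("I really just felt like we had to do it and they agreed"))
```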

For geneticists, autism is a vexing challenge. Inheritance patterns suggest it has a strong genetic component. But variants in scores of genes known to play some role in autism can explain only about 20% of all cases. Finding other variants that might contribute requires looking for clues in data on the 25,000 other human genes and their surrounding DNA, an overwhelming task for human investigators. So computational biologist Olga Troyanskaya of Princeton University and the Simons Foundation in New York City enlisted the tools of artificial intelligence (AI).

Artificial intelligence tools are helping reveal thousands of genes that may contribute to autism.


"We can only do so much as biologists to show what underlies diseases like autism," explains collaborator Robert Darnell, founding director of the New York Genome Center and a physician scientist at The Rockefeller University in New York City. "The power of machines to ask a trillion questions where a scientist can ask just 10 is a game-changer."

Troyanskaya combined hundreds of data sets on which genes are active in specific human cells, how proteins interact, and where transcription factor binding sites and other key genome features are located. Then her team used machine learning to build a map of gene interactions and compared those of the few well-established autism risk genes with those of thousands of other unknown genes, looking for similarities. That flagged another 2500 genes likely to be involved in autism, they reported last year in Nature Neuroscience.
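The guilt-by-association step can be pictured as follows: represent each gene as a feature vector built from interaction and activity data, then rank unknown genes by similarity to the established risk genes. The random profiles, gene names, and cosine-similarity scoring below are assumptions for illustration; the published analysis combined hundreds of real data sets and a more sophisticated model.

```python
# Illustrative gene ranking by similarity to known risk genes; all data are random.
import numpy as np

rng = np.random.default_rng(2)
genes = [f"GENE_{i}" for i in range(1000)]      # hypothetical gene labels
profiles = rng.normal(size=(1000, 50))          # one feature vector per gene
known_risk = [0, 1, 2, 3, 4]                    # indices of established risk genes

# Average profile of the known risk genes, then score every gene by cosine similarity.
centroid = profiles[known_risk].mean(axis=0)
norms = np.linalg.norm(profiles, axis=1) * np.linalg.norm(centroid)
scores = profiles @ centroid / norms

ranking = np.argsort(-scores)
top_candidates = [genes[i] for i in ranking if i not in known_risk][:5]
print(top_candidates)   # new candidate genes, ranked by similarity
```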

But genes don't act in isolation, as geneticists have recently realized. Their behavior is shaped by the millions of nearby noncoding bases, which interact with DNA-binding proteins and other factors. Identifying which noncoding variants might affect nearby autism genes is an even tougher problem than finding the genes in the first place, and graduate student Jian Zhou in Troyanskaya's Princeton lab is deploying AI to solve it.

To train the program, a deep-learning system, Zhou exposed it to data collected by the Encyclopedia of DNA Elements and Roadmap Epigenomics, two projects that cataloged how tens of thousands of noncoding DNA sites affect neighboring genes. The system in effect learned which features to look for as it evaluates unknown stretches of noncoding DNA for potential activity.
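The sketch below is not the DeepSEA architecture itself, only its most basic building block: one-hot encode a DNA window and scan it with a "motif" filter the way a first convolutional layer does. The GATA motif, filter weights, and example sequence are assumptions chosen to make the idea visible.

```python
# One-hot DNA encoding plus a sliding "motif" filter; a toy stand-in for the
# first convolutional layer of a DeepSEA-style model, not the real thing.
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    x = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        x[i, BASES.index(base)] = 1.0
    return x

# A filter that responds strongly to the (hypothetical) motif "GATA"
motif_filter = one_hot("GATA") * 2.0 - 0.5

def scan(seq, filt):
    x = one_hot(seq)
    k = filt.shape[0]
    # Convolution as a sliding dot product along the sequence
    return np.array([np.sum(x[i:i + k] * filt) for i in range(len(seq) - k + 1)])

activations = scan("TTGATACCGATAGG", motif_filter)
print(activations.argmax(), activations.max())   # strongest response at a GATA site
```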

When Zhou and Troyanskaya described their program, called DeepSEA, in Nature Methods in October 2015, Xiaohui Xie, a computer scientist at the University of California, Irvine, called it a milestone in applying deep learning to genomics. Now, the Princeton team is running the genomes of autism patients through DeepSEA, hoping to rank the impacts of noncoding bases.

Xie is also applying AI to the genome, though with a broader focus than autism. He, too, hopes to classify any mutations by the odds they are harmful. But he cautions that in genomics, deep learning systems are only as good as the data sets on which they are trained. "Right now I think people are skeptical that such systems can reliably parse the genome," he says. "But I think down the road more and more people will embrace deep learning." Elizabeth Pennisi

This past April, astrophysicist Kevin Schawinski posted fuzzy pictures of four galaxies on Twitter, along with a request: Could fellow astronomers help him classify them? Colleagues chimed in to say the images looked like ellipticals and spirals, familiar species of galaxies.

Some astronomers, suspecting trickery from the computation-minded Schawinski, asked outright: Were these real galaxies? Or were they simulations, with the relevant physics modeled on a computer? In truth they were neither, he says. At ETH Zurich in Switzerland, Schawinski, computer scientist Ce Zhang, and other collaborators had cooked the galaxies up inside a neural network that doesn't know anything about physics. It just seems to understand, on a deep level, how galaxies should look.

With his Twitter post, Schawinski just wanted to see how convincing the network's creations were. But his larger goal was to create something like the technology in movies that magically sharpens fuzzy surveillance images: a network that could make a blurry galaxy image look like it was taken by a better telescope than it actually was. That could let astronomers squeeze out finer details from reams of observations. "Hundreds of millions or maybe billions of dollars have been spent on sky surveys," Schawinski says. "With this technology we can immediately extract somewhat more information."

The forgery Schawinski posted on Twitter was the work of a generative adversarial network, a kind of machine-learning model that pits two dueling neural networks against each other. One is a generator that concocts images, the other a discriminator that tries to spot any flaws that would give away the manipulation, forcing the generator to get better. Schawinski's team took thousands of real images of galaxies, and then artificially degraded them. Then the researchers taught the generator to spruce up the images again so they could slip past the discriminator. Eventually the network could outperform other techniques for smoothing out noisy pictures of galaxies.
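Structurally, the setup looks something like the PyTorch sketch below, in which a generator learns to restore artificially degraded inputs while a discriminator learns to tell restorations from originals. Random arrays stand in for galaxy images, the networks are tiny, and none of the sizes or hyperparameters come from the ETH Zurich work; it is a schematic of the adversarial training loop, nothing more.

```python
# Schematic adversarial training loop; random data stand in for galaxy images.
import torch
import torch.nn as nn

# Generator: maps a degraded 64-value "image" to a restored one
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
# Discriminator: scores an image as real (1) or generated (0)
D = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

clean = torch.rand(32, 64)                     # stand-in for sharp galaxy images
degraded = clean + 0.3 * torch.randn(32, 64)   # artificially blurred/noisy versions

for step in range(200):
    # Discriminator step: push real images toward 1, restored images toward 0
    restored = G(degraded).detach()
    d_loss = bce(D(clean), torch.ones(32, 1)) + bce(D(restored), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make restorations that the discriminator scores as real
    g_loss = bce(D(G(degraded)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(float(d_loss), float(g_loss))
```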

AI that knows what a galaxy should look like transforms a fuzzy image (left) into a crisp one (right).


Schawinski's approach is a particularly avant-garde example of machine learning in astronomy, says astrophysicist Brian Nord of Fermi National Accelerator Laboratory in Batavia, Illinois, but it's far from the only one. At the January meeting of the American Astronomical Society, Nord presented a machine-learning strategy to hunt down strong gravitational lenses: rare arcs of light in the sky that form when the images of distant galaxies travel through warped spacetime on the way to Earth. These lenses can be used to gauge distances across the universe and find unseen concentrations of mass.

Strong gravitational lenses are visually distinctive but difficult to describe with simple mathematical rules: hard for traditional computers to pick out, but easy for people. Nord and others realized that a neural network, trained on thousands of lenses, can gain similar intuition. "In the following months, there have been almost a dozen papers, actually, on searching for strong lenses using some kind of machine learning. It's been a flurry," Nord says.
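A toy version of such a lens finder is sketched below: a small convolutional network classifying simulated cutouts as "lens" or "not lens," where the positive class has a faint painted-in ring. The image size, ring radius, and network shape are all assumptions for the demo, not the setup Nord presented.

```python
# Toy lens-vs-not-lens classifier on simulated cutouts; not a real survey pipeline.
import torch
import torch.nn as nn

def make_cutout(has_ring):
    img = torch.rand(1, 32, 32) * 0.2          # noisy background
    if has_ring:
        yy, xx = torch.meshgrid(torch.arange(32), torch.arange(32), indexing="ij")
        r = torch.sqrt((yy - 16.0) ** 2 + (xx - 16.0) ** 2)
        img[0][(r > 8) & (r < 10)] += 0.8      # faint Einstein-ring-like arc
    return img

X = torch.stack([make_cutout(i % 2 == 1) for i in range(256)])
y = torch.tensor([i % 2 for i in range(256)], dtype=torch.float32)

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Flatten(), nn.Linear(8 * 8 * 8, 1), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(net(X).squeeze(1), y)
    loss.backward(); opt.step()
print(float(loss))   # training loss after a short run
```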

And it's just part of a growing realization across astronomy that artificial intelligence strategies offer a powerful way to find and classify interesting objects in petabytes of data. To Schawinski, "That's one way I think in which real discovery is going to be made in this age of 'Oh my God, we have too much data.'" Joshua Sokol

Organic chemists are experts at working backward. Like master chefs who start with a vision of the finished dish and then work out how to make it, many chemists start with the final structure of a molecule they want to make, and then think about how to assemble it. "You need the right ingredients and a recipe for how to combine them," says Marwin Segler, a graduate student at the University of Münster in Germany. He and others are now bringing artificial intelligence (AI) into their molecular kitchens.

They hope AI can help them cope with the key challenge of molecule-making: choosing from among hundreds of potential building blocks and thousands of chemical rules for linking them. For decades, some chemists have painstakingly programmed computers with known reactions, hoping to create a system that could quickly calculate the most facile molecular recipes. However, Segler says, chemistry can be very subtle. "It's hard to write down all the rules in a binary way."

So Segler, along with computer scientist Mike Preuss at Münster and Segler's adviser Mark Waller, turned to AI. Instead of programming in hard and fast rules for chemical reactions, they designed a deep neural network program that learns on its own how reactions proceed, from millions of examples. "The more data you feed it, the better it gets," Segler says. Over time the network learned to predict the best reaction for a desired step in a synthesis. Eventually it came up with its own recipes for making molecules from scratch.
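Conceptually, the search works backward from the target, applying disconnection rules until everything left is purchasable, with a learned model scoring which rule to apply at each step. In the toy sketch below a hand-written score table stands in for Segler's neural network, and the tiny template and building-block sets are assumptions for illustration.

```python
# Toy retrosynthesis search: a fixed score table stands in for a learned model.
PURCHASABLE = {"benzene", "acetyl chloride", "bromine", "aniline"}
TEMPLATES = {
    # product: (score a model might assign, precursors)
    "acetophenone": (0.9, ["benzene", "acetyl chloride"]),
    "bromobenzene": (0.8, ["benzene", "bromine"]),
    "acetanilide": (0.85, ["aniline", "acetyl chloride"]),
}

def retrosynthesize(target):
    """Return a list of (product, precursors) steps, or None if no route is found."""
    if target in PURCHASABLE:
        return []                              # nothing to do: it can be bought
    if target not in TEMPLATES:
        return None                            # no known disconnection: dead end
    score, precursors = TEMPLATES[target]      # a real system would rank many options
    route = [(target, precursors)]
    for precursor in precursors:
        sub_route = retrosynthesize(precursor)
        if sub_route is None:
            return None
        route += sub_route
    return route

print(retrosynthesize("acetophenone"))
```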

The trio tested the program on 40 different molecular targets, comparing it with a conventional molecular design program. Whereas the conventional program came up with a solution for synthesizing target molecules 22.5% of the time in a 2-hour computing window, the AI figured it out 95% of the time, they reported at a meeting this year. Segler, who will soon move to London to work at a pharmaceutical company, hopes to use the approach to improve the production of medicines.

Paul Wender, an organic chemist at Stanford University in Palo Alto, California, says it's too soon to know how well Segler's approach will work. But Wender, who is also applying AI to synthesis, thinks it could have a profound impact, not just in building known molecules but in finding ways to make new ones. Segler adds that AI won't replace organic chemists soon, because they can do far more than just predict how reactions will proceed. Like a GPS navigation system for chemistry, AI may be good for finding a route, but it can't design and carry out a full synthesis by itself.

Of course, AI developers have their eyes trained on those other tasks as well. Robert F. Service

Read more from the original source:

AI is changing how we do science. Get a glimpse - Science Magazine