Putting artificial intelligence at the heart of health care with help from MIT – MIT News

Artificial intelligence is transforming industries around the world, and health care is no exception. A recent Mayo Clinic study found that AI-enhanced electrocardiograms (ECGs) have the potential to save lives by speeding diagnosis and treatment in patients with heart failure who are seen in the emergency room.

The lead author of the study is Demilade "Demi" Adedinsewo, a noninvasive cardiologist at the Mayo Clinic who is actively integrating the latest AI advancements into cardiac care and drawing largely on her learning experience with MIT Professional Education.

Identifying AI opportunities in health care

A dedicated practitioner, Adedinsewo is a Mayo Clinic Florida Women's Health Scholar and director of research for the Cardiovascular Disease Fellowship program. Her clinical research interests include cardiovascular disease prevention, women's heart health, cardiovascular health disparities, and the use of digital tools in cardiovascular disease management.

Adedinsewo's interest in AI emerged toward the end of her cardiology fellowship, when she began learning about its potential to transform the field of health care. "I started to wonder how we could leverage AI tools in my field to enhance health equity and alleviate cardiovascular care disparities," she says.

During her fellowship at the Mayo Clinic, Adedinsewo began looking at how AI could be used with ECGs to improve clinical care. To determine the effectiveness of the approach, the team retroactively used deep learning to analyze ECG results from patients with shortness of breath. They then compared the results with the current standard of care, a blood test analysis, to determine if the AI enhancement improved the diagnosis of cardiomyopathy, a condition where the heart is unable to adequately pump blood to the rest of the body. While she understood the clinical implications of the research, she found the AI components challenging.

"Even though I have a medical degree and a master's degree in public health, those credentials aren't really sufficient to work in this space," Adedinsewo says. "I began looking for an opportunity to learn more about AI so that I could speak the language, bridge the gap, and bring those game-changing tools to my field."

Bridging the gap at MIT

Adedinsewo's desire to bring together advanced data science and clinical care led her to MIT Professional Education, where she recently completed the Professional Certificate Program in Machine Learning & AI. To date, she has completed nine courses, including AI Strategies and Roadmap.

"All of the courses were great," Adedinsewo says. "I especially appreciated how the faculty, like professors Regina Barzilay, Tommi Jaakkola, and Stefanie Jegelka, provided practical examples from health care and non-health care fields to illustrate what we were learning."

Adedinsewo's goals align closely with those of Barzilay, the AI lead for the MIT Jameel Clinic for Machine Learning in Health. "There are so many areas of health care that can benefit from AI," Barzilay says. "It's exciting to see practitioners like Demi join the conversation and help identify new ideas for high-impact AI solutions."

Adedinsewo also valued the opportunity to work and learn within the greater MIT community alongside accomplished peers from around the world, explaining that she learned different things from each person. "It was great to get different perspectives from course participants who deploy AI in other industries," she says.

Putting knowledge into action

Armed with her updated AI toolkit, Adedinsewo was able to make meaningful contributions to Mayo Clinic's research. The team successfully completed and published their ECG project in August 2020, with promising results. In analyzing the ECGs of about 1,600 patients, the AI-enhanced method was both faster and more effective, outperforming the standard blood tests with a performance measure (AUC) of 0.89 versus 0.80. This improvement could enhance health outcomes by improving diagnostic accuracy and increasing the speed with which patients receive appropriate care.
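For readers curious how such a comparison is scored, the sketch below shows how an AUC of 0.89 versus 0.80 might be computed. The labels and risk scores are synthetic stand-ins, not the Mayo Clinic cohort, and the scoring is only illustrative.

```python
# A minimal sketch of an AUC comparison like the one reported above.
# The labels and risk scores below are simulated placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1600                                  # roughly the size of the study cohort
y_true = rng.integers(0, 2, size=n)       # 1 = cardiomyopathy, 0 = no cardiomyopathy

# Hypothetical risk scores: the "AI" score separates the classes slightly
# better than the "blood test" score, purely to illustrate the metric.
ai_ecg_score = 1.2 * y_true + rng.normal(0, 1.0, n)
blood_test_score = 0.8 * y_true + rng.normal(0, 1.0, n)

print("AI-enhanced ECG AUC:", round(roc_auc_score(y_true, ai_ecg_score), 2))
print("Blood test AUC:     ", round(roc_auc_score(y_true, blood_test_score), 2))
```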

But the benefits of Adedinsewo's MIT experience go beyond a single project. Adedinsewo says that the tools and strategies she acquired have helped her communicate the complexities of her work more effectively, extending its reach and impact. "I feel more equipped to explain the research and AI strategies in general to my clinical colleagues. Now, people reach out to me to ask, 'I want to work on this project. Can I use AI to answer this question?'" she said.

Looking to the AI-powered future

What's next for Adedinsewo's research? Taking AI mainstream within the field of cardiology. While AI tools are not currently widely used in evaluating Mayo Clinic patients, she believes they hold the potential to have a significant positive impact on clinical care.

"These tools are still in the research phase," Adedinsewo says. "But I'm hoping that within the next several months or years we can start to do more implementation research to see how well they improve care and outcomes for cardiac patients over time."

Bhaskar Pant, executive director of MIT Professional Education, says, "We at MIT Professional Education feel particularly gratified that we are able to provide practitioner-oriented insights and tools in machine learning and AI from expert MIT faculty to frontline health researchers such as Dr. Demi Adedinsewo, who are working on ways to markedly enhance clinical care and health outcomes in cardiac and other patient populations. This is also very much in keeping with MIT's mission of 'working with others for the betterment of humankind!'"


What is Artificial Intelligence as a Service (AIaaS)? | ITBE – IT Business Edge

Software as a Service, or SaaS, is a concept that is familiar to many. Long-time Photoshop users will recall when Adobe stopped selling its product and instead shifted to a subscriber model. Netflix and Disney+ are essentially Movies as a Service, particularly at a time when ownership of physical media is losing ground to media streaming. Artificial Intelligence as a Service (AIaaS) has been growing in market adoption in recent years, but the uninitiated might be asking: what exactly is it?

In a nutshell, AIaaS is what happens when a company develops and licenses use of an AI to another company, most often to solve a very specific problem. For example, Bill owns a company that sells hotdogs through his e-commerce site. While Bill offers a free returns policy for dissatisfied customers, he lacks the time to provide decent customer support, and rarely replies to emails. Separately, a software developer has created a chatbot that can handle most customer inquiries using natural language processing, and often solve the issue or answer a question before human intervention is even required. For a monthly fee, the chatbot is licensed to the hotdog vendor and implemented on his website. Now, the bot is solving 80% of customer issues, leaving Bill with the time to respond to the remaining 20%. But Bill is still too preoccupied making hotdogs, so he subscribes to a service like Flowrite, which uses AI to intelligently write his emails on the fly.
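As a rough illustration of what such a chatbot does under the hood, here is a toy intent classifier. The categories, training phrases, and routing logic are invented and far simpler than any commercial product.

```python
# Toy sketch of intent routing for a customer-support chatbot.
# Training phrases and intent labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_phrases = [
    "I want to return my order", "how do I send this back",
    "where is my package", "my delivery has not arrived",
    "do you ship internationally", "what are the shipping costs",
]
intents = ["refund", "refund", "tracking", "tracking", "shipping", "shipping"]

bot = make_pipeline(TfidfVectorizer(), LogisticRegression())
bot.fit(training_phrases, intents)

# A new inquiry is routed to the most likely intent; only the hard cases
# would be escalated to a human.
print(bot.predict(["my package still has not arrived"]))
```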

AI is also being put in service to analyze large sets of data and make predictions, streamline information storage, or even detect fraudulent activity. Amazon's personal recommendation engine, an AI powered by machine learning, is now available as a licensed service to other retailers, video streaming platforms, and even the finance industry. Google's suite of AI services ranges from natural language processing and handwriting recognition to real-time captioning and translation. IBM's groundbreaking AI, Watson, is now being deployed to fight financial crimes, target advertisements based on real-time weather analysis, and analyze data to help hospitals make treatment judgments.


Machine learning AIs improve with time, usage, and development. Some, like YouTube's recommendation engine, have become so sophisticated that it sometimes feels like we have entire television stations tailored perfectly to our interests. Others, like the language model GPT-3, produce entire volumes of text that are nearly indistinguishable from an authentic human source.

Microsoft has even put GPT-3 to use to translate conversational language into working computer code, potentially opening up a new frontier in how software can be written in the future, and giving coding novices a fighting chance. Microsoft has also partnered with NVIDIA to create a new natural language generation model, three times as powerful as GPT-3. Improvements in language recognition and generation have obvious carryover benefits for the future development of chatbots, home assistants, and document generation as well.

Industrial giant Siemens has announced they are integrating Google's AIaaS solutions to streamline and analyze data, and predict, for instance, the rate of wear-and-tear of machinery on their factory floor. This could reduce maintenance costs, improve the scheduling of routine inspections, and prevent unexpected equipment failures.

AIaaS is a rapidly growing field, and there will be many more niches discovered that it can fill for years to come.



Beethoven’s Unfinished 10th Symphony Brought to Life by Artificial Intelligence – Scientific American

Teresa Carey: This is Scientific American's 60-Second Science. I'm Teresa Carey.

Every morning at five o'clock, composer Walter Werzowa would sit down at his computer to anticipate a particular daily e-mail. It came from six time zones away, where a team had been working all night (or day, rather) to draft Beethoven's unfinished 10th Symphony, almost two centuries after his death.

The e-mail contained hundreds of variations, and Werzowa listened to them all.

Werzowa: So by nine, 10 o'clock in the morning, it's like I'm already in heaven.

Carey: Werzowa was listening for the perfect tune, a sound that was unmistakably Beethoven.

But the phrases he was listening to weren't composed by Beethoven. They were created by artificial intelligence, a computer simulation of Beethoven's creative process.

Werzowa: There were hundreds of options, and some are better than others. But then there is that one which grabs you, and that was just a beautiful process.

Carey: Ludwig van Beethoven was one of the most renowned composers in Western music history. When he died in 1827, he left behind musical sketches and notes that hinted at a masterpiece. There was barely enough to make out a phrase, let alone a whole symphony. But that didn't stop people from trying.

In 1988, musicologist Barry Cooper made an attempt but didn't get beyond the first movement. Beethoven's handwritten notes on the second and third movements are meager, not enough to compose a symphony.

Werzowa: A movement of a symphony can have up to 40,000 notes. And some of his themes were three bars, like 20 notes. It's very little information.

Carey: Werzowa and a group of music experts and computer scientists teamed up to use machine learning to create the symphony. Ahmed Elgammal, the director of the Art and Artificial Intelligence Laboratory at Rutgers University, led the AI side of the team.

Elgammal: When you listen to music read by AI to continue a theme of music, usually it's a very short few seconds, and then they start diverging and becoming boring and not interesting. They cannot really take that and compose a full movement of a symphony.

Carey: The team's first task was to teach the AI to think like Beethoven. To do that, they gave it Beethoven's complete works, his sketches and notes. They taught it Beethoven's process, like how he went from those iconic four notes to his entire Fifth Symphony.

[CLIP: Notes from Symphony no. 5]

Carey: Then they taught it to harmonize with a melody, compose a bridge between two sections and assign instrumentation. With all that knowledge, the AI came as close to thinking like Beethoven as possible. But it still wasn't enough.

Elgammal: The way music generation using AI works is very similar to the way, when you write an e-mail, you find that the e-mail thread predicts what's the next word for you or what the rest of the sentence is for you.

Carey: But let the computer predict your words long enough, and eventually, the text will sound like gibberish.
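The autocomplete analogy can be made concrete with a toy next-word model. The sketch below uses a tiny bigram table (nothing like the project's actual system) and an invented corpus to show how short continuations stay plausible while long ones drift toward gibberish.

```python
# Toy bigram "next-word predictor" illustrating the autocomplete analogy.
# The corpus is a made-up sentence; the real project used far richer models.
import random
from collections import defaultdict

corpus = ("the orchestra plays the theme and the theme returns in the strings "
          "and the strings answer the winds and the winds carry the theme").split()

next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

random.seed(3)
word, output = "the", ["the"]
for _ in range(30):                      # the longer we generate, the less coherent
    choices = next_words[word]
    word = random.choice(choices) if choices else random.choice(corpus)
    output.append(word)
print(" ".join(output))
```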

Elgammal: It doesn't really generate something that can continue for a long time and be consistent. So that was the main challenge in dealing with this project: How can you take a motif or a short phrase of music that Beethoven wrote in his sketch and continue it into a segment of music?

Carey: That's where Werzowa's daily e-mails came in. On those early mornings, he was selecting what he thought was Beethoven's best. And, piece by piece, the team built a symphony.

Matthew Guzdial researches creativity and machine learning at the University of Alberta. He didn't work on the Beethoven project, but he says that AI is overhyped.

Guzdial: Modern AI, modern machine learning, is all about just taking small local patterns and replicating them. And it's up to a human to then take what the AI outputs and find the genius. The genius wasn't there. The genius wasn't in the AI. The genius was in the human who was doing the selection.

Carey: Elgammal wants to make the AI tool available to help other artists overcome writer's block or boost their performance. But both Elgammal and Werzowa say that the AI shouldn't replace the role of an artist. Instead, it should enhance their work and process.

Werzowa: Like every tool, you can use a knife to kill somebody or to save somebody's life, like with a scalpel in a surgery. So it can go any way. If you look at the kids, like kids are born creative. It's like everything is about being creative, creative and having fun. And somehow we're losing this. I think if we could sit back on a Saturday afternoon in our kitchen, and because maybe we're a little bit scared to make mistakes, ask the AI to help us to write us a sonata, song or whatever, in teamwork, life will be so much more beautiful.

Carey: The team released the 10th Symphony over the weekend. When asked who gets credit for writing it (Beethoven, the AI, or the team behind it), Werzowa insists it is a collaborative effort. But, suspending disbelief for a moment, it isn't hard to imagine that we're listening to Beethoven once again.

Werzowa: I dare to say that nobody knows Beethoven as well as the AI did, as well as the algorithm. I think music, when you hear it, when you feel it, when you close your eyes, it does something to your body. Close your eyes, sit back and be open for it, and I would love to hear what you felt after.

Carey: Thanks for listening. For Scientific American's 60-Second Science, I'm Teresa Carey.

[The above text is a transcript of this podcast.]


Understanding the UK Artificial Intelligence commercialisation – GOV.UK

The government is undertaking research to explore how AI R&D is successfully commercialised and brought to market.

The Department for Digital, Culture, Media and Sport (DCMS), along with the Office for Artificial Intelligence and Digital Standards and Internet Governance (DSIG), are leading the research project.

Research consultants Oxford Insights and Cambridge Econometrics have been commissioned to explore the ways technology transfer happens for AI, and are seeking to conduct interviews with those with knowledge of the industry.

The research aims to increase understanding of the following topics:

Oxford Insights and Cambridge Econometrics would like to speak to individuals with experience and knowledge of the AI development ecosystem, Innovate UK and other funding programmes, Standards Developing Organisations (SDOs), AI patents, AI R&D in the public and private sectors, AI funding and Venture Capital, and AI policy.

Our interviews will take approximately 45 minutes to 1 hour; however, we are happy to accommodate you if time doesn't permit this length of interview. We may request your approval to follow up on specific points and themes identified across all our interactions.

Please get in touch with either aisha.naz@dcms.gov.uk or sam.hainsworth@dcms.gov.uk if you have any questions or need any clarification. We look forward to working with you.


The 5 articles you read in AI hell – The Next Web

The devil went down to Silicon Valley; he was looking for a soul to steal. But he ended up taking a consulting gig with Palantir instead.

In the meantime, the algorithms are in charge of punishing the wicked now. And these days the sign above hell's gates reads "Abandon Open Source," with an Amazon smile beneath the print.

Those condemned to an eternity of pain and suffering in the modern era are now forced to read the same five AI articles over and over.

Which kind of sounds like what it's like to read tech news back here on Earth anyway. Don't believe me? Let's dive in.

No, it wasn't. These articles usually involve a text generator such as OpenAI's GPT-3. The big idea is that the journalist will either pay for access or collaborate with OpenAI to get GPT-3 to generate text from various prompts.

The journalist will ask something silly like "can AI ever truly think like a human?" and then GPT-3 will use that prompt to generate a specific number of outputs.

Then, the journalists and editors go to work. They'll pick the best responses, mix and match sentences that make the most sense, and then discard the rest.

This is the editorial equivalent of taking the collected works of Stephen King, copy/pasting a single sentence from each book into a word doc, and then claiming you've published an entirely new book from the master of horror.
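For what it's worth, the sample-then-cherry-pick workflow is easy to reproduce with an open model. The sketch below uses the freely available GPT-2 via the Hugging Face transformers library (standing in for GPT-3, which requires paid API access) and leaves the "editing" to a human; it assumes the transformers and torch packages are installed.

```python
# Sketch of the sample-then-cherry-pick workflow, using GPT-2 as a stand-in
# for GPT-3. Several continuations are sampled; a human picks the "best".
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Can AI ever truly think like a human?"

candidates = generator(prompt, max_new_tokens=40, do_sample=True,
                       num_return_sequences=5)

for i, candidate in enumerate(candidates):
    print(f"--- candidate {i} ---")
    print(candidate["generated_text"])
# An editor would now pick and stitch the most quotable sentences, which is
# the step the resulting headlines tend to leave out.
```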

In hell, you stand in a long line to read hyperbolic, made-up stories about AI's capabilities. And, as your ultimate punishment, you have to rewrite them for the next person in line.

I remember reading about an early funding round for an AI company called PredPol. It had raised several million dollars to develop an AI system capable of predicting crime before it happens.

I'm sorry. Perhaps you didn't read that right. It says: predicting crime before it happens.

This is something that's impossible. And I don't mean technologically impossible, I mean not possible within the realms of classical or quantum physics.

You see, crime isn't generated from hotspots like mobs spawning in an MMO every 5 minutes. A first-year statistics or physics student understands that no amount of historical data can predict where new crimes will occur. Mostly because the past isn't literally prescient. But, also, it's impossible to know how many crimes have actually been committed. Most crimes go unreported.

PredPol can't predict crime. It predicts arrests based on historical data. In other words: PredPol tells you where you've already arrested people and then says "try there again." Simply put: it doesn't work because it can't work.
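A back-of-the-envelope simulation makes the feedback loop obvious. The numbers below are invented, and the "model" is just "patrol where past arrests were," but that is essentially the criticism being made.

```python
# Tiny simulation of the arrest feedback loop: patrols follow past arrests,
# and past arrests follow patrols. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
true_crime_rate = np.array([0.3, 0.3, 0.3, 0.3])   # identical underlying crime
arrests = np.array([50, 10, 10, 10])               # district 0 was over-patrolled

for step in range(5):
    patrol_share = arrests / arrests.sum()          # "prediction" = past arrests
    new_arrests = rng.poisson(200 * patrol_share * true_crime_rate)
    arrests = arrests + new_arrests
    print(f"step {step}: cumulative arrests by district = {arrests}")
# District 0 keeps dominating even though true crime is uniform across districts.
```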

But it raised money and raised money until one day it grew into a full-grown company worth billions, all for doing nothing.

In hell, you have to read funding stories about billion-dollar AI startups that don't actually do anything or solve any problems. And you're not allowed to skim.

There are variations on this one: "Google's AI demonstrates a 72% reduction in racial bias," "Amazon's new algorithm is 87% better at spotting and removing Nazi products from its storefront," and they're all bunk.

Big tech's favorite PR company is the mainstream media.

Facebook will, as a hypothetical example, say something like "our new algorithms are 80% more efficient at finding and removing toxic content in real time," and that's when the telephone game starts.

You'll see half a dozen reputable news outlets printing headlines that basically say "Facebook's new algorithms make it 80% less toxic." And that's simply not true.

If a chef were to tell you they've adopted a new cooking technique that results in 80% less fecal matter being detected in the soup they're about to serve, you probably wouldn't think that was a good thing.

Increasing the efficiency of an algorithm doesn't result in a unilateral increase in overall system efficiency. And, because statistical correlations are incredibly difficult to make when you don't have access to the actual data being discussed, the people writing up these stories are simply taking the big tech marketing teams' word for it.
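A quick worked example, with entirely made-up numbers, shows why an "80% more efficient" detector does not make a platform 80% less toxic:

```python
# Hypothetical numbers illustrating component vs. system improvement.
posts_per_day = 1_000_000
toxic_share = 0.05                       # 5% of posts are toxic
old_catch_rate = 0.40                    # old algorithm removed 40% of toxic posts
new_catch_rate = old_catch_rate * 1.8    # "80% more efficient" -> 72% removed

old_visible = posts_per_day * toxic_share * (1 - old_catch_rate)   # 30,000
new_visible = posts_per_day * toxic_share * (1 - new_catch_rate)   # 14,000

print(f"visible toxic posts, before: {old_visible:,.0f}")
print(f"visible toxic posts, after:  {new_visible:,.0f}")
print(f"actual reduction: {(old_visible - new_visible) / old_visible:.0%}")  # ~53%
```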

In hell, you have to read articles about big tech companies that only have quotes from people who work at those companies and statistics that can't possibly be verified.

We've all read these stories. They cover the biggest issues in the world of AI as if they're writing about the weather.

The story will be something like "Clearview AI gets new government contracts," and the coverage will quote a politician, the CEO of Clearview, and someone representing law enforcement.

The gist of the piece will be "Ethics aside, law enforcement agencies say these products are invaluable."

And then, way down towards the end of the article, you'll see the obligatory "studies have shown that facial recognition struggles to identify some faces. Experts warn against the use of such technologies until this bias can be solved."

In hell, every AI article you read starts with the sentence "this doesn't work as well for Black people or women, but we're just going to move past that like it isn't important."

My least favorite AI articles are the ones that profess to tell me what non-experts think.

These are the articles with headlines like "Study: 80% of people believe AI will be sentient within a decade" and "75% of moms think Alexa is a danger to children."

These studies are typically conducted by consultancy companies that specialize in this sort of thing. And usually they're not out conducting studies on the speculation that some journalist will find their work appealing. They get paid to do their research.

And by research, I mean: sourcing answers on Amazon's Mechanical Turk or giving campus students a gift card to fill out a survey.

These studies are often bought and paid for ahead of time by an AI company as a marketing tool.

These pitches, in my inbox, usually look something like "Hey Tristan, did you hear that 92% of CEOs don't know what Kubernetes is? Are you interested in this exclusive study and a conversation with Dr Knows Itall, founder of the Online School For Learning AI Good? They can speak to the challenges of hiring quality IT talent."

Can you spot the rubbish?

In hell, the algorithm tells you that you can read articles covering actual computer science research as soon as you finish reading all the vapid survey pieces on AI published in mainstream outlets.

But you're never done, are you? There's always another. What do soccer dads think about gendered voice assistants? What percentage of people think data is a character on Star Trek? Will driverless cars be a reality in 2022? Here's what Tesla owners think.

Yes, AI hell is a place filled with horrors beyond comprehension. And, just in case you haven't figured it out yet, we're already here. This article has been your orientation.

Now if you'll just sign in to Google News, we'll get started (Apple News is currently not available in hell due to legal issues concerning the App Store).


Transactions in the Age of Artificial Intelligence: Risks and Considerations – JD Supra

Artificial Intelligence (AI) has become a major focus of, and the most valuable asset in, many technology transactions and the competition for top AI companies has never been hotter. According to CB Insights, there have been over 1,000 AI acquisitions since 2010. The COVID pandemic interrupted this trajectory, causing acquisitions to fall from 242 in 2019 to 159 in 2020. However, there are signs of a return, with over 90 acquisitions in the AI space as of June 2021 according to the latest CB Insights data. With tech giants helping drive the demand for AI, smaller AI startups are becoming increasingly attractive targets for acquisition.

AI companies have their own set of specialized risks that may not be addressed if buyers approach the transaction with their standard process. AI's reliance on data and the dynamic nature of its insights highlight the shortcomings of standard agreement language and the risks in not tailoring agreements to address AI-specific issues. Sophisticated parties should consider crafting agreements specifically tailored to AI and its unique attributes and risks, which lend the parties a more accurate picture of an AI system's output and predictive capabilities, and can assist the parties in assessing and addressing the risks associated with the transaction. These risks include:

Freedom to use training data may be curtailed by contracts with third parties or other limitations regarding open source or scraped data.

Ownership of training data can be complex and uncertain. Training data may be subject to ownership claims by third parties, exposed to third-party infringement claims, improperly obtained, or subject to privacy issues.

To the extent that training data is subject to use limitations, a company may be restricted in a variety of ways including (i) how it commercializes and licenses the training data, (ii) the types of technology and algorithms it is permitted to develop with the training data and (iii) the purposes to which its technology and algorithms may be applied.

Standard representations on ownership of IP and IP improvements may be insufficient when applied to AI transactions. Output data generated by algorithms, and the algorithms themselves trained from supplied training data, may be vulnerable to ownership claims by data providers and vendors. Further, a third-party data provider may contract that, as between the parties, it owns IP improvements. Companies may then struggle to distinguish ownership of their algorithms prior to using such third-party data from ownership of their improved algorithms after such use, and to establish their ownership of, and ability to use, model-generated output data to continue to train and improve their algorithms.

Inadequate confidentiality or exclusivity provisions may leave an AI system's training data inputs and material technologies exposed to third parties, enabling competitors to use the same data and technologies to build similar or identical models. This is particularly the case when algorithms are developed using open-source or publicly available machine learning processes.

Additional maintenance covenants may be warranted because an algorithms competitive value may atrophy if the algorithm is not designed to permit dynamic retraining, or the user of the algorithm fails to maintain and retrain the algorithm with updated data feeds.

In addition to the above, legislative protection in the AI space has yet to fully mature, and until such time, companies should protect their IP, data, algorithms, and models, by ensuring that their transactions and agreements are specifically designed to address the unique risks presented by the use and ownership of training data, AI-based technology and any output data generated by such technology.


SenseTime Co-hosts the 3rd International Artificial Intelligence Fair to Nurture AI Talent and Promote a Collaborative Education Ecosystem -…

Since launching in July this year, the highly anticipated IAIF has attracted 665 project submissions from over 300 schools in 8 countries and regions, with 121 projects from 98 schools selected for the final online presentation and verbal Q&A. During the final competition presentations, the project submissions were reviewed meticulously by 45 professional judges from top-tier universities, enterprises and research institutions, including University of Science and Technology of China, Tsinghua University, Fudan University, Shanghai Jiao Tong University, Nanyang Technological University, Peking University, Chinese University of Hong Kong and Shanghai Technology Art Center. The teaching and evaluation system for martial arts based on body posture recognition and machine learning, by Li Lufei from Shanghai Nanyang Model High School and Wu Keyu from the High School Affiliated to Fudan University, as well as the drone powered by OpenCV for flood fighting and rescue, by Huang Pucheng, Wang Bingyang and Lin Yinhang from Zhejiang Wenling High School, became the winners of the grand prize.

In addition, the rehabilitation assessment and training system powered by 3D hand posture verification, by Zhang Yihong from Shanghai World Foreign Language Academy, stood out from many excellent projects and won the first prize. Leveraging 3D hand posture verification, the project aims to deliver a low-cost and easy-to-operate product for patients with hand movement disorders, achieving 89.9% accuracy in the assessment of hand rehabilitation and training.

Lin Junqiu, Deputy Director of Science and Education Department from Shanghai Science, Art and Education Center, said, "Artificial intelligence is critical to our future. As we continue to advance technology development, we must cultivate a larger pool of AI talent with even higher levels of expertise and innovation capability. The huge opportunities brought by the AI era will facilitate transformative applications across industry verticals and scenarios but also foster optimal collaboration between human beings and artificial intelligence."

Lynn Dai, General Manager of SenseTime's Education Product, said at the final competition, "AI has become an important driving force for technological innovation, we believe the IAIF can provide an innovative platform for young people to develop their interest in AI. Meanwhile, SenseTime Education is dedicated to nurturing young talent and broadening their horizons with advanced insights from an industry perspective, as well as preparing them for the AI-empowered future."

IAIF is also providing comprehensive services for participants, from scientific innovation training to project incubation, helping them solve practical industrial problems. The IAIF organizing committee hosted a four-week AI training course for students before the final competition. The students from the most outstanding project teams will have the chance to participate in other national or international competitions. In addition, the students from the most outstanding IAIF projects will participate in a roadshow training workshop for startups as part of incubator programmes organized by SenseTime; the company will provide technology for high-potential projects.

"IAIF provided me with a unique opportunity to exchange ideas on this exciting AI topic with participants from different schools around the world," said Wu Keyu, the winner of the grand prize. "Through this competition, I have gained a better understanding of the powerful impact from AI and humans working together to build novel solutions that will create a better tomorrow for human society."

The success of the 3rd International Artificial Intelligence Fair not only marks the formation of the foundations for the AI education ecosystem developed by the Shanghai Xuhui Education Bureau and SenseTime, but also boosts the collaboration among governments, academia, enterprises and industries in AI technology innovation. In the future, SenseTime Education will continue to act as a focal point and a platform for cultivating future AI talents.

About SenseTime

SenseTime is a leading AI software company focused on creating a better AI-empowered future through innovation. Upholding a vision of advancing the interconnection of the physical and digital worlds with AI, driving sustainable productivity growth and seamless interactive experiences, SenseTime is committed to advancing the state of the art in AI research, developing scalable and affordable AI software platforms that benefit businesses, people and society, and attracting and nurturing top talents, shaping the future together.

With our roots in the academic world, we invest in our original and cutting-edge research that allows us to offer and continuously improve industry-leading, full-stack AI capabilities, covering key fields across perception intelligence, decision intelligence, AI-enabled content generation and AI-enabled content enhancement, as well as key capabilities in AI chips, sensors and computing infrastructure. Our proprietary AI infrastructure, SenseCore, allows us to develop powerful and efficient AI software platforms that are scalable and adaptable for a wide range of applications.

Today, our technologies are trusted by customers and partners in many industry verticals including Smart Business, Smart City, Smart Life and Smart Auto.

We have offices in markets including Hong Kong, Mainland China, Taiwan, Macau, Japan, Singapore, Saudi Arabia, the United Arab Emirates, Malaysia, and South Korea, etc., as well as presences in Thailand, Indonesia and the Philippines. For more information, please visit SenseTime's website as well as its LinkedIn, Twitter and Facebook pages.

SOURCE SenseTime


Alation Acquires Artificial Intelligence Vendor Lyngo Analytics – Business Wire

REDWOOD CITY, Calif.--(BUSINESS WIRE)--Alation Inc., the leader in enterprise data intelligence solutions, today announced the acquisition of Lyngo Analytics, a Los Altos, Calif.-based data insights company. The acquisition will elevate the business user experience within the data catalog, scale data intelligence, and help organizations drive data culture. Lyngo Analytics CEO and co-founder Jennifer Wu and CTO and co-founder Joachim Rahmfeld will join the company.

Lyngo Analytics uses a natural language interface to empower users to discover data and insights by asking questions using simple, familiar business terms. Alation offers the most intelligent and user-friendly machine-learning data catalog on the market. And by integrating Lyngo Analytics' artificial intelligence (AI) and machine-learning (ML) technology into its platform, Alation deepens its support for the non-technical user, converting natural language questions into SQL.

The integration lowers the barrier to entry for business users. Now, they can acquire and develop data-driven insights from across an enterprise's broad range of data sources. This means even data consumers without SQL expertise can ask questions in natural language and find data and insights without the support of data analysts. The acquisition will help organizations drive data culture by putting data and analytics into the hands of the masses.
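To give a flavour of the natural-language-to-SQL idea (Alation and Lyngo's actual models are, of course, far more sophisticated), a toy template-based translation might look like this; the question pattern, table, and column names are invented.

```python
# Toy natural-language-to-SQL translation for one narrow question pattern.
# Real systems use learned models; this template is purely illustrative.
import re

def question_to_sql(question: str) -> str:
    match = re.match(r"how many (\w+) did we sell in (\w+)\??", question.lower())
    if match:
        product, month = match.groups()
        return (f"SELECT SUM(quantity) FROM sales "
                f"WHERE product = '{product}' AND month = '{month}';")
    return "-- question not understood"

print(question_to_sql("How many umbrellas did we sell in March?"))
```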

Wu will join Alation as Senior Director of Product Management, where she will be responsible for product strategy and delivery for natural language data search, discovery, and exploration experiences. Rahmfeld, who is also a part-time, graduate-level deep learning and natural language processing lecturer at UC Berkeley's Master of Information and Data Science Program, will be Senior Director of AI/ML Research. He will be responsible for Alation's AI and machine learning center of excellence, building both platform and application experiences that leverage AI and ML to enhance Alation's value for business and technical users.

"Alation created the first machine learning data catalog and we're known for providing the most user-friendly interface on the market," said Raj Gossain, Chief Product Officer, Alation. "With this acquisition, we're building on the best. We're doubling down on key aspects of the platform that will help drive data culture and spur innovation and growth. Jennifer and Joachim developed a unique solution for a complex data and analytics issue, and I'm excited to welcome them to the Alation team."

The acquisition is the latest milestone for Alation, which announced a $110 million Series D funding round and a $1.2 billion market valuation in June 2021. Alation is growing quickly, earning the trust of nearly 300 customers, including leading global brands such as Cisco, Exelon, GE Aviation, Munich Re, NASDAQ, and Pfizer. The company has more than 450 employees globally and is hiring. Recently, Alation was named a leader in The Forrester Wave: Data Governance Solutions, Q3 2021 report and Snowflake's Data Governance Partner of the Year.


About Alation

Alation is the leader in enterprise data intelligence solutions including data search & discovery, data governance, data stewardship, analytics, and digital transformation. Alation's initial offering dominates the data catalog market. Thanks to its powerful Behavioral Analysis Engine, inbuilt collaboration capabilities, and open interfaces, Alation combines machine learning with human insight to successfully tackle even the most demanding challenges in data and metadata management. Nearly 300 enterprises drive data culture, improve decision making, and realize business outcomes with Alation including AbbVie, American Family Insurance, Cisco, Exelon, Fifth Third Bank, Finnair, Munich Re, NASDAQ, New Balance, Parexel, Pfizer, US Foods and Vistaprint. Headquartered in Silicon Valley, Alation was named to Inc. Magazine's Best Workplaces list and is backed by leading venture capitalists including Blackstone, Costanoa, Data Collective, Dell Technologies, Icon, ISAI Cap, Riverwood, Salesforce, Sanabil, Sapphire, and Snowflake Ventures. For more information, visit alation.com.


AI in the courts – The Indian Express

Written by Kartik Pant

Artificial Intelligence (AI) seems to be catching the attention of a large section of people, no doubt because of the infinite possibilities it offers. It draws on, contributes to, and poses challenges for almost all disciplines, including philosophy, cognitive science, economics, law, and the social sciences. AI and Machine Learning (ML) have a multiplier effect on the efficiency of any system or industry. If used effectively, these technologies can bring about incremental changes and transform the ecosystem of several sectors. However, before applying such technology, it is important to identify the problems and challenges within each sector and develop specific modalities for how the AI architecture will have the highest impact.

In the justice delivery system, there are multiple spaces where the application of AI can have a deep impact. It has the capacity to reduce pendency and incrementally improve processes. Recent National Judicial Data Grid (NJDG) figures show that 3,89,41,148 cases are pending at the district and taluka levels and 58,43,113 are still unresolved at the high courts. Such pendency has a spin-off effect that takes a toll on the efficiency of the judiciary and ultimately reduces people's access to justice.

The use of AI in the justice system depends on first identifying various legal processes where the application of this technology can reduce pendency and increase efficiency. The machine first needs to perceive a particular process and get information about the process under examination. For example, to extract facts from a legal document, the programme should be able to understand the document and what it entails. Over time, the machine can learn from experience, and as we provide more data, the programme learns and makes predictions about the document, thereby making the underlying system more intelligent every time. This requires the development of computer programmes and software that are highly complex and rely on advanced technologies. Additionally, there is a need for constant nurturing to reduce bias and improve learning.
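As a toy illustration of how a programme can "learn" from legal text (systems like SUPACE are far more elaborate), a simple classifier trained on a handful of invented case descriptions might look like this:

```python
# Minimal sketch of learning from legal text: tag short case descriptions
# by category. Training examples and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

case_summaries = [
    "dispute over property boundary between neighbours",
    "eviction notice challenged by the tenant",
    "cheque dishonoured, recovery of dues sought",
    "loan default and recovery proceedings initiated",
]
categories = ["property", "property", "recovery", "recovery"]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(case_summaries, categories)

# With more data, the predictions (and the underlying system) improve.
print(classifier.predict(["tenant refuses to vacate the leased premises"]))
```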

One such complex tool named SUPACE (Supreme Court Portal for Assistance in Court Efficiency) was recently launched by the Supreme Court of India. Designed to first understand judicial processes that require automation, it then assists the Court in improving efficiency and reducing pendency by encapsulating judicial processes that have the capability of being automated through AI.

Similarly, SUVAS is an AI system that can assist in the translation of judgments into regional languages. This is another landmark effort to increase access to justice. The technology, when applied in the long run to solve other challenges of translation in filing of cases, will reduce the time taken to file a case and assist the court in becoming an independent, quick, and efficient system.

Through these steps, the Supreme Court has become the global frontrunner in application of AI and Machine Learning into processes of the justice system. But we must remember that despite the great advances made by the apex court, the current development in the realm of AI is only scratching the surface.

Over time, as one understands and evaluates various legal processes, AI and related technologies will be able to automate and complement several tasks performed by legal professionals. It will allow them to invest more energy in creatively solving legal issues. It has the possibility of helping judges conduct trials faster and more effectively thereby reducing the pendency of cases. It will assist legal professionals in devoting more time in developing better legal reasoning, legal discussion and interpretation of laws.

However, the integration of these technologies will be a challenging task as the legal architecture is highly complex and technologies can only be auxiliary means to achieve legal justice. There is also no doubt that as AI technology grows, concerns about data protection, privacy, human rights and ethics will pose fresh challenges and will require great self-regulation by developers of these technologies. It will also require external regulation by the legislature through statute, rules and regulations, and by the judiciary through judicial review qua constitutional standards. But with increasing adoption of the technology, there will be more debates and conversations on these problems as well as their potential solutions. In the long run, all this would help in reducing the pendency of cases and improving the overall efficiency of the justice system.

The writer is founding partner, Prakant Law offices and a public policy consultant


Infrared cameras and artificial intelligence provide insight into boiling – MIT News

Boiling is not just for heating up dinner. It's also for cooling things down. Turning liquid into gas removes energy from hot surfaces, and keeps everything from nuclear power plants to powerful computer chips from overheating. But when surfaces grow too hot, they might experience what's called a boiling crisis.

In a boiling crisis, bubbles form quickly, and before they detach from the heated surface, they cling together, establishing a vapor layer that insulates the surface from the cooling fluid above. Temperatures rise even faster and can cause catastrophe. Operators would like to predict such failures, and new research offers insight into the phenomenon using high-speed infrared cameras and machine learning.

Matteo Bucci, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new work, published June 23 in Applied Physics Letters. In previous research, his team spent almost five years developing a technique in which machine learning could streamline relevant image processing. In the experimental setup for both projects, a transparent heater 2 centimeters across sits below a bath of water. An infrared camera sits below the heater, pointed up and recording at 2,500 frames per second with a resolution of about 0.1 millimeter. Previously, people studying the videos would have to manually count the bubbles and measure their characteristics, but Bucci trained a neural network to do the chore, cutting a three-week process to about five seconds. "Then we said, 'Let's see if, other than just processing the data, we can actually learn something from an artificial intelligence,'" Bucci says.

The goal was to estimate how close the water was to a boiling crisis. The system looked at 17 factors provided by the image-processing AI: the nucleation site density (the number of sites per unit area where bubbles regularly grow on the heated surface), as well as, for each video frame, the mean infrared radiation at those sites and 15 other statistics about the distribution of radiation around those sites, including how they're changing over time. Manually finding a formula that correctly weighs all those factors would present a daunting challenge. But "artificial intelligence is not limited by the speed or data-handling capacity of our brain," Bucci says. Further, machine learning is not biased by our preconceived hypotheses about boiling.

To collect data, they boiled water on a surface of indium tin oxide, by itself or with one of three coatings: copper oxide nanoleaves, zinc oxide nanowires, or layers of silicon dioxide nanoparticles. They trained a neural network on 85 percent of the data from the first three surfaces, then tested it on 15 percent of the data of those conditions plus the data from the fourth surface, to see how well it could generalize to new conditions. According to one metric, it was 96 percent accurate, even though it hadn't been trained on all the surfaces. "Our model was not just memorizing features," Bucci says. "That's a typical issue in machine learning. We're capable of extrapolating predictions to a different surface."
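The evaluation idea, training on some surfaces and testing on data the model has never seen, can be sketched as follows. The 17 features and labels here are random placeholders rather than the actual infrared statistics, and the split is simplified relative to the study's 85/15 protocol.

```python
# Sketch of held-out-surface evaluation with 17 input features per sample.
# All data are synthetic placeholders, not the infrared measurements.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def make_surface(n=500, shift=0.0):
    """Fake dataset for one surface: 17 boiling statistics and a crisis label."""
    X = rng.normal(shift, 1.0, size=(n, 17))
    y = (X[:, :3].sum(axis=1) + rng.normal(0, 1.0, n) > 0).astype(int)
    return X, y

surfaces = [make_surface(shift=s) for s in (0.0, 0.1, -0.1, 0.2)]
X_train = np.vstack([X for X, _ in surfaces[:3]])          # three coated surfaces
y_train = np.concatenate([y for _, y in surfaces[:3]])
X_test, y_test = surfaces[3]                                # unseen fourth surface

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print("AUC on the held-out surface:", round(auc, 2))
```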

The team also found that all 17 factors contributed significantly to prediction accuracy (though some more than others). Further, instead of treating the model as a black box that used 17 factors in unknown ways, they identified three intermediate factors that explained the phenomenon: nucleation site density, bubble size (which was calculated from eight of the 17 factors), and the product of growth time and bubble departure frequency (which was calculated from 12 of the 17 factors). Bucci says models in the literature often use only one factor, but this work shows that we need to consider many, and their interactions. "This is a big deal."

"This is great," says Rishi Raj, an associate professor at the Indian Institute of Technology at Patna, who was not involved in the work. Boiling has such complicated physics. It involves at least two phases of matter, and many factors contributing to a chaotic system. "It's been almost impossible, despite at least 50 years of extensive research on this topic, to develop a predictive model," Raj says. "It makes a lot of sense to use the new tools of machine learning."

Researchers have debated the mechanisms behind the boiling crisis. Does it result solely from phenomena at the heating surface, or also from distant fluid dynamics? This work suggests surface phenomena are enough to forecast the event.

Predicting proximity to the boiling crisis doesn't only increase safety. It also improves efficiency. By monitoring conditions in real-time, a system could push chips or reactors to their limits without throttling them or building unnecessary cooling hardware. "It's like a Ferrari on a track," Bucci says: "You want to unleash the power of the engine."

In the meantime, Bucci hopes to integrate his diagnostic system into a feedback loop that can control heat transfer, thus automating future experiments, allowing the system to test hypotheses and collect new data. "The idea is really to push the button and come back to the lab once the experiment is finished." Is he worried about losing his job to a machine? "We'll just spend more time thinking, not doing operations that can be automated," he says. In any case: "It's about raising the bar. It's not about losing the job."


COVID: Artificial intelligence in the pandemic – DW (English)

If artificial intelligence is the future, then the future is now. This pandemic has shown us just how fast artificial intelligence, or AI, works and what it can do in so many different ways.

Right from the start, AI has helped us learn about SARS-CoV-2, the virus that causes COVID-19 infections.

It's helped scientists analyse the virus' genetic information, its DNA, at speed. DNA is the stuff that makes the virus, indeed any living thing, what it is. And if you want to defend yourself, you had better know your enemy.

AI has also helped scientists understand how fast the virus mutates and helped them develop and test vaccines.

We won't be able to get into all of it; this is just an overview. But let's start by recapping the basics about AI.

An AI is a set of instructions that tells a computer what to do, from recognizing faces in the photo albums on our phones to sifting through huge dumps of data for that proverbial needle in a haystack.

People often call them algorithms. It sounds fancy but an algorithm is nothing more than a static list of rules that tells a computer: "If this, then that."

A machine learning (ML) algorithm, meanwhile, is the kind of AI that many of us like to fear. It's an AI that can learn from the things it reads and analyzes and teach itself to do new things. And we humans often feel like we can't control or even know what ML algorithms learn. But actually, we can because we write the original code. So you can afford to relax. A bit.

In summary, AIs and MLs are programs that let us process lots and lots of information, a lot of it "raw" data, very fast. They are not all evil monsters out to kill us or steal our jobs; not necessarily, anyway.
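The difference between the two ideas can be shown in a few lines; the fever threshold and the handful of training points below are invented.

```python
# A hand-written "if this, then that" rule versus a rule learned from data.
from sklearn.tree import DecisionTreeClassifier

def rule_based(temperature_c: float) -> str:
    # static algorithm: the threshold is fixed by the programmer
    return "fever" if temperature_c >= 38.0 else "normal"

# machine learning: the threshold is inferred from labelled examples instead
temperatures = [[36.5], [36.9], [37.2], [38.1], [38.6], [39.3]]
labels = ["normal", "normal", "normal", "fever", "fever", "fever"]
learned = DecisionTreeClassifier().fit(temperatures, labels)

print(rule_based(38.4), learned.predict([[38.4]])[0])   # both say "fever"
```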

With COVID-19, AI and ML may have helped save a few lives. They have been used in diagnostic tools that read vast numbers of chest X-rays faster than any radiologist. That's helped doctors identify and monitor COVID patients.

In Nigeria, the technology has been used at a very basic but practical level to help people assess their risk of getting infected. People answer a series of questions online and, depending on their answers, are offered remote medical advice or redirected to a hospital.

The makers, a company called Wellvis, say it has reduced the number of people calling disease control hotlines unnecessarily.

One of the most important things we've had to handle is finding out who is infected fast. And in South Korea, artificial intelligence gave doctors a head start.

Way back when the rest of the world was still wondering whether it was time to go into the first lockdown, a company in Seoul used AI to develop a COVID-19 test in mere weeks. It would have taken them months without AI.

It was "unheard of," said Youngsahng "Jerry" Suh, head of data science and AI development at the company, Seegene, in an interview with DW.

Seegene's scientists ordered raw materials for the kits on January 24 and by February 5, the first version of the test was ready.

It was only the third time the company had used its supercomputer and Big Data analysis to design a test.

But they must have done something right because by mid-March 2020, international reports suggested that South Korea had tested 230,000 people.

And, at least for a while, the country was able to keep the number of new infections per day relatively flat.

"And we're constantly updating that as new variants and mutations come to light. So, that allows our machine learning algorithm to detect those new variants as well," says Suh.

One of the other major issues we've had to handle is tracking how the disease, especially new variants and their mutations, spreads through a community and from country to country.

In South Africa, researchers used an AI-based algorithm to predict future daily confirmed cases of COVID-19.

It was based on historical data from South Africa's past infection history and other information, such as the way people move from one community to another.
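A stripped-down version of this kind of forecasting, predicting today's case count from the previous week's counts, might look like the sketch below; the case series is synthetic, and the consortium's real models also fold in mobility and other data.

```python
# Toy forecast of daily confirmed cases from a lagged history of case counts.
# The epidemic curve below is simulated, not South African data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
days = np.arange(120)
cases = 200 + 150 * np.sin(days / 20) + rng.normal(0, 20, days.size)  # fake wave

LAG = 7                                   # predict today from the past week
X = np.array([cases[i - LAG:i] for i in range(LAG, len(cases))])
y = cases[LAG:]

model = LinearRegression().fit(X[:-14], y[:-14])     # hold out the last two weeks
forecast = model.predict(X[-14:])
print("mean absolute error on held-out days:",
      round(float(np.abs(forecast - y[-14:]).mean()), 1))
```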

In May, they say they showed the country had a low risk of a third wave of the pandemic.

"People thought the beta variant was going to spread around the continent and overwhelm our health systems, but with AI we were able to control that," says Jude Kong, who leadsthe Africa-Canada Artificial Intelligence and Data Innovation Consortium.

The project is a collaboration between Wits University and the Provincial Government of Gauteng in South Africa and York University in Canada, where Kong, who comes from Cameroon, is an assistant professor.

Kong says "data is very sparse in Africa" and one of the problems is getting over the stigma attached to any kind of illness, whether it's COVID, HIV, Ebola or malaria.

But AI has helped them "reveal hidden realities" specific to each area, and that's informed local health policies, he says.

They have deployed their AI modelling in Botswana, Cameroon, Eswatini, Mozambique, Namibia, Nigeria, Rwanda, South Africa, and Zimbabwe.

"A lot of information is one-dimensional," Kong says. "You know the number of people entering a hospital and those that get out. But hidden below that is their age, comorbidities, and the community where they live. We reveal that with AI to determine how vulnerable they are and inform policy makers."

Other types of AI, similar to facial recognition algorithms, can be used to detect infected people, or those with elevated temperatures, in crowds. And AI-driven robots can clean hospitals and other public spaces.

But, beyond that, there are experts who say AI's potential has been overstated.

They include Neil Lawrence, a professor of machine learning at the University of Cambridge who was quoted in April 2020, calling out AI as "hyped."

It was not surprising, he said, that in a pandemic, researchers fell back on tried and tested techniques, like simple mathematical modelling. But one day, he said, AI might be useful.

That was only 15 months ago. And look how far we've come.

That's how to do it: If humans have COVID-19, dogs had better cuddle with their stuffed animals. Researchers from Utrecht in the Netherlands took nasal swabs and blood samples from 48 cats and 54 dogs whose owners had contracted COVID-19 in the last 200 days. Lo and behold, they found the virus in 17.4% of cases. Of the animals, 4.2% also showed symptoms.

About a quarter of the animals that had been infected were also sick. Although the course of the illness was mild in most of the animals, three were considered to be severe. Nevertheless, medical experts are not very concerned. They say pets do not play an important role in the pandemic. The biggest risk is human-to-human transmission.

The fact that cats can become infected with coronaviruses has been known since March 2020. At that time, the Veterinary Research Institute in Harbin, China, had shown for the first time that the novel coronavirus can replicate in cats. The house tigers can also pass on the virus to other felines, but not very easily, said veterinarian Hualan Chen at the time.

But cat owners shouldn't panic. Felines quickly form antibodies to the virus, so they aren't contagious for very long. Anyone who is acutely ill with COVID-19 should temporarily restrict outdoor access for domestic cats. Healthy people should wash their hands thoroughly after petting strange animals.

Should this pet pig keep a safe distance from the dog when walking in Rome? That question may now also have to be reassessed. Pigs hardly come into question as carriers of the coronavirus, the Harbin veterinarians argued in 2020. But at that time they had also cleared dogs of suspicion. Does that still apply?

Nadia, a four-year-old Malaysian tiger, was one of the first big cats to be detected with the virus in 2020 at a New York zoo. "It is, to our knowledge, the first time a wild animal has contracted COVID-19 from a human," the zoo's chief veterinarian told National Geographic magazine.

It is thought that the virus originated in the wild. So far, bats are considered the most likely first carriers of SARS-CoV-2. However, veterinarians assume there must have been another species as an intermediate host between them and humans in Wuhan, China, in December 2019. Only which species this could be is unclear.

This racoon dog is a known carrier of the SARS viruses. German virologist Christian Drosten spoke about the species being a potential virus carrier. "Racoon dogs are trapped on a large scale in China or bred on farms for their fur," he said. For Drosten, the racoon dog is clearly the prime suspect.

Pangolins are also under suspicion of transmitting the virus. Researchers from Hong Kong, China and Australia have detected a virus in a Malayan pangolin that shows striking similarities to SARS-CoV-2.

Hualan Chen also experimented with ferrets. The result: SARS-CoV-2 can multiply in these mustelids in the same way as in cats, and transmission between animals occurs via droplet infection. At the end of 2020, tens of thousands of mink had to be culled at fur farms worldwide because the animals had become infected with SARS-CoV-2.

Experts have given the all-clear for people who handle poultry, such as this trader in Wuhan, China, where scientists believe the first case of the virus emerged in 2019. Humans have nothing to worry about, as chickens are practically immune to the SARS-CoV-2 virus, as are ducks and other bird species.

Author: Fabian Schmidt

View original post here:

COVID: Artificial intelligence in the pandemic - DW (English)

The smart role of Artificial Intelligence in today's world – BL on Campus

Artificial Intelligence (AI) has been redefining society in ways we never anticipated. Technology accompanies us in every walk of life, from unlocking our smartphones to day-to-day activities such as online shopping, intelligent car dashboards and autonomous robots. Though the concept of AI was first discussed in the early 1950s, forming a basis for much of computer learning and complex decision-making, it is only recently, as processing huge amounts of data has become necessary, that this field of technology has picked up pace.

What is in the AI basket?

AI is not a single technology; rather, it is a science or field of study. It encompasses a constellation of statistical and computational methods, along with pre- and post-analysis techniques for handling structured and unstructured data. It is an interesting endeavour to replicate and simulate human intelligence through machine and deep learning platforms, natural language generation, virtual agents, text, voice and image recognition, AI-optimised hardware, robotic process automation, cognitive search systems and so on. Its goal is to use all of these technologies to build intelligent machines.

Growth of AI in India

AI is the tool of innovation being experimented with in almost all Indian domains, including healthcare, education, agriculture, finance, automobiles, energy, retail, manufacturing and scientific research, with autonomous discoveries already in place. In India, companies such as Walmart, Google, Microsoft, Amazon and Samsung are engaged in AI-based research and product offerings. Still, the country has a lot of potential to expand its research in this cutting-edge technology. Many of our educational, government and private institutes cradle and motivate AI researchers, innovations and start-ups.

The Government is encouraging the private sector and offers many opportunities through DST, Niti Aayog, IndiaAI and other programmes to create innovative technological solutions and to fund AI-based start-ups. These start-ups are concentrated in cities such as Bengaluru, Hyderabad, Ahmedabad, Mumbai and Delhi.

Why AI matters in today's scenario

AI, which emerged from the research world as a proof of concept, has been scaling up strategically with the pace of digitisation. AI is favoured for its large-scale data processing, its end-to-end efficiency in decoding complex processes, its improved accuracy and help in decision-making, and its intelligent offerings and smart services such as content and task automation. We can see its overwhelming development in healthcare, pharmaceuticals, scientific research and e-commerce.

The interactive applications of Google, DeepMind's AlphaFold, BenevolentAI, chatbots such as Clara and Zini, and platforms such as Aarogya Setu, Co-WIN, Amazon, Zomato and Swiggy are among the few proving to be our pandemic tech saviours.

Impact of AI in business

Business over the years has evolved from local corner shops to booming online shopping platforms. These modernised techniques not only make individual lives easier but also streamline business processes, improving consumer experience, sales forecasting and automated decision-making to meet business goals. Businesses work well when humans, machines and technologies are integrated to each other's benefit. Today's business world depends heavily on AI, cloud and big data technologies, of which e-commerce and m-commerce are the mainstream, with a great business impact globally. In sync with global developments in innovation and automation, India too has undergone a digital transformation over the last two decades. Technological developments have now gained pace faster than predicted; the pandemic played a great role in this quick transformation and adoption.

How to look for jobs in AI?

Today, Artificial Intelligence is a lucrative domain promising job growth in a competitive IT industry. Four out of five C-suite executives believe they need to speed up data processing and automation if their businesses are to survive, so recruiters look for advanced technical skills and extensive practical experience. AI skills have ranked among the fastest-growing job profiles in recent years. Prominent job roles include big data engineer, business intelligence developer, data scientist, data analyst, cyber analyst, AI/deep learning/machine learning engineer and computer vision specialist, along with equivalent research jobs.

How can one find a good job in AI? There are several avenues and opportunities: connecting with experts via LinkedIn, technical blogs, career fairs and company career sites. Tech talks given by companies at universities, and conclaves hosting academia-government-industry groups, will help you understand actual employment needs and goals. Aim to seek opportunities at government and industry-funded research labs during the early years of your higher education; this will help you nurture your skillset. Contribute to open-source projects and Stack Overflow, which will add value to your technical profile. Technical competitions such as hackathons, ideathons and makeathons will sharpen your innovative ideas along with the required life skills.

Globally, we are in a challenging situation today. All sectors are working on revamp strategies to rebalance the economy after Covid-19. AI will help revive the profitability and development of industry, and new and advanced opportunities are expected to open up.

AI is and will be driving a promising future in the new normal. It will be the main driver for emerging and new technologies. So, take an interdisciplinary approach to hone your skills in an ever-evolving field. Think big, start small, act fast.

(The writer is Professor & Chairperson, School of Computing, SRM Institute of Science and Technology.)

See the article here:

The smart role of Artificial Intelligence in todays world - BL on Campus

Which companies are leading the way for artificial intelligence in the aerospace, defence & security sector?

Fujitsu Ltd and Honeywell International Inc are leading the way for artificial intelligence investment among top aerospace, defense & security companies according to our analysis of a range of GlobalData data.

Artificial intelligence has become one of the key themes in the aerospace, defense & security sector of late, with companies hiring for more and more roles, making more deals, registering more patents and mentioning it more often in company filings.

These themes, of which artificial intelligence is one, are best thought of as any issue that keeps a CEO awake at night, and by tracking and combining them, it becomes possible to ascertain which companies are leading the way on specific issues and which are dragging their heels.

According to GlobalData analysis, Fujitsu Ltd is one of the artificial intelligence leaders in a list of high-revenue companies in the aerospace, defense & security industry, having advertised for 298 positions in artificial intelligence, made two deals related to the field, filed 375 patents and mentioned artificial intelligence three times in company filings between January 2020 and June 2021.

Our analysis classified nine companies as Most Valuable Players, or MVPs, due to their high number of new jobs, deals, patents and company filings mentions in the field of artificial intelligence. An additional seven companies are classified as Market Leaders and none as Average Players. Five more companies are classified as Late Movers due to their relatively lower levels of jobs, deals, patents and company filings in artificial intelligence.

For the purpose of this analysis, we've ranked the top companies in the aerospace, defense & security sector on each of the four metrics relating to artificial intelligence: jobs, deals, patents and company filings. The best-performing companies, those ranked at the top across all or most metrics, were categorised as MVPs, while the worst performers, those ranked at the bottom of most indicators, were classified as Late Movers.
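
GlobalData does not publish the exact scoring formula behind these labels, so the following is only a rough sketch of how a rank-and-bucket classification of this kind could be implemented. The metric choices, thresholds and sample figures are assumptions for illustration, not GlobalData's methodology.

```python
# Hypothetical sketch: rank companies on each AI metric, average the ranks,
# and bucket the result into MVP / Market Leader / Average Player / Late Mover.
from dataclasses import dataclass

@dataclass
class CompanyThemeActivity:
    name: str
    jobs: int       # AI-related job ads posted in the period
    deals: int      # AI-related deals announced
    patents: int    # AI-related patent filings
    filings: int    # mentions of AI in company filings

def classify(companies: list[CompanyThemeActivity]) -> dict[str, str]:
    """Rank every company on each metric, average the ranks, then bucket."""
    metrics = ("jobs", "deals", "patents", "filings")
    ranks: dict[str, list[int]] = {c.name: [] for c in companies}
    for metric in metrics:
        ordered = sorted(companies, key=lambda c: getattr(c, metric), reverse=True)
        for position, company in enumerate(ordered, start=1):
            ranks[company.name].append(position)

    labels = {}
    n = len(companies)
    for name, company_ranks in ranks.items():
        avg_rank = sum(company_ranks) / len(company_ranks)
        if avg_rank <= n * 0.25:
            labels[name] = "MVP"
        elif avg_rank <= n * 0.5:
            labels[name] = "Market Leader"
        elif avg_rank <= n * 0.75:
            labels[name] = "Average Player"
        else:
            labels[name] = "Late Mover"
    return labels

if __name__ == "__main__":
    # Placeholder companies and figures, invented purely to show the mechanics.
    sample = [
        CompanyThemeActivity("Company A", jobs=1500, deals=3, patents=300, filings=5),
        CompanyThemeActivity("Company B", jobs=900, deals=1, patents=50, filings=4),
        CompanyThemeActivity("Company C", jobs=200, deals=0, patents=10, filings=1),
        CompanyThemeActivity("Company D", jobs=5, deals=0, patents=0, filings=0),
    ]
    print(classify(sample))
```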

Raytheon Technologies Corp is spearheading the artificial intelligence hiring race, advertising for 1,677 new jobs between January 2020 and June 2021. The company reached peak hiring in October 2020, when it listed 270 new job ads related to artificial intelligence.

Honeywell International Inc followed Raytheon Technologies Corp as the second most proactive artificial intelligence employer, advertising for 920 new positions. Leidos Holdings Inc was third with 902 new job listings.

When it comes to deals, Northrop Grumman Corp leads with 11 new artificial intelligence deals announced from January 2020 to June 2021. The company was followed by Honeywell International Inc with four deals and Fujitsu Ltd with two.

GlobalData's Financial Deals Database covers hundreds of thousands of M&A contracts, private equity deals, venture finance deals, private placements, IPOs and partnerships, and it serves as an indicator of economic activity within a sector.

One of the most innovative aerospace, defense & security companies in recent months was Fujitsu Ltd, having filed 375 patent applications related to artificial intelligence since the beginning of last year. It was followed by The Boeing Co with 56 patents and Honeywell International Inc with 54.

GlobalData collects patent filings from 100+ countries and jurisdictions. These patents are then tagged according to the themes they relate to, including artificial intelligence, based on specific keywords and expert input. The patents are also assigned to a company to identify the most innovative players in a particular field.

Finally, artificial intelligence was a commonly mentioned theme in aerospace, defense & security company filings. Leidos Holdings Inc mentioned artificial intelligence six times in its corporate reports between January 2020 and June 2021. Lockheed Martin Corp. filings mentioned it six times and General Dynamics Corp mentioned it four times.

Methodology:

GlobalData's unique job analytics enable an understanding of hiring trends, strategies and predictive signals across sectors, themes, companies and geographies. Intelligent web crawlers capture data from publicly available sources. Key parameters include active, posted and closed jobs, posting duration, experience, seniority level, educational qualifications and skills.


View post:

Which companies are leading the way for artificial intelligence in the aerospace, defence & security sector? Companies leading the way for...

Navigating the Intersections of Data, Artificial Intelligence, and Privacy – JD Supra

Companies can expertly address AI-related privacy concerns with the right knowledge and team.

While the U.S. is figuring out privacy laws at the state and federal level, artificial and augmented intelligence (AI) is evolving and becoming commonplace for businesses and consumers. These technologies are driving new privacy concerns. Years ago, consumers feared a stolen Social Security number. Now, organizations can uncover political views, purchasing habits, and much more. The repercussions of data are broader and deeper than ever.

H5 recently convened a panel of experts to discuss these emerging issues and ways leaders can tackle their most urgent privacy challenges in the webinar "Everything Personal: AI and Privacy."

The panel featured Nia M. Jenkins, Senior Associate General Counsel, Data, Technology, Digital Health & Cybersecurity at Optum (UnitedHealth Group); Kimberly Pack, Associate General Counsel, Compliance, at Anheuser-Busch; Jennifer Beckage, Managing Director at Beckage; and Eric Pender, Engagement Manager at H5; and was moderated by Sheila Mackay, Managing Director, Corporate Segment at H5.

While the regulatory and technology landscape continues to change rapidly, the panel highlighted some key takeaways and solutions that leaders should consider to protect and manage sensitive data:

Build, nurture, and utilize cross-functional teams to tackle data challenges

Develop robust and well-defined workflows to work with AI technology

Understand the type and quality of data your organization collects and stores

Engage with experts and thought leadership to stay current with evolving technology and regulations

Collaborate with experts across your organization to learn the needs of different functions and business units and how they can deploy AI

Enable your companys innovation and growth by understanding the data, technology, and risks involved with new AI

While addressing challenges related to data and privacy certainly requires technical and legal expertise, the need for strong teamwork and knowledge sharing should not be overlooked. Nia Jenkins said her organization utilizes cross-functional teams, which can pull together privacy, governance, compliance, security, and other subject matter experts to gain a line of sight into the data that's coming in and going out of the organization.

"We also have an infrastructure where people are able to reach out to us to request access to certain data pools," Jenkins said. "With that team, we are able to think through, is it appropriate to let that team use the data for their intended purpose or use?"

In addition to collaboration, well-developed workflows are paramount too. Kimberly Pack explained that her company does have a formalized team that comes together on a bi-monthly basis and defined workflows that are improving daily. She emphasized that it all begins with having clarity about how business gets done.

Jennifer Beckage highlighted the need for an organization to develop a plan, build a strong team, and understand the type and quality of the data it collects before adopting AI. Businesses have to address data retention, cybersecurity, intellectual property, and many other potential risks before taking full advantage of AI technology.

Keeping up with a dynamic regulatory landscape requires expanding your information network. Pack was frank that it's too much for one person to learn on their own. She relies on following law firms, becoming involved in professional organizations and forums, and connecting with privacy professionals on LinkedIn. As she continually educates herself, she creates training for various teams at her organization, including human resources, procurement, and marketing.

"Really cascade that information," said Pack. "Really try to tailor the training so that it makes sense for people. Also, try to have tools and infographics, so people can use it, pass it along. Record all your trainings because everyone's not going to show up."

The panel discussed how their companies are using AI and whether there's any resistance. Pack noted her organization has carefully taken advantage of AI for HR, marketing, enterprise tools, and training. She noted that providing your teams with information and assistance is key to comfort and adoption.

"AI is just a tool, right?" Pack said. "It's not good, it's not bad." The privacy team conducts a privacy impact assessment to understand how the business can use the technology. Then her team places any necessary limitations and builds controls to ensure the team uses the technology ethically. Pack and Jenkins both noted that companies must proactively address potential bias and not allow automated decision-making.

The panel agreed organizations should adopt AI to remain competitive and meet consumer expectations. Pack pointed out the purpose of AI technology is for it to learn. Businesses adopting it now will see the benefits sooner than those that wait.

Eric Pender noted advanced technologies are becoming more common for particular uses: cybersecurity breach response, production of documents, including privilege review and identifying Personally Identifiable Information (PII), and defensible disposal. Many of these tasks have tight timelines and require efficiency and accuracy, which AI provides.

The risks of AI depend on the nature of the specific technology, according to Beckage. It's each organization's responsibility to perform a risk assessment, determine how to use the technology ethically, and perform audits to ensure the technology is working without unintended consequences.

It is also important to remember that in-house and outside counsel don't have to be dream killers when it comes to innovation. Lawyers with a good understanding of their company's data, technology, and ways to mitigate risk can guide their businesses in taking advantage of AI now and years down the road.

Pack encouraged compliance professionals to enjoy the problem-solving process. "Continue to know your business. Be in front of what their desires are, what their goals are, what their dreams are, so that you can actively support that," she said.

Pender says companies are shifting from a reactive approach to a proactive one, and advised that data that has been defensibly disposed of is not a risk to the company. Though implementing AI technology is complex and challenging, managing sensitive, personal data is achievable, and the potential benefits are enormous.

Jenkins encouraged the four Bs: be aware of the data, be collaborative with your subject matter experts, be willing to learn and ask tough questions of your team, and be open to learning more about the product, what's happening with your business team, and privacy in an ever-changing landscape.

Beckage closed out the webinar by warning organizations not to reinvent the wheel. While it's risky to copy another organization's privacy policy word for word, organizations can learn from the people in the privacy space who know what they're doing well.

See more here:

Navigating the Intersections of Data, Artificial Intelligence, and Privacy - JD Supra

Companies leading the way for artificial intelligence in the power sector – Power Technology

Siemens AG and Vestas Wind Systems AS are leading the way for artificial intelligence investment among top power companies according to our analysis of a range of GlobalData data.

Artificial intelligence has become one of the key themes in the power sector of late, with companies hiring for more and more roles, making more deals, registering more patents and mentioning it more often in company filings.

These themes, of which artificial intelligence is one, are best thought of as any issue that keeps a CEO awake at night, and by tracking and combining them, it becomes possible to ascertain which companies are leading the way on specific issues and which are dragging their heels.

According to GlobalData analysis, Siemens AG is one of the artificial intelligence leaders in a list of high-revenue companies in the power industry, having advertised for 1,397 positions in artificial intelligence, made zero deals related to the field, filed 81 patents and mentioned artificial intelligence once in company filings between January 2020 and June 2021.

Our analysis classified two companies as Most Valuable Players, or MVPs, due to their high number of new jobs, deals, patents and company filings mentions in the field of artificial intelligence. An additional seven companies are classified as Market Leaders and one as an Average Player. Another 11 companies are classified as Late Movers due to their relatively lower levels of jobs, deals, patents and company filings in artificial intelligence.

For the purpose of this analysis, we've ranked the top companies in the power sector on each of the four metrics relating to artificial intelligence: jobs, deals, patents and company filings. The best-performing companies, those ranked at the top across all or most metrics, were categorised as MVPs, while the worst performers, those ranked at the bottom of most indicators, were classified as Late Movers.

Siemens AG is spearheading the artificial intelligence hiring race, advertising for 1,397 new jobs between January 2020 and June 2021. The company reached peak hiring in February 2021, when it listed 134 new job ads related to artificial intelligence.

E.ON SE followed Siemens AG as the second most proactive artificial intelligence employer, advertising for 446 new positions. Schneider Electric SE was third with 178 new job listings.

When it comes to deals, Chubu Electric Power Co Inc leads with one new artificial intelligence deal announced from January 2020 to June 2021.

GlobalData's Financial Deals Database covers hundreds of thousands of M&A contracts, private equity deals, venture finance deals, private placements, IPOs and partnerships, and it serves as an indicator of economic activity within a sector.

One of the most innovative power companies in recent months was Siemens AG, having filed 81 patent applications related to artificial intelligence since the beginning of last year. It was followed by Vestas Wind Systems AS with three patents and Electricite de France SA with one.

GlobalData collects patent filings from 100+ countries and jurisdictions. These patents are then tagged according to the themes they relate to, including artificial intelligence, based on specific keywords and expert input. The patents are also assigned to a company to identify the most innovative players in a particular field.

Finally, artificial intelligence was a commonly mentioned theme in power company filings. Schneider Electric SE mentioned artificial intelligence four times in its corporate reports between January 2020 and June 2021. Centrica Plc filings mentioned it twice, and Southern Co also mentioned it twice.

Methodology:

GlobalData's unique job analytics enable an understanding of hiring trends, strategies and predictive signals across sectors, themes, companies and geographies. Intelligent web crawlers capture data from publicly available sources. Key parameters include active, posted and closed jobs, posting duration, experience, seniority level, educational qualifications and skills.


View post:

Companies leading the way for artificial intelligence in the power sector - Power Technology

Military researchers ask for artificial intelligence (AI) and machine learning able to share experiences – Military & Aerospace Electronics

ARLINGTON, Va. U.S. military researchers are asking industry to devise a new kind of artificial intelligence (AI) computer programming that enables computers not only to learn from their experiences, but also to share their experiences with other computers.

Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., issued an artificial intelligence exploration opportunity on Monday (DARPA-PA-20-02-11) for the Shared-Experience Lifelong Learning (ShELL) project.

ShELL seeks to advance computer science in lifelong learning by computers that share experiences with each other. Lifelong learning is a relatively new area of machine learning research, in which computers continually learn as they encounter varying conditions and tasks while deployed in the field.

This differs from the train-then-deploy process for typical machine learning systems, which often results in unpredictable outcomes; catastrophic forgetting of previously learned knowledge; and the inability to execute new tasks effectively, if at all.

Related: IBM blending cloud computing and artificial intelligence to fuse data in decision-making

Current lifelong learning research assumes one independent computer that learns from its own actions and surroundings; it has not considered populations of lifelong learning computers that benefit from each other's experiences.

The total award value for the combined phase-one base and phase-two option is limited to $1 million per proposal.

Algorithms used for lifelong learning typically require large amounts of computing resources, including server farms, graphics processing units (GPUs), and other resource-consuming hardware, and typically do not have to address communication resource limitations.

The Shared-Experience Lifelong Learning (ShELL) program extends current lifelong learning approaches to large numbers of originally identical computers. When these computers are deployed, they may encounter different input and environmental conditions, execute variants of a task, and therefore learn different lessons.

Related: Military cyber security: threats and solutions

Other computers could benefit if one computer could share what it has learned with the other computers. Such sharing of experiences could reduce the amount of training required by any individual computer.

ShELL is distinct from approaches that reward a federation of computers for collaborating or competing on a common global task, either by dividing the task into pieces, by assembling alternative approaches to the same task, or by evolving specialized roles.

ShELL rewards computers individually according to their performance on their own tasks using lessons learned from their own actions combined with those acquired from other computers.

Related: Deployable simulation and training

ShELL has three core challenges: determining what knowledge should be shared and incorporated; deciding when and how computers should share their knowledge; and developing lifelong learning algorithms that account for the size, weight, computing, and communications constraints of the platforms supporting each learning computer.
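
To make the sharing idea concrete, here is a minimal sketch, not DARPA's design, of agents that keep learning on their own data streams while periodically merging compact "lessons" (here, simply model weights) broadcast by peers. All class and variable names are hypothetical.

```python
# Toy shared-experience lifelong learning: each agent trains a local linear
# model on its own stream and occasionally blends in weights shared by peers.
import numpy as np

class LifelongAgent:
    def __init__(self, n_features: int, lr: float = 0.01):
        self.w = np.zeros(n_features)   # the agent's local model
        self.lr = lr

    def learn(self, x: np.ndarray, y: float) -> None:
        """One local gradient step on the agent's own experience."""
        error = float(x @ self.w) - y
        self.w -= self.lr * error * x

    def share(self) -> np.ndarray:
        """What this agent broadcasts: here, simply a copy of its weights."""
        return self.w.copy()

    def incorporate(self, peer_weights: list[np.ndarray], mix: float = 0.2) -> None:
        """Blend peers' lessons into the local model without overwriting it."""
        if peer_weights:
            self.w = (1 - mix) * self.w + mix * np.mean(peer_weights, axis=0)

# Toy deployment: three agents see different noisy views of the same task.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
agents = [LifelongAgent(3) for _ in range(3)]

for step in range(500):
    for agent in agents:
        x = rng.normal(size=3)
        y = float(x @ true_w) + rng.normal(scale=0.1)
        agent.learn(x, y)
    if step % 50 == 0:                      # periodic, bandwidth-limited sharing
        broadcasts = [a.share() for a in agents]
        for i, agent in enumerate(agents):
            agent.incorporate(broadcasts[:i] + broadcasts[i + 1:])

print([np.round(a.w, 2) for a in agents])   # all three converge near true_w
```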

DARPA researchers say they would like to award a ShELL contract by late September. Companies interested should upload proposals no later than 27 July 2021 to the DARPA BAA portal at https://baa.darpa.mil.

Email questions or concerns to Ted Senator, the ShELL program manager, at ShELL@darpa.mil. More information is online at https://sam.gov/opp/1afbf600f2e04b26941fad352c08d1f1/view.

See the original post here:

Military researchers ask for artificial intelligence (AI) and machine learning able to share experiences - Military & Aerospace Electronics

The Evolution of the Judiciary in the Age of Technology | Artificial Intelligence and the Delivery of Justice – Lexology

Judges are human. It is only natural that, like others in society, judges may have and are indeed entitled to their own personal views and beliefs. However, a judge must decide cases objectively and professionally, independent of his own personal views or beliefs, political or otherwise

- The Honourable Chief Justice Andrew Cheung

Introduction

The use of artificial intelligence (A.I.) in Courts to render justice has been theorized in science fiction since the dawn of the digital age. In an age where the impartiality of judges is often challenged, it is easy to understand why humanity might opt to surrender difficult decisions to A.I. systems, which are devoid of emotion.

As with any application of technology to a specific task, there are of course advantages and disadvantages.

What is A.I.?

According to John McCarthy, the famed computer and cognitive scientist who has been credited with coining the term "artificial intelligence", A.I. is defined as:

allowing a machine to behave in such a way that it would be called intelligent if a human being behaved in such a way

- John McCarthy

Integral to the operation of A.I. is therefore the availability of big data (e.g. collated judgments) and the ability to process such raw big data into actionable knowledge. In short, A.I. is:

Collection of Big Data → Processing into Knowledge → Action through a Logic Engine

As we enter into the new decade, access to big data is very much a reality. Quantum computing that will enable actionability of knowledge gleaned from such collected big data is also very much a reality.

Application of A.I., Big Data & Knowledge in Computer-Assisted Courts

It is trite that the administration of justice means the delivery of justice on a case by case basis. Each matter brought before a Judge must be decided on its individual facts and merits. Regardless of the subject matter in question, the work of a presiding justice is to process the information that the parties bring before a Court.

It is noteworthy that not all decisions which require the exercise of judicial powers are complex. Default judgments requiring a declaration of the Court (e.g. Order 19 rule 7 applications) and summary judgments are all matters which can be dealt with without the need for an actual hearing. Where the matter is overly complex, such applications will be deemed inappropriate and dismissed in any event (a process which can of course be undertaken by a logic engine).

Similarly, simple criminal cases (e.g. traffic violations) where fixed penalties are the norm can be handled by A.I. (subject to human review if the situation so warrants).

It cannot be stressed enough that technology has much potential to ease the backlog of cases in our judiciary as well as to achieve judicial economy.

Existing Technology

It should be noted that the application of A.I. in judicial practice has already taken shape in various parts of the world. For example, in recent research done in the European Union, A.I. prediction of the verdicts of cases heard at the European Court of Human Rights achieved an accuracy of 79%. The technology therefore already exists!
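
The published ECHR work trained a classifier on textual features of case documents; the sketch below shows only the general shape of such a pipeline. The case texts, labels and model choice are placeholders, not the study's actual data or code.

```python
# Hedged sketch of verdict prediction as text classification: n-gram features
# feeding a linear support vector machine, on invented miniature data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

cases = [
    "the applicant alleged unlawful detention without judicial review",
    "the applicant complained about the length of civil proceedings",
    "the government provided an effective domestic remedy in good time",
    "the domestic courts examined the complaint promptly and thoroughly",
]
labels = ["violation", "violation", "no violation", "no violation"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(cases, labels)

print(model.predict(["the applicant was detained without review for months"]))
```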

Hong Kong's Lag in Legal Technology Adaptation

As mentioned above, in order for A.I. to work properly, big data is a condition precedent. One of the hurdles that Hong Kong will undoubtedly encounter is the fact that much of our legal profession is still paper-based. The digitization of our judicial process is therefore essential if we are to have an environment that will be accommodating to A.I. adaptation.

The Need of a Human Heart

Worlds governed by artificial intelligence often learned a hard lesson: Logic doesn't care. Yin-Man Wei

- Quote from the sci-fi series Andromeda

Whilst an A.I.-assisted judiciary will undoubtedly have much value in assisting the way justice is rendered, it should be noted that the beauty of the Common Law lies in its emphasis on equity and conscionability.

Whilst the outsourcing of justice to A.I. may seem attractive at first glance, overly stringent application of the law is also known to have caused injustice. The acquittal of O.J. Simpson, for example, has often been criticized on the basis that whilst procedural justice was achieved, the same cannot be said with certainty of moral justice. The fact remains that the human heart will always be the last bulwark of justice. Many judges will agree:

sentencing is the most difficult part of the job

Further, given that A.I. is still, at the time of writing at least, a novel technology which remains to be proven, caution dictates that it is better to have an A.I.-assisted judiciary (which we should do everything to strive for) rather than an A.I.-presided one.

Conclusion

To take things to the next step, we must therefore be mindful of what A.I. can, and cannot, do for us in the decade ahead.

This article is co-authored by Joshua Chu from ONC Lawyers.

See the original post:

The Evolution of the Judiciary in the Age of Technology | Artificial Intelligence and the Delivery of Justice - Lexology

How artificial intelligence is propelling the energy transition forward – Innovation Origins

Artificial intelligence (AI) is everywhere. You can use it to find the quickest way to a store. Netflix dishes out new series to you based on your viewing history. And even the word suggestions you get when typing an email are generated with AI. From healthcare to finance, AI is gaining a foothold in more and more areas. Artificial intelligence can also play an important role in the energy transition. But what should this look like? And what is the added value of AI?

Remy Gieling attempts to get answers to these and many other questions during Studio Connect: AI and Future Energy. Gieling is not only the moderator but also the founder of the platform AI.nl, where he aims to make the impact of AI on jobs and businesses easier to grasp. On July 13 next week, he will talk to various experts from the energy sector to find out what impact smart algorithms are having on the energy transition.

Studio Connect is an initiative of the AI hub Middle-Netherlands to share the latest developments in the field of AI. In each edition, entrepreneurs and experts talk about their own experiences or the latest developments on a particular topic concerning AI. Click here to watch the last edition about AI and health.

In the AI hub Middle-Netherlands, knowledge institutes, companies and public parties come together to gather and share knowledge around AI. They work together on challenges for the region that focus on e.g. media and culture, healthcare, mobility, energy, construction, fintech and digital services. The hub is set up by ROM Utrecht and is part of the action agenda of the Dutch AI Coalition. Read more about it here or register for Studio Connect.


It works like this: through a smart system, energy consumption is suspended when the price of electricity is high and resumes once the price drops back down again. This usually happens when a lot of wind or solar energy becomes available in the grid. Supply and demand are thus better matched and the grid remains more in balance.
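
As an illustration only, the price-following behaviour described here can be reduced to a simple rule; the threshold and prices below are placeholder values, not Friday Energy's actual algorithm.

```python
# Minimal demand-response sketch: pause flexible loads while the electricity
# price is above a threshold and resume once it drops back down.
def schedule_flexible_load(hourly_prices_eur_mwh, threshold=60.0):
    """Return one on/off decision per hour for a deferrable load."""
    return ["pause" if price > threshold else "run" for price in hourly_prices_eur_mwh]

prices = [55, 48, 92, 110, 70, 41, 35, 58]   # example day-ahead prices
print(schedule_flexible_load(prices))
# ['run', 'run', 'pause', 'pause', 'pause', 'run', 'run', 'run']
```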

One of the experts joining us next week is Bouke Siebenga from Friday Energy. This Utrecht-based company offers companies a smart battery system for storing electricity from solar panels. The self-learning software determines for each company the best moment to use green power. Or to even store it for later use. This saves companies energy and connection costs. In addition, all batteries from Friday Energy are connected digitally. Users can share energy with each other via a smart algorithm. All connected companies form a mini power plant this way.

According to Siebenga, AI is going to enable the supply and demand of energy to be matched more effectively. This will allow for much smarter use of the existing capacity in the electricity grid. Green power is variable; the amount of energy depends on the number of hours of sunlight or wind power. In the present system, this leads to imbalances and fluctuating energy prices. By using AI, you are able to rebalance the grid. This can even limit the investments needed to expand grid capacity. A concept such as demand response is a good example of this, he explains.

But AI on its own is not going to solve the fluctuations in green power. This also calls for storage systems so that sustainable energy can be used at a later time. According to Siebenga, energy storage combined with artificial intelligence is going to play an important role in the transition. By storing self-generated solar power, companies will be able to scale down their capacity. By using this energy during peak hours, they avoid high tariffs. There are also companies that are located in an area where the grid is congested. They are not allowed to feed energy back into the grid, or only at limited times. With our smart software, we can anticipate this and choose the most favorable time for feeding back energy into the grid.

How does this contribute to the energy transition? Siebenga asserts that this is not so difficult to explain. First of all, with our solution we enable entrepreneurs to use green power from their solar panels as much as possible themselves. They are less dependent on the electricity grid this way. What's more, all Friday Batteries are digitally connected via the cloud. Users with a surplus of solar energy can share it with someone who has an empty battery. This simply goes through the public grid. We use a smart distribution key in order to allocate the energy. That takes into account all kinds of factors such as weather, grid load, energy prices and information about other Friday Batteries. This helps the grid operator to optimize the grid.

Within 10 years, Friday Energy wants to be able to supply a city like Utrecht with power for 10 minutes with its virtual battery. Siebenga: The grid just can't handle 100 percent sustainable energy. You can compensate for that with batteries. The capacity required to supply Utrecht with power for 10 minutes is, of course, just a drop in the ocean. But it is something we are working towards. After all, if we can connect more smart batteries together, that will lead to greater use of green energy and a better balance in the public grid.

In order to be able to link more different types of systems with each other, it is important that these devices speak the same language. Siebenga is now noticing that this still sometimes causes problems. A battery from one brand sometimes does not work together with solar panels from another brand. We are now working on a kind of standard that works independently of the supplier or type of device. This will allow our smart software to work on all kinds of different systems. And we can, for example, install charging stations in places where the grid connection is still too small.

Ultimately, Siebenga believes that the key to the energy transition lies with smart systems. Such a system must know when the best time is to charge a battery. It must be able to switch quickly with any changes in the weather conditions, for example. Digitalization has taken off. Computers, sensors and other equipment have become faster and more powerful.

Battery technology is also continuing to evolve. I think technology has developed far enough to make the energy transition possible. But where there is still much to be gained is in AI applications. For instance, we can use smart algorithms to predict what energy prices will do next, so that batteries can respond accordingly. Ultimately, it's not about which battery technology we're going to use. It is much more important that a battery knows when it needs to store extra energy, for example because the sun is not shining at some point or because the grid is overloaded.

Crossroads 2021: Want to look beyond AI? In the fall, StartupUtrecht and ROM Utrecht Region are organizing Crossroads 2021 together with a host of partners. From the 8th to the 11th of November, Utrecht will be the setting for all kinds of events for start-ups and scale-ups. This year's theme is MEET > MATCH > MULTIPLY. How do you grow as a start-up? Where do you get funding from? This will all be discussed during interactive sessions or keynote speeches on topics such as sustainability, health and funding. Find more information here.

See more here:

How artificial intelligence is propelling the energy transition forward - Innovation Origins

Content management and Artificial Intelligence: the future of ContentOps – ITProPortal

Artificial intelligence (AI) is eating the world, one boring, routine task at a time.

From navigation apps using AI to crunch a bunch of data at a super-fast speed to determine the best and fastest route from A to B, or automatic spam filters and categorizations that make email more manageable, AI is truly ubiquitous.

It was only a matter of time before AI applications in the content management space arose. And when it comes to content ops, the combination of content management and artificial intelligence is a great tool for giving workers back the time they need to perform more complex tasks that still require a human brain.

AI excels at understanding vast pools of data and automating routine tasks. This is typically orientated towards objectives of improving consumer experiences, saving the time and money invested in routine processes, and even exposing patterns that can uncover new revenue opportunities.

These are typically seen in a content operations workflow in four areas:

1. Smart Content Analysis

AI can analyze a piece of content to identify its sentiment and overall tone, very quickly. This is important for helping content managers determine whether a piece of content is right for their audience or if it needs tweaking before it will truly engage the intended consumer. IBM Watson, for example, uses AI to automate content categorization, text labeling, sentiment analysis, keyword extraction, and more.
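
For a sense of what such analysis looks like in code, here is a minimal sketch using the open-source Hugging Face transformers library rather than any particular vendor's product; the sample sentences are invented.

```python
# Minimal sentiment/tone analysis sketch with a pretrained pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default model on first use

drafts = [
    "Our new release makes setup painless and fast.",
    "Customers reported repeated crashes and slow support responses.",
]
for text, result in zip(drafts, classifier(drafts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```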

2. Automatic Image Tagging

A picture still tells a thousand words. Images enhance content and increase engagement. Unfortunately, there is almost nothing less engaging for workers than manually tagging image after image for search and SEO purposes. But it is still a supremely important task. And that is what makes it a great job for AI.

AI-powered automated image recognition is now smart enough to tag images in a matter of seconds, letting content workers get back to deeper work instead of routine classification.

3. Scalable Personalisation and Predictions

AI also brings scalability to another important but nearly impossible task for human staff: tracking and making use of individual user behavior.

AI can automate the process of watching what each user on a website or app is doing simultaneously. Then, it can compile this data to look for patterns that will help it predict, based on past behavior, what each user might want next.

This information can dramatically improve personalization efforts, from serving dynamic content to making product recommendations and more. And improving personalization has never been more important. In the words of the management consulting firm McKinsey, "Personalization will be the prime driver of marketing success within five years." In fact, the firm found that leaders in personalization were already able to increase revenue by 5 to 15 percent and improve efficiency on marketing spend by 10 to 30 percent. Achieving that kind of improvement automatically is a massive competitive advantage.
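
A hedged sketch of the behaviour-based prediction described above: count which items tend to appear in the same browsing session and recommend the strongest co-occurring items. The session data is invented for illustration; production recommenders are considerably more sophisticated.

```python
# Toy "users who viewed X also viewed Y" recommender built from session logs.
from collections import Counter, defaultdict
from itertools import combinations

sessions = [
    ["running shoes", "socks", "water bottle"],
    ["running shoes", "socks"],
    ["yoga mat", "water bottle"],
    ["running shoes", "water bottle"],
]

co_occurs = defaultdict(Counter)
for session in sessions:
    for a, b in combinations(set(session), 2):
        co_occurs[a][b] += 1
        co_occurs[b][a] += 1

def recommend(item: str, k: int = 2) -> list[str]:
    """Items most often seen alongside the given item."""
    return [other for other, _ in co_occurs[item].most_common(k)]

print(recommend("running shoes"))   # e.g. ['socks', 'water bottle']
```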

4. Time-Saving Content Creation Assistance

Controversially, AI can also be a big help when it comes to creating content.

Whilst artificial intelligence still is not great at coming up with original ideas or creating nuanced pieces of content, it is catching up fast. A well-trained AI tool should be able to contribute to straightforward writing projects such as news articles, factual reports, translations, transcriptions, and editing for accuracy.

At present, in the content creation workflow, AI is basically a tool for improving the ROI on content marketing, which can often be resource-intensive. Simply put, AI can do the legwork when it comes to research and data while human writers can take this material and do the deep work required to create high-value, relevant content for each target customer.

Based on these areas in which content management and artificial intelligence are already coming together to improve content operations, AI may improve marketing even more in the future.

Interactions Between AI Tools

AI interactions already abound in the consumer space; a voice-activated smart speaker controlling house lights or audio is an everyday example. Similar interactions are in store for the future of AI-powered tools in the content operations space. It is only a matter of time before AI-enabled content management systems (CMSs) and other content platforms and tools will be able to interact with each other in smart, automatic ways to provide faster functionality and better experiences for consumers and marketers alike.

On-the-Spot SEO Improvements

Taking the idea of sentiment analysis one step further, soon AI-enabled CMSs may be able to identify opportunities for SEO improvements in real-time. This capability would empower marketing professionals to create more effective content in less time, which will outperform competitors and rank better in search engines.

Content Gap Identification

Whilst AI may not create great content on its own, it can spot a lack of it, especially if it has access to huge pools of data on customer behavior and preferences. This can then alert a business to where its content may be lacking (or where competitors' content may be lacking).

Both situations are a huge opportunity to fill those gaps and capture more traffic. AI is becoming smart enough to flag gaps and make recommendations so businesses can create fresh content that adds value and generates new leads.

Customer Service Automations

Customer service is another expensive yet necessary element of business. Chatbots have already come to the fore as a way of reducing both the time and money needed to deliver customer service excellence.

While many of today's chatbots can address very simple questions with answers pulled from a knowledge base, the future will see a large percentage of queries, if not the majority, that do not have to be routed back to human agents. After all, it is instantaneous and round-the-clock support that consumers are truly looking for when interacting with brand chatbots.

At the heart of this evolution, from where AI and content management are today to where they could be tomorrow, is the need to integrate content management systems with an array of new technologies driven by AI.

In practical terms that means developing headless architectures that will enable content operations teams to explore and exploit everything from automated content analysis to smart content creation. Adopting this strategy will leave a business ready to capitalize on the next wave of AI-based innovation with minimal disruption, driving return on investment.

Varia Makagonova, director of marketing, Contentstack

Read this article:

Content management and Artificial Intelligence the future of ContentOps - ITProPortal

Astronomers use artificial intelligence to reveal the true shape of universe – WION

The universe comes off as a vast and immeasurable entity whose depths are imperceptible to Earthlings. But in the pursuit of simplifying all that surrounds us, scientists have made great strides in understanding the space we inhabit.

Now, Japanese astronomers have developed an astounding technique to measure the universe. Using artificial intelligence, scientists were able to remove noise in astronomical data that is caused by random variations in the shapes of galaxies.

What did the scientists do?

Scientists used supercomputer simulations and tested the method on large mock datasets before applying it to real data from space. After extensive testing, they used the tool on data derived from Japan's Subaru Telescope.

To their surprise, it worked! The results that followed remained largely in sync with the currently accepted models of the universe. If employed on a bigger scale, the tool could help scientists analyse expansive data from astronomical surveys.

Current methods cannot effectively get rid of the noise which pervades all data from space. To avoid interference from noisy data, the team used the world's most advanced astronomy supercomputer, called ATERUI II.

Using real data from the Subaru Telescope, they generated 25,000 mock galaxy catalogues.

Also read:Explosion on Sun equivalent to millions of hydrogen bombs causes biggest solar flare in 4 years

What's causing data distortion?

All data from space can be distorted by the gravity of what is in the foreground eclipsing its background. This is called gravitational lensing. Measurements of such lensing are used to better understand the universe. Essentially, a galaxy directly visible to us could be distorting data about what lies behind it.

But it's difficult to differentiate odd-looking galaxies from distorting ones that manipulate data. This is called shape noise, and it regularly gets in the way of understanding the universe.

Based on these understandings, scientists added noise to the artificial data sets and trained AI to recover lensing data from the mock data. The AI was able to highlight previously unobservable details from this data.
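
The sketch below illustrates that training strategy in miniature and is not the team's actual pipeline: generate clean mock signals, add artificial noise, and fit a model to recover the clean version. The data is synthetic one-dimensional toy data standing in for lensing maps.

```python
# Toy "add noise to mock data, learn to remove it" training loop.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

def mock_signal(n_points: int = 64) -> np.ndarray:
    """A smooth random 'signal' standing in for a noise-free mock map."""
    freqs = rng.normal(size=3)
    x = np.linspace(0, 1, n_points)
    return sum(np.sin(2 * np.pi * (i + 1) * x + f) for i, f in enumerate(freqs))

clean = np.stack([mock_signal() for _ in range(2000)])
noisy = clean + rng.normal(scale=0.5, size=clean.shape)   # add artificial "shape noise"

denoiser = MLPRegressor(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
denoiser.fit(noisy, clean)                                # learn the noisy -> clean mapping

test = mock_signal()
recovered = denoiser.predict((test + rng.normal(scale=0.5, size=test.shape))[None, :])[0]
print(float(np.mean((recovered - test) ** 2)))            # should beat the raw noise level
```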

Building on this, scientists applied the AI model to real-world data covering 21 square degrees of the sky. They found that the details recovered about the foreground were consistent with existing knowledge about the cosmos.

Also read:'Orphan cloud' bigger than Milky Way found in 'no-galaxy's land' by scientists

The research was published in the April issue of Monthly Notices of the Royal Astronomical Society.

Original post:

Astronomers use artificial intelligence to reveal the true shape of universe - WION