Monthly Archives: May 2022

Top 7 Ways to Cultivate Creative Excellence with AI – Entrepreneur

Posted: May 27, 2022 at 2:07 am

Opinions expressed by Entrepreneur contributors are their own.

In a time when the pace of change is accelerating, creative excellence is crucial to business success. However, it is easier said than done: human creativity on its own has limits that keep it from reaching its full potential.

That's where artificial intelligence comes in. AI is an extraordinary force for creative excellence. The power of AI to improve creativity is just beginning to be tapped. It can help artists and designers create on a previously unimaginable scale and transform how we interact with creative content. AI can also make creative workflows more efficient and effective by removing menial tasks from the creative process, uncovering new insights about what people want to see and using data to make content more compelling for audiences.

A new frontier of creative possibility lies at the crossroads of creativity and technology. It's an exciting time to be creative: the doors to innovation are open, and those with bold visions stand to gain the most.

The tools that have facilitated our determination to be in control of everything are now helping us break out of our molds and explore uncharted territory, leading us to all sorts of possibilities that we would never have stumbled upon on our own.

We can now use AI in tandem with human intelligence to create works that defy conventional categorization and redefine what it means to be creative in today's world. Here's how to cultivate a culture of creative excellence with AI for your business.

Related: This Is How AI Content Marketing Will Shake Up 2022

Artificial intelligence (AI) enables a new era of creative excellence that will transform creative workflows, unleash new value and create new competitive advantages. By providing deep insights into customer desires and needs, AI can help companies rapidly prototype and test concepts to identify which ones most resonate with consumers. It can also aid in executing campaigns and measuring the results, enabling continuous optimization against marketing objectives.

The cognitive tools that make up artificial intelligence can transform creative workflows. Cognitive technologies can amplify human creativity by enabling creative professionals such as designers, writers and filmmakers to spend less time on repetitive processes or tasks that require rote memory and instead focus on higher-value, more imaginative work.

Related: 5 Ways Artificial Intelligence Is Radically Transforming Creativity in Business

Creative expression involves generating new ideas, which has always been understood as an inherently human skillset. But a growing number of research projects are demonstrating that deep learning, an advanced machine-learning technique, can be used to model and simulate many aspects of human creativity. One example is Project Magenta, an initiative by Google Brain researchers to use machine learning for creating music and art and to teach computers how to make compelling art on their own. Another example is automated content creators, which use algorithms to generate different kinds of text based on a computer-learned model of an author's style.

And while these projects haven't yet reached the point where they can effectively create truly original works without human assistance, they provide valuable insights into how you can enhance creative output through technology.

Data is a double-edged sword for marketers: it provides insights into consumers' activities and preferences, but it can be time-consuming and costly to sift through manually. Machine learning algorithms have made it easier to analyze data, turning raw numbers into actionable insight. Advances in natural language processing now mean this insight can be presented in ways that are easier for humans to understand and use.

Writing is often seen as a purely human activity, but AI systems are increasingly being used to write articles on their own; some are even capable of generating full stories. The technology can scan thousands of pages in seconds and then use what it learns to generate content that mimics a human writer's style.

Related: Top 5 Ways AI Can Enhance Your Content-Creation Process

You can apply AI to creative workflows in many ways, from enhancing production processes to informing decisions about what content will resonate with consumers. With AI-powered tools, marketers can create more relevant, targeted campaigns by analyzing large amounts of data such as market research, customer profiles and brand performance metrics to predict consumer behavior and recommend which messages will influence them most effectively. This kind of technology enhances strategy by providing real-time guidance based on what has worked in the past and helps brands anticipate consumer needs before they arise.

Traditional market research methods can be time-consuming, expensive, and difficult to scale when developing new creative assets. Cognitive technology leverages natural language processing and machine learning technologies to quickly ingest large amounts of unstructured data (reviews, social media posts, news articles) and identify patterns and trends that help brands understand their customers' desires better. Marketers can then use these insights to develop messaging that resonates with consumers.

The creativity of the human mind is a uniquely powerful resource but one that has historically been difficult to harness effectively. With the aid of AI technologies, however, talent can be identified and nurtured in previously overlooked places, thereby opening new opportunities for creative professionals who once faced barriers.

If these are indeed the early days of a creative renaissance driven by AI, it's already clear that the changes brought about by this technology will extend far beyond how individuals work and collaborate. It will also revolutionize how we recruit and reward employees across all industries.

Related: The Complete Guide to AI for Businesses and How It's Making a Difference

No matter where you fall on the spectrum, whether you're a traditionalist who sees AI as a threat to your livelihood or an innovator who sees it as a tool for exciting new opportunities, it's important not to let the fear of change hold you back from exploring the possibilities that lie ahead. Be open-minded about the potential for creativity, even if it looks different than what you're used to. After all, when we look at the current developments in art and technology, we see just how much can happen when creative people think outside the box.

See the original post:

Top 7 Ways to Cultivate Creative Excellence with AI - Entrepreneur

Posted in Ai | Comments Off on Top 7 Ways to Cultivate Creative Excellence with AI – Entrepreneur

AI Adoption Will Cause Workforce Reorganization – SHRM

Posted: at 2:07 am

Human resource executives at companies that are investing in artificial intelligence (AI) technology can expect to scout for higher-skilled IT workers as demand for their skills rises. They will also be faced with managing labor composition disruptions and workforce reorganizations as more companies use AI's predictive technology capabilities to solve business problems.

In a recently published research paper titled "Firm Investments in Artificial Intelligence Technologies and Changes in Workforce Composition," professors from Columbia University; the University of California, Berkeley; and the University of Maryland pored over almost a decade's worth of data and found that AI adoption will change the employment landscape as well as HR managers' priorities.

Researchers examined changes in labor outcomes from 2010 to 2018 using several datasets, sourced from Cognism Inc., a London-based sales intelligence firm.

Researchers also used 180 million job postings provided by Boston-based Burning Glass Technologies, an analytics software company that conducts research on labor market trends. The data details job descriptions and specific requirements such as years of education and experience.

Additional data sources were wage and education data grouped by commuting zone from the U.S. Census Bureau's American Community Survey and wage and employment data grouped by industry from the U.S. Census Quarterly Workforce Indicators. From Compustat, researchers obtained firm-level data on operational variables such as sales, cash and assets.

The research shows that when companies invested in AI, there was a corresponding demand for workers who possess undergraduate and graduate degrees in the science, technology, engineering and mathematics (STEM) fields.

"As firms invest in AI, they tend to transition to more educated workforces, with higher shares of workers with undergraduate and graduate degrees and more specialization in STEM fields and IT and analysis skills," the report stated. "Furthermore, AI investments are associated with a flattening of the firms' hierarchical structure, with significant increases in the share of workers at the junior level and decreases in shares of workers in middle-management and senior roles."

Junior-level workers are those with less than two years of experience, or those with two to five years of experience who do not manage anyone directly.

A junior-level worker entering the workforce will know more about how to use AI data to make predictions, said Alex He, co-author of the report and assistant professor of finance at the Robert H. Smith School of Business at the University of Maryland. This is a shift from the days when managers were the ones who analyzed AI data, gained insights and made decisions accordingly.

In short, AI empowers junior-level workers, a shift that has implications for the relationship between workers and managers.

"We found that AI is making the firm less top-heavy and flatter. It's not surprising, because AI has the ability to make predictions, and that makes the entry-level workers more capable to make decisions. They can do more, and there is less need for middle managers," He said.

As companies that invest in AI operate with more employees in entry-level or single contributor roles and fewer workers in either middle management or senior positions, He predicted that several issues will arise that HR executives will be forced to manage in a restructured workforce.

"For example, right now, entry-level employees are paid less and managers are paid more, but if there are fewer managers, you can afford to pay the entry-level workers more to attract the required skills," He said.

Another significant finding is that there are some jobs that can't be replaced no matter how much investment is made in AI technology.

"Interestingly, firms that invest more heavily in AI do not reduce their demand for some of the skill groups that are most often predicted to be replaced by AI, such as customer service, HR, and legal," the report stated.

James Hodson is a co-author of the report. He is the chief science officer at Cognism and chief executive officer at the AI for Good Foundation, a nonprofit organization headquartered in Berkeley, Calif. Hodson said HR managers have an opportunity to use AI to hire highly skilled people, to reskill and train people faster, and to build more productive teams. AI also allows HR managers to track data on employees, which can help HR managers understand workers better.

However, the report's findings present both difficulties and opportunities for HR managers who will be asked to oversee an AI-induced workforce reorganization while maintaining or even advancing the competitive advantage of their companies.

"In general, HR executives need to be aware of managing organizational change, especially when it relates to the adoption of AI technology. Essentially, AI is bringing the HR function to the forefront of the business," Hodson said.

Nicole Lewis is a freelance journalist based in Miami.

Read the original:

AI Adoption Will Cause Workforce Reorganization - SHRM

Posted in Ai | Comments Off on AI Adoption Will Cause Workforce Reorganization – SHRM

China and Europe are leading the push to regulate A.I. one of them could set the global playbook – CNBC

Posted: at 2:07 am

A robot plays the piano at the Apsara Conference, a cloud computing and artificial intelligence conference, in China, on Oct. 19, 2021.

As China and Europe try to rein in artificial intelligence, a new front is opening up around who will set the standards for the burgeoning technology.

In March, China rolled out regulations governing the way online recommendations are generated through algorithms, suggesting what to buy, watch or read.

It is the latest salvo in China's tightening grip on the tech sector, and lays down an important marker in the way that AI is regulated.

"For some people it was a surprise that last year, China started drafting the AI regulation. It's one of the first major economies to put it on the regulatory agenda," Xiaomeng Lu, director of Eurasia Group's geo-technology practice, told CNBC.

While China revamps its rulebook for tech, the European Union is thrashing out its own regulatory framework to rein in AI, but it has yet to pass the finish line.

With two of the world's largest economies presenting AI regulations, the field for AI development and business globally could be about to undergo a significant change.

At the core of China's latest policy is online recommendation systems. Companies must inform users if an algorithm is being used to display certain information to them, and people can choose to opt out of being targeted.

Lu said that this is an important shift as it grants people a greater say over the digital services they use.

Those rules come amid a changing environment in China for its biggest internet companies. Several of China's homegrown tech giants, including Tencent, Alibaba and ByteDance, have found themselves in hot water with authorities, namely around antitrust.

"I think those trends shifted the government attitude on this quite a bit, to the extent that they start looking at other questionable market practices and algorithms promoting services and products," Lu said.

China's moves are noteworthy, given how quickly they were implemented, compared with the timeframes that other jurisdictions typically work with when it comes to regulation.

China's approach could provide a playbook that influences other laws internationally, said Matt Sheehan, a fellow at the Asia program at the Carnegie Endowment for International Peace.

"I see China's AI regulations and the fact that they're moving first as essentially running some large-scale experiments that the rest of the world can watch and potentially learn something from," he said.

The European Union is also hammering out its own rules.

The AI Act is the next major piece of tech legislation on the agenda in what has been a busy few years.

In recent weeks, the EU closed negotiations on the Digital Markets Act and the Digital Services Act, two major regulations that will curtail Big Tech.

The AI law now seeks to impose an all-encompassing framework based on the level of risk, which will have far-reaching effects on what products a company brings to market. It defines four categories of risk in AI: minimal, limited, high and unacceptable.

France, which holds the rotating EU Council presidency, has floated new powers for national authorities to audit AI products before they hit the market.

Defining these risks and categories has proven fraught at times, with members of the European Parliament calling for a ban on facial recognition in public places to restrict its use by law enforcement. However, the European Commission wants to ensure it can be used in investigations while privacy activists fear it will increase surveillance and erode privacy.

Sheehan said that although the political system and motivations of China will be "totally anathema" to lawmakers in Europe, the technical objectives of both sides bear many similarities and the West should pay attention to how China implements them.

"We don't want to mimic any of the ideological or speech controls that are deployed in China, but some of these problems on a more technical side are similar in different jurisdictions. And I think that the rest of the world should be watching what happens out of China from a technical perspective."

China's efforts are more prescriptive, he said, and they include algorithm recommendation rules that could rein in the influence of tech companies on public opinion. The AI Act, on the other hand, is a broad-brush effort that seeks to bring all of AI under one regulatory roof.

Lu said the European approach will be "more onerous" on companies as it will require premarket assessment.

"That's a very restrictive system versus the Chinese version, they are basically testing products and services on the market, not doing that before those products or services are being introduced to consumers."

Seth Siegel, global head of AI at Infosys Consulting, said that as a result of these differences, a schism could form in the way AI develops on the global stage.

"If I'm trying to design mathematical models, machine learning and AI, I will take fundamentally different approaches in China versus the EU," he said.

At some point, China and Europe will dominate the way AI is policed, creating "fundamentally different" pillars for the technology to develop on, he added.

"I think what we're going to see is that the techniques, approaches and styles are going to start to diverge," Siegel said.

Sheehan disagrees that these differing approaches will splinter the world's AI landscape.

"Companies are getting much better at tailoring their products to work in different markets," he said.

The greater risk, he added, is researchers being sequestered in different jurisdictions.

The research and development of AI crosses borders and all researchers have much to learn from one another, Sheehan said.

"If the two ecosystems cut ties between technologists, if we ban communication and dialog from a technical perspective, then I would say that poses a much greater threat, having two different universes of AI which could end up being quite dangerous in how they interact with each other."

Read the original post:

China and Europe are leading the push to regulate A.I. one of them could set the global playbook - CNBC

Posted in Ai | Comments Off on China and Europe are leading the push to regulate A.I. one of them could set the global playbook – CNBC

Microsoft's Code-Writing AI Points to the Future of Computers – WIRED

Posted: at 2:07 am

Microsoft just showed how artificial intelligence could find its way into many software applications: by writing code on the fly.

At the Microsoft Build developer conference today, the company's chief technology officer, Kevin Scott, demonstrated an AI helper for the game Minecraft. The non-player character within the game is powered by the same machine learning technology Microsoft has been testing for auto-generating software code. The feat hints at how recent advances in AI could change personal computing in years to come, replacing interfaces that you tap, type, and click to navigate with interfaces that you simply have a conversation with.

The Minecraft agent responds appropriately to typed commands by converting them into working code behind the scenes using the software API for the game. The AI model that controls the bot was trained on vast amounts of code and natural language text, then shown the API specifications for Minecraft, along with a few usage examples. When a player tells it to "come here," for instance, the underlying AI model will generate the code needed to have the agent move toward the player. In the demo shown at Build, the bot was also able to perform more complex tasks, like retrieving items and combining them to make something new. And because the model was trained on natural language as well as code, it can even respond to simple questions about how to build things.
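
For illustration only, here is a rough sketch of the pattern described above: the model is shown the game's API documentation plus a few worked examples, then asked to translate a typed player command into code. The code model interface, the bot API, and the example commands are hypothetical placeholders, not Microsoft's or OpenAI's actual interfaces.

```python
# Hypothetical sketch of natural-language-to-code prompting.
# Nothing here is the real Minecraft API or a real Codex client;
# the names are placeholders for illustration.

FEW_SHOT_EXAMPLES = """
# Command: come here
bot.move_to(player.position)

# Command: give me a torch
bot.give(player, item="torch", count=1)
"""

def command_to_code(code_model, api_docs: str, command: str) -> str:
    """Build a prompt from the API spec, a few examples, and the player's
    typed command, then let a Codex-style model write the snippet to run."""
    prompt = f"{api_docs}\n{FEW_SHOT_EXAMPLES}\n# Command: {command}\n"
    return code_model.complete(prompt)  # assumed interface of the code model
```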

While it's unclear how reliably the system might work outside the demo, similar tricks could be used to make other applications respond to typed or spoken commands.

Microsoft has built an AI coding tool called GitHub Copilot on top of the same technology. It automatically suggests code when a developer starts typing, or in response to the comments added to a piece of code. Scott says Copilot is the first instance of what will likely be a slew of AI-first products in the coming years, from Microsoft and others. "Code-writing AI lets you think about doing software development in a different way, so you can express an intention for something that you want to accomplish," he says.
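
As an illustrative example (not an actual Copilot transcript), this is the kind of comment-driven suggestion the article describes: the developer writes the comment and the function signature, and the tool proposes a plausible body.

```python
# Developer writes the comment and signature; a Copilot-style tool
# suggests the implementation below (illustrative only).

# compute the moving average of a list of prices over a given window
def moving_average(prices, window):
    return [
        sum(prices[i - window + 1:i + 1]) / window
        for i in range(window - 1, len(prices))
    ]
```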

Scott doesn't provide specific examples, but this could one day mean a version of Windows that locates a particular document and emails it to a colleague when you ask it to, or an AI-imbued version of Excel that turns a dataset into a chart when you ask. "We're gonna see lots and lots and lots of big productivity wins for all sorts of routine cognitive work that none of us especially enjoys," Scott says.

In recent years, AI has proven adept at tasks such as classifying images, transcribing audio, and translating text. Recent algorithmic advances, combined with huge amounts of computer power, have yielded new AI programs capable of more sophisticated feats, including generating coherent text, such as computer code.

The Minecraft bot was built using an AI model called Codex that was developed by OpenAI, an AI company that received funding from Microsoft in 2019. Codex was trained on natural language text scraped from the web, as well as billions of lines of code from GitHub, a popular repository for software owned by Microsoft.

Microsoft's Copilot was made available to a limited number of testers in June 2021 and is now being used by over 10,000 developers, who are producing, on average, around 35 percent of their code in popular languages like Python and Java using Copilot, Microsoft says. The company plans to make Copilot available for anyone to download this summer. To build something like the Minecraft bot, developers would need to work with the underlying AI model, Codex.

Both Codex and Copilot have stirred up some anxiety among developers, who fear they could be automated out of a job. The Minecraft demo could inspire similar concerns. But Scott says the feedback on Copilot has been largely positive, suggesting that it simply automates more tedious coding tasks. "If you talk to a developer who actually uses a Copilot, they'll say this is such a great tool," he says.

Read the original post:

Microsoft's Code-Writing AI Points to the Future of Computers - WIRED

Posted in Ai | Comments Off on Microsoft's Code-Writing AI Points to the Future of Computers – WIRED

Singapore touts need for AI transparency in launch of test toolkit – ZDNet

Posted: at 2:07 am

Businesses in Singapore now will be able to tap a governance testing framework and toolkit to demonstrate their "objective and verifiable" use of artificial intelligence (AI). The move is part of the government's efforts to drive transparency in AI deployments through technical and process checks.

Coined A.I. Verify, the new toolkit was developed by the Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC), which administers the country's Personal Data Protection Act.

The government agencies underscored the need for consumers to know AI systems were "fair, explainable, and safe", as more products and services were embedded with AI to deliver more personalised user experience or make decisions without human intervention. They also needed to be assured that organisations that deploy such offerings were accountable and transparent.

Singapore already has published voluntary AI governance frameworks and guidelines, with its Model AI Governance Framework currently in its second iteration.

A.I. Verify now will allow market players to demonstrate to relevant stakeholders their deployment of responsible AI through standardised tests. The new toolkit currently is available as a minimum viable product, which offers "just enough" features for early adopters to test and provide feedback for further product development.

Specifically, it delivers technical testing against three principles: fairness, explainability, and robustness. It packages commonly used open-source libraries into one toolkit for self-assessment, including SHAP (SHapley Additive exPlanations) for explainability, the Adversarial Robustness Toolbox for adversarial robustness, and AIF360 and Fairlearn for fairness testing.

The pilot toolkit also generates reports for developers, management, and business partners, covering key areas that affect AI performance, testing the AI model against what it claims to do.

For example, the AI-powered product would be tested on how the model reached a decision and whether the predicted decision carried unintended bias. The AI system also could be assessed for its security and resilience.

The toolkit currently works with some common AI models, such as binary classification and regression algorithms from common frameworks including scikit-learn, TensorFlow, and XGBoost.
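
As a rough sketch of the kind of checks those open-source libraries enable (this is not A.I. Verify itself), the example below runs SHAP for explainability and Fairlearn for a simple fairness comparison on a scikit-learn binary classifier. The dataset, file path, and column names are assumed placeholders.

```python
# Illustrative self-assessment sketch, not the A.I. Verify toolkit:
# SHAP for explainability, Fairlearn for group fairness, on a
# scikit-learn binary classifier. Data and column names are hypothetical.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, selection_rate

df = pd.read_csv("applicants.csv")  # assumed numeric, preprocessed dataset
X = df.drop(columns=["approved"])
y = df["approved"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

# Explainability: SHAP values indicate which features drove each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Fairness: compare selection rates across groups of a sensitive attribute.
fairness = MetricFrame(
    metrics=selection_rate,
    y_true=y_test,
    y_pred=pred,
    sensitive_features=X_test["gender"],  # assumed sensitive column
)
print(fairness.by_group)
```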

IMDA added that the test framework and toolkit would enable AI systems developers to conduct self-testing not only to maintain the product's commercial requirements, but also to offer a common platform for showcasing these test results.

Rather than define ethical standards, A.I. Verify aimed to validate claims made by AI systems developers about their AI use as well as the performance of their AI products.

However, the toolkit would not guarantee that the AI system tested was free from biases or security risks, IMDA stressed.

It could, though, facilitate interoperability of AI governance frameworks and could help organisations plug gaps between such frameworks and regulations, the Singapore government agency said.

It added that it was working with regulatory and standards organisations to map A.I. Verify to established AI frameworks, so businesses could offer AI-powered products and services in different global markets. The US Department of Commerce is amongst the agencies Singapore was working with to ensure interoperability between their AI governance frameworks.

According to IMDA, 10 organisations already had tested and offered feedback on the new toolkit, including Google, Meta, Microsoft, Singapore Airlines, and Standard Chartered Bank.

IMDA added that A.I. Verify was aligned with globally accepted principles and guidelines on AI ethics, including those from Europe and OECD that encompassed key areas such as repeatability, robustness, fairness, and societal and environmental wellbeing. The framework also leveraged testing and certification regimes that comprised components such as cybersecurity and data governance.

Singapore would look to continue developing A.I. Verify to incorporate international AI governance standards and industry benchmarks, IMDA said. More functionalities also would be gradually added with industry contribution and feedback.

In February, the Asian country also released a software toolkit to help financial institutions ensure they were using AI responsibly, as well as five whitepapers to guide these companies on assessing their deployment based on predefined principles. Industry regulator Monetary Authority of Singapore (MAS) said the documents detailed methodologies for incorporating the FEAT principles (Fairness, Ethics, Accountability, and Transparency) into the use of AI within the financial services sector.

Go here to see the original:

Singapore touts need for AI transparency in launch of test toolkit - ZDNet

Posted in Ai | Comments Off on Singapore touts need for AI transparency in launch of test toolkit – ZDNet

Three Ways Companies Can Cope with the AI and Analytics Talent Crunch – Datanami

Posted: at 2:07 am

(Aleutie/Shutterstock)

With inflation in the United States at a 40-year high and unemployment near a 50-year low, these are tough times to attract and retain employees in just about every sector. When you add the growing demand for talent in high-tech sectors like big data and AI, you get a job market that's great for these workers, but tough for companies.

Whatever you call it (the Great Resignation, the Great Reshuffle, or the Global Talent Shortage), there's no denying that employers are under the gun when it comes to keeping skilled workers. Companies are scrambling to fill open positions in data and AI, let alone create new ones to handle additional data and AI projects. This is causing employers to take drastic measures to keep up with the Joneses.

Here are three ways companies are dealing with the talent crunch:

This is probably the most obvious solution to attracting and retaining AI and analytics staff, but also the most painful for companies. With inflation currently at 8.3%, employees have a great excuse to seek higher pay, even if it results in higher costs and more inflation down the road. And with so much churn in the labor market (nearly 48 million people quit their jobs in 2021), the conditions are perfect for them to get it.

Some tech firms are taking drastic measures. Microsoft, for example, is doubling its budget for employee salary increases, according to an article in Bloomberg. With the starting salary for a new engineer estimated to be around $160,000 per year, that is no small chunk of change for the second-largest American company by market capitalization.

Inflation is expected to drive tech salaries up (Creativa Images/Shutterstock)

The move will help Microsoft keep up with other tech giants eager to poach talent, including Amazon, which recently announced it's doubling the maximum base salary for employees from $160,000 to $350,000 per year. That will certainly help to attract people who are looking for new jobs, which according to a recent survey accounted for 44% of all workers.

Companies will pay significant sums for coveted positions. According to a 2021 survey Hired conducted for its 2022 State of Software Engineers study, NLP engineers made an average of about $160,000 per year, machine learning engineers earned about $158,000, and data engineers grossed about $156,000.

The good news (for employers) is that salaries for these positions were flat relative to 2020. The biggest increase? Security engineers, which saw a 7.6% bump in salary to about $165,500.

2021 salaries may have been flat because they are a lagging indicator, according to the 2022 Dice Tech Salary Report. "These high-growth and high-value occupations may begin to see an uptick in early 2022 and throughout the year," it suggested.

However, the odds of a recession have grown in recent weeks, as inflation takes a toll on consumer spending. That has led to speculation that the hiring binge will begin to slow. A spokesperson for Meta (parent company of Facebook) told CNBC earlier this week that the company was slowing its growth in hiring.

A recession would be painful for a lot of people and firms, but it likely would cool demand for tech talent.

Not too long ago, amenities like ping pong tables, bean bag chairs, and on-site chefs were enough to lure the best and brightest to tech startups. These days, folks are looking for something a little bit different, with flexible working arrangements near the top of the list.

Postings for remote jobs on LinkedIn are getting a significantly higher response rate than jobs in specific locations (Source: LinkedIn)

Interest in jobs that allow employees to work from home is quite high. According to a post on the LinkedIn Talent Blog last month, remote jobs accounted for 20% of all job postings on LinkedIn, but attracted 50% of all applications.

The message was crystal clear to Greg Lewis, the blog's author: "As many companies seem eager to return workers to the office, candidates are sending a strong message that many of them would prefer to work remotely."

Data from a recent Pew Research study bears this out. Since 2020, the reason that people work from home has changed, the group says. During the early days of the pandemic, working from home was a matter of survival for the company, but not anymore.

"Today, more workers say they are doing this by choice rather than necessity," Pew writes. "Among those who have a workplace outside of their home, 61% now say they are choosing not to go into their workplace, while 38% say they're working from home because their workplace is closed or unavailable to them."

For employers looking to satisfy a fickle workforce, allowing employees to work from home at least a few days a week could help keep them on the payroll, for at least a little longer.

It's long been observed that as technology improves, it displaces human workers. We've seen this play out many times, including with the armies of clerks 100 years ago who manually tracked company spending on paper, only to be replaced with those Hollerith tabulating machines.

Africa's demographics make it a promising location for BPO and IT outsourcing (monaliza0024/Shutterstock)

Fast forward to 2020 and the worst viral pandemic in decades, and automation is continuing to take over potentially dangerous jobs. For instance, toll booth operators for the Carquinez Bridge in the San Francisco Bay Area were replaced with FasTrak tags, displacing hundreds of workers, according to this story in Time.

In the world of analytics, the rise of self-service tools and techniques is helping to democratize data, but could it also make a dent in the hiring shortfall? According to Dice, which tracked an 11.5% increase in data analyst salaries from 2020 to 2021 (to about $85,000), the answer is yes.

"For instance, numerous data-analytics apps allow employees of all backgrounds to crunch their organizations' databases for key insights," Dice wrote in its recent salary survey. "While these tools won't replace a highly specialized technologist, they're a good way to streamline other employees' workflows. With tech unemployment low and hiring managers having difficulty finding key talent, some organizations may be holding off on hiring some roles and relying on stopgap measures (and tools) instead."

Outsourcing also remains a possible tool to help companies through the Great Reshuffle. While it's not really possible to outsource high-value, strategic positions like data scientists, relying on business process outsourcing (BPO) providers to fill other positions can help companies free up resources and personnel to direct to the problem areas, which may include data and AI.

David Rickard of the Everest Group, a respected provider of insight for the global BPO industry, says that while countries like India have a lot to offer now, there are some other locales that should be on your radar, including Africa.

"We talk about doubling down in India for the next three to five years in terms of looking for the talent, because they've got the talent now," Rickard tells Datanami in a recent interview. "But boldly go where no one has gone before and actually consider Africa as a long-term potential solution as it matures more and as people are coming into the workforce who are educated from an IT perspective."

Africa has a lot of things going for it, Rickard says. First and foremost, while the pipeline of workers entering the workforce in the future is shrinking in many developed countries, it's actually getting bigger in Africa. "If you look at the population in that 10 to 14 age range and the 15 to 19 age range, we're talking about over a billion people coming into the workforce over the next few years," he says.

Everest Group ranks the countries across various criteria, including infrastructure, safety, security, economics, digital readiness, and quality of life, Rickard says. "But then also we assess, what's the standard of English?"

Tech giants are already investing in the leading countries. For example, Microsoft is investing in Rwanda, Rickard says, and Google is also making investments. In addition to Rwanda, other East African countries on Everest's list include Kenya, Mauritius and Uganda. In West Africa, Ghana and Nigeria are good sources of workers from a BPO perspective, while in North Africa, it's Egypt, Morocco, and Tunisia. South Africa also makes the list.

Rickard specializes in call center work, which is slightly different from IT work. But both require good education and English proficiency, so there's some possibility that Africa could play a bigger role in data work in the future.

Related Items:

Hiring, Pay for Data Science and Analytics Pros Picks Up Steam

In Search of Data Science Talent with Dr. Kirk Borne

Data Salaries Get a COVID Bump

Follow this link:

Three Ways Companies Can Cope with the AI and Analytics Talent Crunch - Datanami

Posted in Ai | Comments Off on Three Ways Companies Can Cope with the AI and Analytics Talent Crunch – Datanami

Use AI in Tackling Climate Change: Experience Sharing from Taiwan and the World – PR Newswire

Posted: at 2:07 am

In his opening remarks, Jiunn-Shiow Lin, the director of the IDB, summarized AI development and strategies in Taiwan. Since 2019, to assist local companies in obtaining more business opportunities, the government-funded "AI Application Service Development Environment Promotion Program" has been exploring emerging issues and trends in global AI applications.

The co-founder of the Centre for AI & Climate, Mr. Peter Clutton-Brock, painted a general picture of using AI to tackle climate change, from emerging technologies to possible solutions. Most importantly, he identified six fields in which AI could help address the climate crisis. He was followed by Dr. Vu Thuy Linh, a research fellow at the AI for Operations Management Research Center, who presented an AI sensor system to reduce carbon emissions in smart buildings.

In terms of agricultural production, Dr. I-Chun Chang, the general secretary of the Taiwan Smart Aquaculture Glow Association, shared the association's experience in promoting and implementing intelligent, automated modern production in Taiwanese aquaculture. Alan Yu, the founder of ID Water Technology Co., presented an AI-aided solution for shrimp farming that can boost economic value compared with traditional shrimp management methods.

When it comes to solving climatic problems, AI has been proven to be an accurate, fast, and reliable method to mitigate the effects of climate change on the economy, industry, and society. This webinar provided a viewpoint and hands-on experience in how companies can use AI to solve climatic problems across the world and strengthen the resiliency of businesses and societies.

For more information about this event, please visit: https://www.ai-hub.online/.

SOURCE AIHUB

Here is the original post:

Use AI in Tackling Climate Change: Experience Sharing from Taiwan and the World - PR Newswire

Posted in Ai | Comments Off on Use AI in Tackling Climate Change: Experience Sharing from Taiwan and the World – PR Newswire

‘Collaborative, Portable Autonomy’ Is the Future of AI for Special Operations – Defense One

Posted: at 2:07 am

For a future fight against a near-peer military, U.S. special operators say they need smart, networked sensors and drones that can work together in contested environments with little human supervision. But as collaborative autonomy comes within technical reach, just how independent should these things get?

"We are going to use a lot of sensors, whether they're unmanned aerial systems, unmanned ground systems, unmanned maritime systems, unattended sensors, all working together, and our goal is to have those working together collaboratively and autonomously," SOCOM's top acquisition executive, James Smith, said at NDIA's SOFIC conference in Tampa, Florida, last week.

"SOCOM has a specific line of effort where we're focused on what we're calling collaborative autonomy," said David Breede, who runs a program executive office at SOCOM. That line of effort is concerned with such questions as "How do I get an unattended ground sensor talking to an unmanned aircraft and having an unmanned aircraft react based on the information that it got from that unmanned ground sensor?" "Not only collaboration across technologies and capabilities, but collaboration across program offices, right?" Breede said. In other words, special operations forces need sensors on the ground, in the air, and in space constantly working together to autonomously detect changes and sound the alert.

But SOCOM doesnt just want networks of sensors to collaborate better. They also want the underlying autonomy software to work the same on everything from a 3-D printer to a $10,000 drone.

"We have a goal of what we're calling portable autonomy, so being able to port software, virtual algorithms, across different classes of small drones. We would have an autonomy developer actually have their software algorithms on a payload and then integrate that onto a third-party platform and demonstrate the ability to control that platform without talking to that third-party platform provider," Breede said.

Among the obstacles: battlefield radio communications are expected to become much more difficult. SOCOM's Col. Paul Weizer said the command is trying to untether itself from radio.

"So how do I operate completely in an untethered way, whether it's with unattended ground sensors or whether it's unmanned vehicles or otherwise?"

Part of the answer is to put more information and computing power on the battlefield instead of counting on being able to reach back for it. The military, and SOCOM in particular, have been trying to bring cloud capabilities much closer to the battlefield, exemplified by an effort by the Army's XVIII Airborne Corps and Amazon Web Services to create a tactical cloud environment.

That will also make battlefield decision-making much easier, said Quentin Donnellan, the president of the Space and Defense division of AI company Hypergiant, which is working with AWS and the Army on the effort.

"If I turn on my radios, people are going to know where I'm at. So I don't want to turn on my radios, right?" Donnellan said. "So the idea for these use-cases is: how do we deploy AI and machine learning out tactically where I can make those decisions in a communications-denied environment? If I've got the tools that allow me to, like, leverage AI to put it out to the edge, I should be able to do my job even if my cloud connection is denied."

One example of tactical cloud use is integrating radar sensor data for air defense in the field, closer to the threat, rather than receiving an alert from a headquarters. "That's kind of a really tactical and specific example of, 'Hey, if you deploy AI out there, [you could] potentially leverage weather or ground-based radar to be able to do things like object detection and classification,' but not relying on the connectivity back down," Donnellan said.

Shield AI co-founder Brandon Tseng said his company, known for drones that navigate without GPS, is working with SOCOM on portable autonomy to operate ever-larger drone swarms. Since 2018, Shield AI has been developing a software-based autonomy product called Hivemind for drone piloting; they're integrating it onto V-BAT drones to develop swarming and maritime domain awareness capabilities.

The company is working closely with the U.S. military to figure out how to penetrate enemy air defenses with drone swarms, he said. "Something that we're super excited about is operationalizing swarms of three V-BAT aircraft in 2023, four craft in [20]24, eight aircraft in [20]25 and 16 aircraft in [20]26 that are working as a highly intelligent team together. I think it's adjacent to where SOCOM is and it definitely plays into their interests. But we're also integrating it on fighter jets and we expect to have it running on an F-16 later this year."

But the technology aspect of portable, collaborative autonomy isn't actually the hardest part of the challenge; the larger policy and ethics questions are.

Take the Switchblade from AeroVironment, the small kamikaze drones that have helped the Ukrainian military push back Russian forces. The drone sends video directly to an operator nearby without having to travel long distances over radio.

Brett Hush, vice president of tactical mission systems at AeroVironment, said his company is experimenting with artificial intelligence for automatic target recognition. "Those capabilities are in development. We've demonstrated with the DOD our ability to do that, to identify like 32 tanks and potentially strike them with no need for communication with an operator."

"Now, fielding that capability is where we're gonna cross, you know, policy," he said. "Today, everything that's done with our loitering missiles, there's a man on the loop. Once we go to field the [automatic target recognition] and with more autonomy, we've got to really, as a country, think through where that would be allowed and not allowed."

View original post here:

'Collaborative, Portable Autonomy' Is the Future of AI for Special Operations - Defense One

Posted in Ai | Comments Off on ‘Collaborative, Portable Autonomy’ Is the Future of AI for Special Operations – Defense One

AI could help us spot viruses like monkeypox before they cross over and help conserve nature – The Conversation

Posted: at 2:07 am

When a new coronavirus emerged from nature in 2019, it changed the world. But COVID-19 won't be the last disease to jump across from the shrinking wild. Just this weekend, it was announced that Australia is no longer an onlooker, as Canada, the US and European countries scramble to contain monkeypox, a less dangerous relative of the feared smallpox virus we were able to eradicate at great cost.

As we push nature to the fringes, we make the world less safe for both humans and animals. That's because environmental destruction forces animals carrying viruses closer to us, or us to them. And when an infectious disease like COVID does jump across, it can easily pose a global health threat given our deeply interconnected world, the ease of travel and our dense and growing cities.

We can no longer ignore that humans are part of the environment, not separate to it. Our health is inextricably linked to the health of animals and the environment. This will not be the last pandemic.

To be better prepared for the next spillover of viruses from animals, we must focus on the connections between human, environmental and animal health. This is known as the One Health approach, endorsed by the World Health Organization and many others.

We believe artificial intelligence can help us better understand this web of connection, and teach us how to keep life in balance.

Fully 60% of all infectious diseases affecting humans are zoonoses, meaning they came from animals. That includes the lethal Ebola virus, which came from primates; swine flu, from pigs; and the novel coronavirus, most likely from bats. It's also possible for humans to give animals our diseases, with recent research suggesting transmission of COVID-19 from humans to cats as well as deer.

Early warning of new zoonoses is vital, if we are to be able to tackle viral spillover before it becomes a pandemic. Pandemics such as swine flu (influenza H1N1) and COVID-19 have shown us the enormous potential of AI-enabled prediction and disease surveillance. In the case of monkeypox, the virus has already been circulating in African countries, but has now made the leap internationally.

Read more: On the trail of the origins of Covid-19

What does this look like? Think of collecting and analysing real-time data on infection rates. In fact, AI was used to first flag the novel coronavirus as it was becoming a pandemic, with work done by AI company BlueDot and by HealthMap at Boston Children's Hospital.

How? By tracking vast flows of data in ways humans simply cannot do. HealthMap, for instance, uses natural language processing and machine learning to analyse data from government reports, social media, news sites, and other online sources to track the global spread of outbreaks.
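
As a toy illustration of the general idea (not HealthMap's actual pipeline), the sketch below tallies disease-term mentions in news headlines by location and day as a crude outbreak signal; real systems add entity linking, deduplication and machine-learned ranking. All names and data here are made up.

```python
# Toy outbreak-signal sketch; not HealthMap's real pipeline.
from collections import Counter
from datetime import date

DISEASE_TERMS = {"monkeypox", "ebola", "influenza", "coronavirus"}

def outbreak_signal(headlines):
    """headlines: iterable of (day, location, text) tuples."""
    counts = Counter()
    for day, location, text in headlines:
        words = {w.strip(".,!?").lower() for w in text.split()}
        for term in DISEASE_TERMS & words:
            counts[(day, location, term)] += 1
    return counts

# Hypothetical headlines for illustration.
signal = outbreak_signal([
    (date(2022, 5, 20), "London", "UK confirms new monkeypox cases"),
    (date(2022, 5, 21), "Madrid", "Spain investigates monkeypox cluster"),
])
print(signal.most_common(5))
```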

We can also use AI to mine social media data to understand where and when the next COVID surge will occur. Other researchers are using AI to examine the genomic sequences of viruses infecting animals in order to predict whether they could potentially jump from their animal hosts into humans.

As climate change alters the earth's systems, it is also changing the ways diseases spread and their distributions. Here, too, AI can be put to use in new surveillance methods.

There are clear links between our destruction of the environment and the emergence of new infectious diseases and zoonotic spillovers. That means protecting and conserving nature also helps our health. By keeping ecosystems healthy and intact, we can prevent future disease outbreaks.

In conservation, too, AI can help. For instance, Wildbook uses computer-vision algorithms to detect individual animals in images, and track them over time. This allows researchers to produce better estimates of population sizes.

Trashing the environment by deforestation or illegal mining can also be spotted by AI, such as through the Trends.Earth project, which monitors satellite imagery and earth observation data for signs of unwelcome change.

Citizen scientists can pitch in as well by helping train machine learning algorithms to get better at identifying endangered plants and animals on platforms like Zooniverse.

Researchers are beginning to consider the ethics of AI research on animals. If AI is used carelessly, we could actually see worse outcomes for domestic and wild animal species. For example, animal tracking data can be prone to errors if not double-checked by humans on the ground, or even hacked by poachers.

AI is ethically blind. Unless we take steps to embed values into this software, we could end up with a machine which replicates existing biases. For instance, if there are existing inequalities in human access to water resources, these could easily be recreated in AI tools, which would maintain this unfairness. That's why organisations such as the AI Now Institute are focusing on bias and environmental justice in AI.

In 2019, the EU released ethical guidelines for trustworthy AI. The goal was to ensure AI tools are transparent and prioritise human agency and environmental health.

Read more: How to prevent mass extinction in the ocean using AI, robots and 3D printers

AI tools have real potential to help us tackle the next pandemic by keeping tabs on viruses and helping us keep nature intact. But for this to happen, we will have to widen AI outwards, away from the human-centredness of most AI tools, towards embracing the fullness of the environment we live in and share with other species.

We should do this while embedding our AI tools with principles of transparency, equity and protection of rights for all.

Read this article:

AI could help us spot viruses like monkeypox before they cross over and help conserve nature - The Conversation

Posted in Ai | Comments Off on AI could help us spot viruses like monkeypox before they cross over and help conserve nature – The Conversation

The UAE’s AI minister wants ‘murder’ in the metaverse to be a real crime – The Next Web

Posted: at 2:07 am

Omar Sultan Al Olama, the United Arab Emirates' minister of artificial intelligence, yesterday told an audience at the World Economic Forum in Davos that it's his belief that people who commit serious crimes in the metaverse should be punished with real-world criminal consequences.

Per an article by CNBC's Sam Shead, the minister views this as a necessary measure to protect people's mental health:

"If I send you a text on WhatsApp, it's text, right? It might terrorize you, but to a certain degree it will not create the memories that you will have PTSD (post-traumatic stress disorder) from it."

"But if I come into the metaverse, and it's a realistic world that we're talking about in the future, and I actually murder you, and you see it, it actually takes you to a certain extreme where you need to enforce aggressively across the world, because everyone agrees that certain things are unacceptable."

Tell me you don't understand how post-traumatic stress disorder (PTSD) works without telling me you don't understand how PTSD works.

Upfront: There is no medical threshold by which PTSD occurs. Clinical diagnosis involves observation and interviews with a medical professional.

Anecdotally speaking, PTSD isn't necessarily triggered in the manner Al Olama indicated. I was diagnosed with PTSD while on active-duty military service, after learning about the death of someone I'd mentored. Other people's diagnoses have come after entirely different experiences.

Jennifer Kobelt, a survivor of the NXIVM cult, told investigators and documentarians that her PTSD was triggered after being subjected to a horrific experiment in which she was exposed to graphic violence from Hollywood cinema and a real-world snuff film.

Deeper: You can't murder an avatar. At least not in the legitimate legal sense. It's a stupid idea that doesn't deserve much attention, but let's just lay it bare real quick so we can move on.

Let's say, 10 years from now, you're wandering around in Meta's version of the metaverse. You're probably wearing a VR headset, and maybe the tech's advanced to the point where the visual and audio fidelity are nearly indistinguishable from reality.

All of a sudden, someone pushes the buttons on their control pad to cause their avatar to leap out of a digital bush, and then they push the buttons on their control pad that cause them to stab your avatar.

Your avatar bleeds out and dies. You have to witness the knife going in! Oh! The horror!

But wait, let's rewind for a second. How did the knife get there? Who programmed the leaping-out-of-the-bush animation? Are there more kill moves? What's the combo for a silent takedown?

Whoops. I'm getting ahead of myself. I forgot, we're not talking about a video game. We're talking about murder most foul, in the metaverse.

I'm not sure what the UAE's minister of AI knows about the field that the rest of us don't, but in this particular version of reality, there's no basis for this fantasy.

Rock bottom: You may as well pass a law against murdering people in video games. And that means all of you people who play Call of Duty are screwed: some of you have more kills than old age.

The point is that, no matter how traumatizing it might be to see yourself murdered in first person, it's not like Zuckerberg's planning on making that a feature.

Maybe Al Olama's thinking the metaverse is going to be a splintered internet experience like the web, where dark corners of the platform could be host to anything.

But, at least for now, the companies such as Meta, Nvidia, Microsoft, Google, and Epic that are investing billions of dollars into creating bespoke experiences probably aren't going to put together a team of designers focused on adding PTSD-inducing gore to their production models.

Sure, a hacker could hack some violence onto a server or find an exploit that shows violence. And it's possible some sort of underground mod scene could develop over time.

But seriously. The idea that somehow you'll be casually shopping in the Nike section of Meta's billions-of-dollars-and-counting metaverse and suddenly a digital Jack the Ripper is going to appear in front of you in a rabid frenzy is just plain silly.

If you can murder people in the metaverse, it'll be a feature that people log in specifically to experience. For the same reason so many of us play Dead by Daylight, Resident Evil, and Call of Duty, or watch R-rated horror movies, there are plenty of people who'd enjoy a good old-fashioned fake-murdering in a VR world.

Quick take: Everything about the idea of criminalizing digitized violence in virtual reality is dumb. This kind of blathering rhetoric just demonstrates how far detached from reality some technologists can be. Nobody's worried about logging onto a VR version of Facebook and being murdered in their headset.

There are plenty of real ethical concerns that the minister of AI for the sixth richest country in the world could spend their time on.

Read more:

The UAE's AI minister wants 'murder' in the metaverse to be a real crime - The Next Web

Posted in Ai | Comments Off on The UAE’s AI minister wants ‘murder’ in the metaverse to be a real crime – The Next Web