
Category Archives: Ai

Wearable AI Market 2022 is projected to grow at a healthy CAGR – Cleveland Sports Zone

Posted: February 7, 2022 at 6:21 am

Global Wearable AI Market Overview:

Global Wearable AI Market presents insights on current and future industry trends, enabling readers to identify the products and services driving revenue growth and profitability. The research report provides a detailed analysis of all the major factors impacting the market on a global and regional scale, including drivers, constraints, threats, challenges, opportunities, and industry-specific trends. Further, the report cites global certainties and endorsements along with downstream and upstream analysis of leading players. The research report uses 2021 as the base year and covers the forecast period from 2022 to 2028.

This report covers all the recent developments and changes recorded during the COVID-19 outbreak.

This Wearable AI market report aims to provide all participants and vendors with all the details about growth factors, shortcomings, threats, and the profitable opportunities that the market will present in the near future. The report also features revenue share, industry size, production volume, and consumption, in order to give insight into how players compete for control of a large portion of the market share.

Request Sample Report @ https://www.marketreportsinsights.com/sample/12758

Top Key Players in the Wearable AI Market: Apple, Samsung, Google, Microsoft, Sony, Garmin, Fitbit, Huawei, Amazon, IBM, Oracle

The Wearable AI industry is highly competitive and fragmented due to the presence of various established players pursuing different marketing strategies to increase their market share. The vendors operating in the market are profiled based on price, quality, brand, product differentiation, and product portfolio. The vendors are increasingly turning their focus to product customization through customer interaction.

Major Types of Wearable AI covered are: Smart Watch, Ear Wear, Eye Wear

Major end-user applications for the Wearable AI market: Consumer Electronics, Enterprise, Healthcare

Regional Analysis for the Wearable AI Market

North America (the United States, Canada, and Mexico)
Europe (Germany, France, UK, Russia, and Italy)
Asia-Pacific (China, Japan, Korea, India, and Southeast Asia)
South America (Brazil, Argentina, Colombia, etc.)
The Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa)

Get a Discount on This Report Here: https://www.marketreportsinsights.com/discount/12758

Points Covered in The Report:

Reasons for Buying Global Wearable AI Market Report:

Access full Report Description, TOC, Table of Figure, Chart, etc. @ https://www.marketreportsinsights.com/industry-forecast/global-wearable-ai-market-growth-2021-12758

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as Asia, the United States, or Europe.

Contact Us: Market Reports Insights [emailprotected]

Originally posted here:

Wearable AI Market 2022 is projected to grow at a healthy CAGR - Cleveland Sports Zone


AI in retail has to be semi-automated. Here's why – VentureBeat

Posted: at 6:21 am


Retailers need more decision automation, faster coordination of supply chains, and faster interactions with consumers, which means they will increasingly rely on AI. Automated decisioning systems will soon be making fine-grained micro-decisions on the retailer's behalf, impacting customers, employees, partners, and suppliers. But these systems can't run autonomously; they need human managers.

So what exactly should this human management look like?

Every system for making micro-decisions needs to be monitored. Monitoring ensures the decision-making is good enough while also creating the data needed to spot problems and systematically improve the decision-making over time.

Consider the following retail example: A fashion retailer that had historically applied blunt rules to determine markdowns decided to implement a new AI-powered solution. The system performed well for the first few weeks, making more frequent and more surgical decisions than human managers were able to contemplate. But at the start of the swimwear season, the system identified a slow initial sell-through that triggered all swimwear to be marked down to sell-out. As a result, the retailer lost millions of dollars of margin and was left with no swimwear.

Why did this happen? The aggressive markdowns were triggered because the first three weeks of sales were lower than expected. A human merchandiser would not have panicked and would have realized that this was due to a couple of particularly cold weeks. But the unmanaged and unmonitored AI system simply executed on its logic.

The example above illustrates why the best approach to deploying AI is typically semi-automation: automation that involves some level of human oversight. When optimized for each decision, semi-automation can help retailers save time, empower employees, and greatly improve profitability, while avoiding costly pitfalls.

The four models for semi-automation range from heavy to very light human involvement.

First, human in the loop (HITL) is the most basic framework for semi-automation, where decisions are rarely made without human involvement. Such a system provides recommendations based on automated calculations, but a human ultimately makes the decision. For example, pricing software calculates the ideal price of a dress to maximize profitability, but the pricing manager must sign off on each decision.

The next model is human in the loop for exceptions (HITLFE), where humans are removed from standard decision-making, but the system engages a manager when human judgment is required. For instance, if the automated system has two vendor options for stock replenishment, the buyer is required to step in and make the final call.

Then there is human on the loop (HOTL), which means the machine is assisted by a human. The machine makes the micro-decisions, but the human reviews the decision outcomes and can adjust rules and parameters for future decisions. In a more advanced setup, the machine also recommends parameters or rule changes that are then approved by a human.

Finally, there is human out of the loop (HOOTL), which is where a human simply monitors the machine. The machine makes every decision, and the human intervenes only by setting new constraints and objectives.

Selecting the right model to use is a design problem. As we have seen, automation is not all or nothing, and decisions are not created equal. The right model should be determined based on the decision's complexity, volume, velocity, and blast radius, which measures the potential downside.

For example, if the decision is simply to recommend a blue dress instead of a red one because blue is out of stock, it's a low-risk decision that can be fully automated with limited oversight. However, if the worst outcome results in misordering thousands of dresses or in expensive markdowns like in the swimwear example, then human oversight and accountability are more critical. It's also important to recognize that automated systems can and will evolve over time, enabled by new technology, the desire to make ever more fine-grained decisions, and management's confidence in automating business operations.
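To make the distinction concrete, here is a minimal sketch in Python of how a retailer might route an automated markdown decision to one of the four oversight models based on its estimated blast radius. All names, thresholds, and the decision structure are illustrative assumptions, not any vendor's actual implementation.

from dataclasses import dataclass
from enum import Enum

class Oversight(Enum):
    HITL = "human in the loop"                   # a human approves every decision
    HITLFE = "human in the loop for exceptions"  # only exceptional cases are escalated
    HOTL = "human on the loop"                   # a human reviews outcomes and tunes parameters
    HOOTL = "human out of the loop"              # a human only sets constraints and objectives

@dataclass
class MarkdownDecision:
    sku: str
    proposed_discount: float   # e.g. 0.30 means 30 percent off
    units_affected: int
    margin_at_risk: float      # estimated margin exposed (the "blast radius"), in dollars

def choose_oversight(d: MarkdownDecision) -> Oversight:
    # Illustrative thresholds only; a real retailer would calibrate these against
    # decision complexity, volume, velocity, and its own risk appetite.
    if d.margin_at_risk > 1_000_000:
        return Oversight.HITL        # e.g. a category-wide swimwear markdown
    if d.margin_at_risk > 50_000 or d.proposed_discount > 0.5:
        return Oversight.HITLFE      # escalate only the exceptional cases
    if d.units_affected > 1_000:
        return Oversight.HOTL
    return Oversight.HOOTL           # low-risk, fully automated

decision = MarkdownDecision("SWIM-001", proposed_discount=0.6,
                            units_affected=20_000, margin_at_risk=2_500_000)
print(choose_oversight(decision))    # Oversight.HITL

A guard like this would have routed the swimwear markdown above to a human merchandiser instead of executing it automatically.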

The key to the successful deployment of any AI system is to start with a quantified business problem. With this, retailers must foster a data-driven culture where the whole team is engaged in determining how best to improve specific business decisions. This also necessitates a change in how retailers do their jobs. For example, merchandising managers in the past might have had to set prices for several dozen dresses a day based on stock, sales data, and competitor activity. But now, with personalized promotions and recommendations, the same manager might be responsible for millions of decisions a day. This requires a fundamental shift from making decisions to making decisions about decisions, i.e., managing rules and parameters rather than making specific pricing decisions.

Semi-automation of business-critical decisions must be approached carefully, with regard to the potential heightening of the blast radius of risk. Once the decision to automate has been made, retailers must shift their attention to decision algorithms: the logic and rules that enable retailers to execute on the micro-decisions. Miscommunication between the data science team and the rest of the organization can lead to errors and missed opportunities, potentially creating a reluctance to change that can be quite difficult to reverse.

Whichever model you adopt, it's critical to put AI on the organization chart to ensure that human managers feel responsible for its output. To succeed, retailers must understand the different ways they can interact with AI and pick the right management option for each AI system. Selecting the best level of semi-automation will ensure that retail businesses realize the full potential of AI.

Michael Ross is Senior Vice President of retail data science at EDITED. He is a non-executive director at Sainsbury's Bank and N Brown Group plc. He also cofounded several companies, including DynamicAction, ecommera, and figleaves.com. Prior to that, he was a consultant at McKinsey and Company.


Read the original post:

AI in retail has to be semi-automated. Here's why - VentureBeat


(New Report) AI in ICT (Information and Communications Technology) Market In 2022: The Increasing use in Natural Language Processing, Machine…

Posted: at 6:20 am

[126 Pages Report] AI in ICT (Information and Communications Technology) Market Insights 2022: The main purpose of AI in Information and Communications Technology is to process and pass information safely and accurately, since the information is vulnerable.

Market Analysis and Insights: Global AI in ICT (Information and Communications Technology) Market

In 2021, the global AI in ICT (Information and Communications Technology) market size will be USD million and it is expected to reach USD million by the end of 2027, with a CAGR of % during 2021-2027.

With industry-standard accuracy in analysis and high data integrity, the report makes a brilliant attempt to unveil key opportunities available in the global AI in ICT (Information and Communications Technology) market to help players in achieving a strong market position. Buyers of the report can access verified and reliable market forecasts, including those for the overall size of the global AI in ICT (Information and Communications Technology) market in terms of revenue.

On the whole, the report proves to be an effective tool that players can use to gain a competitive edge over their competitors and ensure lasting success in the global AI in ICT (Information and Communications Technology) market. All of the findings, data, and information provided in the report are validated and revalidated with the help of trustworthy sources. The analysts who have authored the report took a unique and industry-best research and analysis approach for an in-depth study of the global AI in ICT (Information and Communications Technology) market.

Global AI in ICT (Information and Communications Technology) Scope and Market Size

The AI in ICT (Information and Communications Technology) market is segmented by company, region (country), Type, and Application. Players, stakeholders, and other participants in the global AI in ICT (Information and Communications Technology) market will be able to gain the upper hand as they use the report as a powerful resource. The segmental analysis focuses on revenue by Type and by Application, with forecasts for the period 2016-2027.

Get a Sample PDF of the report: https://www.360researchreports.com/enquiry/request-sample/19697016

Leading key players of AI in ICT (Information and Communications Technology) Market are

AI in ICT (Information and Communications Technology) Market Type Segment Analysis (Market size available for years 2022-2027, Consumption Volume, Average Price, Revenue, Market Share and Trend 2015-2027): Software, Services

Regions that are expected to dominate the AI in ICT (Information and Communications Technology) market are North America, Europe, Asia-Pacific, South America, Middle East and Africa and others

If you have any questions about this report, or if you are looking for any specific Segment, Application, Region, or other custom requirements, connect with an expert for customization of the report.


For More Related Reports, Click Here:

Lithium Battery Manufacturing Equipment Market In 2022

Market Research Report 2021Market In 2022

View original post here:

(New Report) AI in ICT (Information and Communications Technology) Market In 2022: The Increasing use in Natural Language Processing, Machine...


How this robotics startup uses AI to power its tethered drones – Analytics India Magazine

Posted: at 6:20 am

The announcement of Kisan Drones in the Union Budget 2022, to promote drone technology for crop assessment, digitisation of land records, and spraying of insecticides and nutrients, is poised to give a major fillip to drone startups. The finance minister said startups would be encouraged to facilitate Drone Shakti through varied applications and Drone-As-A-Service (DrAAS). In addition, the government will start required courses for skilling in select ITIs.

Reacting to the announcement in the budget, Athishay Jain, Co-founder and COO, ISPARGO, said adopting cutting-edge technology will increase farmers' income. "Using drones on farms is a low-risk proposition, definitely a great move from the government. If drone technology is adopted on a large scale, which would increase the demand, the product will get better, and the ecosystem would naturally develop," he added.

Mangaluru-based robotics startup ISPARGO uses drones to apply insecticide on areca nut trees. Jain said the startup uses ultra-lightweight, compact power-conversion technology to power the drones from the ground from any AC source.

AI/ML in drones

AI/ML models are used to develop drones capable of landing on a fixed image or a marker on the ground. The same solution is being extended to follow a moving vehicle. This complex AI/ML model captures the images, processes them, and predicts the next set of navigation points for the drone to follow, all within milliseconds.
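As a rough illustration (not ISPARGO's actual code), the sketch below shows in Python how a detected marker's pixel position can be turned into the next navigation setpoint: a proportional correction toward the image centre, with descent only once the drone is roughly centred. A real pipeline would add marker detection with a vision library, pose estimation, and flight-controller I/O.

def next_setpoint(marker_px, image_size=(640, 480), gain=0.002, descent_rate=0.3):
    # marker_px: pixel coordinates of the detected marker centre in the camera frame
    cx, cy = image_size[0] / 2, image_size[1] / 2
    err_x, err_y = marker_px[0] - cx, marker_px[1] - cy
    # Proportional horizontal correction; descend only when nearly centred over the marker.
    vx, vy = gain * err_x, gain * err_y
    vz = -descent_rate if abs(err_x) < 20 and abs(err_y) < 20 else 0.0
    return vx, vy, vz

print(next_setpoint((400, 260)))   # off-centre: drift toward the marker, hold altitude
print(next_setpoint((322, 238)))   # centred enough: begin descending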

ISPARGO, established in 2017, focuses on developing and customising drones for work.

Drones are predominantly powered by batteries, but these do not last beyond 30 minutes. Any long-endurance work using drones will need 15 to 20 sets of batteries, which is economically unviable. To upgrade its drone technology, ISPARGO has developed non-battery-powered drones which can work for a whole day with improved efficiency. The company is also looking into customising the firmware for specific use cases like painting or pesticide spray to make operations almost autonomous.

The unique proposition of ISPARGO is the tethered drones used for surveillance, plantation crop spray and inspection/monitoring. The salient features of these drones are:

Use cases

For border surveillance/intrusion detection: The tethered drone solution offers quick and easy deployment; secured data transmission with OFC; no cabling issues; support for 24 hours of uninterrupted video feed; and thermal EO/IR and RGB cameras.

For temporary mobile towers: The tethered drone solution supports all-terrain operation; easy plug-and-fly operation on the go; is easily transportable for quick-fix deployment; requires minimum resources for installation; and can be used for communication and surveillance for intrusion detection.

For agriculture/pesticide spray: With the tethered drone solution, one skilled pilot can cover more trees in a short time; spray is precise with low wastage; timely and effective spraying increases productivity; and the onboard camera aids in crop health analysis.

For windmill inspection: The tethered drone solution offers quickly deployable, precise inspection; no operator insurance and training cost; and can fit a sophisticated camera for stress/heat analysis.

Challenges

Solutions

Continued here:

How this robotics startup uses AI to power its tethered drones - Analytics India Magazine


The 20 Most (and 20 Least) Stressful Romance Movies, According to AI – Mental Floss

Posted: at 6:20 am

Sometimes, you're in the mood for a melodramatic story about ill-fated lovers that'll keep your heart rate high and your eyes wide open. Other times, you'd rather watch a rom-com with stakes so low that you know everyone's headed straight for happily-ever-after.

To find out which films best fit each mood, search marketing agency Honch used Rotten Tomatoes to compile a list of romances, and then fed all the screenplays through a sentiment analysis tool called TensiStrength. The program processed the language and determined how stressful or relaxing it was overall.
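As a rough illustration of the principle (this is not TensiStrength's actual interface, and the word lists and weights below are invented), a lexicon-based scorer simply tallies stress and relaxation cues sentence by sentence and averages them over the screenplay:

import re

STRESS_WORDS = {"murder": 3, "poison": 3, "torture": 3, "cancer": 3, "panic": 2, "fight": 2}
RELAX_WORDS = {"love": 2, "laugh": 2, "wedding": 2, "happy": 2, "kiss": 1}

def stress_score(screenplay: str) -> float:
    # Positive scores read as stressful, negative as relaxing.
    sentences = re.split(r"[.!?]+", screenplay.lower())
    total = 0
    for s in sentences:
        words = s.split()
        total += sum(STRESS_WORDS.get(w, 0) for w in words)
        total -= sum(RELAX_WORDS.get(w, 0) for w in words)
    # Normalise by length so longer scripts are not penalised for having more lines.
    return total / max(len(sentences), 1)

print(stress_score("They laugh at the wedding. Then the poison takes hold. Panic!"))

A scorer this crude obviously cannot tell menace from comic relief, which helps explain some of the rankings below.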

The results were interesting, to say the least. Topping the list of most stressful movies was 2011's 50/50, in which a 20-something-year-old (played by Joseph Gordon-Levitt) battles aggressive cancer and falls for his therapist (Anna Kendrick). You can understand how a computer system would construe that as a stressful watch, though it didn't exactly account for all the comic relief.

An inability to detect comic relief seems to be a trend. Behind 50/50 comes The Princess Bride (1987), which, while admittedly containing plenty of stressful elements (attempted murder, actual murder, poison, quicksand, torture, a Fire Swamp teeming with Rodents of Unusual Size, etc.), is widely considered a delightful, kid-friendly fairytale. Other relatively light fare on the most stressful list includes 1999's Notting Hill, 1986's Ferris Bueller's Day Off, and 1998's Shakespeare in Love.

What the least stressful list lacks in cancer and quicksand, it makes up for in interpersonal drama, loneliness, and character-driven commentary on mental health. Witnessing George Clooney's Ryan Bingham systematically lay off employees in Up in the Air (2009), which came in first place, may not seem too anxiety-inducing on paper, but the effect in the film itself isn't exactly relaxing. Similar things could be said about 2002's Punch-Drunk Love (in fifth place) and 2012's Silver Linings Playbook (ninth). But we, humans and machines alike, can all agree that 1995's Clueless, number 15, is a very fun flick.

While the lists below might not be a perfect watch guide, they're a good reminder of just how much goes into making a movie beyond the screenplay. You can also take heart in the notion that human intelligence may still have an edge over artificial intelligence, at least when it comes to watching movies.

Read more from the original source:

The 20 Most (and 20 Least) Stressful Romance Movies, According to AI - Mental Floss


Pentagon names acting chief digital and AI officer as it moves toward full capability – C4ISRNet

Posted: at 6:20 am

WASHINGTON – The Pentagon's chief information officer will also serve as the head of a new organization overseeing the Defense Department's various digital and artificial intelligence efforts, the department announced Feb. 2.

DoD Chief Information Officer John Sherman will serve as the acting chief digital and artificial intelligence officer, or CDAO, heading a newly created office designed to oversee the Defense Digital Service, the Joint Artificial Intelligence Center and the office of the chief data officer. The new office was established to better align a number of data, analytics, digital solutions and AI efforts across the DoD. Previously, all three of those offices reported directly to the deputy defense secretary.

Sherman will serve as DoD CIO and CDAO as the Pentagon continues to look for a director.

"I'm honored to be able to help get this organization stood up, again, while performing my chief information officer duties and also serving as the acting CDAO," Sherman said. "In addition to getting CDAO up and ready for [full operational capability], rest assured we'll remain laser focused on our CIO duties of cybersecurity, digital modernization, C3 [command, control and communication], and other areas that the department relies on."

Sherman has a long history working on national security and information issues. After serving a three-year stint as the chief information officer of the intelligence community, Sherman joined the DoD as the principal deputy chief information officer. Just over a year ago, he was named acting chief information officer by the incoming Biden administration.

The CDAO was established Dec. 8 and achieved initial operating capability Feb. 1, the Pentagon noted. Full operating capability is expected by June 1, and the Pentagon wants a leader selected for the position by then. Once that is accomplished, the Pentagon will submit proposals to Congress to adjust authorities and reporting lines accordingly. The department expects about 200-300 people to be combined under the new office with an approximately $500 million budget.

The Pentagon also issued a memo clarifying how the new CDAO position differs from other high-level positions within the Office of the Secretary of Defense.

For example, the undersecretary of defense for research and engineering will lead data, analytics and AI policy related to basic research through prototyping, while the CDAO will carry those efforts from prototyping to operations. The CIO will continue to lead on core infrastructure such as cybersecurity, cloud, data transport and networks, while the CDAO will set requirements for that core infrastructure and provide policy and guidance for the data, analytics and AI that interact with it.

Nathan Strout is the staff editor at C4ISRNET where he covers the intelligence community.

Continue reading here:

Pentagon names acting chief digital and AI officer as it moves toward full capability - C4ISRNet


Explore all things AI and Data at Digital Health Rewired 2022 – Digital Health

Posted: at 6:20 am

From monitoring the spread of Covid-19 to helping determine the priority list for the vaccine, data and AI became vital tools during the pandemic, which is one of the reasons why there is a dedicated stage for them at Digital Health Rewired 2022.

The AI and Data Stage will bring together data scientists and researchers, clinicians, and health IT professionals to explore how AI and data are transforming healthcare.

Highlights of the programme include Dr Nicola Byrne, the National Data Guardian for health and adult social care in England, who will be speaking on 15 March.

Dr Byrne will share her insights on ensuring future public trust in how data is held and used, as the Department of Health and Social Care brings forward an ambitious new data strategy.

With the use of data vital to the future of health and care, the role and advice of the National Data Guardian is key in ensuring that citizens' confidential information is safeguarded securely and used properly.

Other speakers include Ayub Bhayat, director of insight and data platform at NHS England and Improvement, Mathew Watt, senior programme manager for AI imaging at NHS Transformation Directorate, and I-Lin Hall, head of data and digital applications at North of England Commissioning Support (NECS).

GP IT provider EMIS is the confirmed sponsor of the AI and Data Stage. Alex Eavis, director of analytics, EMIS, said the potential for AI and data usage in healthcare is enormous.

"Not only can we get a better systemic understanding of what is happening right now, but we have the opportunity to completely transform care pathways," he added.

"The better insight we have and the quicker we can get to that insight, the quicker we can improve the detection of disease, develop new treatments and deliver care, enabling rapid change and the ability to evidence the impact of that change.

"The ethical use of AI and analytics will not only enable us to make informed changes to the way we deliver services, but it will allow us to shift from looking at what happened in the past to predicting what will happen in the future. It is the key to moving from reactive care to proactive and personalised interventions."

Taking place on March 15-16 at the Business Design Centre in London, Digital Health Rewired 2022 is a conference and exhibition which brings together all parts of the digital health community to celebrate the best of digital, data and innovation in health and care.

Health and care professionals will be able to network, collaborate and learn in person during two days of educational conference sessions, exhibitions and meetings, all focused on sharing best practice and innovation.

All the conference sessions will be CPD accredited.

Also taking place will be the Pitchfest competition, which returns for its fourth year. The competition will see another 16 digital health start-ups battle it out through to the live final to win an NHS test bed site for their idea or solution.

You can register here to secure your place at Rewired 2022.

See more here:

Explore all things AI and Data at Digital Health Rewired 2022 - Digital Health


AI, the brain, and cognitive plausibility – TechTalks

Posted: February 5, 2022 at 5:09 am

By Rich Heimann

This article is part of the philosophy of artificial intelligence, a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

Is AI about the brain?

The answer is often, but not always. Many insiders and most outsiders believe that if a solution looks like a brain, it might act as the brain. If a solution acts like a brain, then the solution will solve other problems like humans solve other problems. What insiders have learned is that solutions that are not cognitively plausible teach them nothing about intelligence or at least nothing more than before they started. This is the driving force behind connectionism and artificial neural networks.

That is also why problem-specific solutions designed to actually play to their strengths (strengths that are not psychologically or cognitively plausible) fall short of artificial intelligence. For example, Deep Blue is not real AI because it is not cognitively plausible and will not solve other problems. The accomplishment, while profound, is an achievement in problem-solving, not intelligence. Nevertheless, chess-playing programs like Deep Blue have shown that the human mind can no longer claim superiority over a computer on this task.

Let's consider approaches to AI that are not based on the brain but still seek cognitive plausibility. Shane Legg and Marcus Hutter are both a part of Google DeepMind. They explain the goal of artificial intelligence as an autonomous, goal-seeking system; [for which] intelligence measures an agent's ability to achieve goals in a wide range of environments.

This definition is an example of behaviorism. Behaviorism was a reaction to 19th-century philosophy of the mind, which focused on the unconscious, and psychoanalysis, which was ultimately challenging to test experimentally. John Watson, professor of psychology at Johns Hopkins University, spearheaded the scientific movement in the first half of the twentieth century. Watson's 1913 Behaviorist Manifesto sought to reframe psychology as a natural science by focusing only on observable behavior, hence the name.

Behaviorism aims to predict human behavior by appreciating the environment as a determinant of that behavior. By concentrating only on observable behavior and not the origin of the behavior in the brain, behaviorism became less and less a source of knowledge about the brain. In fact, to the behaviorist, intelligence does not have mental causes. All the real action is in the environment, not the mind. Ironically, DeepMind embraces the philosophy of operant conditioning, not the mind.

In operant conditioning, also known as reinforcement learning, an agent learns that getting a reward depends on action within its environment. The behavior is said to have been reinforced when the action becomes more frequent and purposeful. This is why DeepMind does not define intelligence: it believes there is nothing special about it. Instead, intelligence is stimulus and response. While an essential component of human intelligence is the input it receives from the outside world, and learning from the environment is critical, behaviorism purges the mind and other internal cognitive processes from intellectual discourse.
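The canonical form of this loop is tabular Q-learning. The toy example below, a Python sketch with an invented five-cell corridor (not DeepMind's code), shows an agent that learns which action earns reward without ever representing what the task is:

import random

N_STATES, ACTIONS = 5, [0, 1]          # actions: step left (0) or right (1) along a corridor
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # reward only at the right-hand end
    return nxt, reward

for _ in range(2000):                  # episodes of experience
    s = 0
    for _ in range(20):
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: Q[s][x])
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])   # temporal-difference update
        s = s2

print([round(max(q), 2) for q in Q])   # learned values rise toward the rewarded state

The agent never forms any notion of a goal or a corridor; its behavior is shaped entirely by the reward signal, which is exactly the behaviorist stance.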

This point was made clear in a recent paper by David Silver, Satinder Singh, Doina Precup, and Richard Sutton from DeepMind titled Reward is Enough. The authors argue that maximizing reward is enough to drive behavior that exhibits most if not all attributes of intelligence. However, reward is not enough. The statement itself is simplistic, vague, circular, and explains little because the assertion is meaningless outside highly structured and controlled environments. Besides, humans do many things for no reward at all, like writing fatuous papers about rewards.

The point is this: suppose you or your team talk about how intelligent or cognitively plausible your solution is. I see this kind of argument quite a bit. If so, you are not thinking enough about a specific problem or the people impacted by that problem. Practitioners and business-minded leaders need to know about cognitive plausibility because it reflects the wrong culture. Real-world problem solving addresses the problems the world presents to intelligence, and the solutions to those problems are never cognitively plausible. While insiders want their goals to be understood and shared by their solutions, your solution does not need to understand that it is solving a problem, but you do.

If you have a problem to solve that aligns with a business goal and you seek an optimal solution to accomplish that goal, then how cognitively plausible a solution is, is unimportant. How a problem is solved is always secondary to whether a problem is solved, and if you don't care how, you can solve just about anything. The goal itself and how optimal a solution is for a problem are more important than how the goal is accomplished, whether the solution was self-referencing, or what a solution looked like after you didn't solve the problem.

About the author

Rich Heimann is Chief AI Officer at Cybraics Inc., a fully managed cybersecurity company. Founded in 2014, Cybraics operationalized many years of cybersecurity and machine learning research conducted at the Defense Advanced Research Projects Agency. Rich is also the author of Doing AI, a book that explores what AI is, is not, what others want AI to become, what you need solutions to be, and how to approach problem-solving. Find out more about his book here.

See the rest here:

AI, the brain, and cognitive plausibility - TechTalks


The US can compete with China in AI education: here's how | TheHill – The Hill

Posted: at 5:09 am

The artificial intelligence (AI) strategic competition with China is more intense than ever. To many, the stakes have never been higher: who leads in AI will lead globally.

At first glance, China appears to be well-positioned to take the lead when it comes to AI talent. China is actively integrating AI into every level of its education system, while the United States has yet to embrace AI education as a strategic priority. This will not do. To maintain its competitive edge, the United States must adopt AI education and workforce policies that are targeted and coordinated. Such policies must also increase AI-specific federal investment and encourage industry partnerships.

Upon first glance, the state of U.S. AI education appears to be on a positive trajectory. Recent years have seen a proliferation of AI education materials outside the classroom: a rise in online AI education programs at all levels, including K-12 summer camps, boot camps, and a range of certificates and industry-academia partnerships. Nearly 300 different organizations now offer AI or computer science summer camps to K-12 students. Other K-12 learning opportunities include after-school programs, competitions and scholarships, including explicit outreach to underrepresented groups in computer science education to address race and gender disparities.

However, the reach and effectiveness of these piecemeal efforts tell a different story. There are no standardization or quality benchmarks for the maze of online offerings or data on reach. Moreover, outside of a handful of schools, very little AI education is happening in the classroom. Integrating any new education into classrooms is notoriously slow and difficult, and AI education will be no exception. If anything, it faces an even steeper uphill battle as schools across the country are in a constant struggle over competing priorities.

Meanwhile, China's rollout and scale of AI education dramatically eclipse U.S. initiatives. While it is too early to assess the effectiveness and quality of China's AI education programs, our research at Georgetown University's Center for Security and Emerging Technology (CSET) reveals that China's Ministry of Education is rapidly implementing AI curricula across all education levels and has even mandated high schools to teach AI coursework since 2018. In Beijing, as well as Zhejiang and Shandong provinces, education authorities have integrated Python into the notoriously difficult Gaokao college entrance exam.

At the postsecondary level, China's progress appears even more impressive. In 2019, the Ministry of Education standardized an undergraduate AI major, which today is offered at 345 universities and has been the most popular new major in China. Additionally, our tally indicates at least 34 universities have AI institutes that often train both undergraduate and graduate students and pursue research areas such as natural language processing, robotics, medical imaging, smart green technology and unmanned systems. The U.S. has a world-class university system, but AI majors in large part remain a specialization of computer science.

The U.S. education system is not designed to operate like Chinas. Nor should it be. There are inherent advantages in a system that allows for a greater degree of educational autonomy. This gives breathing room for experimentation, creativity and innovation among U.S. educational institutions and opens doors for collaboration with the local community, private sector, philanthropic organizations and other relevant stakeholders.

But for experimental AI education initiatives to be successful, they must be evaluated and scaled inclusively throughout the education system. In this context, the decentralized nature of the U.S. education system can pose a challenge: curricula, teacher training and qualifications, and learning standards are all fragmented by different state approaches.

For instance, computer science coursework is currently available at 51 percent of U.S. high schools but, unlike in China, is not required in most cases. Initiatives are cropping up in various schools around the country, but a lack of coordination delivering comprehensive awareness, cross-state collaboration and shared assessment metrics hinders these nascent programs from having a nationwide, widespread impact on AI education.

Implementing competitive AI education across the United States is no easy task: there are no shortcuts and no single solution. There are, however, two elements that education leaders and policymakers should prioritize: coordination and investment.

For coordination at the federal level, one path forward is through the White House's National Artificial Intelligence Initiative Office for Education and Training, which can help coordinate AI education, training and workforce development policy across the country. At the same time, community and state-level engagement to implement, evaluate and scale AI education initiatives is likely to be just as important as federal efforts.

For example, the Rhode Island Department of Elementary and Secondary Education is leveraging partnerships with private universities and nonprofits to strengthen its K-12 computer science initiative. Results are starting to show promise: There has been a 17-fold increase in Advanced Placement computer science exams taken since 2016; however, this still represents a small fraction of the overall student body.

Adequate and diversified investment in AI education is also essential. Federal funding can help close accessibility gaps between states. To that end, Congress can appropriate funding for states to provide public K-12 students with AI experiential learning opportunities and K-12 educators with the required training and support. State and local governments can also fund teacher training initiatives to encourage more educators to become certified in computer science or offer ongoing professional development. Concurrently, funding from the nonprofit and private sectors can complement federal, state-level and local investments.

Ultimately, successful AI education implementation and adoption will be a national endeavor requiring participation from federal, state and local governments, as well as nonprofits, academia and industry. Coordination within the education ecosystem will help to spur ideas and initiatives.

For those touting U.S. innovation as a competitive strength vis-à-vis China, it should be nothing less.

Kayla Goode is a research analyst at Georgetown University's Center for Security and Emerging Technology (CSET), where she works on the CyberAI Project.

Dahlia Peterson is a research analyst at Georgetown University's Center for Security and Emerging Technology (CSET). Follow her on Twitter @dahlialpeterson.

See the original post:

The US can compete with China in AI education: here's how | TheHill - The Hill


Here’s What Henry Kissinger Thinks About the Future of Artificial Intelligence – Gizmodo

Posted: at 5:09 am

Photo: Adam Berry (Getty Images)

One of the core tenets running throughout The Age of AI is also, undoubtedly, one of the least controversial. With artificial intelligence applications progressing at break-neck speed, both in the U.S. and other tech hubs like China and India, government bodies, thought leaders, and tech giants have all so far failed to establish a common vocabulary or a shared vision for what's to come.

As with most issues discussed in The Age of AI, the stakes are exponentially higher when the potential military uses for AI enter the picture. Here, more often than not, countries are talking past each other and operating with little knowledge of what the other is doing. This lack of common understanding, Kissinger and Co. wager, is like a forest of bone-dry kindling waiting for an errant spark.

"Major countries should not wait for a crisis to initiate a dialogue about the implications (strategic, doctrinal, and moral) of these [AI's] evolutions," the authors write. Instead, Kissinger and Schmidt say they'd like to see an environment where major powers, both government and business, pursue their competition within a framework of verifiable limits.

Negotiation should not only focus on moderating an arms race but also making sure that both sides know, in general terms, what the other is doing. In a general sense, the institutions holding the AI equivalent of a nuclear football have yet to even develop a shared vocabulary to begin a dialogue.

See the rest here:

Here's What Henry Kissinger Thinks About the Future of Artificial Intelligence - Gizmodo

Posted in Ai | Comments Off on Here’s What Henry Kissinger Thinks About the Future of Artificial Intelligence – Gizmodo
