How Artificial Intelligence Can Reshape Homeland Security in 2024 – HS Today

The Department of Homeland Security is on a mission to make sure customer experience (CX) permeates through everything the agency does, and artificial intelligence (AI) can help.

As DHS customer experience executive Dana Chisnell said, "We're really here to inject customer experience design, human-centered design, product management, digital services and skills into everything across the department, from service delivery to acquisitions and procurement."

One path to success, as DHS IT leaders have outlined in their FY 2024-2028 IT Strategic Plan, is the use of AI to bolster CX efforts and meet IT modernization goals.

AI is a valuable tool for DHS due to the incredible volume of data and information collected, stored, and shared on a daily basis. DHS can use AI to extract actionable insights from these troves of information. In turn, the DHS workforce is better equipped to make informed decisions quickly, so issues are resolved promptly.

The benefits of using AI have already been realized by DHS for specific issues, such as border protection and immigration.

U.S. Citizenship and Immigration Services (USCIS), for example, has started to use AI to deliver service more efficiently through machine learning models that eliminate redundant paperwork by pulling together information from disparate systems.

Another critical use case of AI is research and development, as the Science and Technology Directorate (S&T) has shown in its work to help the people on the front lines of the homeland security mission, like first responders. S&T's use of AI has fueled efforts including data analysis, imaging, visualization, and predictive analytics, providing more insight into ongoing DHS efforts.

These data-driven insights can also illuminate solutions to key CX challenges, which any agency will encounter as it updates and improves its processes. When working to optimize available data, there are crucial considerations for agency leaders to keep in mind:

DHS will achieve its CX goals if it takes a human-centered design (HCD) approach, which places emphasis on feedback and real-time adjustments. When feedback is continually captured and used to inform design, agencies can build and deliver trustworthy, accessible services for all Americans that help them feel heard and understood.

In December, the agency published an update on its Artificial Intelligence Task Force (AITF), which was created earlier in the year to guide the use of AI.

"The Task Force collaborated with DHS Components and offices to initiate several pilot projects, including projects based on internet-accessible, commercially available Generative AI/Large Language Models (LLMs) to advance mission capabilities using AI," read the memo. "These pilots will support the Department's understanding of the capabilities, limitations, and risks associated with AI while testing potential solutions. The pilots will also provide real-world data and information on how DHS can scale these technologies across the Department."

It's an important step for the agency, as the proper infrastructure must be in place for AI to be truly effective in reshaping how an agency operates. One example, as stated in the update memo, is the work being done by Customs and Border Protection (CBP), which began establishing infrastructure requirements for developing, deploying, and managing machine learning models. Specifically, agency leaders have been developing operational pipelines and best practices for deploying and operating machine learning and AI models.

The transformation for DHS will not take place overnight; the IT strategic plan spans five years for a reason.

But the agency, working in concert with industrys top technology companies, has the ability to make significant strides very quickly.

That impact will have far-ranging, and incredibly positive, implications for the American people.


3 Artificial Intelligence Stocks to Buy as the Technology Advances in 2024 – InvestorPlace

According to the International Monetary Fund, we are on the brink of a technology revolution spearheaded by artificial intelligence. The technology will boost productivity, accelerate global growth and raise incomes globally. Multiple AI stocks will benefit from this transformation.

Today, businesses are analyzing how they can leverage AI to improve productivity and how it affects the competitive landscape. Already, organizations are using AI in various use cases, such as application development, customer service, pharmaceutical discovery and creative design.

However, to achieve the promise of AI, companies must make significant investments. That starts with the data and models they use. First, organizations are investing in systems to collect, store, manage and access massive data sets. Then, using the right models, they can derive patterns from their data to drive decision-making, customer service and innovation.

Across the artificial intelligence stack, several companies are meeting various needs. From chip companies producing chips for training large language models to companies developing large language models. These AI stocks are at the forefront of this race and will be winners in 2024.


As the largest semiconductor foundry, Taiwan Semiconductor Manufacturing (NYSE:TSM) is one of the leading AI stocks. The company is seeing soaring demand as the scramble for AI chips for data centers and edge computing grows. Over the years, many integrated device manufacturers transitioned to fabless designers, cementing Taiwan Semiconductor's importance in chip production.

Taiwan Semiconductor has built an unassailable competitive advantage in its process technology. By investing heavily in research and development, it has maintained leadership in node advancement. As a result, it attracts and retains high-quality fabless customers, such as Apple (NASDAQ:AAPL) in mobile chips and Nvidia (NASDAQ:NVDA) in graphics processing units.

Today, the company is producing leading-edge node chips for its AI customers. Competitors have had challenges producing these chips, enabling Taiwan Semi to dominate the market and charge higher prices. Already, the firm is producing 3-nanometer chips for Nvidia, Apple and Intel (NASDAQ:INTC) as rivals struggle to catch up.

On January 19, the company released results and issued upbeat guidance. The company is benefiting from cloud service providers upgrading their data centers with chips that support AI capabilities. Management was optimistic about high-performance computing demand related to AI, forecasting more than 20% revenue growth in 2024.

While AI adoption has been mainly in the data center, it will become ubiquitous, supporting Taiwan Semis growth. Consumer devices such as smartphones and industrial equipment will need AI capabilities, ushering in the next growth cycle.


Although it is primarily a social media company, Meta Platforms (NASDAQ:META) has been a player in AI for a while. Its increased efforts in the field came out of necessity after Apple's IDFA changes curtailed user tracking. Faced with diminished advertising accuracy, Meta Platforms pivoted to AI.

Two years later, the company has become one of the top AI stocks. Notably, CEO Mark Zuckerberg has touted AI as "the foundation of our discovery engine and our ads business." The company has amassed hundreds of top AI researchers and invested in significant computing power to run these systems. These efforts are paying off with impressive results.

In February 2023, it released LLaMA (Large Language Model Meta AI), a foundational large language model. Meta's aim was to advance AI research. Then, in July 2023, it released LLaMA 2 for research and commercial use. Impressively, the model outperforms other open-source language models in coding, reasoning, proficiency, and knowledge tests.

Meta has adopted a unique approach, giving away its models for free. By open-sourcing its models, Meta hopes third-party developers will help improve the platform. Just as Linux became the dominant open-source operating system, Meta hopes Llama will become integral to building the next generation of AI applications.

Zuckerberg recently committed to developing artificial general intelligence and pledged to spend heavily on compute infrastructure to support this effort. If Meta Platforms manages to standardize AI development through its open-source models, it will be a key player in the ecosystem. Considering Zuckerberg's focus on winning in AI, Meta remains one of the top artificial intelligence stocks.


Google's parent company, Alphabet (NASDAQ:GOOG, NASDAQ:GOOGL), has been derided for losing the AI war to Microsoft (NASDAQ:MSFT). These assessments seem overly pessimistic. The company is an innovator and will prove the doubters wrong.

The search giant has invested heavily in artificial intelligence. The most significant move was the 2014 acquisition of DeepMind. In addition, Google has been using artificial intelligence in search for a while, even before OpenAI launched ChatGPT.

Moreover, it is introducing products that integrate generative AI into search. Today, users can rely on Bard for chat-like responses. It is also testing the Search Generative Experience, whose features have expanded since its May 2023 launch.

One competitive advantage that positions Google to succeed is its high-quality training data. Today, Google has six products with over 2 billion users and 15 products, each with more than 500 million users. This data has been crucial in creating context-aware AI functions. For instance, features such as Smart Compose in Gmail have significantly improved the user experience.

Alphabet's leadership among artificial intelligence stocks extends beyond software to hardware, making it one of the best AI stocks. It developed a specialized chip, the Tensor Processing Unit (TPU), specifically for AI applications. Its TPUs and open-source framework, TensorFlow, power services such as Gmail, Maps and YouTube.

As Google integrates AI into its products, it will see more growth. For example, serving context-rich ads into generative search results will improve ad conversion and monetization. In December 2023, Google released Gemini, its latest and most powerful LLM, proving Alphabet is still in the AI race. Don't count out this technology giant yet; it has the resources to win!

On the date of publication, Charles Munyi did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Charles Munyi has extensive writing experience in various industries, including personal finance, insurance, technology, wealth management and stock investing. He has written for a wide variety of financial websites including Benzinga, The Balance and Investopedia.


This Week in AI: FTC, Partnerships and Enterprise Deployments – PYMNTS.com

Generative artificial intelligence tends to grow in leaps and bounds.

And increasingly, so do the companies behind the technology, including Microsoft, which reached a market valuation of $3 trillion Wednesday (Jan. 24) due to the impact of AI on investors' appetites, consolidating its position as one of the largest public companies.

While the addition of ChatGPT has reportedly not helped Microsoft's Bing search product take on Google's flagship, the 800-pound gorilla of search, it still puts the Redmond, Washington-based tech giant in rarefied company. Apple is the only other public business to have crossed the $3 trillion threshold.

Still, all that growth hasn't come without scrutiny.

The Federal Trade Commission (FTC) opened an inquiry into ongoing investments and partnerships in the AI sector Thursday (Jan. 25), ordering Big Tech companies Alphabet, Amazon, Anthropic, Microsoft and OpenAI to turn over information about their ecosystem investments and partnerships as it investigates any effect they may be having on AIs competitive landscape.

Here is the weekly roundup of the can't-miss AI news that PYMNTS found, tracked and covered.

Read also: AI's Role in Advancing Real-Time Payments

Platforms, providers and businesses are embedding AI into end-user touchpoints.

PayPal announced Thursday that it is introducing what it termed a reimagined checkout experience that will reduce checkout times by as much as half and use AI to craft personalized recommendations to consumers. Meanwhile, online payments firm AffiniPay announced Wednesday that it embedded generative AI into its legal technology offerings.

On the commerce front, Etsy launched a hub Wednesday that uses AI and human curation to help shoppers find gifts for any occasion. And PYMNTS took a piercing look Monday (Jan. 22) at how retailers are folding generative AI capabilities into their 2024 playbooks.

PYMNTS Intelligence found that shopping beats out banking for consumer preference around AI-enabled experiences. And AI is increasingly giving the fitness category a workout.

To help firms more seamlessly integrate AI into their operations, OpenAI unveiled Thursday new embedding models, application programming interface usage management tools and plans to reduce pricing for one of its models.

But its not just commerce and checkout where AI is having an impact; the innovation is also being embedded across hardware devices.

Apple is reportedly pushing to bring AI to the iPhone, quietly making a series of acquisitions, hires and updates to its hardware. Meanwhile, Samsung earlier this month introduced its new Galaxy S series phone, billing it as a new era of mobile AI. It is part of what researchers believe will be a wave of more than 1 billion AI-powered smartphones expected to ship in the next three years.

Due to the volume of resources and costs involved in developing and deploying AI systems, and despite the FTCs scrutiny, partnerships are emerging as a popular and even necessary approach to commercializing contemporary AI and building out the frontier capabilities of the technology.

Increasingly, the government itself wants to get in on the action.

The National Science Foundation announced Wednesday a federal program designed to increase access to AI resources, including tools, data and computing infrastructure, beyond just the worlds most valuable tech businesses.

The pilot program, called the National Artificial Intelligence Research Resource, comes after the White House's executive order mandating that barriers to entry to AI infrastructure be lowered. Several Big Tech companies will be tasked with providing resources, funding and tools alongside 11 federal agencies.

Elsewhere on the federal front, the White House said it wants AI to be good news for small businesses.

In the private sphere, months after investing in Hugging Face, Google launched a partnership Thursday with the open-source AI firm. The collaboration will let developers use Google Cloud infrastructure for all Hugging Face services, while also allowing for the training of Hugging Face AI models on Google Cloud.

This comes as Meta is intensifying competition in the AI market by consolidating its two advanced AI divisions, the Fundamental AI Research team and its top-level generative AI product team, into a single group.

The move underscores how Meta is now prioritizing product-level progress in developing general-purpose AI chatbots and securing top talent in AI engineering, as opposed to attempting to lure top researchers to work on strategies like Meta's metaverse, which is losing over $1 billion a month.

Organizations are determining how to move from sharpening their AI strategies to deploying them.

But when it comes to effectively deploying AI systems within the enterprise, there are some tech terms business leaders need to know, and some that they can leave to their engineering and data teams (for now).

PYMNTS wrote Tuesday about how anthropomorphizing AI systems, or attributing human-like characteristics to them, can pose several dangers. For many business use cases of the technology, doing so can serve as a fatal distraction from the utility AI can offer.

Education and communication about the nature of AI systems can help manage expectations and ensure responsible use. Within an enterprise environment, deploying AI systems with a clear-eyed approach to quantifiable goals and expected return on investment is key to success.

While news of AI that can surpass human intelligence is helping fuel the hype of the technology, the reality is far more driven by math than it is by myth.

At a fundamental level, generative AI models are built to generate reasonable continuations of text by drawing from a ranked list of words, each given different weighted probabilities based on the dataset the model was trained on.
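This next-token mechanism can be sketched in a few lines of Python. The vocabulary and weights below are invented for illustration; a real model ranks tens of thousands of tokens using probabilities learned from its training data.

```python
import random

# Toy next-token sampling: the model assigns each candidate word a
# weighted probability, and the continuation is drawn from that
# ranked list. Words and weights here are invented for illustration.
next_word_probs = {
    "bank": 0.55,    # most likely continuation
    "river": 0.25,
    "loan": 0.15,
    "zebra": 0.05,   # unlikely, but never ruled out entirely
}

def sample_next_word(probs):
    """Draw one word according to its weighted probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

word = sample_next_word(next_word_probs)
print(word)
```

Sampling from the ranked list, rather than always picking the single most likely word, is why generated text varies from one run to the next.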

PYMNTS reported Tuesday about how colleges and universities are increasingly weaving AI into their lesson plans.

"If you look at the maturity of AI models over the years, if you go back 20 years, AI was more around recognition, and gradually that evolved into coming up with insights and serving as a recommendation engine," RXO Chief Information Officer Yoav Amiel told PYMNTS in an interview posted Friday (Jan. 26). "Today, AI is capable of task completion, and that's what gets me excited."

Finally, economist David Evans penned a piece for PYMNTS on how to think about AI regulation.



3 of the Smartest AI Stocks to Buy Now for Long-Term Growth – InvestorPlace

Are you looking for AI stocks that could provide long-term growth? Artificial intelligence is undoubtedly the most popular theme of 2023, grabbing headlines left and right and significantly impacting the performance of most of the IT industry.

The impact of AI has created new opportunities for various industries to grow and expand, ranging from health care to banking and even real estate. The applications of AI are virtually limitless. Now that we've arrived at the 2024 starting line, the next question is, will the 2023 darlings remain at the top, or will a new generation of AI stocks take the lead? Investors should consider purchasing the top AI stocks in 2024 for long-term growth.

First on our list is a global leader in artificial intelligence and computational science. Altair Engineering (NASDAQ:ALTR) has been a groundbreaker in AI and computational science, revolutionizing the cloud solutions and high-performance computing (HPC) landscape. Its specialization in data analytics, simulation software, and optimization products opens its doors to diverse clients.

ALTR's latest version, HPCWorks 2024, provides users with an enhanced user interface and new functionality, integrating AI to help streamline workflow distribution and optimize cloud scaling. The company also offers client engineering services to support customers over the long term with its ongoing expertise.

Looking closer at its financials, ALTR's most recent report points to robust 14.8% year-over-year (YoY) growth in software product revenue and 12.3% YoY total revenue growth. Non-GAAP metrics also showed a 126.3% rise in adjusted EBITDA and a 197.0% increase in net income YoY.

Analysts also praise ALTR with a Strong Buy recommendation and a projected high target price of $95.00. That might not look like much relative to current price levels, but investors should remember that ALTR is one of the leaders in AI and computational science. As the technology moves toward adoption in every part of society, so does the company's growth. In our view, that's a growth story we want to be a part of, so we recommend ALTR as a buy-rated AI stock for long-term growth.


The next one on our list may not be a direct AI play, but it is one of the primary beneficiaries of the AI boom. Super Micro Computer (NASDAQ:SMCI) is an application-optimized server and storage systems provider for enterprise data centers running demanding workloads like cloud computing, artificial intelligence, and edge computing. Unlike companies such as Nvidia, which sells GPUs, Super Micro offers a complete IT solution, packaging these and other components into server racks for sale to a broad audience. That means that as AI adoption grows to full scale, so will SMCI.

The company has also announced its upcoming support for the new NVIDIA HGX H200, which comes with H200 Tensor Core GPU and newer rack solutions optimized for AI, edge computing, and cloud service providers.

Despite missing the mark on a few key points, its latest financials and announcements make a strong case for a worthy investment for AI enthusiasts. Net sales were strong at $2.12 billion, growing 14.5% YoY. Meanwhile, net income ended at $157 million, lower YoY due to increased operating expenses, mainly research and development, and lower operating income. Despite that, SMCI maintains a positive outlook for FY24, which helps boost investor confidence. Year to date, the stock is already up over 52%, and the sky might really be the limit for this stock in 2024.

The final company on our list of AI stocks is another frontrunner in the AI race that could provide long-term growth. Baidu Inc (NASDAQ:BIDU) is mainly known for its Chinese-language Internet search engine, one of the most popular in the world.

The company's operations fall under two segments: Baidu Core, for its search-based, feed-based, and online marketing services with AI initiatives, and iQIYI, for its online entertainment content produced originally or with other content partners. BIDU has been investing heavily in AI, and its latest ERNIE 4.0, accessible through ERNIE Bot and a Cloud API, shows the company's commitment to staying at the forefront of AI advancements.

Baidu's latest financials indicate 6% YoY revenue growth. Meanwhile, the Baidu Core segment reported 5% growth in total revenue and 6% growth in non-online marketing revenue. In addition, the company's Apollo Go ride-hailing service had a substantial 73% increase in reported rides, with cumulative rides provided to the public reaching 4.1 million. The Baidu app also saw a 5% YoY increase in monthly active users.

Like most stocks in China, BIDU's one-year performance hasn't been stellar. In fact, BIDU is down roughly 22% over the past year. However, with solid numbers and a firm commitment to advancing AI, Baidu is one of the best stocks for investors looking to ride the AI boom.

On the date of publication, Rick Orford did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Rick Orford is a Wall Street Journal best-selling author, investor, influencer, and mentor. His work has appeared in the most authoritative publications, including Good Morning America, Washington Post, Yahoo Finance, MSN, Business Insider, NBC, FOX, CBS, and ABC News.


10 Artificial Intelligence Projects Revolutionising E-commerce – AutoGPT

Artificial intelligence has become a driving force in modern businesses, enabling efficiency, innovation, and strategic advantage. In the realm of e-commerce, the impact of AI continues to grow, transforming various aspects of online retail operations. By delving into the cutting-edge AI projects that are revolutionising e-commerce, businesses can harness these innovative technologies to stay ahead of the curve and thrive in the competitive digital landscape.

One of the key AI projects revolutionising e-commerce is AutoGPT, an artificial intelligence system with the capability to write its own code and execute Python scripts. Its advanced features open up a world of possibilities for automating and developing e-commerce platforms.

A. The capabilities of AutoGPT in writing code and executing Python scripts

AutoGPT is designed to understand and generate code based on natural language input. This enables it to write code snippets or even complete programs in response to specific tasks. Its ability to execute Python scripts further enhances its potential to automate e-commerce operations and streamline various processes.

B. Potential applications in e-commerce automation and development

AutoGPT's capabilities can be harnessed in numerous ways to revolutionise e-commerce. For instance, it can help create customised product recommendations based on customer behaviour, automate inventory management, and even optimise pricing strategies. Additionally, it can be used to improve website design, enhance user experience, and facilitate more efficient customer support.
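As a toy illustration of the recommendation use case, consider the kind of script such a system might produce: ranking products by how often they are bought together. The order data and function below are hypothetical, not AutoGPT output.

```python
from collections import Counter

# Hypothetical order history: each inner list is one customer's basket.
orders = [
    ["phone case", "charger", "screen protector"],
    ["charger", "earbuds"],
    ["phone case", "charger"],
    ["earbuds", "screen protector", "charger"],
]

def recommend(product, orders, top_n=2):
    """Recommend the items most often bought alongside `product`."""
    co_counts = Counter()
    for basket in orders:
        if product in basket:
            for item in basket:
                if item != product:
                    co_counts[item] += 1
    return [item for item, _ in co_counts.most_common(top_n)]

print(recommend("phone case", orders))
```

A production recommender would use far richer signals (views, purchase history, model-learned embeddings), but co-occurrence counting is the simplest version of "customers who bought this also bought".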

C. How to leverage AutoGPT for e-commerce operations

To make the most of AutoGPT's potential, e-commerce businesses should first identify areas where automation and development can have the most significant impact. They can then integrate AutoGPT into their existing systems and processes to optimise operations, reduce manual effort, and drive growth. As the technology continues to evolve, businesses should also stay updated on the latest advancements and explore new ways to leverage AI in e-commerce.

The collaboration between NVIDIA and Microsoft to bring AI to personal computing is a game-changer in the technology industry. This partnership aims to develop cutting-edge AI solutions that can be integrated into everyday devices, making advanced computing accessible to a broader audience. As a result, this collaboration has significant implications for e-commerce platforms, opening up new opportunities for enhanced user experiences and operations.

E-commerce platforms can benefit from the AI advancements brought about by the NVIDIA and Microsoft partnership. By leveraging AI technologies in areas such as image recognition, natural language processing, and recommendation engines, e-commerce businesses can offer more personalised and engaging shopping experiences. Improved user experiences can lead to higher customer satisfaction, ultimately resulting in increased sales and brand loyalty.

Moreover, the partnership between NVIDIA and Microsoft has the potential to revolutionise e-commerce operations. With AI-driven analytics and automation, businesses can optimise their supply chain management, inventory control, and marketing strategies. Additionally, AI can assist in identifying trends and patterns in customer behaviour, enabling e-commerce platforms to anticipate and adapt to changing consumer demands. This level of insight and adaptability is crucial for e-commerce businesses to thrive in the increasingly competitive digital marketplace.

In conclusion, the NVIDIA and Microsoft partnership represents a significant step forward in the integration of AI into personal computing. The resulting advancements in AI technology have far-reaching implications for e-commerce platforms, providing opportunities for enhanced user experiences and streamlined operations. By embracing AI-driven solutions, e-commerce businesses can stay ahead of the curve and ensure their continued success in the digital landscape.

As the e-commerce landscape becomes increasingly competitive, the importance of captivating product imagery cannot be overstated. In recent years, Google Pixel's AI-powered photo editing tools have emerged as a game-changer, offering innovative features that can help e-commerce businesses create more dynamic and engaging visual content.

Google Pixel's photo editing tools leverage artificial intelligence to enhance image quality, fine-tune colours, and optimise lighting conditions. These AI-powered features enable users to effortlessly create professional-grade product images, even without advanced editing skills. The technology can also intelligently identify and remove unwanted elements in photos, allowing e-commerce businesses to present their products in the best possible light.

The AI capabilities of Google Pixel's photo editing tools hold immense potential for revolutionising e-commerce product imagery. By streamlining the editing process and enabling businesses to create high-quality visuals with ease, these tools can significantly improve product presentation and customer engagement. In turn, this can lead to increased conversion rates and enhanced customer satisfaction.

With the help of Google Pixel's AI-powered photo editing tools, e-commerce businesses can create more dynamic and engaging visual content that stands out from the competition. By harnessing the power of artificial intelligence, these tools allow businesses to experiment with different styles, colours, and lighting effects to create captivating product images that resonate with their target audience. In the competitive world of e-commerce, embracing such cutting-edge technology can be a crucial factor in staying ahead of the curve and ensuring continued growth and success.

The AutoGPT Arena Hackathon is an event designed to inspire and encourage the development of innovative artificial intelligence solutions within the realm of e-commerce. The goals and objectives of this hackathon revolve around fostering creativity, collaboration, and problem-solving among participants. By focusing on the creation of AI agents to improve customer service and task handling, the AutoGPT Arena Hackathon has the potential to significantly benefit e-commerce businesses.

One of the primary areas of focus in the hackathon is the development of AI agents that can effectively handle customer service tasks, such as answering questions, processing returns, and resolving issues. These AI-driven customer service solutions can streamline the support process for e-commerce companies, providing customers with accurate, timely, and helpful assistance. By automating these tasks, businesses can reduce costs, improve efficiency, and ultimately enhance the overall customer experience.
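A minimal sketch of the routing layer such a customer-service agent might use is shown below. The intents, keywords, and canned replies are invented for illustration; a production agent would classify messages with a language model rather than keyword matching.

```python
# Map each support intent to trigger keywords (all hypothetical).
INTENT_KEYWORDS = {
    "return": ["return", "refund", "send back"],
    "order_status": ["where is", "tracking", "shipped"],
    "product_question": ["how do i", "does it", "compatible"],
}

def classify(message):
    """Pick the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "handoff_to_human"   # fall back when nothing matches

def handle(message):
    """Route a message to its intent and return a canned reply."""
    intent = classify(message)
    replies = {
        "return": "I can start a return for you. What is your order number?",
        "order_status": "Let me look up your shipment.",
        "product_question": "Happy to help with product details.",
        "handoff_to_human": "Connecting you with a support agent.",
    }
    return intent, replies[intent]

print(handle("I'd like a refund for my order"))
```

The key design point, regardless of how classification is implemented, is the explicit fallback path: anything the agent cannot confidently handle is escalated to a human rather than answered badly.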

Another key objective of the AutoGPT Arena Hackathon is the development of AI agents capable of handling various e-commerce tasks, such as inventory management, order processing, and sales forecasting. By leveraging artificial intelligence in these areas, e-commerce businesses can make more informed decisions, optimize their operations, and ultimately boost their bottom line. The potential benefits for e-commerce businesses participating in this hackathon are numerous, making it an excellent opportunity for companies to explore and experiment with cutting-edge AI technologies.

One of the key aspects of any successful e-commerce platform is the ability to offer responsive and helpful customer support. ChatGPT, an advanced AI technology, has the potential to revolutionise this aspect of e-commerce with its new voice and image capabilities. These capabilities allow for a more interactive and accessible way of providing support to customers, ensuring their questions and concerns are addressed efficiently and effectively.

Integrating ChatGPT into e-commerce platforms can significantly enhance the overall customer experience. The AI-driven technology can be programmed to handle a wide range of customer queries, from product information and order tracking to returns and refunds. With its advanced natural language processing capabilities, ChatGPT can understand user inputs in both text and voice formats, enabling a seamless communication experience.

Moreover, the ability of ChatGPT to process and understand images adds another layer of convenience for customers. For instance, if a customer has an issue with a product they received, they can simply send a photo of the item to the chatbot, which can then analyse the image and provide an appropriate response or solution.

By harnessing the power of ChatGPT, e-commerce businesses can create a more engaging and accessible customer support system that caters to the diverse needs of their users. This can ultimately lead to improved customer satisfaction, increased brand loyalty, and enhanced overall business performance.

When it comes to leveraging artificial intelligence in the realm of e-commerce, it is crucial to choose the most suitable AI solution for your specific needs. Two popular AI models, Auto-GPT and ChatGPT, offer distinct capabilities and advantages. Understanding the differences between these models and their potential applications in e-commerce can help you make an informed decision.

Auto-GPT and ChatGPT share some similarities but have different focuses. ChatGPT is developed by OpenAI, while Auto-GPT is an open-source project built on top of OpenAI's GPT models. Auto-GPT is designed to write code and execute scripts, making it an excellent choice for automation and development tasks. ChatGPT, on the other hand, is more focused on generating human-like text responses, making it ideal for customer support and communication purposes.

One of the key advantages of Auto-GPT is its self-prompting capability, which allows it to autonomously generate and execute code without human intervention. This feature can be particularly beneficial for e-commerce businesses looking to automate various tasks and processes, such as inventory management, order processing, and customer support. By integrating Auto-GPT into your e-commerce platform, you can potentially create a more efficient and autonomous system that can handle routine tasks with minimal human input.
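The self-prompting behaviour described above can be pictured as a plan-act-observe loop. The toy sketch below uses a hard-coded `decide` function as a stand-in for the language model that would normally choose the next action; the mock inventory task and all names are illustrative, not Auto-GPT's actual implementation.

```python
# Toy sketch of the self-prompting loop behind agents like Auto-GPT.
# `decide` is a hard-coded stand-in for the LLM; everything here is
# illustrative, not Auto-GPT's real code.

def decide(goal: str, history: list[str]) -> str:
    """Stand-in for the LLM: pick the next action given the run so far."""
    plan = ["check_inventory", "reorder_low_stock", "finish"]
    return plan[len(history)] if len(history) < len(plan) else "finish"

def execute(action: str, stock: dict) -> str:
    """Carry out an action against a mock store and report an observation."""
    if action == "check_inventory":
        low = [item for item, qty in stock.items() if qty < 5]
        return f"low stock: {low}"
    if action == "reorder_low_stock":
        for item, qty in stock.items():
            if qty < 5:
                stock[item] = qty + 20  # mock purchase order
        return "reordered"
    return "done"

def run_agent(goal: str, stock: dict, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):  # cap steps so the loop always halts
        action = decide(goal, history)
        observation = execute(action, stock)
        history.append(f"{action} -> {observation}")
        if action == "finish":
            break
    return history
```

The cap on steps matters: real autonomous agents can loop indefinitely, so production deployments bound iterations (and spending) just as `max_steps` does here.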

To choose the right AI solution for your e-commerce business, it is essential to consider your specific needs and objectives. If your primary goal is to streamline your e-commerce development and automation, Auto-GPT might be the better choice. However, if your focus is on enhancing customer support and communication, ChatGPT might be more suitable. By carefully evaluating the capabilities and strengths of each AI model, you can select the most appropriate solution to revolutionise your e-commerce operations.

The ambitions and setbacks of the Arrakis project offer valuable insights for e-commerce businesses looking to leverage AI technology. Initially, OpenAI aimed to create a highly efficient AI system with the Arrakis project. Despite its promising start, the project faced several challenges, ultimately leading to its discontinuation. Nonetheless, the experiences gained from this project provide important lessons for the future development of AI in e-commerce.

Ongoing efforts to create more efficient AI systems continue to drive innovation in the field. Researchers and developers are constantly pushing the boundaries of AI technology, exploring new possibilities and applications. As a result, e-commerce businesses have the opportunity to benefit from these advancements, using AI to enhance their operations, improve customer experiences, and boost their overall performance.

Key takeaways for e-commerce operations from the Arrakis project include the importance of setting realistic goals and expectations, learning from setbacks, and adapting strategies as needed. Furthermore, it highlights the need for continuous investment in AI research and development, as well as the value of collaborations and partnerships in driving innovation. By embracing these lessons, e-commerce businesses can better position themselves to leverage AI technology and remain at the forefront of their industry.

Artificial intelligence has made significant strides in enhancing various aspects of e-commerce, including communication and marketing. One notable innovation is the use of AI-generated prompts, which play a crucial role in crafting highly effective e-commerce strategies. By generating relevant and engaging prompts, AI can help businesses tailor their messaging and target customers more accurately.

One valuable resource in this regard is the Ultimate ChatGPT Prompt Collection, which offers a wide range of AI-generated prompts suitable for various e-commerce applications. Utilising this collection can bring about numerous benefits, such as improving the overall quality of marketing communications, saving time spent on creating content, and ensuring that messaging remains consistent across different channels. The collection also allows for rapid iteration of marketing ideas, enabling e-commerce businesses to test different approaches and quickly refine their strategies.

By incorporating the Ultimate ChatGPT Prompt Collection and other AI-driven tools into their communication and marketing efforts, e-commerce businesses can enjoy a more streamlined and efficient approach. This integration can lead to better customer engagement, higher conversion rates, and ultimately, a more successful e-commerce venture. As the field of AI continues to advance, businesses can expect even greater benefits from incorporating this cutting-edge technology into their operations.

In this article, we have discussed various artificial intelligence projects that are revolutionising the e-commerce industry. From AutoGPT's capabilities in automation and development to ChatGPT's enhanced customer support, these AI solutions offer a wide range of benefits for businesses in the online retail space.

As we look towards the future of AI in e-commerce, it is evident that its applications will continue to expand and evolve. Businesses that embrace these cutting-edge technologies will undoubtedly stand to gain a competitive edge and enjoy continued growth and success.

Encouraging businesses to adopt AI solutions is not only essential for their own progress but also for the overall advancement of the e-commerce industry. By staying informed about the latest AI projects and understanding their potential applications, companies can make informed decisions about implementing these technologies and shaping their strategies for the future.

Unlock the potential of artificial intelligence for your e-commerce business by exploring the mentioned AI projects and their applications. Delve into the innovative technologies that are revolutionising e-commerce operations.

Stay ahead of the curve and enhance your e-commerce strategies with the power of AI.

Read more:

10 Artificial Intelligence Projects Revolutionising E-commerce - AutoGPT

AI, poverty, hunters and hellos among our top global topics of the year : Goats and Soda – NPR

Images from some of our most popular global stories of 2023 (left to right): A woman from Brazil's Awa people holds her bow and arrow after a hunt; an artificial intelligence program made this fake photo to fulfill a request for "doctors help children in Africa" (the AI added the giraffe); researchers are learning that a stranger's hello can do more than just brighten your day. Scott Wallace/Getty Images; Midjourney Bot Version 5.1, annotation by NPR; David Rowland/AP

We did a lot of coverage of viruses this year (see this post) but other stories went viral as well.

The posts with the most pageviews tackled a diverse array of topics. New research upends hunter/gatherer gender stereotypes. Preliminary results from a study in Kenya on how to help people who are poor show the power of handing over cash, and a lump sum seems more effective than a monthly payout. And psychologists are finding that when a stranger gives a greeting, it's not just an empty gesture.

Here are our most popular stories (not about viruses) from 2023.

It's one of the biggest experiments in fighting global poverty. Now the results are in

The study focuses on a universal basic income and spans 12 years and thousands of people in Kenya. How did the money change lives? What's better: monthly payouts or a lump sum? Published December 7, 2023.

Men are hunters, women are gatherers. That was the assumption. A new study upends it

For decades, scientists have believed that early humans had a division of labor: Men generally did the hunting and women did the gathering. And this view hasn't been limited to academics. Now a new study suggests the vision of early men as the exclusive hunters is simply wrong and that evidence that early women were also hunting has been there all along. July 1, 2023.

It's one of the world's toughest anti-smoking laws. The Māori see a major flaw

New Zealand has declared war on tobacco with a remarkable new law. The indigenous Māori population, with the country's highest smoking rate, has a lot to gain. But they have a bone of contention. October 1, 2023. (Editor's note: New Zealand's new conservative government has vowed to repeal the anti-smoking law; we covered that development as well.)

Why a stranger's hello can do more than just brighten your day

Just saying "hello" to a passerby can be a boon for both of you. As researchers explore the impact of interactions with strangers and casual acquaintances, they're shedding light on how seemingly fleeting conversations affect your happiness and well-being. August 23, 2023.

AI was asked to create images of Black African docs treating white kids. How'd it go?

Researchers were curious if artificial intelligence could fulfill the order. Or would built-in biases short-circuit the request? Let's see what an image generator came up with. October 6, 2023.

MacKenzie Scott is shaking up philanthropy's traditions. Is that a good thing?

On December 14, 2022, billionaire philanthropist and novelist MacKenzie Scott announced that her donations since 2019 have totaled more than $14 billion and helped fund around 1,600 nonprofits. But as much as the scale, it is the style of giving that is causing a stir; it's targeted at a wide spectrum of causes, without a formal application process and, it appears, no strings attached. January 10, 2023.

This is not a joke: Chinese people are eating and poking fun at #whitepeoplefood

The playful term is trending on social media: Urban workers are embracing (even while joking about) easy-to-fix, healthy Western-style lunches. Think sandwiches, veggies ... a lonely baked potato. July 10, 2023.

Why Data Annotation Remains a Human Domain: The Boundaries of Artificial Intelligence – Medium

The Unmatched Complexity of Context

Photo by julien Tromeur on Unsplash

Artificial Intelligence (AI) has undeniably revolutionized the way we interact with technology and process vast amounts of information.

From self-driving cars to virtual assistants, the scope of AI's capabilities seems limitless.

However, amidst this wave of technological advancement, there is a crucial question that often goes unnoticed: Can AI truly replace the human touch in data annotation?

As someone who has dipped their toes into this complex world, I can assure you that data annotation remains, and will likely always remain, a human domain.

In this blog post, we will explore the reasons behind this assertion, delve into the intricate boundaries of artificial intelligence, and reflect on personal experiences that illuminate the essence of human involvement in data annotation.

Data annotation is more than just labeling images, text, or audio; it involves deciphering context, nuance, and the subtleties that are inherent to human communication. While AI algorithms have made incredible strides in understanding language and visual data, they still struggle to grasp the intricacies of context.

For instance, consider the sentence, "She plays a mean guitar." To a human, it's evident that "mean" in this context means "exceptionally skilled."

However, an AI might misinterpret it as derogatory, missing the nuance completely. This illustrates the limits of AI when it comes to understanding the richness of human language.
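The failure mode is easy to reproduce with a deliberately simple scorer. The toy lexicon-based classifier below, purely illustrative and far cruder than any modern model, assigns each word a fixed polarity with no sense of context, which is exactly how "mean" ends up counted as negative.

```python
# Toy lexicon-based sentiment scorer, illustrative only: each word gets a
# fixed polarity, so context-dependent words like "mean" are always
# scored the same way.

NEGATIVE = {"mean", "bad", "awful"}
POSITIVE = {"great", "skilled", "brilliant"}

def naive_sentiment(sentence: str) -> int:
    """Sum word polarities; positive result = positive sentiment."""
    score = 0
    for raw in sentence.lower().split():
        word = raw.strip('.,!?"')  # drop trailing punctuation
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score
```

Run on the compliment "She plays a mean guitar." the scorer returns a negative value, misreading admiration as insult, while a human annotator would label it positive without hesitation.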

One of the most fascinating aspects of data annotation is the dance between subjectivity and interpretation. When humans annotate data, their unique perspectives and cultural backgrounds come into play. This subjectivity can be a double-edged sword, as it introduces biases, but it also adds depth and authenticity to the annotations. In contrast, AI algorithms strive for objectivity, which might seem like a noble pursuit. Still, it

Why AI struggles to predict the future : Short Wave – NPR

Muharrem Huner/Getty Images

Artificial intelligence is increasingly being used to predict the future. Banks use it to predict whether customers will pay back a loan, hospitals use it to predict which patients are at greatest risk of disease and auto insurance companies use it to determine insurance rates by predicting how likely a customer is to get in an accident.

"Algorithms have been claimed to be these silver bullets, which can solve a lot of societal problems," says Sayash Kapoor, a researcher and PhD candidate at Princeton University's Center for Information Technology Policy. "And so it might not even seem like it's possible that algorithms can go so horribly awry when they're deployed in the real world."

But they do.

Issues like data leakage and sampling bias can cause AI to give faulty predictions, to sometimes disastrous effects.

Kapoor points to high stakes examples: One algorithm falsely accused tens of thousands of Dutch parents of fraud; another purportedly predicted which hospital patients were at high risk of sepsis, but was prone to raising false alarms and missing cases.
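A hypothetical toy example, loosely inspired by the sepsis case, makes the leakage mechanism concrete. The data and thresholds below are invented for illustration and are not Kapoor's analysis: the "antibiotics_given" field is recorded only after clinicians already suspect sepsis, so a model that reads it looks perfect in a backtest yet has nothing to go on at prediction time.

```python
# Hypothetical illustration of target leakage. "antibiotics_given" is a
# proxy for the label itself, so it inflates backtest accuracy; the data
# is invented for illustration.

records = [
    # (white_cell_count, antibiotics_given, has_sepsis)
    (11.0, 1, 1), (6.0, 0, 0), (12.5, 1, 1), (7.0, 0, 0),
    (10.5, 0, 0), (6.5, 1, 1), (12.0, 0, 0), (8.0, 1, 1),
]

def accuracy(predict) -> float:
    """Fraction of records the rule classifies correctly."""
    hits = sum(predict(wbc, abx) == label for wbc, abx, label in records)
    return hits / len(records)

# "Model" 1 reads the leaked feature: flawless in the backtest,
# useless in deployment, where the feature does not yet exist.
leaky = lambda wbc, abx: abx

# "Model" 2 uses only information available before diagnosis.
honest = lambda wbc, abx: 1 if wbc > 9.0 else 0
```

On this invented data the leaky rule scores 100% while the honest rule hovers at chance, which is precisely the gap between a paper's reported accuracy and real-world performance that Kapoor describes.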

After digging through tens of thousands of lines of machine learning code in journal articles, he's found examples abound in scientific research as well.

"We've seen this happen across fields in hundreds of papers," he says. "Often, machine learning is enough to publish a paper, but that paper does not often translate to better real world advances in scientific fields."

Kapoor is co-writing a blog and book project called AI Snake Oil.

Want to hear more of the latest research on AI? Email us at shortwave@npr.org and we might answer your question on a future episode!

Listen to Short Wave on Spotify, Apple Podcasts and Google Podcasts.

This episode was produced by Berly McCoy and edited by Rebecca Ramirez. Brit Hanson checked the facts. Maggie Luthar was the audio engineer.

Artificial Intelligence in Natural Hazard Modeling: Severe Storms, Hurricanes, Floods, and Wildfires – Government Accountability Office

What GAO Found

GAO found that machine learning, a type of artificial intelligence (AI) that uses algorithms to identify patterns in information, is being applied to forecasting models for natural hazards such as severe storms, hurricanes, floods, and wildfires, which can lead to natural disasters. A few machine learning models are used operationally, in routine forecasting, such as one that may improve the warning time for severe storms. Some uses of machine learning are considered close to operational, while others require years of development and testing.

GAO identified potential benefits of applying machine learning to this field, including:

Forecasting natural disasters using machine learning

GAO also identified challenges to the use of machine learning. For example:

GAO identified five policy options that could help address these challenges. These options are intended to inform policymakers, including Congress, federal and state agencies, academic and research institutions, and industry of potential policy implementations. The status quo option illustrates a scenario in which government policymakers take no additional actions beyond current ongoing efforts.

Policy Options to Help Address Challenges to the Use of Machine Learning in Natural Hazard Modeling

Data and infrastructure: Government policymakers could expand use of existing observational data and infrastructure to close gaps, expand access to certain data, and (in conjunction with other policymakers) establish guidelines for making data AI-ready.

Education: Government policymakers could update education requirements to include machine learning-related coursework and expand learning and support centers, while academic policymakers could adjust physical science curricula to include more machine learning coursework.

Workforce: Government policymakers could address pay scale limitations for positions that include machine learning expertise and work with private sector policymakers to expand the use of public-private partnerships (PPP).

Bias: Policymakers could establish efforts to better understand and mitigate various forms of bias, support inclusion of diverse stakeholders for machine learning models, and develop guidelines or best practices for reporting methodological choices.

Status quo: Government policymakers could maintain existing policy efforts and organizational structures, along with existing strategic plans and agency commitments.

Source: GAO. | GAO-24-106213

Natural disasters cause on average hundreds of deaths and billions of dollars in damage in the U.S. each year. Forecasting natural disasters relies on computer modeling and is important for preparedness and response, which can in turn save lives and protect property. AI is a powerful tool that can automate processes, rapidly analyze massive data sets, enable modelers to gain new insights, and boost efficiency.

This report on the use of machine learning in natural hazard modeling discusses (1) the emerging and current use of machine learning for modeling severe storms, hurricanes, floods, and wildfires, and the potential benefits of this use; (2) challenges surrounding the use of machine learning; and (3) policy options to address challenges or enhance benefits of the use of machine learning.

GAO reviewed the use of machine learning to model severe storms, hurricanes, floods, and wildfires across development and operational stages; interviewed a range of stakeholder groups, including government, industry, academia, and professional organizations; convened a meeting of experts in conjunction with the National Academies; and reviewed key reports and scientific literature. GAO is identifying policy options in this report.

For more information, contact Brian Bothwell at (202) 512-6888 or bothwellb@gao.gov.

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say – Reuters

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol in Washington, U.S., September 13, 2023. REUTERS/Julia Nikhinson/File Photo

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math where there is only one right answer implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew investment - and computing resources - necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker

Anna Tong is a correspondent for Reuters based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Tong previously worked at technology startups as a product manager and at Google, where she worked in user insights and helped run a call center. Tong graduated from Harvard University. Contact: 415-237-3211

Jeffrey Dastin is a correspondent for Reuters based in San Francisco, where he reports on the technology industry and artificial intelligence. He joined Reuters in 2014, originally writing about airlines and travel from the New York bureau. Dastin graduated from Yale University with a degree in history. He was part of a team that examined lobbying by Amazon.com around the world, for which he won a SOPA Award in 2022.

Krystal reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, with a focus on growth-stage startups, tech investments and AI. She has previously covered M&A for Reuters, breaking stories on Trump's SPAC and Elon Musk's Twitter financing. Previously, she reported on Amazon for Yahoo Finance, and her investigation of the company's retail practice was cited by lawmakers in Congress. Krystal started a career in journalism by writing about tech and politics in China. She has a master's degree from New York University, and enjoys a scoop of Matcha ice cream as much as getting a scoop at work.

Artificial Intelligence: Actions Needed to Improve DOD’s Workforce Management – Government Accountability Office

Fast Facts

The Department of Defense has invested billions of dollars to integrate artificial intelligence into its operations. This includes analyzing intelligence, surveillance, and reconnaissance data, and operating deadly autonomous weapon systems.

We found, however, that DOD can't fully identify who is part of its AI workforce or which positions require personnel with AI skills. As a result, DOD can't effectively assess the state of its AI workforce or forecast future AI workforce needs.

We made 3 recommendations, including that DOD establish a timeline for completing the steps needed to define and identify its AI workforce.

The Department of Defense (DOD) typically establishes standard definitions of its workforces to make decisions about which personnel are to be included in that workforce, and identifies its workforces by coding them in its data systems. DOD has taken steps to begin to identify its artificial intelligence (AI) workforce, but has not assigned responsibility and does not have a timeline for completing additional steps to fully define and identify this workforce. DOD developed AI work roles, the specialized sets of tasks and functions requiring specific knowledge, skills, and abilities. DOD also identified some military and civilian occupations, such as computer scientists, that conduct AI work. However, DOD has not assigned responsibility to the organizations necessary to complete the additional steps required to define and identify its AI workforce, such as coding the work roles in various workforce data systems, developing a qualification program, and updating workforce guidance. DOD also does not have a timeline for completing these additional steps. Assigning responsibility and establishing a timeline for completion of the additional steps would enable DOD to more effectively assess the state of its AI workforce and be better prepared to forecast future workforce requirements (see figure).

Questions DOD Cannot Answer Until It Fully Defines and Identifies Its AI Workforce

DOD's plans and strategies address some AI workforce issues, but are not fully consistent with each other. Federal regulation and guidance state that an agency's Human Capital Operating Plan should support the execution of its Strategic Plan. However, DOD's Human Capital Operating Plan does not consistently address the human capital implementation actions for AI workforce issues described in DOD's Strategic Plan. DOD also uses inconsistent terms when addressing AI workforce issues, which could hinder a shared understanding within DOD. The military services are also developing component-level human capital plans that encompass AI and will cascade from the higher-level plans. Updating DOD's Human Capital Operating Plan to be consistent with other strategic documents would better guide DOD components' planning efforts and support actions necessary for achieving the department's strategic goals and objectives related to its AI workforce.

DOD has invested billions of dollars to integrate AI into its warfighting operations. This includes analyzing intelligence, surveillance, and reconnaissance data, and operating lethal autonomous weapon systems. DOD identified cultivating a workforce with AI expertise as a strategic focus area in 2018. However, in 2021 the National Security Commission on Artificial Intelligence concluded that DOD's AI talent deficit is one of the greatest impediments to the U.S. being AI-ready by the Commission's target date of 2025.

House Report 117-118, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2022, includes a provision for GAO to review DOD's AI workforce. This report evaluates the extent to which DOD has (1) defined and identified its AI workforce and (2) established plans and strategies to address AI workforce issues, among other objectives. GAO assessed DOD strategies and plans, reviewed laws and guidance that outline requirements for managing an AI workforce, and interviewed officials.

The OpenAI Drama Has a Clear Winner: The Capitalists – The New York Times

What happened at OpenAI over the past five days could be described in many ways: A juicy boardroom drama, a tug of war over one of America's biggest start-ups, a clash between those who want A.I. to progress faster and those who want to slow it down.

But it was, most importantly, a fight between two dueling visions of artificial intelligence.

In one vision, A.I. is a transformative new tool, the latest in a line of world-changing innovations that includes the steam engine, electricity and the personal computer, and that, if put to the right uses, could usher in a new era of prosperity and make gobs of money for the businesses that harness its potential.

In another vision, A.I. is something closer to an alien life form: a leviathan being summoned from the mathematical depths of neural networks that must be restrained and deployed with extreme caution in order to prevent it from taking over and killing us all.

With the return of Sam Altman on Tuesday to OpenAI, the company whose board fired him as chief executive last Friday, the battle between these two views appears to be over.

Team Capitalism won. Team Leviathan lost.

OpenAI's new board will consist of three people, at least initially: Adam D'Angelo, the chief executive of Quora (and the only holdover from the old board); Bret Taylor, a former executive at Facebook and Salesforce; and Lawrence H. Summers, the former Treasury secretary. The board is expected to grow from there.

OpenAI's largest investor, Microsoft, is also expected to have a larger voice in OpenAI's governance going forward. That may include a board seat.

Gone from the board are three of the members who pushed for Mr. Altman's ouster: Ilya Sutskever, OpenAI's chief scientist (who has since recanted his decision); Helen Toner, a director of strategy at Georgetown University's Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and researcher at the RAND Corporation.

Mr. Sutskever, Ms. Toner and Ms. McCauley are representative of the kinds of people who were heavily involved in thinking about A.I. a decade ago: an eclectic mix of academics, Silicon Valley futurists and computer scientists. They viewed the technology with a mix of fear and awe, and worried about theoretical future events like the singularity, a point at which A.I. would outstrip our ability to contain it. Many were affiliated with philosophical groups like the Effective Altruists, a movement that uses data and rationality to make moral decisions, and were persuaded to work in A.I. out of a desire to minimize the technology's destructive effects.

This was the vibe around A.I. in 2015, when OpenAI was formed as a nonprofit, and it helps explain why the organization kept its convoluted governance structure which gave the nonprofit board the ability to control the companys operations and replace its leadership even after it started a for-profit arm in 2019. At the time, protecting A.I. from the forces of capitalism was viewed by many in the industry as a top priority, one that needed to be enshrined in corporate bylaws and charter documents.

But a lot has changed since 2019. Powerful A.I. is no longer just a thought experiment it exists inside real products, like ChatGPT, that are used by millions of people every day. The worlds biggest tech companies are racing to build even more powerful systems. And billions of dollars are being spent to build and deploy A.I. inside businesses, with the hope of reducing labor costs and increasing productivity.

The new board members are the kinds of business leaders youd expect to oversee such a project. Mr. Taylor, the new board chair, is a seasoned Silicon Valley deal maker who led the sale of Twitter to Elon Musk last year, when he was the chair of Twitters board. And Mr. Summers is the Ur-capitalist a prominent economist who has said that he believes technological change is net good for society.

There may still be voices of caution on the reconstituted OpenAI board, or figures from the A.I. safety movement. But they wont have veto power, or the ability to effectively shut down the company in an instant, the way the old board did. And their preferences will be balanced alongside others, such as those of the companys executives and investors.

Thats a good thing if youre Microsoft, or any of the thousands of other businesses that rely on OpenAIs technology. More traditional governance means less risk of a sudden explosion, or a change that would force you to switch A.I. providers in a hurry.

And perhaps what happened at OpenAI a triumph of corporate interests over worries about the future was inevitable, given A.I.s increasing importance. A technology potentially capable of ushering in a Fourth Industrial Revolution was unlikely to be governed over the long term by those who wanted to slow it down not when so much money was at stake.

There are still a few traces of the old attitudes in the A.I. industry. Anthropic, a rival company started by a group of former OpenAI employees, has set itself up as a public benefit corporation, a legal structure that is meant to insulate it from market pressures. And an active open-source A.I. movement has advocated that A.I. remain free of corporate control.

But these are best viewed as the last vestiges of the old era of A.I., in which the people building A.I. regarded the technology with both wonder and terror, and sought to restrain its power through organizational governance.

Now, the utopians are in the driver's seat. Full speed ahead.

Read the original:

The OpenAI Drama Has a Clear Winner: The Capitalists - The New York Times

OpenAI’s Board Set Back the Promise of Artificial Intelligence – The Information

I was the first venture investor in OpenAI. The weekend drama illustrated my contention that the wrong boards can damage companies. Fancy titles like Director of Strategy at Georgetown's Center for Security and Emerging Technology can lead to a false sense of understanding of the complex process of entrepreneurial innovation. OpenAI's board members' religion of effective altruism, and its misapplication, could have set back the world's path to the tremendous benefits of artificial intelligence. Imagine free doctors for everyone and near-free tutors for every child on the planet. That's what's at stake with the promise of AI.

The best companies are those whose visions are led and executed by their founding entrepreneurs, the people who put everything on the line to challenge the status quo, founders like Sam Altman who face risk head on and who are focused, so totally, on making the world a better place. Things can go wrong, and abuse happens, but the benefits of good founders far outweigh the risks of bad ones.

View post:

OpenAI's Board Set Back the Promise of Artificial Intelligence - The Information

The Future of AI: What to Expect in the Next 5 Years – TechTarget

For the first half of the 20th century, the concept of artificial intelligence held meaning almost exclusively for science fiction fans. In literature and cinema, androids, sentient machines and other forms of AI sat at the center of many of science fiction's high-water marks -- from Metropolis to I, Robot. In the second half of the last century, scientists and technologists began earnestly attempting to realize AI.

At the 1956 Dartmouth Summer Research Project on Artificial Intelligence, co-host John McCarthy introduced the phrase artificial intelligence and helped incubate an organized community of AI researchers.

Often AI hype outpaced the actual capacities of anything those researchers could create. But in the last moments of the 20th century, significant AI advances started to rattle society at large. When IBM's Deep Blue defeated grandmaster Garry Kasparov, the game's reigning world champion, the event seemed to signal not only a historic and singular defeat in chess history -- the first time that a computer had beaten a top player -- but also that a threshold had been crossed. Thinking machines had left the realm of sci-fi and entered the real world.

The era of big data and the exponential growth of computational power in accord with Moore's Law has subsequently enabled AI to sift through gargantuan amounts of data and learn how to accomplish tasks that had previously been accomplished only by humans.

The effects of this machine renaissance have permeated society: Voice recognition devices such as Alexa, recommendation engines like those used by Netflix to suggest which movie you should watch next based on your viewing history, and the modest steps taken by driverless cars and other autonomous vehicles are emblematic. But the next five years of AI development will likely lead to major societal changes that go well beyond what we've seen to date.

Speed of life. The most obvious change that many people will feel across society is an increase in the tempo of engagements with large institutions. Any organization that engages regularly with large numbers of users -- businesses, government units, nonprofits -- will be compelled to implement AI in the decision-making processes and in their public- and consumer-facing activities. AI will allow these organizations to make most of the decisions much more quickly. As a result, we will all feel life speeding up.

End of privacy. Society will also see its ethical commitments tested by powerful AI systems, especially privacy. AI systems will likely become much more knowledgeable about each of us than we are about ourselves. Our commitment to protecting privacy has already been severely tested by emerging technologies over the last 50 years. As the cost of peering deeply into our personal data drops and more powerful algorithms capable of assessing massive amounts of data become more widespread, we will probably find that it was a technological barrier more than an ethical commitment that led society to enshrine privacy.

Thicket of AI law. We can also expect the regulatory environment to become much trickier for organizations using AI. Presently all across the planet, governments at every level, local to national to transnational, are seeking to regulate the deployment of AI. In the U.S. alone, we can expect an AI law thicket as city, state and federal government units draft, implement and begin to enforce new AI laws. And the European Union will almost certainly implement its long-awaited AI regulation within the next six to 12 business quarters. The legal complexity of doing business will grow considerably in the next five years as a result.

Human-AI teaming. Much of society will expect businesses and government to use AI as an augmentation of human intelligence and expertise, or as a partner, to one or more humans working toward a goal, as opposed to using it to displace human workers. One of the effects of artificial intelligence having been born as an idea in century-old science fiction tales is that the tropes of the genre, chief among them dramatic depictions of artificial intelligence as an existential threat to humans, are buried deep in our collective psyche. Human-AI teaming, or keeping humans in any process that is being substantially influenced by artificial intelligence, will be key to managing the resultant fear of AI that permeates society.

The following industries will be affected most by AI:

The notion that AI poses an existential risk to humans has existed almost as long as the concept of AI itself. But in the last two years, as generative AI has become a hot topic of public discussion and debate, fear of AI has taken on newer undertones.

Arguably the most realistic form of this AI anxiety is a fear of human societies losing control to AI-enabled systems. We can already see this happening voluntarily in use cases such as algorithmic trading in the finance industry. The whole point of such implementations is to exploit the capacities of synthetic minds to operate at speeds that outpace the quickest human brains by many orders of magnitude.

However, the existential threats that have been posited by Elon Musk, Geoffrey Hinton and other AI pioneers seem at best like science fiction, and much less hopeful than much of the AI fiction created 100 years ago.

The more likely long-term risk of AI anxiety in the present is missed opportunities. To the extent that organizations in this moment might take these claims seriously and underinvest based on those fears, human societies will miss out on significant efficiency gains, potential innovations that flow from human-AI teaming, and possibly even new forms of technological innovation, scientific knowledge production and other modes of societal innovation that powerful AI systems can indirectly catalyze.

Michael Bennett is director of educational curriculum and business lead for responsible AI in The Institute for Experiential Artificial Intelligence at Northeastern University in Boston. Previously, he served as Discovery Partners Institute's director of student experiential immersion learning programs at the University of Illinois. He holds a J.D. from Harvard Law School.

View post:

The Future of AI: What to Expect in the Next 5 Years - TechTarget

OpenAI staff reportedly warned the board about an AI breakthrough that could threaten humanity before Sam Altman … – Fortune

A potential breakthrough in the field of artificial intelligence may have contributed to Sam Altman's recent ouster as CEO of OpenAI.

According to a Reuters report citing two sources acquainted with the matter, several staff researchers wrote a letter to the organization's board warning of a discovery that could potentially threaten the human race.

The two anonymous individuals claim this letter, which informed directors that a secret project named Q* resulted in A.I. solving grade-school-level mathematics, reignited tensions over whether Altman was proceeding too fast in a bid to commercialize the technology.

Just a day before he was sacked, Altman may have referenced Q* (pronounced Q-star) at a summit of world leaders in San Francisco when he spoke of what he believed was a recent breakthrough.

"Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward," said Altman at a discussion during the Asia-Pacific Economic Cooperation summit.

He has since been reinstated as CEO in a spectacular reversal of events after staff threatened to mutiny against the board.

According to one of the sources, after being contacted by Reuters, OpenAI's chief technology officer Mira Murati acknowledged in an internal memo to employees the existence of the Q* project as well as a letter that had been sent to the board.

OpenAI could not be reached immediately by Fortune for a statement, but it declined to provide a comment to Reuters.

So why is all of this special, let alone alarming?

Machines have been solving mathematical problems for decades, going back to the pocket calculator.

The difference is that conventional devices were designed to arrive at a single answer through a series of deterministic commands, the kind all personal computers employ, in which values can only be true or false, 0 or 1.

Under this rigid binary system, machines have no capability to diverge from their programming in order to think creatively.

By comparison, neural nets are not hard-coded to execute certain commands in a specific way. Instead, they are trained, much as a human brain is, with massive sets of interrelated data, giving them the ability to identify patterns and infer outcomes.

Think of Google's helpful Autocomplete function, which aims to predict what an internet user is searching for using statistical probability; this is a very rudimentary form of generative AI.

That's why Meredith Whittaker, a leading expert in the field, describes neural nets like ChatGPT as probabilistic engines designed to spit out what seems plausible.
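The "next word by statistical probability" idea can be illustrated with a toy next-word predictor built from bigram counts. This is a minimal sketch, not how ChatGPT actually works (modern LLMs use neural networks over vast corpora); the tiny corpus here is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real system trains on billions of sentences.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most probable next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # prints "cat": it follows "the" more often than "mat" or "fish"
```

The predictor has no understanding of cats or mats; it simply emits whichever continuation was most frequent in its training data, which is the essence of the "plausibility engine" critique.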

Should generative A.I. prove able to arrive at the correct solution to mathematical problems on its own, it suggests a capacity for higher reasoning.

This could potentially be the first step towards developing artificial general intelligence, a form of AI that can surpass humans.

The fear is that an AGI needs guardrails since it one day might come to view humanity as a threat to its existence.

See the article here:

OpenAI staff reportedly warned the board about an AI breakthrough that could threaten humanity before Sam Altman ... - Fortune

First international benchmark of artificial intelligence and machine … – Nuclear Energy Agency

Recent performance breakthroughs in artificial intelligence (AI) and machine learning (ML) have led to unprecedented interest among nuclear engineers. Despite the progress, the lack of dedicated benchmark exercises for the application of AI and ML techniques in nuclear engineering analyses limits their applicability and broader usage. In line with the NEA strategic target to contribute to building a solid scientific and technical basis for the development of future-generation nuclear systems and deployment of innovations, the Task Force on Artificial Intelligence and Machine Learning for Scientific Computing in Nuclear Engineering was established within the Expert Group on Reactor Systems Multi-Physics (EGMUP) of the Nuclear Science Committee's Working Party on Scientific Issues and Uncertainty Analysis of Reactor Systems (WPRS). The Task Force will focus on designing benchmark exercises that will target important AI and ML activities, and cover various computational domains of interest, from single physics to multi-scale and multi-physics.

A significant milestone has been reached with the successful launch of a first comprehensive benchmark of AI and ML to predict the Critical Heat Flux (CHF). This CHF corresponds in a boiling system to the limit beyond which wall heat transfer decreases significantly, which is often referred to as critical boiling transition, boiling crisis and (depending on operating conditions) departure from nucleate boiling (DNB), or dryout. In a heat transfer-controlled system, such as a nuclear reactor core, CHF can result in a significant wall temperature increase leading to accelerated wall oxidation, and potentially to fuel rod failure. While constituting an important design limit criterion for the safe operation of reactors, CHF is challenging to predict accurately due to the complexities of the local fluid flow and heat exchange dynamics.

Current CHF models are mainly based on empirical correlations developed and validated for a specific application case domain. Through this benchmark, improvements in the CHF modelling are sought using AI and ML methods directly leveraging a comprehensive experimental database provided by the US Nuclear Regulatory Commission (NRC), forming the cornerstone of this benchmark exercise. The improved modelling can lead to a better understanding of the safety margins and provide new opportunities for design or operational optimisations.
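As a toy illustration of the data-driven approach described above (not the benchmark's actual methodology), a model can interpolate CHF from nearby experimental points instead of applying a fixed empirical correlation. The feature names, values and scaling below are invented for the sketch; a real study would use the NRC database and far more sophisticated ML models:

```python
import math

# Hypothetical experimental records: (pressure [MPa], mass flux [kg/m^2 s],
# inlet subcooling [kJ/kg]) -> measured CHF [kW/m^2]. Values are illustrative only.
data = [
    ((7.0, 1000.0, 100.0), 2500.0),
    ((7.0, 2000.0, 100.0), 3400.0),
    ((10.0, 1000.0, 150.0), 2200.0),
    ((10.0, 2000.0, 150.0), 3100.0),
    ((15.0, 1500.0, 200.0), 2600.0),
]

def predict_chf(query, k=2):
    """k-nearest-neighbour estimate: average the measured CHF of the k
    experimental points closest to the query conditions (features scaled
    so that each contributes comparably to the distance)."""
    scales = (15.0, 2000.0, 200.0)  # rough feature magnitudes for scaling
    def dist(x):
        return math.sqrt(sum(((a - b) / s) ** 2 for a, b, s in zip(x, query, scales)))
    nearest = sorted(data, key=lambda rec: dist(rec[0]))[:k]
    return sum(chf for _, chf in nearest) / k

print(predict_chf((7.0, 1500.0, 100.0)))  # prints 2950.0
```

The appeal of learning directly from data, as the benchmark intends, is that the model's validity follows the experimental envelope rather than the assumptions baked into a correlation.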

The CHF benchmark phase 1 kick-off meeting on 30 October 2023 gathered 78 participants, representing 48 institutions from 16 countries. This robust engagement underscores the profound interest and commitment within the global scientific community toward integrating AI and ML technologies into nuclear engineering. The ultimate goal of the Task Force is to leverage insights from the benchmarks and distill lessons learnt to provide guidelines for future AI and ML applications in scientific computing in nuclear engineering.


Read the original:

First international benchmark of artificial intelligence and machine ... - Nuclear Energy Agency

What the OpenAI drama means for AI progress and safety – Nature.com

OpenAI fired its charismatic chief executive, Sam Altman, on 17 November but has now reinstated him. (Credit: Justin Sullivan/Getty)

OpenAI, the company behind the blockbuster artificial intelligence (AI) bot ChatGPT, has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board.

The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely.

"The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.

Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman's initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman's return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.

The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he "was not consistently candid in his communications with the board" and later adding that the decision had nothing to do with "malfeasance or anything related to our financial, business, safety or security/privacy practice".

But some speculate that the firing might have its origins in a reported schism at OpenAI between those focused on commercial growth and those uncomfortable with the strain of rapid development and its possible impacts on the company's mission to ensure that artificial general intelligence benefits all of humanity.

OpenAI, which is based in San Francisco, California, was founded in 2015 as a non-profit organization. In 2019, it shifted to an unusual capped-profit model, with a board explicitly not accountable to shareholders or investors, including Microsoft. "In the background of Altman's firing is very clearly a conflict between the non-profit and the capped-profit; a conflict of culture and aims," says Jathan Sadowski, a social scientist of technology at Monash University in Melbourne, Australia.

Ilya Sutskever, OpenAI's chief scientist and a member of the board that ousted Altman, this July shifted his focus to superalignment, a four-year project attempting to ensure that future superintelligences work for the good of humanity.

It's unclear whether Altman and Sutskever are at odds about speed of development: after the board fired Altman, Sutskever expressed regret about the impacts of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.

With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University's Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.

It seems likely that OpenAI will shift further from its non-profit origins, says Sadowski, restructuring as a classic profit-driven Silicon Valley tech company.

OpenAI released ChatGPT almost a year ago, catapulting the company to worldwide fame. The bot was based on the company's GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that have emerged from this technique (including what some see as logical reasoning) has astounded and worried scientists and the general public alike.

OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be detrimental for society.

The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead. Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.

West notes that these start-ups rely heavily on the vast and expensive computing resources provided by just three companies (Google, Microsoft and Amazon), potentially creating a race for dominance among these controlling giants.

Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. "If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes," he says. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)

OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI): a deep-learning system that's trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. "The jury is very much out on that front," says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on a timescale of 30, 50 or maybe 100 years. "Right now, I think we'll probably get it in 5 to 20 years," he says.

The imminent dangers of AI are related to it being used as a tool by human bad actors, people who use it to, for example, create misinformation, commit scams or, potentially, invent new bioterrorism weapons1. And because today's AI systems work by finding patterns in existing data, they also tend to reinforce historical biases and social injustices, says West.

In the long term, Hinton and others worry about an AI system itself becoming a bad actor, developing sufficient agency to guide world events in a negative direction. This could arise even if an AGI was designed in line with OpenAI's superalignment mission to promote humanity's best interests, says Hinton. It might decide, for example, that the weight of human suffering is so vast that it would be better for humanity to die than to face further misery. Such statements sound like science fiction, but Hinton argues that the existential threat of an AI that can't be turned off and veers onto a destructive path is very real.

The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work together on the problem, although what exactly they will do remains unclear.

West emphasizes that it's important to focus on already-present threats from AI ahead of far-flung concerns, and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power, something she thinks needs more scrutiny from anti-trust regulators. "Regulators for a very long time have taken a very light touch with this market," says West. "We need to start by enforcing the laws we have right now."

Continued here:

What the OpenAI drama means for AI progress and safety - Nature.com

Live chat: A new writing course for the age of artificial intelligence – Yale News

How is academia dealing with the influence of AI on student writing? Just ask ChatGPT, and it'll deliver a list of 10 ways in which the rapidly expanding technology is creating both opportunities and challenges for faculty everywhere.

On the one hand, for example, while there are ethical concerns about AI compromising students' academic integrity, there is also growing awareness of the ways in which AI tools might actually support students in their research and writing.

Students in "Writing Essays with AI," a new English seminar taught by Yale's Ben Glaser, are exploring the many ways in which the expanding number of AI tools are influencing written expression, and how they might help or harm their own development as writers.

"We talk about how large language models are already and will continue to be quite transformative," Glaser said, "not just of college writing but of communication in general."

An associate professor of English in Yale's Faculty of Arts and Sciences, Glaser sat down with Yale News to talk about the need for AI literacy, ChatGPT's love of lists, and how the generative chatbot helped him write the course syllabus.

Ben Glaser: It's more the former. None of the final written work for the class is written with ChatGPT or any other large language model or chatbot, although we talk about using AI research tools like Elicit and other things in the research process. Some of the small assignments directly call for students to engage with ChatGPT, get outputs, and then reflect on it. And in that process, they learn how to correctly cite ChatGPT.

The Poorvu Center for Teaching and Learning has a pretty useful page with AI guidelines. As part of this class, we read that website and talked about whether those guidelines seem to match students' own experience of usage and what their friends are doing.

Glaser: I don't get the sense that they are confused about it in my class because we talk about it all the time. These are students who simultaneously want to understand the technology better, maybe go into that field, and they also want to learn how to write. They don't think they're going to learn how to write by using those AI tools better. But they want to think about it.

That's a very optimistic take, but I think that Yale makes that possible through the resources it has for writing help, and students are often directed to those resources. If you're in a class where the writing has many stages (drafting, revision), it's hard to imagine where ChatGPT is going to give you anything good, partly because you're going to have to revise it so much.

That said, it's a totally different world if you're in high school or a large university without those resources. And then of course there are situations that have always led to plagiarism, where you're strung out at the last minute and you copy something from Google.

Glaser: First of all, it's a really interesting thing to study. That's not what you're asking; you're asking what it can do, or where it belongs in a writing process. But when you talk to a chatbot, you get this fuzzy, weird image of culture back. You might get counterpoints to your ideas, and then you need to evaluate whether those counterpoints or supporting evidence for your ideas are actually good ones. There's no understanding behind the model. It's based on statistical probabilities; it's guessing which word comes next. It sometimes does so in a way that speeds things along.

If you say, give me some points and counterpoints in, say, AI use in second-language learning, it might spit out 10 good things and 10 bad things. It loves to give lists. And there's a kind of literacy to reading those outputs. Students in this class are gaining some of that literacy.

Glaser: I don't love the word brainstorming, but I think there is a moment where you have a blank page, and you think you have a topic, and the process of refining that involves research. ChatGPT's not the most wonderful research tool, but it sure is an easy one.

I asked it to write the syllabus for this course initially. What it did was it helped me locate some researchers that I didn't know, and it gave me some ideas for units. And then I had to write the whole thing over again, of course. But that was somewhat helpful.

Glaser: It can be. I think thats a limited and effective use of it in many contexts.

One of my favorite class days was when we went to the library and had a library session. It's an insanely amazing resource at Yale. Students have personal librarians, if they want them. Also, Yale pays for these massive databases that are curating stuff for the students. The students quickly saw that these resources are probably going to make things go smoother long-term if they know how to use them.

So it's not a simple "AI tool bad, Yale resource good." You might start with the quickly accessible AI tool, and then go to a librarian, and say, like, here's a different version of this. And then you're inside the research process.

Glaser: One thing that some writers have done is, if you interact with it long enough, and give it new prompts and develop its outputs, you can get something pretty cool. At that point you've done just as much work, and you've done a different kind of creative or intellectual project. And I'm all for that. If everything's cited, and you develop a creative work through some elaborate back-and-forth or programming effort including these tools, you're just doing something wild and interesting.

Glaser: I'm glad that I could offer a class that students who are coming from computer science and STEM disciplines, but also want to learn how to write, could be excited about. AI-generated language, that's the new medium of language. The Web is full of it. Part of making students critical consumers and readers is learning to think about AI language as not totally separate from human language, but as this medium, this soup if you want, that we're floating around in.

Live chat: A new writing course for the age of artificial intelligence - Yale News

US agency streamlines probes related to artificial intelligence – Reuters

WASHINGTON, Nov 21 (Reuters) - Investigations of cases where artificial intelligence (AI) is used to break the law will be streamlined under a new process approved by the U.S. Federal Trade Commission, the agency said on Tuesday.

The move, along with other actions, highlights the FTC's interest in pursuing cases involving AI. Critics of the technology have said that it could be used to turbo-charge fraud.

The agency, which now has three Democrats, voted unanimously to make it easier for staff to issue a demand for documents as part of an investigation if it is related to AI, the agency said in a statement.

In a hearing in September, Commissioner Rebecca Slaughter, a Democrat who has been nominated to another term, agreed with two Republicans nominated to the agency that the agency should focus on issues like use of AI to make phishing emails and robocalls more convincing.

The agency announced a competition last week aimed at identifying the best way to protect consumers against fraud and other harms related to voice cloning.

Reporting by Diane Bartz; editing by Marguerita Choy.

Artificial intelligence and church – UM News

Key Points:

Artificial intelligence technology, the subject of buzz and anxiety at the moment, has made its way to religion circles.

Pastor Jay Cooper, who heads Violet Crown City Church, a United Methodist congregation in Austin, Texas, took AI out for a spin recently at his Sept. 17 worship service.

The verdict? Interesting, but something was missing.

"They were glad we did it," Cooper said of his congregation, "and let's not do it again."

Cooper used ChatGPT to put together the entire worship service, including the sermon and an original song. He said the result was a stilted atmosphere.

"The human element was lacking," he said. "It seemed to in some way prevent us from connecting with each other. The heart was missing."

AI leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind, according to the IBM website. It has been around since the 1950s and is used to power web search engines and self-driving cars; it can compete in games of strategy such as chess and create works such as songs, sermons and prose by using data collected on the internet.

AI-based software transcribed the interviews for this story. The remaining Beatles created a new song, "Now and Then," using AI to extract John Lennon's vocals from a poorly recorded demo cassette tape he made in the 1970s.

"The CEO of Google said that this is bigger than fire, bigger than electricity," said the Rev. James Lee, director of communications for the Greater New Jersey and Eastern Pennsylvania conferences. "I really believe that this is going to be how we do everything within the next five to 10 years."

Cooper said he has strong feelings against using AI to write a sermon again.

"Even if it's not as eloquent or if it's a little messy or last minute, it needs to be from the heart of the pastor."

Lee concurs. "ChatGPT is pretty bad at writing good sermons. That's my own opinion, but they're very vanilla," he said.

Philip Clayton, Ingraham Professor of Theology at Claremont School of Theology, said that religion tends to be slow to pick up on new technology.

"I think our fear of technology is not a good thing, especially when we're trying to attract younger people to be involved in churches," he said.

"AI is a means to get something done, like using a typewriter years ago," he added. "For us as Christians, the key question is, 'Do the means become the end?'"

"A sermon is an attempt to speak the word of God to people of God assembled at a particular time and place," Clayton said.

"It takes prayer, it takes the knowledge of the people, it takes allusions to my community in my country and all kinds of frameworks," he said. "If I don't do that task, what have I carried out? What are my responsibilities as one who rightly divines the word of God?"

Lee suggests treating AI technology as an intern.

"They are able to do a lot of work for you and support you, and almost treat them like an additional member of the team," he said.

The Rev. Stacy Minger, associate professor of preaching at Asbury Theological Seminary, believes AI could be helpful as long as the preacher does their due diligence of preparation.

"The way I teach preaching is that the preacher invests in praying over the text, reading the text and using all of their biblical studies and skills, and then they consult the commentaries or the scholars," she said.

"If you're maybe missing an illustration or missing a transition or there's something that just hasn't kind of come together and you're banging your head against the wall, I think at that point, after you've done all of your own work, that it could be a helpful tool."

It is important to verify the work of programs like ChatGPT, said Ted Vial, the Potthoff Professor of Theology and Modern Western Religious Thought and vice president of innovation, learning and institutional research at Iliff School of Theology.

"There's a lot of bad information (on the internet)," Vial said. "My experience with the current level of (AI) sophistication is they can produce a clearly written and well-organized essay. They're not very inspirational."

AI programs do not include the most current information, he said.

"I think ChatGPT is built on data that goes through November of 2021," Vial said. "So, if sermons are supposed to relate what's happening in the world to the Bible, it's going to be out of date."

Humans have emotions and creativity that are hard for a computer to emulate, he said.

But the technology continues to improve.

"Whatever humans can do, I'm pretty sure AI will be able to do it soon also," Vial said. "So, the question isn't, 'Would you need a human?' The question is, 'Are you and your congregation OK with a service that's produced by a machine?'"

"Even if the answer to that is 'No,' there will be pastors who want to use it because it makes their lives easier," he added.

"If it's a personal connection between the pastor and a community, then it's important to have the pastor's voice and personality," Vial said. "If it's exegesis of a text, there may not be anything wrong with having a computer produce it."

Looking at it from another direction, a pastor might be cheating themselves as well as their congregation if they skip doing most of the work, Minger said.

"I would be concerned that if you're not spending that time, using all of your biblical study skills and prayerfully invested in the reading of Scripture, that you as a preacher are skipping over a wonderful formative opportunity in your own life," she said.

"As I'm hammering out a sermon, I'm really wrestling with it," she said. "You need images and metaphors, word choices and illustrations."

"And so, as preachers, it's not only that we would be short-circuiting the congregation, I think we would be tamping down our own creative outlets in the effort to become more efficient."

Patterson is a UM News reporter in Nashville, Tennessee. Contact him at 615-742-5470 or [emailprotected]. To read more United Methodist news, subscribe to the free Daily or Weekly Digests.
