AI Art Showdown: How Top Tools MidJourney, Stable Diffusion v1.5, and SDXL Stack Up – Decrypt

The age of AI-generated art is well underway, and three titans have emerged as favorite tools for digital creators: Stability AI's new SDXL, its predecessor Stable Diffusion v1.5, and their main competitor, MidJourney.

OpenAI's Dall-E started this revolution, but its slow pace of development and closed-source nature mean Dall-E 2 doesn't stand out in any category against its competitors. However, as Decrypt reported a few days ago, this might change in the future: OpenAI is testing a new version of Dall-E that reportedly is much improved and produces outstanding pieces.

With each tool bringing unique strengths and limitations, choosing the right one from among the leading platforms is key. Let's dive into how these generative art technologies stack up in terms of capabilities, requirements, style, and beauty.

As the most user-friendly of the trio, MidJourney makes AI art accessible even to non-technical users, provided they're hip to Discord. The platform runs privately on MidJourney's servers, with users interacting through Discord chat. This closed-off approach has both benefits and drawbacks. On the plus side, you don't need any specialized hardware or AI skills. But the lack of open-source transparency around MidJourney's model and training data limits what you can do with it and makes it impossible for enthusiasts to improve it.

MidJourney is the smooth-talking charmer of the bunch, beloved by beginners for its user-friendly Discord interface. Just shoot the bot a text prompt and voila, you've got an aesthetic masterpiece in minutes. The catch? At $96 per year, it's pricey for an AI you can't customize or run locally. But hey, at least you'll look artsy (and nerdy) at parties!

Functionally, MidJourney churns out images rapidly based on text prompts, with impressive aesthetic cohesion. But dig deeper into a specific subject matter, and the output gets wonkier. MidJourney likes to put its own touch on every single creation, even if that's not what the prompter imagined. Most images come out saturated, with boosted contrast, and tend to be more photorealistic than realistic, to the point that, over time, people learn to identify pictures created with MidJourney by their aesthetic characteristics.

With MidJourney, your creative freedom is also limited by the platform's strict content rules. It is aggressively censored, both socially (in terms of depicting nudity or violence) and politically (in terms of controversial topics and specific leaders). Overall, MidJourney offers a tantalizing gateway into AI art, but power users will hunger for more control and customizability. That's where Stable Diffusion comes into play.

If MidJourney is a pony ride, Stable Diffusion v1.5 is the reliable workhorse. As an open-source model that's been under active development for over a year, Stable Diffusion v1.5 powers many of today's most popular AI art tools, like Leonardo AI, Lexica, Mage Space, and all those AI waifu generators now available on the Google Play store.

The active Stable Diffusion community has iterated on the base model to create specialized checkpoints, embeddings, and LoRAs focusing on everything from anime stylization to intricate landscapes, hyper-realistic photographs, and more. Downsides? Well, it's starting to show its age next to younger AI whippersnappers.

By making some tweaks under the hood, Stable Diffusion v1.5 can generate crisp, detailed images tailored to your creative vision. Output resolution is currently capped at 512x512, or sometimes 768x768, before quality degrades, but rapid scaling techniques help. Tiled upscaling in particular boosted the model's popularity, letting it generate pictures at super resolution, far beyond what MidJourney can do.
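The idea behind tiled upscaling is simple: the image is split into overlapping tiles, each tile is upscaled (and optionally re-diffused) at the model's native resolution, and the results are blended back together. Here is a minimal, illustrative sketch of just the tiling step, with hypothetical tile and overlap sizes; real tools add blending and seam fixes on top of this:

```python
def tile_coords(width, height, tile=512, overlap=64):
    """Compute overlapping tile boxes (left, top, right, bottom) covering an image.

    Each tile is at most `tile` px on a side; neighbouring tiles overlap by
    `overlap` px so upscaled tiles can later be blended without visible seams.
    """
    step = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            right = min(left + tile, width)
            bottom = min(top + tile, height)
            boxes.append((left, top, right, bottom))
    return boxes

# A 1024x1024 canvas is covered by a 3x3 grid of 512px tiles with 64px overlap.
boxes = tile_coords(1024, 1024)
```

Because each tile fits the model's native 512x512 window, quality stays high even though the assembled output is far larger than anything the model could generate in one pass.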

Right now, it's the only one of the three that supports inpainting (changing things inside the image). Outpainting (letting the model expand the image beyond its frame) is also supported, and it's multidirectional, meaning users can expand their image along both the vertical and horizontal axes. It also supports third-party plugins like Roop (used to create deepfakes), After Detailer (for improved faces and hands), OpenPose (to mimic a specific pose), and regional prompting.

To run it locally, creators suggest an Nvidia RTX 2000-series GPU or better for decent performance, but Stable Diffusion v1.5's lightweight footprint runs smoothly even on cards with 4GB of VRAM. Despite its age, robust community support keeps this AI art OG solidly at the top of its game.

If Stable Diffusion v1.5 is the reliable workhorse, then SDXL is the young thoroughbred whipping around the racetrack. This powerful model, also from Stability AI, leverages dual text encoders to better interpret prompts, and its two-stage generation process achieves superior image coherence at high resolutions.

These capabilities sound exciting, but they also make SDXL a little harder to master. One text encoder prefers short natural language, while the other uses SD v1.5's style of chopped, specific keywords to describe the composition.

The two-stage generation means it requires a refiner model to add fine detail to the base image. It takes time, RAM, and computing power, but the results are gorgeous.

SDXL is ready to turn heads. With nearly three times the parameters of Stable Diffusion v1.5, SDXL is flexing some serious muscle, generating images nearly 50% larger in resolution than its predecessor without breaking a sweat. But this bleeding-edge performance comes at a cost: SDXL requires a GPU with a minimum of 6GB of VRAM, uses larger model files, and lacks pretrained specializations.

Out-of-the-box output isn't yet on par with a finely tuned Stable Diffusion model. However, as the community works its optimization magic, SDXL's potential blows the doors off what's possible with today's models.

A picture is worth a thousand words, so rather than write a few thousand more of them, we compared outputs from each model using similar prompts so you can choose the one you like the most. Please note that each model requires a different prompting technique, so even if it's not an apples-to-apples comparison, it's a good starting point.

To be more specific, we used a fairly generic negative prompt for Stable Diffusion, something MidJourney doesn't really need. Other than that, the prompts are the same, and the results were not handpicked.

Comment: Here it's just a matter of style between SDXL and MidJourney. Both beat Stable Diffusion v1.5, even though it seems to be the only one able to create a dog that is properly "riding" the bike, or at least using it correctly.

Comment: MidJourney tried to create a red square in The Red Square. SDXL v1.0 is crisper, but the color contrast is better on SD v1.5 (model: Juggernaut v5).

Comment: MidJourney refused to generate an image due to its censorship rules. SDXL is richer in detail, taking care to produce both the busty teacher and the futuristic classroom. SD v1.5 focused more on the busty teacher (the subject; model: Photon v1) and less on the environmental details.

Comment: Both MidJourney and SDXL produced results that stick to the prompt. SDXL reproduced the artistic style better, whereas MidJourney focused more on producing an aesthetically pleasing image instead of recreating the artistic style. It also lost many details of the prompt (for example, the image doesn't show a brain powering a machine; instead, it's a skull powering a machine).

So which Monet-in-training should you use? Frankly, you can't go wrong with any of these options. MidJourney excels in usability and aesthetic cohesion. Stable Diffusion v1.5 offers customizability and community support. And SDXL pushes the boundaries of photorealistic image generation. Meanwhile, stay tuned to see what Dall-E has coming down the pike.

Don't just take our word for it. The paintbrush is in your hands now, and the blank canvas is waiting. Grab your generative tool of choice and start creating! Just maybe keep the existential threats to humanity to a minimum, please.


Datadog announces LLM observability tools and its first generative … – SiliconANGLE News

Datadog Inc., one of the top dogs in the application monitoring software business, today announced the launch of new large language model observability features that aim to help customers troubleshoot problems with LLM-based artificial intelligence applications.

The new features were announced alongside the launch of its own generative AI assistant, which helps dig up useful insights from observability data.

Datadog is a provider of application monitoring and analytics tools that are used by developers and information technology teams to assess the health of their apps, plus the infrastructure they run on. The platform is especially popular with DevOps teams, which are usually composed of developers and information technology staff.

DevOps is a practice that involves building cloud-native applications and updating them frequently. Using Datadog's platform, DevOps teams can keep a lid on any problems those frequent updates might cause and ensure the health of their applications.

The company clearly believes the same approach can be useful for generative AI applications and the LLMs that power them. Pointing out the obvious, Datadog notes generative AI is rapidly becoming ubiquitous across the enterprise as every company scrambles to jump on the hottest technology trend in years. As they do, there's a growing need to monitor the behavior of the LLMs that power generative AI applications.

At the same time, the tech stacks that support these models are also new, with companies implementing things like vector databases for the first time. Meanwhile, experts have been vocal about the dangers of leaving LLMs to do their own thing without any monitoring in place, pointing to risks such as unpredictable behavior, AI hallucinations in which models fabricate responses, and bad customer experiences.

Datadog Vice President of Product Michael Gerstenhaber told SiliconANGLE that the new LLM observability tool gives machine learning engineers and application developers a way to monitor how their models are performing on a continuous basis. That will enable the models to be optimized on the fly to ensure their performance and accuracy, he said.

It works by analyzing request prompts and responses to detect and resolve model drift and hallucinations. At the same time, it can help to identify opportunities to fine-tune models and ensure a better experience for end users.
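To make the idea concrete, here is a toy sketch (not Datadog's actual implementation) of how drift in LLM responses might be flagged by comparing a recent window of outputs against a historical baseline. Real observability products compare much richer signals, such as embeddings, topic distributions, and user feedback; the statistic here is just response length:

```python
from collections import deque
from statistics import mean, pstdev

class DriftMonitor:
    """Toy drift detector: flags when recent response lengths deviate
    sharply from a historical baseline. Illustrative only."""

    def __init__(self, baseline_size=100, window=20, threshold=3.0):
        self.baseline = deque(maxlen=baseline_size)  # historical lengths
        self.window = deque(maxlen=window)           # most recent lengths
        self.threshold = threshold                   # z-score cutoff

    def observe(self, response: str) -> bool:
        """Record one model response; return True if drift is suspected."""
        length = len(response.split())
        self.window.append(length)
        if len(self.baseline) < self.baseline.maxlen:
            self.baseline.append(length)             # still warming up
            return False
        mu = mean(self.baseline)
        sigma = pstdev(self.baseline) or 1.0         # avoid divide-by-zero
        z = abs(mean(self.window) - mu) / sigma
        return z > self.threshold
```

A monitor like this would sit in the request path, scoring every prompt/response pair and raising an alert when the recent window drifts away from the baseline.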

Datadog isn't the first company to introduce observability tools for LLMs, but Gerstenhaber said his company's goes much further than previous offerings.

"A big differentiator is that we not only monitor the usage metrics for the OpenAI models, we provide insights into how the model itself is performing," he said. "In doing so, our LLM monitoring enables efficient tracking of performance, identifying drift and establishing vital correlations and context to effectively and swiftly address any performance degradation and drift. We do this while also providing a unified observability platform, and this combination is unique in the industry."

Gerstenhaber also highlighted its versatility, saying the tool can integrate with AI platforms including Nvidia AI Enterprise, OpenAI and Amazon Bedrock, to name just a few.

The second aspect of today's announcement is Bits AI, a new generative AI assistant, available now in beta, that helps customers derive insights from their observability data and resolve application problems faster, the company said.

Gerstenhaber explained that, even with observability data at hand, it can take a great deal of time to sift through it all and determine the root cause of application issues. He said Bits AI helps by scanning the customer's observability data and other sources of information, such as collaboration platforms. That enables it to answer questions quickly, provide recommendations, and even build automated remediations for application problems.

"Once a problem is identified, Bits AI helps coordinate the response by assembling on-call teams in Slack and keeping all stakeholders informed with automated status updates," Gerstenhaber said. "It can surface institutional knowledge from runbooks and recommend Datadog Workflows to reduce the amount of time it takes to remediate. If it's a problem at the code level, it offers a concise explanation of the error, a suggested code fix, and a unit test to validate the fix."

When asked how Bits AI differs from similar generative AI assistants launched earlier this year by rivals such as New Relic Inc. and Splunk Inc., Gerstenhaber said it's all about the level of data it has access to. Its ability to join Datadog's wealth of observability data with institutional knowledge from customers, he said, enables Bits AI to assist users in almost any kind of troubleshooting scenario. "We are differentiated not only in the breadth of products that integrate with the generative interface, but also our domain-specific responses," he said.



The Danger of Utilising Personal Information in LLM Prompts for … – Medium

The advancement of language model technologies has revolutionised natural language processing and text generation. Among these, Large Language Models (LLMs) like GPT-4, Bard, and Claude have garnered significant attention for their impressive capabilities. However, the deployment of LLMs in business settings raises concerns about privacy and data security, and leaked information is the order of the day. In this comprehensive article, we will delve into the negative consequences of using personal information in LLM prompts for businesses and the urgent precautions they must take to safeguard user data.

Over the course of 2023, businesses have increasingly tapped into the potential of Large Language Models. From professional experience, common use cases involve the integration of personal information into LLM prompts. This poses a severe risk of privacy breaches, as well as biased outputs stemming from unchecked datasets. Businesses also often use customer data to personalise content generation, such as chatbot responses or customer support interactions. However, including sensitive user information in prompts could lead to unintended exposure, jeopardising customer privacy and undermining trust.

For instance, if a chatbot accidentally generates a response containing personal identifiers like names, addresses, or contact details, it could inadvertently divulge sensitive information to unauthorized individuals. Such privacy breaches can lead to legal consequences, financial losses, and damage to a business's reputation.
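As a first line of defence, some teams scrub obvious identifiers before user data ever reaches a prompt. The sketch below is a deliberately minimal illustration using regular expressions, covering only emails and simple phone numbers; production systems layer proper NER models, allow-lists, and consent checks on top of patterns like these:

```python
import re

# Minimal pre-prompt scrubber: masks obvious identifiers before user data
# is interpolated into an LLM prompt. Illustrative only.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),      # phone-like digit runs
]

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = redact("Customer jane.doe@example.com called from +1 (555) 010-2345 about her refund.")
```

Because the model only ever sees the placeholder tokens, an accidental verbatim echo of the prompt can no longer leak the underlying contact details.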

Businesses globally are subject to data protection laws and regulations that govern the collection, storage, and usage of personal data. By utilising personal information in LLM prompts without appropriate consent and security measures, businesses risk non-compliance with data protection regulations like the GDPR (General Data Protection Regulation).


The AWS Empire Strikes Back; A Week of LLM Jailbreaks – The Information

Amazon Web Services, the king of renting cloud servers, is facing an unusually large amount of pressure. Its growth and enviable profit margins have been dropping, Microsoft and Google have moved faster, or opened their wallets, to capture more business from artificial intelligence developers (TBD on whether it will amount to much), and Nvidia is propping up more cloud-provider startups than we can keep track of.

It's no wonder AWS CEO Adam Selipsky came out swinging in an interview last week in response to widespread perceptions that his company is behind in the generative AI race.

With Amazon reporting second-quarter earnings Thursday, the company is undoubtedly trying to get ahead of any heat coming its way from analysts wondering what's up with AWS and AI. The company dropped some positive news Wednesday at a New York summit for developers: AWS servers powered by the newest specialized AI chips, Nvidia H100 graphics processing units, are now generally available to customers, though only from its Northern Virginia and Oregon data center hubs.


Salesforce’s LLM and the Future of GenAI in CRM – Fagen wasanni

This year, Salesforce has been making significant strides in the field of generative AI with the introduction of their large language models (LLMs). These LLMs, including their own Salesforce LLM, have proven to be highly effective in various use cases such as sales, service, marketing, and analytics.

Salesforce's LLM has outperformed expectations in testing and pilot programs, producing accurate results when asked to provide answers. This puts Salesforce at the forefront of AI technology in the customer relationship management (CRM) space.

Other providers of AI models include Google's Vertex AI, Amazon SageMaker, OpenAI, and Anthropic's Claude, among others. These models can be trained to produce optimal results for the organizations leveraging them. However, effective training requires large amounts of data, which can be stored in data lakes provided by companies like Snowflake, Databricks, Google BigQuery, and Amazon Redshift.

Salesforce's LLM leverages Data Cloud, allowing flexibility in working with GenAI and Salesforce data. With Data Cloud, organizations enjoy pre-wiring to Salesforce objects, reducing implementation time and improving data quality. Salesforce's three annual releases also ensure a continuous stream of new and improved capabilities.

Salesforce has built an open and extensible platform, allowing integration with other platforms to bring in data from different sources alongside CRM data. This approach, known as Bring Your Own Model, enables organizations to use multiple providers/models simultaneously, preventing any potential conflict among machine learning teams.

Salesforce's investments in GenAI technology organizations, demonstrated by its AI sub-fund, further solidify its commitment to advancing AI in the CRM space. These investments include market leaders like Cohere, Anthropic, and You.com.

While no LLM is 100% accurate, Salesforce has implemented intentional friction, ensuring that generative AI outputs are not automatically applied to users' workflows without human intervention. Salesforce professionals working with GenAI have the freedom to use their preferred models and are provided with upskilling resources to implement GenAI effectively in their organizations.

The future of GenAI in CRM looks promising, with Salesforce constantly exploring new use cases and enhancements for their LLM technology. This creates opportunities for Salesforce professionals to advance their careers in the AI space.


LLM and Generative AI: The new era | by Abhinaba Banerjee | Aug … – DataDrivenInvestor


I am writing this first blog to share my learning of Large Language Models (LLMs), generative AI, LangChain, and related concepts. Since I am new to these topics, I will cover a few concepts per blog.

Large language models (LLMs) are a subset of artificial intelligence (AI) trained on huge datasets of written articles, blogs, text, and code. This enables them to create written content and images and to answer questions asked by humans. For many tasks, they are more efficient than the traditional Google search we have been using for quite some time.

Though new LLMs are still being added nearly daily by developers and researchers all over the globe, they have already earned quite a reputation for performing the tasks below:

Generative AI is the branch of AI that can create AI-powered products for generating texts, images, music, emails, and other forms of media.

Generative AI is based on very large machine-learning models that are pre-trained on massive data. These models then learn the statistical relationships between different elements of the dataset to generate new content.

Though LLMs and generative AI are fresh technologies in the market, they are already powering a lot of AI-based products, and startups building on them are raising billions.

For example, LLMs are being used to create chatbots that can have natural conversations with humans. These chatbots can provide customer service or psychological therapy, act as advisors in finance or other specific domains, or simply be trained to act as a friend.

Generative AI is also being used to create realistic images, paintings, stories, articles of any length, blogs, and more. These creations are convincing enough to trick humans and will keep getting better with time.

Over time, these technologies will keep improving and will free humans from mundane, repetitive tasks so they can work on more complicated ones.

This marks the end of the blog. Stay tuned for more Python-related articles, EDA, machine learning, deep learning, computer vision, ChatGPT, and NLP use cases, and different projects. Also, send me your suggestions and I will write articles on them. Follow me and say hi.

If you like my articles, please consider contributing on Ko-fi to help me upskill and contribute more to the community.

Github: https://github.com/abhigyan631


Google Working to Supercharge Google Assistant with LLM Smarts – Fagen wasanni

Google is determined to boost Google Assistant by integrating LLM (large language model) technology, according to a leaked internal memo. A restructuring within the company aims to explore ways of enhancing Google Assistant with advanced features. The memo emphasizes Google's commitment to Assistant, as it recognizes the significance of conversational technology in improving people's lives.

Although the memo does not provide specific details, it suggests that the initial focus of this enhancement will be on mobile devices. It is expected that Android users will soon be able to enjoy LLM-powered features, such as web page summarization.

The leaked memo does not mention any developments for smart home products, such as smart speakers or smart displays, at this time. However, it is possible that the LLM smarts could eventually be extended to these devices as well.

Unfortunately, the internal restructuring has led to some team members being let go. Google has provided a 60-day period for those affected to find alternate positions within the company.

In a rapidly evolving landscape where technologies like ChatGPT and Bing Chat are gaining popularity, the leaked memo confirms that Google Assistant still has a future. By incorporating LLM technology, Google aims to make Assistant more powerful and capable of meeting people's growing expectations for assistive and conversational technology.


Academic Manager / Programme Leader LLM Bar Practice job with … – Times Higher Education

SBU/Department: Hertfordshire Law School

FTE: 1.0 FTE, working 37 hours per week
Duration of contract: Permanent
Salary: AM1, £64,946 to £71,305 per annum, depending on skills and experience
Location: de Havilland Campus, University of Hertfordshire, Hatfield

At Hertfordshire Law School we pride ourselves on delivering a truly innovative learning and teaching experience coupled with practice-led, hands-on experience. Our students consistently provide excellent feedback about their educational experience which is also evidenced through the number of students graduating with good honours degrees and our strong employability rates.

The School teaches Law (LLB and LLM) and Criminology (BA) programmes in a £10m purpose-built building on the University of Hertfordshire's de Havilland campus, which includes a full-scale replica Crown Court room and state-of-the-art teaching facilities.

We are looking for an outstanding individual to provide academic leadership of the LLM Bar Practice Programme.

Main duties & responsibilities

The successful candidate will, in liaison with the Senior Leadership Team, manage and deliver the LLM Bar Practice Programme; monitor academic standards of the programme and ensure ongoing compliance with Bar Standards Board requirements. You will undertake the day-to-day management of the programme, including, as appropriate, the supervision of module leaders, identification of staffing needs, maintenance of programme documentation and records and provision of pastoral care.

Working closely with the Head of Department and Associate Deans, you will ensure the continuous development of the curriculum and act as chair of Programme Committees and relevant Examination Boards. You will support the marketing and recruitment of students and staff to the programme, both domestically and internationally, via the preparation of marketing and recruitment materials, organising and attending open days, international recruitment fairs and visiting collaborative partner institutions.

In addition, you will contribute to the delivery of the School's co-curricular programmes and maintain and develop relationships with a wide range of barristers' chambers and employers in the areas of legal and criminal justice practice to support the development of the programme and opportunities for students in Hertfordshire Law School.

Skills and experience needed

You will have proven experience as a programme leader or deputy programme leader of a professional law programme. Significant experience of teaching law on a Bar Professional Training Course/Programme in the UK within the last five years is essential. Ideally, you will have experience as a practising Solicitor or Barrister. You will also have demonstrable experience of programme/module design, with the ability to contribute to the design of engaging and intellectually stimulating modules and/or programmes. In addition, experience of line management of staff is desirable.

You will have an understanding of the University's strategic plan, regulations and processes, and employability plans. You will be proficient in English, able to use technology to enhance delivery to students, have excellent organisation and self-management skills, and have the ability to negotiate with stakeholders. You will have a highly developed sense of professionalism and a commitment to student and graduate success, including a commitment to equal opportunities and to ensuring that students from all backgrounds have the support they need to succeed and progress in their careers.

Qualifications required

You will have a good undergraduate degree or equivalent qualification, alongside a Master's qualification in law or equivalent professional qualification. A teaching qualification and / or Fellowship of AdvanceHE is desirable.

Additional benefits

The University offers a range of benefits including a pension scheme, professional development, family friendly policies, a fee waiver of 50% for all children of staff under the age of 25 at the start of the course, discounted memberships at the Hertfordshire Sports Village and generous annual leave.

How to apply

To find out more about this opportunity, please visit http://www.andersonquigley.com quoting reference AQ2099.

For a confidential discussion, please contact our advising consultants at Anderson Quigley: Imogen Wilde on +44 (0)7864 652 633, imogen.wilde@andersonquigley.com or Elliott Rae on +44 (0)7584 078 534, email elliott.rae@andersonquigley.com

Closing date: noon on Friday 1st September 2023.

Our vision is to transform lives, and UH is committed to Equality, Diversity and Inclusion and to building a diverse community. We welcome applications from suitably qualified and eligible candidates regardless of their protected characteristics. We are a Disability Confident Employer.

Original post:

Academic Manager / Programme Leader LLM Bar Practice job with ... - Times Higher Education

Posted in Llm

Using Photonic Neurons to Improve Neural Networks – RTInsights

Photonic neural networks represent a promising technology that could revolutionize the way businesses approach machine learning and artificial intelligence systems.

Researchers at Politecnico di Milano earlier this year announced a breakthrough in photonic neural networks. They developed training strategies for photonic neurons similar to those used for conventional neural networks. This means that the photonic brain can learn quickly and accurately and achieve precision comparable to that of a traditional neural network but with considerable energy savings.

Neural networks are a type of technology inspired by the way the human brain works. Developers can use them in machine learning and artificial intelligence systems to mimic human decision-making. Neural networks analyze data and adapt their own behavior based on past experiences, making them useful for a wide range of applications, but they also require a lot of energy to train and deploy. This makes them costly and inefficient for the typical company to integrate into operations.


To solve this obstacle, the Politecnico di Milano team has been working on developing photonic circuits, which are highly energy-efficient and can be used to build photonic neural networks. These networks use light to perform calculations quickly and efficiently, and their energy consumption grows much more slowly than traditional neural networks.

According to the team, the photonic accelerator in the chip allows calculations to be carried out very quickly and efficiently using a programmable grid of silicon interferometers. The calculation time is equal to the transit time of light in a chip a few millimeters in size, which is less than a billionth of a second. The work done was presented in a paper published in Science.
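To get a feel for how a grid of interferometers computes, here is a toy numerical simulation (not the Politecnico hardware) of a single Mach-Zehnder interferometer: two 50:50 beamsplitters around programmable phase shifters, yielding a 2x2 unitary transform on the light's complex amplitudes. Meshes of such units compose into larger matrix multiplications, which is what makes them useful as neural network accelerators. The phase values below are arbitrary examples:

```python
import cmath
from math import sqrt

def matmul(a, b):
    """Multiply two 2x2 complex matrices (lists of lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A lossless 50:50 beamsplitter mixing two waveguides.
BS = [[1 / sqrt(2), 1j / sqrt(2)],
      [1j / sqrt(2), 1 / sqrt(2)]]

def phase(theta):
    """Phase shifter on the top waveguide only."""
    return [[cmath.exp(1j * theta), 0], [0, 1]]

def mzi(theta, phi):
    """One Mach-Zehnder interferometer: beamsplitter, internal phase,
    beamsplitter, external phase -> a programmable 2x2 unitary."""
    return matmul(BS, matmul(phase(theta), matmul(BS, phase(phi))))

u = mzi(0.7, 1.3)
signal = [1 + 0j, 0j]  # light entering the top waveguide only
out = [u[0][0] * signal[0] + u[0][1] * signal[1],
       u[1][0] * signal[0] + u[1][1] * signal[1]]
```

Because each building block is unitary, no optical energy is lost in the ideal model; tuning the phases steers how much light exits each port, which is exactly the "weight" being programmed.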


This breakthrough has important implications for the development of artificial intelligence and quantum applications. The photonic neural network can also be used as a computing unit for multiple applications where high computational efficiency is required, such as graphics accelerators, mathematical coprocessors, data mining, cryptography, and quantum computers.

Photonic neural networks represent a promising technology that could revolutionize the way we approach machine learning and artificial intelligence systems. Their energy efficiency, speed, and accuracy make them a powerful tool for a wide range of applications, with much potential for a variety of industries seeking digital transformation and AI integrations.


The Evolution of Artificial Intelligence: From Turing to Neural Networks – Fagen wasanni

AI, or artificial intelligence, has become a buzzword in recent years, but its roots can be traced back to the 20th century. While many credit OpenAI's ChatGPT as the catalyst for AI's surge in popularity in 2022, the concept has been in development for much longer.

The foundational idea of AI can be attributed to Alan Turing, the mathematician famous for his work during World War II. In his paper "Computing Machinery and Intelligence," Turing posed the question, "Can machines think?" He introduced the concept of the "imitation game," in which a machine attempts to deceive an interrogator into thinking it is human.

However, it was Frank Rosenblatt who made the first significant strides in AI implementation with the creation of the Perceptron in the late 1950s. The Perceptron was a computer modeled after the neural network structure of the human brain. It could teach itself new skills through iterative learning processes.
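Rosenblatt's iterative learning process is simple enough to sketch in a few lines. The following is a minimal modern reconstruction in Python/NumPy of the perceptron learning rule, not his original hardware implementation:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    # Rosenblatt's rule: nudge the weights toward each misclassified example.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):           # yi is +1 or -1
            if yi * (xi @ w + b) <= 0:     # misclassified (or on the boundary)
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Learn logical AND, a linearly separable task the Perceptron can master.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
preds = np.sign(X @ w + b)
assert (preds == y).all()
```

The "simplicity" that later stalled the field is visible here: a single layer like this can only draw one straight line through the data, so tasks such as XOR are out of reach.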

Despite Rosenblatt's advancements, AI research dwindled due to limited computing power and the simplicity of the Perceptron's neural network. It wasn't until the 1980s that Geoffrey Hinton, along with researchers like Yann LeCun and Yoshua Bengio, reintroduced the concept of neural networks with multiple layers and numerous connections to enable machine learning.

Throughout the 1990s and 2000s, researchers further explored the potential of neural networks. Advances in computing power eventually paved the way for machine learning to take off around 2012. This breakthrough led to the practical application of AI in various fields, such as smart assistants and self-driving cars.

In late 2022, OpenAI's ChatGPT brought AI into the spotlight, showcasing its capabilities to professionals and the general public alike. Since then, AI has continued to evolve, and its future remains uncertain.

To better understand and navigate the world of AI, Lifehacker provides a collection of articles that cover various aspects of living with AI. These articles include tips on identifying when AI is deceiving you, an AI glossary, discussions on fictional AI, and practical uses for AI-powered applications.

As AI continues to shape our world, it is essential to stay informed and prepared for the advancements and challenges it brings.


Los Angeles Shop Owner, Others National Through No Fault of … – The Peoples Vanguard of Davis

Photo credit: Kyah117, via Wikimedia Commons. This work is licensed under a Creative Commons Attribution-ShareAlike 2.0 Generic License.

By The Vanguard

LOS ANGELES, CA – After a fugitive pushed owner Carlos Pena from his shop and barricaded himself inside last year, a SWAT team from the City of Los Angeles fired more than 30 rounds of tear gas canisters inside, leaving Pena's shop in ruin, with inventory unusable, but Carlos was left with the bill and without a livelihood, according to a story in Yahoo News and Reason.com.

An immigrant from El Salvador, Pena said he didn't fault the city for attempting to subdue an allegedly dangerous person. But he objected to what came next, said the news accounts.

The government refused his requests for compensation, strapping him with expenses that exceed $60,000 and a situation that has cost him tens of thousands of dollars in revenue, as he has been resigned to working at a much-reduced capacity out of his garage, according to a lawsuit he filed this month in the U.S. District Court for the Central District of California.

"Apprehending a dangerous fugitive is in the public interest," the suit notes. "The cost of apprehending such fugitives should be borne by the public, and not by an unlucky and entirely innocent property owner."

Yahoo News said, "Pena is not the first such property owner to see his life destroyed and be left picking up the pieces." Insurance policies often have disclaimers that they do not cover damage caused by the government. But governments sometimes refuse to pay for such repairs, buttressed by jurisprudence from various federal courts, which have ruled that actions taken under police powers are not subject to the Takings Clause of the Fifth Amendment.

The Lech family in Greenwood Village, Colorado, after cops destroyed their residence while in pursuit of a suspected shoplifter, unrelated to the family, who forced himself inside their house, found their $580,000 home rendered unlivable; it had to be demolished, and the government gave them a cool $5,000, said Yahoo.

But, added Yahoo News, Leo Lech's claim made no headway in federal court, with the court ruling that the defendants' law-enforcement actions fell within the scope of the police power, and that "actions taken pursuant to the police power do not constitute takings."

Yahoo News and Reason.com said Lech was fortunate enough to get $345,000 from his insurance, which, between the loss of the home, the cost of rebuilding, and the government's refusal to contribute significantly, left him $390,000 in the hole. In June 2020, the Supreme Court declined to hear the case.

In a similar position was Vicki Baker, whose home in McKinney, Texas, was ravaged in 2020 after a SWAT team drove a BearCat armored vehicle through her front door, used explosives on the entrance to the garage, smashed the windows, and filled the home with tear gas to coax out a kidnapper who'd entered the home, said news accounts.

As in Pena's case, Baker never disputed that the police had a vested interest in trying to keep the community safe. But she struggled to understand why they left her holding the bag financially as she had to confront a dilapidated home, a slew of ruined personal belongings, and a dog that went deaf and blind in the mayhem, Yahoo News writes.

"I've lost everything," Baker, who is in her late 70s, told Reason.com. "I've lost my chance to sell my house. I've lost my chance to retire without fear of how I'm going to make my regular bills."

In November 2021, against the city's protestations, a federal judge allowed her case to proceed. And in June of last year, a jury finally awarded her $59,656.59, although the court's rulings did not create a precedent in favor of future victims, said Reason.com.

Jeffrey Redfern, an attorney at the Institute for Justice, the public interest law firm representing Pena in his suit, said the police-power shield invoked by some courts rests on a historical misunderstanding.

Judges, he said, have recently held that so long as the overall action taken by the government was justifiable (trying to capture a fugitive, for example), then the victim is not entitled to compensation under the Fifth Amendment.

"Takings are not supposed to be at all about whether or not the government was acting wrongfully," he said to reporters. "It can be acting for the absolute best reasons in the world. It's just about who should bear these public burdens. Is it some unlucky individual, or is it society as a whole?"


Types of Neural Networks in Artificial Intelligence – Fagen wasanni

Neural networks are virtual brains for computers that learn by example and make decisions based on patterns. They process large amounts of data to solve complex tasks like image recognition and speech understanding. Each neuron in the network connects to others, forming layers that analyze and transform the data. With continuous learning, neural networks become better at their tasks. From voice assistants to self-driving cars, neural networks power various AI applications and revolutionize technology by mimicking the human brain.

There are different types of neural networks used in artificial intelligence, suited for specific problems and tasks. Feedforward Neural Networks are the simplest type, where data flows in one direction from input to output. They are used for tasks like pattern recognition and classification. Convolutional Neural Networks process visual data like images and videos, utilizing convolutional layers to detect and learn features. They excel in image classification, object detection, and image segmentation.
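The one-way data flow of a feedforward network can be sketched in a few lines. The layer sizes and random weights below are purely illustrative:

```python
import numpy as np

def relu(z):
    # Common nonlinearity: pass positives through, zero out negatives.
    return np.maximum(0.0, z)

def forward(x, layers):
    # Data flows in one direction: each layer applies weights, bias, nonlinearity.
    for W, b in layers:
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(1)
# A small network mapping 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs.
sizes = [4, 8, 8, 2]
layers = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]

out = forward(rng.normal(size=4), layers)
assert out.shape == (2,)
assert (out >= 0).all()   # ReLU outputs are never negative
```

For classification, the final 2-dimensional output would typically be passed through a softmax to yield class probabilities; a convolutional network follows the same one-way pattern but replaces the dense weight matrices with sliding convolutional filters.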

Recurrent Neural Networks handle sequential data by introducing feedback loops, making them ideal for tasks involving time-series data and language processing. Long Short-Term Memory Networks are a specialized type of RNN that capture long-range dependencies in sequential data. They are beneficial in machine translation and sentiment analysis.
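The feedback loop that distinguishes recurrent networks can be sketched as a vanilla RNN cell. The weights here are random placeholders, not trained values:

```python
import numpy as np

def rnn(xs, Wx, Wh, b):
    # The feedback loop: each step's hidden state feeds into the next step.
    h = np.zeros(Wh.shape[0])
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

rng = np.random.default_rng(2)
Wx = rng.normal(scale=0.5, size=(6, 3))   # input -> hidden
Wh = rng.normal(scale=0.5, size=(6, 6))   # hidden -> hidden (the recurrence)
b = np.zeros(6)

seq = rng.normal(size=(10, 3))            # a sequence of 10 three-dim inputs
h = rnn(seq, Wx, Wh, b)
assert h.shape == (6,)
# Order matters for sequential data: reversing the sequence changes the state.
assert not np.allclose(h, rnn(seq[::-1], Wx, Wh, b))
```

An LSTM follows the same step-by-step pattern but adds gated cell state, which is what lets it retain information over much longer spans of the sequence.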

Generative Adversarial Networks consist of two networks competing against each other. The generator generates synthetic data, while the discriminator differentiates between real and fake data. GANs are useful in image and video synthesis, creating realistic images, and generating art.
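The two-player structure can be sketched without any training loop: a generator maps noise to synthetic points, a discriminator scores each point, and the adversarial loss ties them together. All shapes and weights below are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(5)

def generator(z, Wg):
    # Maps random noise vectors to synthetic 2-dim "data" points.
    return np.tanh(z @ Wg)

def discriminator(x, Wd):
    # Scores each point: the probability it is real rather than generated.
    return 1.0 / (1.0 + np.exp(-(x @ Wd)))

Wg = rng.normal(size=(4, 2))                     # noise dim 4 -> data dim 2
Wd = rng.normal(size=2)                          # data dim 2 -> one logit

real = rng.normal(loc=2.0, size=(8, 2))          # stand-in "real" samples
fake = generator(rng.normal(size=(8, 4)), Wg)    # generated samples

# The adversarial objective: the discriminator tries to raise it
# (spot the fakes), while the generator is trained to lower it.
eps = 1e-9
d_loss = -np.mean(np.log(discriminator(real, Wd) + eps)
                  + np.log(1 - discriminator(fake, Wd) + eps))

assert fake.shape == real.shape
assert np.isfinite(d_loss)
```

Training alternates gradient steps on the two networks against this objective until the generator's samples become hard to tell from real data.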

Autoencoders aim to recreate input data at the output layer, compressing information into a lower-dimensional representation. They are used for tasks like dimensionality reduction and anomaly detection.
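A minimal linear autoencoder makes the compress-then-reconstruct idea concrete. The sketch below trains by plain gradient descent on synthetic data; all sizes and the learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
# 100 samples of 5-dim data that secretly lie near a 2-dim subspace.
Z = rng.normal(size=(100, 2))
X = Z @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(100, 5))

# Encode 5 -> 2 (the bottleneck), decode 2 -> 5, trained to reproduce X.
We = rng.normal(scale=0.1, size=(5, 2))   # encoder weights
Wd = rng.normal(scale=0.1, size=(2, 5))   # decoder weights

def loss(We, Wd):
    return np.mean((X @ We @ Wd - X) ** 2)

before = loss(We, Wd)
lr = 0.01
for _ in range(500):
    R = X @ We @ Wd - X                   # reconstruction error
    grad_Wd = 2 * (X @ We).T @ R / len(X)
    grad_We = 2 * X.T @ (R @ Wd.T) / len(X)
    We -= lr * grad_We
    Wd -= lr * grad_Wd

assert loss(We, Wd) < before              # reconstruction improves with training
```

Because the data really is near-2-dimensional, the 2-unit bottleneck can reconstruct it well; a point that reconstructs poorly under the trained model is exactly what anomaly detection flags.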

Transformer Networks are popular in natural language processing. They use self-attention mechanisms to process sequences of data, capturing word dependencies efficiently. Transformer networks are pivotal in machine translation, language generation, and text summarization.
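The self-attention mechanism at the heart of transformers can be sketched as single-head scaled dot-product attention, with random placeholder weights standing in for trained ones:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Every position attends to every other, weighted by query-key similarity.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])    # scaled dot products
    weights = softmax(scores, axis=-1)        # each row is a distribution
    return weights @ V, weights

rng = np.random.default_rng(4)
d = 8
X = rng.normal(size=(5, d))                   # embeddings for 5 tokens
Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))

out, weights = self_attention(X, Wq, Wk, Wv)
assert out.shape == (5, d)
assert np.allclose(weights.sum(axis=1), 1.0)  # attention weights sum to 1 per token
```

This all-pairs weighting is how the model captures word dependencies efficiently: every token can draw directly on every other token in one step, rather than passing information along a chain as an RNN does.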

These examples represent the diverse range of neural network types. The field of artificial intelligence continuously evolves with new architectures and techniques. Choosing the appropriate network depends on the specific problem and data characteristics.


The Future of Telecommunications: 3D Printing, Neural Networks … – Fagen wasanni

Exploring the Future of Telecommunications: The Impact of 3D Printing, Neural Networks, and Natural Language Processing

The future of telecommunications is poised to be revolutionized by the advent of three groundbreaking technologies: 3D printing, neural networks, and natural language processing. These technologies are set to redefine the way we communicate, interact, and exchange information, thereby transforming the telecommunications landscape.

3D printing, also known as additive manufacturing, is a technology that creates three-dimensional objects from a digital file. In the telecommunications industry, 3D printing has the potential to drastically reduce the time and cost associated with the production of telecom equipment. For instance, antennas, which are crucial components of telecom infrastructure, can be 3D printed in a fraction of the time and cost it takes to manufacture them traditionally. Moreover, 3D printing allows for the creation of complex shapes and structures that are otherwise difficult to produce, thereby enabling the development of more efficient and effective telecom equipment.

Transitioning to the realm of artificial intelligence, neural networks are computing systems inspired by the human brain's biological neural networks. These systems learn from experience and improve their performance over time, making them ideal for tasks that require pattern recognition and decision-making. In telecommunications, neural networks can be used to optimize network performance, predict network failures, and enhance cybersecurity. For example, a neural network can analyze network traffic patterns to identify potential bottlenecks and suggest solutions to prevent network congestion. Similarly, it can detect unusual network activity that may indicate a cyber attack and take appropriate measures to mitigate the threat.

Lastly, natural language processing (NLP), a subfield of artificial intelligence, involves the interaction between computers and human language. NLP enables computers to understand, interpret, and generate human language, making it possible for us to communicate with computers in a more natural and intuitive way. In telecommunications, NLP can be used to improve customer service, automate routine tasks, and provide personalized experiences. For instance, telecom companies can use NLP to develop chatbots that can understand customer queries, provide relevant information, and even resolve issues without human intervention. Furthermore, NLP can analyze customer feedback to identify common issues and trends, helping telecom companies to better understand their customers and improve their services.

In conclusion, 3D printing, neural networks, and natural language processing are set to revolutionize the telecommunications industry. These technologies offer numerous benefits, including cost reduction, performance optimization, and improved customer service. However, their adoption also presents challenges, such as the need for new skills and the potential for job displacement. Therefore, as we move towards this exciting future, it is crucial for telecom companies, policymakers, and society at large to carefully consider these implications and take appropriate measures to ensure that the benefits of these technologies are realized while minimizing their potential drawbacks. The future of telecommunications is undoubtedly bright, and with the right approach, we can harness the power of these technologies to create a more connected and efficient world.


The Future is Now: Understanding and Harnessing Artificial … – North Forty News

Image created with AI by Monika Lea Jones and Bo Maxwell Stevens, AI Fusion Insights.

By:

Monika Lea Jones Chief Creative Officer, AI Fusion Insights Local Contributor, North Forty News

Bo Maxwell Stevens Founder and CEO, AI Fusion Insights Local Contributor, North Forty News

Artificial Intelligence (AI) is no longer a concept of the future; it's a present reality transforming our world. AI language models like ChatGPT, with over 100 million users, are revolutionizing the way we communicate and access information. AI refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human intellect. This includes learning from experience, understanding language, and making decisions.

AI is not just a single technology but a blend of various technologies and algorithms. These models (especially the large language models like ChatGPT) currently don't reason but instead work by detecting patterns in preexisting human-generated materials that they are trained on. Josiah Seaman, Founder of Creative Contours, describes AI as "a multiplier for human creativity and a vessel for human skill."

AI's ubiquity is undeniable. It's integrated into our daily lives, from YouTube recommendations to Spotify's music suggestions. Spotify even introduced an AI DJ, "X," that personalizes music based on your preferences and listening history. AI is expected to become even more advanced and integrated into our lives in the coming months and years.

Nikhil Krishnaswamy, a computer science professor at CSU, emphasizes the importance of everyone having input in AI's deployment. He believes that AI should be used to the maximum benefit of everyone, not just those who already have power and resources. He also emphasizes that humans should remain the final decision-makers in situations requiring value judgments and situational understanding.

AI's future promises more personalized experiences, improved data analysis, and possibly new forms of communication. However, ethical considerations are crucial. Krishnaswamy and Seaman agree that AI should eliminate undesirable tasks, not jobs. Seaman's vision of the future of AI is similar to that of Star Trek, where AI disrupts our current system of capitalism, currency, and ownership, but people can strive for loftier goals.

The impact of AI on jobs is a topic of debate. Dan Murray, founder of the Rocky Mountain AI Interest Group, suggests that while some jobs will be lost, new ones will be created. Murray has heard it said that "you won't be replaced by AI, but you might be replaced by someone who uses AI." Seaman believes AI can improve quality of life by increasing productivity, potentially reducing the need for work. This aligns with the concept of Universal Basic Income, a topic of interest for organizations like OpenAI.

Northern Colorado is already a supportive community for arts, culture and leisure such as outdoor sports in nature. These activities are often considered luxuries when our budgets are tight, but how could these areas of our lives flourish when our basic needs are met?

AI is already improving lives in various ways. Krishnaswamy cites AI's role in language learning for ESL students, while Murray mentions Furhat Robotics' social robots, which help autistic children communicate. Seaman encourages community leaders to envision a future where AI fosters inclusive, nature-protective communities. CSU Philosophy professor Paul DiRado suggests AI will shape our lives as the internet did, raising questions about how we'll interact with future Artificial General Intelligence systems that have their own motivations or interests. How can collaboration between humans and AI help influence what essentially becomes the realization of desires, human or otherwise?

While not everyone needs to use AI, staying informed about developments and understanding potential benefits is important. Murray encourages non-technical people to try the free versions of AI tools, which are often easy to use and can solve everyday problems. He also suggests sharing knowledge and joining AI interest groups.

Dan Murray notes, "Some people may think AI is hard to use. It's actually very easy, and the programming language, if you will, is simply spoken or written English. What could be easier?"

Artificial Intelligence is here and evolving rapidly. Its potential is boundless, but it must be embraced responsibly. As we integrate AI into our lives, we must consider ethical implications. There are issues that AI can perpetuate, such as surveillance, amplifying human biases, and widening inequality. Currently, AI is a tool. Just like a match, which can light a campfire or burn down a forest, the same tool could be used for both benefit and harm. The future of AI is exciting, and we're all part of its journey. As we experience the dawn of AI, we should consider how it can improve efficiency, creativity, and innovation in our lives.


The Twin Convergence: AGI And Superconductors Ushering Humanity’s Inflection Point – Medium

GPT Summary: Humanity stands at an inflection point with the imminent convergence of Artificial General Intelligence (AGI) and advancements in superconductor technology. AGI, unlike narrow AI, could offer general intelligence across various tasks, potentially outperforming humans at most economically valuable work. Concurrently, breakthroughs in superconductors, which present zero electrical resistance, promise to revolutionize technology and energy efficiency, with the prospect of room-temperature superconductors mirroring the transformation sparked by the advent of semiconductors. The convergence of these distinct fields could reshape civilization, enabling AGI's optimal operation through superconductor-facilitated quantum computing and challenging our understanding of humanity's role, our economic constructs, and societal norms. Navigating this new landscape demands a multidisciplinary approach and an introspective reevaluation of our relationship with technology and our place in the universe.

The relentless pursuit of knowledge and understanding of the universe has led humanity to crossroads that not only pose intriguing philosophical questions but also hold the potential to revolutionize society. Two such crossroads are the development of Artificial General Intelligence (AGI) and advancements in superconductor technology. In a remarkable intertwining, these two frontiers of technology and science seem to be converging, and we now stand on the brink of what could be a significant inflection point for humanity.

The Dawn of AGI

Artificial General Intelligence (AGI) represents a new era in computational intelligence. Unlike the narrow AI systems that are ubiquitous today, which perform specific tasks such as recommendation algorithms or speech recognition, AGI refers to systems that possess general intelligence across a wide range of tasks, much like human intelligence.

This transformation is nothing short of a profound shift. It has been argued that AGI may reach a level where it can outperform humans at most economically valuable work, a point referred to as Artificial Superintelligence. This advancement poses both opportunities for immense growth and existential risks that necessitate careful navigation.

The Superconductor Revolution

Simultaneously, the realm of condensed matter physics is in the throes of its own revolution. Superconductors, materials that exhibit zero electrical resistance and expulsion of magnetic fields when cooled below a critical temperature, have long fascinated scientists. The application potential is vast: lossless power transmission, high-efficiency generators, magnetic levitation, and ultrafast quantum computing, to name a few.

Recent breakthroughs have taken us closer to the elusive room temperature superconductor that could usher in a new era of electrical efficiency and technological innovation. This development could be as transformative as the advent of the semiconductor was in the last century.

The Convergence

The convergence of AGI and superconductor technology, two seemingly disparate fields, is a prospect filled with both exciting potential and complex philosophical questions.

From a technological perspective, superconductors could provide the infrastructure necessary for AGI to operate at its fullest potential. High-temperature superconductors can lead to quantum computers with incredible processing power, creating the hardware capabilities that AGI needs to blossom.

Philosophically, this convergence forces us to confront fundamental questions about our existence and purpose. If AGI surpasses human intelligence, what then becomes the role of humanity? If we reach a post-scarcity world with superconductors, how does our concept of work, economy, and society transform?

Humanitys Inflection Point

This twin convergence of AGI and superconductors signifies a profound inflection point for humanity. The scale of impact from both AGI and superconductor technologies is such that their convergence might reshape our civilization in ways we can scarcely imagine.

The confluence of AGI and superconductor technology is a compelling case study of how progress in seemingly disconnected fields can intersect to create unprecedented possibilities. We stand at the precipice of an inflection point that could redefine our very understanding of society, economy, and life itself. To navigate this new landscape effectively and ethically, we must embrace a multidisciplinary approach, engaging with technology, science, philosophy, ethics, and sociology in a concerted dialogue.

Embracing this convergence is not just about seizing opportunities but also about introspection, about redefining our relationship with technology, and ultimately about understanding our place in the universe. It is here, at the intersection of the possible and the profound, that humanity may find its next evolution.


Executive Q&A: Andrew Cardno, QCI – Indian Gaming

This month we spoke with Andrew Cardno about artificial intelligence (AI) and its counterpart, artificial general intelligence (AGI), designed to be able to solve any problem a human can. Cardno is an established thought leader in visual analytics, with over 21 years of experience in the field. He has led private Ph.D./Master's research teams in visualization and development for over 15 years, winning two Smithsonian Laureates and more than 20 international and innovation awards. Here is what he had to say...

How do you see AI intersecting with other emerging technologies like virtual reality and blockchain? Do you see synergies there that may eventually trickle down into gaming, potentially?

AI, which is what I studied formally in college, is what I've been practicing for 20 years. For the latest breakthroughs of the last eight or nine months in the space, I use the term artificial general intelligence. There's a lot of debate about whether OpenAI is general intelligence. I think it is. Academics can continue to argue about it, but I think it has passed the Turing test. Right now, we are in the middle of the biggest tech revolution that has ever happened.

Artificial general intelligence (AGI) is going to work with blockchain and VR, certainly. It's going to work everywhere. Every piece of tech, every interface, everything we are doing, all of humankind, is going to get touched by this. The importance of recent developments in AI is on par with the discovery of penicillin, the day we landed on the moon, the invention of the wheel, and the discovery of fire. Those events happened, and then forevermore, we were changed. My main takeaway for the Indian gaming world is we should be thankful for this invention. No one can forecast the future, but from my view, we are very well-positioned to do very well out of this as an industry.

What are some of the challenges and opportunities for integrating AI into the Indian gaming industry?

We are very lucky to be in the Indian gaming entertainment space. What I mean by that is, it's an industry that will benefit enormously from this technology. We as an industry suffer from a labor shortage and training challenges, and are constantly trying to improve our brands. A tribally owned resort is really a collection of small businesses built around gaming. It's enormously complex to manage all those small businesses. Through AGI we have this amazing opportunity to implement a co-pilot/automation agent that can help run the collection of businesses that comprise a resort in a much better way. It will tremendously benefit the industry.

How do you see AI being used in gaming to analyze player behavior, preferences and/or gambling patterns?

ChatGPT and OpenAI didn't exist a year ago. All the capabilities we are talking about with generative AI are new. Now, traditional AI, which is my background, has been able to do the tasks your question asks about for years and years. Can it predict? Yes. Can it forecast customers? Yes. Can it do profitability analysis and gaming optimization? It does all those things. What's changed, though, is now we have this capability for AI to work with us and understand our questions in a human way through AGI. A year ago, if you wanted to do a forecast model or something very specific, you really needed to be an expert in that area. Now, AGI changes that. It allows a human to interact in a very natural way. By making the communication more natural, it opens computational platforms to people who couldn't do them in the past. Consider the simple example of utilizing Excel. There are Excel gurus out there who can make Excel sing and dance and do all sorts of crazy things. Regular users ask these kinds of experts, "How do you do this? How do you do that? Oh, my spreadsheet isn't working. Can you fix it for me?" With AGI, it doesn't work like that anymore. Consumers can get help from an AI agent that really understands what is being requested in human terms. It's like a humanization of computer interfaces. It brings a completely natural form to computing, and what is more natural than conversation? The closest we had in the past was Google search, which we all love, right? Now you can chat with an agent instead of searching, and it's much more natural. AI brings a very natural, human communication to the things that we try to do all day.

At QCI, we've already built an interface where users can start having those conversations with complex data analytics. I've shown it to a few people, and they love it. It makes something that, in the past, was only available to people like me, with little propellers on their heads, the nerds, right? Now everyone can do analytics; it democratizes it. There are so many people in the world who used to be data disadvantaged. And now they are not. Now they can interact with an AGI agent, a co-pilot, who's effective in doing that job, allowing regular users to do computations that, in the past, they couldn't. Now, anyone can say, "Hey, I need a predictive model," and AI will help you. It removes this enormous bottleneck in analytics and puts it into the hands of anyone who is data curious, anyone who wants business answers.

How do you think AI will impact game development, and what benefits will it bring to the overall gaming ecosystem? Would you say primarily more content faster?

I'm not a game designer, but I've worked with game designers, and there are tremendous barriers to entry. The cost of production for a game is significant. It seems to be hard for new players to break in with new ideas. Through artificial intelligence, those barriers are going to become much lower. For example, AI could do the artwork on a game, the animation, and the design of pay tables and payouts. A much smaller group could now make innovative new products. And the larger groups, if they adopt this technology, will be able to have more depth in their products, more options and more configurability.

Are there any ethical considerations and potential biases associated with implementing AI algorithms that you see or are aware of?

As a technologist, broadly speaking, there are going to be industries that are impacted in very different ways than Indian gaming. Within this industry, it is humans that are working with and controlling and using these technologies. Indian gaming is full of incredibly ethical and careful people. Just about everyone who works in this industry goes through licensing. We are all aware of the consequences of a lack of ethical behavior, possibly more than any other industry in the world. Simply moving from one tribal nation to another triggers a new licensing process and background checks. This is an industry that is, by its nature and history, very ethical. It's basically a requirement for working in Indian gaming.

What measures can be taken, in your opinion, to ensure security and integrity when using AI within the Indian gaming industry?

We are such a careful industry when it comes to taking risks. We are well placed to take on this kind of technology. In our industry, more than any other industry, we have test labs, we have processes, we have evaluation and we have regulations. Will we make mistakes? Maybe, but well learn from them like with any new technology.

It seems like someday, whether it's today or in the future, AI could assist the regulators and even the labs that are approving these games, potentially.

Absolutely. AI is going to assist in these areas; it's going to assist everywhere. But we as an industry will also test, validate and monitor. I will say, without exception, tribal nations are very careful about who they do business with and how they engage with new technology. This is a careful industry.

How can AI be employed to improve data analytics and decision-making processes?

It's going to do two really big things. One is communication: allowing people to engage with an agent that can understand the data and communicate with people in a meaningful way. Why should I be doing this? Why are my customers going down? Why have they gone up? Have you looked at it this way, and that way? And the second is opening a whole new class of analytic problems. Some of the hardest problems in the industry are going to be solved using very big, very complex AI models.

From a practical standpoint, for casino and marketing executives on the ground, how do you see AI improving their workflow and creating a better experience for their players and customers?

The next stage of QCI we call Mozart. Mozart can conduct symphonies of texts and relevant personalized communications with customers. It will especially help casinos communicate with their customers who fall below the level of traditional player development. It will bring a personal touch and a one-on-one branding experience to every customer in your business. Everyone can now have this beautiful, polite, endlessly helpful interaction with your business. Customers can book shows, ask about what's fun, and talk about their last visit. They can have a meaningful discussion with this agent that is just there to help them. It's a huge change in how we can do business. And we are already personifying that.

See the original post here:

Executive Q&A: Andrew Cardno, QCI - Indian Gaming

Development of GPT-5: The Next Step in AI Technology – Fagen wasanni

The introduction of GPT-4, the improved version of the model behind ChatGPT, in March of this year was still fresh news when industry experts had already hinted at the development of GPT-5. Concerns and dangers surrounding this type of AI had raised alarms across the globe since the release of ChatGPT (built on GPT-3.5). In late March, thousands of AI experts, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a six-month pause in the development of these AI systems. The goal? To develop and implement a set of shared safety protocols, to reflect on necessary regulations, and to establish safeguards before allowing AI labs to continue an otherwise uncontrolled race.

While the CEO of OpenAI, Sam Altman, denied these rumors and stated during a conference at MIT, "We are not there and won't be there for some time," the GPT-5 trademark was registered on July 18. Siqi Chen, CEO of several tech companies, also declared on social media, "I've been told that GPT-5 should finish its training in December, and OpenAI expects it to achieve AGI [Artificial General Intelligence]."

GPT-4, the latest model available through ChatGPT, has reportedly improved its factual accuracy by 40% across all evaluated categories, such as math, history, science, and writing, according to OpenAI. It is now close to reaching 80% accuracy in its responses. Experts believe that GPT-5 will surpass the 90% accuracy mark.

The major advancement in the latest versions of GPT is the multisensory AI model. While ChatGPT only deals with text, GPT-4 can process both text and images. Experts expect that GPT-5 will have the ability to process multisensory data, including audio, video, temperature, and other forms of data.

The question remains: will GPT-5 achieve Artificial General Intelligence? OpenAI's CEO, Sam Altman, has previously described how AGI could benefit humanity but has also warned about the dangers it poses. "I think if this technology goes wrong, it can go really wrong. And we want to be out there, very loudly and clearly, saying this is risky. We want to work with the government to prevent that from happening," he declared during a hearing at the United States Senate.

While only Siqi Chen's statements suggest that GPT-5 could reach AGI, the trademark registration serves as a warning of its inevitable release in the coming months. As the competition intensifies among tech giants like Google, Apple, Facebook, and Microsoft in the chatbot technology race, the prevailing question remains: will it (soon) achieve Artificial General Intelligence? Or will regulations and safety protocols be in place beforehand?

Convergence of Brain-Inspired AI and AGI: Exploring the Path to … – Newswise

With over 86 billion neurons, each able to form up to 10,000 synapses with other neurons, the human brain gives rise to an exceptionally complex network of connections that underlies the proliferation of intelligence.

Humanity has long pursued artificial general intelligence (AGI): systems capable of achieving human-level intelligence or even surpassing it, which would enable AGI to undertake a wide range of intellectual tasks, including reasoning, problem-solving and creativity.

Brain-inspired artificial intelligence is a field that has emerged from this endeavor, integrating knowledge from neuroscience, psychology, and computer science to create AI systems that are not only more efficient but also more powerful. In a new study published in the KeAi journal Meta-Radiology, a team of researchers examined the core elements shared between human intelligence and AGI, with particular emphasis on scale, multimodality, alignment, and reasoning.

"Notably, recent advancements in large language models (LLMs) have showcased impressive few-shot and zero-shot capabilities, mimicking human-like rapid learning by capitalizing on existing knowledge," shared Lin Zhao, co-first author of the study. "In particular, in-context learning and prompt tuning play pivotal roles in presenting LLMs with exemplars to adeptly tackle novel challenges."

Moreover, the study delved into the evolutionary trajectory of AGI systems, examining both algorithmic and infrastructural perspectives. Through a comprehensive analysis of the limitations and future prospects of AGI, the researchers gained invaluable insights into the potential advancements that lie ahead within the field.

"Our study highlights the significance of investigating the human brain and creating AI systems that emulate its structure and functioning, bringing us closer to the ambitious objective of developing AGI that rivals human intelligence," said corresponding author Tianming Liu. "AGI, in turn, has the potential to enhance human intelligence and deepen our understanding of cognition. As we progress in both realms of human intelligence and AGI, they synergize to unlock new possibilities."

References

Journal: Meta-Radiology
DOI: 10.1016/j.metrad.2023.100005
URL: https://doi.org/10.1016/j.metrad.2023.100005

Past, Present, Future: AI, Geopolitics, and the Global Economy – Tech Policy Press

Chris Riley is Executive Director of the Data Transfer Initiative and a Distinguished Research Fellow at the University of Pennsylvania's Annenberg Public Policy Center.

Spurred by ChatGPT and similar generative technologies, the news is filled with articles about AI replacing humans. Sometimes the concern is over AI replacing employees, displacing jobs; sometimes it's about AI serving as a relationship partner, fulfilling human social and emotional needs. Most often, it's even more direct, taking the form of fears that AI will dispense with humanity entirely.

But as powerful as AI technologies are, these fears are little more than science fiction in the present day. They're also a distraction, though not yet, it seems, from ongoing efforts to regulate AI systems or invest in greater accountability; news and updates on both of those fronts continue to advance every day.

Rather, digital replacement fears are distracting the US from thinking about two other ways in which AI will shape our future. On the one hand, AI offers a major upside: it can amplify today's massive investments in revitalizing the country's industrial leadership. On the other, a major downside: it could contribute to breaking the already fragile post-World War II international order. These possibilities are intertwined, and their prospects will depend on US technology policy actions, or the lack thereof.

First, the upside. Through what's increasingly being called "Bidenomics," the US is witnessing a resurgence of domestic industrial and manufacturing capacity. The Inflation Reduction Act included $369 billion in incentives and direct investments specifically directed at climate change, catalyzing massive new and expanded battery and electric vehicle plants on American soil. It was followed by another $40 billion to connect every American to high-speed internet. The CHIPS and Science Act adds money for semiconductor manufacturing, as does the Bipartisan Infrastructure Law for roads and bridges.

Along with private investment, the net result is double or triple past years' investments in core US capacities. And the economic benefits are showing. Inflation is improving faster in the US than in other countries, and unemployment remains at record lows; the nation's economy is alive and well.

These investments also offer perhaps the clearest benefits of machine learning systems: improving logistics and efficiency, and handling repetitive and automatable tasks for businesses. Whether or not large language models can ever outscore top applicants to the world's best graduate schools, AI offers massive improvements in areas that the EU's AI Act would categorize as "minimal risk" of harm.

And the US has significant advantages in its capacity for developing and deploying AI to amplify its industrial investments, notably including its workforce, an advantage built in part through many years of talent immigration. Together, this is a formula for the US to reach new heights of global leadership, much as it did after its massive economic investments in the mid-20th century.

Meanwhile, AI has long been regarded as the 21st century's Space Race, given how the technology motivates nation-state-level competition for scientific progress. And just as the Space Race took place against the tense backdrop of the Cold War, the AI Race is heating up at another difficult geopolitical moment, following Russia's unprovoked invasion of Ukraine. But the international problems are not just in eastern Europe. Although denied by US officials, numerous foreign policy experts point to a trajectory toward economic decoupling of the US and China, even as trans-Pacific tensions rise over Taiwan's independence (the stakes of which are complicated in part by Taiwan's strategically important semiconductor industry).

Global harmony in the online world is no clearer than offline. Tensions among the US, China, and Europe are running high, and AI will exacerbate them. Data flows between the US and EU may be in peril if an active privacy law enforcement case against Meta by the Irish data protection authority cannot be resolved with a new data transfer agreement. TikTok remains the target of specific legislation restricting its use in the United States and Europe because of its connections to China. Because of AI, the US is considering increased export controls limiting China's access to hardware that can power AI systems, expanding on the significant constraints already in place. The EU has also expressed a goal of "de-risking" from China, though whether its words will translate to action remains an open question.

For now, the US and EU are on the same side. But in the Council of Europe, where a joint multilateral treaty for AI governance is underway, US reticence may put the endeavor in jeopardy. And the EU continues to outpace (by far) the US in passing technology laws, with significant costs for American technology companies. AI will further this disparity and the tensions it generates, as simultaneously the EU moves forward with its comprehensive AI Act, US businesses continue to flourish through AI, and Congress continues to stall on meaningful tech laws.

It seems more a matter of when, not whether, these divisions will threaten Western collaboration, including in particular on relations with China. If, for example, the simmering situation in Taiwan boils over, will the West be able to align even to the degree it did with Ukraine?

The United Nations, with Russia holding a permanent security council seat, proved far less significant than NATO in the context of the Ukraine invasion; China, too, holds such a seat. What use the UN, another relic of the mid-20th century, will hold in such a future remains to be seen.

These two paths, one of possible domestic success, the other of potential international disaster, present a quandary. But technology policy leadership offers a path forward. The Biden Administration has shown leadership on the potential societal harms of AI through its landmark Blueprint for an AI Bill of Rights and the voluntary commitments for safety and security recently adopted by leading AI companies. Now it needs to follow that with second and third acts: taking bolder steps to align with Europe on regulation and risk mitigation, and integrating support for industrial AI alongside energy and communications investments, to ensure that the greatest benefits of machine learning technologies can reach the greatest number of people.

The National Telecommunications and Information Administration (NTIA) is taking a thoughtful approach to AI accountability which, if turned into action, can dovetail with the EU's AI Act and build a united democratic front on AI. And embracing "modularity," a co-regulatory framework describing modules of codes and rules implemented by multinational, multistakeholder bodies without undermining government sovereignty, as the heart of AI governance could further stabilize international tensions on policy, without the need for a treaty. It could be a useful lever in fostering transatlantic alignment on AI through the US-EU Trade and Technology Council, for example. This would provide a more stable basis for navigating tensions with China arising from the AI Race, as well as a foundation of trust to pair with US investment in AI capacity for industrial growth.

Hopefully, such sensible policy ideas will not be drowned out by the distractions of dystopia, the grandiose ghosts of which will eventually disperse like the confident predictions of imminent artificial general intelligence made lately (just as they were many decades ago). While powerful, over time AI seems less likely to challenge humanity than to cannibalize itself, as the outputs of LLM systems inevitably make their way into the training data of successor systems, creating artifacts and errors that undermine the quality of the output and vastly increase confusion over its source. Or perhaps the often-pablum output of LLMs will fade into the miasma of late-stage online platforms, producing just "[a]nother thing you ignore or half-read," as Ryan Broderick writes in Garbage Day. At minimum, the magic we perceive in AI today will fade over time, with generative technologies revealed as what Yale computer science professor Theodore Kim calls "industrial-scale knowledge sausages."

In many ways, these scenarios, the stories of AI, the Space Race, US industrial leadership, and the first tests of the UN, began in the 1950s. In that decade, the US saw incredible economic expansion, cementing its status as a world-leading power; the Soviet Union launched the first orbiting satellite; the UN, only a few years old, faced its first serious tests in the Korean War and the Suez Crisis; and the field of AI research was born. As these stories continue to unfold, the future is deeply uncertain. And AI's role in shaping the future of US industry and the international world order may well prove to be its biggest legacy.

Chris Riley is Executive Director of the Data Transfer Initiative and a Distinguished Research Fellow at the University of Pennsylvania's Annenberg Public Policy Center. Previously, he was a senior fellow for internet governance at the R Street Institute. He has worked on tech policy in D.C. and San Francisco for nonprofit and public sector employers and managed teams based in those cities as well as Brussels, New Delhi, London, and Nairobi. Chris earned his PhD from Johns Hopkins University and a law degree from Yale Law School.

The Economic Case for Generative AI and Foundation Models – Andreessen Horowitz

Artificial intelligence has been a staple of computer science since the 1950s. Over the years, it has also made a lot of money for the businesses able to deploy it effectively. However, as we explained in a recent op-ed piece for the Wall Street Journal (which is a good starting point for the more detailed argument we make here), most of those gains have gone to large incumbent vendors (like Google or Meta) rather than to startups. Until very recently, with the advent of generative AI and all that it encompasses, we've not seen AI-first companies that seriously threaten the profits of their larger, established peers via direct competition or entirely new behaviors that make old ones obsolete.

With generative AI applications and foundation models (or frontier models), however, things look very different. Incredible performance and adoption, combined with a blistering pace of innovation, suggest we could be in the early days of a cycle that will transform our lives and economy at levels not seen since the microchip and the internet.

This post explores the economics of traditional AI and why it's typically been difficult for startups using AI as a core differentiator to reach escape velocity (something we've written about in the past). It then covers why generative AI applications and large foundation-model companies look very different, and what that may mean for our industry.

The issue with AI historically is not that it doesn't work (it has long produced mind-bending results) but rather that it's been resistant to building attractive pure-play business models in private markets. Looking at the fundamentals, it's not hard to see why getting great economics from AI has been tough for startups.

Many AI products need to provide high accuracy even in rare situations, often referred to as the "tail." And while any given situation may be rare on its own, there tend to be a lot of rare situations in aggregate. This matters because as instances get rarer, the level of investment needed to handle them can skyrocket. These can be perverse economies of scale for startups to rationalize.

For example, it may take an investment of $20 million to build a robot that can pick cherries with 80% accuracy, but the required investment could balloon to $200 million if you need 90% accuracy. Getting to 95% accuracy might take $1 billion. Not only is that a ton of upfront investment to get adequate levels of accuracy without relying too much on humans (otherwise, what is the point?), but it also results in diminishing marginal returns on capital invested. In addition to the sheer amount of dollars that may be required to hit and maintain the desired level of accuracy, the escalating cost of progress can serve as an anti-moat for leaders: they burn cash on R&D while fast followers build on their learnings and close the gap for a fraction of the cost.
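The hypothetical dollar figures above can be turned into a quick back-of-the-envelope sketch of the marginal cost per accuracy point (the numbers are the article's illustrative examples, not real data):

```python
# Hypothetical cumulative investment needed to reach each accuracy tier
# for the cherry-picking robot, using the article's illustrative figures.
tiers = [(80, 20e6), (90, 200e6), (95, 1000e6)]  # (accuracy %, cumulative $)

prev_acc, prev_cost = 0, 0.0
for acc, cost in tiers:
    # Marginal dollars spent per additional percentage point of accuracy
    marginal = (cost - prev_cost) / (acc - prev_acc)
    print(f"reaching {acc}%: ${marginal / 1e6:.2f}M per additional point")
    prev_acc, prev_cost = acc, cost
```

The marginal cost per point jumps from $0.25M (to 80%) to $18M (80 to 90%) to $160M (90 to 95%): each step toward the tail costs roughly an order of magnitude more, which is exactly the diminishing marginal return on capital the paragraph describes.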

Many of the traditional AI problem domains aren't particularly tolerant of wrong answers. For example, customer success bots should never offer bad guidance, optical character recognition (OCR) for check deposits should never misread bank accounts, and (of course) autonomous vehicles shouldn't do any number of illegal or dangerous things. Although AI has proven to be more accurate than humans for some well-defined tasks, humans often perform better on long-tail problems where context matters. Thus, AI-powered solutions often still use humans in the loop to ensure accuracy, a situation that can be difficult to scale and often becomes a burdensome cost that weighs on gross margins.

The human body and brain comprise an analog machine that's evolved over hundreds of millions of years to navigate the physical world. It consumes roughly 150 watts of energy, it runs on a bowl of porridge, it's quite good at tackling problems in the tail, and the global average wage is roughly $5 an hour. For some tasks in some parts of the world, the average wage is less than a dollar a day.

For many applications, AI is not competing with a traditional computer program, but with a human. And when the job involves one of the more fundamental capabilities of carbon life, such as perception, humans are often cheaper. Or, at least, it's far cheaper to get reasonable accuracy with a relatively small investment by using people. This is particularly true for startups, which typically don't have a large, sophisticated AI infrastructure to build from.

It's also worth noting that AI is often held to a higher standard than simply matching what humans can achieve (why change the system if the new one isn't significantly better?). So, even in cases where AI is obviously better, it's still at a disadvantage.

This is a very important, yet underappreciated, point. Likely as a result of AI largely being a complement to existing products from incumbents, it has not introduced many new use cases that have translated into new user behaviors across the broader consumer population. New user behaviors tend to underlie massive market shifts because they often start as fringe secular movements the incumbents don't understand, or don't care about. (Think about the personal microcomputer, the Internet, personal smartphones, or the cloud.) This is fertile ground for startups to cater to emergent consumer needs without having to compete against entrenched incumbents in their core areas of focus.

There are exceptions, of course, such as the new behaviors introduced by home voice assistants. But even these underscore how dominant the incumbents are in AI products, given the noticeable lack of widely adopted independents in this space.

Autonomous vehicles (AVs) are an extreme but illustrative example of why AI is hard for startups. AVs require tail correctness (getting things wrong is very, very bad); operational AV systems often rely on a lot of human oversight; and they compete with the human brain at perception (which runs at about 12 watts vs. some high-end CPU/GPU AV setups that consume over 1,300 watts). So while there are many reasons to move to AVs, including safety, efficiency, and traffic management, the economics are still not quite there when compared to ride-sharing services, let alone just driving yourself. This is despite an estimated $75 billion having been invested in AV technology.

Of course, there are narrower use cases that are more compelling, such as trucking or well-defined campus routes. Also, the economics are getting better all the time and are likely to surpass humans soon. But considering the level of investment and time it's taken to get us here, plus the ongoing operational complexity and risks, it's little wonder that generalized AVs have largely become an endeavor of large public companies, whether via incubation or acquisition.

For the reasons we laid out above, the difficulty of creating a high-margin, high-growth business where AI is the core differentiator has resulted in a well-known slog for startups attempting to do so. This hypothetical from the Wall Street Journal piece nicely encapsulates it:

In order for the startup to have sufficient correctness early on, it hires humans to perform the function it hopes the AI will automate over time. Often, this is part of an escalation path where a first cut of the AI will handle 80% of the common use cases, and humans manage the tail.

Early investors tend to be more focused on growth than on margins, so in order to raise capital and keep the board happy, the company continues to hire people rather than invest in the automation, which is proving tricky anyway because of the aforementioned complications with the long tail. By the time the company is ready for growth-level investment, it has already built out an entire organization around hiring and operationalizing humans in the loop, and it's too difficult to unwind. The result is a business that can show relatively high initial growth, but maintains a low margin and, over time, becomes difficult to scale.

This "AI mediocrity spiral" is not fatal, though, and you can indeed build sizable public companies from it. But the economics and scaling tend to lag software-centric products. Thus, we've historically not seen a wave of fast-growing AI startups with the momentum to destabilize the incumbents. Rather, they tend to steer toward the harder, grittier, more complex problems, or become services companies building bespoke solutions, because they have the people on hand to deal with those types of things.

With generative AI, however, this is all changing.

Over the last couple of years, we've seen a new wave of AI applications built on top of or incorporating large foundation models. This trend is commonly referred to as "generative AI," because the models are used to generate content (images, text, audio, etc.), or simply as "large foundation models," because the underlying technologies can be adapted to tasks beyond just content generation. For the purposes of this post, we'll refer to it all as generative AI.

Given the long history of AI, it's easy to brush this off as yet another hype cycle that will eventually cool. This time, however, AI companies have demonstrated unprecedented consumer interest and speed of adoption. Since entering the zeitgeist in mid-to-late 2022, generative AI has already produced some of the fastest-growing companies, products, and projects we've seen in the history of the technology industry. Case in point: ChatGPT took only 5 days to reach 1 million users, leaving some of the world's most iconic consumer companies in the dust. (Threads from Meta recently reached 1 million users in a few hours, but it was bootstrapped from an existing social graph, so we don't view that as an apples-to-apples comparison.)

What's even more compelling than the rapid early growth is its sustained nature and scale beyond the novelty of the product's initial launch. In the 6 months since its launch, ChatGPT reached an estimated 230 million-plus worldwide monthly active users (MAUs), per Yipit. It took Facebook until 2009 to achieve a comparable 197 million MAUs, more than 5 years after its initial launch to the Ivy League and 3 years after the social network became available to the general public.

While ChatGPT is a clear AI juggernaut, it is by no means the only generative AI success story:

The AI developer market is also seeing tremendous growth. For example, the release of the large image model Stable Diffusion blew away some of the most successful open-source developer projects in recent history in terms of speed and breadth of adoption. Meta's Llama 2 large language model (LLM) attracted many hundreds of thousands of users, via platforms such as Replicate, within days of its release in July.

These unprecedented levels of adoption are a big reason why we believe there's a very strong argument that generative AI is not only economically viable, but that it can fuel levels of market transformation on par with the microchip and the Internet.

To understand why this is the case, it's worth looking at how generative AI differs from previous attempts to commercialize AI.

Many of the use cases for generative AI are not within domains that have a formal notion of correctness. In fact, the two most common use cases currently are creative generation of content (images, stories, etc.) and companionship (virtual friend, coworker, brainstorming partner, etc.). In these contexts, being "correct" simply means appealing to or engaging the user. Further, other popular use cases, like helping developers write software through code generation, tend to be iterative, wherein the user is effectively the human in the loop, providing the feedback that improves the generated answers. Users can guide the model toward the answer they're seeking, rather than requiring the company to shoulder a pool of humans to ensure immediate correctness.

Generative AI models are incredibly general and are already being applied to a broad variety of large markets. This includes images, videos, music, games, and chat. The games and movie industries alone are worth more than $300 billion. Further, the LLMs really do "understand" natural language, and are therefore being pushed into service as a new consumption layer for programs. We're also seeing broad adoption in areas of professional pairwise interaction such as therapy, legal, education, programming, and coaching.

This all said, existing markets are only a proof point of value, and perhaps merely a launch point for generative AI. Historically, when economics and capabilities shift this dramatically, as was the case with the Internet, we see the emergence of entirely new behaviors and markets that are both impossible to predict and much larger than what preceded them.

Historically, much effort in AI has focused on replicating tasks that are easy for humans, such as object identification or navigating the physical world; essentially, things that involve perception. However, these tasks are easy for humans because the brain has evolved over hundreds of millions of years, optimizing specifically for them (picking berries, evading lions, etc.). Therefore, as we discussed above, getting the economics to work relative to a human is hard.

Generative AI, on the other hand, automates natural language processing and content creation, tasks the human brain has spent far less time evolving toward (arguably less than 100,000 years). Generative AI can already perform many of these tasks orders of magnitude cheaper, faster, and, in some cases, better than humans. Because these language-based or creative tasks are harder for humans and often require more sophistication, the white-collar jobs that perform them (for example, programmers, lawyers, and therapists) tend to command higher wages.

So while an agricultural worker in the U.S. earns on average $15 an hour, white-collar workers in the roles mentioned above are paid hundreds of dollars an hour. However, while we don't yet have robots with the fine motor skills necessary to pick strawberries economically, you'll see when we break down the costs that generative AI can perform similarly to these high-value workers at a fraction of the cost and time.

The new user behaviors that have emerged with the generative AI wave are as startling as the economics. LLMs have been pulled into service as software development partners, brainstorming companions, educators, life coaches, friends, and, yes, even lovers. Large image models have become central to new communities built entirely around the creation of fanciful new content, or the development of AI art therapy to help address issues such as mental health. These are functions that computers have not, to date, been able to fulfill, so we don't really have a good understanding of what the behavior will lead to, nor which products will best fulfill it. This all means opportunity for the new class of private generative AI companies that are emerging.

Although the use cases for this new behavior are still emerging or being created, users, critically, have already shown a willingness to pay. Many of the new generative AI companies have shown tremendous revenue growth in addition to the aforementioned user growth. Subscriber estimates for ChatGPT imply close to $500 million in annualized run-rate revenue from U.S. subscribers alone. ChatGPT aside, companies across a number of industries (including legal, copywriting, image generation, and AI companionship, to name a few) have achieved impressive and rapid revenue scale, up to hundreds of millions in run-rate revenue within their first year. For a few companies that own and train their own models, this revenue growth has even outpaced heavy training costs, in addition to inference costs (that is, the variable costs to serve customers), creating companies that are already, or will soon be, self-sustaining.

Just as the time to 1 million users has been truncated, so has the time it takes for many AI companies to hit $10-million-plus of run-rate revenue, often a fundraising hallmark for achieving product-market fit.

As a motivating example, let's look at the simple task of creating an image. Currently, the image quality produced by these models is on par with that of human artists and graphic designers, and we're approaching photorealism. As of this writing, the compute cost to create an image using a large image model is roughly $0.001, and it takes around 1 second. Doing a similar task with a designer or a photographer would cost hundreds of dollars (minimum) and many hours or days (accounting for work time, as well as schedules). Even if, for simplicity's sake, we underestimate the cost at $100 and the time at 1 hour, generative AI is 100,000 times cheaper and 3,600 times faster than the human alternative.
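For readers who want to verify the arithmetic, the comparison reduces to two simple ratios (the human-side figures are the deliberate underestimates from the paragraph above):

```python
# Cost and time to produce one image: large image model vs. a human designer.
# $100 and 1 hour deliberately underestimate what a designer or photographer
# would actually charge ("hundreds of dollars" and "hours or days").
ai_cost_usd, ai_time_sec = 0.001, 1
human_cost_usd, human_time_sec = 100, 3600  # $100, 1 hour

cost_ratio = human_cost_usd / ai_cost_usd    # ~100,000x cheaper
speed_ratio = human_time_sec / ai_time_sec   # 3,600x faster
print(f"{cost_ratio:,.0f}x cheaper, {speed_ratio:,.0f}x faster")
```

Since the human figures are underestimates, these ratios are a floor on the real advantage, not a ceiling.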

A similar analysis can be applied to many other tasks. For example, the cost for an LLM to summarize and answer questions about a complex legal brief is fractions of a penny, while a lawyer would typically charge hundreds (and up to thousands) of dollars per hour and take hours or days. The cost of an LLM therapist would also be pennies per session. And so on.

The occupations and industries impacted by the economics of AI expand well beyond the few examples listed above. We anticipate the economic value of generative AI to have a transformative and overwhelming impact on areas ranging from language education to business operations, and the magnitude of this impact to be positively correlated with the median wage of that industry. This will drive a bigger cost delta between the status quo and the AI alternative.

Of course, the LLMs would actually have to be good at these functions to realize that economic value. For this, the evidence is mounting: every day we gather more examples of generative AI being used effectively in practice for real tasks. These models continue to improve at a startling pace, and thus far are doing so without untenable increases in training costs or product pricing. We're not suggesting that large models can or will replace all work of this sort (there is little indication of that at this point), just that the economics are stunning for every hour of work that they save.

None of this is scientific, mind you, but if you sketch out an idealized case where a model is used to perform an existing service, the numbers tend to be 3-4 orders of magnitude cheaper than the current status quo, and commonly 2-3 orders of magnitude faster.

An extreme example would be the creation of an entire video game from a single prompt. Today, companies create assets for every aspect of a complex video game (3D models, voice, textures, music, images, characters, stories, and so on), and creating an AAA video game can take hundreds of millions of dollars. The cost of inference for an AI model to generate all the assets needed in a game is a few cents or tens of cents. These are microchip- or Internet-level economics.

So, are we just fueling another hype bubble that fails to deliver? We don't think so. Just as the microchip brought the marginal cost of compute to zero, and the Internet brought the marginal cost of distribution to zero, generative AI promises to bring the marginal cost of creation to zero.

Interestingly, the gains offered by the microchip and the Internet were also about 3-4 orders of magnitude. (These are all rough numbers, primarily to illustrate a point. It's a very complex topic, but we want to give a rough sense of how disruptive the Internet and the microchip were to the prevailing time and cost of doing things.) For example, ENIAC, the first general-purpose programmable computer, was 5,000 times faster than any other calculation machine of its time, and could reportedly compute the trajectory of a missile in 30 seconds, compared with at least 30 hours by hand.

Similarly, the Internet dramatically changed the calculus for moving bits across great distances. Once adequate Internet bandwidth arrived, you could download software in minutes rather than receiving it by mail in days or weeks, or driving to the local Fry's to buy it in person. Or consider the vast efficiencies of sending email, streaming video, or using basically any cloud service. The cost per bit decades ago was around $2*10^-10, so sending, say, 1 kilobyte was orders of magnitude cheaper than the price of a stamp.

For our dollar, generative AI holds a similar promise when it comes to the cost and time of generating content, everything from writing an email to producing an entire movie. Of course, all of this assumes that AI scaling continues and that we keep seeing massive gains in economics and capabilities. As of this writing, many of the experts we talk to believe we're in the very early innings of the technology and are very likely to see tremendous continued progress for years to come.

There is a lot of to-do about the defensibility, or lack thereof, of AI companies. It's an important conversation to have and, indeed, we've written about it. But when the economic benefits are as compelling as they are with generative AI, there is ample velocity to build a company around more traditional defensive moats such as scale, network effects, the long tail of enterprise distribution, brand, and so on. In fact, we're already seeing seemingly defensible business models arise in the generative AI space around two-sided marketplaces between model creators and model users, and around communities built on creative content.

So even though there doesn't seem to be obvious defensibility endemic to the tech stack (if anything, it looks like there remain perverse economics of scale), we don't believe this will hamper the impending market shift.

Broadly, we believe that a drop in the marginal cost of creation will massively drive demand. Historically, in fact, the Jevons paradox has consistently held true: when the marginal cost of a good with elastic demand (e.g., compute or distribution) goes down, demand more than increases to compensate. The result is more jobs, more economic expansion, and better goods for consumers. This was the case with the microchip and the Internet, and it'll happen with generative AI, too.

If you've ever wanted to start a company, now is the time to do it. And please keep in touch along the way.

***

The views expressed here are those of the individual AH Capital Management, L.L.C. (a16z) personnel quoted and are not the views of a16z or its affiliates. Certain information contained in here has been obtained from third-party sources, including from portfolio companies of funds managed by a16z. While taken from sources believed to be reliable, a16z has not independently verified such information and makes no representations about the enduring accuracy of the information or its appropriateness for a given situation. In addition, this content may include third-party advertisements; a16z has not reviewed such advertisements and does not endorse any advertising content contained therein.

This content is provided for informational purposes only, and should not be relied upon as legal, business, investment, or tax advice. You should consult your own advisers as to those matters. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. Furthermore, this content is not directed at nor intended for use by any investors or prospective investors, and may not under any circumstances be relied upon when making a decision to invest in any fund managed by a16z. (An offering to invest in an a16z fund will be made only by the private placement memorandum, subscription agreement, and other relevant documentation of any such fund and should be read in their entirety.) Any investments or portfolio companies mentioned, referred to, or described are not representative of all investments in vehicles managed by a16z, and there can be no assurance that the investments will be profitable or that other investments made in the future will have similar characteristics or results. A list of investments made by funds managed by Andreessen Horowitz (excluding investments for which the issuer has not provided permission for a16z to disclose publicly as well as unannounced investments in publicly traded digital assets) is available at https://a16z.com/investments/.

Charts and graphs provided within are for informational purposes solely and should not be relied upon when making any investment decision. Past performance is not indicative of future results. The content speaks only as of the date indicated. Any projections, estimates, forecasts, targets, prospects, and/or opinions expressed in these materials are subject to change without notice and may differ or be contrary to opinions expressed by others. Please see https://a16z.com/disclosures for additional important information.

The Economic Case for Generative AI and Foundation Models - Andreessen Horowitz