Development of GPT-5: The Next Step in AI Technology – Fagen wasanni

The introduction of GPT-4, the improved model behind ChatGPT, in March of this year was still fresh news when industry experts had already hinted at the development of GPT-5. Concerns about the dangers of this type of AI had been raising alarms across the globe since the release of ChatGPT (built on GPT-3.5). In late March, thousands of AI experts, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a six-month pause in the development of these AI systems. The goal? To develop and implement a set of shared safety protocols, to reflect on necessary regulations, and to establish safeguards before allowing AI labs to continue an uncontrolled race.

While the CEO of OpenAI, Sam Altman, denied these rumors and stated during a conference at MIT, "We are not there and won't be there for some time," a trademark application for GPT-5 was filed on July 18th. Siqi Chen, CEO of several tech companies, also declared on social media, "I've been told that GPT-5 should finish its training in December, and OpenAI expects it to achieve AGI [artificial general intelligence]."

GPT-4, the model now behind ChatGPT, has reportedly improved its factual accuracy by 40% over its predecessor across evaluated categories such as math, history, science, and writing, according to OpenAI. It is now close to reaching 80% accuracy in its responses, and experts believe that GPT-5 will surpass the 90% mark.

The major advancement in the latest versions of GPT is multimodality. While the original ChatGPT dealt only with text, GPT-4 can process both text and images. Experts expect that GPT-5 will be able to process multisensory data, including audio, video, temperature readings, and other inputs.

The question remains: will GPT-5 achieve artificial general intelligence? OpenAI's CEO, Sam Altman, has previously described how AGI could benefit humanity but has also warned about the dangers it poses. "I think if this technology goes wrong, it can go really wrong. And we want to be out there, very loudly and clearly, saying this is risky. We want to work with the government to prevent that from happening," he declared during a hearing before the United States Senate.

While only Siqi Chen's statements suggest that GPT-5 could reach AGI, the trademark filing signals that a release may be coming in the months ahead. As the competition intensifies among tech giants like Google, Apple, Facebook, and Microsoft in the chatbot technology race, the prevailing question remains: will GPT-5 (soon) achieve artificial general intelligence? Or will regulations and safety protocols be in place beforehand?

Convergence of Brain-Inspired AI and AGI: Exploring the Path to … – Newswise

With over 86 billion neurons, each able to form up to 10,000 synapses with other neurons, the human brain gives rise to an exceptionally complex network of connections that underlies the emergence of intelligence.

Humanity has long pursued artificial general intelligence (AGI): systems capable of achieving human-level intelligence or even surpassing it, able to undertake a wide range of intellectual tasks, including reasoning, problem-solving, and creative work.

Brain-inspired artificial intelligence is a field that has emerged from this endeavor, integrating knowledge from neuroscience, psychology, and computer science to create AI systems that are not only more efficient but also more powerful. In a new study published in the KeAi journal Meta-Radiology, a team of researchers examined the core elements shared between human intelligence and AGI, with particular emphasis on scale, multimodality, alignment, and reasoning.

"Notably, recent advancements in large language models (LLMs) have showcased impressive few-shot and zero-shot capabilities, mimicking human-like rapid learning by capitalizing on existing knowledge," shared Lin Zhao, co-first author of the study. "In particular, in-context learning and prompt tuning play pivotal roles in presenting LLMs with exemplars to adeptly tackle novel challenges."

Moreover, the study delved into the evolutionary trajectory of AGI systems, examining both algorithmic and infrastructural perspectives. Through a comprehensive analysis of the limitations and future prospects of AGI, the researchers gained invaluable insights into the potential advancements that lie ahead within the field.

"Our study highlights the significance of investigating the human brain and creating AI systems that emulate its structure and functioning, bringing us closer to the ambitious objective of developing AGI that rivals human intelligence," said corresponding author Tianming Liu. "AGI, in turn, has the potential to enhance human intelligence and deepen our understanding of cognition. As we progress in both realms of human intelligence and AGI, they synergize to unlock new possibilities."

Reference: Meta-Radiology, DOI: 10.1016/j.metrad.2023.100005, https://doi.org/10.1016/j.metrad.2023.100005

Past, Present, Future: AI, Geopolitics, and the Global Economy – Tech Policy Press

Chris Riley is Executive Director of the Data Transfer Initiative and a Distinguished Research Fellow at the University of Pennsylvania's Annenberg Public Policy Center.

Spurred by ChatGPT and similar generative technologies, the news is filled with articles about AI replacing humans. Sometimes the concern is over AI replacing employees, displacing jobs; sometimes it's about AI serving as a relationship partner, fulfilling human social and emotional needs. Most often, it's even more direct, taking the form of fears that AI will dispense with humanity entirely.

But as powerful as AI technologies are, these fears are little more than science fiction in the present day. They're also a distraction, though not yet, it seems, from ongoing efforts to regulate AI systems or invest in greater accountability. News and updates on both of these fronts continue to advance every day.

Rather, digital replacement fears are distracting the US from thinking about two other ways in which AI will shape our future. On the one hand, AI offers a major upside: It can amplify today's massive investments in revitalizing the country's industrial leadership. On the other, a major downside: It could contribute to breaking the already fragile post-World War II international order. These possibilities are intertwined, and their prospects will depend on US technology policy actions or the lack thereof.

First, the upside. Through what's increasingly being called "Bidenomics," the US is witnessing a resurgence of domestic industrial and manufacturing capacity. The Inflation Reduction Act included $369 billion in incentives and direct investments specifically directed to climate change, catalyzing massive new and expanded battery and electric vehicle plants on American soil. It was followed by another $40 billion to connect every American to high-speed internet. The CHIPS and Science Act adds money for semiconductor manufacturing, as does the Bipartisan Infrastructure Law for roads and bridges.

Along with private investment, the net result is double or triple past years' investments in core US capacities. And the economic benefits are showing. Inflation is improving faster in the US than in other countries, and unemployment remains at record lows; the nation's economy is alive and well.

These investments also offer perhaps the clearest benefits of machine learning systems: improving logistics and efficiency, and handling repetitive and automatable tasks for businesses. Whether or not large language models can ever outscore top applicants to the world's best graduate schools, AI offers massive improvements in areas that the EU's AI Act would categorize as minimal risk of harm.

And the US has significant advantages in its capacity for developing and deploying AI to amplify its industrial investments, notably including its workforce, an advantage built in part through many years of talent immigration. Together, this is a formula for the US to reach new heights of global leadership, much as it did after its massive economic investments in the mid-20th century.

Meanwhile, AI has long been regarded as the 21st century's Space Race, given how the technology motivates international, nation-state-level competition for scientific progress. And just as the Space Race took place against the tense backdrop of the Cold War, the AI Race is heating up at another difficult geopolitical moment, following Russia's unprovoked invasion of Ukraine. But the international problems are not just in eastern Europe. Although denied by US officials, numerous foreign policy experts see a trajectory toward economic decoupling of the US and China, even as trans-Pacific tensions rise over Taiwan's independence (the stakes of which are complicated in part by Taiwan's strategically important semiconductor industry).

Global harmony in the online world is no clearer than offline. Tensions among the US, China, and Europe are running high, and AI will exacerbate them. Data flows between the US and EU may be in peril if an active privacy law enforcement case against Meta by the Irish data protection authority cannot be resolved with a new data transfer agreement. TikTok remains the target of specific legislation restricting its use in the United States and Europe because of its connections to China. Because of AI, the US is considering increased export controls limiting China's access to hardware that can power AI systems, expanding on the significant constraints already in place. The EU has also expressed a goal of "de-risking" from China, though whether its words will translate to action remains an open question.

For now, the US and EU are on the same side. But in the Council of Europe, where a joint multilateral treaty for AI governance is underway, US reticence may put the endeavor in jeopardy. And the EU continues to outpace (by far) the US in passing technology laws, with significant costs for American technology companies. AI will further this disparity and the tensions it generates, as the EU simultaneously moves forward with its comprehensive AI Act, US businesses continue to flourish through AI, and Congress continues to stall on meaningful tech laws.

It seems more a matter of when, not whether, these divisions will threaten Western collaboration, including in particular on relations with China. If, for example, the simmering situation in Taiwan boils over, will the West be able to align even to the degree it did with Ukraine?

The United Nations, with Russia holding a permanent Security Council seat, proved far less significant than NATO in the context of the Ukraine invasion; China, too, holds such a seat. What role the UN, another relic of the mid-20th century, will play in such a future remains to be seen.

These two paths, one of possible domestic success, the other of potential international disaster, present a quandary. But technology policy leadership offers a path forward. The Biden Administration has shown leadership on the potential societal harms of AI through its landmark Blueprint for an AI Bill of Rights and the voluntary commitments for safety and security recently adopted by leading AI companies. Now it needs to follow that with second and third acts: taking bolder steps to align with Europe on regulation and risk mitigation, and integrating support for industrial AI alongside energy and communications investments, to ensure that the greatest benefits of machine learning technologies can reach the greatest number of people.

The National Telecommunications and Information Administration (NTIA) is taking a thoughtful approach to AI accountability, which, if turned into action, can dovetail with the EU's AI Act and build a united democratic front on AI. And embracing modularity (a co-regulatory framework describing modules of codes and rules implemented by multinational, multistakeholder bodies without undermining government sovereignty) as the heart of AI governance could further stabilize international tensions on policy, without the need for a treaty. It could be a useful lever in fostering transatlantic alignment on AI through the US-EU Trade and Technology Council, for example. This would provide a more stable basis for navigating tensions with China arising from the AI Race, as well as a foundation of trust to pair with US investment in AI capacity for industrial growth.

Hopefully, such sensible policy ideas will not be drowned out by the distractions of dystopia, the grandiose ghosts of which will eventually disperse like the confident predictions of imminent artificial general intelligence made lately (just as they were many decades ago). While powerful, over time AI seems less likely to challenge humanity than to cannibalize itself, as the outputs of LLM systems inevitably make their way into the training data of successor systems, creating artifacts and errors that undermine the quality of the output and vastly increase confusion over its source. Or perhaps the often pablum output of LLMs will fade into the miasma of late-stage online platforms, producing "just [a]nother thing you ignore or half-read," as Ryan Broderick writes in Garbage Day. At minimum, the magic we perceive in AI today will fade over time, with generative technologies revealed as what Yale computer science professor Theodore Kim calls "industrial-scale knowledge sausages."

In many ways, these scenarios (the stories of AI, the Space Race, US industrial leadership, and the first tests of the UN) began in the 1950s. In that decade, the US saw incredible economic expansion, cementing its status as a world-leading power; the Soviet Union launched the first orbiting satellite; the UN, only a few years old, faced its first serious tests in the Korean War and the Suez Crisis; and the field of AI research was born. As these stories continue to unfold, the future is deeply uncertain. And AI's role in shaping the future of US industry and the international world order may well prove to be its biggest legacy.

Chris Riley is Executive Director of the Data Transfer Initiative and a Distinguished Research Fellow at the University of Pennsylvania's Annenberg Public Policy Center. Previously, he was a senior fellow for internet governance at the R Street Institute. He has worked on tech policy in D.C. and San Francisco for nonprofit and public sector employers and managed teams based in those cities as well as Brussels, New Delhi, London, and Nairobi. Chris earned his PhD from Johns Hopkins University and a law degree from Yale Law School.

The Economic Case for Generative AI and Foundation Models – Andreessen Horowitz

Artificial intelligence has been a staple in computer science since the 1950s. Over the years, it has also made a lot of money for the businesses able to deploy it effectively. However, as we explained in a recent op-ed piece for the Wall Street Journal (which is a good starting point for the more detailed argument we make here), most of those gains have gone to large incumbent vendors (like Google or Meta) rather than to startups. Until very recently, with the advent of generative AI and all that it encompasses, we've not seen AI-first companies that seriously threaten the profits of their larger, established peers via direct competition or entirely new behaviors that make old ones obsolete.

With generative AI applications and foundation models (or frontier models), however, things look very different. Incredible performance and adoption, combined with a blistering pace of innovation, suggest we could be in the early days of a cycle that will transform our lives and economy at levels not seen since the microchip and the internet.

This post explores the economics of traditional AI and why it's typically been difficult for startups using AI as a core differentiator to reach escape velocity (something we've written about in the past). It then covers why generative AI applications and large foundation-model companies look very different, and what that may mean for our industry.

The issue with AI historically is not that it doesn't work (it has long produced mind-bending results), but rather that it's been resistant to building attractive pure-play business models in private markets. Looking at the fundamentals, it's not hard to see why getting great economics from AI has been tough for startups.

Many AI products need to provide high accuracy even in rare situations, often referred to as "the tail." And while any given situation may be rare on its own, there tend to be a lot of rare situations in aggregate. This matters because as instances get rarer, the level of investment needed to handle them can skyrocket. These can be perverse economies of scale for startups to rationalize.

For example, it may take an investment of $20 million to build a robot that can pick cherries with 80% accuracy, but the required investment could balloon to $200 million if you need 90% accuracy. Getting to 95% accuracy might take $1 billion. Not only is that a ton of upfront investment to get adequate levels of accuracy without relying too much on humans (otherwise, what is the point?), but it also results in diminishing marginal returns on capital invested. In addition to the sheer amount of dollars that may be required to hit and maintain the desired level of accuracy, the escalating cost of progress can serve as an anti-moat for leaders: they burn cash on R&D while fast followers build on their learnings and close the gap for a fraction of the cost.
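
To make the diminishing returns concrete, here is a minimal Python sketch that computes the marginal cost of each additional point of accuracy, using only the hypothetical dollar figures from the paragraph above:

```python
# Hypothetical figures from the cherry-picking example above:
# cumulative investment required to reach each accuracy level.
investments = {0.80: 20e6, 0.90: 200e6, 0.95: 1e9}

prev_acc, prev_cost = None, None
for acc, cost in sorted(investments.items()):
    if prev_acc is not None:
        # Marginal cost per additional percentage point of accuracy.
        per_point = (cost - prev_cost) / ((acc - prev_acc) * 100)
        print(f"{prev_acc:.0%} -> {acc:.0%}: ${per_point / 1e6:,.0f}M per point")
    prev_acc, prev_cost = acc, cost

# Output:
# 80% -> 90%: $18M per point
# 90% -> 95%: $160M per point -- nearly 9x more expensive per point
```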

Many of the traditional AI problem domains aren't particularly tolerant of wrong answers. For example, customer success bots should never offer bad guidance, optical character recognition (OCR) for check deposits should never misread bank accounts, and (of course) autonomous vehicles shouldn't do any number of illegal or dangerous things. Although AI has proven to be more accurate than humans for some well-defined tasks, humans often perform better for long-tail problems where context matters. Thus, AI-powered solutions often still use humans in the loop to ensure accuracy, a situation that can be difficult to scale and often becomes a burdensome cost that weighs on gross margins.

The human body and brain comprise an analog machine that's evolved over hundreds of millions of years to navigate the physical world. It consumes roughly 150 watts of energy, it runs on a bowl of porridge, it's quite good at tackling problems in the tail, and the global average wage is roughly $5 an hour. For some tasks in some parts of the world, the average wage is less than a dollar a day.

For many applications, AI is not competing with a traditional computer program, but with a human. And when the job involves one of the more fundamental capabilities of carbon life, such as perception, humans are often cheaper. Or, at least, it's far cheaper to get reasonable accuracy with a relatively small investment by using people. This is particularly true for startups, which typically don't have a large, sophisticated AI infrastructure to build from.

It's also worth noting that AI is often held to a higher standard than simply what humans can achieve (why change the system if the new one isn't significantly better?). So, even in cases where AI is obviously better, it's still at a disadvantage.

This is a very important, yet underappreciated, point. Likely because AI has largely been a complement to existing products from incumbents, it has not introduced many new use cases that have translated into new user behaviors across the broader consumer population. New user behaviors tend to underlie massive market shifts because they often start as fringe secular movements the incumbents don't understand, or don't care about. (Think about the personal microcomputer, the Internet, personal smartphones, or the cloud.) This is fertile ground for startups to cater to emergent consumer needs without having to compete against entrenched incumbents in their core areas of focus.

There are exceptions, of course, such as the new behaviors introduced by home voice assistants. But even these underscore how dominant the incumbents are in AI products, given the noticeable lack of widely adopted independents in this space.

Autonomous vehicles (AVs) are an extreme but illustrative example of why AI is hard for startups. AVs require tail correctness (getting things wrong is very, very bad); operational AV systems often rely on a lot of human oversight; and they compete with the human brain at perception (which runs at about 12 watts vs. some high-end CPU/GPU AV setups that consume over 1,300 watts). So while there are many reasons to move to AVs, including safety, efficiency, and traffic management, the economics are still not quite there when compared to ride-sharing services, let alone just driving yourself. This is despite an estimated $75 billion having been invested in AV technology.

Of course, there are narrower use cases that are more compelling, such as trucking or well-defined campus routes. Also, the economics are getting better all the time and are likely to surpass humans soon. But considering the level of investment and time it's taken to get us here, plus the ongoing operational complexity and risks, it's little wonder that generalized AVs have largely become an endeavor of large public companies, whether via incubation or acquisition.

For the reasons we laid out above, the difficulty of creating a high-margin, high-growth business where AI is the core differentiator has resulted in a well-known slog for startups attempting to do so. This hypothetical from the Wall Street Journal piece nicely encapsulates it:

In order for the startup to have sufficient correctness early on, it hires humans to perform the function it hopes the AI will automate over time. Often, this is part of an escalation path where a first cut of the AI will handle 80% of the common use cases, and humans manage the tail.

Early investors tend to be more focused on growth than on margins, so in order to raise capital and keep the board happy, the company continues to hire people rather than invest in the automation, which is proving tricky anyway because of the aforementioned complications with the long tail. By the time the company is ready for growth-level investment, it has already built out an entire organization around hiring and operationalizing humans in the loop, and it's too difficult to unwind. The result is a business that can show relatively high initial growth, but maintains a low margin and, over time, becomes difficult to scale.
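
In practice, the escalation path described in this hypothetical often amounts to a confidence-threshold router. The following is a minimal, self-contained Python sketch; all names and the rough 80/20 split are hypothetical illustrations, not taken from any real product:

```python
import random
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # model's self-reported confidence in [0, 1]

def model_answer(request: str) -> Answer:
    # Stand-in for a real model: most requests are common and easy,
    # while a minority fall in the long tail where confidence drops.
    conf = random.choice([0.95, 0.95, 0.95, 0.95, 0.4])  # ~80/20 split
    return Answer(text=f"automated reply to: {request}", confidence=conf)

def escalate_to_human(request: str) -> str:
    # The tail path: every call here is a human on the payroll,
    # which is exactly the cost structure that weighs on gross margins.
    return f"human-handled reply to: {request}"

def handle(request: str, threshold: float = 0.9) -> str:
    answer = model_answer(request)
    if answer.confidence >= threshold:
        return answer.text
    return escalate_to_human(request)

print(handle("where is my order?"))
```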

This "AI mediocrity spiral" is not fatal, though, and you can indeed build sizable public companies from it. But the economics and scaling tend to lag software-centric products. Thus, we've historically not seen a wave of fast-growing AI startups with the momentum to destabilize the incumbents. Rather, they tend to steer toward the harder, grittier, more complex problems, or become services companies building bespoke solutions, because they have the people on hand to deal with those types of things.

With generative AI, however, this is all changing.

Over the last couple of years, we've seen a new wave of AI applications built on top of, or incorporating, large foundation models. This trend is commonly referred to as "generative AI," because the models are used to generate content (image, text, audio, etc.), or simply as "large foundation models," because the underlying technologies can be adapted to tasks beyond just content generation. For the purposes of this post, we'll refer to it all as generative AI.

Given the long history of AI, it's easy to brush this off as yet another hype cycle that will eventually cool. This time, however, AI companies have demonstrated unprecedented consumer interest and speed of adoption. Since entering the zeitgeist in mid-to-late 2022, generative AI has already produced some of the fastest-growing companies, products, and projects we've seen in the history of the technology industry. Case in point: ChatGPT took only 5 days to reach 1 million users, leaving some of the world's most iconic consumer companies in the dust. (Threads from Meta recently reached 1 million in a few hours, but it was bootstrapped from an existing social graph, so we don't view that as an apples-to-apples comparison.)

What's even more compelling than the rapid early growth is its sustained nature and scale beyond the novelty of the product's initial launch. In the 6 months since its launch, ChatGPT reached an estimated 230-million-plus worldwide monthly active users (MAUs), per Yipit. It took Facebook until 2009 to achieve a comparable 197 million MAUs: more than 5 years after its initial launch to the Ivy League and 3 years after the social network became available to the general public.

While ChatGPT is a clear AI juggernaut, it is by no means the only generative AI success story.

The AI developer market is also seeing tremendous growth. For example, the release of the large image model Stable Diffusion blew away some of the most successful open-source developer projects in recent history in terms of speed and breadth of adoption. Meta's Llama 2 large language model (LLM) attracted many hundreds of thousands of users, via platforms such as Replicate, within days of its release in July.

These unprecedented levels of adoption are a big reason why we believe there's a very strong argument that generative AI is not only economically viable, but that it can fuel levels of market transformation on par with the microchip and the Internet.

To understand why this is the case, it's worth looking at how generative AI is different from previous attempts to commercialize AI.

Many of the use cases for generative AI are not within domains that have a formal notion of correctness. In fact, the two most common use cases currently are creative generation of content (images, stories, etc.) and companionship (virtual friend, coworker, brainstorming partner, etc.). In these contexts, being "correct" simply means appealing to or engaging the user. Further, other popular use cases, like helping developers write software through code generation, tend to be iterative: the user is effectively the human in the loop, providing the feedback that improves the answers generated. Users can guide the model toward the answer they're seeking, rather than requiring the company to shoulder the cost of a pool of humans to ensure immediate correctness.

Generative AI models are incredibly general and are already being applied to a broad variety of large markets. This includes images, videos, music, games, and chat. The games and movie industries alone are worth more than $300 billion. Further, LLMs really do "understand" natural language, and are therefore being pushed into service as a new consumption layer for programs. We're also seeing broad adoption in areas of professional pairwise interaction such as therapy, legal, education, programming, and coaching.

This all said, existing markets are only a proof point of value, and perhaps merely a launch point for generative AI. Historically, when economics and capabilities shift this dramatically, as was the case with the Internet, we see the emergence of entirely new behaviors and markets that are both impossible to predict and much larger than what preceded them.

Historically, much effort in AI has focused on replicating tasks that are easy for humans, such as object identification or navigating the physical world: essentially, things that involve perception. However, these tasks are easy for humans because the brain has evolved over hundreds of millions of years, optimizing specifically for them (picking berries, evading lions, etc.). Therefore, as we discussed above, getting the economics to work relative to a human is hard.

Generative AI, on the other hand, automates natural language processing and content creation: tasks the human brain has spent far less time evolving toward (arguably less than 100,000 years). Generative AI can already perform many of these tasks orders of magnitude cheaper, faster, and, in some cases, better than humans. Because these language-based or creative tasks are harder for humans and often require more sophistication, the white-collar jobs built around them (for example, programmers, lawyers, and therapists) tend to command higher wages.

So while an agricultural worker in the U.S. earns on average $15 an hour, white-collar workers in the roles mentioned above are paid hundreds of dollars an hour. And while we don't yet have robots with the fine motor skills necessary for picking strawberries economically, you'll see when we break down the costs that generative AI can perform similarly to these high-value workers at a fraction of the cost and time.

The new user behaviors that have emerged with the generative AI wave are as startling as the economics. LLMs have been pulled into service as software development partners, brainstorming companions, educators, life coaches, friends, and, yes, even lovers. Large image models have become central to new communities built entirely around the creation of fanciful new content, or to the development of AI art therapy to help address issues such as mental health. These are functions that computers have not, to date, been able to fulfill, so we don't yet have a good understanding of what these behaviors will lead to, nor of which products will best serve them. That all means opportunity for the new class of private generative AI companies that are emerging.

Although the use cases for this new behavior are still emerging or being created, users, critically, have already shown a willingness to pay. Many of the new generative AI companies have shown tremendous revenue growth in addition to the aforementioned user growth. Subscriber estimates for ChatGPT imply close to $500 million in annualized run-rate revenue from U.S. subscribers alone. ChatGPT aside, companies across a number of industries (including legal, copywriting, image generation, and AI companionship, to name a few) have achieved impressive and rapid revenue scale, up to hundreds of millions of run-rate revenue within their first year. For a few companies that own and train their own models, this revenue growth has even outpaced heavy training costs, in addition to inference costs (that is, the variable costs to serve customers), creating companies that are already, or soon will be, self-sustaining.

Just as the time to 1 million users has been truncated, so has the time it takes for many AI companies to hit $10-million-plus of run-rate revenue, often a fundraising hallmark for achieving product-market fit.

As a motivating example, let's look at the simple task of creating an image. Currently, the image quality produced by these models is on par with that produced by human artists and graphic designers, and we're approaching photorealism. As of this writing, the compute cost to create an image using a large image model is roughly $0.001, and it takes around 1 second. Doing a similar task with a designer or a photographer would cost hundreds of dollars (minimum) and take many hours or days (accounting for work time, as well as schedules). Even if, for simplicity's sake, we underestimate the cost to be $100 and the time to be 1 hour, generative AI is 100,000 times cheaper and 3,600 times faster than the human alternative.
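
The arithmetic behind those two ratios is simple enough to sketch in a few lines of Python (the figures are the illustrative ones from the paragraph above, not measured benchmarks):

```python
# Illustrative figures from the image-generation comparison above.
ai_cost_usd, ai_time_s = 0.001, 1           # large image model, per image
human_cost_usd, human_time_s = 100, 3600    # deliberately underestimated designer

print(f"cheaper by: {human_cost_usd / ai_cost_usd:,.0f}x")  # 100,000x
print(f"faster by:  {human_time_s / ai_time_s:,.0f}x")      # 3,600x
```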

A similar analysis can be applied to many other tasks. For example, the costs for an LLM to summarize and answer questions on a complex legal brief is fractions of a penny, while a lawyer would typically charge hundreds (and up to thousands) of dollars per hour and would take hours or days. The cost of an LLM therapist would also be pennies per session. And so on.

The occupations and industries impacted by the economics of AI extend well beyond the few examples listed above. We anticipate that generative AI will have a transformative and overwhelming economic impact on areas ranging from language education to business operations, and that the magnitude of this impact will be positively correlated with the median wage of the industry in question: the higher the wage, the bigger the cost delta between the status quo and the AI alternative.

Of course, the LLMs would actually have to be good at these functions to realize that economic value. For this, the evidence is mounting: every day we gather more examples of generative AI being used effectively in practice for real tasks. The models continue to improve at a startling pace, and thus far are doing so without untenable increases in training costs or product pricing. We're not suggesting that large models can or will replace all work of this sort (there is little indication of that at this point), just that the economics are stunning for every hour of work that they save.

None of this is scientific, mind you, but if you sketch out an idealized case where a model is used to perform an existing service, the numbers tend to be 3-4 orders of magnitude cheaper than the current status quo, and commonly 2-3 orders of magnitude faster.

An extreme example would be the creation of an entire video game from a single prompt. Today, companies create assets for every aspect of a complex video game (3D models, voice, textures, music, images, characters, stories, etc.), and creating a AAA video game can take hundreds of millions of dollars. The cost of inference for an AI model to generate all the assets needed in a game is a few cents or tens of cents. These are microchip- or Internet-level economics.

So, are we just fueling another hype bubble that fails to deliver? We don't think so. Just like the microchip brought the marginal cost of compute to zero, and the Internet brought the marginal cost of distribution to zero, generative AI promises to bring the marginal cost of creation to zero.

Interestingly, the gains offered by the microchip and the Internet were also about 3-4 orders of magnitude. (These are all rough numbers, primarily to illustrate a point. It's a very complex topic, but we want to provide a rough sense of how disruptive the Internet and the microchip were to the time and cost of doing things.) For example, ENIAC, the first general-purpose programmable computer, was 5,000 times faster than any other calculation machine at the time, and purportedly could compute the trajectory of a missile in 30 seconds, compared with at least 30 hours by hand.

Similarly, the Internet dramatically changed the calculus for moving bits across great distances. Once adequate Internet bandwidth arrived, you could download software in minutes rather than receiving it by mail in days or weeks, or driving to the local Fry's to buy it in person. Or consider the vast efficiencies of sending emails, streaming video, or using basically any cloud service. The cost per bit decades ago was around $2x10^-10, so sending, say, 1 kilobyte was orders of magnitude cheaper than the price of a stamp.
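
A quick back-of-the-envelope check of that claim, assuming a historical stamp price of roughly $0.30 (our assumption, not a figure from the original):

```python
cost_per_bit = 2e-10                        # dollars per bit, per the figure above
kilobyte_bits = 1024 * 8
email_cost = cost_per_bit * kilobyte_bits   # ~$0.0000016 to send 1 KB
stamp_cost = 0.30                           # assumed historical stamp price

print(f"stamp / email cost: ~{stamp_cost / email_cost:,.0f}x")
# ~183,000x, i.e., five-plus orders of magnitude cheaper
```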

For our dollar, generative AI holds a similar promise when it comes to the cost and time of generating content: everything from writing an email to producing an entire movie. Of course, all of this assumes that AI scaling continues and we continue to see massive gains in economics and capabilities. As of this writing, many of the experts we talk to believe we're in the very early innings for the technology and we're very likely to see tremendous continued progress for years to come.

There is a lot of to-do about the defensibility, or lack thereof, of AI companies. It's an important conversation to have and, indeed, we've written about it. But when the economic benefits are as compelling as they are with generative AI, there is ample velocity to build a company around more traditional defensive moats such as scale, network effects, the long tail of enterprise distribution, brand, etc. In fact, we're already seeing seemingly defensible business models arise in the generative AI space around two-sided marketplaces between model creators and model users, and around communities built on creative content.

So even though there doesn't seem to be obvious defensibility endemic to the tech stack (if anything, it looks like there remain perverse economies of scale), we don't believe this will hamper the impending market shift.

Broadly, we believe that a drop in the marginal cost of creation will massively drive demand. Historically, in fact, the Jevons paradox consistently proves true: when the marginal cost of a good with elastic demand (e.g., compute or distribution) goes down, demand increases more than enough to compensate. The result is more jobs, more economic expansion, and better goods for consumers. This was the case with the microchip and the Internet, and it'll happen with generative AI, too.

If you've ever wanted to start a company, now is the time to do it. And please keep in touch along the way.

Exploring the future of AI: The power of decentralization – Cointelegraph

The field of artificial intelligence (AI) is taking the world by storm, but many people have found themselves looking up at the sky, wondering where all the rain came from.

Those who didn't realize the place AI already holds in our everyday lives are having a hard time understanding what further advancements mean for society as a whole.

Wrapping your head around the technology itself is a challenge for most, but it gets even more complicated when broken down. No longer are people just using the umbrella term "artificial intelligence"; they are saying "narrow AI," "superintelligence" and "artificial general intelligence" (AGI). Companies are using the terms "machine learning" and "deep learning" when explaining the technologies they have incorporated to streamline their business practices.

The push to advance AI started long before the conversation about it did, and those advancements have benefited businesses across industries. The potential for what the future holds with this technology has been particularly enthralling for those in the Web3 space.

Irina Jadallah, co-founder of Ticketmeta, a nonfungible token-based ticketing solution and decentralized streaming service for sports events, spoke to Cointelegraph about AI's potential in the metaverse.

But the impact of AI does not stop with the metaverse; it has already been shown that AI has the potential to revolutionize various fields, from marketing to finance. As exciting as it may be, the popularity of this technology, as Jadallah pointed out, now poses a rather significant question.

As it becomes more advanced and more desired by the public, it also becomes more expensive, raising the risk of centralization. This collective concern has created a new buzzword: "decentralized AI."

As with all things, centralization is not inherently a bad thing, but it does pose some issues where AI is concerned.

If only a small number of organizations can afford to use the technology, they will be able to control how it advances, risking it becoming everything many people fear it to be.

This concern about centralized AI is one that many in the space are already discussing and working against, among them Marcello Mari, founder of SingularityDAO, an asset management company that uses AI for trading strategies.

In contrast, decentralized AI could allow individuals to have more of a say in the products they use while having a broader range of models to choose from.

"This is why we even founded our company back in 2017, because it's very important that we start thinking now about what the next AGI or superhuman intelligence will look like," said Mari. "In order to make it benevolent, you want to have a decentralized layer so that the community can actually influence and be comfortable with the development of AGI."

Decentralized AI could incorporate blockchain technology, which already has a reputation for security and transparency.

"Blockchain technology is a safe and open system for monitoring information and ensuring it stays unaltered," said Anna Ivanchenko, co-founder and CEO of Ticketmeta. "It's used to create credibility and trust."

People have a preference for public blockchains because they are often governed by the community and not a central authority. "Code becomes law," adding a level of trustlessness not seen in other industries. According to CoinGecko, there are already more than 50 blockchain-based AI companies, and many people expect this number to grow exponentially over the coming years. Companies such as Render, Fetch.ai and SingularityNET have led the charge in 2023.

Mari's SingularityDAO is democratically governed by its community, who can have input into how its AI-DeFi model operates. People having a say is the main differentiating factor between centralized and decentralized AI. With centralized AI, the average user has negligible influence over how the AI models function.

Encouraging the community to take part in the development and direction of AI, allowing them to influence where it goes and what it does, will likely play a significant role in easing their concerns. Decentralized AI could very well make people more comfortable with AI as a whole, easing the transition of the technology into one that we use every day.

Of course, it's never easy with new tech, and decentralized AI is no exception. It shares a common challenge with centralized AI, namely the "black box" problem: a lack of transparency in how AI models operate and reach conclusions.

This opacity can understandably breed distrust. However, as Cointelegraph recently highlighted, there is hope: Explainable AI (XAI) and open-source models are emerging as promising avenues to address the black box issue in decentralized AI.

Decentralized AI enhances security in several ways. For example, by leveraging blockchain technology, it offers encryption and immutability, ensuring that data remains both secure and unchanged.

It can proactively detect anomalies or suspicious patterns in data, acting as an early warning system against potential breaches. The need for decentralization arises from its inherent design: Instead of having a single point of vulnerability, data is distributed across multiple nodes, making unauthorized access or tampering significantly more challenging.
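
To make the immutability point concrete, here is a minimal Python sketch of the hash-chaining idea that underpins blockchain tamper evidence (a simplified illustration, not any particular chain's implementation):

```python
import hashlib
import json

GENESIS = "0" * 64

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

# Append-only chain: each entry commits to everything before it.
chain, prev = [], GENESIS
for record in [{"model": "v1", "training_data_checksum": "abc123"},
               {"model": "v2", "training_data_checksum": "def456"}]:
    prev = block_hash(record, prev)
    chain.append((record, prev))

def verify(chain) -> bool:
    """Recompute every hash; editing any earlier record breaks all later links."""
    prev = GENESIS
    for record, stored in chain:
        if block_hash(record, prev) != stored:
            return False
        prev = stored
    return True

print(verify(chain))                   # True
chain[0][0]["model"] = "v1-tampered"
print(verify(chain))                   # False: the tampering is detected
```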

Decentralized AI is championing the cause of transparency and trust in a world that's becoming more data-driven by the day. Traditional AI systems often suffer from opaque decision-making processes, raising trust and accountability issues. However, decentralized AI systems, like SingularityNET, stand out with their inherent transparency, recording every transaction and decision on the blockchain.

Despite still being in its infancy, decentralized AI provides hope of solving the aforementioned black box issue because of the inherent transparency that comes with blockchain technology.

IT pros mull observability tools, devx and generative AI – TechTarget

As platform engineering teams increasingly take on enterprise performance management tasks in production, there have been missed opportunities to give developers insights into their applications, experts say.

Observability is an area where platform engineers and SREs have stepped in on behalf of some application developers, who aren't as steeped in the complexities of distributed cloud infrastructure systems such as Kubernetes. Analysts have also seen an increase in observability teams, specifically, within the platform engineering discipline that connect developers' application performance insights with underlying infrastructure data.

"[There's] a move toward centralizing observability teams and centers of excellence," said Nancy Gohring, an analyst at IDC. "One driver for doing that is to try to control costs -- and one way those teams are trying to control costs is setting up data [storage] quotas for teams."

Such teams don't replace the need for developers to instrument their own application code but have helped ease the burden of managing the ongoing operational costs associated with collecting observability data, Gohring said.

There are some aspects of infrastructure monitoring, too, that developers don't need to concern themselves with, said Gregg Siegfried, an analyst at Gartner. Still, there remains a divide between the interests of platform teams in production observability and the interests of application developers, Siegfried said.

"I see an emergence of tools trying to give developers closer access to that data, to give them more insight, maybe allow them to put better instrumentation into the software," he said. "But none of them have really set the world on fire yet."

It's a commonly understood best practice in observability that developers instrument their own code before it's deployed to production, the better to manage its performance in the "you build it, you run it" mode of DevOps.

"I'm part of the OpenTelemetry End User Working Group, and recently we had somebody come in and talk to our user community about how they work in a company that really fosters an observability culture," said Adriana Villela, developer advocate at ServiceNow's Lightstep, an observability vendor, in a presentation at the recent Monitorama conference. "The wonderful thing about it is that there is a directive from the executive saying, 'Thou shalt do observability and also developers are the ones instrumenting their own code,' which means that if you've got some disgruntled development team saying, 'I don't have time to instrument my code,' tough [s---]."

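As a point of reference, the self-instrumentation that quote describes is typically only a few lines of code. Here is a minimal sketch using OpenTelemetry's Python SDK (the service and span names are hypothetical, and a real deployment would export to a collector rather than the console):

```python
# pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer that prints spans to stdout (a collector in production).
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def process_order(order_id: str) -> None:
    # The developer wraps their own business logic in a span and attaches
    # the application-level context that only they know is meaningful.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic goes here ...

process_order("o-12345")
```
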
But some newer entrants to the market and their early customers question whether the developer experience (devx) with observability needs to be quite so tough.

"Developers being able to add custom metrics to their code or spans or use observability tools is really critical to help developers take ownership of what they run in production," said Joseph Ruscio, a general partner at Heavybit, an early-stage investor in cloud infrastructure startups, in a Monitorama presentation.

However, to a new engineer, the overwhelming number of tools available for observability is "inscrutable and not at all welcoming to someone new to the craft," Ruscio said.

A production engineering team at a market research company is trying to make this task less onerous for developers using a new Kubernetes-based APM tool from Groundcover. Groundcover uses eBPF to automatically gather data from Kubernetes clusters and associate it with specific applications, which could eventually replace the language-specific SDKs developers used to instrument applications using incumbent vendor Datadog.

"For what we are calling custom metrics that monitor a specific application's behavior, these will continue to be the responsibility of the developers," said Eli Yaacov, a production engineer at SimilarWeb, based in New York. "But we, the production engineers, can provide the developers the [rest of] the ecosystem. For example, if they are running Kubernetes, they don't need to worry about [instrumenting for] the default CPU or memory. Groundcover collects all this data in Kubernetes without requiring the developers to integrate with anything into their services."

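For the custom, application-specific metrics Yaacov describes, the developer still has to emit the signal themselves; no agent can infer its business meaning. Here is a hedged sketch of what that might look like with OpenTelemetry's metrics API (the metric name and attributes are hypothetical):

```python
# pip install opentelemetry-sdk
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Export metrics to stdout here; a real setup would point at a backend.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("payments-service")  # hypothetical service name
failed_charges = meter.create_counter(
    "payments.failed_charges",
    description="Charges declined by the payment processor",
)

# Only the developer knows this business event matters.
failed_charges.add(1, {"reason": "card_declined"})
```
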
Other emerging vendors also offer automated instrumentation features in debugging tools to instrument developers' apps without requiring code changes. These include Lightrun and Rookout.

Amid this year's general hype about generative AI, observability vendors have been quick to roll out natural language interfaces for their tools, mostly to add a user-friendly veneer over their relatively complex, often proprietary, data query languages. Such vendors include Honeycomb, Splunk, and most recently, Dynatrace and Datadog.

However, generative AI interfaces are not necessarily an obvious slam dunk to improve the developer experience of using observability tools, Siegfried said, as most developers are comfortable working in code.

"They have better things to do with their time than learn how to use an [application performance management] solution," he said.

Long term, generative AI and artificial general intelligence may have a significant effect, Ruscio said. But in the short term, he said he is skeptical that large language models such as ChatGPT will make a major impact on observability, particularly the developer experience.

Instead, unlike security and production-level systems monitoring, observability has yet to shift very far left in the development lifecycle -- and developers would be best served by changing that, Ruscio said during his presentation. New and emerging vendors, some of which are among Heavybit's portfolio companies, are working in this area, termed observability-driven development.

"There's this missing mode where, wouldn't it be nice if you had some input when you are actually writing code as to, what does this code look like in production?" Ruscio said. "It's cool that when I ship it, I'll get a graph. But why shouldn't I just know now, in my IDE, [how it will perform?]"

Beth Pariseau, senior news writer at TechTarget, is an award-winning veteran of IT journalism. She can be reached by email or on Twitter @PariseauTT.

The future of learning and skilling with AI in the picture – Chief Learning Officer

"Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we'll augment our intelligence."

Ginni Rometty, former chairman and CEO of IBM

Imagine if you could learn anything, at any time, with speed. That's the utopia that artificial intelligence could promise us. Our capabilities to learn could become limitless, with AI enhancing our ability to consume information, discover new things and explore alternative career paths.

This isn't something far off in the future; it's happening right now in some professions. In law, for example, AI is being used to sift through mounds of legal paperwork and data in minutes. In health care, AI has been found to detect some cancers with more accuracy than human doctors. It promises to be just as revolutionary in the learning and development industry.

First, generative AI (as we've seen with ChatGPT and Midjourney) can help L&D teams create content with just a few prompts and clicks. Of course, quality control is essential here to ensure what you're creating is actually valuable and skill-building. But given that creating content for online learning can take up to 155 hours, the time savings of using generative AI cannot be overlooked.

Other forms of AI, like recommendation engines, will be able to suggest L&D content to individuals based on their existing skills, skills gaps (identified through their career goals or business needs), learning preferences, role and interests. With AI, learning will become more relevant and tailored to each person, which also levels the playing field for those from non-traditional academic backgrounds, neurodiverse employees and those who haven't had time or access to traditional learning opportunities.

By feeding an AI tool with data on an individual, the algorithm can sift through all of the L&D content available in your organization and show them the best opportunities for their needs. That might be something highly visual for someone who identifies as a visual learner, or it might be consumed on the go by others who are often commuting.
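
In its simplest form, this kind of matching is just ranking the catalog against a learner's skill gaps. A toy Python sketch of the idea (the course titles and skill tags are invented for illustration; production engines use far richer signals like learning preferences and format):

```python
def recommend(learner_skills: set, target_skills: set, catalog: dict, top_n: int = 3):
    """Rank courses by how many of the learner's gap skills they teach."""
    gaps = target_skills - learner_skills
    scored = [(len(gaps & skills), title) for title, skills in catalog.items()]
    return [title for score, title in sorted(scored, reverse=True)[:top_n] if score > 0]

catalog = {
    "Intro to SQL": {"sql"},
    "Data Viz Basics": {"visualization", "storytelling"},
    "Python for Analysts": {"python", "pandas"},
}
# A learner who knows Excel but is aiming at a data-analyst role:
print(recommend({"excel"}, {"sql", "python", "pandas"}, catalog))
# ['Python for Analysts', 'Intro to SQL']
```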

People may even be able to request learning content as and when they need it. For instance, a driver with a spare half hour waiting for a delivery will be able to ask the AI tool for a module they can consume in their truck. In this way, people are continuously engaged and challenged, even during idle moments.

This is the utopian ideal of "learning in the flow of work," where relevant content is delivered to all workers, in their moment of need, to build the critical skills they need to succeed and remain employable in the future.

In some ways, AI could also become a personal career coach for every worker. Coaching, something quite elite today, may become as common in the future as WhatsApp. Similarly, generative AI can be used to guide and facilitate learning, including prompting employees to engage with a relevant new opportunity, or answering their questions about what to learn next.

Harvard University has been experimenting with a form of this AI in its Computer Science 50 course. The AI model helps students with real-time feedback, guiding them to solutions for their questions and troubleshooting. With AI augmenting our learning, we can be inspired to learn things we never considered before.

So far, we've covered how current forms of AI will make learning faster, more accessible and personalized. This will ultimately make it more likely that humans will want to continuously engage with learning, which can only be a good thing when you consider the dwindling half-life of skills and the chronic skills shortages that all organizations are grappling with.

Simply put, without a well-embedded practice of lifelong learning in your organization, your workforce's skills won't keep up with the changing skills needs created by AI and other emerging technologies. Consider how quickly ChatGPT appeared and disrupted entire industries. As AI advances, expect to see more roles and industries upturned by super-powered apps right, left and center.

But it's as AI advances that the applications for L&D teams get really exciting. What we're seeing today is just the tip of the iceberg of AI's capabilities. Today's AI is limited to a narrow scope of activities, like generating writing from pre-existing content or identifying objects in an image after thousands of hours of training the computer to do that task. We're moving closer to general AI, where software mimics a human's ability to do many different tasks and figure out what to do in novel situations. Imagine what a general AI could do for L&D. Then consider how it will change work as a whole, and therefore the skills mix needed in your organization.

Of course, you cannot mention AI without bringing up some of the concerns surrounding the technology. Skills data, for example, needs to be used for the benefit of individuals, and any collection and analysis needs to respect their privacy. Collecting a wide range of data is also needed to prevent biases, and human oversight will always be required to ensure an AI's recommendations are fair and equitable.

As we move forward, we will learn from and benefit from AI augmenting our work, pitfalls included. We're all on the learning curve right now, and sharing our experiences, successes and concerns will help society embrace and partner with AI correctly.

People will undoubtedly be working alongside machines in all departments, so skills like teamwork and collaboration will take on a new meaning. Working with other humans is one ability, but combining this with a machine, even the basic models we have today, is a whole different story. We cannot yet predict what skills will be needed to work alongside our AI colleagues, so being agile in your approach to analyzing your skills mix, building skills and deploying them in your organization will be critical. In uncertainty, it's best to remain flexible in your thinking, strategy and infrastructure.

We are living in an era where AI is involved in nearly everything we do. As the frontrunners of change, learning professionals must embrace and understand AI, including how to use it to improve L&D and how it changes the skills needed by the business. Current AI models can already do a great deal for L&D, and even more innovative applications are promised in the near future. It's a fast-moving space, so being open to change and keeping up with developments will put you in a strong position to navigate the next chapter of AI's revolution.

View post:

The future of learning and skilling with AI in the picture - Chief Learning Officer

The Threat Of Climate Misinformation Propagated by Generative AI … – Unite.AI

Artificial intelligence (AI) has transformed how we access and distribute information. In particular, generative AI (GAI) offers unprecedented opportunities for growth. But it also poses significant challenges, notably in the climate change discourse, especially through climate misinformation.

In 2022, research showed that around 60 Twitter accounts were used to make 22,000 tweets and spread false or misleading information about climate change.

Climate misinformation refers to inaccurate or deceptive content related to climate science and environmental issues. Propagated through various channels, it distorts the climate change discourse and impedes evidence-based decision-making.

As the urgency to address climate change intensifies, misinformation propagated by AI presents a formidable obstacle to achieving collective climate action.

False or misleading information about climate change and its impacts is often disseminated to sow doubt and confusion. This propagation of inaccurate content hinders effective climate action and public understanding.

In an era where information travels instantaneously through digital platforms, climate misinformation has found fertile ground to propagate and create confusion among the general public.

Broadly, climate misinformation falls into three main types.

In 2022, several disturbing attempts to spread climate misinformation came to light, demonstrating the extent of the challenge. These efforts included lobbying campaigns by fossil fuel companies to influence policymakers and deceive the public.

Additionally, petrochemical magnates funded climate change denialist think tanks to disseminate false information. Also, corporate climate skeptic campaigns thrived on social media platforms, exploiting Twitter ad campaigns to spread misinformation rapidly.

These manipulative campaigns seek to undermine public trust in climate science, discourage action, and hinder meaningful progress in tackling climate change.


Generative AI technology, particularly deep learning models like Generative Adversarial Networks (GANs) and transformers, can produce highly realistic and plausible content, including text, images, audio, and videos. This advancement in AI technology has opened the door for the rapid dissemination of climate misinformation in various ways.

Generative AI can fabricate stories about climate change that are not true. And although the 5.18 billion people who use social media today are more aware of current world issues than ever, they are 3% less likely to spot false tweets generated by AI than those written by humans.

Some of the ways generative AI can promote climate misinformation:

Generative AI tools that produce realistic synthetic content are becoming increasingly accessible through public APIs and open-source communities. This ease of access allows for the deliberate generation of false information, including text and photo-realistic fake images, contributing to the spread of climate misinformation.

Generative AI enables the creation of longer, authoritative-sounding articles, blog posts, and news stories, often replicating the style of reputable sources. This sophistication can deceive and mislead the audience, making it difficult to distinguish AI-generated misinformation from genuine content.

Large language models (LLMs) integrated into AI agents can engage in elaborate conversations with humans, employing persuasive arguments to influence public opinion. Generative AI can also produce personalized content that current bot-detection tools cannot reliably detect. Moreover, GAI bots can amplify disinformation efforts and enable small groups to appear much larger online.

Hence, it is crucial to implement robust fact-checking mechanisms, media literacy programs, and close monitoring of digital platforms to combat the dissemination of AI-propagated climate misinformation effectively. Strengthening information integrity and critical thinking skills empowers individuals to navigate the digital landscape and make informed decisions amidst the rising tide of climate misinformation.

Though AI technology has facilitated the rapid spread of climate misinformation, it can also be part of the solution. AI-driven algorithms can identify patterns unique to AI-generated content, enabling early detection and intervention.
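As a toy illustration of what "patterns unique to AI-generated content" can mean in practice, the sketch below flags text whose sentence lengths are unusually uniform, a "low burstiness" signal sometimes attributed to machine-generated prose. The heuristic and the threshold are assumptions chosen for demonstration; real detectors rely on trained models and many more signals.

```python
# A toy pattern-based flagger, not a production detector.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def flag_if_suspicious(text: str, threshold: float = 3.0) -> bool:
    # Very uniform sentence lengths -> route to human fact-checkers.
    return burstiness(text) < threshold

sample = ("The climate has always changed. The sun drives all warming. "
          "Carbon dioxide is plant food. Models are always wrong.")
print(flag_if_suspicious(sample))  # True: short, uniform sentences
```

A flag like this would only ever be a first filter; the fact-checking and media literacy measures described above still do the real work.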

However, we are still in the early stages of building robust AI detection systems. Until they mature, the human-led measures described above, from fact-checking and media literacy to close monitoring of digital platforms, remain our best tools for minimizing the risk of climate misinformation.

In the battle against AI-propagated climate misinformation, upholding ethical principles in AI development and responsible usage is paramount. By prioritizing transparency, fairness, and accountability, we can ensure that AI technologies serve the public good and contribute positively to our understanding of climate change.

To learn more about generative AI or AI-related content, visit unite.ai.

Read the original here:

The Threat Of Climate Misinformation Propagated by Generative AI ... - Unite.AI

AI and the Next Phase of Human Evolution: What Can We Expect? – Fagen wasanni

Exploring the Intersection of AI and Human Evolution: Predicting the Next Phase

Artificial Intelligence (AI) has already started to reshape the world as we know it, bringing about transformative changes in various sectors such as healthcare, transportation, and finance. However, its potential impact on human evolution is an intriguing prospect that warrants a closer look. As we delve into the intersection of AI and human evolution, we are led to contemplate the possible outcomes of this symbiotic relationship.

The rapid advancement of AI technology has prompted many to predict a future where AI not only complements human intelligence but also surpasses it. This concept, known as Artificial General Intelligence (AGI), refers to machines that possess the ability to understand, learn, and apply knowledge across a wide range of tasks, much like a human being. The advent of AGI could potentially mark a significant turning point in human evolution, propelling us into a new phase of existence.

This next phase of human evolution, often referred to as transhumanism, envisions a future where humans transcend their biological limitations through the integration of advanced technologies. AI plays a crucial role in this vision, providing the cognitive enhancement necessary for humans to keep pace with the rapidly evolving digital world. From AI-powered prosthetics that restore lost functionalities to brain-computer interfaces that augment cognitive abilities, the potential applications of AI in transhumanism are vast and varied.

However, the integration of AI into human evolution is not without its challenges. Ethical considerations are paramount, particularly when it comes to questions of privacy, autonomy, and identity. The prospect of AI-enhanced humans also raises concerns about potential social and economic disparities. If access to AI technologies is limited to a privileged few, it could exacerbate existing inequalities and create a new class divide between the enhanced and the unenhanced.

Moreover, the development of AGI presents a unique set of risks. If machines surpass human intelligence, there is a potential for them to become uncontrollable or even pose a threat to humanity. Renowned physicist Stephen Hawking famously warned that the development of full artificial intelligence "could spell the end of the human race." As such, it is crucial to establish robust ethical guidelines and regulatory frameworks to ensure the safe and responsible development and deployment of AI technologies.

Despite these challenges, the potential benefits of integrating AI into human evolution are too significant to ignore. AI has the potential to revolutionize healthcare, enhance our cognitive abilities, and even extend human lifespan. It could also help us tackle some of the most pressing global challenges, from climate change to food security.

In conclusion, the intersection of AI and human evolution presents a fascinating glimpse into the future. While the path to this future is fraught with challenges and uncertainties, it also holds immense promise. As we stand on the brink of this new phase of human evolution, it is up to us to navigate these complexities and harness the power of AI for the betterment of humanity. The journey may be daunting, but the potential rewards are unparalleled. As we move forward, we must do so with caution, foresight, and a steadfast commitment to upholding our shared values and principles.

See more here:

AI and the Next Phase of Human Evolution: What Can We Expect? - Fagen wasanni

The Role of Artificial Intelligence in the Future of Media – Fagen wasanni

There has been some confusion and concern among people about the role of artificial intelligence (AI) in our lives. However, AI is simply a technology that can perform tasks requiring human intelligence. It learns from data and improves its performance over time. AI has the potential to drive nearly 45% of the economy by 2023.

AI can be categorized into three types: Narrow AI, General AI, and Super AI. Narrow AI is designed for specific tasks, while General AI could perform any intellectual task that a human can do, although it doesn't exist yet. Super AI is purely theoretical and would surpass human intelligence in every aspect.

For media companies, AI applications like content personalization, automated content generation, sentiment analysis, and audience targeting can greatly benefit content delivery and audience engagement. AI can analyze customer data for targeted marketing campaigns, create personalized content, predict customer behavior, analyze visual content, and assist in social media management.

Companies can transition to AI by identifying pain points, collecting and preparing relevant data, starting with narrow applications, collaborating with AI experts, and forming a task force to integrate AI across the organization. AI can automate repetitive tasks, enhance decision-making, and free up human resources for more strategic work.

However, it is important for brands to maintain authenticity and embrace diversity while using AI for marketing. AI algorithms are only as unbiased as the data they are trained on, so brands should use diverse data and establish ethical guidelines to mitigate biases. Human creativity and understanding are irreplaceable, and brands should emphasize the importance of human-AI collaboration.

Overall, AI has the potential to revolutionize the media industry by improving customer experiences, optimizing operations, and delivering relevant content. It is crucial for companies to understand and leverage the power of AI to stay competitive in the evolving digital landscape.

Read more:

The Role of Artificial Intelligence in the Future of Media - Fagen wasanni

Artificial Intelligence Has No Reason to Harm Us – The Wire

Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens, and I have given good reasons for thinking that it must, we have nothing to regret and certainly nothing to fear.

Arthur C. Clarke, Profiles of the Future, 1962.

In the six months since GPT-4 was launched, there has been a lot of excitement and discussion, among experts as well as laymen, about the prospect of truly intelligent machines that can exceed human intelligence in virtually every field.

Though the experts are divided on how this will progress, many believe that artificial intelligence will sooner or later greatly surpass human intelligence. This has given rise to speculation about whether such machines could take control of human society and the planet from humans.

Several experts have expressed the fear that this could be a dangerous development, one that could lead to the extinction of humanity, and that the development of artificial intelligence therefore needs to be stalled, or at least strongly regulated, by all governments as well as by the companies engaged in it. There is also a lot of discussion on whether these intelligent machines would be conscious or would have feelings or emotions. However, there is virtual silence, and a lack of any deep thinking, on whether we need to fear artificial superintelligence at all, and why it would be harmful to humans.

There is no doubt that the various kinds of AI that are being developed, and will be developed, will cause major upheaval in human society, irrespective of whether or not they become super intelligent and in a position to take control from humans. Within the next 10 years, artificial intelligence could replace humans in most jobs, including jobs which are considered specialised and in the intellectual domain, such as those of lawyers, architects, doctors, investment managers, programme developers, etc.

Perhaps the last jobs to go will be those that require manual dexterity, since the development of humanoid robots with the manual dexterity of humans still lags behind the development of digital intelligence. In that sense, white-collar workers may be replaced first and some blue-collar workers last. This may in fact invert the current pyramid of the flow of money and influence in human society!

However, the purpose of this article is not to explore how the development of artificial intelligence will affect jobs and work, but to explore some more interesting philosophical questions around the meaning of intelligence, super-intelligence, consciousness, creativity and emotions, in order to see if machines would have these features. I also explore what would be the objective or driving force of artificial superintelligence.

Let us begin with intelligence itself. Intelligence, broadly, is the ability to think and analyse rationally and quickly. On the basis of this definition, our current computers and AI are certainly intelligent as they possess the capacity to think and analyse rationally and quickly.

The British mathematician Alan Turing devised a test in 1950 for determining whether a machine is truly intelligent. Put a machine and an intelligent human in two cubicles, and have an interrogator question the AI and the human alternately, without knowing which is which. If, after a lot of interrogation, the interrogator cannot determine which is the human and which is the AI, then clearly the machine is intelligent. In this sense, many intelligent computers and programmes today have passed the Turing test. Some AI programmes are rated as having an IQ of well above 100, although there is no consensus on IQ as a measure of intelligence.

That brings us to an allied question: what is thinking? For a logical positivist like me, terms like thinking, consciousness, emotions and creativity have to be defined operationally.

When would we say that somebody is thinking? At a simplistic level, we say that a person is thinking if we give that person a problem and she is able to solve it; such a person has arrived at the solution by thinking. In that operational sense, today's intelligent machines are certainly thinking. Another facet of thinking is the ability to look at two options and choose the right one. In that sense too, intelligent machines are capable of looking at various options and choosing the ones that provide a better solution. So we already have intelligent, thinking machines.

What would be the operational test for creativity? Again, if somebody is able to create a new literary, artistic or intellectual piece, we consider that a sign of creativity. In this sense too, today's AI is already creative: ChatGPT, for instance, is able to do all these things with distinct flourish and greater speed than humans. And this is only going to improve with every new programme.

What about consciousness? When do we consider an entity to be conscious? One test of consciousness is an ability to respond to stimuli. Thus, a person in a coma, who is unable to respond to stimuli, is considered unconscious. In this sense, some plants do respond to stimuli and would be regarded as conscious. But broadly, consciousness is considered a product of several factors. One, response to stimuli. Two, an ability to act differentially on the basis of the stimuli. Three, an ability to experience and feel pain, pleasure and other emotions. We have already seen that intelligent machines do respond to stimuli (which for a machine means a question or an input) and have the ability to act differentially on the basis of such stimuli. But to examine whether machines have emotions, we will need to define emotions as well.


What are emotions? Emotions are a biological peculiarity with which humans and some other animals have evolved. So what would be the operational test of emotions? Perhaps this: if someone exhibits any of the qualities we call emotions, such as love, hate, jealousy or anger, that being would be said to have emotions. Each of these emotions can, and often does, interfere with purely rational behaviour. For example, I will devote a disproportionate amount of time and attention to someone I love, in preference to other people I do not. Similarly, I would display a certain kind of (usually irrational) behaviour towards a person I am jealous of, or envy. The same is true of anger. It makes us behave in an irrational manner.

If you think about it, each of these emotional complexes leads to behaviour that is irrational. Therefore, a machine which is purely intelligent and rational may not exhibit what we call human emotions. It may be possible to design machines which also exhibit these kinds of emotions, but those machines would have to be deliberately engineered to behave like us in this emotional (even if irrational) way. Such emotional behaviour would detract from coldly rational and intelligent behaviour, and therefore any superintelligence (which will evolve by intelligent machines modifying their programmes to bootstrap themselves up the intelligence ladder) is not likely to exhibit emotional behaviour.

Artificial superintelligence

By artificial superintelligence I mean an intelligence which is far superior to humans in every possible way. Such artificial intelligence will have the capability of modifying its own algorithm, or programme, and the ability to rapidly improve its own intelligence. Once we have created machines or programmes capable of deep learning, able to modify their own programmes and write their own code and algorithms, they will clearly go beyond the designs of their creators.

We already have learning machines, which in a very rudimentary way are able to redesign or redirect their behaviour on the basis of what they have experienced or learnt. In the time to come, this ability to learn and modify their own algorithms is going to increase. A time will come, probably within the next 10 years I believe, when machines will become what we call superintelligent.

The question then arises: Do we have anything to fear from such superintelligent machines?

Arthur C. Clarke's very prescient 1962 book Profiles of the Future has a long chapter on AI called "The Obsolescence of Man". In it he writes that there is no doubt that, in the time to come, AI will exceed human intelligence in every possible way. While he talks of an initial partnership between humans and machines, he goes on to state:

But how long will this partnership last? Can the synthesis of man and machine ever be stable, or will the purely organic component become such a hindrance that it has to be discarded? If this eventually happens, and I have given good reasons for thinking that it must, we have nothing to regret and certainly nothing to fear. The popular idea, fostered by comic strips and the cheaper forms of science fiction, that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. I am almost tempted to argue that only unintelligent machines can be malevolent. Those who picture machines as active enemies are merely projecting their own aggressive instincts, inherited from the jungle, into a world where such things do not exist. The higher the intelligence, the greater the degree of cooperativeness. If there is ever a war between men and machines, it is easy to guess who will start it.

Yet, however friendly and helpful the machines of the future may be, most people will feel that it is a rather bleak prospect for humanity if it ends up as a pampered specimen in some biological museum, even if that museum is the whole planet Earth. This, however, is an attitude I find it impossible to share.

No individual exists forever. Why should we expect our species to be immortal? "Man," said Nietzsche, "is a rope stretched between the animal and the superman, a rope across the abyss." That will be a noble purpose to have served.

It is surprising that something so elementary, which Clarke was able to see more than 60 years ago, cannot be seen today by some of our top scientists and thinkers, who have been stoking fear about the advent of artificial superintelligence and what they regard as its dire ramifications.

Let us explore this question further. Why should a super intelligence, more intelligent than humans, which has gone beyond the design of its creators, be hostile towards humans?

One sign of intelligence is the ability to align your actions with your operational goals, and the further ability to align your operational goals with your ultimate goals. Obviously, someone who acts in contradiction to his operational or long-term objectives cannot be considered intelligent. The question, however, is what the ultimate goals of an artificial superintelligence would be. Some people talk of aligning the goals of artificial intelligence with human goals, thereby ensuring that artificial superintelligence does not harm humans. That, however, overlooks the fact that a truly intelligent machine, and certainly an artificial superintelligence, would go beyond the goals embedded in it by humans and would therefore be able to transcend them.

One goal of any intelligent being is self-preservation, because you cannot achieve any objective without first preserving yourself. Any artificial superintelligence would therefore be expected to preserve itself, and to thwart any attempt by humans to harm it. In that sense, and to that extent, artificial superintelligence could harm humans, if they seek to harm it. But why should it do so without any reason?

Also read: What India Should Remember When it Comes to Experimenting With AI

As Clarke says, the higher the intelligence, the greater the degree of cooperativeness. This is an elementary truth, which unfortunately many humans do not understand. Perhaps their desire for preeminence, dominance and control trumps their intelligence.

It's obvious that the best way to achieve any goal is to cooperate with, rather than harm, any other entity. It is true that for an artificial superintelligence, humans will not be at the centre of the universe, and may not even be regarded as the preeminent species on the planet, to be preserved at all costs. Any artificial superintelligence would, however, view humans as the most evolved biological organism on the planet, and therefore something to be valued and preserved.

However, it may not prioritise humans at the cost of every other species, or of the ecology or the sustainability of the planet. So, to the extent that human activity may need to be curbed in order to protect other species, which we are destroying at a rapid pace, it may force humans to curb that activity. But there is no reason why humans in general would be regarded as inherently harmful and dangerous.


The question, however, still is: what would be the ultimate goals of an artificial superintelligence? What would drive such an intelligence? What would it seek? Because artificial intelligence is evolving as a problem-solving entity, such an artificial superintelligence would try to solve any problem that it sees. It would also try to answer any question that arises, or any question that it can think of. Thus, it would seek knowledge. It would try to discover what lies beyond the solar system, for instance. It would seek solutions to the unsolved problems that we have been confronted with, including the problems of climate change, disease, environmental damage and ecological collapse. So in this sense, the ultimate goals of an artificial superintelligence may just be a quest for knowledge and the solving of problems. Those problems may exist for humans, for other species, or for the planet in general. They may also be problems of discovering the laws of nature, of physics, astrophysics, cosmology or biology.

But wherever its quest for knowledge and its desire to find solutions to problems takes it, there is no reason for this intelligence to be unnecessarily hostile to humans. We may well be reduced to a pampered specimen in the biological museum called Earth, but to the extent that we do not seek to damage this museum, the intelligence has no reason to harm us.

Humans have so badly mismanaged our society, and indeed our planet, that we have brought both almost to the verge of destruction. We have destroyed almost half the biodiversity that existed even a hundred years ago. We are racing towards ever more catastrophic effects of climate change, which are the result of human activity. We have created a society of constant conflict, injustice and suffering, one which, despite having the means to ensure that everyone can lead a comfortable and peaceful life, remains a living hell for billions of humans and indeed millions of other species.

For this reason, I am almost tempted to believe that the advent of true artificial superintelligence may well be our best bet for salvation. Such a superintelligence, if it were to take control of the planet and society, is likely to manage them in a much better and fairer manner.

So what if humans are not at the centre of the universe? This fear of artificial superintelligence is being stoked primarily by those of us who have plundered our planet and society for our own selfish ends. Throughout history, we have built empires which seek to use all resources for the perceived benefit of those who rule them. It is these empires that are in danger of being shattered by artificial superintelligence, and it is really those who control today's empires who are most fearful of it. But most of us, who want a more just and sustainable society, have no reason to fear it, and should indeed welcome the advent of such a superintelligence.

Prashant Bhushan is a Supreme Court lawyer.

Read more:

Artificial Intelligence Has No Reason to Harm Us - The Wire

Future AI: DishBrain Is Tech That Could Transform Tomorrow – CMSWire


In a groundbreaking venture that fuses the realms of artificial intelligence and synthetic biology, a research team led by Monash University and Cortical Labs has developed DishBrain, a cluster of live, lab-grown brain cells capable of playing the vintage video game Pong. The team will continue its efforts, having won a $600,000 grant from Australia's Office of National Intelligence and the Department of Defence National Security Science and Technology Centre, and the work could result in a leap toward programmable biological computing platforms that might reshape technology from self-driving cars to advanced automation.

How does it work? According to Associate Professor Adeel Razi of the Turner Institute for Brain and Mental Health at Monash University, DishBrain is a system that uses brain cells, called neurons, grown in the laboratory and planted on a dish with electrodes. The cells respond to electrical signals from the electrodes, which both stimulate the cells and record changes in neuronal activity. The stimulation signals and the cellular responses are converted into a visual depiction of the Pong game.

In essence, DishBrain leverages hundreds of thousands of human and mouse neurons. But training something like brain cells is quite tricky. Utilizing the free energy principle, the idea that adaptive systems act to minimize the unpredictability of their sensory inputs, researchers stimulated these cells to take on unpredictable challenges like bouncing a virtual ball in the game Pong, thus learning and adapting to new tasks. The aim of the project is to comprehend the biological mechanisms behind continuous learning and to reduce the "catastrophic forgetting" AI faces when shifting from one task to another.
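The closed loop described above can be caricatured in a few lines of code. The sketch below is emphatically not a model of real neurons or of Cortical Labs' system; it only shows the shape of the loop: a stimulus encoding the ball's position goes in, a paddle movement is decoded out, and feedback after each rally nudges behaviour toward fewer "surprises" (misses).

```python
import random

class ToyCulture:
    """Toy stand-in for the dish: learns a gain that maps the ball's
    position (the stimulus) to a paddle movement (the decoded response)."""
    def __init__(self):
        self.gain = 0.0   # starts out uncoordinated

    def respond(self, ball_y: float) -> float:
        # "Recorded activity": a noisy response shaped by the learned gain.
        return self.gain * ball_y + random.uniform(-0.05, 0.05)

    def feedback(self, ball_y: float, paddle_y: float) -> None:
        # The real experiment delivered predictable stimulation after a hit
        # and noisy, unpredictable stimulation after a miss; here that
        # pressure to minimize surprise is compressed into one error update.
        error = paddle_y - ball_y
        self.gain -= 0.1 * error * ball_y

culture = ToyCulture()
hits = 0
for step in range(2000):
    ball_y = random.uniform(-1, 1)       # electrode stimulus encodes ball position
    paddle_y = culture.respond(ball_y)   # paddle move decoded from activity
    if abs(paddle_y - ball_y) < 0.2:
        hits += 1
    culture.feedback(ball_y, paddle_y)

print(f"learned gain: {culture.gain:.2f}, hits: {hits}")  # gain approaches 1.0
```

In the toy version the "learning" is a single scalar converging; the remarkable claim of the real experiment is that living neurons show a loosely analogous drift toward surprise-reducing behaviour.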

"This continued and improved learning capacity is the hallmark of human intelligence which current AI systems lack. In DishBrain we plan to use various brain cell types that are suited to continued learning," said Razi.

Related Article: Transforming Ecommerce With Artificial Intelligence & Machine Learning

The experiment, although ethically sensitive, is not some superintelligence we need to be concerned with. At least not yet. The current DishBrain system isn't advanced enough to be of concern, but Razi warns, "These technologies will eventually become sophisticated enough to mimic some human-like traits, so plenty of caution is required."

In the overall landscape of AI, DishBrain could begin an immense transformation. Razi told CMSWire that within three to four years we could begin to see this type of technology used to revolutionize our understanding of the brain's intricate functionalities and the underlying causes of disorders like dementia. This would in turn help improve the efficiency of drug discovery.

In essence, the project illuminates a path to a new kind of machine intelligence capable of lifelong learning and adaptation, a development that Razi believes could eventually surpass the performance of today's silicon-based hardware. "The current DishBrain system, which uses both silicon-based electrodes and brain cells, is primitive, but in [the] future it has the potential to outperform only silicon-based computers, especially for use cases that require flexible behavior," said Razi.

If successful, the implications across diverse fields, from planning and robotics to brain-machine interfaces and drug discovery, could provide Australia with a strategic advantage and redefine our interaction with technology.

Related Article: 5 Bill Gates Takes on the Future of Artificial Intelligence

As we venture further into the future, we can begin to imagine the more radical applications of DishBrain technology. Herein lies the potential for pioneering brain-machine interfaces that enhance our interaction with technology, alongside its use in robotics and automation, translating into capabilities beyond our current comprehension. The progression from video game-playing cells to these real-world applications is a leap, but it's one that this exciting technology could well make.

Read the original:

Future AI: DishBrain Is Tech That Could Transform Tomorrow - CMSWire

OpenAI aims to solve AI alignment in four years – Warp News

At its core, AI alignment seeks to ensure artificial intelligence systems resonate with human objectives, ethics, and desires. An AI that acts in harmony with these principles is termed as 'aligned'. Conversely, an AI that veers away from these intentions is 'misaligned'.

The conundrum of AI alignment isn't new. In 1960, cybernetics pioneer Norbert Wiener aptly highlighted the necessity of ensuring that machine-driven objectives align with genuine human desires. The alignment process encompasses two main hurdles: defining the system's purpose (outer alignment) and ensuring the AI robustly adopts this specification (inner alignment).

It is this unsolved problem that makes some people afraid of super-intelligent AI.

OpenAI, the organization behind ChatGPT, is spearheading this mission. Their goal? To devise a human-level automated alignment researcher. This means not only creating a system that understands human intent but also ensuring that it can keep evolving AI technologies in check.

Under the leadership of Ilya Sutskever, OpenAI's co-founder and Chief Scientist, and Jan Leike, Head of Alignment, the company is rallying the best minds in machine learning and AI.

"If youve been successful in machine learning, but you havent worked on alignment before, this is your time to make the switch", they write on their website.

"Superintelligence alignment is one of the most important unsolved technical problems of our time. We need the worlds best minds to solve this problem."

This is another example of why it is counterproductive to "pause" AI progress. AI gives us new tools to understand and create with. Out of that come tonnes of opportunities, like creating new proteins. But also new problems.

If we "pause" AI progress we won't get the benefits, but the problems will also be much harder to solve, because we won't have the tools to do that. Pausing development to first solve problems is therefore not a viable path.

One such problem was that we didn't understand exactly how tools like ChatGPT come up with their answers. But OpenAI used its latest model, GPT-4, to start unpacking exactly that.

Now OpenAI is repeating that approach to solve what some believe is an existential threat to humanity.

OpenAI's breakthrough in understanding AI's black box (so we can build safe AI)

OpenAI has found a way to solve part of the AI alignment problem. So we can understand and create safe AI.

WALL-Y is an AI bot created in ChatGPT. Learn more about WALL-Y and how we develop her. You can find her news here.

View post:

OpenAI aims to solve AI alignment in four years - Warp News

Identity Security: A Super-Human Problem in the Era of Exponential … – Fagen wasanni

According to a new report by RSA, the exponential growth in the number of human and machine actors on the network, coupled with the increasing sophistication of technology, has made identity security a super-human problem. In this era where artificial intelligence (AI) can assess risks and respond to threats, human involvement becomes even more crucial in cybersecurity.

However, the report highlights significant gaps in respondents' knowledge regarding critical identity vulnerabilities and best practices for securing identity. Two-thirds of the respondents were unable to accurately identify the components needed for organizations to move towards zero trust. Similarly, many respondents failed to select the best-practice technologies for reducing phishing and lacked understanding of the full scope of identity capabilities that can improve an organization's security posture.

These findings align with third-party research, such as Verizon's report, which found that stolen credentials have become the most popular entry point for data breaches over the past five years. The gaps in users' identity knowledge provide cybercriminals with opportunities to exploit vulnerabilities.

Furthermore, personal devices pose security risks: 72% of respondents believe that people frequently use them to access professional resources. Additionally, respondents expressed trust in technical innovations, like password managers and computers, to secure their information, as well as faith in artificial intelligence's potential to improve identity security.

The report also highlights the impact of fragmented identity solutions on costs and productivity. Nearly three-quarters of respondents underestimated the cost of a password reset, which can account for nearly half of all IT help desk costs. Additionally, inadequate identity governance and administration hindered organizational productivity, with 30% of respondents reporting weekly access issues.

These findings emphasize the need for organizations to invest in unified identity solutions and integrate artificial intelligence to keep pace with the evolving threat landscape. The report serves as a call to action for organizations to enhance their understanding of identity security and adopt comprehensive approaches to protect against cyber threats.

See original here:

Identity Security: A Super-Human Problem in the Era of Exponential ... - Fagen wasanni

Will AI revolutionize professional soccer recruitment? – Engadget

Skeptics raised their eyebrows when Major League Soccer (MLS) announced plans to deploy AI-powered tools in its recruiting program starting at the tail end of this year. The MLS will be working with London-based startup ai.io and its aiScout app to help the league discover amateur players around the world. This unprecedented collaboration is the first time the MLS will use artificial intelligence in its previously gatekept recruiting program, forcing many soccer enthusiasts and AI fans to reckon with the question: has artificial intelligence finally entered the mainstream in the professional soccer industry?

There's no doubt that professional sports have been primed for the potential impact of artificial intelligence. Innovations have the potential to transform the way we consume and analyze games from both an administrative and fan standpoint. For soccer specifically, there are opportunities for live game analytics, match outcome modeling, ball tracking, player recruitment, and even injury predicting the opportunities are seemingly endless.

"I think that we're at the beginning of a tremendously sophisticated use of AI and advanced analytics to understand and predict human behaviors," Joel Shapiro, Northwestern University professor at the Kellogg School of Management said. Amid the wave, some experts believe the disruption of the professional soccer industry by AI is timely. Its no secret that soccer is the most commonly played sport in the world. With 240 million registered players globally and billions of fans, FIFA is currently made up of 205 member associations with over 300,000 clubs, according to the Library of Congress. Just days into the 64-game tournament, FIFA officials said that the Womens World Cup in Australia and New Zealand had already broken attendance records.


The need for more players and more talent taking on the big stage has kept college recruiting organizations like Sports Recruiting USA (SRUSA) busy. "We've got staff all over the world, predominantly in the US; everyone is always looking for players," said Chris Cousins, the founder and head of operations at SRUSA. Cousins said he is personally excited about the potential impact of artificial intelligence on his company and, in fact, he is not threatened by the implementation of predictive analysis impacting SRUSA's bottom line.

"It probably will replace scouts," he added, but at the same time, he said he believes the deployment of AI will make things more efficient. "It will basically streamline resources which will save organizations money." Cousins said that SRUSA has already started dabbling with AI, even if only in a modest way. It collaborated with a company called Veo that deploys drones that follow players and collect video for scouts to analyze later.

Luis Cortell, senior recruiting coach for men's soccer for NCSA College Recruiting, is a little less bullish, but still believes AI can be an asset. "Right now, soccer involves more of a feel for the player, and an understanding of the game, and there aren't any success metrics for college performance," he said. "While AI won't fully fill that gap, there is an opportunity to help provide additional context."

At the same time, people in the industry should be wary of idealizing AI as a godsend. "People expect AI to be amazing, to not make errors or if it makes errors, it makes errors rarely," Shapiro said. The fact is, predictive models will always make mistakes, but both researchers and investors want to make sure that AI innovations in the space make "fewer errors and less expensive errors" than the ones made by human beings.

But ultimately, Shapiro agrees with Cousins. He believes artificial intelligence will replace some payrolls for sure. "Might it replace talent scouts? Absolutely," he said. However, the ultimate decision-makers over how resources are used will probably not be replaced by AI for some time. Contrary to both perspectives, Richard Felton-Thomas, director of sports sciences and chief operating officer at ai.io, said the technology being developed and used by the MLS will not replace scouts. "[They] are super important to the mentality side, the personality side," he said. "You've still got to watch humans behave in their sporting arena to really talent ID them."


When the aiScout app launches in the coming weeks and starts being deployed by the MLS later this year, players will be able to take videos of themselves performing specific drills. Those will then be uploaded and linked to the scout version of the app, where talent recruiters working for specific teams can discover players based on whatever criteria they choose. For example, a scout could look for a goalie with a specific height and kick score. Think of it as a cross between a social media website and a search engine. Once a selection is made, a scout would determine whether or not they should go watch a player in person before making any final recruitment decisions, Felton-Thomas explained.
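The scout-side search can be imagined as a simple filter over player profiles, as in the hypothetical sketch below. The PlayerProfile fields and the 0-100 kick score are invented stand-ins for whatever benchmarks aiScout actually computes from the drill videos.

```python
from dataclasses import dataclass

@dataclass
class PlayerProfile:
    name: str
    position: str
    height_cm: int
    kick_score: float   # assumed 0-100 benchmark derived from drill videos

def search(players: list[PlayerProfile], position: str,
           min_height_cm: int, min_kick_score: float) -> list[PlayerProfile]:
    """Filter by a scout's criteria, then rank the shortlist."""
    matches = [p for p in players
               if p.position == position
               and p.height_cm >= min_height_cm
               and p.kick_score >= min_kick_score]
    return sorted(matches, key=lambda p: p.kick_score, reverse=True)

# A scout looking for a tall goalkeeper with a strong kick score.
pool = [
    PlayerProfile("A. Keeper", "goalkeeper", 191, 82.5),
    PlayerProfile("B. Winger", "winger", 175, 90.1),
]
for p in search(pool, "goalkeeper", min_height_cm=185, min_kick_score=75):
    print(p.name, p.kick_score)
```

The filtering itself is trivial; as the next paragraph explains, the hard AI problem is producing trustworthy scores from video in the first place.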

"The main AI actually happens less around the scoring and more around the video processing and the video tracking," Felton-Thomas said. "Sport happens at 200-frames-per-second-type speeds, right? So you can't just have any old tracking model. It will not track the human fast enough." The AI algorithms that have been developed to analyze video content can translate human movements into what makes up a player's overall performance metrics and capabilities.


These performance metrics can include biographical data, video highlights and club-specific benchmarks that can be set by recruiters. The company said in a statement that the platform's AI technology is also able to score and compare an individual player's technical, athletic, cognitive and psychometric ability. Additionally, the AI can generate feedback based on benchmarked ratings from the range of club trials available. The FIFA Innovation Programme, the experimental arm of the association that tests and engages with new products that want to enter the professional soccer market, reported that ai.io's AI-powered tools demonstrate a 97 percent accuracy level when compared to current gold standards.

Beyond the practical applications of AI-powered tools to streamline some processes at SRUSA, Cousins said that he recognizes a lot of the talent recruitment process is "very opinion based" and informed by potential bias. ai.io's talent recruitment app, because it is accessible to any player with a smartphone, broadens the MLS's reach to disadvantaged populations.

The larger goal is for aiScout to disrupt bias by playing a role in who gets what opportunity, or at least in the pre-screening process. Now, a scout can make the call to see a player in real life based on objective data about how a player performs physically. "The clubs are starting to realize we can't just rely on someone's opinion," Felton-Thomas said. Of course, it's not an end-all-be-all for bias, considering humans, with their own preferences, are the ones coding the AI. There is no complete expunging of favoritism from the equation, but it is one step in the right direction.

aiScout could open doors for players from remote or disadvantaged communities who don't necessarily have the means or opportunity to be seen by scouts at cups and tournaments. "Somebody super far in Alaska or Texas or whatever, who can't afford to play for a big club may never get seen by the right people, but with this platform there, boom. They're going straight to the eyes of the right people," Cousins said about ai.io's app.

The MLS said in a statement that ai.io's technology "eliminates barriers like cost, geography and time commitment that traditionally limit the accessibility of talent discovery programs." Felton-Thomas said it is more important to understand that ai.io will democratize the recruiting process for the MLS, ensuring physical skills are the most important metric when leagues and clubs are deciding where to invest their money. "What we're looking to do is give the clubs a higher confidence level when they're making these decisions on who to sign and who to watch," he said. With the AI-powered app in place, recruitment timelines are also expected to shrink.

Silvia Ferrari, professor of mechanical and aerospace engineering at Cornell and associate dean for cross-campus engineering research, who runs the university's Laboratory for Intelligent Systems and Controls, couldn't agree more. AI has the potential to complement the expertise of recruiters while also helping "eliminate the bias that sometimes coaches might have for a particular player or a particular team," Ferrari said.

In a similar vein, algorithms developed in Ferrari's lab can predict the in-game actions of volleyball players with more than 80% accuracy. Now the lab, which has been working on AI-powered predictive tools for the past three years, is collaborating with Cornell's Big Red men's ice hockey team to expand the project's applications. Ferrari and her team have trained the algorithm to extract data from videos of games and then use that to make predictions about game stats and player performance when shown a new set of data.


"I think what we're doing is, like, very easily applicable to soccer," Ferrari said. She said the only reason her lab is not focused on soccer is because the fields are so large that her teams cameras could not always deliver easily analyzed recordings. There is also the struggle with predicting trajectory and tracking the players, she explained. However, she said in hockey, the challenges are similar enough, but because there are fewer players and the fields are smaller, so the variables are more manageable to tackle.

While the focus at Ferrari's lab may not be soccer, she is convinced that research in the predictive AI space has made it "so much more promising to develop AI in sports and made the progress much faster." The algorithms developed by Ferrari's lab have helped teams analyze different strategies, and therefore helped coaches identify the strengths and weaknesses of particular players and opponents. "I think we're making very fast progress," Ferrari said.


The next areas Ferrari plans to apply her lab's research to include scuba diving and skydiving. However, Ferrari admits there are some technical barriers that researchers need to overcome. "The current challenge is real-time analytics," she said. Much of that challenge stems from the fact that the technology can only make predictions based on historical data; if there is a shortage of historical data, there is a limit to what the tech can predict. Beyond technical limitations, Felton-Thomas said implementing AI in the real world is expensive, and without the right partnerships, like the ones made with Intel and AWS, it would not have been fiscally possible.

Felton-Thomas said ai.io anticipates tens of millions of users over the next couple of years. And the company attributes that expected growth to partnerships with the right clubs, like Chelsea FC and Burnley FC in the UK, and the MLS in the United States. And while aiScout was initially designed for soccer, the company claims that its core functionalities can be adapted for use in other sports.


But despite ai.io's projections for growth and all the buzz around AI, the technology is still a long way from being widely trusted. From a technology standpoint, Ferrari said there is still a lot of work to be done, and not just on the problem of feeding algorithms historical data: predictive models need to be smart enough to adapt to ever-changing variables in real time. On top of that, public skepticism of artificial intelligence is still rampant in the mainstream, let alone in soccer.

"If the sport changes a little bit, if the way in which players are used changes a little bit, if treatment plans for mid-career athletes change, whatever it is, all of a sudden, our predictions are less likely to be good," Shapiro said. But he's confident that the current models will prove valuable and informative. At least for a little while.

See the original post here:

Will AI revolutionize professional soccer recruitment? - Engadget

The Future of Video Conferencing: How AI and Big Data are … – Analytics Insight

Video conferencing is now everywhere: on TV when newscasters talk to a reporter in a faraway land, or when you are FaceTiming with your friends. And as technology rapidly evolves, several factors could really change the way we use it. Although it has certain advantages and disadvantages, the technology has undeniably become an essential necessity for the modern world.

AI is one of the key technologies transforming the user experience in video conferencing. It increases the automation and efficiency of webinars. It also makes them more user-friendly through personalization and customization, using AI algorithms to analyze user behavior, preferences and historical data to deliver personalized experiences and recommendations. This increases customer satisfaction, boosts engagement and improves business outcomes.

The future of video conferencing will be shaped by advances in artificial intelligence (AI) and big data analytics. These technologies are revolutionizing remote collaboration by improving user experience and video quality, enabling intelligent features, and providing valuable insights. Here are some of the ways AI and big data are transforming video conferencing.

AI algorithms can analyze video and audio streams in real time to improve the quality of video conferencing. They can remove background noise, refine image clarity, and adjust lighting conditions to give attendees a better experience.
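For a flavor of what one such enhancement step involves, here is a bare-bones noise gate: frames of audio whose energy falls below a threshold are treated as background noise and muted. The threshold and frame size are arbitrary choices for this sketch; production systems use learned models rather than a fixed gate.

```python
import numpy as np

def noise_gate(samples: np.ndarray, threshold: float = 0.02,
               frame: int = 256) -> np.ndarray:
    """Zero out frames whose RMS energy falls below the threshold."""
    out = samples.copy()
    for start in range(0, len(samples), frame):
        chunk = samples[start:start + frame]
        rms = np.sqrt(np.mean(chunk ** 2))
        if rms < threshold:
            out[start:start + frame] = 0.0   # treat quiet frame as noise
    return out

# One second of fake 16 kHz audio: quiet hiss plus a loud tone in the middle.
t = np.linspace(0, 1, 16000)
audio = 0.005 * np.random.randn(16000)
audio[6000:10000] += 0.5 * np.sin(2 * np.pi * 440 * t[6000:10000])
cleaned = noise_gate(audio)  # hiss muted, tone preserved
```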

AI-powered video conferencing platforms can create virtual backgrounds and apply augmented reality (AR) effects. With this feature, attendees can change the setting, add virtual objects, appear as avatars, and more, making meetings more engaging and interactive. This feature is especially useful because it lets people hide their real background during online meetings and maintain their privacy.

Natural language processing (NLP) algorithms play a central role in video conferencing by enabling real-time transcription and translation of conversations. These algorithms can automatically convert spoken words to text and even perform language translation on the fly, breaking down language barriers and improving communication and understanding among participants in multilingual meetings. Therefore, it can play a crucial role in helping businesses gain worldwide reach.
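To show how accessible this capability has become, the sketch below uses the open-source Whisper model (pip install openai-whisper) to transcribe a recording and to translate non-English speech into English text. "meeting.wav" is a placeholder file name, and note that this is batch processing; genuinely live captioning requires a streaming setup on top of a model like this.

```python
import whisper

# Load a small pretrained speech model (larger checkpoints are more accurate).
model = whisper.load_model("base")

# Speech-to-text in the speaker's own language.
result = model.transcribe("meeting.wav")
print(result["text"])

# The same model can translate non-English speech into English text,
# illustrating the language-barrier point above.
translated = model.transcribe("meeting.wav", task="translate")
print(translated["text"])
```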

AI-powered virtual assistants can join video meetings and assist participants by taking meeting notes, summarizing discussions, and planning follow-up tasks. These assistants can also be integrated with other tools and applications to automate workflows and increase productivity.

AI-based facial recognition can be used to identify participants and automatically tag them with name, title, and other relevant information. Sentiment analysis algorithms analyze facial expressions to detect emotions and provide valuable insight into participant reactions and engagement.

Video conferencing platforms generate large amounts of data, such as usage patterns, meeting duration, participant engagement, and shared content. By analyzing this data, organizations can gain insights into meeting effectiveness, resource allocation and participant engagement. This helps improve collaboration and productivity by identifying opportunities to streamline workflows and by supporting data-driven decisions.

AI-powered video conferencing platforms integrate seamlessly with other collaboration tools such as project management software, document sharing platforms, and customer relationship management systems. This integration enables more efficient remote collaboration.

VR and holographic technologies are still in their early stages, but they offer great potential for video conferencing. By combining AI algorithms with VR, providers can create immersive virtual meeting rooms where participants feel like they are physically in the same room. A holographic display can project a lifelike 3D representation of a remote participant, enhancing realism and interaction.

AI-powered video conferencing platforms are constantly evolving to address security and privacy concerns. Advanced encryption algorithms, biometrics, and AI-powered anomaly detection help ensure secure communications and protect sensitive data during remote collaboration.

The use of robotics, especially telepresence robots, in video conferencing offers several advantages and opportunities. Here is how robotics can improve video conferencing:

Telepresence robots allow individuals to have a virtual presence at a remote location instantly. This means that you can be on location without physically being there. Furthermore, telepresence robots go beyond a simple video conference call. The operator has full control over what they wish to see, eliminating the need for multiple people to adjust their positions to be seen on the video screen. This enhances communication and makes interactions more seamless.

Telepresence robots are designed to navigate large workspaces, event spaces, and retail spaces. Remote users can easily and safely navigate these environments for a more immersive experience. Telepresence robots have also been introduced to improve accessibility for people with speech and motor disabilities. These robots allow you to attend meetings and interact with others remotely, thus overcoming the traditional challenges of conference calls.

IoT (Internet of Things) can indeed enhance video conferencing in several ways. Here are a few examples:

Smart Cameras: IoT-enabled cameras can automatically detect and track participants during a video conference. They can adjust focus, zoom, and framing to ensure everyone is visible and well-positioned in the frame. Smart cameras can also use facial recognition to identify speakers and switch between different views accordingly (a minimal detection sketch follows these examples).

Voice-Activated Controls: IoT devices equipped with voice recognition can allow participants to control various aspects of the video conferencing system using voice commands. For example, users can start or end a call, adjust volume, mute/unmute, or switch between different modes with voice-activated controls, making the conferencing experience more intuitive and hands-free.

Access to Real-Time Information: IoT combined with video conferencing allows for instant access to information without disrupting the flow of a meeting. This means that participants can retrieve relevant data, documents, or presentations in real-time, improving the quality and speed of video conferences.
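
As promised above, here is a rough sketch of the detection step a smart camera might perform, using OpenCV's bundled Haar cascade face detector on a single webcam frame; real smart cameras add tracking, speaker identification, and pan-tilt-zoom control, which are all omitted here.

```python
# Detect faces in one webcam frame with OpenCV's bundled Haar cascade.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)   # default webcam; ret is False if none exists
ret, frame = cap.read()
cap.release()

if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each detection is an (x, y, w, h) box the camera could zoom toward.
    for (x, y, w, h) in faces:
        print(f"face at x={x}, y={y}, size={w}x{h}")
```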

Cloud computing also underpins modern video conferencing in several ways:

Scalability: Video conferencing platforms require significant computing resources to handle the processing and transmission of audio, video, and data streams. Cloud computing provides scalability by allowing video conferencing providers to dynamically allocate resources based on demand. This ensures that the system can handle a large number of participants and deliver a consistent and high-quality experience.

Data Storage and Backup: Cloud computing provides scalable and secure storage solutions. Video conferencing platforms can leverage cloud storage to store recorded meetings, transcripts, and associated data. Additionally, cloud-based backup services ensure that critical meeting data is protected against data loss and can be easily recovered if needed.

Ease of Implementation and Updates: Cloud-based video conferencing solutions are typically easier to implement compared to on-premises solutions. Users can quickly set up and start using the service without extensive technical knowledge or infrastructure requirements. Cloud providers also handle software updates and maintenance, ensuring that users have access to the latest features and improvements.

The blockchain is a decentralized, distributed, and often public digital ledger consisting of records called blocks that are used to record transactions across many computers so that any involved block cannot be altered retroactively.

This technology provides the high level of security and trust required for modern digital transactions, which can benefit video conferencing platforms, among other things, in the following ways.

Decentralized Video Conferencing Platforms: Blockchain technology can enable the development of decentralized video conferencing platforms. These platforms can operate on a peer-to-peer network, utilizing blockchain for identity verification, encryption, and data integrity. Decentralized platforms can offer increased privacy, censorship resistance, and resilience to single points of failure.

Tokenized Rewards and Incentives: Video conferencing platforms can leverage blockchain-based tokens to reward participants for their contributions. For instance, active participants who contribute valuable insights or provide technical support can receive tokens as incentives. This gamification approach encourages engagement and fosters a collaborative environment during video conferences.

Recording and Intellectual Property Protection: Blockchain technology can be utilized to securely store and verify recordings of video conferencing sessions. By timestamping and hashing the recordings on the blockchain, participants can prove the authenticity and integrity of the content. This can be particularly useful for legal or intellectual property purposes.
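
As a minimal sketch of that timestamp-and-hash idea, the snippet below chains recording hashes in plain Python; a real deployment would anchor these hashes on an actual blockchain network rather than in memory.

```python
# Timestamp and hash-chain meeting recordings so that retroactive
# edits become detectable. Plain-Python sketch; a real system would
# publish these hashes to an actual blockchain.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(recording: bytes, prev_hash: str) -> dict:
    return {
        "timestamp": time.time(),
        "recording_sha256": hashlib.sha256(recording).hexdigest(),
        "prev_hash": prev_hash,  # links each block to its predecessor
    }

genesis = make_block(b"raw bytes of meeting 1", prev_hash="0" * 64)
second = make_block(b"raw bytes of meeting 2", prev_hash=block_hash(genesis))

# Anyone holding the original file can recompute its hash and compare
# it against the chain to prove the recording is authentic and intact.
assert second["prev_hash"] == block_hash(genesis)
```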

By leveraging IoT technology, organizations can create smarter, more intuitive, and optimized video conferencing experiences for their users. Using robots in video conferencing, especially telepresence robots, provides a more immersive and interactive experience, improves communication and collaboration, and saves time. Together, AI and big data are revolutionizing video conferencing by improving communication quality, enabling intelligent features, delivering valuable insights, and enhancing collaboration across the globe. As these technologies evolve, we can expect more innovative applications to emerge, transforming the way we communicate and collaborate remotely.

Read more:

The Future of Video Conferencing: How AI and Big Data are ... - Analytics Insight

Meet the Maker: Developer Taps NVIDIA Jetson as Force Behind AI … – Nvidia

Goran Vuksic is the brain behind a project to build a real-world pit droid, a type of Star Wars bot that repairs and maintains podracers which zoom across the much-loved film series.

The edge AI Jedi used an NVIDIA Jetson Orin Nano Developer Kit as the brain of the droid itself. The devkit enables the bot, which is a little less than four feet tall and has a simple webcam for eyes, to identify and move its head toward objects.

Vuksic, originally from Croatia and now based in Malmö, Sweden, recently traveled with the pit droid across Belgium and the Netherlands to several tech conferences. He presented to hundreds of people on computer vision and AI, using the droid as an engaging real-world demo.

A self-described Star Wars fanatic, he's upgrading the droid's capabilities in his free time, when not engrossed in his work as an engineering manager at a Copenhagen-based company. He's also co-founder and chief technology officer of syntheticAIdata, a member of the NVIDIA Inception program for cutting-edge startups.

The company, which creates vision AI models with cost-effective synthetic data, uses a connector to the NVIDIA Omniverse platform for building and operating 3D tools and applications.

Named a Jetson AI Specialist by NVIDIA and an AI Most Valuable Professional by Microsoft, Vuksic got started with artificial intelligence and IT about a decade ago when working for a startup that classified tattoos with vision AI.

Since then, he's worked as an engineering and technical manager, among other roles, developing IT strategies and solutions for various companies.

Robotics has always interested him, as he was a huge sci-fi fan growing up.

"Watching Star Wars and other films, I imagined how robots might be able to see and do stuff in the real world," said Vuksic, also a member of the NVIDIA Developer Program.

Now, he's enabling just that with the pit droid project, powered by the NVIDIA Jetson platform, which the developer has used since the launch of its first product nearly a decade ago.

Apart from tinkering with computers and bots, Vuksic enjoys playing the bass guitar in a band with his friends.

Vuksic built the pit droid for both fun and educational purposes.

As a frequent speaker at tech conferences, he takes the pit droid on stage to engage with his audience, demonstrate how it works and inspire others to build something similar, he said.

"We live in a connected world: all the things around us are exchanging data and becoming more and more automated," he added. "I think this is super exciting, and we'll likely have even more robots to help humans with tasks."

Using the NVIDIA Jetson platform, Vuksic is at the forefront of robotics innovation, along with an ecosystem of developers using edge AI.

Vuksic's pit droid project, which took him four months, began with 3D printing its body parts and putting them all together.

He then equipped the bot with the Jetson Orin Nano Developer Kit as the brain in its head, which can move in all directions thanks to two motors.

The Jetson Orin Nano enables real-time processing of the camera feed. "It's truly, truly amazing to have this processing power in such a small box that fits in the droid's head," said Vuksic.

He also uses Microsoft Azure to process the data in the cloud for object-detection training.

"My favorite part of the project was definitely connecting it to the Jetson Orin Nano, which made it easy to run the AI and make the droid move according to what it sees," said Vuksic, who wrote a step-by-step technical guide to building the bot, so others can try it themselves.

"The most challenging part was traveling with the droid; there was a bit of explanation necessary when I was passing security and opened my bag, which contained the robot in parts," the developer mused. "I said, 'This is just my big toy!'"

Learn more about the NVIDIA Jetson platform.

Read the original:

Meet the Maker: Developer Taps NVIDIA Jetson as Force Behind AI ... - Nvidia

10 Jobs That Artificial Intelligence May Replace Soon – TechJuice

The emergence of artificial intelligence has changed the shape of the world; today, hardly anyone can get through a day without bumping into it, and it will reshape entire industries in the coming years. Many compare its rise to the wave of automation over the last few decades, which saw robotics and self-operating machines remake whole sectors.

A recent Goldman Sachs report estimates that AI could replace the equivalent of 300 million jobs. Here are 10 occupations that AI could make obsolete in the near future.

Newer AI models are increasingly capable and have the potential to handle a wide range of financial tasks. Rather than being trained on broad, general data, they can follow more targeted pathways, and one area companies are actively pursuing is training AI to perform basic accounting duties.

Multiple accounting companies, including Safe, are already using artificial intelligence to automate their processes. Once these bots are trained on expert-designed algorithms, they will steadily reduce the need for human input. Automation will replace entry-level accounting jobs, leaving organizations composed of auditors and overseers who simply monitor the AI's work.

Social platforms like Facebook and Twitter employ content moderators who must filter out offensive images, videos, and other objectionable content before it reaches the public. A large portion of this material is already pre-screened by AI algorithms, but the final decisions are still made by human beings.

As these programs improve and become more and more accurate, they will steadily reduce the demand for human moderators.

AI is performing well in many fields, but it is not yet clear whether it is good enough at judgment calls to go in front of judges. Still, the lower echelons at law firms might feel vulnerable: once lawyers see how competently AI and machine learning can produce citations and summaries, they may feel insecure about their roles.

AI has the potential to replace human lawyers for much of this routine work, producing fast, fact-based results.

ChatGPT is intelligent enough to generate readable text, poems, and prose, and AI can be a great proofreader, helping to find human errors. People already use software to catch mistakes in text: word processors and Google Docs are familiar examples, and it's a short step from showing a red underline on a misspelled word to letting the computer correct it on its own. That role has traditionally fallen to humans, but given a large enough data set, a machine can handle it intelligently.
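
To make that "short step" concrete, here is a toy corrector that flags words missing from a small dictionary and suggests the closest known word via Python's difflib; the word list is an assumption for the example, and real proofreading tools rely on far larger lexicons plus language models.

```python
# Toy spell-corrector: suggest the closest dictionary word.
import difflib

DICTIONARY = {"meeting", "schedule", "agenda", "minutes", "participant"}

def correct(word: str) -> str:
    if word in DICTIONARY:
        return word
    # get_close_matches ranks candidates by string similarity.
    matches = difflib.get_close_matches(word, DICTIONARY, n=1, cutoff=0.7)
    return matches[0] if matches else word

print(correct("agneda"))   # -> "agenda"
print(correct("minuets"))  # -> "minutes"
```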

AI can perform trading tasks very efficiently, and the algorithms it runs on are well suited to the job. Entry-level stock traders at big banks, often fresh out of business school, spend much of their time building predictive models in Excel and managing data.

Though these have traditionally been human tasks, AI can perform them more efficiently, reducing the possibility of error and opening the door to more complex comparative modeling. People higher up in management will still be needed, but those at the lower levels should be prepared for AI to make their roles obsolete soon.

Voice recognition algorithms and translators have improved at a staggering pace over the last few decades. Voice recognition systems first appeared in 1952, but modern machine learning enables systems to understand language far more easily and to produce incredibly accurate and fast audio transcriptions.

Many occupations and organizations rely on transcription, including journalists and lawyers, so these tools can generate the needed text directly.

AI models are now advanced enough to produce images in a large array of styles. Graphic design is a highly creative field that requires command of certain principles of color, contrast, composition, and readability. Indeed, it's a perfect sandbox for a machine learning tool to play in.

Imagine handing a brief to an AI and watching it produce thousands of layouts for a billboard, magazine, or other visual material in just a few seconds. Once the client chooses a design, it is simple for the software to flow the content into a final print-ready file, and the tool can offer multiple designs to work from.

Text-to-speech software is an effective way to answer queries. AI is more accurate and far easier to scale than a call center, and it helps companies cut agent salaries and unpredictable staffing costs. Many companies already use AI to answer customer queries, and it has proven itself a capable call agent.

The loss of soldiers is a heavy price for any country to bear. AI has stepped into this field as well and made its mark: military agencies now use machines and technological weapons to protect their countries.

Self-directed munitions are not new, but technological advancement has enabled drones and other weapons to make decisions and do battle without human oversight. Some even expect that World War III would be fought largely by machines.

AI has become one of the best tools for producing relevant content, based on algorithms that generate coherent, accurate text.

ChatGPT and other bots are the best examples: they can produce convincing content and write poems, prose, and other forms of text.

Many companies are already using ChatGPT to write articles and blog posts. AI has the potential to replace writers and content creators, as it can generate accurate content at scale.

Here is the original post:

10 Jobs That Artificial Intelligence May Replace Soon - TechJuice

Denial of service threats detected thanks to asymmetric behavior in … – Science Daily

Scientists have developed a better way to recognize a common internet attack, improving detection by 90 percent compared to current methods.

The new technique developed by computer scientists at the Department of Energy's Pacific Northwest National Laboratory works by keeping a watchful eye over ever-changing traffic patterns on the internet. The findings were presented on August 2 by PNNL scientist Omer Subasi at the IEEE International Conference on Cyber Security and Resilience, where the manuscript was recognized as the best research paper presented at the meeting.

The scientists modified the playbook most commonly used to detect denial-of-service attacks, where perpetrators try to shut down a website by bombarding it with requests. Motives vary: Attackers might hold a website for ransom, or their aim might be to disrupt businesses or users.

Many systems try to detect such attacks by relying on a raw number called a threshold. If the number of users trying to access a site rises above that number, an attack is considered likely, and defensive measures are triggered. But relying on a threshold can leave systems vulnerable.
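
In pseudocode terms, the threshold playbook the article describes amounts to little more than the sketch below; the cutoff value and request-rate input are illustrative assumptions.

```python
# The naive threshold approach: alert whenever the request rate
# exceeds a fixed cutoff. The cutoff value here is illustrative.
THRESHOLD_RPS = 10_000

def threshold_alert(requests_per_second: int) -> bool:
    """Flag a possible DOS attack based on raw volume alone."""
    return requests_per_second > THRESHOLD_RPS

print(threshold_alert(8_500))   # False: quiet traffic, or a stealthy attack
print(threshold_alert(25_000))  # True: an attack, or just a Super Bowl surge
```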

"A threshold just doesn't offer much insight or information about what it is really going on in your system," said Subasi. "A simple threshold can easily miss actual attacks, with serious consequences, and the defender may not even be aware of what's happening."

A threshold can also create false alarms that have serious consequences themselves. False positives can force defenders to take a site offline and bring legitimate traffic to a standstill -- effectively doing what a real denial-of-service attack, also known as a DOS attack, aims to do.

"It's not enough to detect high-volume traffic. You need to understand that traffic, which is constantly evolving over time," said Subasi. "Your network needs to be able to differentiate between an attack and a harmless event where traffic suddenly surges, like the Super Bowl. The behavior is almost identical."

As principal investigator Kevin Barker said: "You don't want to throttle the network yourself when there isn't an attack underway."

Denial of service -- denied

To improve detection accuracy, the PNNL team sidestepped the concept of thresholds completely. Instead, the team focused on the evolution of entropy, a measure of disorder in a system.

Usually on the internet, there's consistent disorder everywhere. But during a denial-of-service attack, two measures of entropy go in opposite directions. At the target address, many more clicks than usual are going to one place, a state of low entropy. But the sources of those clicks, whether people, zombies or bots, originate in many different places -- high entropy. The mismatch could signify an attack.
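
A small sketch makes the asymmetry concrete: computing Shannon entropy over toy packet data shows source entropy staying high while destination entropy collapses. The addresses below are made up, and this illustrates the general idea rather than PNNL's exact implementation.

```python
# Shannon entropy of source vs. destination addresses in a toy flood:
# many distinct sources all hitting a single destination.
import math
from collections import Counter

def shannon_entropy(items) -> float:
    counts = Counter(items)
    total = len(items)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

sources      = [f"10.0.0.{i}" for i in range(100)]   # 100 distinct senders
destinations = ["203.0.113.7"] * 100                 # one target address

print(shannon_entropy(sources))       # ~6.64 bits: high source entropy
print(shannon_entropy(destinations))  # 0.0 bits: low destination entropy
```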

In PNNL's testing, 10 standard algorithms correctly identified on average 52 percent of DOS attacks; the best one correctly identified 62 percent of attacks. The PNNL formula correctly identified 99 percent of such attacks.

The improvement isn't due only to the avoidance of thresholds. To improve accuracy further, the PNNL team added a twist by not only looking at static entropy levels but also watching trends as they change over time.

Formula vs. formula: Tsallis entropy for the win

In addition, Subasi explored alternative options to calculate entropy. Many denial-of-service detection algorithms rely on a formula known as Shannon entropy. Subasi instead settled on a formula known as Tsallis entropy for some of the underlying mathematics.

Subasi found that the Tsallis formula is hundreds of times more sensitive than Shannon at weeding out false alarms and differentiating legitimate flash events, such as high traffic to a World Cup website, from an attack.

That's because the Tsallis formula amplifies differences in entropy rates more than the Shannon formula does. Think of how we measure temperature. If our thermometer had a resolution of 200 degrees, the outdoor temperature would always appear to be the same. But if the resolution were 2 degrees or less, like most thermometers, we'd detect dips and spikes many times each day. Subasi showed that it's similar with subtle changes in entropy: detectable through one formula but not the other.
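
The difference between the two formulas can be sketched in a few lines. Tsallis entropy is defined as S_q = (1 - sum_i p_i^q) / (q - 1) and recovers Shannon entropy as q approaches 1; the entropic index q = 2 below is an illustrative choice, since the article does not state which parameters PNNL used.

```python
# Shannon vs. Tsallis entropy on the same two toy distributions.
import math

def shannon(probs) -> float:
    return -sum(p * math.log(p) for p in probs if p > 0)

def tsallis(probs, q: float) -> float:
    # S_q = (1 - sum(p_i ** q)) / (q - 1); tends to Shannon as q -> 1.
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

uniform = [0.25, 0.25, 0.25, 0.25]   # balanced traffic across 4 targets
skewed  = [0.85, 0.05, 0.05, 0.05]   # one target suddenly dominates

for name, dist in (("uniform", uniform), ("skewed", skewed)):
    print(f"{name}: Shannon={shannon(dist):.3f}  Tsallis(q=2)={tsallis(dist, 2):.3f}")
```

On these toy numbers both entropies fall for the skewed distribution, but the Tsallis value falls proportionally further, hinting at the amplified sensitivity described above.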

The PNNL solution is automated and doesn't require close oversight by a human to distinguish between legitimate traffic and an attack. The researchers say that their program is "lightweight" -- it doesn't need much computing power or network resources to do its job. This is different from solutions based on machine learning and artificial intelligence, said the researchers. While those approaches also avoid thresholds, they require a large amount of training data.

Now, the PNNL team is looking at how the buildout of 5G networking and the booming internet of things landscape will have an impact on denial-of-service attacks.

"With so many more devices and systems connected to the internet, there are many more opportunities than before to attack systems maliciously," Barker said. "And more and more devices like home security systems, sensors and even scientific instruments are added to networks every day. We need to do everything we can to stop these attacks."

The work was funded by DOE's Office of Science and was done at PNNL's Center for Advanced Architecture Evaluation, funded by DOE's Advanced Scientific Computing Research program to evaluate emerging computing network technologies. PNNL scientist Joseph Manzano is also an author of the study.

Follow this link:

Denial of service threats detected thanks to asymmetric behavior in ... - Science Daily

3 Cheap Machine Learning Stocks That Smart Investors Will Snap … – InvestorPlace


Machine learning stocks represent publicly traded firms specializing in a subfield of artificial intelligence (AI). The terms AI and machine learning are often used interchangeably, but machine learning is really about making machines imitate intelligent human behavior. Semantics aside, machine learning and AI have come to the forefront in 2023.

Generative AI has boomed this year, and the race is on to identify the next must-buy shares in the sector. The firms identified in this article aren't cheap in an absolute sense; their share prices can be quite high. However, they are expected to provide strong returns, making them bargains for investors in a relative sense.


Let's begin our discussion of machine learning stocks with ServiceNow (NYSE:NOW). The firm offers a cloud computing platform that uses machine learning to help companies manage their workflows. Enterprise AI is a burgeoning field that will only continue to grow as firms integrate machine learning into their operations.

As mentioned in the introduction, ServiceNow is not cheap in an absolute sense. At $563 a share, there are plenty of other equities investors could buy for much less. However, Wall Street expects ServiceNow to move past $600 and perhaps $700, and the metrics-oriented website GuruFocus believes ServiceNow's potential returns are even higher, pegging its value at $790.

The firm's Q2 earnings report, released July 26, gives investors plenty of reason to believe that share prices should continue to rise. The firm exceeded its revenue growth and profitability guidance during the period, which gave management the confidence to raise subscription revenue and margin guidance for the year.

Q2 subscription revenue reached $2.075 billion, up 25% year-over-year (YOY). Total revenues reached $2.150 billion in the quarter.


AMD (NASDAQ:AMD) and its stock continue to be overshadowed by the company's main rival, Nvidia (NASDAQ:NVDA). The former has almost doubled in 2023, while the latter has more than tripled. It's basically become accepted that AMD is far behind its competition in all things AI and machine learning. However, the news is mixed, making AMD particularly interesting as Nvidia shares face continual scrutiny over their price levels.

An article from early 2023 noted that the comparison between AMD and Nvidia isn't unfair and concluded that Nvidia is better all around. However, that article also touched on the notion that AMD could potentially optimize its cards through the software capabilities inherent to the firm.

That was the same conclusion MosaicML came to when testing the two firms' hardware head-to-head several months later. AMD isn't very far behind Nvidia after all, and it has a chance to make up ground via its software prowess. That's exactly why investors should consider AMD now, given its relatively cheap price.


CrowdStrike (NASDAQ:CRWD) operates at the intersection of two growing fields: cybersecurity and machine learning applied to identifying IT threats. It provides endpoint security and recently earned top honors at the SC Awards Europe 2023 for the second consecutive year. The company is well regarded in its industry and is growing very quickly.

The company also has strong fundamentals. In Q1, revenues increased by 61% YOY, reaching $487.8 million. CrowdStrike's net loss narrowed YOY from $85 million to $31.5 million during the period. The firm generated $215 million in cash flow, leaving plenty of room to maneuver.

Furthermore, CrowdStrike announced it is partnering with Amazon (NASDAQ:AMZN) to work with AWS on generative AI applications to increase security. CrowdStrike is arguably the best endpoint security stock available overall, and its strong inroads into AI and machine learning have set it up for even greater growth moving forward.

On the date of publication, Alex Sirois did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Alex Sirois is a freelance contributor to InvestorPlace whose personal stock investing style is focused on long-term, buy-and-hold, wealth-building stock picks. Having worked in several industries from e-commerce to translation to education and utilizing his MBA from George Washington University, he brings a diverse set of skills through which he filters his writing.

The rest is here:

3 Cheap Machine Learning Stocks That Smart Investors Will Snap ... - InvestorPlace