Artificial Intelligence Has Come for Our…Beauty Pageants? – Glamour

First, artificial intelligence (AI) came for our jobs. Now, it's coming for beauty standards by way of AI content creators.

Solely digital renderings and not real people, AI-generated influencers have become brand ambassadors and digital creators, amassing thousands of dollars in revenue through subscription-based content. Apparently, the natural next step was to have them compete in a beauty pageant.

Creator platform Fanvue, whose user base includes both real creators and AI ones, recently announced the Fanvue World AI Creator Awards (WAICAs): the world's first-ever beauty pageant for AI-generated influencers. According to the brand, it has seen exponential growth in AI-generated creators joining its platform since the end of 2023, with digital superstars garnering millions of followers on its platform, Instagram, and elsewhere.

It's true. Digital models Emily Pellegrini and Aitana Lopez, who are AI and not real people, have a combined 591 million followers on Instagram, real-life brand deals, and thousands of loyal subscribers who pay for exclusive content. Fanvue says that Lopez earns over $10,000 monthly, and she's far from the only one: their Instagram comments are inundated with thousands of fellow AI-generated content creators promoting access to their private pages for a price.

Remarkably, it's paying off. Fanvue expects the AI creator economy, which it has helped pioneer, to exceed the $1 billion mark this year. "This hasn't been possible until recently; the technology simply wasn't there," a Fanvue spokesperson tells Glamour. "With the help of monetization platforms such as Fanvue, there's been exponential growth in AI creators entering the space, growing their fan bases, and monetizing the connections they're building with their audiences."

Aitana Lopez. Courtesy of Fanvue.

Hence: the creation of the Miss AI pageant, in which AI-generated contestants will be judged on some of the classic aspects of pageantry as well as "the skill and implementation of AI tools used to create" the contestants. Also being considered is the AI creator's social media clout, meaning they're not just crowning the "most beautiful" avatar, but also the most influential.

Read the original post:

Artificial Intelligence Has Come for Our...Beauty Pageants? - Glamour

Is artificial intelligence combat ready? – Washington Technology

Human soldiers will increasingly share the battlespace with a range of robotic, autonomous, and artificial intelligence-enabled agents. Machine intelligence has the potential to be a decisive factor in future conflicts that the U.S. may face.

The pace of change will be faster than anything seen in many decades, driven by the advances in commercial AI technology and the pressure of a near-peer with formidable technological capabilities.

But are AI and machine learning combat-ready? Or, more precisely, is our military prepared to incorporate machine intelligence into combat effectively?

Creating an AI-Ready Force

The stakes of effective collaboration between AI and combatants are profound.

Human-machine teaming has the potential to reduce casualties dramatically by substituting robots and autonomous drones for human beings in the highest-risk front-line deployments.

It can dramatically enhance situational awareness by rapidly synthesizing data streams across multiple domains to generate a unified view of the battlespace. And it can overwhelm enemy defenses with the swarming of autonomous drones.

In our work with several of the Defense Department research labs working at the cutting edge of incorporating AI and machine learning into combat environments, we have seen that this technology has the potential to be a force multiplier on par with air power.

However, several technological and institutional obstacles must be overcome before AI agents can be widely deployed into combat environments.

Safety and Reliability

The most frequent concern about AI agents and uncrewed systems is whether they can be trusted to take actions with potentially lethal consequences. AI agents have an undeniable speed advantage in processing massive amounts of data to recognize targets of interest. However, there is an inherent tension between conducting war at machine speed and retaining accountability for the use of lethal force.

It only takes one incident of AI weapons systems subjecting their human counterparts to friendly fire to undermine the confidence of warfighters in this technology. Effective human-machine teaming is only possible when machines have earned the trust of their human allies.

Adapting Military Doctrine to AI Combatants

Uncrewed systems are being rapidly developed that will augment existing forces across multiple domains. Many of these systems incorporate AI at the edge to control navigation, surveillance, targeting, and weapons systems.

However, existing military doctrine and tactics have been optimized for a primarily human force. There is a temptation to view AI-enabled weapons as a new tool to be incorporated into existing combat approaches. But doctrine will be transformed by innovations such as the swarming of hundreds or thousands of disposable, intelligent drones capable of overwhelming strategic platforms.

Force structures may need to be reconfigured on the fly to deliver drones where there is the greatest potential impact. Human-centric command and control concepts will need to be modified to accommodate machines and build warfighter trust.

As autonomous agents proliferate and become more powerful, the battlespace will become more expansive and more transparent, and it will move exponentially faster. The decision on how, and whether, to incorporate AI into the operational kill chain has profound ethical consequences.

An even more significant challenge will be how to balance the pace of action on the AI-enabled battlefield with the limits of human cognition. What are the tradeoffs between ceding a first-strike advantage measured in milliseconds and giving up human oversight? The outcome of future conflicts may hinge on such questions.

Insatiable Hunger for Data

AI systems are notoriously data-hungry. There is not, and fortunately never will be, enough operational data from live military conflicts to adequately train AI models to the point where they could be deployed on the battlefield. For this reason, simulations are essential for developing and testing AI agents, which require thousands or even millions of training iterations under modern machine learning techniques.

The DoD has existing high-fidelity simulations, such as Joint Semi-Automated Forces (JSAF), but they run essentially in real time. Unlocking the full potential of AI-enabled warfare requires developing simulations with sufficient fidelity to accurately model potential outcomes, yet fast enough to meet the speed requirements of digital agents.

Integration and Training

AI-enabled mission planning has the potential to vastly expand the situational awareness of combatants and generate novel multi-domain operation alternatives to overwhelm the enemy. Just as importantly, AI can anticipate and evaluate thousands of courses of action that the enemy might employ and suggest countermeasures in real time.

One reason Americas military is so effective is a relentless focus on training. But warfighters are unlikely to embrace tactical directives emanating from an unfamiliar black box when their lives hang in the balance.

As autonomous platforms move from research labs to the field, intensive warfighter training will be essential to create a cohesive, unified human-machine team. To be effective, AI course-of-action agents must be designed to align with existing mission planning practices.

By integrating such AI agents with the training for mission planning, we can build confidence among users while refining the algorithms using the principles of warfighter-centric design.

Making Human-Machine Teaming a Reality

While underlying AI technology has grown exponentially more powerful in the past few years, addressing the challenges posed by human-machine teaming will determine how rapidly these technologies can translate into practical military advantage.

From the level of the squad all the way to the joint command, it is essential that we test the limits of this technology and establish the confidence of decision-makers in its capabilities.

There are several vital initiatives the DoD should consider to accelerate this process.

Embrace the Chaos of War

Building trust in AI agents is the most essential step to effective human-machine teaming. Warfighters will rightly have a low level of confidence in systems that have only been tested under controlled laboratory conditions. The best experiments and training exercises replicate the chaos of war, including unpredictable events, jamming of communications and positioning systems, and mid-course changes to the course of action.

Human warfighters should be encouraged to push autonomous systems and AI agents to the breaking point to see how they perform under adverse conditions. This will result in iterative design improvements and build the confidence that these agents can contribute to mission success.

A tremendous strength of the U.S. military is the flexible command structure that empowers warfighters down to the squad level to rapidly adapt to changing conditions on the ground. AI systems have the potential to provide these units with a far more comprehensive view of the battlespace and generate tactical alternatives. But to be effective in wartime conditions, AI agents must be resilient enough to function under conditions of degraded communications and understand the overall intent of the mission.

Apply AI to Defense Acquisition Process

The rapid evolution of underlying AI and autonomous technologies means that traditional procurement processes developed for large Cold War-era platforms are doomed to fail. As an example, swarming tactics are only effective when using hundreds or thousands of individual systems capable of intelligent, coordinated action in a dynamic battlespace.

Acquiring such devices at scale will require leveraging a broad supplier base, moving rapidly down the cost curve, and enabling frequent open standards updates. Too often, we have seen weapons vendors using incompatible, proprietary communications standards that render systems unable to share data, much less engage in coordinated, intelligent maneuvers. One solution is to apply AI to revolutionize the acquisition process.

By creating a virtual environment to test system designs, DoD customers can verify operational concepts and interoperability before a single device is acquired. This will help to reduce waste, promote shared knowledge across the services, and create a more level playing field for the supplier base.

Build Bridges from Labs to Deployment

While a tremendous amount of important work has been done by organizations such as the Navy Research Lab, the Army Research Lab, the Air Force Research Lab, and DARPA, the success of AI-enabled warfare will ultimately be determined by moving this technology from the laboratories and out into the commands. Human-machine teaming will be critical to the success of these efforts.

Just as important, the teaching of military doctrine at the service academies needs to be continuously updated as the technology frontier advances. Incorporating intelligent agents into practical military missions requires both profound changes in doctrine and reallocation of resources.

Military commanders are unlikely to be dazzled by bright and shiny objects unless they see tangible benefits to deploying them. By starting with some easy wins, such as the enhancement of ISR capabilities and automation of logistics and maintenance, we can build early bridges that will instill confidence in the value of AI agents and autonomous systems.

Educating commands about the potential of human-machine teaming to enhance mission performance and then developing roadmaps to the highest potential applications will be essential. Commanders need to be comfortable with the parameters of human-in-the-loop and human-on-the-loop systems as they navigate how much autonomy to grant to AI-at-the-edge weapons systems. Retaining auditability as decision cycles accelerate will be critical to ensuring effective oversight of system development and evolving doctrine.

Summary

Rapid developments in AI and autonomous weapons systems have simultaneously accelerated and destabilized the ongoing quest for military superiority and effective deterrence. The United States has responded to this threat with a range of policies restricting the transfer of underlying technologies. However, the outcome of this competition will depend on the ability to convincingly transfer AI-enabled warfare from research labs to potential theaters of conflict.

Effective human-machine teaming will be critical to make the transition to a joint force that leverages the best capabilities of human warfighters and AI to ensure domination of the battlespace and deter adventurism by foreign actors.

Mike Colony leads Serco's Machine Learning Group, which has helped to support several Department of Defense clients in the area of AI and machine learning, including the Office of Naval Research, the Air Force Research Laboratory, the U.S. Marine Corps, the Electronic Warfare and Countermeasures Office, and others.

Excerpt from:

Is artificial intelligence combat ready? - Washington Technology

7 Artificial Intelligence Impediments & Opportunities for the Channel – Channelnomics

Channelnomics has identified seven significant challenges that will impede the adoption of artificial intelligence systems. The good news is that they're also great opportunities for vendors and partners.

Market analyst firm IDC forecasts an impressive 55% compound annual growth rate (CAGR) for the artificial intelligence market from 2024 to 2027. However, it's worth noting that this growth could be even more rapid if barriers to customer adoption and deployment weren't hindering the pace of artificial intelligence uptake. This potential for accelerated growth should inspire optimism and excitement among vendors and partners alike.
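To put that forecast in perspective, compounding at 55% a year implies the market would reach roughly 3.7 times its 2024 size by 2027 (this treats 2024 as the base year and applies three annual growth steps, an assumption about how IDC frames the window):

$(1 + 0.55)^{3} \approx 3.72$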

Generative Pre-trained Transformer (GPT) products such as ChatGPT, Microsoft Copilot, and Google Gemini aren't just tools; they're catalysts sparking the imagination of businesses and individuals. These tools, by revolutionizing content creation, data analysis, and automated customer experiences, are making the seemingly impossible possible. However, it's important to note that GPTs are just the tip of the iceberg in the artificial intelligence revolution.

In a survey of end users worldwide by Channelnomics and channel marketplace Pax8, most businesses said they want AI tools that deliver better predictive analytics, machine learning in their automated processes, and richer communications tools. While it's easy to rattle off a list of tools, making them an operational reality is much harder.

Through extensive research of vendors, channel partners, and end users, Channelnomics has pinpointed seven significant challenges currently impeding AI adoption and growth. However, these challenges, far from being roadblocks, present unique opportunities for the channel to leverage its expertise and resources, paving the way for success in the AI market. This understanding should empower vendors and partners, highlighting their potential to overcome these challenges and thrive in the AI landscape.

Let's dive into each.

Looking Forward

As the artificial intelligence revolution continues to unfold, it's clear that the technology's transformative potential isn't without challenges. The road to widespread AI adoption is paved with obstacles, from technical hurdles to talent shortages and security concerns. However, these challenges also present a unique opportunity for the channel to step up and lead the way forward.

By leveraging their expertise and extensive resources, vendors and partners can become the driving force behind AI adoption, helping businesses navigate the complexities of this rapidly evolving landscape. Whether it's through developing innovative AI solutions, providing expert guidance and support, or forming strategic partnerships, the channel has a critical role in shaping AI's future.

The AI market is poised for explosive growth as we look ahead to the coming years. But to truly harness the power of this transformative technology, vendors, partners, and customers must work together to overcome the obstacles to AI success. However, with the right strategies, partnerships, and mindset, there's no limit to what we can achieve.

See more here:

7 Artificial Intelligence Impediments & Opportunities for the Channel - Channelnomics

Liberty Hill ISD introduces artificial intelligence into the classroom – Community Impact

Teachers and students in Liberty Hill ISD have been exploring new ways to learn through the use of artificial intelligence, or AI, this school year.

District teachers and staff said AI has enhanced students' learning experience and prepared them for future careers as AI becomes increasingly prevalent in many industries.

"We are trying to prepare students for jobs that don't even exist," LHISD instructional coach Jennifer Norris said. "We don't want students to be thinking for today. We want students to be thinking for the future."

A closer look

This school year, LHISD launched a pilot program amongst a handful of teachers who have begun using AI in their classrooms, Chief of Schools Travis Motal said. Amy Rosser, an English teacher at Liberty Hill High School, has implemented AI programs to assist students in revising their essays and generating creative images, she said.

In one assignment, students were tasked with creatively rewriting the ending of "Romeo and Juliet" and producing an AI-generated image to accompany it. Students had to input their instructions into the image generator multiple times before seeing the desired results, which taught students the importance of using detailed, intentional language, Rosser said.

Norris has helped many teachers learn how to adopt AI into their instruction, she said. Some classrooms have used ChatGPT to generate poems in the voices of various historical figures and compare differences in the passages' tone, Norris said.

Many Spanish teachers have taken up AI as a conversational partner for students to practice speaking Spanish with, Norris said. In social studies, students have used AI to generate images related to historical events and analyze the accuracy of the images and whether they include a bias, she said.

Khanmigo, an AI program by Khan Academy, walks students through the steps to complete math problems without solving the problems for them, Rosser said.

Also of note

Although some teachers have initially been hesitant to adopt AI, many programs have helped teachers work more efficiently, Rosser and Norris said. Rosser now uses AI platforms to produce rubrics and materials for students, and print out translated notes for her English-as-a-second-language students.

Teachers have used programs like Magic School AI to create different types of questions and ways to present information to their students, Norris said.

"You can provide the student with the information that they still need, but in a way that's going to make it easier for them to access it," Norris said.

The takeaway

Instead of discouraging students from using popular AI platforms, such as ChatGPT, LHISD has chosen to embrace the new technology to ensure students are ready for the future ahead of them, Motal said.

"We know it's not going away, so how can I help teachers see what is available out there to be used as a tool, so we're not trying to catch up with the students, but we're right there alongside them leading the way," Norris said.

Before introducing AI programs to her students, Rosser said she taught a unit on how to use AI with academic integrity. From encyclopedias to the internet, teachers have long taught students how to responsibly synthesize information from new resources, and AI is next in line, she said.

"I started to implement and teach them strategies of how to use [AI] as a resource and a tool, not a replacement for their thinking," Rosser said.

Rosser is continuing to explore new AI tools, including a platform that can transform pictures of notes into digital flashcards, she said.

"I was thinking of all of our hours that we spent writing 3-by-5 cards," Rosser said. "That AI is going to help [students] be more efficient, and they'll be more likely to study."

Whats next

At the end of this school year, the district plans to collect feedback from teachers using AI through the pilot program and may implement AI in some of its curriculum in future school years, Motal said.

Excerpt from:

Liberty Hill ISD introduces artificial intelligence into the classroom - Community Impact

What Nvidia Stock Investors Should Know About Recent Artificial Intelligence (AI) Updates – The Motley Fool

Japan is becoming an essential hub for developing AI advancements, which bodes well for Nvidia's hardware.

In today's video, I discuss recent updates affecting Nvidia (NVDA) and other semiconductor companies. Check out the short video to learn more, consider subscribing, and click the special offer link below.

*Stock prices used were the after-market prices of April 22, 2024. The video was published on April 22, 2024.

Jose Najarro has positions in Nvidia and Taiwan Semiconductor Manufacturing. The Motley Fool has positions in and recommends Nvidia and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Intel and recommends the following options: long January 2025 $45 calls on Intel and short May 2024 $47 calls on Intel. The Motley Fool has a disclosure policy. Jose Najarro is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.

Read the rest here:

What Nvidia Stock Investors Should Know About Recent Artificial Intelligence (AI) Updates - The Motley Fool

The EU’s approach to artificial intelligence centres on excellence and trust – EEAS

The EU's Artificial Intelligence (AI) Act is the result of a reflection that started more than ten years ago to develop a strategy to boost AI research and industrial capacity while ensuring safety and fundamental rights. Weeks before the official publication that will mark the beginning of its applicability, the EU Delegation is hosting Roberto Viola, Director-General of DG CONNECT, in London for an "in conversation" event moderated by Baroness Martha Lane Fox.

The aim of the EU's policies on AI is to help it enhance its competitiveness in strategic sectors and to broaden citizens' access to information. One cornerstone of this two-pillar approach, boosting innovation while safeguarding human rights, was the creation six years ago, on 9 March 2018, of the expert group on artificial intelligence to gather expert input and rally a broad alliance of diverse stakeholders. Moreover, to boost research and industrial capacity, the EU is maximising resources and coordinating investments. For example, through the Horizon Europe and Digital Europe programmes, the European Commission will jointly invest €1 billion per year in AI. The European Commission will mobilise additional investments from the private sector and the Member States, bringing the annual investment volume to €20 billion over the course of the digital decade. The Recovery and Resilience Facility makes €134 billion available for digital.

In addition to the necessary investments, to build trust the Commission has also committed to create a safe and innovation-friendly AI environment for developers, for the companies that embed AI in their products, and for end users. The Artificial Intelligence Act is at the core of this endeavour. It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation in Europe. Taking into account the degree of potential risk and level of impact, the regulation establishes obligations for AI based on a proportionality approach. It flags certain areas as entailing an unacceptable risk: for these, the Act bans the use of certain AI applications that pose a substantial threat to citizens' rights, such as social scoring or emotion recognition in schools. The AI Act then goes on to impose obligations for high-risk applications, e.g. in healthcare and banking, and introduces transparency obligations for medium-risk applications, like general-purpose AI systems. These provisions are complemented by regulatory sandboxes and real-world testing that will have to be established at national level and made accessible to SMEs and start-ups to develop and train innovative AI before its placement on the market.
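As a rough illustration of that proportionality approach, here is a minimal Python sketch of the tiered structure as summarised above. The data layout, tier labels, and helper function are assumptions made purely for clarity; this is not an official or exhaustive classification from the Act itself.

```python
# Illustrative sketch of the AI Act's risk-proportionate obligations as
# described in the article. The dictionary layout is an assumption for
# clarity, not an official mapping.
AI_ACT_RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "emotion recognition in schools"],
        "obligation": "banned outright",
    },
    "high": {
        "examples": ["healthcare applications", "banking applications"],
        "obligation": "binding obligations before and after market placement",
    },
    "medium": {
        "examples": ["general-purpose AI systems"],
        "obligation": "transparency obligations",
    },
}

def obligation_for(tier: str) -> str:
    """Return the obligation attached to a risk tier in the sketch above."""
    return AI_ACT_RISK_TIERS[tier]["obligation"]

if __name__ == "__main__":
    for tier, info in AI_ACT_RISK_TIERS.items():
        print(f"{tier:>12}: {info['obligation']} (e.g., {', '.join(info['examples'])})")
```

Read this way, the proportionality idea becomes concrete: the obligation attached to a system scales with the risk category it falls into.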

Read the original:

The EU's approach to artificial intelligence centres on excellence and trust - EEAS

Gaza war: artificial intelligence is changing the speed of targeting and scale of civilian harm in unprecedented ways – The Conversation

As Israel's air campaign in Gaza enters its sixth month after Hamas's terrorist attacks on October 7, it has been described by experts as one of the most relentless and deadliest campaigns in recent history. It is also one of the first being coordinated, in part, by algorithms.

Artificial intelligence (AI) is being used to assist with everything from identifying and prioritising targets to assigning the weapons to be used against those targets.

Academic commentators have long focused on the potential of algorithms in war to highlight how they will increase the speed and scale of fighting. But as recent revelations show, algorithms are now being employed at a large scale and in densely populated urban contexts.

This includes the conflicts in Gaza and Ukraine, but also in Yemen, Iraq and Syria, where the US is experimenting with algorithms to target potential terrorists through Project Maven.

Amid this acceleration, it is crucial to take a careful look at what the use of AI in warfare actually means. It is important to do so not from the perspective of those in power, but from that of the officers executing it, and of the civilians undergoing its violent effects in Gaza.

This focus highlights the limits of keeping a human in the loop as a failsafe and central response to the use of AI in war. As AI-enabled targeting becomes increasingly computerised, the speed of targeting accelerates, human oversight diminishes and the scale of civilian harm increases.

Reports by the Israeli publications +972 Magazine and Local Call give us a glimpse into the experience of 13 Israeli officials working with three AI-enabled decision-making systems in Gaza called Gospel, Lavender and Where's Daddy?

These systems are reportedly trained to recognise features that are believed to characterise people associated with the military arm of Hamas. These features include membership of the same WhatsApp group as a known militant, changing cell phones every few months, or changing addresses frequently.

The systems are then supposedly tasked with analysing data collected on Gaza's 2.3 million residents through mass surveillance. Based on the predetermined features, the systems predict the likelihood that a person is a member of Hamas (Lavender), that a building houses such a person (Gospel), or that such a person has entered their home (Where's Daddy?).

In the investigative reports named above, intelligence officers explained how Gospel helped them go from 50 targets per year to 100 targets in one day and that, at its peak, Lavender managed to generate 37,000 people as potential human targets. They also reflected on how using AI cuts down deliberation time: "I would invest 20 seconds for each target at this stage … I had zero added value as a human … it saved a lot of time."

They justified this lack of human oversight in light of a manual check the Israel Defense Forces (IDF) ran on a sample of several hundred targets generated by Lavender in the first weeks of the Gaza conflict, through which a 90% accuracy rate was reportedly established. While details of this manual check are likely to remain classified, a 10% inaccuracy rate for a system used to make 37,000 life-and-death decisions will inherently result in devastatingly destructive realities.
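To make the scale concrete, a back-of-the-envelope calculation on the article's own figures (treating the reported 90% figure as a simple accuracy rate across all generated targets, which is an assumption about how the check was measured) puts the number of potential misidentifications in the thousands:

$0.10 \times 37{,}000 = 3{,}700$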

But importantly, any accuracy rate number that sounds reasonably high makes it more likely that algorithmic targeting will be relied on, as it allows trust to be delegated to the AI system. As one IDF officer told +972 Magazine: "Because of the scope and magnitude, the protocol was that even if you don't know for sure that the machine is right, you know that statistically it's fine. So you go for it."

The IDF denied these revelations in an official statement to The Guardian. A spokesperson said that while the IDF does use "information management tools […] in order to help intelligence analysts to gather and optimally analyse the intelligence, obtained from a variety of sources", it does not use an AI system that identifies terrorist operatives.

The Guardian has since, however, published a video of a senior official of the elite Israeli intelligence Unit 8200 talking last year about the use of machine learning "magic powder" to help identify Hamas targets in Gaza. The newspaper has also confirmed that the commander of the same unit wrote in 2021, under a pseudonym, that such AI technologies would resolve the "human bottleneck for both locating the new targets and decision-making to approve the targets".

AI accelerates the speed of warfare in terms of the number of targets produced and the time to decide on them. While these systems inherently decrease the ability of humans to control the validity of computer-generated targets, they simultaneously make these decisions appear more objective and statistically correct due to the value that we generally ascribe to computer-based systems and their outcome.

This allows for the further normalisation of machine-directed killing, amounting to more violence, not less.

While media reports often focus on the number of casualties, body counts, much like computer-generated targets, have the tendency to present victims as objects that can be counted. This reinforces a very sterile image of war. It glosses over the reality of more than 34,000 people dead, 766,000 injured, and the destruction of or damage to 60% of Gaza's buildings, as well as the displaced persons and the lack of access to electricity, food, water and medicine.

It fails to emphasise the horrific stories of how these things tend to compound each other. For example, one civilian, Shorouk al-Rantisi, was reportedly found under the rubble after an airstrike on Jabalia refugee camp. She had to wait 12 days to be operated on without painkillers and now resides in another refugee camp with no running water to tend to her wounds.

Aside from increasing the speed of targeting and therefore exacerbating the predictable patterns of civilian harm in urban warfare, algorithmic warfare is likely to compound harm in new and under-researched ways. First, as civilians flee their destroyed homes, they frequently change addresses or give their phones to loved ones.

Such survival behaviour corresponds to what the reports on Lavender say the AI system has been programmed to identify as likely association with Hamas. These civilians thereby unknowingly make themselves suspects for lethal targeting.

Beyond targeting, these AI-enabled systems also inform additional forms of violence. An illustrative story is that of the fleeing poet Mosab Abu Toha, who was allegedly arrested and tortured at a military checkpoint. It was ultimately reported by the New York Times that he, along with hundreds of other Palestinians, was wrongfully identified as Hamas by the IDF's use of AI facial recognition and Google Photos.

Over and beyond the deaths, injuries and destruction, these are the compounding effects of algorithmic warfare. It becomes a psychic imprisonment where people know they are under constant surveillance, yet do not know which behavioural or physical features will be acted on by the machine.

From our work as analysts of the use of AI in warfare, it is apparent that our focus should not solely be on the technical prowess of AI systems or the figure of the human-in-the-loop as a failsafe. We must also consider these systems' ability to alter the human-machine-human interactions, where those executing algorithmic violence are merely rubber-stamping the output generated by the AI system, and those undergoing the violence are dehumanised in unprecedented ways.

See the rest here:

Gaza war: artificial intelligence is changing the speed of targeting and scale of civilian harm in unprecedented ways - The Conversation

Unlocking the Power of Artificial Intelligence in Food Virtual Workshop – Food Industry Executive

Register for the webinar here

From IFT:

This interactive workshop will provide the science of food community with a deep understanding of artificial intelligence through hands-on exercises, industry use cases, and insights from artificial intelligence thought leaders. Partnered with Sidecar, this virtual workshop will equip attendees with the knowledge, skills, and confidence to harness the power of AI within their own organizations and roles. In order to provide the most immersive and hands-on experience during this workshop, Sidecar will set up a dedicated instance of Betty, a personalized IFT AI assistant tailored to IFT's content. This will offer a real-world experience of AI applications, allowing attendees to see the immediate value of integrating such technology into their daily routines.

Participants will learn:

Register here.

Read more:

Unlocking the Power of Artificial Intelligence in Food Virtual Workshop - Food Industry Executive

Artificial Intelligence (AI)’s Biggest Impact on the U.S. Economy – Banyan Hill Publishing

I called it

The market is experiencing an inevitable pullback.

This comes after one of the strongest six-month rallies in the S&P 500 within the last 30 years.

Like I said last week, this pullback is due to a combination of geopolitical concerns in the Middle East, and fears that the Federal Reserve will take longer than expected to cut interest rates, thanks to inflation.

But think of this as a bull market correction, rather than a world-ending market event.

I'm looking at the longer-term investing opportunities of artificial intelligence (AI), cryptocurrency, biotech and other massive mega trends that will bring us higher returns by the end of this decade.

So today, Amber and I are taking a closer look at the tech market's potential impact on the U.S. economy, an update on bitcoin (BTC) after its fourth halving and the biggest highlights from the eMerge Tech Expo, a conference I attended on Friday.

This expo not only showcased some of the most cutting-edge developments for ChatGPT and robotics…

But also one of the biggest promises of AI, and its impact on the workforce…

(Or read the transcript here.)

By the way

If you're interested in investing in artificial intelligence (AI) but you don't know where to start, subscribers of my Strategic Fortunes service have invested in my No. 1 AI stock for 2024.

It's a California-based chipmaking juggernaut that develops computer processors and related technologies for both businesses and consumer markets.

Its production of AI applications (and other parts of its business) has put this company at the center of some of the most innovative technologies being developed today.

Go here to learn more about this company.

Until next time,

Ian King
Editor, Strategic Fortunes

Originally posted here:

Artificial Intelligence (AI)'s Biggest Impact on the U.S. Economy - Banyan Hill Publishing

California Nurses Association demand patient safeguards against artificial intelligence technology – National Nurses United

Hundreds of concerned nurses from across the state to protest at Kaiser Permanente's San Francisco Medical Center

On Monday, April 22, hundreds of registered nurses and members of the California Nurses Association (CNA) from across the state will hold a protest at Kaiser Permanente's (KP) San Francisco Medical Center to highlight their patient safety concerns about artificial intelligence (AI) technology and the hospital industry's rush to implement untested and unregulated AI.

"It is deeply troubling to see Kaiser promote itself as a leader in AI in health care, when we know their use of these technologies comes at the expense of patient care, all in service of boosting profits," said Michelle Gutierrez Vo, a longtime registered nurse at Kaiser Permanente Fremont Medical Center and CNA president. "Nurses are all for tech that enhances our skills and the patient care experience. But what we are witnessing in our hospitals is the degradation and devaluation of our nursing practice through the use of these untested technologies. As patient advocates, we are obligated to speak out. We demand that workers and unions be involved at every step of the development of data-driven technologies and be empowered to decide whether and how AI is deployed in the workplace."

The action coincides with the start of KP International's Integrated Care Experience conference. While the nurses are protesting at Kaiser, one of the earliest adopters of AI technology, other hospital systems are also busy implementing similar AI pilots and programs. CNA represents 24,000 nurses within the KP system.

"I have been a Kaiser nurse for more than 40 years, and I promise you, union nurses will never stop fighting for a health care system that guarantees person-to-person, hands-on care for every patient," said Cathy Kennedy, a registered nurse at Kaiser Permanente Roseville Medical Center and a president of CNA. "Human expertise and clinical judgment are the only ways to ensure safe, effective, and equitable nursing care. We know there is nothing inevitable about AI's advancement into health care. No patient should be a guinea pig and no nurse should be replaced by a robot."

California Nurses Association/National Nurses United is the largest and fastest-growing union and professional association of registered nurses in the nation with 100,000 members in more than 200 facilities throughout California and nearly 225,000 RNs nationwide.

Original post:

California Nurses Association demand patient safeguards against artificial intelligence technology - National Nurses United

Jennifer Lopez’s Mecha Movie Is All About Learning to Love Artificial Intelligence – Gizmodo

Everyone's a little freaked out about AI right now: the ramifications of unregulated tech-bro startup culture are smashing into industries across the board, from journalism, to science, to entertainment, all being turned upside down by the hot new trend. But together, Jennifer Lopez and Netflix boldly ask in Atlas the primordial question: what if learning to love AI got us a Titanfall movie with the names filed off?

This morning Netflix dropped a more in-depth look at the frankly absurd combination of words that is the Jennifer Lopez mecha movie, Atlas. While the first trailer played up Lopez's character, the titular Atlas (not to be confused with the recently revived Boston Dynamics robot of the same name) and her abject horror at being plummeted onto an alien world via giant robot cockpits, the second lifts a few more layers around the film... which makes it feel less like a story about a first-time mecha pilot, and more about how Jennifer Lopez should realize the concept of Not All Artificial Intelligences.

This look introduces us to Simu Liu's baddy, an AI robot named Harlan designed to aid humanity but (shock, horror) turned against his organic masters. Now deeply distrustful of any artificial intelligence after her hunt for Harlan, Atlas finds herself forced to compromise when she has to get into a giant mech suit on a mission gone wrong, working with its onboard intelligence, Smith, to learn the ways of beating the crap out of people with a giant robot.

As fun as it is to see Jennifer Lopez at the center of a movie whose mech design feels very inspired by the likes of Respawn's beloved shooter series Titanfall, the whole "you've just gotta learn to love your Siri-adjacent robot friend!" buddy-cop vibe definitely feels a little weirdly timed, as we're on the crest of a general air of skepticism about the use of rudimentary image generators and LLMs in creative fields. But hey, maybe Jennifer Lopez's giant robot will punch things well enough this Memorial Day for us to put those concerns aside for a couple hours.

Atlas begins streaming on Netflix May 24.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what's next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

Read more:

Jennifer Lopez's Mecha Movie Is All About Learning to Love Artificial Intelligence - Gizmodo

Native America Calling: Safeguards on Artificial Intelligence – indianz.com

Tuesday, April 23, 2024

Listen to Native America Calling every weekday at 1pm Eastern.

Read more from the original source:

Native America Calling: Safeguards on Artificial Intelligence - indianz.com

EMBs from the Western Balkans, local and international experts discuss the impact of Artificial Intelligence on Electoral … – International IDEA

The International Institute for Democracy and Electoral Assistance (International IDEA) and the Rule of Law Centre of Finland (RoL Centre), in partnership with the Central Election Commission of Bosnia and Herzegovina, recently hosted a regional discussion on Artificial Intelligence and Elections.

Meeting in Sarajevo, Election Management Bodies (EMBs) from the Western Balkans, alongside esteemed academics and representatives from civil society organizations, exchanged views on how to utilize Artificial Intelligence to boost the integrity of and public trust in elections.

The discussions highlighted both the opportunities and challenges stemming from the imminent spread of AI tools in election campaigns and election management. The event explored the definition of AI, risks of disinformation in election campaigns, and considerations on personal data protection, human rights and democracy, but also how AI can help EMBs better implement electoral processes. This includes experience with AI in political finance oversight, such as that of the UK, and its use at different stages of election management, such as updating voter lists.

Recognizing the trend toward increased use of AI tools, discussants delved into the implications for EMB capacities and human resources, regulation, and the role of the EU as a standard-setter in the field.

The event was organized in the framework of the "Integrity and Trust in Albanian Elections: Fostering Political Finance Transparency and the Safe Use of Information and Communication Technologies" project, co-implemented by International IDEA and the Rule of Law Centre of Finland.

Continued here:

EMBs from the Western Balkans, local and international experts discuss the impact of Artificial Intelligence on Electoral ... - International IDEA

As Massachusetts leans in on artificial intelligence, AG waves a yellow flag – Rhode Island Current

BOSTON – While the executive branch of state government touts the competitive advantage of investing energy and money in artificial intelligence across Massachusetts' tech, government, health, and educational sectors, the state's top prosecutor is sounding warnings about its risks.

Attorney General Andrea Campbell issued an advisory to AI developers, suppliers, and users on Tuesday, reminding them of their obligations under the state's consumer protection laws.

"AI has tremendous potential benefits to society," Campbell's advisory said. "It presents exciting opportunities to boost efficiencies and cost-savings in the marketplace, foster innovation and imagination, and spur economic growth."

However, she cautioned, AI systems have already been shown to pose serious risks to consumers, including bias, lack of transparency or explainability, implications for data privacy, and more. Despite these risks, businesses and consumers are rapidly adopting and using AI systems which now impact virtually all aspects of life.

Developers promise that their complex and opaque systems are accurate, fair, effective, and appropriate for certain uses, but Campbell notes that the systems are being deployed in ways that can deceive consumers and the public, citing chatbots used to perpetrate scams or false computer-generated images and videos, called deepfakes, that mislead consumers and viewers about a participant's identity. Misleading and potentially discriminatory results from these systems can run afoul of consumer protection laws, according to the advisory.

The advisory has echoes of a dynamic in the state's enthusiastic embrace of gambling at the executive level, with Campbell cautioning against potential harmful impacts while staying shy of a full-throated objection to expansions like an online Lottery.

Gov. Maura Healey has touted applied artificial intelligence as a potential boon for the state, creating an artificial intelligence strategic task force through executive order in February. Healey is also seeking $100 million in her economic development bond bill, the Mass Leads Act, to create an Applied AI Hub in Massachusetts.

"Massachusetts has the opportunity to be a global leader in Applied AI, but it's going to take us bringing together the brightest minds in tech, business, education, health care, and government. That's exactly what this task force will do," Healey said in a statement accompanying the task force announcement. "Members of the task force will collaborate on strategies that keep us ahead of the curve by leveraging AI and GenAI technology, which will bring significant benefit to our economy and communities across the state."

The executive order itself makes only glancing references to the risks associated with AI, focusing mostly on the task force's role in identifying strategies for collaboration around AI and adoption across life sciences, finance, and higher education. The task force members will recommend strategies to facilitate public investment in AI and to promote AI-related job creation across the state, as well as recommend structures to promote responsible AI development and use for the state.

In conversation with Healey last month, tech journalist Kara Swisher offered a sharp critique of the enthusiastic embrace of AI hype, describing it as "just marketing right now" and comparing it to the crypto bubble; signs of a similar AI bubble are troubling other tech reporters. "Tech companies are seeing the value in pushing whatever we're pushing at the moment, and it's exhausting, actually," Swisher said, adding that certain types of tasked algorithms like search tools are already commonplace, but the trend now is "slapping an AI onto it and saying it's AI. It's not."

Eventually, Swisher acknowledged, tech becomes cheaper and more capable at certain types of labor than people, as in the case of mechanized farming, and it's up to officials like Healey to figure out how to balance new technology while protecting the people it impacts.

Mohamad Ali, chief operating officer of IBM Consulting, opined in CommonWealth Beacon that there need to be significant investments in an AI-capable workforce that prioritizes trust and transparency.

Artificial intelligence policy in Massachusetts, as in many states, is a hodgepodge crossing all branches of government. The executive branch is betting big that the technology can boost the states innovation economy, while the Legislature is weighing the risks of deepfakes in nonconsensual pornography and election communications.

Reliance on large language model styles of artificial intelligence, which meld the feel of a search algorithm with the promise of a competent researcher and writer, has caused headaches for courts. Because several widely used AI tools use predictive text algorithms trained on existing work but not always limited to it, large language model AI can hallucinate and fabricate facts and citations that don't exist.

In a February order in the troubling wrongful death and sexual abuse case filed against the Stoughton Police Department, Associate Justice Brian Davis sanctioned attorneys for their reliance on AI systems to prepare legal research and blindly file inaccurate information generated by the systems with the court. "The AI hallucinations and the unchecked use of AI in legal filings are disturbing developments that are adversely affecting the practice of law in the Commonwealth and beyond," Davis wrote.

This article first appeared on CommonWealth Beacon and is republished here under a Creative Commons license.

Continue reading here:

As Massachusetts leans in on artificial intelligence, AG waves a yellow flag - Rhode Island Current

All Arlecchino Ascension and Talent materials in Genshin Impact – Destructoid

If you've pulled Arlecchino, another 5-star added to Genshin Impact in version 4.6, you'll want to Ascend her to max level as soon as possible to cash in on that Wish investment. The latest Pyro character will lead you on a hunt for Fatui-themed upgrades to make the most of her levels and Talents.

To perfect her build, here's every Ascension material you'll need for Arlecchino and where to find it; that includes requirements for her Talent boosts, too.

Arlecchino uses Fragments of a Golden Melody, which drop from the Legatus Golem world boss. You can find this world boss in the lost kingdom of Remuria, which you can access by following the World Quest that starts in the Fontanian island village of Petrichor.

You'll need 46 Fragments in total, which will take an average of 18 boss runs or about 720 Resin at World Level 8.
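If you want to re-run that estimate with your own numbers, here is a minimal Python sketch of the math. The average drops per run and the 40-Resin claim cost are assumptions based on typical world-boss behavior at higher World Levels; swap in your own figures if your drop luck differs.

```python
import math

# Assumptions (adjust to your own experience):
FRAGMENTS_NEEDED = 46      # Fragments of a Golden Melody for full Ascension
AVG_DROPS_PER_RUN = 2.6    # assumed average drops per Legatus Golem claim at WL8
RESIN_PER_CLAIM = 40       # assumed Resin cost to claim a world-boss reward

runs = math.ceil(FRAGMENTS_NEEDED / AVG_DROPS_PER_RUN)
resin = runs * RESIN_PER_CLAIM
print(f"~{runs} boss runs, ~{resin} Resin")  # -> ~18 boss runs, ~720 Resin
```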

Arlecchino requires Rainbow Roses, which grow around Fontaine. If you collect them with the Seed Dispensary gadget equipped, you can gather seeds to grow additional Rainbow Roses in your teapot.

You will need 168 Rainbow Roses to fully Ascend Arlecchino. Like other local specialties, Rainbow Roses respawn 48 hours after collecting them. There are only 81 Rainbow Roses in the overworld, so you may need to ask a friend if you can take theirs.

As a member of the Fatui, Arlecchino uses Fatui insignias to ascend. These drop from most Fatui enemies, including Cicin Mages, Pyro Agents, and all Skirmishers.

For maximum character levels, Arlecchino will need 18 Recruit's Insignia, 30 Sergeant's Insignia, and 36 Lieutenant's Insignia. This doesn't include insignias needed for Talents.

Like other Pyro characters, Arlecchino uses red Agnidus Agates to Ascend. These do drop from her boss, the Legatus Golem, but they will come mixed with Geo gems. If you're short on Pyro gems, use Dust of Azoth to convert some from a different element.

Arlecchino will need 1 Agnidus Agate Sliver, 9 Fragments, 9 Chunks, and 6 Gemstones to fully Ascend.

Despite her Snezhnayan affiliation, Arlecchino uses Order Talent books from Fontaine. You can farm these from the Fontaine Talent domain on Wednesdays, Saturdays, and Sundays.

To level her Talents to 10/10/10, Arlecchino will need 9 brown books, 63 silver books, and 114 gold books. This translates to about 2,480 Resin, depending on your drop luck.

Fatui enemies can be found all over Teyvat. My favorite farming spots are the north side of Dragonspine and the northwest side of Seirai Island.

Fully leveling Arlecchino's Talents requires 18 Recruit's Insignia, 66 Sergeant's Insignia, and 93 Lieutenant's Insignia. This doesn't include insignias needed for character Ascension.

Fading Candles drop from the new weekly boss in 4.6, The Knave. You can unlock this boss domain by completing Arlecchino's Story Quest, or you can quick-start the challenge from your Adventurer's Handbook.

Arlecchino will need 18 Candles to fully level her Talents. This means around 8 weeks of farming, depending on your luck.

Continued here:

All Arlecchino Ascension and Talent materials in Genshin Impact - Destructoid

Former Ascension Parish Deputy Assessor arrested, accused of changing tax values for his property – WBRZ

GONZALES - The former deputy of the Ascension Parish Assessor's Office was arrested Tuesday after he allegedly changed tax values for his personal property.

Jail records said 44-year-old Justin Champlin was booked on two counts of computer tampering, two counts of injuring public records, and one count of malfeasance in office.

The Gonzales Police Department said Champlin has been working at the assessor's office since 2012 and the issue happened while he was working as the deputy assessor.

The case is being turned over to the District Attorney's office.

View original post here:

Former Ascension Parish Deputy Assessor arrested, accused of changing tax values for his property - WBRZ

Ascension Sacred Heart proposes to develop new medical office building – Pensacola News Journal

See the rest here:

Ascension Sacred Heart proposes to develop new medical office building - Pensacola News Journal

Doctors, nurses walk out on strike at Ascension St. John in Detroit – Detroit Free Press

View post:

Doctors, nurses walk out on strike at Ascension St. John in Detroit - Detroit Free Press

Former Ascension Parish deputy assessor arrested on malfeasance, computer tampering charges: Gonzales Police … – Weekly Citizen

Follow this link:

Former Ascension Parish deputy assessor arrested on malfeasance, computer tampering charges: Gonzales Police ... - Weekly Citizen

Ascension Chief Deputy Assessor arrested, accused of changing his own property assessment – Unfiltered with Kiran

Read more:

Ascension Chief Deputy Assessor arrested, accused of changing his own property assessment - Unfiltered with Kiran