Artificial intelligence could ‘revolutionise’ chemistry but researchers warn of hype – Chemistry World

Artificial intelligence can revolutionise science by making it faster, more efficient and more accurate, according to a survey of European Research Council (ERC) grant winners. And while the report looks at the impact of AI on all scientific fields, chemistry in particular can be expected to benefit greatly from the revolution, say researchers. But there are also warnings that AI is being overhyped, and avowals of the importance of human experts in chemical research.

The ERC report summarises how 300 researchers are using AI in their work, and what they see as its potential impacts and risks by 2030. Researchers in the physical sciences report that AI has become essential for data analysis, and for working on advanced simulations. They also note the applications of AI systems to perform calculations, operate instruments and control complex systems.

But they warn AI could spread false or inaccurate information, and that it might have a harmful impact on research integrity if researchers overuse AI tools to write research papers. They also express concerns about AI's lack of transparency and scientific replicability: AI was likened to a "black box" that could generate results without any underlying understanding of them.

Princeton University's Michael Skinnider, who uses machine learning to identify molecules with mass spectrometry, says AI's greatest advances will be in analysing data, rather than the use of AI tools like large language models as aids for writing and researching. As well as extracting value from large datasets, AI would allow scientists to collect even larger datasets through more complex and ambitious experiments, with the expectation "that we will be able to sift through huge amounts of data to ultimately arrive at new biological insights," he says.

It's a view also held by Tim Albrecht at the University of Birmingham, who adds that the latest AI systems can determine through training what features they should look for in data, as well as simply finding data features that they've been pre-programmed for.
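
As a rough illustration of Albrecht's point, the toy sketch below contrasts a classifier fed one pre-programmed feature with a network that learns its own features from raw data. The "spectra" are synthetic stand-ins invented for this example, not real mass-spectrometry data or any specific group's pipeline.

```python
# Minimal sketch: pre-programmed feature vs learned features on synthetic "spectra".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_spectrum(label: int) -> np.ndarray:
    """Synthetic 100-bin spectrum: class 1 has a peak near bin 40, class 0 near bin 60."""
    x = rng.normal(0.0, 0.1, 100)
    x[40 if label else 60] += 1.0
    return x

labels = (rng.random(400) > 0.5).astype(int)
X = np.stack([make_spectrum(l) for l in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

# Pre-programmed feature: an expert decides in advance that peak position matters.
peak_tr = X_tr.argmax(axis=1).reshape(-1, 1)
peak_te = X_te.argmax(axis=1).reshape(-1, 1)
hand = LogisticRegression().fit(peak_tr, y_tr)

# Learned features: the network sees the raw bins and finds the signal itself.
learned = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)

print("pre-programmed feature accuracy:", hand.score(peak_te, y_te))
print("learned-features accuracy:", learned.score(X_te, y_te))
```

On this easy toy task both approaches score well; the point is that the second model was never told where to look.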

Gonçalo Bernardes of Cambridge University, who has used AI methods to optimise organic reactions, stresses that AI can also usefully analyse small data sets. "I believe its true power comes when dealing with small datasets and being able to inform on specific questions, [such as] what are the best conditions for a given reaction," he says.
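
A hedged sketch of what "informing on specific questions from a small dataset" can look like in practice: fit a surrogate model to a dozen experiments, then rank candidate conditions. The data, variables and numbers below are invented for illustration and are not Bernardes' actual method.

```python
# Minimal sketch: suggest promising reaction conditions from a small dataset
# using a Gaussian-process surrogate (a common small-data choice).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Twelve hypothetical experiments: (temperature in C, equivalents of base) -> yield %.
X = np.array([[25, 1.0], [25, 2.0], [40, 1.0], [40, 2.0], [60, 1.0], [60, 1.5],
              [60, 2.0], [80, 1.0], [80, 1.5], [80, 2.0], [100, 1.0], [100, 2.0]])
y = np.array([12, 18, 25, 34, 41, 48, 45, 52, 60, 55, 38, 30], dtype=float)

kernel = ConstantKernel() * RBF(length_scale=[20.0, 0.5])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Score a grid of candidate conditions and report the most promising one.
temps, eqs = np.meshgrid(np.linspace(25, 100, 16), np.linspace(1.0, 2.0, 11))
grid = np.column_stack([temps.ravel(), eqs.ravel()])
mean, std = gp.predict(grid, return_std=True)
i = int(np.argmax(mean))
print(f"suggested conditions: {grid[i, 0]:.0f} C, {grid[i, 1]:.2f} equiv "
      f"(predicted yield {mean[i]:.0f}% +/- {std[i]:.0f})")
```

In a real campaign the model's uncertainty (std) would also feed an acquisition function, so the next experiment balances exploiting good predictions against exploring untested conditions.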

And Simon Woodward of the University of Nottingham notes the ability of AI to inspire intuitive guesses. "We have found the latest generations of message-passing neural networks show the highest potential for such approaches in catalysis," he says.
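
For readers unfamiliar with the architecture Woodward names, the sketch below runs one round of message passing on a toy molecular graph in plain NumPy. The weights are random, untrained placeholders; a real catalysis model would learn them from data and stack several rounds.

```python
# Minimal sketch: one message-passing step on a toy 3-atom graph (C-C-O).
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 0],             # adjacency: atom i is bonded to atom j
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1, 0],                # one-hot node features: C
              [1, 0],                # C
              [0, 1]], dtype=float)  # O

W_msg = rng.normal(size=(2, 4))      # transforms a neighbour's features into a message
W_self = rng.normal(size=(2, 4))     # transforms an atom's own features

# Each atom sums the messages from its bonded neighbours, then updates its state.
messages = A @ (H @ W_msg)
H_next = np.tanh(H @ W_self + messages)

# A graph-level readout pools atom states into one vector, e.g. for predicting
# a property of the whole molecule or catalyst.
readout = H_next.mean(axis=0)
print(readout)
```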

Chemist Keith Butler at University College London specialises in using AI systems to design new materials. He agrees that AI will create major changes in chemical research, but says it can't replace expert humans. "There has been a lot of talk about self-driving autonomous labs lately, but I think that fully closed-loop labs are likely to be limited to specialist processes," he says. "One could argue that scientific research is often advanced by edge-cases, so full automation is hard to imagine."

Butler makes an analogy between AI chemistry and self-driving cars. While AI has not led to fully autonomous vehicles, "if you drive a car produced today compared to a car produced 15 years ago you will see just how much AI can change the way we operate: sat nav, parking guidance, sensors and indicators for all sorts of performance," he says. "I already see significant impact of AI and in particular machine learning in the chemical sciences, but in all cases human experts checking and guiding the process is critical."

Princeton's Skinnider adds that he is less convinced of the potential for AI to replace higher-level thinking, such as AI for scientific discovery or generating new scientific hypotheses, two hyped aspects of AI touched on in the ERC report. "Isn't there some amount of joy inherent in these processes that motivates people to become scientists in the first place?"

Read this article:

Artificial intelligence could 'revolutionise' chemistry but researchers warn of hype - Chemistry World

How the EU AI Act regulates artificial intelligence: What it means for cybersecurity – CSO Online

According to van der Veer, organizations that fall into the categories above need to do a cybersecurity risk assessment. They must then adhere to the standards set by either the AI Act or the Cyber Resilience Act, the latter being more focused on products in general. That either-or situation could backfire. "People will, of course, choose the act with less requirements, and I think that's weird," he says. "I think it's problematic."

When it comes to high-risk systems, the document stresses the need for robust cybersecurity measures. It advocates for the implementation of sophisticated security features to safeguard against potential attacks.

"Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behavior, performance or compromise their security properties by malicious third parties exploiting the system's vulnerabilities," the document reads. "Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g., data poisoning) or trained models (e.g., adversarial attacks), or exploit vulnerabilities in the AI system's digital assets or the underlying ICT infrastructure. In this context, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure."

The AI Act has a few other paragraphs that zoom in on cybersecurity, the most important ones being those included in Article 15. This article states that high-risk AI systems must adhere to the "security by design and by default" principle, and that they should perform consistently throughout their lifecycle. The document also adds that compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application.

The same article talks about the measures that could be taken to protect against attacks. It says that the technical solutions to address AI-specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve, and control for attacks trying to manipulate the training dataset (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws, which could lead to harmful decision-making.
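
To make the "model evasion" item concrete, here is the textbook version of the attack on a linear classifier: the smallest uniform input perturbation that crosses the decision boundary. This is a generic scikit-learn illustration on a public dataset, not an attack on, or a statement about, any system the Act regulates.

```python
# Minimal sketch: flip a linear classifier's decision with a tiny L-infinity perturbation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
w, b = clf.coef_[0], clf.intercept_[0]
score = w @ x + b  # the sign of this score decides the predicted class

# For a linear model, moving every feature by eps against the sign of its weight
# shifts the score by eps * ||w||_1, so eps just past |score| / ||w||_1 flips it.
eps = 1.05 * abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

print("original prediction:", clf.predict(x.reshape(1, -1))[0])
print("perturbation size:", round(float(eps), 4))
print("adversarial prediction:", clf.predict(x_adv.reshape(1, -1))[0])
```

Deep models require iterative gradient methods rather than this closed form, but the failure mode the Act's drafters have in mind is the same: an input change far too small for a human to notice changes the model's output.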

"What the AI Act is saying is that if you're building a high-risk system of any kind, you need to take into account the cybersecurity implications, some of which might have to be dealt with as part of our AI system design," says Dr. Shrishak. "Others could actually be tackled more from a holistic system point of view."

According to Dr. Shrishak, the AI Act does not create new obligations for organizations that are already taking security seriously and are compliant.

Organizations need to be aware of the risk category they fall into and the tools they use. They must have a thorough knowledge of the applications they work with and the AI tools they develop in-house. "A lot of times, leadership or the legal side of the house doesn't even know what the developers are building," Thacker says. "I think for small and medium enterprises, it's going to be pretty tough."

Thacker advises startups that create products for the high-risk category to recruit experts to manage regulatory compliance as soon as possible. Having the right people on board could prevent situations in which an organization believes regulations apply to it when they don't, or the other way around.

"If a company is new to the AI field and it has no experience with security, it might have the false impression that just checking for things like data poisoning or adversarial examples might satisfy all the security requirements, which is false. That's probably one thing where perhaps somewhere the legal text could have done a bit better," says Dr. Shrishak. "It should have made it more clear that these are just basic requirements and that companies should think about compliance in a much broader way."

The AI Act can be a step in the right direction, but having rules for AI is one thing. Properly enforcing them is another. "If a regulator cannot enforce them, then as a company, I don't really need to follow anything - it's just a piece of paper," says Dr. Shrishak.

In the EU, the situation is complex. A research paper published in 2021 by members of the Robotics and AI Law Society suggested that the enforcement mechanisms considered for the AI Act might not be sufficient. "The experience with the GDPR shows that overreliance on enforcement by national authorities leads to very different levels of protection across the EU due to different resources of authorities, but also due to different views as to when and how (often) to take actions," the paper reads.

Thacker also believes that enforcement is probably going to lag behind by a lot, for multiple reasons. First, there could be miscommunication between different governmental bodies. Second, there might not be enough people who understand both AI and legislation. Despite these challenges, proactive efforts and cross-disciplinary education could bridge these gaps, not just in Europe but in other places that aim to set rules for AI.

Striking a balance between regulating AI and promoting innovation is a delicate task. In the EU, there have been intense conversations on how far to push these rules. French President Emmanuel Macron, for instance, argued that European tech companies might be at a disadvantage in comparison to their competitors in the US or China.

Traditionally, the EU regulated technology proactively, while the US encouraged creativity, thinking that rules could be set a bit later. "I think there are arguments on both sides in terms of what one's right or wrong," says Derek Holt, CEO of Digital.ai. "We need to foster innovation, but to do it in a way that is secure and safe."

In the years ahead, governments will tend to favor one approach or another, learn from each other, make mistakes, fix them, and then correct course. "Not regulating AI is not an option," says Dr. Shrishak. He argues that doing this would harm both citizens and the tech world.

The AI Act, along with initiatives like US President Biden's executive order on artificial intelligence, is igniting a crucial debate for our generation. Regulating AI is not only about shaping a technology. It is about making sure this technology aligns with the values that underpin our society.

Link:

How the EU AI Act regulates artificial intelligence: What it means for cybersecurity - CSO Online

This year in privacy: Wins and losses around the world | Context – Context

Whats the context?

New laws around the world boosted privacy protections, but enforcement is key, and concerns around AI's impact are growing

This was something of a watershed year for privacy, with key legislation introduced from California to China, and heated debates around what the rapid advance of generative artificial intelligence means for individual privacy rights.

While world leaders agreed at the inaugural AI Safety Summit in Britain to identify and mitigate risks, including to consumer privacy, data breaches exposing personal data were reported at the UK Electoral Commission, genetics company 23andMe, Indian hospitals and elsewhere.

"2023 was a consistently mixed bag built on incredibly positive foundations: there are oversight bodies and policy-makers doing their jobs to hold bad actors to account at levels we have never seen before," said Gus Hosein, executive director at advocacy group Privacy International.

"Looking forward, governments can either act to create safeguards, or they can see the digital world start burning around them with rampant state-sponsored hacking, unaccountable automated decision making (and) deepening powers for Big Tech," he told Context.

"One huge question is this: where do Large Learning Models get their data from tomorrow? I'm worried it will be about getting it from people in ways beyond our control as consumers and citizens."

These are the year's most consequential privacy milestones, and what they mean for digital rights:

The sweeping Digital Services Act went into effect on Aug. 25, imposing new user-privacy rules on the largest online platforms, including banning or limiting some user-targeting practices, with stiff penalties for any violations.

The EU's success in implementing this and other tech laws, such as the Digital Markets Act, could influence similar rules elsewhere around the world, much like the General Data Protection Regulation (GDPR) did, tech experts say.

But enforcement is a challenge, with any infringement procedure against a company dependent on external reports that must be done at least once a year by independent auditing organisations. These audits aren't due until August 2024.

The UK parliament in September passed the Online Safety Bill, which aims to make the UK "the safest place" in the world to be online.

But digital rights groups say the bill could undermine the privacy of users everywhere, as it forces companies to build technology that can scan all users for child abuse content - including messages that are end-to-end encrypted.

Moreover, the bill's age-verification system meant to protect kids will "invariably lead to adults losing their rights to private speech, and anonymous speech, which is sometimes necessary," noted the Electronic Frontier Foundation.

India passed a long-delayed data protection law in August, which digital rights experts quickly denounced as privacy-damaging and hurting rather than protecting fundamental rights.

The law "grants unchecked powers to the government, including on censorship and surveillance, while jeopardising the rights to information and free speech," noted digital rights group Access Now.

"It's a bad law ... the Data Protection Board lacks independence from the government, which is among the largest data miners (and) people whose privacy has been breached are not entitled to compensation, and are threatened with penalties," said Namrata Maheshwari, Access Now's policy counsel in Asia.

On Oct. 31, China's most popular social media sites - including microblogging platform Weibo, super app WeChat, Chinese TikTok Douyin and search engine Baidu - announced that so-called self-media accounts with more than 500,000 followers will be required to display real-name information.

Self-media includes news and information not necessarily approved by the government, and the new measures will remove the anonymity of thousands of influencers on platforms that are used daily by hundreds of millions of Chinese.

Users have expressed concerns about privacy violations, doxxing and harassment, and greater state surveillance, and several bloggers have quit the platforms. Authorities in Vietnam said they are considering similar rules.

California Governor Gavin Newsom in October signed the Delete Act, which enables Californians to either ask data brokers to delete their personal data, or forbid them from selling or sharing it, with a single request.

"It helps us gain better control over our data and makes it easier to mitigate the risks that the collection and sale of personal information create in our everyday lives," the Electronic Frontier Foundation said.

But a federal judge in September blocked enforcement of the California Age-Appropriate Design Code that was seen as a major win for privacy protections and safety for children online when it was passed last year.

The Chilean Supreme Court in August issued a ruling ordering Emotiv, a U.S. producer of a commercial brain scanning tool, to erase the data it had collected on a former Chilean senator, Guido Girardi.

The ruling - the first of its kind - puts Latin America at the forefront of a new race to protect the brain from machine mining and exploitation, with countries including Brazil, Mexico and Uruguay considering similar provisions.

"It is a significant victory for privacy advocates and sets a precedent for the protection of neural data around the world through the explicit establishment and protection of neurorights," the NeuroRights Foundation, a U.S.-based advocacy group, said.

(Reporting by Rina Chandran. Editing by Zoe Tabary)

See the original post:

This year in privacy: Wins and losses around the world | Context - Context

El Camino using artificial intelligence audio recorder in classrooms to aid disabled students – El Camino College Union

In an age where technology develops seemingly every day, El Camino College has been utilizing certain artificial intelligence programs to help students with disabilities get their proper education.

Otter.ai is an AI program that audio records conversations in real-time, automatically transcribes audio into a written text and can even help generate short summaries of longer texts.
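
For readers curious what such a pipeline involves, here is a hedged sketch of the same record-transcribe-summarize idea built on the open-source Whisper model. It is not Otter.ai's product or API; "lecture.wav" is a placeholder file, and the skimming step is a deliberately crude stand-in for a real summarizer.

```python
# Minimal sketch: transcribe an audio file and extract a rough summary.
# Requires the openai-whisper package and ffmpeg installed locally.
import whisper

model = whisper.load_model("base")        # a small, CPU-friendly checkpoint
result = model.transcribe("lecture.wav")  # speech-to-text in one call
transcript = result["text"]

# Crude extractive "summary": keep every fifth sentence of the transcript.
sentences = [s.strip() for s in transcript.split(".") if s.strip()]
summary = ". ".join(sentences[::5])
print(summary[:500])
```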

For 40-year-old business student Clay Grant, the use of this program has improved his academic career at El Camino.

Before his recent enrollment at El Camino, Grant worked as a deputy sheriff at the Los Angeles County Sheriff's Department for close to 15 years.

After suffering from a stroke in 2021, Grant had difficulty with reading, spelling and memorization skills. Despite this, he returned to school.

Although Grant has certain limitations in the classroom, he says the transcription program serves as an assistive aid rather than a necessity for him or his grades.

"I didn't struggle [in class], it was more of an enhancement," Grant said.

Grant liked how easy the program is to navigate, and he said that it helped him retain even more information. Because of the benefit of using Otter in class, Grant believes its use should be expanded beyond students with disabilities.

"I would say anybody, even if they don't have a disability, should use [Otter]," Grant said.

The El Camino Special Resource Center has an agreement with Otter.ai that allows the college to give qualifying students licenses for the program, which they can then use in the classroom.

While Otter offers a free version of their services, premium features require a monthly fee. Individual pricing starts at $10 a month.

The Special Resource Center services around 1,000 students and has 100 licenses available from Otter, although only 60 to 80 are used per semester.

To receive a license for Otter, students must go through a process.

It begins with proving one's disability and a consultation that decides whether the student qualifies.

If a student qualifies for a license, they then speak with their teachers and come to an agreement about recording in the classroom that works for both of them.

Once this happens, the student signs a contract highlighting what they are and are not allowed to do with the program. Certain stipulations must be followed when using the program in an in-person classroom setting.

They are then taught how to use Otter by Brian Krause, the Special Resource Center assistive computer technology specialist.

"Roles and responsibilities indicate that the student cannot share the recording with others and it's to be used in the context of the classroom and educational use," Bonnie Mercado, Special Resource Center supervisor, said.

Although there are licenses available, not all students need one.

"Medical documentation will go ahead and indicate the level of need, so it's not cookie cutter, it's all very individualized," Mercado said.

Although not every license is put to use, there are hopes of increasing usage and possibly the number of licenses as well.

"If we need to buy more, we will as we increase [the number of licenses]," Krause said.

The Special Resource Center has been using Otter for two semesters and they've received a lot of positive feedback.

"The interface is clean, sleek, and, again, a lot more user-friendly," Mercado said. "Trying to keep the students in mind with regards to easy use."

Krause attended the technology conference for persons with disabilities held each year by California State University Northridge.

"This is where everybody goes with the latest technology, sharing information. So this is where we find out about how other schools are using it and people do presentations," Krause said.

Along with talking to colleagues and other peers throughout the state, the Special Resource Center found Otter to be a better fit for them than their previous program, Sonocent Audio Notetaker.

"As a person with a disability, I enjoyed having the visual representation and stuff that was there," Krause said.

Originally posted here:

El Camino using artificial intelligence audio recorder in classrooms to aid disabled students - El Camino College Union

Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements – Government Accountability Office

Office of Management and Budget: The Director of OMB should ensure that the agency issues guidance to federal agencies in accordance with federal law, that is, to (a) inform the agencies' policy development related to the acquisition and use of technologies enabled by AI, (b) include identifying responsible AI officials (RAIO), (c) recommend approaches to remove barriers for AI use, (d) identify best practices for addressing discriminatory impact on the basis of any classification protected under federal nondiscrimination laws, and (e) provide a template for agency plans that includes the required contents. (Recommendation 1)

Office of Management and Budget: The Director of OMB should ensure that the agency develops and posts a public roadmap for the agency's policy guidance to better support AI use and, where appropriate, includes a schedule for engaging with the public and timelines for finalizing relevant policy guidance, consistent with EO 13960. (Recommendation 2)

Office of Science and Technology Policy: The Director of the Office of Science and Technology Policy should communicate a list of the federal agencies that are required to implement the Regulation of AI Applications memorandum requirements (M-21-06), to inform agencies of their status as implementing agencies with regulatory authorities over AI. (Recommendation 3)

Office of Personnel Management: The Director of OPM should ensure that the agency (a) establishes or updates and improves an existing occupational series with AI-related positions; (b) establishes an estimated number of AI-related positions, by federal agency; and, based on the estimate, (c) prepares a 2-year and 5-year forecast of the number of federal employees in these positions, in accordance with federal law. (Recommendation 4)

Office of Personnel Management: The Director of OPM should ensure that the agency creates an inventory of federal rotational programs and determines how these programs can be used to expand the number of federal employees with AI expertise, consistent with EO 13960. (Recommendation 5)

Office of Personnel Management: The Director of OPM should ensure that the agency issues a report with recommendations for how the programs in the inventory can be used to expand the number of federal employees with AI expertise, and shares it with the interagency coordination bodies identified by the Chief Information Officers Council, consistent with EO 13960. (Recommendation 6)

Office of Personnel Management: The Director of OPM should ensure that the agency develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 7)

Department of Agriculture: The Secretary of Agriculture should ensure that the department (a) reviews the department's authorities related to applications of AI and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 8)

Department of Agriculture: The Secretary of Agriculture should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 9)

Department of Commerce: The Secretary of Commerce should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 10)

Department of Commerce: The Secretary of Commerce should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 11)

Department of Education: The Secretary of Education should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 12)

Department of Energy: The Secretary of Energy should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 13)

Department of Health and Human Services: The Secretary of Health and Human Services should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 14)

Department of Health and Human Services: The Secretary of Health and Human Services should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 15)

Department of Homeland Security: The Secretary of Homeland Security should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 16)

Department of Homeland Security: The Secretary of Homeland Security should ensure that the department (a) reviews the department's authorities related to applications of AI and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 17)

Department of Homeland Security: The Secretary of Homeland Security should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 18)

Department of the Interior: The Secretary of the Interior should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 19)

Department of the Interior: The Secretary of the Interior should ensure that the department (a) reviews the agency's authorities related to applications of AI and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 20)

Department of the Interior: The Secretary of the Interior should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 21)

Department of Labor: The Secretary of Labor should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 22)

Department of State: The Secretary of State should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 23)

Department of Transportation: The Secretary of Transportation should ensure that the department (a) reviews the department's authorities related to applications of AI and (b) develops and submits to OMB plans to achieve consistency with the Regulation of AI Applications memorandum (M-21-06). (Recommendation 24)

Department of Transportation: The Secretary of Transportation should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 25)

Department of the Treasury: The Secretary of the Treasury should ensure that the department develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 26)

Department of the Treasury: The Secretary of the Treasury should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 27)

Department of Veterans Affairs: The Secretary of Veterans Affairs should ensure that the department updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 28)

Environmental Protection Agency: The Administrator of the Environmental Protection Agency should ensure that the agency fully completes and approves its plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 29)

Environmental Protection Agency: The Administrator of the Environmental Protection Agency should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 30)

General Services Administration: The Administrator of General Services should ensure that the agency develops a plan to either achieve consistency with EO 13960 section 5 for each AI application or retire AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 31)

General Services Administration: The Administrator of General Services should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 32)

National Aeronautics and Space Administration: The Administrator of the National Aeronautics and Space Administration should ensure that the agency updates and approves the agency's plan to achieve consistency with EO 13960 section 5 for each AI application, to include retiring AI applications found to be developed or used in a manner that is not consistent with the order. (Recommendation 33)

National Aeronautics and Space Administration: The Administrator of the National Aeronautics and Space Administration should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 34)

U.S. Agency for International Development: The Administrator of the U.S. Agency for International Development should ensure that the agency updates its AI use case inventory to include all the required information, at minimum, and takes steps to ensure that the data in the inventory aligns with provided instructions. (Recommendation 35)

Excerpt from:

Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements - Government Accountability Office

Pope Francis calls for international treaty on artificial intelligence – National Catholic Reporter

Pope Francis on Dec. 14 called for a binding international treaty to regulate the development and use of artificial intelligence, saying that while new advancements could result in groundbreaking progress, they could also lead to a "technological dictatorship."

"The goal of regulation, naturally, should not only be the prevention of harmful practices but also the encouragement of best practices, by stimulating new and creative approaches and encouraging individual or group initiatives," said Francis.

The pope's request came in his message for the World Day of Peace, which is celebrated by the Catholic Church each year on Jan. 1. Each year the pope sends the document to heads of state and other global leaders along with his New Year's wishes. In addition, the pope typically gives an autographed copy of the document to high-profile Vatican visitors.

"Any number of urgent questions need to be asked. What will be the consequences, in the medium and long term, of these new digital technologies?" Francis asked in his six-page document on artificial intelligence. "And what impact will they have on individual lives and on societies, on international stability and world peace?"

The release of the pope's message comes just days after what was hailed as a landmark agreement within the European Union that provides the first global framework for artificial intelligence regulation.

At the same time, in the United States, a bipartisan group of lawmakers has been formed to consider what artificial intelligence guardrails might be necessary, though there is no clear timeframe for when such legislation may be considered. And in recent months, big tech entrepreneurs in Silicon Valley have been embroiled in a series of controversies over the future of artificial intelligence and what, if any, limits should be imposed on their own industry.

Similar to Laudate Deum, the pope's October 2023 apostolic exhortation on climate change, Francis uses his World Day of Peace message to issue a clarion call for a greater commitment to multilateral action to better regulate emerging technologies.

"The global scale of artificial intelligence makes it clear that, alongside the responsibility of sovereign states to regulate its use internally, international organizations can play a decisive role in reaching multilateral agreements and coordinating their application and enforcement," he writes.

While the document acknowledges that artificial intelligence could yield tremendous benefits for human development (among them innovations in agriculture and education, and improved social connections), the pope offers a stern warning that it could "pose a risk to our survival and endanger our common home."

At a time when artificial intelligence is being used to execute the ongoing war in Gaza and is widely utilized in other armed conflicts, the pope sounds the alarm that the use of such technology could not only fuel more war and the weapons trade, but make peace further unattainable.

"The ability to conduct military operations through remote control systems has led to a distancing from the immense tragedy of war and a lessened perception of the devastation caused by those weapon systems and the burden of responsibility for their use," he writes.

"Autonomous weapon systems can never be morally responsible subjects," he continues. "It is imperative to ensure adequate, meaningful and consistent human oversight of weapon systems. Only human beings are truly capable of seeing and judging the ethical impact of their actions, as well as assessing their consequent responsibilities."

Among the other admonitions Francis offers is a warning against overreliance on technology for language-processing tools, surveillance and security. Such products and innovations, he warns, raise serious questions about privacy, bias, "fake news" and other forms of technological manipulation.

At a Dec. 14 Vatican press conference, Jesuit Cardinal Michael Czerny, a close collaborator of Francis, said that the pope is "no Luddite" and celebrates genuine scientific and technological progress. But he warned that artificial intelligence is a high-stakes gamble and that such digital technologies rely on the individual and social values of their creators.

"We should not liken techno-scientific progress to a 'neutral' tool such as a hammer: whether a hammer contributes to good or evil depends upon the intentions of the user, not of the hammer-maker," said Czerny, who heads the Vatican's Dicastery for Promoting Integral Human Development.

Barbara Caputo, who teaches at the Polytechnic University of Turin and directs the university's Hub on Artificial Intelligence, called for greater technical training on artificial intelligence that is inclusive of men and women from all over the world, rather than select elites.

"Artificial intelligence will be true progress for humanity only if its in-depth technical knowledge ceases to be the domain of the few," she said. "The Holy Father reminds us that the measure of our true humanity is how we treat our most disadvantaged sisters and brothers."

In summary, writes the pope in the new document, "artificial intelligence ought to serve our best human potential and our highest aspirations, not compete with them."

"Technological developments that do not lead to an improvement in the quality of life of all humanity, but on the contrary aggravate inequalities and conflicts, can never count as true progress," Francis warns.

Read the original:

Pope Francis calls for international treaty on artificial intelligence - National Catholic Reporter

Exploring the limitations of artificial intelligence for businesses – Gene Marks – Atlanta Small Business Network

While artificial intelligence is taking the world by storm, many have expressed anxieties over the technology's rapid acceptance by the business community. Even though the benefits of platforms such as ChatGPT are clear, there remain many unknowns that make AI adoption difficult, especially for entrepreneurs and owners of smaller companies.

On this episode of The Small Business Show, host Jim Fitzpatrick is joined by Gene Marks, author, tech columnist for Forbes and CEO of The Marks Group. Marks recently covered artificial intelligence in an article examining the pros and cons of using generative text platforms for accounting purposes. Now, he shares his insights into the potential challenges of embracing AI and why he still believes the technology is a powerful tool for business owners.

Key Takeaways

1. Although it is easy to feel pessimistic about the proliferation of artificial intelligence, Marks notes that it is nothing more than a developer's tool: a technology that is evolving into an expert's assistant.

2. Unfortunately, artificial intelligence still has limitations, math being one of them. When providing subjective information, generative text platforms excel, but when providing objective data, AI tools struggle to maintain accuracy, although the technology is likely to improve.

3. Rather than using artificial intelligence as a replacement for human editing, Marks recommends that entrepreneurs use generative text platforms as a means of handling grunt work and lowering the costs of hiring writers.

4. Google is also leveraging artificial intelligence to assist entrepreneurs. Google My Business, for example, can direct customers to localized brands based on data collected from small businesses.

5. Marks notes that the question of when it is a good time to start a business depends on the business being opened. Not all enterprises will be successful at all times, which is why entrepreneurs must select their industry carefully and make their decisions based on local factors rather than national ones.

ASBN, from startup to success, we are your go-to resource for small business news, expert advice, information, and event coverage.

While you're here, don't forget to subscribe to our email newsletter for all the latest business news and know-how from ASBN.

Read more here:

Exploring the limitations of artificial intelligence for businesses – Gene Marks - Atlanta Small Business Network

Even the Pope has something to say about artificial intelligence – Cointelegraph

Over the past year, there's been no shortage of scientists, tech CEOs, billionaires and lawmakers sounding the alarm over artificial intelligence (AI), and now even the Pope wants to talk about it too.

In a hefty 3,412-word letter dated Dec. 8, Pope Francis, the head of the Catholic Church, warned of the potential dangers of AI to humanity and what needs to be done to control it. The letter came as the Roman Catholic Church prepares to celebrate World Day of Peace on Jan. 1, 2024.

Pope Francis wants to see an international treaty to regulate AI to ensure it is developed and used ethically; otherwise, he warned, we risk falling into the spiral of a "technological dictatorship."

The threat of AI arises when developers have a "desire for profit or thirst for power" that overpowers one's wish "to exist freely and peacefully," the Pope explained.

Technologies that fail to do this "aggravate inequalities and conflicts" and, therefore, "can never count as true progress," he added.

Meanwhile, the Pope added, the emergence of AI-generated fake news is a serious problem that could lead to growing mistrust in the media.

The Pope was recently a victim of generative AI himself, when a fake image of him wearing a luxury white puffer jacket went viral in March.

Pope Francis, however, also acknowledged the benefits of AI in enabling more efficient manufacturing, easier transport and more ready markets, as well as a revolution in processes of accumulating, organizing and confirming data.

But he's also concerned that AI will benefit those controlling it while leaving a large portion of the population without the employment needed to pay for a living.

Pope Francis has long warned about the misuse of emerging technologies, stating that both theoretical and practical moral principles need to be embedded into them. He is, however, often seen as more tech-savvy and forward-looking than his predecessors.

Pope Francis' recent remarks come after a year of outcry from all corners of the world over the potential dangers of AI.

Tech leaders, such as Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, have expressed concern about how rapidly AI is advancing. It prompted them and more than 2,600 tech leaders and researchers to sign a petition to pause AI developments in March 2023, sharing concerns that AI more advanced than GPT-4 could pose "profound risks to society and humanity."

United States President Joe Biden has also expressed concerns. His administration released an executive order on the "safe, secure, and trustworthy development and use of artificial intelligence" in late October to address risks posed by AI.

Even Hollywood filmmakers and celebrities are adding their thoughts to the issue.

In July, Canadian filmmaker James Cameron reportedly said he had been warning of the dangers of AI since The Terminator, which he directed nearly 40 years ago.

"I warned you guys in 1984 and you didn't listen," Cameron told CTV News.

"I think the weaponization of AI is the biggest danger [...] I think that we will get into the equivalent of a nuclear arms race with AI, and if we don't build it, the other guys are for sure going to build it, and so then it'll escalate," he added.

See the article here:

Even the Pope has something to say about artificial intelligence - Cointelegraph

Sam Altman on OpenAI and Artificial General Intelligence – TIME

If 2023 was the year artificial intelligence became a household topic of conversation, it's in many ways because of Sam Altman, CEO of the artificial intelligence research organization OpenAI. Altman, who was named TIME's 2023 CEO of the Year, spoke candidly about his November ousting (and reinstatement) at OpenAI, how AI threatens to contribute to disinformation, and the rapidly advancing technology's future potential in a wide-ranging conversation with TIME Editor-in-Chief Sam Jacobs as part of TIME's A Year in TIME event on Tuesday.

Altman shared that his sudden mid-November removal from OpenAI proved a learning experience, both for him and the company at large. "We always said that some moment like this would come," said Altman. "I didn't think it was going to come so soon, but I think we are stronger for having gone through it."

Altman insists that the experience ultimately made the company stronger, and proved that OpenAI's success is a team effort. "It's been extremely painful for me personally, but I just think it's been great for OpenAI. We've never been more unified," he said. "As we get closer to artificial general intelligence, as the stakes increase here, the ability for the OpenAI team to operate in uncertainty and stressful times should be of interest to the world."

"I think everybody involved in this, as we get closer and closer to super intelligence, gets more stressed and more anxious," he explained of how his firing came about. The lesson he came away with: "We have to make changes. We always said that we didn't want AGI to be controlled by a small set of people, we want it to be democratized. And we clearly got that wrong. So I think if we don't improve our governance structure, if we don't improve the way we interact with the world, people shouldn't [trust OpenAI]. But we're very motivated to improve that."

The technology has limitless potential, Altman says ("I think AGI will be the most powerful technology humanity has yet invented"), particularly in democratizing access to information globally. "If you think about the cost of intelligence and the equality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that," he said, "it's a very different world. It's the world that sci-fi has promised us for a long time, and for the first time, I think we could start to see what that's gonna look like."

Still, like any other previous powerful technology, "that will lead to incredible new things," he says, "but there are going to be real downsides."

Altman admits that there are challenges that demand close attention. One particular concern to be wary of, with 2024 elections on the horizon, is how AI stands to influence democracies. Whereas election interference circulating on social media might look straightforward today (troll farms "make one great meme, and that spreads out"), Altman says that AI-fueled disinformation stands to become far more personalized and persuasive: "A thing that I'm more concerned about is what happens if an AI reads everything you've ever written online and then right at the exact moment, sends you one message customized for you that really changes the way you think about the world."

Despite the risks, Altman believes that, if deployment of AI is safe and placed responsibly in the hands of people, which he says is OpenAI's mission, the technology has the potential to create a path where the world "gets much more abundant and much better every year."

"I think 2023 was the year we started to see that, and in 2024, we'll see way more of it, and by the time the end of this decade rolls around, I think the world is going to be in an unbelievably better place," he said. Though he also noted: "No one knows what happens next. I think the way technology goes, predictions are often wrong."

A Year in TIME was sponsored by American Family Insurance, The Macallan, and Smartsheet.

View original post here:

Sam Altman on OpenAI and Artificial General Intelligence - TIME

2023: The year we played with artificial intelligence and weren't sure what to do about it – Post Register


Read the rest here:

2023: The year we played with artificial intelligence and weren't sure what to do about it - Post Register

Should We Be Worried About AI (Artificial Intelligence)? – Medium

Name one time you went to ChatGPT and asked, "What should I cook today?", or, on your long-awaited weekend, "Where should I go today?" You might have a memory of saying exactly that to ChatGPT. But the thing is, how much should we rely on this long-awaited new technology?

As of 2023, ChatGPT had attracted 1.7 million users. Now that is a lot of people. Now let's look at the example of Bing: as of March 2023, Bing AI had 1 million users. We can see how dependent people have become on their AI engines. But are these 1.7 million or 1 million people using this free power safely and honestly? 43% of college students have used ChatGPT or similar AI tools, and 26% of K-12 teachers have caught a student cheating with ChatGPT. So it is very common these days to see students using ChatGPT on online standardized tests. But there are many things about AI that we need to worry about.

Have you watched the movie Terminator? If yes, you know exactly what I mean when I say "the AI revolution". If not, then you might have no idea what I am talking about. In the movie Terminator, robots somehow gain consciousness and wage a war with humans; their goal is to kill all the humans and have total superiority. Now people have been scared by how much AI has grown. From robots that could only walk to robots that can be your friend, it is pretty disturbing to see how much this tech has grown. But you do not have to worry about a robo-apocalypse. This will never happen.

Have you wondered about a life where, after you wake up, a robo-housemaid is cleaning your room for you? Then you walk downstairs and see a robo-cook preparing breakfast. Believe it or not, this is already happening. Soon everything will be fully automated and we will not have to do any work. But have you realized the cons of this? It would immediately tip over the economy and result in war and riots if we do not do anything about it. Big companies are already firing employees to replace them with robots. Are you worried that one of these big companies will someday fire you? You do not have to worry, because these companies have got it in hand.

Are you somewhat scared and disturbed by this new technology after reading this article? That is okay, because everyone has a reason to be wary of it. The way forward, I think, is to use this technology to make the world a better place worth living in today. And the way we achieve that is by working hard and not giving up.

Sources

Nerdynav https://nerdynav.com/chatgpt-cheating-statistics/#:~:text=43%25%20of%20college%20students%20have,per%20a%20Daily%20Mail%20survey.

New York Times https://www.nytimes.com/2023/06/10/business/ai-jobs-work.html


More here:

Should We Be Worried About AI (Artificial Intelligence)? - Medium

Diversity In Artificial Intelligence Could Help Make It More Equitable – Black Enterprise

by Daniel Johnson

December 16, 2023

Of all computer science doctorates, only 1.6% were awarded to Black doctoral candidates.

In 2019, The Guardian cited a study conducted by NYU, which emphasized the critical need for diversity in the field of artificial intelligence. "The urgency behind this issue is increasing as AI becomes increasingly integrated into society," Dana Metaxa, a PhD candidate and researcher at Stanford University focused on issues of internet and democracy, told the outlet. "Essentially, the lack of diversity in AI is concentrating an increasingly large amount of power and capital in the hands of a select subset of people."

As we head into 2024, not much on that front has changed. In November, Wired talked to several prominent women in the artificial intelligence community about why they would not want a seat on the board of OpenAI following Sam Altman's coup. Timnit Gebru, who made waves when Google dismissed her following a warning she issued regarding the company's plans for AI, said that there was a better chance of her returning to Google than joining Altman's board.

"It's repulsive to me," Gebru said. "I honestly think there's more of a chance that I would go back to Google (I mean, they won't have me and I won't have them) than me going to OpenAI."

It is in this subsection of artificial intelligence, the field of AI ethics, where women in tech have found a measure of success, but their work in the field often puts them at odds with the white men who control the boards and companies in Silicon Valley. Meredith Whittaker, the president of Signal, an encrypted messaging app, says the problem is really about giving people from diverse backgrounds power to effect change, as opposed to tokenizing their seats at the table.

"We're not going to solve the issue (that AI is in the hands of concentrated capital at present) by simply hiring more diverse people to fulfill the incentives of concentrated capital," Whittaker told Wired. "I worry about a discourse that focuses on diversity and then sets folks up in rooms with [expletive] Larry Summers without much power."

Black people in particular have felt the brunt of the way artificial intelligence is used by the police, for example.

As BLACK ENTERPRISE previously reported, the city of Detroit was sued by a Black woman who was arrested while eight months pregnant because officers used a facial recognition program to tie her to the crime. And this is just one of many similar incidents.

In a November article for Esquire, Mitchell S. Jackson surmises that this is inescapable as the field of criminal justice insists on pushing to use artificial intelligence, even though the datasets those programs will use are filled with negative biases that will inevitably work against Black people.

Jackson writes, "AI in policing is being implemented into that already flawed system. It's more dangerous to Black and brown people because the persistent lack of diversity in the STEM fields (from which AI comes) is apt to generate more built-in biases against people of color, the same people who are overpoliced and underprotected."

He continued, "AI in policing is hella dangerous to my people because it operates on data (crime reports, arrest records, license plates, images) that is itself steeped in biases."

According to a 2023 report conducted by the Code.org Advocacy Coalition, only 78% of Black high school students had access to foundational computer science courses, compared to 89% of Asian high school students and 82% of white high school students. A 2022 survey from the Computing Research Association says that two-thirds of all computer science doctorates went to non-permanent U.S. residents for whom no ethnic background is available, but almost 19% of those degrees went to white doctoral candidates and 10.1% were awarded to Asian doctoral candidates. Only 1.6% were awarded to Black doctoral candidates, which illustrates why the diversity numbers in technology companies are as abysmal as they are.

Calvin Lawrence, the author of Hidden In White Sight, a book examining how artificial intelligence contributes to systemic racism, spoke to CNN about how the biases in AI are also a product of a lack of access. Lawrence explained that in order to get more Black people into the field, you have to at least present it as a path they can take.

"You certainly don't have a lot of Black folks or data scientists participating in the process of deploying and designing AI solutions," Lawrence said. "The only way you can get them to have seats at the table, you have to educate them."


Visit link:

Diversity In Artificial Intelligence Could Help Make It More Equitable - Black Enterprise

The real issue with artificial intelligence: The misalignment problem – The Hill

Breathtaking advances in technology, from genetic engineering to quantum computing, have opened policy vistas and security challenges that were completely unanticipated even five years ago. The next decade will bring smaller devices, larger networks and anthropomorphic computers that will extend human thought (where they don't replace it) beyond, literally, imagination or belief.

Although AI-doomsday forecasts designed to stoke public anxiety make great headlines and popular podcasts, from the perspective of many Ph.D.-level scientists and engineers, the life-under-borg predictions are strangely overwrought. One of the reasons artificial intelligence (AI) captures so much attention is that it, like satellite navigation and drug discovery, is hardly distinguishable from magic. Large language models like Bard, Copilot and ChatGPT sound like a real person, which makes their wizardry even more fascinating. But they are fraught with errors and sweet-sounding hallucinations, and they will never be infallible.

Our obsession with AI diverts attention and energy away from more imminent and transcendent threats to our society and global human progress. The dangerous misalignment is not of moral values between people and computers, but between people and their ideological opponents. Irrefutable facts and valid (if mistaken) opinions have been replaced by deliberately false ideas injected into our discourse like a potent and addictive narcotic of delusion. If we cannot agree on objective and repeatable scientific insights, or a true historical record, how will we collaborate in the best long-term interests of the country or our planet? Today there is nothing that filters the shibboleths from the facts that are fed to AI computers.

The real hazard is not machine-derived calamity. It is bad human decisions that are accelerated and amplified by AI. There are plenty of things we think we know, from calculating financial risk to determining criminal recidivism, that, in the immortal words of Mark Twain, "just ain't so." Training computers based on discriminatory precedent is irresponsible at best and prejudicial at worst. Repairing flawed ideology in human memory or computer storage is wickedly difficult, and it takes time to focus ethical lenses in both media.

In the real world, and for all of pre-broadcast history, new information or edicts, provable or not, sustainable or not, diffused very slowly. The worst ideas, designed to oppress, exclude, incite, and subjugate, were eventually extirpated, sometimes painfully, from the social system. Good ideas (including the demolition of the bad ones) take even longer, but eventually succeed. As Mahatma Gandhi, Martin Luther King Jr., Golda Meir, and Nelson Mandela reassure us: the defeat of harmful structures is always just a question of time. The ultimate strategic intelligence is that authentic liberty is renewable but not self-executing. It needs debate and nourishes criticism; AI is capable of neither.

Another deficit, and a source of contamination, is the weaponized misinformation inserted by foreign interests into our popular press and social media. Those pathogens are ingested into the training sets that teach generative platforms how to speak and what to say. AI has neither ambition nor judgment. It is just advanced and impressive pattern recognition. Unless we are much more careful and deliberate, it will be years before we expunge toxic spew from the training sets and align them to our expectations and laws.

Finally, the global market and (until recently) our national security depend on sophisticated components that come from China. We taught them ourselves. Policymakers from both parties expected the Middle Kingdom to become a large market and friendly competitor. Instead, they are a fierce commercial rival and America's most worrisome military antagonist. They already train almost 10 times the number of engineering students we do and will soon produce twice as many engineering Ph.D.s. The AI misalignment here is that they have more of it than we do.

The clear and present danger is not artificial intelligence. It is the integrity of its training. 

Like a real brain, AI only learns what we teach it. Today's computer models are vulnerable to absorbing wrong ideas and disproven theories about science, history, economics and philosophy. This is no different than schools that promote creationism, Holocaust denial, mercantilism, and oppression theories cloaked as real science. Dumb ideas are being embedded into massive computer memories (now about as big as a human brain) that indiscriminately produce conclusions that sound real but cannot be independently validated, traced, checked or challenged. The real-world implications are identical: spiritual superstition, entrenched suspicion, and fabricated conflict.

AI has no imagination; it is a mix master of ideas (some good, some bad) that we have already considered. Sometimes the results are interesting, like a new chess move or a previously unseen protein fold, and sometimes they're ridiculous. But hand-wringing over AI itself will lead nowhere. Instead, we should focus on a far superior policy, suggested by the fabulous title of the most influential computer science paper of the last 10 years: "Attention Is All You Need."

No machine created this misalignment, and only human ingenuity will solve it. Our attention should be on listing the ingredients, just like we already do with food, gasoline, medicines and clothes. We need to make sure that we're teaching these machines things that are scientifically proven, socially aligned and integrity tested for both accuracy and fairness.

Peter L. Levin is adjunct senior fellow in the Technology and National Security program at the Center for a New American Security, and CEO of Amida Technology Solutions Inc.

Read more here:

The real issue with artificial intelligence: The misalignment problem - The Hill

Opinion: Here’s how we can safeguard privacy amid the rise of artificial intelligence – The Globe and Mail


A robot called Pepper is positioned near an entrance to a Microsoft Store in Boston on March 21, 2019. Steven Senne/The Associated Press

Ann Cavoukian is executive director of the Global Privacy and Security by Design Centre and the former three-term information and privacy commissioner of Ontario.

A few decades ago, artificial intelligence wasn't nearly as pervasive, and neither were the risks that come with it. Fast forward to today, and both the potential and the pitfalls of this incredible technology are glaringly obvious. It is no wonder the world has become consumed with finding a solution that can mitigate the risks of using data while allowing benefits to be realized in a sustainable way as technology evolves.

That is why the Privacy by Design principles, which I first began developing in the nineties as the way forward, are essential. Newly codified by the International Organization for Standardization, this approach is now the international standard for data privacy management and protection.

The Privacy by Design principles, or ISO 31700-1, have the power to guide us toward a future where innovation does not slow down and privacy isn't an afterthought. Rather, they ensure that privacy is ingrained in the DNA of technology and built into every layer right from the beginning, at the design stage.

While local laws may differ from country to country, principles are borderless. Adhering to this internationally recognized standard will be the only way that our global community can set itself up for a future that leverages data to its fullest potential, in a transparent and responsible way.

The current age has sometimes been accurately referred to as the fourth industrial revolution, where technology, connectivity, analytics and automation inform everything we do in business and at home. But there's one caveat: such a transformation cannot be successful if it comes at the expense of our data privacy.

This digital revolution certainly offers the promise of convenience, and, more importantly, the opportunity to use technology to do social good. But with the infinite volume of data we share with companies (sometimes even unknowingly), there are understandable concerns around how it will be managed and respected.

Canadians can be proud that the first program in the world certified in ISO 31700-1 is TELUS's Data for Good program. It serves as a global example for business, industry and government on how to ensure that data is respected at every stage of innovation.

The groundbreaking program gives researchers access to high-quality, strongly de-identified and aggregated datasets to address societal issues, such as developing efficient transportation systems in response to natural disasters, or supporting evidence-based environmental sustainability initiatives.

The program was built with Privacy by Design principles embedded into every layer to make sure that it allows researchers to access useful data. But it does not put data, and specifically privacy, at risk; far from it. With these principles in place, they are helping to build trust in technology and create a better world.

In an era where citizens recognize and care about their data more than ever before, it is critical that we get this right. There are organizations that recognize the importance of fostering trust in the digital world and lead the way forward by collaborating at an international level to develop the cross-functional co-operation the world needs. Doing so requires a commitment to education, transparency, accountability, responsible innovation, participatory design and dedication to ensuring that respect for data is always the first priority.

We must champion these principles at an international scale to protect our rights and set technology up for sustainable success, especially now, as AI's use grows exponentially and legislation is being developed in jurisdictions across the globe. Without these principles, we simply won't realize the full potential of our innovation.

Privacy by Design is not just a good idea; it is essential to mitigate the potential risks of AI and protect our digital future.

Read the rest here:

Opinion: Here's how we can safeguard privacy amid the rise of artificial intelligence - The Globe and Mail

WilmU Gains Edge in Artificial Intelligence and Machine Learning – DELCO.Today

Wilmington University is on the cutting edge in computer science, part of an elite group of institutions in the Amazon Artificial Intelligence (AI) / Machine Learning Educator program.

"These transformational technologies are very fast moving, and there's a huge demand for people with machine-learning and computer science skills," said Jodee Vallone, assistant chair of Computer Science in the College of Technology. "We are excited to be part of a community of learners in which we have early access to education we might not have otherwise."

Through the AWS (Amazon Web Services) Machine Learning University, WilmU faculty will receive free training and advice, with the option of a curriculum with ready-to-use education tools. Students can receive AWS certification in machine learning, a significant boost in the job market.

Computer science is a growing field, with employment expected to grow 13 percent through 2026, according to the U.S. Bureau of Labor Statistics. The College of Technology is actively recruiting adjuncts to meet the demand. More than 40 educators, including adjuncts, teach more than 1,000 computer science students at WilmU; 30 percent of those students are women, compared to a national average of just over 15 percent.

"Diversity is a key need in computer science and technology and one of the things Amazon is looking for," Vallone said. "It's exciting to be on the cutting edge of this tech revolution."

WilmU is already an education partner with Amazon, providing hourly distribution center employees with access to over 150 certificate, associate, and bachelor's degree programs.

Learn more at Wilmington University.

Read the original:

WilmU Gains Edge in Artificial Intelligence and Machine Learning - DELCO.Today

What Is the Impact of Artificial Intelligence on Astrology? – Shondaland.com

I'll try not to make this article sound like a plea for my job as I delve into AI and the mark it's made on my world as an astrologer and spiritual practitioner. I've always assumed this line of work would be safe from digital interference; after all, it seems impossible to program intuition and human compassion into a computer. Then again, technology has always been as mysterious to me as astrology charts, tarot cards, and psychic phenomena might be to the average person. With the rise of increasingly popular astrology apps and websites that rely on AI to generate horoscopes and predictions, I couldn't resist the temptation to explore these trends for myself.

Before I start poking holes in AI's approach to astrology, I'll admit that technological advancements have made decoding the stars much more accessible than it used to be. "Technology in general has played a huge role in making astrology not only approachable but also as big as it has become in recent years. In ancient times, astrologers did all their birth chart analysis by hand, and now there are many different programs that help speed the process," explains astrologer Narayana Montúfar.

While I wholeheartedly value the convenience planetary tracking programs offer, the information I receive is more data-centric than predictive. I'm not asking the computer how Mercury entering Aquarius will impact my communication skills; I just need to know when this shift will occur so I can tell my readers what to expect. "From my knowledge about AI, it's not reliable in writing opinion-based articles on astrology or horoscopes since interpretation is key," says author and astrologer Lisa Stardust. "For a debriefing in transits, it'll be effective, but remember that transits don't affect everyone the same, which is why an astrologer is vital." When we astrologers are unpacking birth charts or writing your horoscopes, we're looking at more than planetary interaction. We make intuitive judgments on how the cosmos will impact a single person or the collective while taking into consideration themes being dealt with as a society.

Digging further, I began my research by conversing with the AI program du jour, ChatGPT, asking simple questions like "What is my rising sign?" and including all the necessary info needed to generate an answer, such as birth time, date, and location. I'm immediately underwhelmed as my rising and moon signs are incorrectly predicted, but I also feel vindicated at this technological misstep. To its credit, the application suggests consulting with a professional astrologer for more accurate insights, an added touch I greatly appreciated.

In March 2023, Google Docs launched a Help Me Write application, leading to the ever-present pen and star icon in the top left margin of any document I've drafted since then. I'm curious whether it offers astrological accuracy, wondering if any of my fellow astrology writers will utilize this tool as a way to cut corners or save time. I click the button and ask what Scorpios can expect in 2024. After a few seconds, I receive the following write-up:

Scorpios can expect a year of transformation and growth in 2024. The planet Pluto will be in your sign for the entire year, which will bring up deep issues and emotions that need to be addressed. This can be a challenging time, but it will also be an opportunity for you to grow and evolve. Jupiter will also be in your sign for part of the year, which will bring good luck and opportunities. This is a great time to start new projects or take risks.

I'll be frank here and say this is straight-up malarkey and nonsense. Pluto will move from Capricorn to Aquarius in 2024 with no appearance in Scorpio. In fact, Pluto hasn't been in Scorpio since 1995 and won't return to the sign of death and rebirth until the year 2229. Meanwhile, Jupiter will be happily housed in Taurus until May 25, 2024, before it shifts into Gemini, where it will ride out the rest of the year. So, that's a total strikeout for AI's astrological know-how in this instance.

With that being said, there are companies programming artificial intelligence to offer more specific and accurate astrological predictions. The Co-Star app living in many of our phones isn't delivering advice from the mind of a human practitioner. Rather, the information is generated by an algorithm using astrological methods and NASA's planetary data. While I have never connected with the app in a way that was meaningful, I'm aware of its extreme popularity, and many of my clients swear by it. I will begrudgingly state that it does offer a more complex and complete review of what someone might be experiencing at any given time since it takes into consideration the entire birth chart of its users. Upon downloading, you're asked to contribute your birth date, time, and location, allowing the program to track significant transits that are unique to you. This provides a slight edge over daily horoscopes if you don't mind taking advice from a robot.

I learned of an astrology machine residing at the Grove in Los Angeles, the latest promotional brainchild of Co-Star and perhaps the most cutting-edge AI astrology tool available at the moment. On a recent trip to visit my partner's family for Thanksgiving, I dragged him to check out this celestial droid with me, in the name of research, of course. After locating the boxy gray console, I couldn't help but feel slightly excited. The sensation was reminiscent of being a child with a quarter in front of a Zoltar machine.

I plugged my birth chart info into the screen, then chose from its selection of preprogrammed questions. I cheekily smiled at my significant other as I selected the query "Am I actually in love?" It took my photo, lights flickered, then a receipt spit out with a write-up validating that I am indeed entranced with my companion. Upon further inspection, I saw that transits were documented at the bottom, noting that Venus was in conjunction with my natal Mars, which I double-checked and found to be true.

I nudged my partner to ask the same question and provided him with his birth time, information I got from his mother a few months into our relationship, because that's what falling in love with an astrologer looks like. It's a good thing I am confident in our bond, because the computer wasn't convinced of his feelings, giving a firm "No, you are not actually in love." My biggest annoyance here is that he now has the pleasure of teasing me with this prediction. I'm also highly aware of the fact that a solid relationship reading requires a look at both partners' natal charts. This process viewed our planetary placements separately and made a judgment call. On a more serious note, it seems kind of reckless on Co-Star's end.

I've given tarot readings to friends since I was 10, started a coven during my 13th birthday party, and have eight years under my belt as a professional adviser. I know firsthand how much stock people put into faith and the readings they receive. While many are sensible enough to understand the importance of making their own choices, I've definitely had clients who wanted me to make decisions for them. I shudder to think that someone could walk up to this device, ask a question about love, money, or a career path, and make life-altering choices accordingly. I ponder what might have happened if a younger, less experienced couple had a reading similar to the one my sweetheart and I had. Would a fight, or worse, ensue?

I decided to ask two more questions. The results varied, and I couldn't help but roll my eyes when it was stated that I should choose another line of work. AI claimed I was holding on to my current position out of fear, when in reality I'm grateful every day for the work I do because I love it. I will say that when I asked what I should be famous for, it replied that I would benefit from sharing my expertise with others, which, hello! If this session had been with a human astrologer, those conflicting answers wouldn't have emerged.

The problem with artificial intelligence offering readings is that there seems to be a lack of discernment and consistent results. When I'm interacting with a client in real time, I'm picking up on their energy and emotional disposition and making judgments based on these subtle cues. Oftentimes, people come to me when they are in fragile states, so my job is to hold space in a way that is compassionate and nurturing in order to allow them to feel comfortable in their vulnerability. "If AI can be an empathetic and intuitive reader, then that would be amazing," Stardust adds, "but I'm not sure if robots are built for that." Though it can be amusing to check out computer-generated readings for the fun of it, it may not be the best call for major life guidance.

Nothing compares to words of comfort when you're feeling blue, anxious, or scared, and AI simply isn't advanced enough to offer spiritual support in this capacity. "A robot will always lack the human touch and human experience, regardless of how much it is trained," adds Montúfar. "The reason an astrology reading is so powerful is because [we] astrologers have experienced astrology very intimately. We know how certain astrological transits affect us because we ourselves have experienced them in our lives and at a personal level. For that reason, we are able to sympathize with clients and understand their perspective." I'm not suggesting you should ditch your favorite astrology apps, especially when there's plenty of entertainment to be had. Just remember to take the advice with a grain of salt, and consider booking with a reputable practitioner when bigger questions arise.

Renée Watt is a Pacific Northwest-based professional psychic, astrologer, and witch. Her mystical insights have been featured in Vogue, Cosmopolitan, and InStyle. She hosts the weekly podcast The Glitter Cast, which features celebrity ghost stories and interviews with leading professionals in her field.


See original here:

What Is the Impact of Artificial Intelligence on Astrology? - Shondaland.com

Artificial Intelligence in the Workplace: How Could AI Affect NoMi Businesses? – northernexpress.com

Local experts weigh in on AI applications in real estate and marketing By Al Parker | Jan. 27, 2024

The role of artificial intelligence (AI) is growing dramatically, with new uses popping up almost overnight, like morels after a spring rain. For some, AI takes the stress out of daily work tasks. For others, AI has actually taken their jobs.

Generative AI (that is, artificial intelligence like ChatGPT that creates content) has been the latest game changer. Roughly 300 million jobs worldwide are expected to feel the benefits (and challenges) of the technology, and as many as 85 million jobs could be replaced by 2025.

How will AI affect our lives tomorrow, next year, or 10 years from now? Will it be a blessing or a curse for businesses and workers across the region? We asked a few local experts in the field for their thoughts.

Chris Linsell is a Traverse City-based realtor, content strategist, writer, real estate analyst, and self-described technology pundit who relies on AI every day.

"There are certain tasks that AI is very good at, even at this stage in its development," says Linsell. "I use it daily to aid in my production of written content. It's a part of the workflow: all content gets a pass through AI to check for spelling, grammar, punctuation, missing words, etc. AI isn't writing my content for me, but it is making sure I'm not making any easy mistakes."

Linsell believes AI is well suited at the present time for chores like data entry, simple communication, basic content generation, and objective question-answer interactions. "In the real estate industry, a big chunk of a professional's time is spent executing tasks, many of which are largely administrative," he explains.

"However, the true value of a real estate professional is rooted in their ability to serve their clients and build relationships with them," Linsell says. "The rise of AI-powered tools will allow real estate professionals to spend dramatically less time executing tasks and dramatically more time focusing on their purpose: serving the real estate needs of their clients and the community at large."

He continues, "For realtors whose value is rooted in purpose, this is going to allow them to flourish. For realtors whose value is rooted in their ability to execute tasks, well, they're going to be in trouble."

But does he think one day soon AI-powered tech will be handling home sales from end to end? Not so much.

"AI will not likely, at least not in our lifetimes, ever replace humans when the task requires specific, unique experience and insight," Linsell predicts. "Remember, the current AI models work by aggregating all the experience and input across the internet in order to answer questions. Which works great when your questions are things like 'How much flour do I need to make a batch of 20 cookies,' but terrible when the questions are things like 'What kind of cookies should I bake for my sister, who has expressed her preferences over the many years of knowing each other?'"

In the real estate world, AI has long had applications for agents, buyers and sellers alike; think of the Zestimate tool on Zillow, for example, which estimates a home's value based on a number of factors. That's the kind of AI Linsell expects to see significantly integrated into real estate technology in the next two years.

Most notably, searching for a home will likely get a lot easier and more efficient, he says, since AI will allow searchers to identify preferences based on criteria like what's in the listing photos. "Additionally, I think we'll see the tools real estate professionals are using automate much of the administrative tasks. Gone will be the days when realtors are trapped behind a computer all day."

But what if you are someone whose job is all about being trapped behind a computer?

CNBC reports more than one-third (37%) of business leaders say AI replaced workers in 2023, and an article from Forbes listed media and marketing as two of the industries most impacted by AI. Indeed, copywriters and content creators have been cut loose left and right (for example, both CNET and Insider trimmed 10 percent of their staff last spring), with some companies saying they'll now rely on AI-generated content that is checked over by a real human.

Oneupweb, a full-service digital marketing agency based in Traverse City, is taking a more intentional approach to using the powerful technology.

"As an experienced team of digital marketers, we're experimenting and evaluating how to use AI assistants while prioritizing human experience," says Oneupweb's Brand Manager Tessa Lighty.

Rather than leap blindly into the AI fray, the firm has developed an in-house manifesto on using AI responsibly. These are their eight guiding principles:

1. We believe artificial intelligence (AI) is a valuable, ever-changing tool we can use to expedite, streamline and multiply our efforts.
2. We understand the limiting and concerning aspects of AI and will consider those factors in our decisions.
3. We believe AI to be assistive but not autonomous; no final product will be 100% produced by AI.
4. Individual team members are accountable for decisions and actions produced by AI under their instruction.
5. Continued transparency, education and experimentation is critical to maintaining a proactive and productive approach to AI.
6. We believe humans are integral to producing creative, engaging, intelligent, human-centered content in all forms.
7. We will prioritize educating our teams, our clients and our industry on the responsible use of AI tools.
8. We believe AI is not, and never will be, a replacement for humanity.

On that last note, Lighty says, "I don't necessarily think that we really feel threatened [by AI] at this point. In our agency, we haven't found an AI that is able to replace a human."

She says Oneupweb uses generative AI to create outlines, help with brainstorming, and build the base of an image that would then be heavily tweaked by a human staff member. (If you've ever seen the many-fingered hands produced by art bots, you'll understand why.) Lighty points to improvements in image editing, like Photoshop's Generative Fill, and tools like Grammarly as the place where AI and humans work best together.

"But we're not having Grammarly write entire books for us; we're simply using it to check our spelling," she explains. "AI is really good for very black and white, cut and dry items. It's really great for data analysis, things like that. It can help speed up processes. But at the end of the day, the stuff that is being created is better created by a human."

There are other drawbacks, too, when using AI, which is part of the reason Oneupweb treads lightly. According to Lighty, there have been a lot of changes to AI within the last year, and there's not a lot of regulation over it. She points to litigation over how intellectual property is being used by AI when it comes to everything from image and text generation to Google responses to search questions.

"The world has changed forever because of it. There's just no doubt about it," Lighty says. But for the foreseeable future, Oneupweb plans to work with AI rather than let AI do the work.

She concludes, "A quote that sticks with us as an agency is that marketers will not be replaced by AI, but marketers who use AI will replace those who do not use it. So it's all about learning how to use it, how to make it better."

More:

Artificial Intelligence in the Workplace: How Could AI Affect NoMi Businesses? - northernexpress.com

North Korea’s Artificial Intelligence Research: Trends and Potential Civilian and Military Applications – 38 North

Introduction

The advent of artificial intelligence (AI), particularly its sub-field machine learning (ML), has witnessed substantial global progress over the past decade, fueled by advancements in computation power and a surge of data accessibility since the 2010s. While many nations have significantly invested in these technologies for a myriad of civilian and military applications, assessing North Korea's AI/ML landscape poses a unique challenge. Following the development of its Eunbyul AI program in 1998, the country's increasingly isolated, secretive nature and constraints posed by the current sanctions regime would arguably make evaluations of its current capabilities extremely speculative. However, while attempts to procure hardware for AI development may be stymied, open-source information, including scientific journal articles and state media, suggests North Korea is actively developing and promoting AI/ML technology across various sectors to keep abreast of global progress.

As part of a comprehensive review project, this analysis presents an initial survey of North Korea's AI/ML research, shedding light on the country's AI/ML development efforts across North Korea's government, academia and industry. Among those, it is worth noting that North Korean researchers have applied AI/ML to sensitive applications, such as wargaming and surveillance, and continued scientific collaboration with foreign scholars until recently. Given that AI/ML is a software-centric technology that can be transferred via intangible means, called intangible transfer of technology (ITT), it is important to monitor such activities and, if necessary, implement measures to mitigate potential sanctions risks within the academic and private sectors. This can be achieved by enhancing academic scholars' awareness of such risks, particularly in the realms of international conferences and cloud computing services.

Overview of North Korea's AI/ML Development

North Korean efforts to develop AI/ML have been consistently seen over three decades across various sectors. The DPRK's foray into AI/ML appears to have commenced in the 1990s, primarily to address nationwide challenges, from forecasting air pollution levels to better preparing for droughts, monitoring hydro turbine vibration and, most recently, applying AI/ML during the COVID-19 pandemic to create a model for evaluating proper mask usage and prioritizing clinical symptom indicators of infection.[1]

In recent years, the state has placed a strong emphasis on the development of AI/ML as part of an informatized/digitized economy, as reflected in its Socialist Constitution. Specifically, North Korea amended Article 26 of the Constitution in April 2019 to add informatization to its core lines of economic effort, alongside Juche orientation, self-reliance, modernization and scientization, to achieve a socialist independent national economy. In that same year, state media reported that the country believes its digital economy's growth is driven by advancements in AI and that data is more valuable than gold and crude oil in the era of AI.

To spearhead these efforts, North Korea established the Artificial Intelligence Research Institute under the Bureau of the Information Industry Guidance in 2013, which has been incorporated into the Ministry of Information Industry since 2021. This initiative aims to elevate the Bureau's authority to the ministry level, thereby actively promoting the informatization and digitalization of the country. Previously, these efforts were impeded by internal competition and a lack of cooperation among government agencies.

In academia, North Korea has embraced AI/ML across various educational levels. In 2014, Kim Il Sung University established the High-Technology Development Center (renamed the Center for Advanced Technology Research and Development [CATRAD]), focusing on cutting-edge technologies such as voice and text recognition, simultaneous interpretation and big data analysis. Since 2018, many universities have followed suit and introduced AI-focused programs.

At the enterprise level, North Korean companies have recently been promoting commercial products that employ AI/ML technologies. In 2020, the Mangyongdae Information Technology Corporation launched two mobile phones, the Azalea 6 and 7 (Jindallae 6 and 7). The company claims to have successfully incorporated technologies for fingerprint, voice, facial and text recognition, based on deep neural networks (DNN), into these devices. The company is staffed by dozens of researchers, primarily from Kim Il-sung University and Kim Chaek University of Technology, and is currently promoting domestic technical cooperation with other research institutes. In addition, according to a flyer posted in North Korean media, the Yalu River Technology Development Company has applied DNN to its security surveillance systems and intelligent IP cameras. The company claims to actively promote collaborative research and development with renowned IT companies from over 20 countries (Appendix 1).[2]

North Korea has demonstrated a comprehensive approach to developing its AI/ML capabilities across sectors, encompassing government initiatives, academia and commercial applications. There is evidence of concerted efforts to leverage these technologies in areas such as nuclear safety and wargaming to achieve its broader economic and technological goals, as discussed in the case studies that follow. It should be noted that the current sanctions regime limits scientific collaboration with North Korea, as transfers of knowledge through collaboration with foreign scholars pose risks of dual-use applications, even for non-military and non-nuclear purposes.

Study Highlight 1: Civilian Application – Nuclear Safety

In 2022, the North Korean nuclear scientists Ho Il Mun, So Chol and others published a study titled "PWR core loading pattern optimization with adaptive genetic algorithm" in the academic journal Annals of Nuclear Energy. Genetic algorithms (GA) are a machine learning technique that aims to find optimized solutions to a problem by mimicking the evolution of genes through operations such as mutation and crossover. In a pressurized water reactor (PWR), ensuring the optimal arrangement of fuel rods, known as the fuel loading pattern, is essential for maintaining reactor safety. Specifically, by optimizing this pattern, nuclear operators can secure a necessary safety margin to prevent particular fuel rods from overheating. This optimization involves arranging fuel rods with varying properties, such as levels of enrichment, and mitigates the risk of nuclear accidents, ultimately contributing to the overall safety of the reactor and an increase in power generation. The study concludes that their version of GA is faster and more effective than other referenced GAs in finding optimal fuel loading patterns.
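
The general shape of such an algorithm is standard, even though the study's internals are not public. The Python sketch below is purely illustrative and is not the authors' method: a loading pattern is encoded as a permutation of fuel assemblies, a toy neighbour-mismatch score stands in for the reactor physics simulation a real study would use, and new candidates arise through the crossover and mutation operations described above.

import random

# Toy genetic algorithm for a loading-pattern-style problem.
# The "physics" is a placeholder: real fitness would come from a core
# simulator, not from this adjacent-enrichment-mismatch heuristic.
ASSEMBLIES = [1.6, 2.4, 2.4, 3.1, 3.1, 3.1, 4.0, 4.0]  # assumed enrichments

def fitness(order):
    pattern = [ASSEMBLIES[i] for i in order]
    # Penalize adjacent assemblies with very different enrichment,
    # a crude stand-in for local power peaking.
    return -max(abs(a - b) for a, b in zip(pattern, pattern[1:]))

def mutate(order):
    # Swap mutation: exchange two assembly positions.
    order = order[:]
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]
    return order

def crossover(a, b):
    # Order crossover: keep a slice of parent a, fill the rest from b.
    i, j = sorted(random.sample(range(len(a)), 2))
    middle = a[i:j]
    rest = [g for g in b if g not in middle]
    return rest[:i] + middle + rest[i:]

population = [random.sample(range(len(ASSEMBLIES)), len(ASSEMBLIES))
              for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print([ASSEMBLIES[i] for i in best])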

As faculty members of the Energy Science Department of Kim Il-sung University, Ho Il Mun and So Chol have primarily focused on the civilian applications of nuclear technology. The two scholars appear to have been collaborating on fuel assembly and burnup analysis in the context of PWRs or light water reactors (LWRs) since 2005 and have worked together on more than ten projects (Appendix 2). Their primary focus for scientific simulations is 1,000 MWe PWR reactors, with specific design data presented in Appendix 3.[3]

Study Highlight 2: Military Application – Wargaming/Battle Simulation

In 2022, the North Korean journal Information Science indicated that a research project had been conducted focusing on the development of a wargaming simulation using a machine learning method called reinforcement learning (RL).[4] In RL, an agent is trained to maximize rewards in a given environment through trial and error, aiming to achieve goals set by an engineer. For instance, consider an engineer who wants to develop an algorithm that enables a robot to ride a swing to reach the highest possible height. In this case, the robot capable of bending its knees acts as the agent, and reaching the maximum height is the goal for which the reward is given. The environment is the swing ride, where the robot continually adjusts its knee-bending timing to propel the swing optimally, thereby accumulating maximum rewards. As exemplified by Google's AlphaGo, RL is extensively employed across a myriad of domains necessitating decision-making and optimization, potentially extending its utility to military applications (Appendix 4).
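
For readers unfamiliar with the mechanics, the toy Python example below shows this trial-and-error loop in its simplest tabular form (Q-learning): an agent on a short one-dimensional track learns, purely from a reward at the goal cell, which direction to move. It illustrates only the generic RL recipe sketched above, not anything specific to the North Korean work.

import random

# Tabular Q-learning on a toy one-dimensional track: the agent starts
# at cell 0 and receives a reward only upon reaching the goal cell.
N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)                       # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

def choose_action(state):
    # Epsilon-greedy with random tie-breaking.
    if random.random() < epsilon or Q[(state, -1)] == Q[(state, +1)]:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(500):
    state = 0
    while state != GOAL:
        action = choose_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning update toward reward plus discounted lookahead.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy steps right in every cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})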

The study indicates that North Korean scientists opted for RL for wargaming purposes because they view the running speed of RL as faster than that of other methods. While the specifics of the agent and environment are not explained, information related to the rewards provides insight into what North Korea aims to achieve with this simulation. Specifically, the study established three criteria for reward calculation: victory in battle, the ratio between the number of artillery shells landed on the enemy and the number of shells fired by the agent, and the ratio of the agent's survival time to total conflict duration.[5] This suggests North Korea's conceived wargaming environment might be actual conflicts at a tactical level involving artillery shells.
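
The journal reportedly gives only those three criteria, not how they are weighted or combined. As a purely hypothetical illustration, a composite reward could be computed as below; every weight, name and the linear form are assumptions, not details from the study.

# Hypothetical composite reward built from the three reported criteria:
# battle victory, shells-on-target ratio, and survival-time ratio.
# The weights and the linear combination are assumptions.
def battle_reward(won, shells_on_target, shells_fired,
                  survival_time, total_duration,
                  w_victory=1.0, w_accuracy=0.5, w_survival=0.5):
    accuracy = shells_on_target / shells_fired if shells_fired else 0.0
    survival = survival_time / total_duration if total_duration else 0.0
    return (w_victory * (1.0 if won else 0.0)
            + w_accuracy * accuracy
            + w_survival * survival)

# Example: a won engagement, 12 of 40 shells on target, surviving
# 300 of 400 time steps -> 1.0 + 0.5*0.30 + 0.5*0.75 = 1.525
print(battle_reward(True, 12, 40, 300, 400))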

There are also clues for assessing potential military considerations of the simulation. In their research, the North Korean authors referenced a study titled "Adaptive Human Behavior Modeling for Air Combat Simulation," conducted by Chinese scholars and published on the Institute of Electrical and Electronics Engineers (IEEE) platform. The Chinese lead author, Jian Yao, is associated with the Academy of Military Sciences or the National University of Defense Technology (NUDT), which is listed on the US trade denylist called the Entity List. Her research primarily focuses on military applications, as evidenced by her works "Analyzing Ballistic Missile Defense System Effectiveness Based on Functional Dependency Network Analysis" and "Weapon Effectiveness Simulation System (WESS)" (Appendix 5). Given Yao's consistent focus on military applications and her affiliation with a military organization, it is plausible that North Korea may also aim to develop wargaming simulations applicable to the military domain, beyond the current embryonic gaming simulations, to enhance its strategic planning.

Sanctions and Export Control Implications

As evidenced by the aforementioned cases, despite the current sanctions regime, especially the United Nations Security Council Resolution (UNSCR) 2321 of 2016 and the prohibition of scientific collaboration with North Korea, unintended risks abound.

Regarding North Korean companies, there is a significant risk that any international cooperation on AI with Yalu River Technology Development Company could lead not only to a breach of sanctions, but also to being listed on the US Entity List. US export regulations allow the Department of Commerce to add entities whose activities act against US foreign policy interests, including human rights abuses. For this reason, a Chinese tech company, HikVision, was listed in 2019 for its surveillance products allegedly used in human rights abuses in China. North Koreas Yalu River Technology Development Company currently advertises its scientific collaborations with enterprises from roughly 20 countries (Appendix 1). If such cooperation exists and continues, they may lose access to US technologies and products as long as the US continues to view North Korea as a country with human rights concerns.

As for the nuclear safety-related studies, there has been no record of international academic collaboration involving the authors. However, it is worthwhile to keep monitoring scholars' academic activities concerning applications of AI. For example, the authors' expertise in and activities related to burnup analysis (which concerns fuel containing plutonium isotopes desirable for nuclear weapons) could shed light on current and potential proliferation activities.

North Korea's AI-driven military study also suggests significant implications for sanctions and export controls. First, the North Korean authors collaborated with Chinese scholars associated with a company currently under US financial restrictions. The North Korean journal does not provide detailed information about the lead author, Ri Jong Hyok. However, an open-source database shows a publication history for an individual with the same name specializing in AI/ML. Specifically, the database indicates that Ri Jonghyok (author identifier: 57203266720), affiliated with Kim Il-sung University, coauthored a few studies with Chinese scientists between 2018 and 2020.

Moreover, one of these collaborators, Wenliang Huang (Author identifier: 56161896800), is associated with China Unicom Ltd., an organization currently subject to financial restrictions imposed by the US Treasury Department. Unicom is also included in the Non-Specially Designated Nationals Chinese Military-Industrial Complex Companies List (NS-CMIC List) since the US considers activities of its related organizations to be detrimental to US national security and foreign policy interests.

Second, Ri's publication history hints at potential channels for technology transfers. In 2023, Ri published a paper titled "Target adaptive extreme learning machine for transfer learning." Transfer learning is a technique for fine-tuning a pre-trained model to enhance its performance under specific conditions. Unlike traditional machine learning methods, transfer learning does not require the entire training dataset used for the pre-trained model. Instead, it requires only the data a developer is interested in to further train the pre-trained model for their specific needs or circumstances.

In this regard, transfer learning offers several advantages, including reduced training time and resource requirements, such as data storage and computational power. Moreover, it is theoretically feasible to fine-tune a model initially developed for civilian applications for military purposes. For instance, a model trained by foreign scholars for object detection purposes in aerial environments could be adapted for further fine-tuning that uses data pertaining to military objects that North Korea is interested in. The scope of military simulation can also be expanded through transfer learning to cover more complex combat situations. For example, an agent trained in 2-versus-1 air combat scenarios could be transferred to 2-versus-2 scenarios for further training.
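
As a concrete illustration of why transfer learning lowers the hardware bar, the snippet below shows the standard fine-tuning pattern in PyTorch; the framework choice and the class count are assumptions for illustration, and nothing here reflects the tooling of the North Korean paper. The pre-trained backbone is frozen, and only a small replacement head is trained on the new task's data.

import torch
import torch.nn as nn
import torchvision.models as models

# Standard transfer-learning pattern: reuse an image backbone pre-trained
# on a generic dataset, freeze its weights, and train only a new head on
# the target task. This needs far less data and compute than training
# from scratch, which is the point made above.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False              # freeze the pre-trained backbone

NUM_TARGET_CLASSES = 5                       # hypothetical new task
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# Only the new head's parameters are optimized during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)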

The benefits of transfer learning highlight potential risks associated with technology transfers via intangible means, such as sharing electronic files (a pre-trained model, in this context) through email and cloud computing services. Many cloud computing services, such as Google Colab, Microsoft Azure and other platforms, offer AI/ML development environments. These environments are backed by computing power, including graphics processing units (GPUs), Tensor Processing Units (TPUs) and NVIDIA's A100 and H100 units. Therefore, the potential proliferation risks linked with ITT and cloud computing services could negate the effectiveness of a sanctions regime and export controls that mainly focus on the transfer of physical goods.

Finally, international conferences are platforms that can be (and have already been) exploited by North Korean scholars to seek technical assistance from foreign researchers. The IEEE, for example, publishes numerous studies on AI/ML and military simulations and hosts various international conferences involving such topics (Appendix 4).[6] If North Korean scholars attend these conferences to seek technical advice from foreign experts, it could lead to potential violations of sanctions and export controls. It is because of this risk of ITT that UNSCR 2270 prohibits the transfer of any items, including both physical goods and technologies, that could enhance the operational capabilities of the North Korean military, and that export control regulations of the US, the European Union (EU) and South Korea prohibit their nationals from providing technical assistance to foreigners posing military risks through intangible means such as conversation, visual inspection or demonstration.

Conclusion and Policy Recommendations

North Korea's recent endeavors in AI/ML development signify a strategic investment to bolster its digital economy. This commitment is underscored by constitutional amendments fostering the digitization and informatization of its socialist economy, coupled with institutional reforms to address competing self-interest across government offices. The promotion of AI also extends across academia, as evidenced by the establishment of AI-focused programs in secondary education and universities. North Korean scientific projects often focus on nationwide concerns, such as pandemics and environmental issues, and enterprises have recently begun launching commercial products incorporating AI/ML technologies, alongside nuclear safety research, demonstrating a multi-faceted approach.

However, the inherent dual-use nature of AI/ML technologies presents numerous challenges. For instance, North Korea's pursuit of a wargaming simulation program using RL reveals intentions to better comprehend operational environments against potential adversaries. Furthermore, North Korea's ongoing collaborations with foreign scholars pose concerns for the sanctions regime. Moreover, the conversion of civilian AI technology to military applications poses a substantial risk, particularly in cloud computing environments that sidestep the need for specialized hardware. And finally, international conferences could be exploited by North Korea to seek technical assistance from foreign scholars.

To effectively address the potential sanctions and proliferation risks posed by North Korea's AI/ML endeavors, national authorities should proactively engage with cloud computing service providers and the academic/professional associations that host international conferences on emerging technology. Discussions with cloud computing service providers should center on raising awareness of potential threats posed by North Korea and considerations for enhancing customer screening during onboarding. For conference hosts, deliberations should revolve around devising ways to apprise scholars of the risks associated with international collaborations, ensuring they do not inadvertently support undisclosed military applications in violation of UN and other unilateral sanctions while safeguarding academic freedom.

***

DOWNLOAD PDF OF APPENDICES HERE, OR VIEW BELOW

Appendix I. North Korea's Commercial Products Employing AI/ML.

Appendix II. Publication List of Ho Il Mun and So Chol.

Appendix III. Visual information on 1,000 MWe PWR.

Appendix IV. Examples of AI/ML/RL Studies for Potential Military Applications.

Appendix V. List of Jian Yao's Studies.

See the original post here:

North Korea's Artificial Intelligence Research: Trends and Potential Civilian and Military Applications - 38 North

New Texas Center Will Create Generative AI Computing Cluster Among Largest of Its Kind – The University of Texas at Austin

AUSTIN, Texas – The University of Texas at Austin is creating one of the most powerful artificial intelligence hubs in the academic world to lead in research and offer world-class AI infrastructure to a wide range of partners.

UT is launching the Center for Generative AI, powered by a new GPU computing cluster that is among the largest in academia. The cluster will comprise 600 NVIDIA H100s. GPUs, short for graphics processing units, are specialized devices that enable rapid mathematical computations, making them ideal for training AI models. The Texas Advanced Computing Center (TACC) will host and support the cluster, called Vista.

"Artificial intelligence is fundamentally changing our world, and this investment comes at the right time to help UT shape the future through our teaching and research," said President Jay Hartzell. "World-class computing power combined with our breadth of AI research expertise will uniquely position UT to speed advances in health care, drug development, materials and other industries that could have a profound impact on people and society. We have designated 2024 as the Year of AI at UT, and a big reason why is the combination of the trends and opportunities across society, our talented people and strengths as a university, and now, our significant investment in the Center for Generative AI."

The growth of ChatGPT and similar generative AI technologies has put pressure on many industry groups, health care organizations and public agencies to work with academic institutions to harness AI for innovation. Experts from the center will collaborate with external partners to develop and apply generative AI solutions to challenging problems across industries.

With a core focus on biosciences, health care, computer vision and natural language processing (NLP), the new center will be housed within UT's interdisciplinary Machine Learning Laboratory and co-led by the Cockrell School of Engineering and the College of Natural Sciences. In recognition of AI's growth across industries, it also includes faculty members and support from Dell Medical School, as well as researchers from the School of Information and McCombs School of Business.

"We believe academia should continue to play a leading role in the development of AI," said Alex Dimakis, director of the center and professor in the Cockrell School's Chandra Family Department of Electrical and Computer Engineering. "Open-source models, open data sets and interdisciplinary peer-reviewed research is the safest way to drive the upcoming AI revolution. Universities are uniquely suited to shape this ecosystem, and we are excited to be on the frontier of generative AI here in Austin."

The University is currently home to the National Science Foundation-supported AI Institute for Foundations of Machine Learning (IFML) and TACC's Frontera, the most powerful supercomputer at a U.S. university. Center for Generative AI leaders envision applying fundamental algorithmic resources to solve large-scale applied problems, bridging academic and industrial goals for artificial intelligence.

"UT has established a tremendous foundation in AI," said Adam Klivans, a professor in the College of Natural Sciences' Department of Computer Science and director of the Machine Learning Laboratory. "With this investment, we can accelerate the process of scientific discovery and find new solutions to major engineering challenges that would otherwise take years of experimental work."

The center offers a core pillar for advancing AI technologies while building on momentum generated in recent years.

Go here to see the original:

New Texas Center Will Create Generative AI Computing Cluster Among Largest of Its Kind - The University of Texas at Austin

More support for artificial intelligence start-ups to boost innovation – European Union

The Commission is stepping up its support to European start-ups and small and medium enterprises (SMEs) so they can develop trustworthy artificial intelligence (AI) that respects EU values and rules.

The new AI package includes a broad range of measures to support these start-ups and innovation, among them a proposal to provide privileged access to supercomputers for AI start-ups and the broader innovation community.

The Commission will also establish two European Digital Infrastructure Consortiums, together with several Member States. These groups will develop common European infrastructure in language technologies and state-of-the-art AI tools to help cities optimise processes, from traffic to waste management.

For years, the Commission has been facilitating and enhancing cooperation on artificial intelligence across the EU to boost its competitiveness and ensure trust based on EU values. The EU AI Act agreed in December 2023 is the world's first comprehensive law on artificial intelligence and will support the development, deployment and take-up of trustworthy AI in the EU.

For more information

Excellence and trust in artificial intelligence

AI Pact

The European High-Performance Computing Joint Undertaking

Common European Data Spaces

Press release: Commission launches AI innovation package to support Artificial Intelligence start-ups and SMEs

European Digital Infrastructure Consortium (EDIC)

Continued here:

More support for artificial intelligence start-ups to boost innovation - European Union