AI and the law: Imperative need for regulatory measures – ft.lk

Using AI Technology, without the needed laws and policies to understand and monitor it, can be risky

"The advent of superintelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn't malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."1

Generative AI, the best-known example being ChatGPT, has surprised many around the world because its responses to queries are so human-like. Its impact on industries and professions, including the legal profession, will be unprecedented. However, there are pressing ethical and even legal matters that need to be recognised and addressed, particularly in the areas of intellectual property and data protection.

Firstly, how does one define Artificial Intelligence? AI systems can be considered information-processing technologies that integrate models and algorithms, producing the capacity to learn and to perform cognitive tasks, leading to outcomes such as prediction and decision-making in material and virtual environments. Though in general parlance we have referred to them as robots, AI is developing at such a rapid pace that it is bound to become far more independent than one can ever imagine.

As AI migrated from Machine Learning (ML) to Generative AI, the risks grew exponentially. The release of generative technologies has not been human-centric: these systems produce results that cannot be exactly proven or replicated, and they may even fabricate and hallucinate. Science fiction writer Vernor Vinge speaks of the concept of technological singularity, where one can imagine machines with superhuman intelligence outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short-term impact depends on who controls it, the long-term impact depends on whether it can be controlled at all.2

The EU AI Act and other judgements

Laws and regulations are in the process of being enacted in some of the developed countries, such as the EU and the USA. The EU AI Act (Act) is one of the main regulatory statutes that is being scrutinised. The approach that the MEPs (Members of the European Parliament) have taken with regard to the Act has been encouraging. On 1 June, a vote was taken where MEPs endorsed new risk management and transparency rules for AI systems. This was primarily to endorse a human-centric and ethical development of AI. They are keen to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory and environmentally friendly. The term AI will also have a uniform definition which will be technology neutral, so that it applies to AI systems today and tomorrow.

Co-rapporteur Dragos Tudorache (Renew, Romania) stated, "We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement."3

The Act has also adopted a Risk-Based Approach to categorising AI systems, and has made recommendations accordingly. The four levels of risk are:

Unacceptable risk (e.g., remote biometric identification systems in public),

High risk (e.g., use of AI in the administration of justice and democratic processes),

Limited risk (e.g., using AI systems in chatbots) and

Minimal risk (e.g., spam filters).

Under the Act, AI systems categorised as Unacceptable Risk will be banned. For High Risk AI systems, the second tier, developers are required to adhere to rigorous testing requirements, maintain proper documentation and implement an adequate accountability framework. For Limited Risk systems, the Act requires certain transparency features which allow users to make informed choices regarding usage. Lastly, for Minimal Risk AI systems, a voluntary code of conduct is encouraged.
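Purely as an illustration of the tiered structure described above, the Act's approach can be pictured as a lookup from risk category to obligation. The category names follow the Act; the obligation wording below is a hypothetical paraphrase for demonstration, not statutory text.

```python
# Illustrative sketch only: tier names follow the EU AI Act's risk-based approach
# described above; the obligation strings are a hypothetical paraphrase, not legal text.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g., remote biometric identification in public
    HIGH = "high"                  # e.g., AI in the administration of justice
    LIMITED = "limited"            # e.g., chatbots
    MINIMAL = "minimal"            # e.g., spam filters


OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "banned outright",
    RiskTier.HIGH: "rigorous testing, documentation and an accountability framework",
    RiskTier.LIMITED: "transparency features so users can make informed choices",
    RiskTier.MINIMAL: "voluntary code of conduct",
}


def obligation_for(tier: RiskTier) -> str:
    """Return the paraphrased obligation attached to a risk tier."""
    return OBLIGATIONS[tier]


print(obligation_for(RiskTier.HIGH))
```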

Moreover, in May 2023, a ruling4 was issued in the US state of Texas requiring all attorneys to file a certificate attesting either that no part of a filing was drafted by Generative AI or that any language drafted by Generative AI had been verified for accuracy by a human being. The order followed an incident in which a New York attorney had used ChatGPT, which cited non-existent cases. Judge Brantley Starr stated, "[T]hese platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up, even quotes and citations." As ChatGPT and other Generative AI technologies are used more and more, including in the legal profession, it is imperative that professional bodies and other regulatory bodies draw up appropriate legislation and policies to govern the usage of these technologies.

UNESCO

On 23 November 2021, UNESCO published a document titled "Recommendation on the Ethics of Artificial Intelligence".5 It emphasises the importance of governments adopting a regulatory framework that clearly sets out a procedure, particularly for public authorities, to carry out ethical impact assessments on AI systems in order to predict consequences, address societal challenges and facilitate citizen participation. In explaining the assessment further, UNESCO's recommendations also state that it should have appropriate oversight mechanisms, including auditability, traceability and explainability, which enable the assessment of algorithms, data and design processes, as well as an external review of AI systems. The 10 principles highlighted in the document are:

Proportionality and Do Not Harm

Safety and Security

Fairness and Non-Discrimination

Sustainability

Right to Privacy and Data Protection

Human Oversight and Determination

Transparency and Explainability

Responsibility and Accountability

Awareness and Literacy

Multi-Stakeholder and Adaptive Governance and Collaboration.

Conclusion

The level of trust citizens have in AI systems will be a factor in determining how widely such systems are used in the future. As long as there is transparency in the models used in AI systems, one can hope to achieve a degree of respect, protection and promotion of human rights, fundamental freedoms and ethical principles.6 UNESCO Director-General Audrey Azoulay stated, "Artificial Intelligence can be a great opportunity to accelerate the achievement of sustainable development goals. But any technological revolution leads to new imbalances that we must anticipate."

Multiple stakeholders in every state need to come together in order to advise on and enact the relevant laws. Using AI technology without the needed laws and policies to understand and monitor it can be risky. On the other hand, not using available AI systems for the tasks at hand would be a waste. In conclusion, in the words of Stephen Hawking7: "Our future is a race between the growing power of our technology and the wisdom with which we use it. Let's make sure wisdom wins."

Footnotes:

1. Pp. 11-12, "Will Artificial Intelligence Outsmart Us?" by Stephen Hawking; essay from Brief Answers to the Big Questions, John Murray (2018).

2. Ibid.

3. https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

4. https://www.theregister.com/2023/05/31/texas_ai_law_court/

5. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

6. Ibid., p. 22.

7. "Will Artificial Intelligence Outsmart Us?" by Stephen Hawking; essay from Brief Answers to the Big Questions, John Murray (2018).

(The writer is an Attorney-at-Law, LL.B (Hons.) (Warwick), LL.M (Lon.), Barrister (Lincoln's Inn), UK. She obtained a Certificate in AI Policy from the Center for AI and Digital Policy (CAIDP) in Washington, USA in 2022. She was also a speaker at the World Litigation Forum Law Conference in Singapore (May 2023) on the topic of Lawyers Using AI, Legal Technology and Big Data, and was a participant at the IGF Conference 2023 in Kyoto, Japan.)


East Africa lawyers wary of artificial intelligence rise – The Citizen

Arusha. It is an advanced technology which is not only unavoidable but has generally simplified work.

It has made things much easier by shortening time for research and reducing the needed manpower.

Yet artificial intelligence (AI) is still at a crossroads; it could lead to massive job losses, with lawyers among those most worried.

"It is emerging as a serious threat to the legal profession," said Mr David Sigano, CEO of the East African Lawyers Society (EALS).

The technology will be among the key issues to be discussed during the society's annual conference kicking off in Bujumbura today.

He said the time has come for lawyers to position themselves in relation to the emerging technology and its risks to the legal profession.

"We need to be ready to compete with the robots and to operate with AI," he told The Citizen before departing for Burundi.

Mr Sigano acknowledged the benefits of AI, saying that, like other modern technologies, it can improve efficiency.

AI is intelligence inferred, perceived or synthesised and demonstrated by machines, as opposed to intelligence displayed by humans.

AI applications include advanced web search, recommendation systems used by YouTube, Amazon and Netflix, self-driving cars, creative tools and automated decisions, among others.

However, the EALS boss expressed fears that lawyers and their assistants could lose jobs to robots.

"How do you prevent massive job losses? How do you handle ethics?" Mr Sigano queried during an interview.

He cited an AI-powered Super Lawyer, a robot recently designed and developed by a Kenyan IT guru.

The tech solution, known as Wakili (Kiswahili for lawyer), is now wreaking havoc in that country's legal sector, replacing humans in determining cases.

"All you need to do is to access it on your mobile or computer browser; type in your question either in Swahili, English, Spanish, French or Italian and you have the answers coming to you," Mr Sigano said.

Wakili is a Kenyan version of the well-known ChatGPT. Although it has been lauded on grounds that it will make the legal field grow, there are some reservations.

Mr Sigano said although the technology has its advantages, AI could either lead to job losses or be easily misused.

"We can leverage the benefits of AI because of speed, accuracy and affordability. We can utilise it, but we have to be wary of it," he pointed out.

A prominent advocate in Arusha, Mr Frederick Musiba, said AI was no panacea for work efficiency, including for lawyers.

It can not only lead to job losses for lawyers but also increase the cost of legal practice, since it is accessed through the Internet.

"Lawyers will lose income as some litigants will switch to AI. Advocates will lose clients," Mr Musiba told The Citizen when contacted for comment.

However, the managing partner and advocate with Fremax Attorneys said AI was yet to be fully embraced in Tanzania, unlike in other countries.

Nevertheless, Mr Musiba said the technology has its advantages and disadvantages, cautioning people not to rush to the robots.

However, Mr Erik Kimaro, an advocate with the Keystone Legal firm, also in Arusha, said AI was an emerging technological advancement that is unavoidable.

"Whether we like it or not, it is here with its advantages and disadvantages. But it has made things much easier," he explained.

"I can't say we have to avoid it, but we have to be cautious," he added, noting that besides leading to unemployment, it reduces human beings' capacity for critical thinking.

Mr Aafez Jivraj, an Arusha resident and a player in the tourism sector, said it would take time before Tanzania fully embraced AI technology, but he was worried about job losses.

"It is obvious that it can remove people from jobs. One robot can work for 20 people. How many members of their families will be at risk?" he queried.

AI has been a matter of debate across the world in recent years, with the risk of job losses affecting nearly all professions, not only the legal one.

According to Deloitte, over 100,000 jobs will be automated in the legal sector in the UK alone by 2025, and companies that fail to adopt AI are fated to be left behind.

On his part, an education expert in Arusha concurred, saying that modern technologies such as AI can lead to job losses.

The situation may worsen within the next few years or decades as some of the jobs will no longer need physical labour.

"AI has some benefits like other technologies but it is threatening jobs," said Mr Yasir Patel, headmaster of St Constantine International School.

He added that the world was changing so fast that many of the jobs that were readily available until recently have been taken over by computers.

"Computer scientists did not exist in the past. Our young generation should be reminded. They think the job market is still intact," he further pointed out.


Pharmacogenomic Testing in Major Depression: Benefits, Cost … – HealthDay

WEDNESDAY, Nov. 22, 2023 (HealthDay News) -- For patients with major depressive disorder, pharmacogenomics testing to guide antidepressant use yields population health gains and reduces health system costs, according to a study published online Nov. 14 in CMAJ, the journal of the Canadian Medical Association.

Shahzad Ghanbarian, Ph.D., from the University of British Columbia in Vancouver, Canada, and colleagues developed a discrete-time microsimulation model of the care pathway for major depressive disorder in British Columbia to examine the effectiveness and cost-effectiveness of pharmacogenomic testing from the public payer's perspective. Incremental costs, life-years, and quality-adjusted life-years (QALYs) were estimated for a representative cohort of patients.

The researchers found that pharmacogenomic testing was predicted to save the British Columbia health system $956 million over 20 years ($4,926 per patient) and bring health gains of 0.064 and 0.381 life-years and QALYs per patient, respectively, if implemented for adult patients with moderate-to-severe major depressive disorder. The savings were mostly as a result of slowing or preventing the transition to refractory depression. Over 20 years, pharmacogenomic-guided care was associated with 37 percent fewer patients with refractory depression. The costs of pharmacogenomics testing would be offset within about two years of implementation as estimated in sensitivity analyses.
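To make the modelling approach concrete, here is a minimal sketch of a discrete-time microsimulation in the same spirit. All state names, transition probabilities, costs and utility weights below are invented placeholders for illustration; they are not the study's published inputs.

```python
# Minimal sketch of a discrete-time microsimulation: patients move between health
# states each year, accumulating (hypothetical) costs and QALYs; pharmacogenomic-guided
# care is assumed to lower the chance of becoming refractory. All numbers are made up.
import random

UTILITY = {"depressed": 0.60, "remission": 0.85, "refractory": 0.40}      # QALY weights (assumed)
ANNUAL_COST = {"depressed": 3000, "remission": 500, "refractory": 8000}   # yearly costs (assumed)


def transition(state: str, guided: bool) -> str:
    """One annual cycle; guided care halves the assumed risk of refractory depression."""
    r = random.random()
    if state == "depressed":
        p_refractory = 0.05 if guided else 0.10
        if r < 0.40:
            return "remission"
        if r < 0.40 + p_refractory:
            return "refractory"
        return "depressed"
    if state == "remission":
        return "depressed" if r < 0.15 else "remission"
    return "refractory"  # refractory is absorbing in this toy model


def simulate(guided: bool, n_patients: int = 10_000, years: int = 20, seed: int = 1):
    random.seed(seed)
    total_cost = total_qaly = 0.0
    for _ in range(n_patients):
        state = "depressed"
        for _ in range(years):
            state = transition(state, guided)
            total_cost += ANNUAL_COST[state]
            total_qaly += UTILITY[state]
    return total_cost / n_patients, total_qaly / n_patients


cost_usual, qaly_usual = simulate(guided=False)
cost_guided, qaly_guided = simulate(guided=True)
print(f"Cost saving per patient:  {cost_usual - cost_guided:,.0f}")
print(f"QALYs gained per patient: {qaly_guided - qaly_usual:.3f}")
```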

"Interventions that might improve remission rates and reduce the number of cases of refractory depression, in particular, are needed to improve the quality of life for patients, and reduce the economic burden of major depressive disorder on already strained health care systems," the authors write.




Perceptions of Nigerian medical students regarding their … – BMC Medical Education

A total of 300 medicine and surgery clinical students completed the survey (170 from the University of Lagos and 130 from Lagos State University), resulting in a 40% response rate (calculated as the number of completed questionnaires divided by the potential number of eligible participants based on the MDCN quota for both colleges). The sociodemographic characteristics of the respondents by knowledge, ability and summary scores are shown in Table 1. Respondents were 19 to 39 years old, with a median age of 23 (IQR: 22-24), and slightly more than half were female (52.3%). At least a quarter of the respondents were from each level, with the majority from the sixth (38.3%) and fifth years (36.3%). Most respondents (63.3%) indicated an interest in a career involving research.

Most respondents (92.0%, n=276) indicated they had heard of at least one of the precision medicine terminologies. The most commonly indicated terminologies were Pharmacogenomics (71.0%, n=213) and Genomic Medicine (47.7%, n=143), while the least indicated were Genome-guided prescribing (19.7%, n=59) and Next Generation Sequencing (18.0%, n=54). Among those who indicated awareness, the most commonly cited sources of knowledge were Lectures (49.6%, n=137) and Media (34.4%, n=95), and less commonly Healthcare providers (10.1%, n=28) and Peers (5.1%, n=14).

Knowledge scores of the respondents ranged from 4 to 20, with a median knowledge score of 12 (IQR: 8-14.5). Respondents were more comfortable about their knowledge of genetic variations predisposing to common diseases (43.3%, n=130) and pharmacogenomics (38.0%, n=114). They were least comfortable about their understanding of basic genomic testing concepts and terminology (29.7%, n=89) and next-generation sequencing (23.3%, n=70). The distribution of responses to knowledge questions is shown in Fig. 1.

Distribution of knowledge and ability responses of participants

On univariate analyses, respondents' medical school year was significantly associated with their knowledge score (F [2,297]=3.23, p=0.04). Compared to those in their 4th year, students in their 6th year had a 1.54-point lower mean knowledge score (95% CI: -2.83, -0.24; p=0.02), while those in their 5th year had a 0.39-point lower mean knowledge score, but this was not statistically significant (95% CI: -1.69, 0.92; p=0.56). Students who indicated an interest in a career involving research had a borderline significant 1.03-point higher mean knowledge score compared to those who did not (95% CI: -0.03, 2.08; p=0.06). Age, gender and ethnicity of participants did not show any significant associations with knowledge score.

After sequentially adjusting for age, gender, and interest in a research career, participants' medical school year was significantly associated with knowledge score (F [2,294]=4.78, p=0.009). Students in their 6th year had a statistically significant 2.16-point lower mean knowledge score than those in their 4th year (95% CI: -3.60, -0.72; p=0.003). After adjusting for age, gender, and interest in a career involving research, each unit increase in medical school year was associated with a statistically significant 1.10-point lower mean knowledge score (F [1,295]=8.97, p-trend = 0.003) [Table 2].
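For readers unfamiliar with this kind of adjusted analysis, the sketch below shows how a regression of knowledge score on medical school year and covariates might be run. The data are randomly generated for illustration only; they are not the study's dataset, and the variable names are assumptions.

```python
# Illustrative regression on fabricated data, mirroring the kind of adjusted linear
# model reported above (knowledge score ~ medical school year + covariates).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "year": rng.choice([4, 5, 6], size=n),           # medical school year
    "age": rng.integers(19, 40, size=n),
    "female": rng.integers(0, 2, size=n),
    "research_interest": rng.integers(0, 2, size=n),
})
# Fabricated outcome with a downward trend across years (same direction as the study).
df["knowledge"] = (18 - 1.1 * df["year"] + 0.5 * df["research_interest"]
                   + rng.normal(0, 3, size=n)).clip(4, 20)

# Categorical year reproduces the 4th-year reference group used in the paper.
adjusted = smf.ols("knowledge ~ C(year) + age + female + research_interest", data=df).fit()
print(adjusted.summary().tables[1])

# Treating year as a linear term gives a trend-style estimate (points per year).
trend = smf.ols("knowledge ~ year + age + female + research_interest", data=df).fit()
print(trend.params["year"])
```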

The ability scores of the respondents ranged from 4 to 20, with a median score of 11 (IQR: 7-15). Respondents were more comfortable about their ability to recommend genetic testing options to patients (39.0%, n=117) and, to a lesser extent, to understand genomic test results (30.3%, n=91), and were least comfortable in their ability to make treatment recommendations based on genomic test results (29.3%, n=88) and explain genomic test results to patients (29.3%, n=88). The distribution of responses to ability questions is shown in Fig. 1.

On univariate analyses, respondents' medical school year was significantly associated with ability scores (F [2,297]=6.26, p=0.002). Compared to students in their 4th year, students in their 5th year had a statistically significant 1.47-point lower mean ability score (95% CI: -2.84, -0.09; p=0.04), while students in their 6th year had a statistically significant 2.44-point lower mean ability score (95% CI: -3.81, -1.08; p<0.001). In addition, each unit increase in knowledge score was significantly associated with a 0.77-point increase in mean ability score (95% CI: 0.69, 0.86; p<0.001). Age, gender, ethnicity of participants and interest in a career involving research did not show any significant associations.

After multivariate adjustments for age, gender, medical school year, interest in a career involving research and knowledge score, participants' knowledge score (β: 0.76; 95% CI: 0.67, 0.84; p<0.001) and medical school year (F [2,293]=4.67, p=0.01) were independent predictors of ability score. Compared to students in their 4th year, students in their 5th year had a 1.24-point lower mean ability score (95% CI: -2.21, -0.27; p=0.01), and those in their 6th year had a 1.58-point lower mean ability score (95% CI: -2.66, -0.50; p=0.004). After adjusting for age, gender, interest in a career involving research and knowledge score, each unit increase in medical school year was associated with a significant 0.78-point lower mean ability score (F [1,294]=8.06, p-trend = 0.005) [Table 3].

The attitude scores of participants ranged from 14 to 40, with a median score of 28 (IQR: 24-33). The median score on the openness items was 15 (IQR: 12-16). Respondents were more willing to use a patient's genetic information to guide decisions in clinical practice (62.0%, n=186), use new types of therapies to help patients (60.0%, n=180), and use genome-guided tools developed by researchers (56.0%, n=168), but were less willing to use genome-guided prescribing in their career when senior physicians were not (41.0%, n=123). The median score on the divergence items was 15 (IQR: 12-17). Respondents agreed that research-based genome-guided interventions were clinically useful (79.0%, n=237), were willing to prescribe different medications or doses of drugs (61.0%, n=183), to a lesser extent disagreed that clinicians know how to treat patients based on their genetic information better than researchers (52.0%, n=156), and to a much lesser extent disagreed that clinical experience is more important than using a patient's genetic information to make decisions (36.3%, n=109). The distribution of responses to attitude questions is shown in Fig. 2.

Distribution of participants' responses to attitude questions

Respondents' responses to questions assessing their attitudes towards the adoption of genome-guided prescribing and precision medicine. Section A shows the distribution of responses to openness questions, while Section B shows the distribution of responses to divergence questions.

On univariate analyses, each unit increase in knowledge score of the participants was significantly associated with a 0.14-point decrease in mean attitude score (95% CI: -0.26, -0.02; p=0.03). Age, gender, ethnicity, medical school year and interest in a career involving research were not significantly associated with attitude scores. Although the association with knowledge score persisted after adjusting for age and gender, adjusting for medical school year and interest in a career involving research resulted in a trend towards a null association. After maximal adjustment for age, gender, knowledge score, and interest in a research career, students in their 6th year had a significant 1.65-point higher mean attitude score than those in their 4th year (95% CI: 0.75, 3.23; p=0.04). However, medical school year overall was not significantly associated with attitude scores (F [2,293]=2.50, p=0.08). Nevertheless, after maximal adjustment, each unit increase in medical school year was significantly associated with a 0.81-point increase in mean attitude scores (95% CI: 0.02, 1.60; p-trend = 0.04) [Table 4]. Likelihood ratio chi-square tests did not reveal any evidence of statistical interaction between knowledge scores and medical school year (χ² = 2.66, p=0.26).

The distribution of ethical concerns expressed by respondents is shown in Fig. 3. More than a quarter of the respondents were worried that genomic information obtained would be misused by government and corporate bodies (35.7%, n=107) and that its application would widen the gap between the rich and the poor (34.0%, n=102). A similar proportion were worried that results from tests could affect employability if serious genetic defects were made known to employers (33.0%, n=99) and that they would lead to insurance discrimination (30.0%, n=90). However, less than a quarter of the respondents felt that precision medicine approaches would lead to ethnic/racial discrimination (12.3%, n=37), and only 8.7% (n=26) felt that precision medicine approaches would violate privacy and confidentiality.

Respondents' perceptions of ethical concerns and education about Precision Medicine

Most respondents (65.0%, n=195) thought it was important to learn about precision medicine. Only 11.3% (n=34) of the respondents felt that their education had adequately prepared them to practice precision medicine. Only 10.7% (n=32) thought they knew who to ask about genomic testing. Finally, only 10.3% (n=31) of the respondents felt their professors had encouraged the use of precision medicine. The distribution of responses to education items is shown in Fig. 3.


Publication Bias Inflates Efficacy of Alprazolam XR: Study Reveals … – HealthDay

WEDNESDAY, Nov. 22, 2023 (HealthDay News) -- Publication bias inflates the apparent efficacy of alprazolam extended-release, according to a study published online Oct. 19 in Psychological Medicine.

Rosa Y. Ahn-Horst, M.D., M.P.H., from Massachusetts General Hospital in Boston, and Erick H. Turner, M.D., from the Veterans Affairs Portland Health Care System in Oregon, examined publication bias with alprazolam by comparing its efficacy for panic disorder using trial results from the published literature and the U.S. Food and Drug Administration. Data were included from all phase 2/3 efficacy trials of alprazolam extended-release (Xanax XR) for the treatment of panic disorder.

The researchers identified five trials in the FDA review, one of which had positive results (20 percent). Of the four trials without positive results, two were published conveying a positive outcome and two were not published. Therefore, according to the three published trials, 100 percent were positive. Using FDA data, alprazolam's effect size was 0.33 versus 0.47 using published data, representing a 42 percent increase.
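The 42 percent figure follows directly from the two reported effect sizes, as this short calculation shows.

```python
# Relative inflation of the apparent effect size when only published trials are counted.
fda_effect = 0.33        # effect size from the full FDA trial data
published_effect = 0.47  # effect size from the published literature only
inflation = (published_effect - fda_effect) / fda_effect
print(f"{inflation:.0%}")  # prints 42%
```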

"Clinicians are well aware of these safety issues, but there's been essentially no questioning of their effectiveness," Turner said in a statement. "Our study throws some cold water on the efficacy of this drug. It shows it may be less effective than people have assumed."



OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say – Reuters

Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though it performs math only at the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol in Washington, U.S., September 13, 2023. REUTERS/Julia Nikhinson/File Photo

Researchers consider math to be a frontier of generative AI development. Currently, generative AI is good at writing and language translation by statistically predicting the next word, and answers to the same question can vary widely. But conquering the ability to do math where there is only one right answer implies AI would have greater reasoning capabilities resembling human intelligence. This could be applied to novel scientific research, for instance, AI researchers believe.

Unlike a calculator that can solve a limited number of operations, AGI can generalize, learn and comprehend.

In their letter to the board, researchers flagged AI's prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest.

Researchers have also flagged work by an "AI scientist" team, the existence of which multiple sources confirmed. The group, formed by combining earlier "Code Gen" and "Math Gen" teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

Altman led efforts to make ChatGPT one of the fastest growing software applications in history and drew investment - and computing resources - necessary from Microsoft to get closer to AGI.

In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Anna Tong and Jeffrey Dastin in San Francisco and Krystal Hu in New York; Editing by Kenneth Li and Lisa Shumaker


Anna Tong is a correspondent for Reuters based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Tong previously worked at technology startups as a product manager and at Google, where she worked in user insights and helped run a call center. Tong graduated from Harvard University. Contact: 415-237-3211

Jeffrey Dastin is a correspondent for Reuters based in San Francisco, where he reports on the technology industry and artificial intelligence. He joined Reuters in 2014, originally writing about airlines and travel from the New York bureau. Dastin graduated from Yale University with a degree in history. He was part of a team that examined lobbying by Amazon.com around the world, for which he won a SOPA Award in 2022.

Krystal reports on venture capital and startups for Reuters. She covers Silicon Valley and beyond through the lens of money and characters, with a focus on growth-stage startups, tech investments and AI. She has previously covered M&A for Reuters, breaking stories on Trump's SPAC and Elon Musk's Twitter financing. Previously, she reported on Amazon for Yahoo Finance, and her investigation of the company's retail practice was cited by lawmakers in Congress. Krystal started a career in journalism by writing about tech and politics in China. She has a master's degree from New York University, and enjoys a scoop of Matcha ice cream as much as getting a scoop at work.


The OpenAI Drama Has a Clear Winner: The Capitalists – The New York Times

What happened at OpenAI over the past five days could be described in many ways: A juicy boardroom drama, a tug of war over one of America's biggest start-ups, a clash between those who want A.I. to progress faster and those who want to slow it down.

But it was, most importantly, a fight between two dueling visions of artificial intelligence.

In one vision, A.I. is a transformative new tool, the latest in a line of world-changing innovations that includes the steam engine, electricity and the personal computer, and that, if put to the right uses, could usher in a new era of prosperity and make gobs of money for the businesses that harness its potential.

In another vision, A.I. is something closer to an alien life form: a leviathan being summoned from the mathematical depths of neural networks that must be restrained and deployed with extreme caution in order to prevent it from taking over and killing us all.

With the return of Sam Altman on Tuesday to OpenAI, the company whose board fired him as chief executive last Friday, the battle between these two views appears to be over.

Team Capitalism won. Team Leviathan lost.

OpenAI's new board will consist of three people, at least initially: Adam D'Angelo, the chief executive of Quora (and the only holdover from the old board); Bret Taylor, a former executive at Facebook and Salesforce; and Lawrence H. Summers, the former Treasury secretary. The board is expected to grow from there.

OpenAI's largest investor, Microsoft, is also expected to have a larger voice in OpenAI's governance going forward. That may include a board seat.

Gone from the board are three of the members who pushed for Mr. Altman's ouster: Ilya Sutskever, OpenAI's chief scientist (who has since recanted his decision); Helen Toner, a director of strategy at Georgetown University's Center for Security and Emerging Technology; and Tasha McCauley, an entrepreneur and researcher at the RAND Corporation.

Mr. Sutskever, Ms. Toner and Ms. McCauley are representative of the kinds of people who were heavily involved in thinking about A.I. a decade ago: an eclectic mix of academics, Silicon Valley futurists and computer scientists. They viewed the technology with a mix of fear and awe, and worried about theoretical future events like the singularity, a point at which A.I. would outstrip our ability to contain it. Many were affiliated with philosophical groups like the Effective Altruists, a movement that uses data and rationality to make moral decisions, and were persuaded to work in A.I. out of a desire to minimize the technology's destructive effects.

This was the vibe around A.I. in 2015, when OpenAI was formed as a nonprofit, and it helps explain why the organization kept its convoluted governance structure, which gave the nonprofit board the ability to control the company's operations and replace its leadership, even after it started a for-profit arm in 2019. At the time, protecting A.I. from the forces of capitalism was viewed by many in the industry as a top priority, one that needed to be enshrined in corporate bylaws and charter documents.

But a lot has changed since 2019. Powerful A.I. is no longer just a thought experiment; it exists inside real products, like ChatGPT, that are used by millions of people every day. The world's biggest tech companies are racing to build even more powerful systems. And billions of dollars are being spent to build and deploy A.I. inside businesses, with the hope of reducing labor costs and increasing productivity.

The new board members are the kinds of business leaders you'd expect to oversee such a project. Mr. Taylor, the new board chair, is a seasoned Silicon Valley deal maker who led the sale of Twitter to Elon Musk last year, when he was the chair of Twitter's board. And Mr. Summers is the Ur-capitalist: a prominent economist who has said that he believes technological change is net good for society.

There may still be voices of caution on the reconstituted OpenAI board, or figures from the A.I. safety movement. But they won't have veto power, or the ability to effectively shut down the company in an instant, the way the old board did. And their preferences will be balanced alongside others, such as those of the company's executives and investors.

That's a good thing if you're Microsoft, or any of the thousands of other businesses that rely on OpenAI's technology. More traditional governance means less risk of a sudden explosion, or a change that would force you to switch A.I. providers in a hurry.

And perhaps what happened at OpenAI, a triumph of corporate interests over worries about the future, was inevitable, given A.I.'s increasing importance. A technology potentially capable of ushering in a Fourth Industrial Revolution was unlikely to be governed over the long term by those who wanted to slow it down, not when so much money was at stake.

There are still a few traces of the old attitudes in the A.I. industry. Anthropic, a rival company started by a group of former OpenAI employees, has set itself up as a public benefit corporation, a legal structure that is meant to insulate it from market pressures. And an active open-source A.I. movement has advocated that A.I. remain free of corporate control.

But these are best viewed as the last vestiges of the old era of A.I., in which the people building A.I. regarded the technology with both wonder and terror, and sought to restrain its power through organizational governance.

Now, the utopians are in the driver's seat. Full speed ahead.


OpenAI’s Board Set Back the Promise of Artificial Intelligence – The Information

I was the first venture investor in OpenAI. The weekend drama illustrated my contention that the wrong boards can damage companies. Fancy titles like Director of Strategy at Georgetown's Center for Security and Emerging Technology can lead to a false sense of understanding of the complex process of entrepreneurial innovation. OpenAI's board members' religion of effective altruism and its misapplication could have set back the world's path to the tremendous benefits of artificial intelligence. Imagine free doctors for everyone and near-free tutors for every child on the planet. That's what's at stake with the promise of AI.

The best companies are those whose visions are led and executed by their founding entrepreneurs, the people who put everything on the line to challenge the status quo, founders like Sam Altman, who face risk head-on and who are focused so totally on making the world a better place. Things can go wrong, and abuse happens, but the benefits of good founders far outweigh the risks of bad ones.


What the OpenAI drama means for AI progress and safety – Nature.com

OpenAI fired its charismatic chief executive, Sam Altman, on 17 November but has now reinstated him. Credit: Justin Sullivan/Getty

OpenAI, the company behind the blockbuster artificial intelligence (AI) bot ChatGPT, has been consumed by frenzied changes for almost a week. On 17 November, the company fired its charismatic chief executive, Sam Altman. Five days, and much drama, later, OpenAI announced that Altman would return with an overhaul of the company's board.

The debacle has thrown the spotlight on an ongoing debate about how commercial competition is shaping the development of AI systems, and how quickly AI can be deployed ethically and safely.

"The push to retain dominance is leading to toxic competition. It's a race to the bottom," says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.

Altman, a successful investor and entrepreneur, was a co-founder of OpenAI and its public face. He had been chief executive since 2019, and oversaw an investment of some US$13 billion from Microsoft. After Altman's initial ousting, Microsoft, which uses OpenAI technology to power its search engine Bing, offered Altman a job leading a new advanced AI research team. Altman's return to OpenAI came after hundreds of company employees signed a letter threatening to follow Altman to Microsoft unless he was reinstated.

The OpenAI board that ousted Altman last week did not give detailed reasons for the decision, saying at first that he was fired because he was "not consistently candid in his communications with the board" and later adding that the decision had nothing to do with "malfeasance or anything related to our financial, business, safety or security/privacy practice".

But some speculate that the firing might have its origins in a reported schism at OpenAI between those focused on commercial growth and those uncomfortable with the strain of rapid development and its possible impacts on the company's mission to ensure that artificial general intelligence benefits all of humanity.

OpenAI, which is based in San Francisco, California, was founded in 2015 as a non-profit organization. In 2019, it shifted to an unusual capped-profit model, with a board explicitly not accountable to shareholders or investors, including Microsoft. "In the background of Altman's firing is very clearly a conflict between the non-profit and the capped-profit; a conflict of culture and aims," says Jathan Sadowski, a social scientist of technology at Monash University in Melbourne, Australia.

Ilya Sutskever, OpenAI's chief scientist and a member of the board that ousted Altman, this July shifted his focus to superalignment, a four-year project attempting to ensure that future superintelligences work for the good of humanity.

It's unclear whether Altman and Sutskever are at odds about the speed of development: after the board fired Altman, Sutskever expressed regret about the impacts of his actions and was among the employees who signed the letter threatening to leave unless Altman returned.

With Altman back, OpenAI has reshuffled its board: Sutskever and Helen Toner, a researcher in AI governance and safety at Georgetown University's Center for Security and Emerging Technology in Washington DC, are no longer on the board. The new board members include Bret Taylor, who is on the board of e-commerce platform Shopify and used to lead the software company Salesforce.

It seems likely that OpenAI will shift further from its non-profit origins, says Sadowski, restructuring as a classic profit-driven Silicon Valley tech company.

OpenAI released ChatGPT almost a year ago, catapulting the company to worldwide fame. The bot was based on the company's GPT-3.5 large language model (LLM), which uses the statistical correlations between words in billions of training sentences to generate fluent responses to prompts. The breadth of capabilities that have emerged from this technique (including what some see as logical reasoning) has astounded and worried scientists and the general public alike.

OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be detrimental for society.

The competitive landscape for conversational AI is heating up. Google has hinted that more AI products lie ahead. Amazon has its own AI offering, Titan. Smaller companies that aim to compete with ChatGPT include the German effort Aleph Alpha and US-based Anthropic, founded in 2021 by former OpenAI employees, which released the chatbot Claude 2.1 on 21 November. Stability AI and Cohere are other often-cited rivals.

West notes that these start-ups rely heavily on the vast and expensive computing resources provided by just three companies (Google, Microsoft and Amazon), potentially creating a race for dominance between these controlling giants.

Computer scientist Geoffrey Hinton at the University of Toronto in Canada, a pioneer of deep learning, is deeply concerned about the speed of AI development. "If you specify a competition to make a car go as fast as possible, the first thing you do is remove the brakes," he says. (Hinton declined to comment to Nature on the events at OpenAI since 17 November.)

OpenAI was founded with the specific goal of developing an artificial general intelligence (AGI), a deep-learning system that's trained not just to be good at one specific thing, but to be as generally smart as a person. It remains unclear whether AGI is even possible. "The jury is very much out on that front," says West. But some are starting to bet on it. Hinton says he used to think AGI would happen on the timescale of 30, 50 or maybe 100 years. "Right now, I think we'll probably get it in 5-20 years," he says.

The imminent dangers of AI are related to it being used as a tool by human bad actors, people who use it to, for example, create misinformation, commit scams or, potentially, invent new bioterrorism weapons1. And because today's AI systems work by finding patterns in existing data, they also tend to reinforce historical biases and social injustices, says West.

In the long term, Hinton and others worry about an AI system itself becoming a bad actor, developing sufficient agency to guide world events in a negative direction. This could arise even if an AGI was designed in line with OpenAI's superalignment mission to promote humanity's best interests, says Hinton. It might decide, for example, that the weight of human suffering is so vast that it would be better for humanity to die than to face further misery. Such statements sound like science fiction, but Hinton argues that the existential threat of an AI that can't be turned off and veers onto a destructive path is very real.

The AI Safety Summit hosted by the United Kingdom in November was designed to get ahead of such concerns. So far, some two dozen nations have agreed to work together on the problem, although what exactly they will do remains unclear.

West emphasizes that it's important to focus on already-present threats from AI ahead of far-flung concerns, and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power, something she thinks needs more scrutiny from anti-trust regulators. "Regulators for a very long time have taken a very light touch with this market," says West. "We need to start by enforcing the laws we have right now."


OpenAI staff reportedly warned the board about an AI breakthrough that could threaten humanity before Sam Altman … – Fortune

A potential breakthrough in the field of artificial intelligence may have contributed to Sam Altman's recent ouster as CEO of OpenAI.

According to a Reuters report citing two sources acquainted with the matter, several staff researchers wrote a letter to the organization's board warning of a discovery that could potentially threaten the human race.

The two anonymous individuals claim this letter, which informed directors that a secret project named Q* resulted in A.I. solving grade-school-level mathematics, reignited tensions over whether Altman was proceeding too fast in a bid to commercialize the technology.

Just a day before he was sacked, Altman may have referenced Q* (pronounced Q-star) at a summit of world leaders in San Francisco when he spoke of what he believed was a recent breakthrough.

"Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I've gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward," said Altman at a discussion during the Asia-Pacific Economic Cooperation summit.

He has since been reinstated as CEO in a spectacular reversal of events, after staff threatened to mutiny against the board.

According to one of the sources, after being contacted by Reuters, OpenAI's chief technology officer Mira Murati acknowledged in an internal memo to employees the existence of the Q* project, as well as a letter that was sent to the board.

OpenAI could not be reached immediately by Fortune for a statement, but it declined to provide a comment to Reuters.

So why is all of this special, let alone alarming?

Machines have been solving mathematical problems for decades, going back to the pocket calculator.

The difference is that conventional devices were designed to arrive at a single answer using the series of deterministic commands all personal computers employ, where values can only be true or false, 0 or 1.

Under this rigid binary system, there is no capability to diverge from their programming in order to think creatively.

By comparison, neural nets are not hard-coded to execute certain commands in a specific way. Instead, they are trained, much as a human brain is, with massive sets of interrelated data, giving them the ability to identify patterns and infer outcomes.

Think of Google's helpful Autocomplete function, which aims to predict what an internet user is searching for using statistical probability; this is a very rudimentary form of generative AI.
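A toy sketch makes the contrast concrete: a deterministic operation always returns the same answer, whereas a (drastically simplified) next-word predictor samples from a probability table. The tiny bigram table below is hypothetical and bears no resemblance to a real large language model.

```python
# Deterministic computation vs. probabilistic next-word prediction (toy illustration).
import random

def deterministic_add(a: int, b: int) -> int:
    return a + b  # always exactly one right answer

# Hypothetical bigram probabilities: which word tends to follow which.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "law": 0.3, "robot": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def next_word(prev: str) -> str:
    """Sample the next word in proportion to its (made-up) probability."""
    options = BIGRAM_PROBS.get(prev, {"<end>": 1.0})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights, k=1)[0]

print(deterministic_add(2, 2))                # always 4
print([next_word("the") for _ in range(5)])   # varies from run to run
```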

That's why Meredith Whittaker, a leading expert in the field, describes neural nets like ChatGPT as "probabilistic engines designed to spit out what seems plausible".

Should generative A.I. prove able to arrive at the correct solution to mathematical problems on its own, it suggests a capacity for higher reasoning.

This could potentially be the first step towards developing artificial general intelligence, a form of AI that can surpass humans.

The fear is that an AGI needs guardrails since it one day might come to view humanity as a threat to its existence.


First international benchmark of artificial intelligence and machine … – Nuclear Energy Agency

Recent performance breakthroughs in artificial intelligence (AI) and machine learning (ML) have led to unprecedented interest among nuclear engineers. Despite the progress, the lack of dedicated benchmark exercises for the application of AI and ML techniques in nuclear engineering analyses limits their applicability and broader usage. In line with the NEA strategic target to contribute to building a solid scientific and technical basis for the development of future generation nuclear systems and deployment of innovations, the Task Force on Artificial Intelligence and Machine Learning for Scientific Computing in Nuclear Engineering was established within the Expert Group on Reactor Systems Multi-Physics (EGMUP) of the Nuclear Science Committee's Working Party on Scientific Issues and Uncertainty Analysis of Reactor Systems (WPRS). The Task Force will focus on designing benchmark exercises that will target important AI and ML activities, and cover various computational domains of interest, from single physics to multi-scale and multi-physics.

A significant milestone has been reached with the successful launch of a first comprehensive benchmark of AI and ML to predict the Critical Heat Flux (CHF). In a boiling system, CHF corresponds to the limit beyond which wall heat transfer decreases significantly; this is often referred to as critical boiling transition, boiling crisis and, depending on operating conditions, departure from nucleate boiling (DNB) or dryout. In a heat transfer-controlled system, such as a nuclear reactor core, CHF can result in a significant wall temperature increase leading to accelerated wall oxidation, and potentially to fuel rod failure. While constituting an important design limit criterion for the safe operation of reactors, CHF is challenging to predict accurately due to the complexities of the local fluid flow and heat exchange dynamics.

Current CHF models are mainly based on empirical correlations developed and validated for a specific application domain. Through this benchmark, improvements in CHF modelling are sought using AI and ML methods that directly leverage a comprehensive experimental database provided by the US Nuclear Regulatory Commission (NRC), which forms the cornerstone of the exercise. The improved modelling can lead to a better understanding of safety margins and provide new opportunities for design or operational optimisations.
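Purely as an illustration of the data-driven approach, the sketch below fits an off-the-shelf machine-learning regressor to synthetic data. The feature names are typical CHF-correlating variables, but the values and the target function are randomly generated stand-ins, not the NRC experimental database used in the benchmark.

```python
# Illustrative ML surrogate for CHF prediction, trained on synthetic data only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.uniform(0.1, 20.0, n),    # pressure [MPa] (assumed range)
    rng.uniform(50, 8000, n),     # mass flux [kg/m^2/s] (assumed range)
    rng.uniform(0, 400, n),       # inlet subcooling [kJ/kg] (assumed range)
    rng.uniform(2e-3, 2e-2, n),   # hydraulic diameter [m] (assumed range)
])
# Synthetic target: an arbitrary smooth function plus noise, standing in for measured CHF.
y = 0.5 * X[:, 1] ** 0.6 * (1 + 0.05 * X[:, 0]) * np.exp(-20 * X[:, 3]) + rng.normal(0, 5, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print(f"Held-out R^2 on the synthetic data: {model.score(X_test, y_test):.3f}")
```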

The CHF benchmark phase 1 kick-off meeting on 30 October 2023 gathered 78 participants, representing 48 institutions from 16 countries. This robust engagement underscores the profound interest and commitment within the global scientific community toward integrating AI and ML technologies into nuclear engineering. The ultimate goal of the Task Force is to leverage insights from the benchmarks and distill lessons learnt to provide guidelines for future AI and ML applications in scientific computing in nuclear engineering.




US agency streamlines probes related to artificial intelligence – Reuters

AI (Artificial Intelligence) letters and a robot hand miniature in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

WASHINGTON, Nov 21 (Reuters) - Investigations of cases where artificial intelligence (AI) is used to break the law will be streamlined under a new process approved by the U.S. Federal Trade Commission, the agency said on Tuesday.

The move, along with other actions, highlights the FTC's interest in pursuing cases involving AI. Critics of the technology have said that it could be used to turbo-charge fraud.

The agency, which currently has three Democratic commissioners, voted unanimously to make it easier for staff to issue demands for documents as part of AI-related investigations, it said in a statement.

In a hearing in September, Commissioner Rebecca Slaughter, a Democrat who has been nominated to another term, agreed with two Republican nominees to the commission that it should focus on issues like the use of AI to make phishing emails and robocalls more convincing.

The agency announced a competition last week aimed at identifying the best way to protect consumers against fraud and other harms related to voice cloning.

Reporting by Diane Bartz; Editing by Marguerita Choy



Live chat: A new writing course for the age of artificial intelligence – Yale News

How is academia dealing with the influence of AI on student writing? Just ask ChatGPT, and it'll deliver a list of 10 ways in which the rapidly expanding technology is creating both opportunities and challenges for faculty everywhere.

On the one hand, for example, while there are ethical concerns about AI compromising students academic integrity, there is also growing awareness of the ways in which AI tools might actually support students in their research and writing.

Students in Writing Essays with AI, a new English seminar taught by Yales Ben Glaser, are exploring the many ways in which the expanding number of AI tools are influencing written expression, and how they might help or harm their own development as writers.

We talk about how large language models are already and will continue to be quite transformative, Glaser said, not just of college writing but of communication in general.

An associate professor of English in Yales Faculty of Arts and Sciences, Glaser sat down with Yale News to talk about the need for AI literacy, ChatGPTs love of lists, and how the generative chatbot helped him write the course syllabus.

Ben Glaser: It's more the former. None of the final written work for the class is written with ChatGPT or any other large language model or chatbot, although we talk about using AI research tools like Elicit and other things in the research process. Some of the small assignments directly call for students to engage with ChatGPT, get outputs, and then reflect on it. And in that process, they learn how to correctly cite ChatGPT.

The Poorvu Center for Teaching and Learning has a pretty useful page with AI guidelines. As part of this class, we read that website and talked about whether those guidelines seem to match students' own experience of usage and what their friends are doing.

Glaser: I don't get the sense that they are confused about it in my class because we talk about it all the time. These are students who simultaneously want to understand the technology better, maybe go into that field, and they also want to learn how to write. They don't think they're going to learn how to write by using those AI tools better. But they want to think about it.

That's a very optimistic take, but I think that Yale makes that possible through the resources it has for writing help, and students are often directed to those resources. If you're in a class where the writing has many stages (drafting, revision), it's hard to imagine where ChatGPT is going to give you anything good, partly because you're going to have to revise it so much.

That said, it's a totally different world if you're in high school or a large university without those resources. And then of course there are situations that have always led to plagiarism, where you're strung out at the last minute and you copy something from Google.

Glaser: First of all, it's a really interesting thing to study. That's not what you're asking; you're asking what it can do or where it belongs in a writing process. But when you talk to a chatbot, you get this fuzzy, weird image of culture back. You might get counterpoints to your ideas, and then you need to evaluate whether those counterpoints or supporting evidence for your ideas are actually good ones. There's no understanding behind the model. It's based on statistical probabilities; it's guessing which word comes next. It sometimes does so in a way that speeds things along.

If you say, "give me some points and counterpoints in, say, AI use in second-language learning," it might spit out 10 good things and 10 bad things. It loves to give lists. And there's a kind of literacy to reading those outputs. Students in this class are gaining some of that literacy.

Glaser: I don't love the word brainstorming, but I think there is a moment where you have a blank page, and you think you have a topic, and the process of refining that involves research. ChatGPT's not the most wonderful research tool, but it sure is an easy one.

I asked it to write the syllabus for this course initially. What it did was it helped me locate some researchers that I didn't know, it gave me some ideas for units. And then I had to write the whole thing over again, of course. But that was somewhat helpful.

Glaser: It can be. I think that's a limited and effective use of it in many contexts.

One of my favorite class days was when we went to the library and had a library session. It's an insanely amazing resource at Yale. Students have personal librarians, if they want them. Also, Yale pays for these massive databases that are curating stuff for the students. The students quickly saw that these resources are probably going to make things go smoother long-term if they know how to use them.

So it's not a simple "AI tool bad, Yale resource good." You might start with the quickly accessible AI tool, and then go to a librarian, and say, like, here's a different version of this. And then you're inside the research process.

Glaser: One thing that some writers have done is, if you interact with it long enough, and give it new prompts and develop its outputs, you can get something pretty cool. At that point you've done just as much work, and you've done a different kind of creative or intellectual project. And I'm all for that. If everything's cited, and you develop a creative work through some elaborate back-and-forth or programming effort including these tools, you're just doing something wild and interesting.

Glaser: I'm glad that I could offer a class that students who are coming from computer science and STEM disciplines, but also want to learn how to write, could be excited about. AI-generated language, that's the new medium of language. The Web is full of it. Part of making students critical consumers and readers is learning to think about AI language as not totally separate from human language, but as this medium, this soup if you want, that we're floating around in.

See the article here:

Live chat: A new writing course for the age of artificial intelligence - Yale News

New Tool for Building and Fixing Roads and Bridges: Artificial … – The New York Times

In Pennsylvania, where 13 percent of the bridges have been classified as structurally deficient, engineers are using artificial intelligence to create lighter concrete blocks for new construction. Another project is using A.I. to develop a highway wall that can absorb noise from cars and some of the greenhouse gas emissions that traffic releases as well.

At a time when the federal allocation of billions of dollars toward infrastructure projects would help with only a fraction of the cost needed to repair or replace the nation's aging bridges, tunnels, buildings and roads, some engineers are looking to A.I. to help build more resilient projects for less money.

"These are structures, with the tools that we have, that save materials, save costs, save everything," said Amir Alavi, an engineering professor at the University of Pittsburgh and a member of the consortium developing the two A.I. projects in conjunction with the Pennsylvania Department of Transportation and the Pennsylvania Turnpike Commission.

The potential is enormous. The manufacturing of cement alone makes up at least 8 percent of the world's carbon emissions, and 30 billion tons of concrete are used worldwide each year, so more efficient production of concrete would have immense environmental implications.

And A.I., essentially machines that can synthesize information and find patterns and conclusions much as the human mind can, could have the ability to speed up and improve tasks like engineering challenges to an incalculable degree. It works by analyzing vast amounts of data and offering options that give humans better information, models and alternatives for making decisions.

It has the potential to be both more cost effective (one machine doing the work of dozens of engineers) and more creative in coming up with new approaches to familiar tasks.

But experts caution against embracing the technology too quickly when it is largely unregulated and its payoffs remain largely unproven. In particular, some worry about A.I.'s ability to design infrastructure in a process with several regulators and participants operating over a long period of time. Others worry that A.I.'s ability to draw instantly from the entirety of the internet could lead to flawed data that produces unreliable results.

American infrastructure challenges have become all the more apparent in recent years: Texas' power grid failed during devastating ice storms in 2021 and continues to grapple with the state's needs; communities across the country, from Flint, Mich., to Jackson, Miss., have struggled with failing water supplies; and more than 42,000 bridges are in poor condition nationwide.

"A vast majority of the country's roadways and bridges were built several decades ago, and as a result infrastructure challenges are significant in many dimensions," said Abdollah Shafieezadeh, a professor of civil, environmental and geodetic engineering at Ohio State University.

The collaborations in Pennsylvania reflect A.I.'s potential to address some of these issues.

In the bridge project, engineers are using A.I. technology to develop new shapes for concrete blocks that use 20 percent less material while maintaining durability. The Pennsylvania Department of Transportation will use the blocks to construct a bridge; there are more than 12,000 in the state that need repair, according to the American Road & Transportation Builders Association.

Engineers in Pittsburgh are also working with the Pennsylvania Turnpike Commission to design a more efficient noise-absorbing wall that will also capture some of the nitrous oxide emitted from vehicles. They are planning to build it in an area that is disproportionately affected by highway sound pollution. The designs will save about 30 percent of material costs.

These new projects have not been tested in the field, but they have been successful in the lab environment, Dr. Alavi said.

In addition to A.I.'s speed at developing new designs, one of its largest draws in civil engineering is its potential to prevent and detect damage.

Instead of investing large sums of money in repair projects, engineers and transportation agencies could identify problems early on, experts say, such as a crack forming in a bridge before the structure itself buckled.

"This technology is capable of providing an analysis of what is happening in real time in incidents like the bridge collapse on Interstate 95 in Philadelphia this summer or the fire that shut down a portion of Interstate 10 in Los Angeles this month, and could be developed to deploy automated emergency responses," said Seyede Fatemeh Ghoreishi, an engineering and computer science professor at Northeastern University.

But, as in many fields, there are increasingly more conversations and concerns about the relationship between A.I., human work and physical safety.

Although A.I. has proved helpful in many uses, tech leaders have testified before Congress, pushing for regulations. And last month, President Biden issued an executive order for a range of A.I. standards, including safety, privacy and support for workers.

Experts are also worried about the spread of disinformation from A.I. systems. A.I. operates by integrating already available data, so if that data is incorrect or biased, the A.I. will generate faulty conclusions.

"It really is a great tool, but it really is a tool you should use just for a first draft at this point," said Norma Jean Mattei, a former president of the American Society of Civil Engineers.

Dr. Mattei, who has worked in education and ethics for engineering throughout her career, added: "Once it develops, I'm confident that we'll get to a point where you're less likely to get issues. We're not there yet."

Also worrisome is a lack of standards for A.I. The Occupational Safety and Health Administration, for example, does not have standards for the robotics industry. There is rising concern about car crashes involving autonomous vehicles, but for now, automakers do not have to abide by any federal software safety testing regulations.

Lola Ben-Alon, an assistant professor of architecture technology at Columbia University, also takes a cautionary approach when using A.I. She stressed the need to take the time to understand how it should be employed, but she said that she was not "condemning it" and that it had many great potentials.

Few doubt that in infrastructure projects and elsewhere, A.I. exists as a tool to be used by humans, not as a substitute for them.

"There's still a strong and important place for human existence and experience in the field of engineering," Dr. Ben-Alon said.

The uncertainty around A.I. could cause more difficulties for funding projects like those in Pittsburgh. But a spokesman for the Pennsylvania Department of Transportation said the agency was excited to see how the concrete that Dr. Alavi and his team are designing could expand the field of bridge construction.

Dr. Alavi said his work throughout his career had shown him just how serious the potential risks from A.I. are.

But he is confident about the safety of the designs he and his team are making, and he is excited for the technologys future.

"After 10, 12 years, this is going to change our lives," Dr. Alavi said.

Go here to read the rest:

New Tool for Building and Fixing Roads and Bridges: Artificial ... - The New York Times

Five reasons I would take INT D 161 – Artificial Intelligence Everywhere – University of Alberta

Over the past few years, artificial intelligence (AI) has gone from being something I would see in sci-fi movies and shows (always set in the future) to something that feels very present, both in my life and our society. I've learnt a bit about AI from playing with OpenAI's ChatGPT, checking out Midjourney and reading a few news articles in the media, but at this point, my knowledge of what AI is, how it actually works and where it can be applied feels pretty superficial.

When I learned about the Artificial Intelligence Everywhere course taught by computing science professor Dr. Adam White, I was really excited to check it out. It's really easy to fit into a lot of degree pathways: as an INT D (interdisciplinary) course it's open to almost every undergraduate student, and best of all it's offered both on campus (in person) and asynchronously (online) in Winter 2024, so people can choose what works best for them.

Here are my five big reasons why I'm considering registering for this course:

Anyone who's chatted with ChatGPT or asked DALL-E to make an image is often surprised by just how natural everything feels. Here are a few examples:

While I was used to the magic of computers, processing datasets often required some fiddling with Python or Excel macros, and this isn't so straightforward for everyone. Generative AI is massively powerful when it's implemented specifically to deal with large datasets, but just pasting a big blob of numbers into ChatGPT can get you some surprisingly useful insights (note: don't try this for anything that actually matters).

On many ualberta.ca web pages, I'm running into Vera, the generative-AI-powered chatbot assistant who often has the answer I'm looking for. I can text Vera like I text a friend, which feels a lot different than playing with search terms in Google.

And when I need a witty response to the group chat? I want to act like I'm coming up with all of my comedic bits on my own, but let's be honest, there might've been some AI help.

There are a lot of buzzwords being thrown around (dataset, library, iterative processing, neural networks, etc.), and I don't really understand what all of these mean or how they fit together. While I could spend some time in a Wikipedia rabbit hole trying to figure out what's going on, the chance to learn from a computing science professor with a strong background in the area sounds a lot more enticing. And these credits apply to a degree? Sign me up!

There's a lot of talk in the news about what AI means for our society - will it affect jobs? Will it affect learning? Will it go rogue? While I don't think this course will have ALL the answers to ALL of these topics, I'd like to be able to form some of my own opinions about AI, and I think a good foundational understanding of it is the right first step. There are famous quotes like "the internet is a series of tubes" which might show what happens when the people in charge of making major societal decisions about something don't understand it. And I definitely don't want to be caught saying, "Well, AI is really just a lot of layered spreadsheets."

There are a ton of job titles like data scientist, CAD modeller, systems administrator, software engineer or web designer that all benefit from (or pretty much require) a strong foundational knowledge of computers and the internet. I'm sure that there are going to be a lot of new jobs related to both implementing AI and using it in the workplace, and as someone without a perfectly clear-cut career path, I want to be ready for these. I feel the foundational knowledge will be really useful to see if I want to pursue a career related to AI.

It wasn't actually that long ago when the internet was launched (the formal date is in the '80s, but it didn't really show up in most homes and schools in Canada until the '90s), and then social media was another big thing that followed in the 2000s. Now these things are everywhere, even though they were pretty niche in the beginning. AI seemed like a sci-fi movie trope until a few years ago, and now, almost everyone I know has used it (well, maybe not my grandparents). It's certainly the next ubiquitous thing, and I want to be ready.

Learn more about the course

More here:

Five reasons I would take INT D 161 - Artificial Intelligence Everywhere - University of Alberta

Artificial intelligence and church – UM News

Key Points:

Artificial intelligence technology, the subject of buzz and anxiety at the moment, has made its way to religion circles.

Pastor Jay Cooper, who heads Violet Crown City Church, a United Methodist congregation in Austin, Texas, took AI out for a spin recently at his Sept. 17 worship service.

The verdict? Interesting, but something was missing.

"They were glad we did it," Cooper said of his congregation, "and let's not do it again."

Cooper used ChatGPT to put together the entire worship service, including the sermon and an original song. He said the result was a stilted atmosphere.

"The human element was lacking," he said. "It seemed to in some way prevent us from connecting with each other. The heart was missing."

AI "leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind," according to the IBM website. It has been around since the 1950s and is used to power web search engines and self-driving cars; it can compete in games of strategy such as chess and create works such as songs, sermons and prose by using data collected on the internet.

AI-based software transcribed the interviews for this story. The remaining Beatles created a new song, "Now and Then," using AI to extract John Lennon's vocals from a poorly recorded demo cassette tape he made in the 1970s.

"The CEO of Google said that this is bigger than fire, bigger than electricity," said the Rev. James Lee, director of communications for the Greater New Jersey and Eastern Pennsylvania conferences. "I really believe that this is going to be how we do everything within the next five to 10 years."

Cooper said he has strong feelings against using AI to write a sermon again.

"Even if it's not as eloquent or if it's a little messy or last minute, it needs to be from the heart of the pastor."

Lee concurs. "ChatGPT is pretty bad at writing good sermons. That's my own opinion, but they're very vanilla," he said.

Philip Clayton, Ingraham Professor of Theology at Claremont School of Theology, said that religion tends to be slow to pick up on new technology.

"I think our fear of technology is not a good thing, especially when we're trying to attract younger people to be involved in churches," he said.

"AI is a means to get something done, like using a typewriter years ago," he added. "For us as Christians, the key question is, 'Do the means become the end?'"

"A sermon is an attempt to speak the word of God to people of God assembled at a particular time and place," Clayton said.

"It takes prayer, it takes the knowledge of the people, it takes allusions to my community in my country and all kinds of frameworks," he said. "If I don't do that task, what have I carried out? What are my responsibilities as one who rightly divines the word of God?"

Lee suggests treating AI technology as an intern.

"They are able to do a lot of work for you and support you, and almost treat them like an additional member of the team," he said.

The Rev. Stacy Minger, associate professor of preaching at Asbury Theological Seminary, believes AI could be helpful as long as the preacher does their due diligence of preparation.

"The way I teach preaching is that the preacher invests in praying over the text, reading the text and using all of their biblical studies and skills, and then they consult the commentaries or the scholars," she said.

"If you're maybe missing an illustration or missing a transition or there's something that just hasn't kind of come together and you're banging your head against the wall, I think at that point, after you've done all of your own work, that it could be a helpful tool."

"It is important to verify the work of programs like ChatGPT," said Ted Vial, the Potthoff Professor of Theology and Modern Western Religious Thought and vice president of innovation, learning and institutional research at Iliff School of Theology.

"There's a lot of bad information (on the internet)," Vial said. "My experience with the current level of (AI) sophistication is they can produce a clearly written and well-organized essay. They're not very inspirational."

AI programs do not include the most current information, he said.

"I think ChatGPT is built on data that goes through November of 2021," Vial said. "So, if sermons are supposed to relate what's happening in the world to the Bible, it's going to be out of date."

Humans have emotions and creativity that are hard for a computer to emulate, he said.

But the technology continues to improve.

"Whatever humans can do, I'm pretty sure AI will be able to do it soon also," Vial said. "So, the question isn't, 'Would you need a human?' The question is, 'Are you and your congregation OK with a service that's produced by a machine?'"

"Even if the answer to that is 'No,' there will be pastors who want to use it because it makes their lives easier," he added.

"If it's a personal connection between the pastor and a community, then it's important to have the pastor's voice and personality," Vial said. "If it's exegesis of a text, there may not be anything wrong with having a computer produce it."

Looking at it from another direction, a pastor might be cheating themselves as well as their congregation if they skip doing most of the work, Minger said.

"I would be concerned that if you're not spending that time, using all of your biblical study skills and prayerfully invested in the reading of Scripture, that you as a preacher are skipping over a wonderful formative opportunity in your own life," she said.

"As I'm hammering out a sermon, I'm really wrestling with it," she said. "You need images and metaphors, word choices and illustrations."

"And so, as preachers, it's not only that we would be short-circuiting the congregation, I think we would be tamping down our own creative outlets in the effort to become more efficient."

Patterson is a UM News reporter in Nashville, Tennessee. Contact him at 615-742-5470 or [emailprotected]. To read more United Methodist news, subscribe to the free Daily or Weekly Digests.

Read the original post:

Artificial intelligence and church - UM News

Inventorship and Patentability of Artificial Intelligence-Based … – Law.com

The concept of inventorship continues to evolve with the advent of artificial intelligence (AI). AI-generated inventions spur disagreement among Patent Offices across the world as to who, or what, qualifies as an inventor in patent applications. The quandary is whether an AI platform that assists in creating an invention should be named as an inventor in a patent. For example, an AI-powered system may be employed pharmaceutically to identify molecular compounds in the discovery of a newly invented drug, without much human involvement. Does such AI assistance rise to the level of inventorship? The answer is no, not in the United States, according to the U.S. Court of Appeals for the Federal Circuit. See Thaler v. Vidal, 43 F.4th 1207, 1213 (Fed. Cir. 2022).

In a hallmark decision, the Court of Appeals for the Federal Circuit recently established that artificial intelligence cannot be named as an inventor on a U.S. patent. In Thaler v. Vidal, Dr. Stephen Thaler represented that he developed an AI platform named DABUS (device for the autonomous bootstrapping of unified sentience) that created inventions. Thaler subsequently filed two patent applications at the U.S. Patent and Trademark Office (USPTO) for inventions generated by DABUS, attempting to list the DABUS-AI system as the inventor. The USPTO declined Thaler's request. The circuit court affirmed, holding that only a human being can be an inventor in the United States. See Thaler, 43 F.4th at 1209, 1213.

Here is the original post:

Inventorship and Patentability of Artificial Intelligence-Based ... - Law.com

Opera gives voice to Alan Turing with help of artificial intelligence – Yale News

A few years ago, composer Matthew Suttor was exploring Alan Turing's archives at King's College, Cambridge, when he happened upon a typed draft of a lecture the pioneering computer scientist and World War II codebreaker gave in 1951 foreseeing the rise of artificial intelligence.

In the lecture, "Intelligent Machinery, a Heretical Theory," Turing posits that intellectuals would oppose the advent of artificial intelligence out of fear that machines would replace them.

"It is probable though that the intellectuals would be mistaken about this," Turing writes in a passage that includes his handwritten edits. "There would be plenty to do, trying to understand what the machines were trying to say, i.e., in trying to keep ones (sic) intelligence up to the standard set by the machines."

To Suttor, the passage underscores Turing's visionary brilliance.

"Reading it was kind of a mind-blowing moment as we're now on the precipice of Turing's vision becoming our reality," said Suttor, program manager at Yale's Center for Collaborative Arts and Media (CCAM), a campus interdisciplinary center engaged in creative research and practice across disciplines, and a senior lecturer in the Department of Theater and Performance Studies in Yale's Faculty of Arts and Sciences.

Inspired by Turing's 1951 lecture, and other revelations from his papers, Suttor is working with a team of musicians, theater makers, and computer programmers (including several alumni from the David Geffen School of Drama at Yale) to create an experimental opera, called "I AM ALAN TURING," which explores his visionary ideas, legacy, and private life.

"I didn't envision a chronological biographical operatic piece... To me, it was much more interesting to investigate Turing's ideas."

Matthew Suttor

In keeping with Turing's vision, the team has partnered with artificial intelligence on the project, using successive versions of GPT, a large language model, to help write the opera's libretto and spoken text.

Three work-in-progress performances of the opera formed the centerpiece of the Machine as Medium Symposium: Matter and Spirit, a recent two-day event produced by CCAM that investigated how AI and other technologies intersect with creativity and alter how people approach timeless questions on the nature of existence.

The symposium, whose theme "Matter and Spirit" was derived from Turing's writings, included panel discussions with artists and scientists, an exhibition of artworks made with the help of machines or inspired by technology, and a tour of the Yale School of Architecture's robotic lab led by Hakim Hasan, a lecturer at the school who specializes in robotic fabrication and computational design research.

"All sorts of projects across fields and disciplines are using AI in some capacity," said Dana Karwas, CCAM's director. "With the opera, Matthew and his team are using it as a collaborative tool in bringing Alan Turing's ideas and story into a performance setting and creating a new model for opera and other types of live performance."

"It's also an effective platform for inviting further discussion about technology that many people are excited about or questioning right now, and is a great example of the kind of work we're encouraging at CCAM."

Turing is widely known for his work at Bletchley Park, Great Britain's codebreaking center during World War II, where he cracked intercepted Nazi ciphers. But he was also a path-breaking scholar whose work set the stage for the development of modern computing and artificial intelligence.

His Turing Machine, developed in 1936, was an early computational device that could implement algorithms. In 1950, he published an article in the journal Mind that asked: "Can machines think?" He also made significant contributions to theoretical biology, which uses mathematical abstractions in seeking to better understand the structures and systems within living organisms.

A gay man, Turing was prosecuted in 1952 for gross indecency after acknowledging a sexual relationship with a man, which was then illegal in Great Britain, and underwent chemical castration in lieu of a prison sentence. He died by suicide in 1954, age 41.

Before visiting Turing's archive, Suttor had read "Alan Turing: The Enigma," Andrew Hodges' authoritative 1983 biography, and believed the mathematician's life possessed an operatic scale.

"I didn't envision a chronological biographical operatic piece, which frankly is a pretty dull proposition," Suttor said. "To me, it was much more interesting to investigate Turing's ideas. How do you put those on stage and sing about them in a way that is moving, relevant, and dramatically exciting?"

That's when Smita Krishnaswamy, an associate professor of genetics and computer science at Yale, introduced Suttor and his team to OpenAI, and several Zoom conversations with representatives of the company about the emerging technology followed. Working with Yale University Library's Digital Humanities Lab, the team built an interface to interact with an instance, or single occurrence, of GPT-2, training it with materials from Turing's archive and the text of books he's known to have read. For example, they knew Turing enjoyed George Bernard Shaw's play "Back to Methuselah" and "Snow White," the Brothers Grimm fairytale, so they shared those texts with the AI.

The team began asking GPT-2 the kinds of questions that Turing had investigated, such as "Can machines think?" They could control the "temperature" of the model's answers (the creativity or randomness) and the number of characters the responses contained. They continually adjusted the settings on those controls and honed their questions to vary the answers.
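
A minimal sketch of what such an interface might look like with the open-source GPT-2 weights is shown below. The model name, prompt, and parameter values are illustrative assumptions; the article does not describe the Yale team's actual tooling or fine-tuned model.

```python
# Hedged sketch: prompting GPT-2 with adjustable temperature and length,
# in the spirit of the interface described above. Model, prompt, and
# settings are illustrative; the team's fine-tuned model is not public.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Can machines think?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,                       # sample instead of greedy decoding
    temperature=0.9,                      # higher values give more random, "creative" text
    max_new_tokens=60,                    # rough stand-in for limiting response length
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Raising the temperature and re-running the same prompt is what produces the varied, sometimes startling answers the team describes.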

"Some of the responses are just jaw-droppingly beautiful," Suttor said. "You are the applause of the galaxy," for instance, is something you might print on a T-shirt.

In one prompt, the team asked the AI technology to generate lyrics for a sexy song about the opera's subject, which yielded the lyrics to "I'm a Turing Machine, Baby."

In composing the opera's music, Suttor and his team incorporated elements of Turing's work on morphogenesis (the biological process that develops cells and tissues) and phyllotaxis (the botanical study of mathematical patterns found in stems, leaves, and seeds). For instance, Suttor found that diagrams Turing had produced showing the spiral patterns of seeds in a sunflower head conform to a Fibonacci sequence, in which each number is the sum of the two before it. Suttor superimposed the circle of fifths (a method in music theory of organizing the 12 chromatic pitches as a sequence of perfect fifths) onto Turing's diagram, producing a unique mathematical, harmonic progression.

Suttor repeated the process using prime numbers (numbers greater than 1 that are not the product of two smaller numbers) in place of the Fibonacci sequence, which also produced a harmonic series. The team sequenced analog synthesizers to these harmonic progressions.
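
One way to picture the mapping described above is to step around the circle of fifths using Fibonacci numbers, and then primes, as indices. The sketch below is an illustrative reconstruction of that idea, not Suttor's actual compositional method.

```python
# Illustrative reconstruction: derive pitch sequences by indexing the circle
# of fifths with Fibonacci numbers and with primes. Not the opera's score logic.
CIRCLE_OF_FIFTHS = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]

def fibonacci(n):
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def primes(n):
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
        candidate += 1
    return found

def progression(numbers):
    # Each number selects a pitch by stepping that many fifths around the circle.
    return [CIRCLE_OF_FIFTHS[k % 12] for k in numbers]

print(progression(fibonacci(8)))  # ['G', 'G', 'D', 'A', 'B', 'G#', 'G', 'D#']
print(progression(primes(8)))     # prime-indexed variant of the same idea
```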

"It sounds a little like Handel on acid," he said.

The workshop version of "I AM ALAN TURING" was performed on three consecutive nights before a packed house in the CCAM Leeds Studio. The show, in its current form, consists of eight pieces of music that cross genres. Some are operatic with a chorus and soloist, some sound like pop music, and some evoke musical theater. While Suttor composed key structural pieces, the entire team has collaborated like a band while creating the music.

At the same time, the show's storytelling is delivered through various modes: opera, pop, and acted drama. At the beginning, an actor portraying Turing stands at a chalkboard drawing the sunflower's spiral pattern.

Another scene is drawn from a transcript of Turing's comments during a panel discussion, broadcast by the BBC, about the potential of artificial intelligence. In that conversation, Turing spars with a skeptical colleague who doesn't believe machines could reach or exceed human levels of intelligence.

"Turing made that point during that BBC panel that he'd trained machines to do things, which took a lot of work, and they both learned something from the process," Suttor said. "I think that captures our experience working with GPT to draft the script."

The show also contemplates Turing's sexuality and the persecution he endured because of it. One sequence shows Turing enjoying a serene morning in his kitchen beside a partner, sipping tea and eating toast. His partner reads the paper. Turing scribbles in a notebook. A housecat makes its presence felt.

"It's the life that Turing never had," Suttor said.

In high school, Turing had a close friendship with classmate Christopher Morcom, who succumbed to tuberculosis while both young men were preparing to attend Cambridge. Morcom has been described as Turing's first true love.

Turing wrote a letter called "Nature of Spirit" to Christopher's mother in which he imagines the possibility of multiple universes and how the soul and the body are intrinsically linked.

In the opera, a line from the letter is recited following the scene, in Turing's kitchen, that showed a glimmer of domestic tranquility: "Personally, I think that spirit is really eternally connected with matter but certainly not always by the same kind of body."

The show closed with an AI-generated text, seemingly influenced by "Snow White": "Look in the mirror, do you realize how beautiful you are? You are the applause of the galaxy."

The "I AM ALAN TURING" experimental opera was just one of many projects presented during Machine as Medium: Matter and Spirit, a two-day symposium that demonstrated the kinds of interdisciplinary collaborations driven by Yale's Center for Collaborative Arts and Media (CCAM).

An exhibition at the center's York Street headquarters highlighted works created with, or inspired by, various kinds of machines and technology, including holograms, motion capture, film and immersive media, virtual reality, and even an enormous robotic chisel. An exhibition tour allowed the artists to connect while describing their work to the public. The discussion among the artists and guests typifies the sense of community that CCAM aims to provide, said Lauren Dubowski '14 M.F.A., '23 D.F.A., CCAM's assistant director, who designed and led the event.

"We work to create an environment where anyone can come in and be a part of the conversation," Dubowski said. "CCAM is a space where people can see work that they might not otherwise see, meet people they might not otherwise meet and talk about the unique things happening here."

Follow this link:

Opera gives voice to Alan Turing with help of artificial intelligence - Yale News

Formula One trials AI to tackle track limits breaches – Reuters

ABU DHABI, Nov 23 (Reuters) - Formula One's governing body is trialling artificial intelligence (AI) to tackle track limits breaches at this weekend's season-ending Abu Dhabi Grand Prix.

The Paris-based FIA said it would be using 'Computer Vision' technology that uses shape analysis to work out the number of pixels going past the track edge.

The AI will sort out the genuine breaches, where drivers cross the white line at the edge of the track with all four wheels, reducing the workload for the FIA's remote operations centre (ROC) and speeding up the response.
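
The report does not spell out how the FIA's system works internally, but the pixel-counting idea it describes can be illustrated with a hedged sketch like the one below, in which the masks, threshold, and function name are all hypothetical.

```python
# Hedged sketch of the pixel-counting idea described above. The masks,
# threshold, and function name are hypothetical; the FIA's actual
# Computer Vision pipeline is not detailed in the article.
import numpy as np

def is_potential_breach(car_mask: np.ndarray,
                        off_track_mask: np.ndarray,
                        min_pixels: int = 500) -> bool:
    """Flag a frame when enough car pixels lie beyond the white track-edge line.

    car_mask:       boolean image, True where the car is detected
    off_track_mask: boolean image, True for the area outside the track edge
    """
    pixels_over_line = np.logical_and(car_mask, off_track_mask).sum()
    return bool(pixels_over_line >= min_pixels)

# Frames flagged this way would still go to the remote operations centre,
# and then to race control, for a human decision.
```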

The July 2 Austrian Grand Prix was a high water mark for the sport with just four people having to process an avalanche of some 1,200 potential violations.

By the title-deciding Qatar weekend in October there were eight people assigned to assess track limits and monitor 820 corner passes, with 141 reports sent to race control who then deleted 51 laps.

Some breaches still went unpunished at October's U.S. Grand Prix in Austin, however.

Stewards said this month that their inability to properly enforce track limits violations at turn six was "completely unsatisfactory" and a solution needed to be found before the start of next season.

Tim Malyon, the FIA's head of remote operations and deputy race director, said the Computer Vision technology had been used effectively in medicine in areas such as scanning data from cancer screening.

"They dont want to use the Computer Vision to diagnose cancer, what they want to do is to use it to throw out the 80% of cases where there clearly is no cancer in order to give the well trained people more time to look at the 20%," he said.

"And thats what we are targeting."

Malyon said the extra Computer Vision layer would reduce the number of potential infringements being considered by the ROC, with still fewer then going on to race control for further action.

"The biggest imperative is to expand the facility and continue to invest in software, because thats how well make big strides," he said. "The final takeaway for me is be open to new technologies and continue to evolve.

"Ive said repeatedly that the human is winning at the moment in certain areas. That might be the case now but we do feel that ultimately, real time automated policing systems are the way forward."

Reporting by Alan Baldwin in London, editing by Toby Davis

Our Standards: The Thomson Reuters Trust Principles.

More:

Formula One trials AI to tackle track limits breaches - Reuters

Artificial Intelligence (AI) and Human Intelligence (HI) in the future of … – TechNode Global

In an era of rapid technological advancement, the dynamic interplay between Artificial Intelligence (AI) and Human Intelligence (HI) is shaping the future of education. As we peer into the horizon, it becomes increasingly clear that our approach to preparing the next generation for the challenges and opportunities that await must evolve. Here, we explore how AI and HI are set to play harmonious, complementary roles in the realm of preschool education and beyond.

Before we embark on this educational journey, it is crucial to envision the future professions that will shape the educational landscape. Two significant examples illustrate the evolving role of AI in various fields:

Future-proofing students for a rapidly evolving world equips students to be both AI-competent and distinctly human.

The collaboration between AI and human educators forms the core of the new educational landscape. This symbiotic relationship aims to merge the precision and efficiency of AI with the empathetic and creative guidance provided by human mentors. Here's how this partnership is poised to revolutionize education:

The integration of AI in preschool education unfolds a unique paradigm where young children can seamlessly transition between the roles of the tutee and the tutor, fostering a multifaceted and enriching educational experience. This concept manifests in three distinct applications of AI in education: as a tutor, as a tutee, and as a tool.

In essence, this multifaceted interaction with AI establishes a symbiotic relationship where the roles of teacher and student are fluid. By seamlessly transitioning between these roles, preschool children not only benefit from tailored guidance but also actively engage with AI as a creative collaborator and a student, cultivating a well-rounded skill set that extends beyond conventional learning approaches.

In summary, the future of education hinges on the collaborative efforts of human educators and AI. Together, they create an educational ecosystem that leverages the strengths of each, ensuring that students are not only equipped with knowledge but also possess the essential skills and attributes needed to thrive in a future where human qualities play a pivotal role.

Dr. Richard Yen, Harvard University PhD, founder of Ednovation, which develops edtech and operates Cambridge, ChildFirst, and Shaws Preschools in Singapore, South East Asia, and China.

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.

Read more here:

Artificial Intelligence (AI) and Human Intelligence (HI) in the future of ... - TechNode Global