Samsung Electronics Explores Future of AI Research

Under the themes "Shaping the Future with AI and Semiconductor" and "Scaling AI for the Real World," renowned experts will share the latest AI research achievements

Samsung Electronics today announced that it will host the Samsung AI Forum 2022 from November 8 to 9.

The Samsung AI Forum, now in its sixth year, is a place for exchanging technological advances with world-renowned AI scholars and experts, sharing the latest AI research achievements and exploring future research directions.

This year's forum will be held in-person for the first time in three years and will also be live-streamed on Samsung Electronics' YouTube channel.

Those who are interested in the event can register to participate in the forum from October 18 to the day of the event on the Samsung AI Forum website. Registered participants will be able to receive a detailed program agenda and submit questions online.

Day one will be hosted by Samsung Advanced Institute of Technology (SAIT) under the theme "Shaping the Future with AI and Semiconductor." Participants will discuss the current status and direction of AI research that will lead future innovation in other fields, including semiconductors and materials.

Jong-Hee (JH) Han, Vice Chairman, CEO and Head of Device eXperience (DX) Division at Samsung Electronics, will start the forum by giving the opening remarks, followed by a keynote speech from Professor Yoshua Bengio of the University of Montreal, Canada. Afterward, technology sessions such as "AI for R&D Innovation," "Recent Advances of AI Algorithms" and "Large Scale Computing for AI and HPC" will be held.

During each technology session, renowned AI experts and the AI research leaders at SAIT will be on stage to share their findings. Minjoon Seo, Professor at KAIST, and Hyunoh Song, Professor at Seoul National University, will introduce the latest research achievements on AI algorithms, and the former IBM and Intel Fellow Alan Gara, who is one of the leading researchers on supercomputers, will make a presentation on the evolution of computing and the future of AI. AI research leaders at SAIT, including Changkyu Choi, Executive Vice President and Head of SAIT's AI Research Center, will share the status and vision of Samsung's research on AI.

"This year's AI forum has been prepared as a place to discuss the direction of AI research to create a better future by applying AI technology to various fields, especially semiconductors, in the future," said Gyo-Young Jin, President and Head of SAIT and Co-chair of the Samsung AI Forum.

The Samsung AI Researcher of the Year awards, which were established to discover excellent rising researchers in the field of AI, will also be presented during the forum. In addition, various programs, including poster presentations of excellent research papers, an introduction to SAIT, an exhibition of its research projects and a networking event for researchers and students in the field of AI, will be held to accelerate active research in AI.

Day two of the forum will be hosted by Samsung Research under the theme "Scaling AI for the Real World." Participants will share the direction of future AI technological advancement that will have an important impact on our lives, in areas such as hyperscale AI, digital humans and robotics, which are among the latest trending topics.

Sebastian Seung, President and Head of Samsung Research, will start with welcoming remarks and a keynote speech on "Evolutionary approach to brain-inspired learning algorithms."

Daniel Lee, Executive Vice President and Head of Samsung Research's Global AI Center, will give a presentation on the current status of Samsung Research's AI research, which will be followed by invited talks by AI experts, including the heads of Global Research Institutes.

Terrence Sejnowski, Professor at the University of California San Diego and founder of NeurIPS (the Conference and Workshop on Neural Information Processing Systems), one of the most prestigious international conferences on AI, will speak on whether large language models are intelligent, and Dr. Johannes Gehrke, Head of Microsoft Research Lab, will explain the core technology of hyperscale AI and the research directions of Microsoft's next-generation AI.

Afterwards, Dieter Fox, Senior Director of Robotics Research at NVIDIA, will give a presentation on robot technology that controls objects without an explicit model, and Seungwon Hwang, Professor at Seoul National University, will share knowledge on robust natural language processing technology.

Furthermore, Daniel Lee will moderate a panel discussion on the latest AI trends and the future outlook with fellow speakers. There will also be time allotted for presentations and demonstrations of the latest research by the researchers at Samsung Research's AI Research Center.

"This year's Samsung AI Forum will be a place for participants to better understand the various AI research efforts currently underway in terms of 'Scaling AI for the Real World' to increase the value of our lives," said Dr. Sebastian Seung, President and Head of Samsung Research. "We hope many people who are interested in the field of AI will participate in this year's forum, which will be held both online and in person."

Eyenuk Raises $26M for AI-Powered Eye Screening & Predictive Biomarkers

What You Should Know:

Eyenuk, Inc., a global artificial intelligence (AI) digital health company and the leader in real-world applications for AI Eye Screening and AI Predictive Biomarkers, today announced it has secured $26 million in a Series A financing round, bringing the Company's total funding to over $43 million.

The capital raise was led by AXA IM Alts and was joined by new and existing investors including T&W Medical A/S, A&C Foelsgaard Alternativer ApS, Kendall Capital Partners, and KOFA Healthcare.

Accelerating Global Access to AI-Powered Eye-Screening Technology

Eyenuk, Inc. is a global artificial intelligence (AI) digital health company and the leader in real-world AI Eye Screening for autonomous disease detection and AI Predictive Biomarkers for risk assessment and disease surveillance. Eyenuk is on a mission to screen every eye in the world to ensure timely diagnosis of life- and vision-threatening diseases, including diabetic retinopathy, glaucoma, age-related macular degeneration, stroke risk, cardiovascular risk, and Alzheimer's disease.

Eyenuk will use the capital to expand its AI product platform with additional disease indications and advanced care coordination, and to accelerate the platform's global commercialization and adoption.

"We are thrilled that AXA IM Alts, T&W Medical A/S, A&C Foelsgaard Alternativer ApS, Kendall Capital Partners, and our other new and existing investors have joined us in furthering our mission of using AI to screen every eye in the world to help eliminate preventable vision loss and transition the world to predictive and preventative healthcare," said Eyenuk CEO and Founder Kaushal Solanki, Ph.D. "Our Series A fundraise validates the strong market performance of the EyeArt system and provides us with critical resources as we expand our platform capabilities this year to include solutions for detecting additional diseases."

Today's announcement follows the Sept. 29, 2022, publication of a major peer-reviewed study in Ophthalmology Science, a publication of the American Academy of Ophthalmology. The study found that the EyeArt AI system is far more sensitive in identifying referable diabetic retinopathy than dilated eye exams by ophthalmologists and retina specialists.

Eyenuk is leading the way in harnessing the power of AI to eliminate preventable blindness globally, through its versatile digital health platform that enables automated AI diagnosis and coordination of care. Eyenuk's flagship EyeArt AI system has been more broadly adopted worldwide than any other autonomous AI technology for ophthalmology. Since its FDA clearance in 2020, the EyeArt system has been used in over 200 locations in 18 countries, including 14 U.S. states, to screen over 60,000 patients and counting. It is the first and only technology to be cleared by the FDA for autonomous detection of both referable and vision-threatening diabetic retinopathy without any eye care specialist involvement.

The EyeArt system is reimbursed by Medicare in the US, and has regulatory approvals globally, including CE Marking, Health Canada license, and approvals in multiple markets in Latin America and the Middle East.

Has There Been A Second AI Big Bang? – Forbes

Aleksa Gordic, an AI researcher with DeepMind

The Big Bang in artificial intelligence (AI) refers to the breakthrough in 2012, when a team of researchers led by Geoff Hinton managed to train an artificial neural network (known as a deep learning system) to win an image classification competition by a surprising margin. Prior to that, AI had performed some remarkable feats, but it had never made much money. Since 2012, AI has helped the big technology companies to generate enormous wealth, not least from advertising.

Has there been a new Big Bang in AI, since the arrival of Transformers in 2017? In episodes 5 and 6 of the London Futurist podcast, Aleksa Gordic explored this question, and explained how today's cutting-edge AI systems work. Aleksa is an AI researcher at DeepMind, and previously worked in Microsoft's HoloLens team. Remarkably, his AI expertise is self-taught, so there is hope for all of us yet!

Transformers are deep learning models which process inputs expressed in natural language and produce outputs like translations, or summaries of texts. Their arrival was announced in 2017 with the publication by Google researchers of a paper titled "Attention Is All You Need." This title referred to the fact that Transformers can pay attention simultaneously to a large corpus of text, whereas their predecessors, Recurrent Neural Networks, could only pay attention to the symbols on either side of the segment of text being processed.

Transformers work by splitting text into small units, called tokens, and mapping them into high-dimensional vector spaces - often thousands of dimensions. We humans cannot envisage this. The space we inhabit is defined by three numbers, or four if you include time, and we simply cannot imagine a space with thousands of dimensions. Researchers suggest that we shouldn't even try.
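As a rough illustration of tokens and embeddings, here is a toy sketch in Python; the vocabulary, dimensions and vectors are invented for the example, not taken from any real model:

```python
import numpy as np

# Toy tokenizer: a real Transformer uses subword tokenization (e.g. byte-pair
# encoding) and a vocabulary of tens of thousands of tokens; this is only a sketch.
vocab = {"the": 0, "queen": 1, "wore": 2, "a": 3, "crown": 4}

def tokenize(text):
    # Map each word to its integer token ID.
    return [vocab[w] for w in text.lower().split()]

# Each token ID indexes a row of an embedding matrix. Real models use hundreds
# or thousands of dimensions; we use 4 here so it fits on screen.
rng = np.random.default_rng(0)
embedding_matrix = rng.normal(size=(len(vocab), 4))

token_ids = tokenize("The queen wore a crown")
vectors = embedding_matrix[token_ids]   # shape: (5 tokens, 4 dimensions)
print(token_ids)
print(vectors.shape)
```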

For Transformer models, words and tokens have dimensions. We might think of them as properties, or relationships. For instance, man is to king as woman is to queen. These concepts can be expressed as vectors, like arrows in three-dimensional space. The model will attribute a probability to a particular token being associated with a particular vector. For instance, a princess is more likely to be associated with the vector that denotes wearing a slipper than with the vector that denotes wearing a dog.
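The "man is to king as woman is to queen" relationship is often demonstrated as vector arithmetic. Below is a toy Python demonstration with hand-picked three-dimensional vectors; real models learn such directions (gender, royalty, and so on) in thousands of dimensions:

```python
import numpy as np

# Hand-picked toy vectors purely for illustration.
vec = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([0.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([0.0, 1.0, 1.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "man is to king as woman is to ?": king - man + woman should land near queen.
target = vec["king"] - vec["man"] + vec["woman"]
best = max(vec, key=lambda w: cosine(target, vec[w]))
print(best)                          # queen
print(cosine(target, vec["queen"]))  # 1.0 for these toy vectors
```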

There are various ways in which machines can discover the relationships, or vectors, between tokens. In supervised learning, they are given enough labelled data to indicate all the relevant vectors. In self-supervised learning, they are not given labelled data, and they have to find the relationships on their own. This means the relationships they discover are not necessarily discoverable by humans. They are black boxes. Researchers are investigating how machines handle these dimensions, but it is not certain that the most powerful systems will ever be truly transparent.

The size of a Transformer model is normally measured by the number of parameters it has. A parameter is analogous to a synapse in a human brain, which is the point where the tendrils (axons and dendrites) of our neurons meet. The first Transformer models had a hundred million or so parameters, and now the largest ones have trillions. This is still smaller than the number of synapses in the human brain, and human neurons are far more complex and powerful creatures than artificial ones.

A surprising discovery made a couple of years after the arrival of Transformers was that they are able to tokenise not just text, but images too. Google released the first vision Transformer in late 2020, and since then people around the world have marvelled at the output of DALL-E, MidJourney, and others.

The first of these image-generation models were Generative Adversarial Networks, or GANs. These were pairs of models, with one (the generator) creating imagery designed to fool the other into accepting it as original, and the second system (the discriminator) rejecting attempts which were not good enough. GANs have now been surpassed by Diffusion models, whose approach is to peel noise away from the desired signal. The first Diffusion model was actually described as long ago as 2015, but the paper was almost completely ignored. They were re-discovered in 2020.
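As a heavily simplified sketch of that adversarial setup, the PyTorch snippet below trains a generator and discriminator against each other on one-dimensional toy data; everything here is illustrative rather than a production image GAN:

```python
import torch
import torch.nn as nn

# Toy "real data": samples from a Gaussian. A real GAN would use images.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to accept real samples and reject fakes.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# The generated samples' mean should drift toward the real data's mean (~2.0).
print(generator(torch.randn(1000, 8)).mean())
```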

Transformers are gluttons for compute power and for energy, and this has led to concerns that they might represent a dead end for AI research. It is already hard for academic institutions to fund research into the latest models, and it was feared that even the tech giants might soon find them unaffordable. The human brain points to a way forward. It is not only larger than the latest Transformer models (at around 80 billion neurons, each with around 10,000 synapses, it is 1,000 times larger). It is also a far more efficient consumer of energy - mainly because we only need to activate a small portion of our synapses to make a given calculation, whereas AI systems activate all of their artificial neurons all of the time. Neuromorphic chips, which mimic the brain more closely than classic chips, may help.
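The rough arithmetic behind the brain-size comparison above, taking the article's figures and a trillion-parameter model as the reference point (the latter is an assumption for illustration):

```latex
\underbrace{80\times10^{9}}_{\text{neurons}} \times \underbrace{10^{4}}_{\text{synapses per neuron}}
  = 8\times10^{14}\ \text{synapses}
  \approx 800 \times \underbrace{10^{12}\ \text{parameters}}_{\text{trillion-parameter model}}
```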

Aleksa is frequently surprised by what the latest models are able to do, but this is not itself surprising. "If I wasn't surprised, it would mean I could predict the future, which I can't." He derives pleasure from the fact that the research community is like a hive mind: you never know where the next idea will come from. The next big thing could come from a couple of students at a university, and a researcher called Ian Goodfellow famously created the first GAN by playing around at home after a brainstorming session over a couple of beers.

‘State of AI in the Enterprise’ Fifth Edition Uncovers Four Key Actions to Maximize AI Value – PR Newswire

Research reveals the key actions leaders can take to accelerate AI outcomes

NEW YORK, Oct. 18, 2022 /PRNewswire/ --

Key takeaways

Why this matters

The Deloitte AI Institute's fifth edition of the "State of AI in the Enterprise" survey, conducted between April and May 2022, provides organizations with a roadmap to navigate lagging AI outcomes. Twenty-nine percent more respondents surveyed classify as underachievers this year, yet 79% of respondents say they've fully deployed three or more types of AI. It is clear that, despite rapid advancement in the AI market, organizations are struggling to turn implementation into scalable transformation. This year's report digs deeper into the actions that lead to successful outcomes, providing leaders with a guide to overcome roadblocks and drive business results with AI.

The report surveyed 2,620 executives from 13 countries across the globe, outlining detailed recommendations for leaders to cultivate an AI-ready enterprise and improve outcomes for their AI efforts. Similar to last year's report, Deloitte grouped responding organizations into four profiles (Transformers, Pathseekers, Starters and Underachievers) based on how many types of AI applications they have deployed full-scale and the number of outcomes achieved to a high degree. The findings in the report aim to help companies overcome deployment and adoption challenges to become AI-fueled organizations that realize value and drive transformational outcomes from AI.

Key quotes

"Amid unprecedented disruption in the global economy and society at large, it is clear today's AI race is no longer about just adopting AI but instead driving outcomes and unleashing the power of AI to transform business from the inside out. This year's report provides a clear roadmap for business leaders looking to apply next-level human cognition and drive value at scale across their enterprise."

Costi Perricos, Deloitte Global AI and Data leader

"Since 2017, we have been tracking the advancement of AI as industries navigate the "Age of With." The fifth edition of our annual report outlines how AI can propel businesses beyond automating processes for efficiency to redesigning work itself. While organizations face the challenge of middling results, it is clear successful AI transformation requires strong leadership and focused investment, a through-line consistently evident in our annual research."

Beena Ammanath, executive director of the Deloitte AI Institute, Deloitte Consulting LLP

Four key actions powering widespread value from AI

Based on Deloitte's analysis of the behaviors and responses of high- and low-outcome organizations, the report identifies four key actions leaders can take now to improve outcomes for their AI efforts.

Action 1: Invest in Leadership and Culture

When it comes to successful AI deployment and adoption, leadership and culture matter. The workforce is increasingly optimistic, and leaders should do more to harness that optimism for culture change, establishing new ways of working to drive greater business results with AI.

Action 2: Transform Operations

An organization's ability to build and deploy AI ethically and at scale depends on how well it has redesigned its operations to accommodate the unique demands of new technologies.

Action 3: Orchestrate Tech and Talent

Technology and talent acquisition are no longer separate. Organizations need to strategize their approach to AI based on the skillsets they have available, whether they derive from humans or pre-packaged solutions.

Action 4: Select Use Cases that Accelerate Outcomes

The report found that selecting the right use cases to fuel an organization's AI journey depends largely on the value-drivers for the business based on sector and industry. Starting with use cases that are easier to achieve or have a faster or higher return on investment can create momentum for further investment and make it easier to drive internal cultural and organizational changes that accelerate the benefits of AI.

The Deloitte AI Institute supports the positive growth and development of AI through engaged conversations and innovative research. It also focuses on building ecosystem relationships that help advance human-machine collaboration in the "Age of With," a world where humans work side-by-side with machines.

About Deloitte

Deloitte provides industry-leading audit, consulting, tax and advisory services to many of the world's most admired brands, including nearly 90% of the Fortune 500 and more than 7,000 private companies. Our people come together for the greater good and work across the industry sectors that drive and shape today's marketplace, delivering measurable and lasting results that help reinforce public trust in our capital markets, inspire clients to see challenges as opportunities to transform and thrive, and help lead the way toward a stronger economy and a healthier society. Deloitte is proud to be part of the largest global professional services network serving our clients in the markets that are most important to them. Building on more than 175 years of service, our network of member firms spans more than 150 countries and territories. Learn how Deloitte's approximately 415,000 people worldwide connect for impact at http://www.deloitte.com.

Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ("DTTL"), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as "Deloitte Global") does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the "Deloitte" name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see http://www.deloitte.com/about to learn more about our global network of member firms.

SOURCE Deloitte Consulting LLP

VIDEO: Role of AI in breast imaging with radiomics, detection of breast density and lesions – Health Imaging

"I think AI is still in its relatively early phase of adoption," Lehman said. "We do have some centers that are not academic centers that are very forward thinking and really wanting to bring AI into their practices. However, we are also seeing a story that is very familiar when we are bringing computer-aided detection (CAD) into both academic and community centers. The technology is being incorporated into clinical care, but we are still studying what the actual outcomes are on patients who are being screened with mammography where AI tools are or are not being used."

This includes AI for automated detection of breast cancer lesions and flagging these to show the areas of interest on mammogram images, or to flag studies that need closer attention. AI also can take a first pass look at mammograms to determine if they appear to be normal, so radiologists can prioritize which exams need to be read first and which may be more complex.

This technology will likely become more important as the number of breast imaging exams switches over from traditional four-image mammogram studies to much larger 3D mammogram digital breast tomosynthesis (DBT) exams of 50 or more images that are more time consuming to read. AI is already being used to flag images that deserve a closer look in these datasets.

AI is also finding use as an automated way to grade breast density to help eliminate the variation of grading the same patient by human readers.

However, the most exciting area of AI for breast imaging is the potential of radiomics, where the AI will view medical imaging in ways that human readers cannot, identifying very complex and small patterns that will help better assess patient risk scores, or predict what the best outcomes will be under various cancer treatments.

"What I am really excited about is the domain where investigators are considering the power of artificial intelligence to do things that humans cannot or are not very good at, and then to allow the humans to really focus on those tasks where humans excel. As of today, these AI tools have not even really scratched the surface," Lehman explained.

She said this area of research using radiomics moves beyond training AI to look at images like a human radiologist and to instead pull out details that are usually hidden from the human eye. This includes rapid computer segmentation and analysis of the morphology of disease or tissue patterns seen in images, looking for minute regional structures that can be detected by AI.

"This is not to train AI to look at mammograms like I do, but to train the AI to look for patterns and signals that my human eyes and human brain cannot detect or process," Lehman said.

She said that today we are just scratching the surface of the data potential of AI analysis of cancers in imaging. Deeply embedded patterns within cancers on imaging may be able to tell us a lot about which cancers will or will not respond to different drugs or therapies. AI may be able to tell us this from a much deeper analysis of the imaging, including the subtypes of that particular cancer. This would enable much better tailored, personalized medicine and treatments for each patient.

What the White House’s ‘AI Bill of Rights’ blueprint could mean for HR tech – HR Dive

Over the last decade, the use of artificial intelligence in areas like hiring, recruiting and workplace surveillance has shifted from a topic of speculation to a tangible reality for many workplaces. Now, those technologies have the attention of the highest office in the land.

On Oct. 4, the White House's Office of Science and Technology Policy published a "Blueprint for an AI Bill of Rights," a 73-page document outlining guidance on addressing bias and discrimination in automated technologies so that protections are embedded from the beginning, marginalized communities have a voice in the development process, and designers work hard to ensure the benefits of technology reach all people.

The blueprint focuses on five areas of protections for U.S. citizens in relation to AI: system safety and effectiveness; algorithmic discrimination; data privacy; notice and explanation when an automated system is used; and access to human alternatives when appropriate. It also follows the publication in May of two cautionary documents by the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice specifically addressing the use of algorithmic decision-making tools in hiring and other employment actions.

Employment is listed in the blueprint as one of several sensitive domains deserving of enhanced data and privacy protections. Individuals handling sensitive employment information should ensure it is only used for functions strictly necessary for that domain, while consent for all non-necessary functions should be optional.

Additionally, the blueprint states that continuous surveillance and monitoring systems should not be used in physical or digital workplaces, regardless of a person's employment status. Surveillance is particularly sensitive in the union context; the blueprint notes that federal law requires employers, and any consultants they may retain, to report the costs of surveilling employees in the context of a labor dispute, providing a transparency mechanism to help protect worker organizing.

The prevalence of employment-focused AI and automation may depend on the size and type of organization studied, though research suggests a sizable portion of employers have adopted the tech.

For example, a February survey by the Society for Human Resource Management found that nearly one-quarter of employers used such tools, including 42% of employers with more than 5,000 employees. Of all respondents utilizing AI or automation, 79% said they were using this technology for recruitment and hiring, the most common such application cited, SHRM said.

Similarly, a 2020 Mercer study found that 79% of employers were either already using, or planned to start using that year, algorithms to identify top candidates based on publicly available information. But AI has applications extending beyond recruiting and hiring. Mercer found that most respondents said they were also using the tech to handle employee self-service processes, conduct performance management and onboard workers, among other needs.

Employers should note that the blueprint is not legally binding, does not constitute official U.S. government policy and is not necessarily indicative of future policy, said Niloy Ray, shareholder at management-side firm Littler Mendelson. Though the principles contained in the document may be appropriate for AI and automation systems to follow, the blueprint is not prescriptive, he added.

"It helps add to the scholarship and thought leadership in the area, certainly," Ray said. "But it does not rise to the level of some law or regulation."

Employers may benefit from a single federal standard for AI technologies, Ray said, particularly given that this is an active legislative area for a handful of jurisdictions. A New York City law restricting the use of AI in hiring will take effect next year. Meanwhile, a similar law has been proposed in Washington, D.C., and California's Fair Employment and Housing Council has proposed regulations on the use of automated decision systems.

Then there is the international regulatory landscape, which can pose even more challenges, Ray said. Because of the complexity involved, Ray added that employers might want to see more discussion around a unified federal standard, and the Biden administration's blueprint may be a way of jump-starting that discussion.

"Let's not have to jump through 55 sets of hoops," Ray said of the potential for a federal standard. "Let's have one set of hoops to jump through."

The blueprint's inclusion of standards around data privacy and other areas may be important for employers to consider, as AI and automation platforms used for hiring often take into account publicly available data that job candidates do not realize is being used for screening purposes, said Julia Stoyanovich, co-founder and director at New York University's Center for Responsible AI.

Stoyanovich is co-author on an August paper in which a group of NYU researchers detailed their analysis of two personality tests used by two automated hiring vendors, Humantic AI and Crystal. The analysis found that the platforms exhibited substantial instability on key facets of measurement and concluded that they cannot be considered valid personality assessment instruments.

Even before AI is introduced into the equation, the idea that a personality profile of a candidate could be a predictor of job performance is a controversial one, Stoyanovich said. Laws like New York City's could help to provide more transparency on how automated hiring platforms work, she added, and could provide HR teams a better idea of whether tools truly serve their intended purposes.

"The fact that we are starting to regulate this space is really good news for employers," Stoyanovich said. "We know that there are tools that are proliferating that don't work, and it doesn't benefit anyone except for the companies that are making money selling these tools."

Microsoft’s GitHub Copilot AI is making rapid progress. Here’s how its human leader thinks about it – CNBC

Earlier this year, LinkedIn co-founder and venture capitalist Reid Hoffman issued a warning mixed with amazement about AI. "There is literally magic happening," said Hoffman, speaking to technology executives across sectors of the economy.

Some of that magic is becoming more apparent in creative spaces, like the visual arts, and the idea of "generative technology" has captured the attention of Silicon Valley. AI has even recently won awards at art exhibitions.

But Hoffman's message was squarely aimed at executives.

"AI will transform all industries," Hoffman told the members of the CNBC Technology Executive Council. "So everyone has to be thinking about it, not just in data science."

The rapid advances being made by Copilot AI, the automated code writing tool from the GitHub open source subsidiary of Microsoft, were an example Hoffman, who is on the Microsoft board, directly cited as a signal that all firms better be prepared for AI in their world. Even if not making big investments today in AI, business leaders must understand the pace of improvement in artificial intelligence and the applications that are coming or they will be "sacrificing the future," he said.

"100,000 developers took 35% of the coding suggestions from Copilot," Hoffman said. "That's a 35% increase in productivity, and off last year's model. ... Across everything we are doing, we will have amplifying tools, it will get there over the next three to 10 years, a baseline for everything we are doing," he added.

Copilot has already added another 5% to the 35% cited by Hoffman. GitHub CEO Thomas Dohmke recently told us that Copilot is now handling up to 40% of coding among programmers using the AI in the beta testing period over the past year. Put another way, for every 100 lines of code, 40 are being written by the AI, with total project time cut by up to 55%.

Copilot, trained on massive amounts of open source code, monitors the code being written by a developer and works as an assistant, taking the input from the developer and making suggestions about the next line of code, often multi-line coding suggestions, often "boilerplate" code that is needed but is a waste of time for a human to recreate. We all have some experience with this form of AI now, in places like our email, with both Microsoft and Google mail programs suggesting the next few words we might want to type.
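To make that concrete, here is an invented example of the kind of completion such an assistant might propose; the function and its body are illustrative only, not actual Copilot output:

```python
# A developer types a signature and a docstring...
def load_config(path: str) -> dict:
    """Read a JSON config file and return it as a dictionary."""
    # ...and an assistant like Copilot might then propose the boilerplate body
    # below (illustrative sketch, not real Copilot output):
    import json
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```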

AI can be logical about what may come next in a string of text. But Dohmke said, "It can't do more, it can't capture the meaning of what you want to say."

Whether a company is a supermarket working on checkout technology or a banking company working on customer experience in an app, they are all effectively becoming software companies, all building software, and once a C-suite has developers it needs to be looking at developer productivity and how to continuously improve it.

That's where the 40 lines of code come in. "After a year of Copilot, about 40% of code was written by the AI where Copilot was enabled," Dohmke said. "And if you show that number to executives, it's mind-blowing to them. ... doing the math on how much they are spending on developers."

With the projects being completed in less than half the time, a logical conclusion is that there will be less work to do for humans. But Dohmke says another way of looking at the software developer job is that they do many more high-value tasks than just rewrite code that already exists in the world. "The definition of 'higher value' work is to take away the boiler-plate menial work writing things already done over and over again," he said.

The goal of Copilot is to help developers "stay in the flow" when they are on the task of coding. That's because some of the time spent writing code is really spent looking for existing code to plug in from browsers, "snippets from someone else," Dohmke said. And that can lead coders to get distracted. "Eventually they are back in editor mode and copy and paste a solution, but have to remember what they were working on," he said. "It's like a surfer on a wave in the water and they need to find the next wave. Copilot is keeping them in the editing environment, in the creative environment and suggesting ideas," Dohmke said. "And if the idea doesn't work, you can reject it, or find the closest one and can always edit," he added.

The GitHub CEO expects more of those Copilot code suggestions to be taken in the next five years, up to 80%. Unlike a lot going on in the computer field, Dohmke said of that forecast, "It's not an exact science ... but we think it will tremendously grow."

After being in the market for a year, he said new models are getting better fast. As developers reject some code suggestions from Copilot, the AI learns. And as more developers adopt Copilot it gets smarter by interacting with developers similar to a new coworker, learning from what is accepted or rejected. New models of the AI don't come out every day, but every time a new model is available, "we might have a leap," he said.

But the AI is still far short of replacing humans. "Copilot today can't do 100% of the task," Dohmke said. "It's not sentient. It can't create itself without user input."

With Copilot still in private beta testing among individual developers (400,000 developers signed up to use the AI in the first months it was available, and hundreds of thousands more have since), GitHub has not announced any enterprise clients, but it expects to begin naming business customers before the end of the year. There is no enterprise pricing information being disclosed yet, but in the beta test Copilot pricing has been set at a flat rate per developer: $10 per individual per month or $100 annually, often expensed by developers on company cards. "And you can imagine what they earn per month so it's a marginal cost," Dohmke said. "If you look at the 40% and think of the productivity improvement, and take 40% of opex spend on developers, the $10 is not a relevant cost. ... I have 1,000 developers and it's way more money than 1000 x 10," he said.
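For a rough sense of that argument, here is a back-of-the-envelope sketch; the fully loaded developer cost is a hypothetical assumption added for illustration, while the $10 price and the 40% figure come from the article:

```python
# Back-of-the-envelope sketch of the argument Dohmke is making. The fully loaded
# developer cost below is a hypothetical assumption, not a figure from the article.
developers = 1_000
copilot_per_dev_month = 10           # USD, beta pricing cited in the article
assumed_cost_per_dev_month = 12_000  # USD, hypothetical fully loaded cost
share_written_by_ai = 0.40           # figure cited in the article

tool_cost = developers * copilot_per_dev_month
dev_spend = developers * assumed_cost_per_dev_month
value_proxy = dev_spend * share_written_by_ai  # crude proxy for "40% of opex"

print(f"Monthly tool cost: ${tool_cost:,}")         # $10,000
print(f"Monthly dev spend: ${dev_spend:,}")         # $12,000,000
print(f"40% of that spend: ${value_proxy:,.0f}")    # $4,800,000
```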

The GitHub CEO sees what is taking place now with AI as the next logical phase of the productivity advances in a coding world he has been a part of since the late 1980s. That was a time when coding was emerging out of the punch card phase, and there was no internet, and coders like Dohmke had to buy books and magazines, and join computer clubs to gain information. "I had to wait to meet someone to ask questions," he recalled.

That was the first phase of developer productivity, and then came the internet, and now open source, allowing developers to find other developers on the internet who had already "developed the wheel," he said.

Now, whether the coding task is related to payment processing or a social media login, most companies whether startups or established enterprises put in open source code. "There is a huge dependency tree of open source that already exists," Dohmke said.

It's not uncommon for up to 90% of code on mobile phone apps to be pulled from the internet and open source platforms like GitHub. In a coding era of "whatever else is already available," that's not what will differentiate a developer or app.

"AI is just the third wave of this," Dohmke said. "From punch cards to building everything ourselves to open source, to now withina lot of code, AI writing more," he said. "With 40%, soon enough if AI spreads across industries, the innovation on the phone will be created with the help of AI and the developer."

Today, and into the foreseeable future, Copilot remains a technology that is trained on code, and is making proposals based on looking things up in a library of code. It is not inventing any new algorithms, but at the current pace of progress, eventually, "it is entirely possible that with the help of a developer it will create new ideas of source code," Dohmke said.

But even that still requires a human touch. "Copilot is getting closer, but it will always need developers to create innovation," he said.

Nouriel Roubini: Why AI poses a threat to millions of workers – Yahoo Finance

Business sectors ranging from agriculture and manufacturing to automotive and financial services are increasingly turning to artificial intelligence as a means to automate large swaths of their organizations and, along the way, save enormous sums through improved efficiencies.

But, says "Megathreats" author and NYU Stern School of Business professor Nouriel Roubini, the rise of AI will also have a massively negative impact on workers throughout the economy.

AI has helped revolutionize everything from the smartphones in our pockets to our grocery stores, which use the technology to better predict which items customers want to see on shelves. However, Roubini, whose prediction of the 2008 financial crisis earned him the moniker Dr. Doom, says AI poses a threat to millions of workers.

"The downside is that while AI, machine learning, robotics, automation increases the economic pie, potentially, it also leads to losses of jobs and labor income," Roubini said during an interview at Yahoo Finance's All Markets Summit.

Take autonomous cars. While they could dramatically reduce the number of car accidents, significantly cutting down on the number of deaths and injuries caused on the nation's roadways, they'll also put millions out of work. "You have, what, 5 million Uber and Lyft drivers, 5 million truckers and teamsters, and they're going to be gone for good," Roubini said. "And which jobs are they going to get?"

CERNOBBIO, ITALY - SEPTEMBER 07: Nouriel Roubini professor of economics at New York University attends the Ambrosetti International Economic Forum 2019 "Lo scenario dell'Economia e della Finanza" on September 6, 2019 in Cernobbio, Italy. (Photo by Pier Marco Tacca/Getty Images)

Fully autonomous vehicles are still years away from hitting the roads. The majority of the technology that's currently available is meant to assist drivers rather than actually control vehicles themselves. But automakers have made it clear that they are intent on developing the technology to the point where there's no need for a driver at all.

But according to Roubini, it's not just drivers and truckers who might be at risk of losing their jobs. As AI becomes more powerful, it could be used to replace workers in creative fields including the arts.

"Increasingly, even cognitive jobs that can be divided into a number of tasks are also being automated," Roubini said. "Even creative jobs; there are now AIs that will create a script or a movie, or make a poem, or write...or paint, or even [write] a piece of music that soon enough is going to be top 10 in the Billboard Magazine chart."

While it might be some time before AI is winning any major awards or art prizes, if ever, it is being used to create digital art. Take the open-source DALL-E, which allows users to type in a series of words and get an image based on millions of photos pulled from the internet.

While artists are unlikely to disappear anytime soon, the fact that AI is racing into once unimaginable sectors of the economy could eventually mean Roubini's prognostications, like some of his others, will prove true.

Red Hat and IBM Research Advance IT Automation with AI-Powered Capabilities for Ansible – Business Wire

CHICAGO ANSIBLEFEST--(BUSINESS WIRE)--Red Hat, Inc., the world's leading provider of open source solutions, and IBM Research today announced Project Wisdom, the first community project to create an intelligent, natural language processing capability for Ansible and the IT automation industry. Using an artificial intelligence (AI) model, the project aims to boost the productivity of IT automation developers and make IT automation more achievable and understandable for diverse IT professionals with varied skills and backgrounds.

According to a 2021 IDC prediction1, by 2026, 85% of enterprises will combine human expertise with AI, ML, NLP, and pattern recognition to augment foresight across the organization, making workers 25% more productive and effective. Technologies such as machine learning, deep learning, natural language processing, pattern recognition, and knowledge graphs are producing increasingly accurate and context-aware insights, predictions, and recommendations.

Project Wisdom, underpinned by AI foundation models derived from IBM's AI for Code efforts, works by enabling a user to input a command as a straightforward English sentence. It then parses the sentence and builds the requested automation workflow, delivered as an Ansible Playbook, which can be used to automate any number of IT tasks. Unlike other AI-driven coding tools, Project Wisdom does not focus on application development; instead, the project centers on addressing the rise of complexity in enterprise IT as hybrid cloud adoption grows.

From human readable to human interactive

Becoming an automation expert demands significant effort and resources over time, with a learning curve to navigate varying domains. Project Wisdom intends to bridge the gap between Ansible YAML code and human language, so users can use plain English to generate syntactically correct and functional automation content.

It could enable a system administrator who typically delivers on-premises services to reach across domains to build, configure, and operate in other environments using natural language to generate playbook instructions. A developer who knows how to build an application, but not the skillset to provision it in a new cloud platform, could use Project Wisdom to expand proficiencies in these new areas to help transform the business. Novices across departments could generate content right away while still building foundational knowledge, without the dependencies of traditional teaching models.

Driving open source innovation with collaboration

While the power of AI in enterprise IT cannot be denied, community collaboration, along with insights from Red Hat and IBM, will be key in delivering an AI/ML model that aligns to the key tenets of open source technology. Red Hat has more than two decades of experience in collaborating on community projects and protecting open source licenses in defense of free software. Project Wisdom, and its underlying AI model, are an extension of this commitment to keeping all aspects of the code base open and transparent to the community.

As hybrid cloud operations at scale become a key focus for organizations, Red Hat is committed to building the next wave of innovation on open source technology. As IBM Research and Ansible specialists at Red Hat work to fine tune the AI model, the Ansible community will play a crucial role as subject matter experts and beta testers to push the boundaries of what can be achieved together. While community participation is still being worked through, those interested can stay up to date on progress here.

Supporting Quotes

Chris Wright, CTO and SVP of Global Engineering, Red Hat

"This project exemplifies how artificial intelligence has the power to fundamentally shift how businesses innovate, expanding capabilities that typically reside within operations teams to other corners of the business. With intelligent solutions, enterprises can decrease the barrier to entry, address burgeoning skills gaps, and break down organization-wide siloes to reimagine work in the enterprise world."

Ruchir Puri, chief scientist, IBM Research; IBM Fellow; vice president, IBM Technical Community

"Project Wisdom is proof of the significant opportunities that can be achieved across technology and the enterprise when we combine the latest in artificial intelligence and software. It's truly an exciting time as we continue advancing how today's AI and hybrid cloud technologies are building the computers and systems of tomorrow."

1IDC FutureScape: Worldwide Artificial Intelligence and Automation 2022 Predictions, Doc # US48298421, Oct 2021

About Red Hat, Inc.

Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

About IBM

IBM is a leading global hybrid cloud and AI, and business services provider, helping clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. Nearly 4,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM's hybrid cloud platform and Red Hat OpenShift to affect their digital transformations quickly, efficiently and securely. IBM's breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and business services deliver open and flexible options to our clients. All of this is backed by IBM's legendary commitment to trust, transparency, responsibility, inclusivity and service. For more information, visit https://research.ibm.com.

Forward-Looking Statements

Except for the historical information and discussions contained herein, statements contained in this press release may constitute forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are based on the company's current assumptions regarding future business and financial performance. These statements involve a number of risks, uncertainties and other factors that could cause actual results to differ materially. Any forward-looking statement in this press release speaks only as of the date on which it is made. Except as required by law, the company assumes no obligation to update or revise any forward-looking statements.

Red Hat, the Red Hat logo and Ansible are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.

Google TV is getting parent-controlled watchlists and AI-powered suggestions for kids – TechCrunch

Google is bringing a set of new kids-focused features such as parent-controlled watchlists and AI-powered suggestions to Google TV, the latest in a series of efforts from the Android-maker as it attempts to broaden the offerings of its TV operating system for family consumption.

The company said it is adding these features to the kids profiles to improve content recommendation and exploration. Parents can directly push titles to the "must watch" lists for kids from their own profiles (by just tapping the watchlist button on titles they come across and pressing add), the company explained in a blog post.

The company is also introducing AI-powered recommendations for kids because Google loves AI. Children can now look at popular shows and movies on their Google TV home screen based on installed apps and parent-defined ratings levels. If they don't like a title that has been recommended to them and don't wish to see it again, they can press and hold the select button and then tap hide to remove the suggestion from the list.

The new additions are part of Google's ongoing efforts to make its services more appropriate for kids. Google introduced supervised accounts for YouTube last year that help children migrate from YouTube Kids to the main YouTube app in a safe manner.

Parents can additionally create restrictions on content exploration. The feature allows guardians to define three levels of access: "Explore" for content suitable for viewers 9 and above; "Explore more" for viewers 13 and above; and "Most of YouTube" to enable access to all videos sans the age-restricted content.

The search giant said it is also bringing this supervised experience to Google TV so kids can access the main YouTube app with appropriate content restrictions. Notably, when parents set up these supervised accounts, they provide consent for the collection and use of data from kids profiles for compliance with COPPA, a U.S. privacy law that defines limits for websites providing services to children.

The company first introduced kids profiles on Google TV last year, which allow parents to set limits on app access and screen time.

Google said these features are rolling out starting today on the Chromecast with Google TV (both 4K and HD variants) and other Google TV devices from manufacturers like Hisense and Philips, Sony and TCL.

Joe Rogan and Steve Jobs Have a 20-Minute Chat in AI-Powered Podcast – HYPEBEAST

Artificial intelligence has allowed us to simulate all kinds of situations through computer systems. Some of its main applications are language processing and speech recognition, and now, through play.ht and podcast.ai, we're actually able to see how far the technology has come by experiencing a conversation with someone who is not even on Earth anymore.

In an entirely AI-generated podcast, podcast.ai has created a full interview between Joe Rogan and Steve Jobs. While the first bit of the podcast is clunky, with weird pauses and awkward laughing, it does start to move into real conversation touching on faith, tech companies and drugs, and at one point the AI-generated Jobs compares Adobe's services to a car where you have to buy all four wheels separately.

The crazy thing is some parts begin to sound believable, and actually keep you listening as you start to make a connection to what they are saying. This could be reinforced by the prevalence of Joe Rogan in the current podcast sphere, and the general curiosity of witnessing what Steve Jobs would have said if the two ever did meet. Have a listen below to experience the AI podcast for yourself.

In other tech news, an unopened first-generation Apple iPhone from 2007 auctions for $39,000 USD.

DigestAI's 19-year-old founder wants to make education addictive – TechCrunch

When Quddus Pativada was 14, he wished that he had an app that could summarize his textbooks for him. Just five years later, Pativada has been there and done that: earlier this year, he launched the AI-based app Kado, which turns photos, documents or PDFs into flashcards. Now, as the 19-year-old founder takes the stage for Startup Battlefield, he's looking to take his company, DigestAI, beyond flashcards to create an AI dialogue assistant that we can all carry around on our phones.

"If we make learning truly easy and accessible, it's something you could do as soon as you open your phone," Pativada told TechCrunch. "We want to put a teacher in every single person's phone for every topic in the world."

Quddus Pativada, founder at DigestAI pitches as part of TechCrunch Startup Battlefield at TechCrunch Disrupt in San Francisco on October 18, 2022. Image Credits: Haje Kamps / TechCrunch

The company's AI is trained on data from the internet, but the algorithm is fine-tuned to recall specific use cases to make sure that its responses are accurate and not too thrown off by online chaos.

"We train it on everything, but the actual use cases are called within silos. We're calling it federated learning, where it's sort of siloed in and language models are operating on a use case basis," Pativada said. "This is good because it avoids malicious use."

Pativada said that this kind of product would be different from smart assistants like Apple's Siri or Amazon's Alexa because the information it provides would be more personalized and detailed. So, for certain use cases, like asking for sources to use in an essay, the AI will pull from academic journals to make sure that the information is accurate and appropriate for a classroom.

Despite running an educational AI startup, Pativada isn't currently in school. He took a gap year before going to college to work on his startup, but as DigestAI took off, he decided to keep building instead of going back to school. Growing up, he taught himself to code because he loved video games and wanted to make his own; by age 10, he published a Flappy Bird clone on the App Store. Naturally, his technological ambitions matured a bit over time. Before founding DigestAI, Pativada built a COVID-19 contact tracing platform. At first, he just made the app as a tool for his classmates, but his work ended up being honored by the United Arab Emirates government.

So far, the outlook is good for the Dubai-based company. Pativada, who says he feels skittish about the CEO label and prefers to think of himself as just a founder, has raised $600,000 so far from angel investors like Mark Cuban and Shaan Patel, who struck a deal on "Shark Tank" for his SAT prep company, Prep Expert.

How does a 19-year-old in Dubai capture the attention of one of the most well-known startup investors? A cold email. Mark, we apologize if this admission makes your inbox even more nightmarish.

"I was watching a GQ video of Mark Cuban's daily routine," Pativada said. "He said he reads his emails every morning at 9 AM, and I looked at the time in Dallas, and it was about 9 AM. So I was like, maybe I should just shoot him an email and see what happens." While he was at it, he reached out to Patel, whose educational startup has done over $20 million in sales. Patel hopped on a video call with the teenage founder, and by the next week, he and Cuban both offered to invest in DigestAI.

"We raised our entire round through cold emails and Zoom," Pativada told TechCrunch. "It sort of helped because no one can see how young I look in person."

Before he decided to eschew college altogether, Pativada applied to Stanford and interviewed with an alumnus, as is standard in the admissions process. He didn't end up getting into the competitive Palo Alto university, but his interviewer, who works at Stanford, did end up investing in his company. Go figure.

"Our goal is to work with universities like Stanford," Pativada said. The company is also targeting enterprise clients. Currently, DigestAI works with some U.S.-based universities, Bocconi University in Italy, a European law firm and other clients. At the law firm, DigestAI is testing a tool that allows associates to text a WhatsApp number to quickly brush up on legal terms.

In the long term, DigestAI wants to create an SMS system where people can text the AI asking for help learning something; he wants information to be so accessible that it's addictive.

"That is what AI is: it's almost the best version of a human being," Pativada said.

View original post here:

DigestAIs 19-year-old founder wants to make education addictive - TechCrunch

AI-powered government finances: making the most of data and machines – Global Government Forum

Photo by Karolina Grabowska via Pexels

Governments are paying growing attention to the potential of artificial intelligence, the simulation of human intelligence processes by machines, to enhance what they do.

To explore how public authorities are approaching the use of AI for tasks related to public finances, Global Government Fintech, the sister title of Global Government Forum, convened an international panel on 4 October 2022 for a webinar titled "How can AI help public authorities save money and deliver better outcomes?".

The discussion, organised in partnership with SAS and Intel, highlighted how AI is already helping departments to deliver results, but also that AI remains very much an emerging and, to many, rather nebulous field with many hurdles to clear before widespread use. "Discussions of artificial intelligence often bring up connotations of an Orwellian nature, dystopian futures, Frankenstein," said Peter Kerstens, advisor, technological innovation & cyber security, at the European Commission's Financial Services Department. "That is really a challenge for positive adoption and fair use of artificial intelligence because people are apprehensive about it."

"Like most technology-based areas, it is a field that is also moving very quickly. If the last class you took in data science was three years ago, it's already dated," cautioned Steve Keller, acting director of data strategy at the US Treasury's Bureau of the Fiscal Service, in his own opening remarks.

Kerstens began by describing the very name "artificial intelligence" as a big problem, asserting that AI is neither artificial nor is it particularly intelligent, at least not in a way that humans are intelligent.

"A better way to think about artificial intelligence and machine learning is self-learning high-capacity data processing and data analytics, and the application of mathematical and statistical methodologies to data," he explained. "That is, of course, not a very appealing name, but that is what it is. But the self-learning or self-empowering element is very important in AI because you have to look at it in comparison to traditional data processing."

Continuing this theme of caution, he further explained: "Like old technology, AI enhances human and organisational capability for the better, but potentially also for the worse. So, it really depends on what use you make of that tool. You can make very positive use of it. But you can also make very negative uses of it. And that's why governance of your artificial intelligence and machine learning, and potentially rules and ethics, are important."

For financial regulators, AI is proving useful to help process the vast amounts of data and reports that companies must submit. "It goes beyond human capability, or you have to put lots and lots of people onto it to process just the incoming information," he said.

Read more: Biden sets out AI Bill of Rights to protect citizens from threats from automated systems

Kerstens then mentioned AI's potential for law enforcement. Monitoring the vast volumes of money moving through the financial system for fraud, sanctions and money laundering requires very powerful systems. "But this is also risky because it comes very close to mass surveillance," he said. "So, if you apply artificial intelligence or machine learning engines onto all of these flows, you really get into this dystopian future of Big Brother."

Kerstens also touched on AI's use in understanding macroeconomic developments. "Typically, macroeconomic policy assessment is very politically driven, and this blurs the objectivity of the assessment. AI assessment is much more independent, because it just looks at the data without any preconceived notions and draws conclusions, including conclusions that may not necessarily be very desirable," he said.

The US Treasury's Keller described the ultimate aim of AI as being to improve decision accuracy, forecasting and speed, "trying to use data to make scientific decisions". This includes, he continued, "testing and verifying our assumptions with data to help make sure that we don't break things, but also help us ask important questions".

He provided four AI use areas for the Bureau of the Fiscal Service: Treasury warrants (authorisations that a payment be made); fraud detection; monitoring; and entity resolution.

In the first area, he said the focus was turning bills into "literally a dataset": the bureau has experimented with using natural language processing to turn written legislation into coherent, machine-readable data that has account codes and budgeted dollars for those account codes; in the second area, he said the focus was checking people are who they say they are (and how we detect that at scale); in the third area, uses include monitoring whether people are using services correctly.

"We're collecting data from so many elements, and often in large public-sector areas, the left hand doesn't talk to the right hand," he said, in the context of entity resolution. "We often need to find a way to connect these two up in such a way that we are looking at the same entity so that we can share data in the long run. So, data can be brought together and utilised by data scientists or eventually to create AI that would help these other three things to happen."

Read more: Artificial intelligence in the public sector: an engine for innovation in government if we get it right

Keller also raised ethical, upskilling and cultural considerations. "If people start buying IT products that are going to have AI organically within them, or they're building them, [questions should arise such as]: are we doing it ethically? Do we have analytics standards? How are we testing? Are we actually getting value from the product? Or is it a total risk?"

He concluded his opening remarks by outlining how the bureau was building an internal data ecosystem, including a data governance council, data analytics lab, high-value use case compendium and data university.

The Centre for Data Ethics & Innovation (CDEI), which is part of the UK Department for Digital, Culture, Media and Sport, was established three years ago to drive responsible innovation across the public sector.

"A huge focus is around supporting teams to think about governance approaches," the centre's deputy director, Sam Cannicott, explained. "How do they develop and deploy technology in a responsible way? How do they have mechanisms for identifying and then addressing some of the ethical questions that these technologies raise?"

The CDEI has worked with a varied cross-section of the public sector, including the Ministry of Defence (to explore responsible AI use in defence); police forces; and the Department for Education and local authorities (to explore the use of data analytics in children's social care). "These are all really sensitive, often controversial areas, but also where data can help inform decision-making," he said.

Read more: Canada to create top official to police artificial intelligence under new data law

The CDEI does not prescribe what should be done. Instead it helps different teams to think through these questions themselves.

"Ultimately, the questions are complex," Cannicott said. "While lots of teams might seek an easy answer, [to] be told what you're doing is fine, it's often more complicated, particularly when we look at how you develop a system, then deploy it, and continue to monitor and evaluate. So, we support teams to think about the whole lifecycle process."

The CDEI's current work programme is focused on three areas: building an effective AI assurance ecosystem (including exploring standards and impact assessments, as well as risk assessments that might be undertaken before a technology is deployed); responsible data access, including a focus on privacy-enhancing technologies; and transparency (the CDEI has been working with the Central Digital and Data Office to develop the UK's first public sector algorithmic transparency standard).

This is underpinned by a public attitudes function to ensure citizens' views inform the CDEI's work, which is important when it comes to the critical challenge of trust.

Dr Joseph Castle, adviser on strategic relationships and open source technologies at SAS, described how public authorities around the globe are using AI across a diverse set of fields, ranging from areas such as infrastructure and transport through to healthcare.

In government finance, he said, authorities are using analytics and AI to assess policy, risk, fraud and improper payments.

Castle, who previously worked for more than 20 years in various US federal government roles, provided two examples of SAS's work in the public sector: with Italy's Ministry of Economics and Finance (MEF), and with Belgium's Federal Public Service Finance.

In the Italian example, he said MEF used analytics to calculate risk on financial guarantees, providing up-to-date reporting for improved systematic liquidity and risk management during COVID-19; work with the Belgian ministry, meanwhile, has been on using analytics and AI to predict the impact of new tax rules.

"The most recent focus for public entities has been on AI research and governance, leading to a better understanding of AI technology itself and responsible innovation," he said. "Public sector AI maturation allows for improved service, reduced costs and trusted outcomes."

Australia's National Artificial Intelligence Centre launched in December 2021. It aims to accelerate positive AI adoption and innovation to benefit businesses and communities.

Stela Solar, who is the centre's director, described AI's ability to scale as "incredibly powerful". But, she said, it is incredibly important that organisations exploring and using AI tools do so responsibly and inclusively.

In opening remarks reflecting the centre's focus, she proposed three factors that would be important to help maximise AI's impact beyond government.

The first, she said, is that more should be done to connect businesses with research- and innovation-based organisations. A national listening tour organised by the centre had found, she said, low awareness of AI's capabilities. "Unless we empower every business to be connected to those opportunities, we won't really succeed," she warned.

Her second point focused on small- and medium-sized businesses. "Much of the guidance that exists is really targeted at large enterprises to experience, create and adopt AI," she said. "But small and medium business is really struggling in this area, which is ironic as AI really presents as a great equaliser opportunity because it can deal with scale and take action at scale. It can really uplift the impact that small and medium businesses can have."

Her third point focused on community understanding, which she described as a critical factor in accelerating the uptake of AI technologies. This includes achieving engagement from diverse perspectives in how AI is "shaped, created [and] implemented".

Topics including trust in AI systems, the risk of bias and overcoming scepticism were addressed further during the webinar's Q&A.

In terms of trust, what goes into any AI tool affects what comes out. "How reliable they are [AI systems] depends on how good and how unbiased the dataset was," Kerstens said. "Does it have known biases or something that is a proxy for biases? For example, sometimes people use addresses. People's addresses, especially in countries where you have very diverse populations, and where different population groups and different racial or religious groups live in particular areas, can be a proxy for religious affiliation, or for race. If you're not careful, your artificial intelligence engine is going to build in these biases, and therefore it's going to be biased."
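
The proxy effect Kerstens describes can be checked before a model is ever trained. The snippet below is only an illustrative sketch (not something from the webinar) of one simple test in Python: if the make-up of a protected group is nearly uniform within each area code, then the area code is a likely proxy and deserves scrutiny. The column names, toy data and 0.9 threshold are all assumptions chosen for illustration.

# Illustrative proxy check: how strongly does an area code predict a protected group?
import pandas as pd

df = pd.DataFrame({
    "area_code": ["A", "A", "A", "B", "B", "B", "B", "C"],
    "group":     ["x", "x", "x", "y", "y", "y", "x", "y"],
})

# Share of each group within each area; near-pure rows flag a likely proxy feature.
composition = pd.crosstab(df["area_code"], df["group"], normalize="index")
print(composition)

suspect = composition.max(axis=1) > 0.9  # hypothetical purity threshold
print("Near-pure area codes (area_code is likely a proxy):", list(suspect[suspect].index))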

"It's not just about bias within AI, it's bias in the data," said Castle, emphasising the importance of responsible innovation across the analytics lifecycle.

Read more: Brazils national AI strategy is unachievable, government study finds

Solar provided a further dimension, adding that organisations can often find themselves working with substantial gaps in data (which she referred to as "data deserts"). "It's actually been impressive to see some of the grassroots efforts across communities to gather datasets to increase representation and diversity in data," she said, giving examples from Queensland and New South Wales where, respectively, communities had provided data to help shape and steer investments and fill gaps in elderly health data.

On this theme she said that co-design of AI systems with the communities the technology serves or affects "will go a long way to address some of the biases and also will go a long way into the question of what should be done and what shouldn't be done".

Scepticism about the use of AI from policymakers, particularly those who are not technologists, was discussed as a common challenge.

"Sometimes there's a push to use these technologies because they can be seen as a way to save money," observed Cannicott. "There is also nervousness because some have seen where things have gone wrong, and they don't want to be to blame."

He emphasised the importance of experimentation, governance (having "really clear accountability and decision-making frameworks to walk through the ethical challenges that might come up and how you might address them") and public engagement.

"Some polling we did fairly recently suggested that around half of people don't think the data that government collects from them is used for their benefit," he said. "There's quite a bit of a trust gap there, [so] decision makers [have] to start demonstrating that they are able to use data in a way that benefits people's lives."

Keller emphasised the importance of incorporating recourse into AI systems. "If I build a system that detects fraud, and flag somebody as a villain and they're not, we need to give them an easy route to appeal that process," he said.

AI is often a purely technical conversation. But, when it comes to government use of AI, policy and politics inevitably get entwined.

"To develop artificial intelligence, you need vast amounts of data. Europeans tend to look at personal data protection in a different way than people in the US do," pointed out Kerstens.

Organisational leaders driven by doctrines could struggle to accept a role for AI. "If you run an organisation or a governmental entity based on politics, artificial intelligence isn't something you're going to like very much because it is the data speaking to you," he continued. "They do like artificial intelligence and data when the data confirms a doctrinal or political view. But if the data does not support [their] view, they'll dismiss it."

Public sector agencies also need to be savvy about the AI solutions they are buying. "Increasingly, public-sector organisations are being sold off-the-shelf tools. And actually, that's quite a dangerous space to be in," said Cannicott. "Because, for example, if you [look at] children's social care, different geographies, different populations, there's all sorts of different factors in that data. If you're not clear on where the data is coming from to build those tools initially, then you probably shouldn't be using that technology. That's also where testing and experimentation is very important."

There is clearly momentum building behind AI. But an over-riding theme from the webinar was the extent to which many remain in the dark or deeply sceptical.

"Often I've seen AI be implemented by someone who's very passionate, and it stays as this hobby experiment and project," said Solar, emphasising the importance of developing a base-level understanding of AI across all levels of an organisation. "For it really to get the momentum across the organisation and to be rolled out into full production, with all the benefits that it can bring, you really need to bring along the policy decision-makers, the leaders, the entire organisational chain," she said.

Kerstens concluded by emphasising that the story of AI's growing deployment across the public sector (and beyond) remains in its early chapters. "AI is very powerful. It's just very early days," he said. "But what people are most afraid of is that they don't understand how the artificial intelligence engine thinks. We should focus on productive, useful applications and not the nefarious ones."

AI's advocates will be hoping that fewer people, over time, come to compare it to the tale of Frankenstein.

The Global Government Fintech webinar "How can AI help public authorities save money and deliver better outcomes?" was held on 4 October 2022, with the support of knowledge partners SAS and Intel. You can watch the 75-minute webinar via our dedicated event page.

Read more: AI intelligence: equipping public and civil service leaders with the skills to embrace emerging technologies

Go here to see the original:

AI-powered government finances: making the most of data and machines - Global Government Forum

Artificially Intelligent? The Pros and Cons of Using AI Content on Your Law Firm's Website – JD Supra

Artificial intelligence (AI) is powerful, and the use of it for content generation is on the rise. In fact, some experts estimate that as much as 90 percent of online content may be generated by AI algorithms by 2026.

Many of the popular AI content generators produce well-written, informative content. But is it the right choice for your firm? Before you decide, let's consider the pros and cons of using this unique sort of copy with your digital marketing.

This article explains how AI content generators work, the pros and cons of AI-generated content, and a few tips for utilizing AI content in your digital marketing workflow.

Consumer-facing artificial intelligence tools are pretty straightforward, as far as the consumer is concerned. You provide some inputs, and the machine provides some outputs.

Here's how it works with content writing. You generally provide the AI generator with a topic and keywords. You can usually select the format you'd like the output to take, such as a blog post or teaser copy. Then, it's as simple as clicking "GO."

The content generator will scrape the web and draft copy for your needs. Some tools can take existing content and rewrite it, which can make content marketing a lot easier.
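
Most of these tools wrap the workflow above in a thin layer of prompt construction before calling a large language model. The snippet below is a rough, vendor-neutral sketch of that idea in Python; the build_prompt and call_model names are hypothetical stand-ins, not any particular product's API.

# Hypothetical sketch: turn a topic, keywords and format into a prompt, then hand
# it to whatever text-generation backend the vendor uses (stubbed out here).
def build_prompt(topic, keywords, fmt="blog post"):
    return (
        f"Write a {fmt} about {topic}. "
        f"Work in these keywords naturally: {', '.join(keywords)}."
    )

def generate_copy(topic, keywords, fmt="blog post", call_model=lambda p: f"[draft for: {p}]"):
    prompt = build_prompt(topic, keywords, fmt)
    return call_model(prompt)  # a real tool would call its language model here

print(generate_copy("estate planning basics", ["will", "trust", "probate"]))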

Not all AI content generators cost money, but you'll need to pay something to access the better tools, or to produce a lot of content.

If you're excited about the possibilities, great! There are some significant benefits to AI content generators.

Here are a few pros of AI content tools:

To sum up, AI content tools can quickly produce natural-sounding copy at a fraction of the cost of paying a real copywriter.

There are several important drawbacks to consider with AI-generated content. Speed and cost aren't everything when it comes to content generation.

Here are several cons that come with using AI content tools:

AI tools can be hit-or-miss when it comes to empathy and accuracy. Law firms should be very careful when publishing this type of content. There are also serious SEO concerns with using AI content.

Overall, it's clear that AI-generated content can provide value. The question is how to best incorporate AI content into your digital marketing efforts.

Here are a few best practices if you choose to use AI-generated content.

All AI-generated content should be reviewed by a real human being prior to publication. We recommend hiring a legal professional to review and edit AI copy. A copywriter can help smooth the rough edges, too. Because the content is already written, the hourly rate you'll pay these professionals should be minimal.

Don't use AI-generated content on your website. This type of tool should be a last resort. If you do use machine-generated copy on your website, make sure to block it from being crawled to avoid search engine penalties. Your website developer can advise on the best way to do this.
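
There are two common mechanisms a developer might choose between here: a robots.txt Disallow rule stops compliant crawlers from fetching the pages at all, while a noindex directive lets them be fetched but keeps them out of search results. As one possible approach, the sketch below shows the noindex route, serving machine-generated drafts with an X-Robots-Tag response header; the Flask app and the URL path are illustrative assumptions, not a recommendation of a specific stack.

# Sketch: serve AI-drafted pages with a noindex header so search engines that
# honor X-Robots-Tag leave them out of their indexes.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/ai-drafts/<slug>")
def ai_draft(slug):
    resp = make_response(f"<h1>Draft: {slug}</h1>")
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"  # directive for crawlers
    return resp

if __name__ == "__main__":
    app.run(debug=True)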

Do not hire an agency that brags about AI content as a core strategy. SEO and web development companies should be very aware of the risks that come with using AI content. If they suggest AI-generated content, ask them how they plan to protect your firm against search engine penalties, and don't work with them if they don't have a good answer.

Our current position is that AI-generated content can be helpful for short blurbs, such as newsletters to clients. All AI content should only be deployed with human oversight.

We recommend against using AI-generated content for website copy. If it must be used, it's important to work with a developer or agency that understands how to communicate with search engines so you aren't penalized for using AI tools.

Read the original post:

Artificially Intelligent? The Pros and Cons of Using AI Content on Your Law Firms Website - JD Supra

Misinformation research relies on AI and lots of scrolling – NPR

Atilgan Ozdil/Anadolu Agency/Getty Images

What sorts of lies and falsehoods are circulating on the internet? Taylor Agajanian used her summer job to help answer this question, one post at a time. It often gets squishy.

She reviewed a social media post where someone had shared a news story about vaccines with the comment "Hmmm, that's interesting." Was the person actually saying that the news story was interesting, or insinuating that the story isn't true?

Agajanian read around and between the lines often while working at University of Washington's Center for an Informed Public, where she reviewed social media posts and recorded misleading claims about COVID-19 vaccines.

As the midterm election approaches, researchers and private sector firms are racing to track false claims about everything from ballot harvesting to voting machine conspiracies. But the field is still in its infancy even as the threats to the democratic process posed by viral lies loom. Getting a sense of which falsehoods people online talk about might sound like a straightforward exercise, but it isn't.

"The broader question is, can anyone ever know what everybody is saying?" says Welton Chang, CEO of Pyrra, a startup that tracks smaller social media platforms. (NPR has used Pyrra's data in several stories.)

Automating some of the steps the University of Washington team uses humans for, Pyrra uses artificial intelligence to extract names, places and topics from social media posts. Using the same technologies that have in recent years enabled AI to write remarkably like humans, the platform generates summaries of trending topics. An analyst reviews the summaries, weeds out irrelevant items like advertising campaigns, gives them a light edit and shares them with clients.
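
Pyrra's internals aren't public, so the following is only a hedged illustration of the first step described here, using the off-the-shelf spaCy library for named entity recognition; the example posts and the choice of entity labels are assumptions. (It assumes spaCy and its small English model are installed: pip install spacy, then python -m spacy download en_core_web_sm.)

# Illustration of the extraction step: pull people, places and organisations
# out of posts, then count them as candidates for a trending-topic summary.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

posts = [
    "Energy infrastructure in Texas is under attack, says a viral post.",
    "A rally in Phoenix repeated the same claim about the power grid.",
]

entity_counts = Counter()
for doc in nlp.pipe(posts):
    for ent in doc.ents:
        if ent.label_ in {"PERSON", "GPE", "ORG", "FAC"}:  # people, places, orgs, facilities
            entity_counts[(ent.text, ent.label_)] += 1

print(entity_counts.most_common(5))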

A recent digest of such summaries includes the unsubstantiated claim "Energy infrastructure under globalist attack."

The University of Washington's and Pyrra's approaches sit at the more extreme ends in terms of automation: few teams have so many staff (around 15) just to monitor social media, or rely so heavily on algorithms to synthesize material and output.

All methods carry caveats. Manually monitoring and coding content could miss out on developments; and while capable of processing huge amounts of data, artificial intelligence struggles to handle the nuances of distinguishing satire from sarcasm.

Although incomplete, having a sense of what's circulating in the online discourse allows society to respond. Research into voting-related misinformation in 2020 has helped inform election officials and voting rights groups about what messages to emphasize this year.

For responses to be proportionate, society also needs to evaluate the impact of false narratives. Journalists have covered misinformation spreaders who seem to have very high total engagement numbers but limited impact, which risks "spreading further hysteria over the state of online operations," wrote Ben Nimmo, who now investigates global threats at Meta, Facebook's parent company.

While language can be ambiguous, it's more straightforward to track who's been following and retweeting whom. Other researchers analyze networks of actors as well as narratives.

The plethora of approaches is typical of a field that's just forming, says Jevin West, who studies the origins of academic disciplines at University of Washington's Information School. Researchers come from different fields and bring methods they're comfortable with to start, he says.

West corralled research papers from the academic database Semantic Scholar mentioning 'misinformation' or 'disinformation' in their title or abstract, and found that many papers are from medicine, computer science and psychology, and there are also papers from geology, mathematics and art.

"If we're a qualitative researcher, we'll go...and literally code everything that we see." West says. More quantitative researchers do large scale analysis like mapping topics on Twitter.

Projects often use a mix of methods. "If [different methods] start converging on similar kinds of...conclusions, then I think we'll feel a little bit better about it," West says.

One of the very first steps of misinformation research - before someone like Agajanian starts tagging posts - is identifying relevant content under a topic. Many researchers start their search with expressions they think people talking about the topic could use, see what other phrases and hashtags appear in the search results, add that to the query, and repeat the process.

It's possible to miss out on keywords and hashtags, not to mention that they change over time.

"You have to use some sort of keyword analysis. " West says, "Of course, that's very rudimentary, but you have to start somewhere."

Some teams build algorithmic tools to help. A team at Michigan State University manually sorted over 10,000 tweets into pro-vaccine, anti-vaccine, neutral and irrelevant categories as training data. The team then used the training data to build a tool that sorted over 120 million tweets into these buckets.

For the automatic sorting to remain relatively accurate as the social conversation evolves, humans have to keep annotating new tweets and feeding them into the training set, Pang-Ning Tan, a co-author of the project, told NPR in an email.
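
As a hedged sketch of this kind of pipeline (not the Michigan State team's actual code), a small hand-labeled set can train a standard text classifier with scikit-learn, which then scores the much larger stream. The example tweets and labels below are invented, and a real training set would hold thousands of examples, refreshed over time as Tan describes.

# Train on a small labeled sample, then classify the unlabeled firehose in bulk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_tweets = [
    ("Vaccines saved my family, get your shot", "pro-vaccine"),
    ("The vaccine is a government experiment, avoid it", "anti-vaccine"),
    ("CDC updates its booster guidance today", "neutral"),
    ("Look at this cute dog", "irrelevant"),
]
texts, labels = zip(*labeled_tweets)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(texts, labels)  # in practice ~10,000 labeled tweets, not four

print(model.predict(["boosters are just another experiment on us"]))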

If the interplay between machine detection and human review rings familiar, that might be because you've heard of large social platforms like Facebook, Twitter and TikTok describing similar processes to moderate content.

Unlike the platforms, another fundamental challenge researchers have to face is data access. Much misinformation research uses Twitter data, in part because Twitter is one of the few social media platforms that easily lets users tap into its data pipeline - known as Application Programming Interface or API. This allows researchers to easily download and analyze large numbers of tweets and user profiles.

The data pipelines of smaller platforms tend to be less well-documented and could change on short notice.

Take the recently deplatformed Kiwi Farms as an example. The site served as a forum for anti-LGBTQ activists to harass gay and trans people. "When it first went down, we had to wait for it to basically pop back up somewhere, and then for people to talk about where that somewhere is," says Chang.

"And then we can identify, okay, the site is now here - it has this similar structure, the API is the same, it's just been replicated somewhere else. And so we're redirecting the data ingestion and pulling content from there."

Facebook's data service CrowdTangle, while purporting to serve up all publicly available posts, has been found to not have consistently done so. On another occasion, Facebook bungled data sharing with researchers. Most recently, Meta is winding down CrowdTangle, with no alternative announced to take its place.

Other large platforms, like YouTube and TikTok, do not have an accessible API, a data service or collaboration with researchers at all. TikTok has promised more transparency for researchers.

In such a vast, fragmented, and shifting landscape, West says there's no great way at this point to say what's the state of misinformation on a given topic.

"If you were to ask Mark Zuckerberg, what are people saying on Facebook today? I don't think he could tell you." says Chang.

Continued here:

Misinformation research relies on AI and lots of scrolling - NPR

The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act – Fasken

Laws governing technology have historically focused on the regulation of information privacy and digital communications. However, governments and regulators around the globe have increasingly turned their attention to artificial intelligence (AI) systems. As the use of AI becomes more widespread and changes how business is done across industries, there are signs that existing declarations of principles and ethical frameworks for AI may soon be followed by binding legal frameworks. [1]

On June 16, 2022, the Canadian government tabled Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 proposes to enact, among other things, the Artificial Intelligence and Data Act (AIDA). Although there have been previous efforts to regulate automated decision-making as part of federal privacy reform efforts, AIDA is Canada's first effort to regulate AI systems outside of privacy legislation. [2]

If passed, AIDA would regulate the design, development, and use of AI systems in the private sector in connection with interprovincial and international trade, with a focus on mitigating the risks of harm and bias in the use of high-impact AI systems. AIDA sets out positive requirements for AI systems as well as monetary penalties and new criminal offences on certain unlawful or fraudulent conduct in respect of AI systems.

Prior to AIDA, in April 2021, the European Commission presented a draft legal framework for regulating AI, the Artificial Intelligence Act (EU AI Act), which was one of the first attempts to comprehensively regulate AI. The EU AI Act sets out harmonized rules for the development, marketing, and use of AI and imposes risk-based requirements for AI systems and their operators, as well as prohibitions on certain harmful AI practices.

Broadly speaking, AIDA and the EU AI Act are both focused on mitigating the risks of bias and harms caused by AI in a manner that tries to be balanced with the need to allow technological innovation. In an effort to be future-proof and keep pace with advances in AI, both AIDA and the EU AI Act define artificial intelligence in a technology-neutral manner. However, AIDA relies on a more principles-based approach, while the EU AI Act is more prescriptive in classifying high-risk AI systems and harmful AI practices and controlling their development and deployment. Further, much of the substance and detail of AIDA is left to be elaborated in future regulations, including the key definition of the "high-impact" AI systems to which most of AIDA's obligations attach.

The table below sets out some of the key similarities and differences between the current drafts of AIDA and the EU AI Act.

High-risk system means:

The EU AI Act does not apply to:

AIDA does not stipulate an outright ban on AI systems presenting an unacceptable level of risk.

It does, however, make it an offence to:

The EU AI Act prohibits certain AI practices and certain types of AI systems, including:

Persons who process anonymized data for use in AI systems must establish measures (in accordance with future regulations) with respect to:

High-risk systems that use data sets for training, validation and testing must be subject to appropriate data governance and management practices that address:

Data sets must:

Transparency. Persons responsible for high-impact systems must publish on a public website a plain-language description of the AI system which explains:

Transparency. AI systems which interact with individuals and pose transparency risks, such as those that incorporate emotion recognition systems or risks of impersonation or deception, are subject to additional transparency obligations.

Regardless of whether or not the system qualifies as high-risk, individuals must be notified that they are:

Persons responsible for AI systems must keep records (in accordance with future regulations) describing:

High-risk AI systems must:

Providers of high-risk AI systems must:

The Minister of Industry may designate an official to be the Artificial Intelligence and Data Commissioner, whose role is to assist in the administration and enforcement of AIDA. The Minister may delegate any of their powers or duties under AIDA to the Commissioner.

The Minister of Industry has the following powers:

The European Artificial Intelligence Board will assist the European Commission in providing guidance and overseeing the application of the EU AI Act. Each Member State will designate or establish a national supervisory authority.

The Commission has the authority to:

Persons who commit a violation of AIDA or its regulations may be subject to administrative monetary penalties, the details of which will be established by future regulations. Administrative monetary penalties are intended to promote compliance with AIDA.

Contraventions of AIDA's governance and transparency requirements can result in fines:

Persons who commit more serious criminal offences (e.g., contravening the prohibitions noted above or obstructing or providing false or misleading information during an audit or investigation) may be liable to:

While both acts define AI systems relatively broadly, the definition provided in AIDA is narrower. AIDA only encapsulates technologies that process data autonomously or partly autonomously, whereas the EU AI Act does not stipulate any degree of autonomy. This distinction in AIDA is arguably a welcome divergence from the EU AI Act, which as currently drafted would appear to include even relatively innocuous technology, such as the use of a statistical formula to produce an output. That said, there are indications that the EU AI Act's current definition may be modified before its final version is published, and that it will likely be accompanied by regulatory guidance for further clarity. [4]

Both acts are focused on avoiding harm, a concept they define similarly. The EU AI Act is, however, slightly broader in scope as it considers serious disruptions to critical infrastructure a harm, whereas AIDA is solely concerned with harm suffered by individuals.

Under AIDA, high-impact systems will be defined in future regulations, so it is not yet possible to compare AIDA's definition of high-impact systems to the EU AI Act's definition of high-risk systems. The EU AI Act identifies two categories of high-risk systems. The first category is AI systems intended to be used as safety components of products, or as products themselves. The second category is AI systems listed in an annex to the act and which present a risk to the health, safety, or fundamental rights of individuals. It remains to be seen how Canada would define high-impact systems, but the EU AI Act provides an indication of the direction the federal government could take.

Similarly, AIDA also defers to future regulations with respect to risk assessments, while the proposed EU AI Act sets out a graduated approach to risk in the body of the act. Under the EU AI Act, systems presenting an unacceptable level of risk are banned outright. In particular, the EU AI Act explicitly bans manipulative or exploitive systems that can cause harm, real-time biometric identification systems used in public spaces by law enforcement, and all forms of social scoring. AI systems presenting low or minimal risk are largely exempt from regulations, except for transparency requirements.

AIDA only imposes transparency requirements on high-impact AI systems, and does not stipulate an outright ban on AI systems presenting an unacceptable level of risk. It does, however, empower the Minister of Industry to order that a high-impact system presenting a serious risk of imminent harm cease being used.

AIDA's application is limited by the constraints of the federal government's jurisdiction. AIDA broadly applies to actors throughout the AI supply chain, from design to delivery, but only as their activities relate to international or interprovincial trade and commerce. AIDA does not expressly apply to intra-provincial development and use of AI systems. Government institutions (as defined under the Privacy Act) are excluded from AIDA's scope, as are products, services, and activities that are under the direction or control of specified federal security agencies.

The EU AI Act specifically applies to providers (although this may be interpreted broadly) and users of AI systems, including government institutions but excluding where AI systems are exclusively developed for military purposes. The EU AI Act also expressly applies to providers and users of AI systems insofar as the output produced by those systems is used in the EU.

AIDA is largely silent on requirements with respect to data governance. In its current form, it only imposes requirements on the use of anonymized data in AI systems, most of which will be elaborated in future regulations. AIDA's data governance requirements will apply to anonymized data used in the design, development, or use of any AI system, whereas the EU AI Act's data governance requirements will apply only to high-risk systems.

The EU AI Act sets the bar very high for data governance. It requires that training, validation, and testing datasets be free of errors and complete. In response to criticisms of this standard for being too strict, the European Parliament has introduced an amendment to the act that proposes to make error-free and complete datasets an overall objective to the extent possible, rather than a precise requirement.

While AIDA and the EU AI Act both set out requirements with respect to assessment, monitoring, transparency, and data governance, the EU AI Act imposes a much heavier burden on those responsible for high-risk AI systems. For instance, under AIDA, persons responsible for such systems will be required to implement mitigation, monitoring, and transparency measures. The EU AI Act goes a step further by putting high-risk AI systems through a certification scheme, which requires that the responsible entity conduct a conformity assessment and draw up a declaration of conformity before the system is put into use.

Both acts impose record-keeping requirements. Again, the EU AI Act is more prescriptive, but contrary to AIDA, its requirements will only apply to high-risk systems, whereas AIDA's record-keeping requirements would apply to all AI systems.

Finally, both acts contain notification requirements that are limited to high-impact (AIDA) and high-risk (EU AI Act) systems. AIDA imposes a slightly heavier burden, requiring notification for all uses that are likely to result in material harm. The EU AI Act only requires notification if a serious incident or malfunction has occurred.

Both AIDA and the EU AI Act provide for the creation of a new monitoring authority to assist with administration and enforcement. The powers attributed to these entities under both acts are similar.

Both acts contemplate significant penalties for violations of their provisions. AIDA's penalties for more serious offences (up to $25 million CAD or 5% of the offender's gross global revenues from the preceding financial year) are significantly greater than those found in Quebec's newly revised privacy law and the EU's General Data Protection Regulation (GDPR). The EU AI Act's most severe penalty is higher than both the GDPR's and AIDA's most severe penalties: up to €30 million or 6% of gross global revenues from the preceding financial year for non-compliance with prohibited AI practices or the quality requirements set out for high-risk AI systems.

In contrast to the EU AI Act, AIDA also introduces new criminal offences for the most serious offences committed under the act.

Finally, the EU AI Act would also grant discretionary power to Member States to determine additional penalties for infringements of the act.

While both AIDA and the EU AI Act have broad similarities, it is impossible to predict with certainty how similar they could eventually be, given that so much of AIDA would be elaborated in future regulations. Further, at the time of writing, Bill C-27 has only completed first reading, and is likely to be subject to amendments as it makes its way through Parliament.

It is still unclear how much influence the EU AI Act will have on AI regulations globally, including in Canada. Regulators in both Canada and the EU may aim for a certain degree of consistency. Indeed, many have likened the EU AI Act to the GDPR, in that it may set global standards for AI regulation just as the GDPR did for privacy law.

Regardless of the fates of AIDA and the EU AI Act, organizations should start considering how they plan to address a future wave of AI regulation.

For more information on the potential implications of the new Bill C-27, Digital Charter Implementation Act, 2022, please see our bulletin, The Canadian Government Undertakes a Second Effort at Comprehensive Reform to Federal Privacy Law, on this topic.

[1] There have been a number of recent developments in AI regulation, including the United Kingdom's Algorithmic Transparency Standard, China's draft regulations on algorithmic recommendation systems in online services, the United States' Algorithmic Accountability Act of 2022, and the collaborative effort between Health Canada, the FDA and the United Kingdom's Medicines and Healthcare Products Regulatory Agency to publish Guiding Principles on Good Machine Learning Practice for Medical Device Development.

[2] In the public sphere, the Directive on Automated Decision-Making guides the federal government's use of automated decision systems.

[3] This prohibition is subject to three exhaustively listed and narrowly defined exceptions where the use of such AI systems is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks: (1) the search for potential victims of crime, including missing children; (2) certain threats to the life or physical safety of individuals or a terrorist attack; and (3) the detection, localization, identification or prosecution of perpetrators or suspects of certain particularly reprehensible criminal offences.

[4] As an indication of potential changes, the Slovenian Presidency of the Council of the European Union tabled a proposed amendment to the act in November 2021 that would effectively narrow the scope of the regulation to machine learning.

Continued here:

The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act - Fasken

Ai-Da the robot sums up the flawed logic of Lords debate on AI – The Guardian

When it announced that the world's first robot artist would be giving evidence to a parliamentary committee, the House of Lords probably hoped to shake off its sleepy reputation.

Unfortunately, when the Ai-Da robot arrived at the Palace of Westminster on Tuesday, the opposite seemed to occur. Apparently overcome by the stuffy atmosphere, the machine, which resembles a sex doll strapped to a pair of egg whisks, shut down halfway through the evidence session. As its creator, Aidan Meller, scrabbled with power sockets to restart the device, he put a pair of sunglasses on the machine. "When we reset her, she can sometimes pull quite interesting faces," he explained.

The headlines that followed were unlikely to be what the Lords communications committee had hoped for when inviting Meller and his creation to give evidence as part of an inquiry into the future of the UK's creative economy. But Ai-Da is part of a long line of humanoid robots who have dominated the conversation around artificial intelligence by looking the part, even if the tech that underpins them is far from cutting edge.

"The committee members and the roboticist seem to know that they are all part of a deception," said Jack Stilgoe, a University College London academic who researches the governance of emerging technologies. "This was an evidence hearing, and all that we learned is that some people really like puppets. There was little intelligence on display, artificial or otherwise."

"If we want to learn about robots, we need to get behind the curtain; we should hear from roboticists, not robots. We need to get roboticists and computer scientists to help us understand what computers can't do rather than being wowed by their pretences."

"There are genuinely important questions about AI and art: who really benefits? Who owns creativity? How can the providers of AI's raw material, like Dall-E's dataset of millions of previous artists, get the credit they deserve? Ai-Da clouds rather than helps this discussion."

Stilgoe was not alone in bemoaning the missed opportunity. "I can only imagine Ai-Da has several purposes and many of them may be good ones," said Sami Kaski, a professor of AI at the University of Manchester. "The unfortunate problem seems to be that the public stunt failed this time and gave the wrong impression. And if the expectations were really high, then whoever sees the demo can generalise that oh, this field doesn't work, this technology in general doesn't work."

In response, Meller told the Guardian that Ai-Da is "not a deception, but a reflector of our own current human endeavours to decode and mimic the human condition. The artwork encourages us to reflect critically on these societal trends, and their ethical implications."

"Ai-Da is Duchampian, and is part of a discussion in contemporary art and follows in the footsteps of Andy Warhol, Nam June Paik, Lynn Hershman Leeson, all of whom have explored the humanoid in their art. Ai-Da can be considered within the dada tradition, which challenged the notion of art. Ai-Da in turn challenges the notion of artist. While good contemporary art can be controversial, it is our overall goal that a wide-ranging and considered conversation is stimulated."

As the peers in the Lords committee heard just before Ai-Da arrived on the scene, AI technology is already having a substantial impact on the UK's creative industries, just not in the form of humanoid robots.

"There has been a very clear advance, particularly in the last couple of years," said Andres Guadamuz, an academic at the University of Sussex. "Things that were not possible seven years ago...the capacity of the artificial intelligence is at a different level entirely. Even in the last six months, things are changing, and particularly in the creative industries."

Guadamuz appeared alongside representatives from Equity, the performers' union, and the Publishers Association, as all three discussed ways that recent breakthroughs in AI capability were having real effects on the ground. Equity's Paul Fleming, for instance, raised the prospect of synthetic performances, where AI is already directly impacting the condition of actors. "For instance, why do you need to engage several artists to put together all the movements that go into a video game if you can wantonly data mine? And the opting out of it is highly complex, particularly for an individual." If an AI can simply watch every performance from a given actor and create character models that move like them, that actor may never work again.

The same risks apply for other creative industries, said Dan Conway from the Publishers Association, and the UK government is making them worse. "There is a research exception in UK law and at the moment, the legal provision would allow any of those businesses of any size located anywhere in the world to access all of my members' data for free for the purposes of text and data mining. There is no differentiation between a large US tech firm in the US and an AI micro startup in the north of England." The technologist Andy Baio has called the process "AI data laundering" and it is how a company such as Meta can train its video-creation AI using 10m video clips scraped for free from a stock photo site.

The Lords inquiry into the future of the creative economy will continue. No more robots, physical or otherwise, are scheduled to give evidence.

See the article here:

Ai-Da the robot sums up the flawed logic of Lords debate on AI - The Guardian

Jesse Williams Returning to ‘Grey’s Anatomy’ in New Episode – Glitter Magazine

Paging plastics: Dr. Jackson Avery is returning to Grey Sloan Memorial. Jesse Williams will be reprising his role in an upcoming episode of the hit medical drama. Williams will guest star in and direct the fifth episode of season 19, "When I Get to the Border," airing on November 3.

After twelve seasons of playing the famous plastic surgeon, Dr. Avery, the show announced that Williams would leave Grey's Anatomy in its seventeenth season. The news of the actor's departure was revealed in the fourteenth episode of season 17, and his final appearance as a series regular was in the following episode.

In episode fifteen, Jackson moved to Boston to run his family's foundation. Sarah Drew, who embodied one of Williams' love interests and fellow surgeons, Dr. April Kepner, also appeared in the episode as a guest star. Williams and Drew returned to the series again last season, revealing that their characters had reconciled their relationship.

Deadline reported that Williams will only appear in the upcoming episode in passing. Dr. Avery will catch up with Ellen Pompeo's Dr. Meredith Grey when she takes a trip to Boston. Episode five of the new season will be the fourth episode of the ABC drama that Williams has directed.

News of Williams' return is not the first development from season 19 regarding Grey's Anatomy's veteran cast members. Before the season's premiere, Deadline also reported that Pompeo will scale back her appearances as the show's titular character. In addition, Pompeo will only appear in eight episodes of the new season to have time to pursue other projects.

Fans are thrilled that Dr. Avery will return to the series and are excited to tune in to the next episode of Grey's Anatomy. All episodes of Grey's Anatomy are available to stream on Hulu.

Read the rest here:
Jesse Williams Returning to 'Grey's Anatomy' in New Episode - Glitter Magazine

Grey's Anatomy season 19 cast: Who's in the season 19 main cast? – Netflix Life

This fall, Grey's Anatomy season 19 extends the ABC series' run as television's longest-running American medical drama. After 400 episodes, the series remains a top draw and a hit among fans who have been there since the beginning or jumped in via a Netflix binge-watch.

Throughout the show's nearly two-decade run, the cast has changed quite a bit. Stars like Patrick Dempsey, Katherine Heigl, and Sandra Oh launched the series along with Ellen Pompeo, who has remained the leading star of the show since the first episode.

Along with Pompeo, Grey's Anatomy season 19 will feature two additional cast members who have been with the show since season 1: Chandra Wilson and James Pickens Jr. Many of the cast members have been with the series for multiple seasons.

However, ahead of the season 19 premiere this fall, ABC has announced a handful of fresh faces that fans will be seeing around Grey Sloan Memorial Hospital. Find out who's in the main cast, who's recurring, and who's new, with updates leading up to the premiere!

In another history-making season, the Grey's Anatomy season 19 cast once again brings back a fair amount of its familiar faces. ABC has solidified who's in and who's out, and the only season 18 cast member confirmed to not be returning is Richard Flood as Dr. Cormac Hayes.

Here's the Grey's Anatomy season 19 cast:

While those are the cast members in the main cast, season 19 will also feature plenty of usual suspects in recurring roles, including but not limited to:

In season 18, Kate Walsh reprised her role as Dr. Addison Montgomery, who appeared in the early days of Grey's and six seasons of the spin-off Private Practice. On Sept. 7, Variety reported that Walsh would recur in season 19, beginning in the third episode of the season.

Walsh makes her season 19 debut in the third episode of the season, airing Thursday, Oct. 20 and titled "Let's Talk About Sex." As of this writing, it's unclear how many season 19 episodes Walsh will be featured in, but with a recurring role, we could see her for a good handful.

On Aug. 3, Deadline reported that Ellen Pompeo's role as Meredith Grey would be reduced in Grey's Anatomy season 19. Pompeo will only appear in eight episodes of the upcoming season, which is expected to have 22 episodes. The star will still narrate each episode and act as an executive producer.

The huge change in Ellen Pompeo's on-screen role is a first for the series. While she's not leaving Grey's, her reduced role comes on the heels of Pompeo taking on her first non-Grey's Anatomy role in over a decade. She will lead Hulu's Orphan-themed limited series and juggle both shows. It's currently unknown whether season 19 will be the last of the series or the last for Pompeo.

On Oct. 17, Grey's Anatomy fans received some amazing news: a former favorite character and cast member will be making a long-awaited return trip to Grey Sloan Memorial. Deadline reported that Jesse Williams will guest star in season 19.

Williams will reprise his role as Dr. Jackson Avery in the fifth episode of season 19, which is set to air on Thursday, Nov. 3. The episode, titled "When I Go to the Border," will also be directed by Williams. Perhaps Grey's fans can look forward to Williams directing more episodes.

Since his departure in the season 17 finale, Jesse Williams has appeared in the Broadway play Take Me Out, the Paramount+ movie Secret Headquarters, and the upcoming Netflix romantic comedy Your Place or Mine. His Grey's return will be the fourth episode of the series he's directed.

In the lead-up to the season 19 premiere this fall, ABC has announced a handful of new series regulars joining the iconic series as brand-new characters. They might be new to Grey's Anatomy, but you have definitely seen their faces before:

Alexis Floyd had a recurring role in Freeform's The Bold Type and had a breakthrough role in Netflix's limited series Inventing Anna. Niko Terho starred with fellow Grey's cast member Jake Borelli in Freeform's romantic comedy The Thing About Harry. Midori Francis has appeared in titles such as Dash & Lily, The Sex Lives of College Girls, and Afterlife of the Party. Adelaide Kane was the lead in The CW's historical drama Reign and appeared in the shows Teen Wolf, Once Upon a Time, Into the Dark, and This Is Us. Harry Shum Jr. is known for his roles in Glee and Shadowhunters.

Grey's Anatomy season 19 premieres Thursday, Oct. 6 at 9/8c on ABC. Watch seasons 1-18 now on Netflix.

See original here:
Greys Anatomy season 19 cast: Whos in the season 19 main cast? - Netflix Life

Anatomy: A Cadaveric Poem in-Training, the online peer-reviewed publication for medical students – Pager Publications, Inc.

Anatomy is more than flesh and bone and blood.
It's more than the donor and the scalpel teaching the student.
Anatomy shakes hands; it tells and creates stories.
Anatomy smiles; it cries.
It can do both at the same time.
Anatomy carries the soul.
It carries the spirit and the mind.
Just as anatomy teaches in life,
so too in death.
Anatomy births life and returns to dust.
Complete.

Artist's statement: This poem is simple and reaches into my heart with thought-provoking images of those who donated their bodies before they passed. The beautiful souls who donated their bodies for our education should be treated with nothing but respect. At times, students forget that they are working with and cutting open a human being, someone with jobs, stories, memories and countless other experiences. There were times when I felt disconnected from that fact, and I wrote this poem to ensure that I would not forget how meaningful these people's lives were and continue to be.

Poetry Thursdays is an initiative that highlights poems by medical students. If you are interested in contributing or would like to learn more, please contact our editors.

Contributing Writer

Philadelphia College of Osteopathic Medicine South Georgia Campus

Jordan Erdfrocht is a fourth-year medical student at Philadelphia College of Osteopathic Medicine South Georgia Campus in Moultrie, GA, class of 2023. In 2017, he graduated from the University of Central Florida with a Bachelor of Science in health sciences. He enjoys playing video games, reading history books, and finding the best cup of coffee in his free time. After graduating medical school, Jordan would like to pursue a career in Physical Medicine and Rehabilitation.

More here:
Anatomy: A Cadaveric Poem in-Training, the online peer-reviewed publication for medical students - Pager Publications, Inc.