
Category Archives: Ai

Mindtech Launches New Series on Synthetic Data – The Go-To Guide for Anyone Training AI to See and Understand Our World – Business Wire

Posted: July 14, 2022 at 10:28 pm

SHEFFIELD, England--(BUSINESS WIRE)--Mindtech Global, developer of the world's leading platform for the creation of synthetic data for training AI, has today launched its first guide on how to use synthetic data to resolve visual AI's training problems.

From retail to law enforcement, and from healthcare to driverless cars, data scientists the world over are developing powerful visual AI applications that are bringing the benefits of deep machine learning networks to a whole swathe of industries.

However, trouble is stalking visual AI's brave new world. A clutch of problematic real-world data acquisition issues, collectively amounting to what's being called a "data roadblock," are holding up the advancement of visual AI.

The answer to these data roadblocking issues, however, is a relatively simple one: visual AI developers need to augment what real-world data they can acquire with as much synthetic data as they can generate.

Using Chameleon, Mindtech's synthetic data creation platform, users set up a scene of buildings and environments, and then import all the assets relevant to their application, which could be anything: people, bicycles, cars, or crowds in which people mill in multiple directions (with collision detection). They then set up activities, events, and "what if" scenarios that will generate images to be captured by one or more virtual cameras in a series of simulation runs: the images that will ultimately form the basis of the data used to train a user's AI.

Benefits of creating training data this way include that it arrives perfectly annotated, privacy-compliant, and ready for use by machine learning engineers and data scientists, with no need for 3D graphics expertise on the part of the user.
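Chameleon itself is proprietary, but the "perfectly annotated" property of synthetic data is easy to illustrate: when a generator places every object in the scene itself, ground-truth labels fall out for free. A minimal toy sketch in Python (hypothetical code, not Mindtech's; the function name and scene details are invented for illustration):

```python
import numpy as np

def render_synthetic_sample(rng, img_size=64, n_objects=3):
    """Render a toy 'scene' of bright rectangles on a dark background.

    Because the generator places every object itself, ground-truth
    bounding boxes come for free: no manual labeling pass is needed.
    """
    image = np.zeros((img_size, img_size), dtype=np.uint8)
    annotations = []
    for _ in range(n_objects):
        w = int(rng.integers(5, 15))
        h = int(rng.integers(5, 15))
        x = int(rng.integers(0, img_size - w))
        y = int(rng.integers(0, img_size - h))
        image[y:y + h, x:x + w] = 255
        annotations.append({"bbox": (x, y, w, h), "label": "object"})
    return image, annotations

rng = np.random.default_rng(seed=0)
img, labels = render_synthetic_sample(rng)
# every rendered object carries an exact, automatically generated label
assert len(labels) == 3
```

Real platforms render photorealistic scenes rather than rectangles, but the principle is the same: segmentation masks, bounding boxes, and depth maps are byproducts of the rendering step itself.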

Chris Longstaff, VP of Product Management at Mindtech Global, said:

"AI models are infamous for fragility, throwing up bizarre, unexpected results because they sometimes generalize from incomplete datasets, or because of a fault in the model design. For that reason, a synthetic data platform must be capable, as Chameleon is, of reproducing a dataset it once generated at a later time, should anyone need to forensically check why an ML model in development needs troubleshooting.

"That key error-checking capability ensures those tasked with training AI models can have as much faith in synthetic data as they currently do in real-world data, perhaps even more."

You can read the full guide on how synthetic data resolves visual AI's training problems here: https://bit.ly/3Pp0RY1

Ends.

Mindtech Global http://www.mindtech.global

Mindtech Global is the developer of the world's leading end-to-end synthetic data creation platform for the training of AI vision systems. The company's Chameleon platform is a step change in the way AI vision systems are trained, helping computers understand and predict human interactions in applications ranging across retail, smart home, healthcare, and smart city.

Mindtech is headquartered in the UK, with operations across the US and Far East, and is funded by investors including Mercia, Deeptech Labs, In-Q-Tel, and Appen.

AI Startup Speeds Healthcare Innovations To Save Lives – Forbes

Posted: at 10:28 pm

Michelle Wu, cofounder and CEO, and KK (Qiang Kou), tech cofounder, at Nyquist Data, an AI-powered cloud-based platform providing business, clinical, and regulatory intelligence and analytics for medical device and pharmaceutical companies.

How long does it take to get FDA approval for a heart-failure drug?

It sounds like a simple question, but without the help of an artificial intelligence (AI)-powered MedTech cloud-based platform, it could take months and millions of dollars to find out. The market size for AI in healthcare is projected to reach $187.95 billion by 2030, according to Precedence Research.

When Michelle Wu was first asked this question, global clinical and regulatory healthcare information was publicly available, but it was scattered around the world in different databases and languages. Worse yet, keywords were misspelled or there were handwritten notes included in the databases, making what should be searchable unsearchable.
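Nyquist's actual pipeline isn't public, but the misspelled-keyword problem described above has a classic approximate-matching remedy, sketched here with Python's standard-library difflib (the index terms below are made up for illustration):

```python
from difflib import get_close_matches

# Hypothetical index of drug names, one of them misspelled at data entry
index_terms = ["sacubitril", "valsartan", "valsatran", "entresto", "metoprolol"]

def fuzzy_lookup(query, terms, cutoff=0.8):
    """Return index terms that approximately match the query, so records
    filed under a misspelling are still findable by a correct search."""
    return get_close_matches(query, terms, n=5, cutoff=cutoff)

matches = fuzzy_lookup("valsartan", index_terms)
# the misspelled "valsatran" surfaces alongside the exact match
```

Production systems use more sophisticated techniques (phonetic codes, learned embeddings, OCR-aware models for handwritten notes), but the idea is the same: make "what should be searchable" searchable despite noisy data.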

Using AI, big data, and machine learning, Wu launched Nyquist to provide business, clinical, and regulatory intelligence and analytics on medical devices and pharmaceuticals across major markets such as the U.S., Japan, the E.U., and China, within seconds.

When Wu was the global strategy manager at Novartis, the CEO asked her about the length of time it takes the FDA to approve a new heart medication. "It's the #1 cause of death in the U.S.," she said. Heart disease costs the U.S. about $363 billion annually, according to the CDC.

Over the next three months, Wu read through stacks of FDA approval documents, some 5,000 pages long, in which maybe one or two paragraphs were relevant. She also spoke with many experts. Then Wu compiled the data into Excel spreadsheets so that she could answer that simple question.

"This is insane," exclaimed Wu. Vital healthcare information that could be mined to provide insights into how to develop life-saving medical innovations faster and at a lower cost was available but in arcane, black-box computer systems. "There has to be a better way," Wu thought.

This project was a light-bulb moment for her. The financial industry had Bloomberg to analyze content and data to help investors uncover opportunities and minimize risk, and pharmaceutical, biotech, and medical device companies needed something similar.

Wu left Novartis to attend Stanford Graduate School of Business (GSB) to work on the idea. She also worked at a couple of startups before starting Nyquist.

In 2020, Wu and her cofounder KK (Qiang Kou) raised $523,000 in pre-seed funding from former Google, Amazon, big pharma, and MedTech executives and launched Nyquist. The startup aggregated medical data worldwide and then connected the dots, making the information valuable to analysts, R&D departments, and commercial teams in pharmaceutical and medical device companies.

By using the Nyquist platform, pharma and medical device companies can speed up the process of medical innovations developed in the U.S., Japan, and the E.U. reaching China, India, Africa, and other emerging markets, and vice versa. Companies in emerging markets are developing many cheaper, effective healthcare innovations, and the world needs to know about these. The process of getting approval from one country to another can take two to seven years.

Over the course of two years, Nyquist has developed the largest AI data platform for medical devices. "We have about 30 customers from seven countries," said Wu.

Working collaboratively with prospects and customers like Medtronic, Nyquist has discovered new uses for its platform. During a pitch call with Medtronic, an executive asked if the platform could be used to research supply-chain factories. Within a few moments, Medtronic had a list. There was silence and Wu thought the video had frozen. Then she heard mumbling and paper rustling from the Medtronic side of the call. Shortly after the call, Nyquist had a new customer.

When raising Nyquist's seed round, Wu experienced the typical naysayers: old white men who told her no one would pay more than $50 for publicly available information.

Then she was introduced to Ilana Stern of Peterson Ventures, who quickly saw how arcane, manual, messy, costly, and lengthy the current process was. "They're helping medical device companies, and eventually pharma companies accelerate and increase the success of clinical trials and bringing products to market," said Stern. "What's most important is the lives impacted in getting innovation to market more quickly."

"It's the way they're harnessing natural-language processing, AI, and machine learning to ingest and organize data to surface insights is incredibly powerful," said Stern. "With the click of a button, you can look at the equivalent of FDA data in Japan, China [and many more countries]."

Stern was also impressed that Wu raised half a million dollars in pre-seed funding, considered a small amount of money, yet built out a platform used by customers including Medtronic and Becton, Dickinson and Company, also known as BD.

The startup raised $6 million in March of 2022. Peterson Ventures led the round with participation from GSR Ventures, Lightspeed Venture Partners, and Village Global. "We will soon launch our pharma platform," said Wu. Nyquist will also expand its global MedTech platform to include 108 clinical sites worldwide.

Another challenge is constantly aggregating all the global insight and data. One way the company has addressed this is with a geographically diverse team, with members from Asia, Switzerland, Germany, and the U.S. The team is also diverse in gender and sexual orientation. "More than 50% of our employees are women and we have a lot of queer moms," said Wu. Diversity improves performance and outcomes.

In addition, "We just graduated from Google Accelerator," said Wu. "It's like learning to paint from Leonardo da Vinci. They bring the creme de la creme of AI experts [for participants to learn from]."

Still, this young company is doing more. "There are a lot of clinical trials and medical device companies that have suffered during the Russian-Ukrainian war," said Wu. "We are helping a couple of medical device companies in Eastern Europe, pro bono, move their clinical trials and manufacturers outside the war zone."

How can your business fill a market gap?

ECOO seeking optometrists to take part in AI screening project – AOP website

Posted: at 10:28 pm

The European Council of Optometry and Optics (ECOO) is calling for optometrists to express an interest in an EU project investigating artificial intelligence (AI) and retinal imaging.

The I(eye)-Screen project seeks to partner AI experts with ophthalmologists and optometrists who use optical coherence tomography (OCT) to screen for age-related macular degeneration (AMD).

The project identified AMD as the most common cause of legal blindness in people over the age of 50, with 110 million individuals at risk. Signs of the disease can be identified by OCT before visual symptoms occur.

The initiative, which involves all EU countries as well as Great Britain, Norway, and Switzerland, will see participating optometrists performing OCT imaging on adults above 55 years old with functional vision. Images will be analysed for early or intermediate age-related macular degeneration by an AI group.

If AMD is identified, the patient will be referred to a partnering ophthalmologist who will perform a clinical follow up four times a year over the course of three years, to identify progression.

If the project is approved, it is expected that a budget will be granted to cover the screening and referral of images.

The optometrist will screen approximately 200 to 250 individuals over one year, and refer around 25 individuals to the partnering ophthalmologist.

Following the study, researchers seek to make a reliable AI-based tool available for the optical community, along with a legal framework as a base for collaboration.

ECOO has outlined that all optometrists using a Topcon Maestro OCT are encouraged to apply. Applications can be made through an online form.

Julie-Anne Little, AOP chairman, said of the initiative: "Involvement in studies such as these is an important opportunity to ensure that optometry has a voice in how AI tools and machine learning may be employed in eye care in the future."

Little added: "Primary care optometrists need to be central to how such tools can deliver effective integrated eye care for patients."

Optometrists who are already in contact with an ophthalmologist using a Spectralis OCT, and would like to work with them, can indicate this on the application form. Optometrists without a contact will be matched through the initiative.

The deadline for expressions of interest is 30 July, with earlier applications standing a higher chance of success.

Inviting optometrists to express an interest in the study, ECOO agreed that the project is a key opportunity for the profession to have a voice in the use of AI tools, to input in any guidance on the topic and to define how shared care is taken forward.

ECOO, which represents national associations from 21 countries across Europe, is part of a consortium of stakeholders collaborating on the EU funding application for the project.

An EU research grant application for "Personalised screening and risk assessment next door for life-long healthy vision based on automated AI-tools" has already been successfully submitted, receiving the highest score and moving to the final round of the funding application.

The proposal has been based on the fact that 200 million individuals worldwide are affected by AMD, and 1.9 billion people are at risk, ECOO said.

The project will first gather retrospective data sets to train an advanced AI-based algorithm for early and intermediate AMD identification and detection of risk progression.

A prospective clinical study would then take place, collaborating with optometrists to validate the performance of the AI algorithm using OCT devices. This would be followed by a proof-of-concept of feasibility, where independent optometrist sites across Europe will screen individuals in a real-world setting to collect data for validation.

The final stage of the project would be an EU-conforming ethical and legal framework providing health care measures, preparing regulatory approval, and establishing AI-based prevention strategies.

Retouch4Me Review: AI Retouching That Actually Works – PetaPixel

Posted: at 10:28 pm

Some may enjoy editing, but most photographers will agree that actually taking the photos is the best part of the job. So finding a way to reduce the amount of time spent at the computer is more than welcome, which is where Retouch4Me comes into play.

The Retouch4Me plugins are designed specifically to address the most common workflow issues portrait photographers face in their day-to-day work, with each plugin separately developed to address a specific retouching situation. The company has plugins that specifically target background cleanup, healing skin blemishes, cleaning up wrinkles and tears in fabrics, cleaning up and enhancing eyes, whitening teeth, making skin tones uniform, adding volume and contrast, as well as overall dodging and burning of the visible skin in the image.

While each tool handles a separate task, together they speed up the retouching process by a noticeable amount, making it easy to breeze through dozens of images with only minor manual corrections required, effectively freeing up hours of post-production time.

For this review, I tested all of the available plugins, including the teeth whitening tool and the heavy-handed eye brilliance tool, on an Intel-based MacBook Pro and an M1-equipped Mac mini.

As full-time photographers, we don't make money while we're sitting behind the computer doing the edits. So we take classes, buy tutorials, and then buy plugins and actions to speed up that post-processing work so we can get out and book the next client.

Naturally then, I was both excited to see if the Retouch4Me plugins worked and skeptical, given that tools to this point have not come close to the performance expectations of a professional retoucher. Luckily enough, I was very pleasantly surprised by just how powerful these plugins actually are.

Unlike most software suites where everything is baked into one application as a big all-encompassing platform, Retouch4Me is offered as individual plugins that, while not cheap, allow potential users to pick and choose exactly what steps in their workflow they would like to have sped up and automated.

Retouch4Me currently has nine individual plugins for a variety of tasks that can be used as stand-alone apps or launched through Adobe Photoshop, Lightroom Classic, or Capture One Pro. In their current state, each plugin has to be run individually on a particular image (meaning there isn't a bulk-edit option); however, you can make your own Action in Photoshop that automates the process of running multiple plugins one after the other. This can lead to some hiccups and crashes (in my experience) depending on the images and what is in them, so while you can do this, I'd personally recommend just running things manually.

Overall, the plugins are designed to take the stress and pressure out of portrait retouching while saving you time behind the computer. The plugins can get expensive (they range from $124 to $149 per plugin), and the cost climbs once you start adding them together, but their value in time saved is hard to overstate.

For the purpose of my review, I tested the plugins through Adobe Photoshop as filters/plugins, applying masks and opacity changes as needed. The first thing worth noting is that the Retouch4Me plugins work on whichever layer you currently have selected in Photoshop, so if you start using multiple plugins or prefer to keep things organized in your layers, you will have to pay close attention when clicking the tools, as they do not label themselves or create new layers.

I recommend creating a new layer for each plugin you plan to run; that way you can mask and apply as lightly or heavily as you like after the fact. In the GIF below, I took a RAW file from a recent fashion shoot and did a fully automated retouch (with some minor manual tweaks in between) on new layers for each tool, labeling each along the way.

I chose the following workflow order: Backdrop Clean up, Skin Tone, Portrait Volume, Dodge and Burn, Fabric, Eye Vessels, Eye Brilliance, and then Healing.

For this particular image, everything worked almost perfectly with no adjustments needed except for dialing back the Eye Brilliance (it was super heavy-handed and I ended up turning it down by about 70%). While Eye Brilliance does work, there is never a good reason to go that crazy on it unless you're making a superhero or monster movie poster.

The plugins work impressively well, and the entire process took less than five minutes from loading into Photoshop to saving as a final JPEG. In most situations, especially in fashion work, there will be some things that need cleaning up, like stray hairs and bulges or folds in the clothing, but that's the easy part most of the time. The skin work and the fabric and backdrop cleanup in this particular shot were executed pretty flawlessly.

While each application is run individually, the layout and tools available within them are rather similar to one another. For instance, below is how the plugin window looks while running the Fabric Clean up tool on the above image:

When you load the layer into the plugin, the program analyzes the image and then chooses what it determines to be the best course of action to fix the image. You can choose between full-length, three-quarter, close-up/headshot, or Auto modes. For the most part, the Auto mode works brilliantly, at least on clean studio images, but for busier shots with lots happening in the background or foreground, you may want to play with these options to ensure you get the best results.

The plugins allow you to see the before and after of the edits and even provide some tools to paint in additional areas or erase them where the app may have gone a bit heavy. While those tools do work, in my experience they didn't work very well and were quite laggy. It was actually a lot easier to just let the application do what it needs to, then mask/paint things away in Photoshop directly afterward.

Here are the masks and edits applied from the rest of the toolset:

As you can see from the images above, sometimes the masking and AI tools can go a little heavy-handed, or even miss things like some threads on the floor. For example, the Eye Brilliance tool and the Dodge and Burn screenshots show some unnecessary masking and applications.

And like most plugins, Retouch4Me can go pretty heavy-handed with its tools, meaning in practical applications you may want to scale back the opacity on some of the effects. For the image above, I left everything (except the Eye Brilliance) at 100% just so it could be easy to see how it all works, but in actual use cases, almost every tool was scaled back to anywhere from 90% down to 30% opacity depending on the project.

One plugin in particular gave me quite a few headaches, and that was the healing tool. While it works really well, it often goes overboard with its healing, removing jewelry, piercings, and even tattoos or tufts of hair that the application interprets as blemishes. Other than it going a bit hard on my subjects, it still did a great job fixing the skin, and 90% of the time I'd let it run at 100% and then just mask away the areas it shouldn't have fixed.

Additionally, it is worth noting that there were some crashes that would take down not just the plugin, but Photoshop as a whole. This would cause me to lose anything that wasn't saved before running the particular plugin, so you are going to want to save often.

When communicating with the support team, they said the crashes were caused by me having too many things running on my computer (Capture One, Lightroom, Photoshop, Chrome, etc.), which could cause my computer to run out of memory during a process, causing the crash.

To me, that sounds like the plugins can gobble up a lot of memory and/or there is a memory leak issue. I did some heavy testing of this and did find that after running the plugins on dozens of images, everything did slow down and the plugins did indeed chew through memory. A simple reboot seemed to fix those issues when they would happen.

For the most part, the heal, dodge and burn, portrait volume, skin tone, eye brilliance, vessels, and teeth whitening tools all worked perfectly and as intended 100% of the time. The plugins that I found were more likely to cause a crash were the backdrop cleanup and fabric tools. With images that were incredibly busy and colorful, or had extremely shallow depth of field and were shot outside of the studio, running the fabric and background tools could cause a total crash. The support team is working on this, but at the time of publication, I still haven't gotten an answer.

What I believe is happening is that the application scans the image looking for imperfections in the background or clothing, but gets overwhelmed with everything it finds and crashes. The behavior I would have expected is for the app to simply do nothing in that case, or maybe throw back an error saying it couldn't work, but instead it just crashes. It doesn't happen all of the time, but it is consistent enough to warrant a warning for potential users to save their files frequently, especially before running those particular plugins, just in case.

Despite the occasional crash or heavy-handed application of a tool, the Retouch4Me plugins can transform your images incredibly fast with minimal effort. If nothing else, they can get you to a great starting point for some manual tweaks and retouching by hand, which still saves you hours of work behind the computer.

The fact that the set of plugins is broken into separate components can feel clunky, but I understand the decision since it allows users to pick and choose the elements that give them the most headaches and target that area to eliminate.

The best part about these tools is that, unlike other AI-based applications, the Retouch4Me plugins do not require an active internet connection to work. All of the edits happen locally, and your clients' photos remain secure on your computer.

The bottom line here is yes, these plugins are actually good! Even with some of the quirkiness, the results speak for themselves and I definitely plan on incorporating these plugins into my commercial client work moving forward.

AI has become heavily integrated into editing software in recent years, so there are a lot of applications out there for photographers to choose from based on their personal preferences, though it should be noted that none of them is a full-on replacement for what Retouch4Me has here.

One of the most popular and commonly known is the $89 Luminar AI / Luminar NEO from Skylum, which offers a broad suite of AI and manual editing tools under one roof. ImagenAI is another all-in-one solution that even offers batch processing from $0.06 per image, but this application is really meant for high-volume photographers who deal with thousands and thousands of images, and it works specifically with Lightroom catalogs.

Additionally, Topaz Labs has a series of AI-based tools ranging from $59 to $99 per application, or available for $199 as a bundle. Finally, even Adobe Photoshop is getting into the AI game with its Neural Filters beta program, which includes things like Sky Replacement, Skin Smoothing, and JPEG artifact removal. While these aren't exactly time savers like some of the others, they can provide some good starting points and creative features for retouchers looking for a head start on their work.

Yes. Honestly, every one of the tools that Retouch4Me has built (except for the Eye Brilliance) is worth the investment. If you spend a large amount of time behind a computer retouching your portrait work, any one of these individual tools can potentially save you hundreds of hours of retouching time every year.

Innovative Dental AI Solution from VideaHealth to Power Exceptional Patient Experience at 42 North Dental Supported Practices – Business Wire

Posted: at 10:28 pm

BOSTON--(BUSINESS WIRE)--VideaHealth, the leading dental diagnostic AI solution, and 42 North Dental today announced a partnership to scale patient-centered AI technology in the dental industry. By deploying VideaHealth AI in several of its supported practices, which include Gentle Dental and over 38 additional brands, 42 North Dental is equipping dentists with the power of AI for chair-side decision making. The VideaHealth AI solution, coupled with 42 North Dental's relentless focus on delivering the highest quality patient experience possible, will set the industry standard for how today's dentists engage with patients.

Michael Scialabba, DDS, Chief Clinical Officer at 42 North Dental, said, "When I evaluate partners for 42 North Dental, I'm looking for cutting-edge leaders that provide a new benefit to patients. We knew partnering with VideaHealth's dental AI solution would bring the power of AI to dentistry and give our dentists a powerful tool to provide more accurate diagnoses than ever before. In turn, this helps our dentists provide the highest-quality dental care to patients through a combination of traditional dentistry and advanced digital aids like VideaHealth dental AI."

The VideaHealth dental diagnostic AI solution eliminates patient concerns such as transparency of diagnosis or the potential overuse of X-rays by giving dentists the benefit of the industry's most accurate AI in an easy-to-deploy way. Launching the VideaHealth dental diagnostic AI solution with visionary leaders like 42 North Dental will help pave the way for massive AI adoption in the industry.

"42 North Dental is a leader in delivering top-tier comprehensive care and an exceptional patient experience. We're honored to be working together to improve the lives of patients," said Florian Hillen, CEO and founder of VideaHealth. "Our partnership will move the entire industry forward as we work with select 42 North Dental supported practices to provide fair, accurate, and equitable treatment for everyone who needs it."

With technology and software solutions that assist dentists in analyzing patient X-rays, VideaHealth's AI technology helps ensure every patient gets an accurate diagnosis and every provider captures efficiencies and faster reimbursement to scale and grow. Its solutions are designed to integrate seamlessly with existing practice software and tools without adding steps or complexity to the dentist's workflow. VideaHealth's FDA-cleared AI algorithms are based on the VideaFactory, which houses the industry's most diverse dataset with more than 100 million data points.

About 42 North Dental LLC

42 North Dental is a leading dental organization supporting 42 practice brands in 113 locations. Committed to eliminating barriers to quality patient care by providing administrative support to dental practices, 42 North Dental presents opportunities that help doctors and their teams professionally advance while growing the practice to its fullest potential. 42 North Dental's affiliation model offers dental providers clinical autonomy and equity ownership, as well as unmatched administrative support. 42 North Dental was created for dentists and is rooted in over 40 years of experience in dentistry.

About VideaHealth

Founded in 2018 and born out of Harvard and MIT artificial intelligence (AI) research, VideaHealth is on a mission to improve dental patient health through the power of AI. VideaHealth's FDA 510(k)-cleared platform drives improved quality of care for patients by using AI to augment the diagnosis and treatment planning capabilities of providers. Partnering with leading DSOs across the country, VideaHealth is committed to helping usher in the age of preventative care in dentistry. Backed by leading venture capital firms Spark Capital, Zetta Venture Partners, and Pillar VC, and angel investors, VideaHealth is headquartered in Boston. For more information, visit https://www.videa.ai.

Sentient AI? Do we really care? – The Hill

Posted: July 3, 2022 at 3:56 am

Artificial intelligence (AI) headlined the news recently when a Google engineer named Blake Lemoine became convinced that a software program was sentient. The program, Language Models for Dialog Applications (LaMDA), is a chatbot designed to mimic human conversation. So that's what it did.

In a Medium post, Lemoine declared that LaMDA had advocated for its rights as a person and wanted to be acknowledged as an employee of Google rather than as its property. This development, as they now say, blew up the internet. Philosophers, ethicists, and theologians weighed in.

For engineers and technologists, however, it's just another illustration of the overly broad and frustratingly mushy definition of artificial intelligence that has confused the public conversation since Mary Shelley published Frankenstein. As always, defining terms is a good place to start. Sentience is the ability to feel and experience sensation; it's a word invented specifically to distinguish it from the ability to think. Therefore, sentience and intelligence are not synonyms. Google may very well have created an intelligence. In fact, Google and numerous other companies, including my employer, SAIC, already have. But absent the biological prerequisite of a central nervous system, they are not sentient, even if they pass Alan Turing's famous Imitation Game test of seeming human.

But more to the point, for engineering applications, the question of sentience is not immediately relevant. The real question is one of application. What can AI (the practice of infusing machines with the capacity to perform analysis and make evidence-based recommendations previously believed to be the exclusive purview of humans) actually do to enhance business performance, to drive better mission outcomes, to improve the world? Waves of data fog our view; what can the clarifying lens of AI help us see?

Hindsight: If, as George Santayana said, those who cannot remember the past are condemned to repeat it, then lessons derived from historical data inoculate us from future mistakes. By crunching mountains of data from myriad inputs, AI can leverage real world, real-time experience to allow leaders to confidently make plans and install course corrections. AI can provide dashboard views without the hassle of Oracle queries, data calls, and spreadsheets to underscore comparisons quickly and without knowledge gaps.

Foresight: When will a hurricane make landfall? Where will a satellite in decaying orbit re-enter the atmosphere? How often will an offshore wind turbine require maintenance? AI is already at work providing predictive answers to grand engineering questions formerly addressed by a ghastly gaggle of guesswork.

Insight: AI is not a replacement for human judgment, but it can and does recommend action by computing conditional probability of multiple scenarios. Result: business decisions statistically more likely to succeed. This is especially useful in crisis situations such as a global epidemic when stakes are high, precedents are few, and decisions are quick.

Oversight: Analog methods always have struggled with organizing complex and sensitive data from many sources at various clearance levels. Because interoperability and oversight are essential in defense and intelligence agencies, where missions require the ability to co-locate large amounts of both confidential data and open-source intelligence, AI is certain to play a growing role in battlespace decisions.

Rightsight: Even the best data analyst cant connect all the dots simultaneously. Yet missions often depend on surfacing granular data immediately. Imagine a soldier on the battlefield armed with essential intel in an instant. Deep machine learning fueled by AI provides amplified intelligence so users can act quickly and accurately, bringing each of the sights together to operate as one.

AI algorithms can work harmoniously to achieve efficiency and modernize legacy systems. This human-machine partnership already is underway and is to be embraced, not feared. When machines drive digital transformation and empower human innovation, everyone wins.

So, leave the question of sentience to the poets. Those of us focused on the science of the mission rather than science fiction will leverage the burgeoning power of AI to simply get the job done.

Jay Meil is Data Science Director for Artificial Intelligence at the defense technology firm SAIC.

Read more from the original source:

Sentient AI? Do we really care? - The Hill

Harvard Developed AI Identifies the Shortest Path to Human Happiness – SciTechDaily

Posted: at 3:56 am

The researchers created a digital model of psychology aimed at improving mental health. The system offers superior personalization and identifies the shortest path toward a cluster of mental stability for any individual.

Deep Longevity, in collaboration with Nancy Etcoff, Ph.D., of Harvard Medical School, an authority on happiness and beauty, has published a paper in Aging-US outlining a machine learning approach to human psychology.

The authors created two digital models of human psychology based on data from the Midlife in the United States study.

The first model is an ensemble of deep neural networks that predicts respondents' chronological age and their psychological well-being in 10 years using information from a psychological survey. This model depicts the trajectories of the human mind as it ages and demonstrates that the capacity to form meaningful connections, as well as mental autonomy and environmental mastery, develops with age. It also suggests that the emphasis on personal progress declines steadily, while the sense of having a purpose in life only fades after age 40-50. These results add to the growing body of knowledge on socioemotional selectivity and hedonic adaptation in the context of adult personality development.

The article describes an AI-based recommendation engine that can estimate one's psychological age and future well-being based on a constructed psychological survey. The AI uses the information from a respondent to place them on a 2D map of all possible psychological profiles and derive ways to improve their long-term well-being. This model of human psychology can be used in self-help digital applications and during therapist sessions. Credit: Michelle Keller

The second model is a self-organizing map that was created to serve as the foundation for a recommendation engine for mental health applications. This unsupervised learning algorithm splits all respondents into clusters depending on their likelihood of developing depression and determines the shortest path toward a cluster of mental stability for any individual. Alex Zhavoronkov, the chief longevity officer of Deep Longevity, elaborates: "Existing mental health applications offer generic advice that applies to everyone yet fits no one. We have built a system that is scientifically sound and offers superior personalization."
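The mechanics described here can be sketched in miniature. The following is an illustrative toy, not the published model: it trains a small self-organizing map on random stand-in "survey" vectors, finds a respondent's best-matching node, and walks the shortest grid path to a hypothetical "stable" node. The data, grid size, and choice of target node are all assumptions made for the sketch.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

# Toy stand-ins for survey responses: rows are respondents, columns are
# normalized psychometric scores. Purely illustrative, not MIDUS data.
profiles = rng.random((200, 5))

GRID = 6                                   # 6x6 map of nodes
weights = rng.random((GRID, GRID, 5))      # one codebook vector per node
coords = np.array([(i, j) for i in range(GRID) for j in range(GRID)])

# --- Train the SOM: pull each best-matching unit and its neighbors toward x ---
for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)            # decaying learning rate
    radius = max(1.0, GRID / 2 * (1 - epoch / 30))
    for x in profiles:
        d = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        grid_d = np.linalg.norm(coords - np.array(bmu), axis=1).reshape(GRID, GRID)
        influence = np.exp(-(grid_d ** 2) / (2 * radius ** 2))
        weights += lr * influence[..., None] * (x - weights)

def bmu_of(x):
    """Best-matching unit: the map node whose vector is closest to x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

def shortest_path(start, goal):
    """BFS over the 4-connected map grid: the 'shortest path' between clusters."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        i, j = path[-1]
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < GRID and 0 <= nj < GRID and (ni, nj) not in seen:
                seen.add((ni, nj))
                queue.append(path + [(ni, nj)])

respondent = profiles[0]
stable_node = (GRID - 1, GRID - 1)         # hypothetical "island of stability"
path = shortest_path(bmu_of(respondent), stable_node)
print(path)  # sequence of map nodes from the respondent's cluster to stability
```

In a real recommendation engine, each step along the path would correspond to concrete advice that moves the respondent's profile toward the neighboring node's codebook vector.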

To demonstrate this system's potential, Deep Longevity has released a web service, FuturSelf, a free online application that lets users take the psychological test described in the original publication. At the end of the assessment, users receive a report with insights aimed at improving their long-term mental well-being and can enroll in a guidance program that provides them with a steady flow of AI-chosen recommendations. Data obtained on FuturSelf will be used to further develop Deep Longevity's digital approach to mental health.

FuturSelf is a free online mental health service that offers guidance based on a psychological profile assessment by AI. The core of FuturSelf is represented by a self-organizing map that classifies respondents and identifies the most suitable ways to improve one's well-being. Credit: Fedor Galkin

A leading biogerontology expert, professor Vadim Gladyshev from Harvard Medical School, comments on the potential of FuturSelf:

"This study offers an interesting perspective on psychological age, future well-being, and risk of depression, and demonstrates a novel application of machine learning approaches to the issues of psychological health. It also broadens how we view aging and transitions through life stages and emotional states."

The authors plan to continue studying human psychology in the context of aging and long-term well-being. They are working on a follow-up study on the effect of happiness on physiological measures of aging.

The study was funded by the National Institute on Aging.

Reference: "Optimizing future well-being with artificial intelligence: self-organizing maps (SOMs) for the identification of islands of emotional stability" by Fedor Galkin, Kirill Kochetov, Michelle Keller, Alex Zhavoronkov and Nancy Etcoff, 20 June 2022, Aging-US. DOI: 10.18632/aging.204061

See the original post here:

Harvard Developed AI Identifies the Shortest Path to Human Happiness - SciTechDaily

AI Algorithm Predicts Future Crimes One Week in Advance With 90% Accuracy – SciTechDaily

Posted: at 3:56 am

A new computer model uses publicly available data to predict crime accurately in eight cities in the U.S., while revealing increased police response in wealthy neighborhoods at the expense of less advantaged areas.

Advances in artificial intelligence and machine learning have sparked interest from governments that would like to use these tools for predictive policing to deter crime. However, early efforts at crime prediction have been controversial, because they do not account for systemic biases in police enforcement and its complex relationship with crime and society.

University of Chicago data and social scientists have developed a new algorithm that forecasts crime by learning patterns in time and geographic locations from public data on violent and property crimes. It has demonstrated success at predicting future crimes one week in advance with approximately 90% accuracy.

In a separate model, the team of researchers also studied the police response to crime by analyzing the number of arrests following incidents and comparing those rates among neighborhoods with different socioeconomic status. They saw that crime in wealthier areas resulted in more arrests, while arrests in disadvantaged neighborhoods dropped; crime in poor neighborhoods didn't lead to more arrests, suggesting bias in police response and enforcement.

"What we're seeing is that when you stress the system, it requires more resources to arrest more people in response to crime in a wealthy area and draws police resources away from lower socioeconomic status areas," said Ishanu Chattopadhyay, PhD, Assistant Professor of Medicine at UChicago and senior author of the new study, which was published on June 30, 2022, in the journal Nature Human Behaviour.

The new tool was tested and validated using historical data from the City of Chicago around two broad categories of reported events: violent crimes (homicides, assaults, and batteries) and property crimes (burglaries, thefts, and motor vehicle thefts). These data were used because they were most likely to be reported to police in urban areas where there is historical distrust and lack of cooperation with law enforcement. Such crimes are also less prone to enforcement bias, as is the case with drug crimes, traffic stops, and other misdemeanor infractions.

Previous efforts at crime prediction often use an epidemic or seismic approach, where crime is depicted as emerging in hotspots that spread to surrounding areas. These tools miss out on the complex social environment of cities, however, and don't consider the relationship between crime and the effects of police enforcement.

"Spatial models ignore the natural topology of the city," said sociologist and co-author James Evans, PhD, Max Palevsky Professor at UChicago and the Santa Fe Institute. "Transportation networks respect streets, walkways, train and bus lines. Communication networks respect areas of similar socio-economic background. Our model enables discovery of these connections."

The new model isolates crime by looking at the time and spatial coordinates of discrete events and detecting patterns to predict future events. It divides the city into spatial tiles roughly 1,000 feet across and predicts crime within these areas instead of relying on traditional neighborhood or political boundaries, which are also subject to bias. The model performed just as well with data from seven other U.S. cities: Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco.
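The tiling step lends itself to a small sketch. This is only an illustration of the discretization, with random stand-in events, an assumed grid extent, and a naive trailing-mean forecast in place of the paper's actual pattern-learning model:

```python
import numpy as np

rng = np.random.default_rng(1)

TILE_FT = 1000                  # tile edge, roughly matching the paper's description
WEEK_S = 7 * 24 * 3600
NX, NY, NW = 10, 10, 52         # 10x10 tiles over a hypothetical area, one year of weeks

# Hypothetical event log: (x_ft, y_ft, t_seconds) per reported incident.
events = np.column_stack([
    rng.uniform(0, NX * TILE_FT, 5000),
    rng.uniform(0, NY * TILE_FT, 5000),
    rng.uniform(0, NW * WEEK_S, 5000),
])

# Discretize each event into a (tile_x, tile_y, week) cell and count per cell.
ix = np.clip(events[:, 0] // TILE_FT, 0, NX - 1).astype(int)
iy = np.clip(events[:, 1] // TILE_FT, 0, NY - 1).astype(int)
iw = np.clip(events[:, 2] // WEEK_S, 0, NW - 1).astype(int)
counts = np.zeros((NX, NY, NW), dtype=int)
np.add.at(counts, (ix, iy, iw), 1)   # unbuffered add handles repeated cells

# Naive one-week-ahead forecast per tile: mean of the trailing eight weeks.
forecast = counts[:, :, -8:].mean(axis=2)
print(forecast.shape)  # (10, 10): one predicted count per tile
```

The published model learns far richer temporal patterns per tile, but the data structure is the same: a per-tile event-count series rather than counts over neighborhood or political boundaries.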

"We demonstrate the importance of discovering city-specific patterns for the prediction of reported crime, which generates a fresh view on neighborhoods in the city, allows us to ask novel questions, and lets us evaluate police action in new ways," Evans said.

Chattopadhyay is careful to note that the tool's accuracy does not mean that it should be used to direct law enforcement, with police departments using it to swarm neighborhoods proactively to prevent crime. Instead, it should be added to a toolbox of urban policies and policing strategies to address crime.

"We created a digital twin of urban environments. If you feed it data from what happened in the past, it will tell you what's going to happen in the future. It's not magical, there are limitations, but we validated it and it works really well," Chattopadhyay said. "Now you can use this as a simulation tool to see what happens if crime goes up in one area of the city, or there is increased enforcement in another area. If you apply all these different variables, you can see how the system evolves in response."

Reference: "Event-level Prediction of Urban Crime Reveals Signature of Enforcement Bias in U.S. Cities" by Victor Rotaru, Yi Huang, Timmy Li, James Evans and Ishanu Chattopadhyay, 30 June 2022, Nature Human Behaviour. DOI: 10.1038/s41562-022-01372-0

The study was supported by the Defense Advanced Research Projects Agency and the Neubauer Collegium for Culture and Society. Additional authors include Victor Rotaru, Yi Huang, and Timmy Li from the University of Chicago.

View original post here:

AI Algorithm Predicts Future Crimes One Week in Advance With 90% Accuracy - SciTechDaily

The AI ‘gold rush’ in Washington- POLITICO – POLITICO

Posted: at 3:56 am

With help from Ben Schreckinger.

A Skydio R1 drone | AP Photo/Jeff Chiu

AI's little guys are getting into the Washington influence game.

Tech giants and defense contractors have long dominated AI lobbying, seeking both money and favorable rules. And while the largest companies still dominate the debate, pending legislation in Congress aimed at getting ahead of China on innovation, along with proposed bills on data privacy, have caused a spike in lobbying by smaller AI players.

A number of companies focused on robotics, drones and self-driving cars are all setting up their own Washington influence machines, positioning them to shape the future of AI policy to their liking.

A lot of it is spurred by one major piece of legislation: the Bipartisan Innovation Act, commonly referred to as USICA (an acronym for its previous title), and its goal to out-innovate China.

One tech lobbyist, granted anonymity to speak candidly, called AI lobbying on USICA a "gold rush." If the bill passes as currently written, it will bring about $50 billion in extra research spending over the next five years. Senate Majority Leader Chuck Schumer has touted USICA, previously known as the United States Innovation and Competition Act, as the best response to China's technological dominance.

Robotics company iRobot registered The Vogel Group, a major D.C. firm led by former GOP leadership aide Alex Vogel, to lobby for the bill. Argo AI, an autonomous driving technology company, deployed its in-house lobbyists, including the former chief of staff of Rep. Debbie Dingell (D-Mich.) and a former legislative assistant to Sen. Lindsey Graham (R-S.C.), to lobby on supply chain issues within USICA.

Ryan Hagemann, co-director of the IBM Policy Lab, said most of the attention in the AI space "is on USICA legislation right now."

But the expansion in lobbying goes way beyond USICA, and it's about more than chasing after government grants.

The most recent versions of the American Data Privacy and Protection Act and the Algorithmic Accountability Act propose government-mandated impact assessments for any companies that use algorithms. That means companies could suddenly have to turn over audits of their technology to regulators, a lengthy process that some companies argue should only fall to firms that produce high-risk AI, such as facial recognition technology used by police to catch criminals, rather than low-risk AI like chatbots. IBM, for instance, argues that it should not have to perform the same kinds of impact assessments on its general-purpose AI systems as do companies that train AI on their own proprietary data sets.

"It's not a question of who ought to perform the impact assessments, but when the impact assessment should have to occur," Hagemann said.

Merve Hickok, senior research director of the Center for AI and Digital Policy, a non-profit digital rights advocacy group, says there's a lot at stake: only a handful of companies would have to submit algorithmic audits if their lobbying is effective.

"You see a lot of companies, not only big tech but some industry groups as well, pushing and lobbying against these obligations," Hickok said, pointing to efforts underway in Europe.

The definition of what constitutes AI is fuzzy in the first place. But a lot of the companies that use AI to operate their technology such as drone companies are buckling in for a bumpy ride in Washington. Drone company Skydio, seeking more funding for a Federal Aviation Administration training initiative and drone acquisitions by the Defense Department, almost doubled its lobbying spending from $160,000 in 2020 to $304,000 in 2021. Shield AI, which creates artificial intelligence that controls drones for military operations, went from spending $65,000 on lobbying in 2020 to spending more than $1.5 million in 2021, a number that it is on track to exceed this year. Skydio declined to comment and Shield AI did not respond to a request for comment.

Meanwhile, facial recognition companies like Clearview AI are fighting bills that would pause the use of the technology, such as the Facial Recognition and Biometric Technology Moratorium Act. Clearview AI, which has faced enormous scrutiny from lawmakers over its controversial facial recognition technology, spent $120,000 on lobbying in 2021 after registering lobbyists for the first time in May 2021.

Hickok pointed out that U.S. lobbying around AI is still dominated by big companies like Google and Amazon, even with the proliferation of smaller companies registering to lobby. Hickok said that because the U.S. has not passed significant AI regulations, it has become "a testbed, while the corporations are enjoying the benefits."

The financial crisis in crypto markets continues today, with a court in the British Virgin Islands ordering the liquidation of crypto hedge fund Three Arrows Capital.

POLITICO's Sam Sutton reports that two executives from the politically connected consulting firm Teneo will oversee that process.

For investors curious about how and why the fund got to this point, and worried about what could further destabilize crypto markets, a new report today from on-chain analytics firm Nansen traces some of the interconnected moves. "Dominoes are falling," is how Nansen researcher Andrew Thurman summarized it in an email.

The report highlights the role of staked Ether, a derivative of Ether, the second-largest cryptocurrency, issued by Lido Finance. (Staked Ether is not the currency itself, but rather a token that can be redeemed for Ether after the Ethereum network completes a complicated upgrade process.) When times were booming, the market treated staked Ether like it was as good as Ether. But last month, as the algorithmic stablecoin TerraLuna melted down, staked Ether began to trade at a discount to the real thing.
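The discount dynamic can be made concrete with a toy calculation. All prices below are hypothetical, not market data; the point is only that a derivative "as good as" the underlying becomes a realized loss once it trades below parity:

```python
# Hypothetical quotes: before the meltdown, staked Ether (stETH) traded near
# parity with Ether; afterward it traded below it.
eth_price = 1800.00            # assumed ETH price in USD
steth_before = 1795.50         # near parity
steth_after = 1710.00          # trading at a discount

# Discount = 1 - (stETH price / ETH price)
discount_before = 1 - steth_before / eth_price
discount_after = 1 - steth_after / eth_price

print(f"{discount_before:.2%}")  # 0.25%: effectively "as good as Ether"
print(f"{discount_after:.2%}")   # 5.00%: the loss a forced seller realizes
```

A fund that must liquidate at the wider discount, as Three Arrows reportedly did, locks in that gap as a loss regardless of what the token redeems for after the network upgrade.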

Three Arrows had invested in both Luna and staked Ether; after the TerraLuna meltdown, it sold its staked Ether at a loss, said Thurman, and ultimately couldn't recover.

Despite the fears of further contagion provoked by Three Arrows' downfall, the market may now get a respite. Thurman said the on-chain positions of crypto lender Celsius, which recently raised concerns by suspending withdrawals, have improved, and that emergency measures taken by Lido appear to have calmed investors. - Ben Schreckinger

A new GAO report on government use of facial recognition tech found that a slew of federal and state agencies use the technology, and that most of those agencies did not assess the privacy risks associated with it. Fourteen agencies, ranging from NASA to the Department of Justice, use facial recognition to unlock agency-issued smartphones. It's a sign that facial recognition has become so quotidian that it's taken for granted, leading agencies to use it without fully parsing its implications. - Konstantin Kakaes

- Popular apps for tracking pregnancy and ovulation reserve the right to turn user data over to law enforcement, a Forbes analysis found.

- Are cutting-edge technologies just too hard to scale?

- A finance professor offers a world-historical way to think about blockchains.

- It's possible that AI can manufacture ideas

- What does human-centered AI actually look like?

See original here:

The AI 'gold rush' in Washington - POLITICO

How to Make Teachers Informed Consumers of Artificial Intelligence – Market Brief – EdWeek

Posted: at 3:56 am

New Orleans: Artificial intelligence's place in schools may be poised to grow, but school districts and companies have a long way to go before teachers buy into the concept.

At a session on the future of AI in school districts, held at the ISTE conference this week, a panel of leaders discussed its potential to shape classroom experiences and the many unresolved questions associated with the technology.

The mention of AI can intimidate teachers, as it's so often associated with complex code and sophisticated robotics. But AI is already a part of daily life, in the way our phones recommend content to us or the ways that our smart home technology responds to our requests.

When AI is made relatable, that's when teachers buy into it, opening doors for successful implementation in the classroom, panelists said.

"AI sounds so exotic right now, but it wasn't that long ago that even computer science in classrooms was blowing our minds," said Joseph South, chief learning officer for ISTE. South is a former director of the office of educational technology at the U.S. Department of Education.

The first step in getting educators comfortable with AI is to provide them the support to understand it, said Nancye Blair Black, ISTE's AI Explorations project lead, who moderated the panel. That kind of support needs to come from many sources, from federal officials down to the state level and individual districts.

"We need to be talking about, 'What is AI?' and it needs to be explained," she said. "A lot of people think AI is magic, but we just need to understand these tools and their limitations and do more research to get people on board."

With the use of machine learning, AI technologies can adapt to individual students' needs in real time, tracking their progress and providing immediate feedback and data to teachers as well.

In instances where a student may be rushing through answering questions, AI technology can pick up on that and flag the student to slow down, the speakers said. This can provide a level of individual attention that can't be achieved by a teacher who's expected to be looking over every student's shoulder simultaneously.

Others see reasons to be wary of AIs potential impact on teaching and learning. Many ed-tech advocates and academic researchers have raised serious concerns that the technology could have a negative impact on students.

One longstanding worry is that the data AI systems rely on can be inaccurate or even discriminatory, and that the algorithms put into AI programs make faulty assumptions about students and their educational interests and potential.

For instance, if AI is used to influence decisions about which lessons or academic programs students have access to, it could end up scuttling students opportunities, rather than enhancing them.

Nneka McGee, executive director for learning and innovation for the South San Antonio ISD, said during the ISTE panel that a lot more research still has to be done on AI regarding opportunity, data, and ethics.

"Some districts that are more affluent will have more funding, so how do we provide opportunities for all students?" she said.

"We also need to look into the amount of data that is needed and collected for AI to run effectively. Your school will probably need a data-sharing agreement with the companies you work with."

A lot of research needs to be done on AI's data security and accessibility, as well as how to best integrate such technologies across the curriculum, not just in STEM-focused courses.

It's important to start getting educators familiar with AI and how it works, panelists said, because when used effectively, AI can increase student engagement in the classroom and give teachers more time to customize lessons to individual student needs.

As AI picks up momentum within the education sphere, the speakers said that teachers need to start by learning the fundamentals of the technology and how it can be used in their classrooms. But a big share of the responsibility also falls on company officials developing new AI products, Black said.

When asked about advice for ed-tech organizations that are looking to expand into AI capabilities, Black emphasized the need for user-friendliness and an interface that can be seamlessly assimilated into existing curriculum and standards.

"Hand [teachers] something they can use right away, not just another thing to pile on what they already have," she said.

McGee, of the South San Antonio ISD, urges companies to include teachers in every part of the process when it comes to pioneering AI.

"Involve teachers because they're on the front lines; they're the first ones who see our students," she said. "It doesn't matter how much we do out here. If the teacher doesn't believe in what you're bringing to the table, it will not be successful."

Follow EdWeek Market Brief on Twitter @EdMarketBrief or connect with us on LinkedIn.

Photo Credit: International Society for Technology in Education

Go here to see the original:

How to Make Teachers Informed Consumers of Artificial Intelligence - Market Brief - EdWeek
