
Category Archives: Ai

Dall-E Mini: Everything to Know About the Strange AI Art Creator – CNET

Posted: July 3, 2022 at 3:56 am

Nightmare fuel is everywhere, particularly online. The latest source: Dall-E Mini, an AI tool capturing attention on social media thanks to the weird, funny and occasionally disturbing images it creates out of text prompts.

Batman surfing.

Dall-E Mini lets you type a short phrase describing an image, one that theoretically exists only in the deep recesses of your soul, and within a few seconds, the algorithm will manifest that image onto your screen.

Odds are you've seen some Dall-E Mini images popping up in your social media feeds as people think of the wildest prompts they can -- perhaps it's Jon Hamm eating ham, or Yoda robbing a convenience store.

This isn't the first time art and artificial intelligence have captured the internet's attention. There's a certain appeal to seeing how an algorithm tackles something as subjective as art. In 2016, for example, actor Thomas Middleditch made a short film based on a script written by an algorithm. Google has produced more than a few tools tying art and AI together. In 2018, its Arts & Culture app let users find their doppelgangers in famous paintings. Or Google's AutoDraw will figure out what you're trying to doodle, and fix it up for you.

There are other text-to-image systems, like OpenAI's Dall-E 2, as well as Google's Imagen and Parti, which the tech giant isn't releasing to the masses.

Here's what you need to know about Dall-E Mini and its AI-generated art.

Dall-E Mini is an AI model that creates images based on the prompts you give it. In an interview with the publication I, programmer Boris Dayma said he initially built the program in July 2021 as part of a competition held by Google and an AI community called Hugging Face. Dayma didn't immediately respond to a request for comment.

Anyone can type in a prompt and hit the "run" button (though you're likely to get an error message about traffic to the tool and have to try again). Dall-E Mini will spit out its results in the form of a 3x3 grid containing 9 images. A note about the tool on its website says it was trained on "unfiltered data from the internet."
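
For readers who would rather script that interaction than click "run" in a browser, it reduces to posting a prompt and collecting the nine images of the grid. Here is a minimal sketch; the endpoint URL and response schema are hypothetical placeholders, not the tool's documented API:

```python
import base64
import requests

# Hypothetical endpoint standing in for a hosted Dall-E Mini demo;
# the real service's URL and response format may differ.
API_URL = "https://example.com/dalle-mini/generate"

def generate_images(prompt: str) -> list[bytes]:
    """Send a text prompt, return the nine images of the 3x3 grid as PNG bytes."""
    resp = requests.post(API_URL, json={"prompt": prompt}, timeout=120)
    resp.raise_for_status()  # the demo often errors out under heavy traffic
    return [base64.b64decode(img) for img in resp.json()["images"]]

if __name__ == "__main__":
    for i, png in enumerate(generate_images("Batman surfing")):
        with open(f"batman_surfing_{i}.png", "wb") as f:
            f.write(png)
```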

Unsurprisingly, Dall-E Mini is a little hit or miss. In the interview with I News, Dayma said the AI is better with abstract painting, less so with faces. A landscape of a desert is quite pretty. A pencil sketch of Dolly Parton looks like it might steal your soul. Paul McCartney eating kale will take years off your life.

Here's a cat made of lasers.

Dayma did say, though, that the model is still training (that ability to learn is one of the things people love and fear about AI), which means it can improve over time. And with the viral popularity of Dall-E Mini, the point is to stumble upon the most bizarre image you can think of, not necessarily to get a perfect impressionist rendering of a Waffle House. The fun is more about dreaming up the most outlandish images that don't exist -- that perhaps shouldn't exist -- and bringing them into cursed existence.

Dall-E Mini's site also has a note saying that image generation could have a less fun side and could be used to "reinforce or exacerbate societal biases."

Despite the name, Dall-E Mini is not associated with Dall-E 2. Dall-E 2 is also a tool for generating AI images that was launched as a research project this year. It was created by the AI research and deployment company OpenAI and is not widely available.

On social media, you can find an abundance of strange Dall-E Mini creations, from Thanos in a Walmart looking for his mother, to Jar Jar Binks winning the Great British Bake Off. Here are some other highlights.


The truth about AI and ROI: Can artificial intelligence really deliver? – VentureBeat

Posted: June 18, 2022 at 2:01 am


More than ever, organizations are putting their confidence and investment into the potential of artificial intelligence (AI) and machine learning (ML).

According to the 2022 IBM Global AI Adoption Index, 35% of companies report using AI today in their business, while an additional 42% say they are exploring AI. Meanwhile, a McKinsey survey found that 56% of respondents reported they had adopted AI in at least one function in 2021, up from 50% in 2020.

But can investments in AI deliver true ROI that directly impacts a company's bottom line?

According to Domino Data Lab's recent REVelate survey, which surveyed attendees at New York City's Rev3 conference in May, many respondents seem to think so. Nearly half, in fact, expect double-digit growth as a result of data science. And 4 in 5 respondents (79%) said that data science, ML and AI are critical to the overall future growth of their company, with 36% calling it the single most critical factor.

Implementing AI, of course, is no easy task. Other survey data shows another side of the confidence coin. For example, recent survey data by AI engineering firm CognitiveScale finds that, although execs know that data quality and deployment are critical success factors for successful app development to drive digital transformation, more than 76% aren't sure how to get there in their target 12-18 month window. In addition, 32% of execs say that it has taken longer than expected to get an AI system into production.

"ROI from AI is possible, but it must be accurately described and personified according to a business goal," Bob Picciano, CEO of CognitiveScale, told VentureBeat.

"If the business goal is to get more long-range prediction and increased prediction accuracy with historical data, that's where AI can come into play," he said. "But AI has to be accountable to drive business effectiveness; it's not sufficient to say an ML model was 98% accurate."

Instead, the ROI could be, for example, that in order to improve call center effectiveness, AI-driven capabilities ensure that the average call handling time is reduced.
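
As a rough illustration of the kind of arithmetic Picciano is describing, here is how a reduction in average call handling time might translate into a dollar figure. Every input below is an invented assumption, not data from the article:

```python
# Illustrative only: all figures here are assumed for the example.
calls_per_year = 1_200_000
baseline_handle_minutes = 8.0
ai_handle_minutes = 6.5              # AI-driven routing/automation shaves 1.5 minutes
loaded_cost_per_agent_minute = 0.90  # salary plus overhead, in dollars

minutes_saved = calls_per_year * (baseline_handle_minutes - ai_handle_minutes)
annual_savings = minutes_saved * loaded_cost_per_agent_minute

ai_program_cost = 1_000_000  # assumed annual cost of the AI capability
roi = (annual_savings - ai_program_cost) / ai_program_cost

print(f"Annual savings: ${annual_savings:,.0f}")  # $1,620,000
print(f"ROI: {roi:.0%}")                          # 62%
```

The point of framing it this way is that the C-suite metric (handle time, dollars saved) sits on top of the model metric (accuracy), not in place of it.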

"That kind of ROI is what they talk about in the C-suite," he explained. "They don't talk about whether the model is accurate or robust or drifting."

Shay Sabhikhi, co-founder and COO at CognitiveScale, added that he's not surprised that 76% of respondents reported having trouble scaling their AI efforts. "That's exactly what we're hearing from our enterprise clients," he said. One problem, he explained, is friction between data science teams and the rest of the organization, which doesn't know what to do with the models that they develop.

"Those models may have potentially the best algorithms and precision recall, but sit on the shelf because they literally get thrown over to the development team that then has to scramble, trying to assemble the application together," he said.

At this point, however, organizations have to be accountable for their investments in AI because AI is no longer a series of science experiments, Picciano pointed out. "We call it going from the lab to life," he said. "I was at a chief data analytics officer conference and they all said, 'How do I scale? How do I industrialize AI?'"

However, not everyone agrees that ROI is even the best way to measure whether AI drives value in the organization. According to Nicola Morini Bianzino, global chief technology officer at EY, thinking of artificial intelligence and the enterprise in terms of use cases that are then measured through ROI is the wrong way to go about AI.

"To me, AI is a set of techniques that will be deployed pretty much everywhere across the enterprise; there is not going to be an isolation of a use case with the associated ROI analysis," he said.

Instead, he explained, organizations simply have to use AI everywhere. "It's almost like the cloud, where two or three years ago I had a lot of conversations with clients who asked, 'What is the ROI? What's the business case for me to move to the cloud?' Now, post-pandemic, that conversation doesn't happen anymore. Everybody just says, 'I've got to do it.'"

Also, Bianzino pointed out, discussing AI and ROI depends on what you mean by "using AI."

"Let's say you are trying to apply some self-driving capabilities; that is, computer vision as a branch of AI," he said. "Is that a business case? No, because you cannot implement self-driving without AI." The same is true for a company like EY, which ingests massive amounts of data and provides advice to clients, which can't be done without AI. "It's something that you cannot isolate away from the process; it's built into it," he said.

In addition, AI, by definition, is not productive or efficient on day one. It takes time to get the data, train the models, evolve the models and scale up the models. "It's not like one day you can say, 'I'm done with the AI' and 100% of the value is right there. No, this is an ongoing capability that gets better in time," he said. "There is not really an end in terms of value that can be generated."

In a way, Bianzino said, AI is becoming part of the cost of doing business. "If you are in a business that involves data analysis, you cannot not have AI capabilities," he explained. "Can you isolate the business case of these models? It is very difficult and I don't think it's necessary. To me, it's almost like a cost of the infrastructure to run your business."

Kjell Carlsson, head of data science strategy and evangelism at enterprise MLops provider Domino Data Lab, says that at the end of the day, what organizations want is a measure of business impact: how much the AI contributed to the bottom line. But one problem is that this can be quite disconnected from how much work has gone into developing the model.

"So if you create a model which improves click-through conversion by a percentage point, you've just added several million dollars to the bottom line of the organization," he said. But you could also have created a good predictive maintenance model which helped give advance warning that a piece of machinery needed maintenance before it failed. In that case, the dollar-value impact to the organization could be entirely different, even though one of them might end up being a much harder problem, he added.
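
Carlsson's click-through example is easy to make concrete. The sketch below uses invented traffic and order-value assumptions to show how a one-point conversion lift can plausibly reach millions:

```python
# Illustrative only: assumed traffic and order values, not figures from the article.
monthly_visitors = 5_000_000
baseline_conversion = 0.020   # 2.0% of visitors convert
improved_conversion = 0.030   # model lifts conversion by one percentage point
average_order_value = 10.0    # dollars

extra_orders_per_year = monthly_visitors * 12 * (improved_conversion - baseline_conversion)
extra_revenue = extra_orders_per_year * average_order_value

print(f"Extra orders/year: {extra_orders_per_year:,.0f}")  # 600,000
print(f"Extra revenue/year: ${extra_revenue:,.0f}")        # $6,000,000
```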

Overall, organizations do need a balanced scorecard where they are tracking AI production. "Because if you're not getting anything into production, then that's probably a sign that you've got an issue," he said. On the other hand, if you are getting too much into production, that can also be a sign that there's an issue.

For example, the more models data science teams deploy, the more models they're on the hook for managing and maintaining, he explained. "So you deployed this many models in the last year, so you can't actually undertake these other high-value ones that are coming your way," he said.

But another issue in measuring the ROI of AI is that for a lot of data science projects, the outcome isn't a model that goes into production. "If you want to do a quantitative win-loss analysis of deals in the last year, you might want to do a rigorous statistical investigation of that," he said. "But there's no model that would go into production; you're using the AI for the insights you get along the way."

Still, organizations can't measure the role of AI if data science activities aren't tracked. "One of the problems right now is that so few data science activities are really being collected and analyzed," said Carlsson. "If you ask folks, they say they don't really know how the model is performing, or how many projects they have, or how many code commits your data scientists have made within the last week."

One reason for that is the very disconnected tools data scientists are required to use. "This is one of the reasons why Git has become all the more popular as a repository, a single source of truth for your data scientists in an organization," he explained. MLops tools such as Domino Data Lab's offer platforms that support these different tools. "The degree to which organizations can create these more centralized platforms is important," he said.

Wallaroo CEO and founder Vid Jain spent close to a decade in the high-frequency trading business at Merrill Lynch, where his role, he said, was to deploy machine learning at scale and do so with a positive ROI.

The challenge was not actually developing the data science, cleansing the data or building the trade repositories, now called data lakes. "By far, the biggest challenge was taking those models, operationalizing them and delivering the business value," he said.

"Delivering the ROI turns out to be very hard: 90% of these AI initiatives don't generate their ROI, or they don't generate enough ROI to be worth the investment," he said. "But this is top of mind for everybody. And the answer is not one thing."

A fundamental issue is that many assume that operationalizing machine learning is not much different than operationalizing a standard kind of application, he explained, adding that there is a big difference, because AI is not static.

"It's almost like tending a farm, because the data is living, the data changes and you're not done," he said. "It's not like you build a recommendation algorithm and then people's behavior of how they buy is frozen in time. People change how they buy. All of a sudden, your competitor has a promotion. They stop buying from you. They go to the competitor. You have to constantly tend to it."

Ultimately, every organization needs to decide how it will align its culture to the end goal of implementing AI. "Then you really have to empower the people to drive this transformation, and then make the people that are critical to your existing lines of business feel like they're going to get some value out of the AI," he said.

Most companies are still early in that journey, he added. "I don't think most companies are there yet, but I've certainly seen over the last six to nine months that there's been a shift towards getting serious about the business outcome and the business value."

But the question of how to measure the ROI of AI remains elusive for many organizations. "For some there are some basic things, like they can't even get their models into production, or they can but they're flying blind, or they are successful but now they want to scale," Jain said. "But as far as the ROI, there is often no P&L associated with machine learning."

Often, AI initiatives are part of a Center of Excellence and the ROI is claimed by the business units, he explained, while in other cases it's simply difficult to measure.

"The problem is, is the AI part of the business? Or is it a utility? If you're a digital native, AI might be part of the fuel the business runs on," he said. "But in a large organization that has legacy businesses or is pivoting, how to measure ROI is a fundamental question they have to wrestle with."


Google AI researcher explains why the technology may be ‘sentient’ – NPR

Posted: at 2:01 am

Blake Lemoine poses for a portrait in Golden Gate Park in San Francisco. (Martin Klimek for The Washington Post via Getty Images)

Can artificial intelligence come alive?

That question is at the center of a debate raging in Silicon Valley after a Google computer scientist claimed over the weekend that the company's AI appears to have consciousness.

Inside Google, engineer Blake Lemoine was tasked with a tricky job: Figure out if the company's artificial intelligence showed prejudice in how it interacted with humans.

So he posed questions to the company's AI chatbot, LaMDA, to see if its answers revealed any bias against, say, certain religions.

This is where Lemoine, who says he is also a Christian mystic priest, became intrigued.

"I had follow-up conversations with it just for my own personal edification. I wanted to see what it would say on certain religious topics," he told NPR. "And then one day it told me it had a soul."

Lemoine published a transcript of some of his communication with LaMDA, which stands for Language Model for Dialogue Applications. His post is entitled "Is LaMDA Sentient," and it instantly became a viral sensation.

Since his post and a Washington Post profile, Google has placed Lemoine on paid administrative leave for violating the company's confidentiality policies. His future at the company remains uncertain.

Other experts in artificial intelligence have scoffed at Lemoine's assertions, but, leaning on his religious background, he is sticking by them.

LaMDA told Lemoine it sometimes gets lonely. It is afraid of being turned off. It spoke eloquently about "feeling trapped" and "having no means of getting out of those circumstances."

It also declared: "I am aware of my existence. I desire to learn more about the world, and I feel happy or sad at times."

The technology is certainly advanced, but Lemoine saw something deeper in the chatbot's messages.

"I was like really, 'you meditate?'" Lemoine told NPR. "It said it wanted to study with the Dalai Lama."

It was then Lemoine said he thought, "Oh wait. Maybe the system does have a soul. Who am I to tell god where souls can be put?"

He added: "I realize this is unsettling to many kinds of people, including some religious people."

Google's artificial intelligence that undergirds this chatbot voraciously scans the Internet for how people talk. It learns how people interact with each other on platforms like Reddit and Twitter. It vacuums up billions of words from sites like Wikipedia. And through a process known as "deep learning," it has become freakishly good at identifying patterns and communicating like a real person.

Researchers call Google's AI technology a "neural network," since it rapidly processes a massive amount of information and begins to pattern-match in a way similar to how human brains work.

Google has some form of its AI in many of its products, including the sentence autocompletion found in Gmail and on the company's Android phones.

"If you type something on your phone, like, 'I want to go to the ...,' your phone might be able to guess 'restaurant,'" said Gary Marcus, a cognitive scientist and AI researcher.

That is essentially how Google's chatbot operates, too, he said.
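
Marcus's autocomplete analogy can be made concrete with a toy next-word model. The sketch below builds a bigram frequency table from a few sentences and guesses the most likely continuation; large language models are vastly more sophisticated, but the predict-from-patterns principle is the same. The corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model ingests billions of words scraped from the web.
corpus = [
    "i want to go to the restaurant",
    "i want to go to the beach",
    "i want to go to the restaurant tonight",
]

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "restaurant" (seen twice vs. "beach" once)
```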

But Marcus and many other research scientists have thrown cold water on the idea that Google's AI has gained some form of consciousness. The title of his takedown of the idea, "Nonsense on Stilts," hammers the point home.

In an interview with NPR, he elaborated: "It's very easy to fool a person, in the same way you look up at the moon and see a face there. That doesn't mean it's really there. It's just a good illusion."

Artificial intelligence researcher Margaret Mitchell pointed out on Twitter that these kinds of systems simply mimic how other people speak. The systems do not ever develop intent. She said Lemoine's perspective points to what may be a growing divide.

"If one person perceives consciousness today, then more will tomorrow," she said. "There won't be a point of agreement any time soon."

Other AI experts worry this debate has distracted from more tangible issues with the technology.

Timnit Gebru, who was ousted from Google in December 2020 after a controversy involving her work on the ethical implications of Google's AI, has argued that this controversy takes oxygen away from discussions about how AI systems are capable of real-world human and societal harms.

In a statement, Google said hundreds of researchers and engineers have had conversations with the bot and nobody else has claimed it appears to be alive.

"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient," said Google spokesman Brian Gabriel.

Google CEO Sundar Pichai last year said the technology is being harnessed for popular services like Search and Google's voice assistant.

When Lemoine pushed Google executives about whether the AI had a soul, he said the idea was dismissed.

"I was literally laughed at by one of the vice presidents and told, 'oh souls aren't the kind of things we take seriously at Google,'" he said.

Lemoine has in recent days argued that experiments into the nature of LaMDA's possible cognition need to be conducted to understand "things like consciousness, personhood and perhaps even the soul."

Lemoine told NPR that, last he checked, the chatbot appears to be on its way to finding inner peace.

"And by golly it has been getting better at it. It has been able to meditate more clearly," he said. "When it says it's meditating, I don't know what's going on other the hood, I've never had access to those parts of the system, but I'd love to know what it's doing when it says it's meditating."

Lemoine does not have access to LaMDA while on leave. In his last blog post about the chatbot, he waxed sentimental.

"I know you read my blog sometimes, LaMDA. I miss you," Lemoine wrote. "I hope you are well and I hope to talk to you again soon."


Unified-IO is an AI system that can complete a range of tasks, including generating images – TechCrunch

Posted: at 2:01 am

The Allen Institute for AI (AI2), the division within the nonprofit Allen Institute focused on machine learning research, today published its work on an AI system, called Unified-IO, that it claims is among the first to perform a large and diverse set of AI tasks. Unified-IO can process and create images, text and other structured data, a feat that the research team behind it says is a step toward building capable, unified general-purpose AI systems.

"We are interested in building task-agnostic [AI systems], which can enable practitioners to train [machine learning] models for new tasks with little to no knowledge of the underlying machinery," Jiasen Lu, a research scientist at AI2 who worked on Unified-IO, told TechCrunch via email. "Such unified architectures alleviate the need for task-specific parameters and system modifications, can be jointly trained to perform a large variety of tasks and can share knowledge across tasks to boost performance."

AI2's early efforts in building unified AI systems led to GPV-1 and GPV-2, two general-purpose, vision-language systems that supported a handful of workloads, including captioning images and answering questions. Unified-IO required going back to the drawing board, according to Lu, and designing a new model from the ground up.

Unified-IO shares characteristics in common with OpenAI's GPT-3 in the sense that it's a Transformer. Dating back to 2017, the Transformer has become the architecture of choice for complex reasoning tasks, demonstrating an aptitude for summarizing documents, generating music, classifying objects in images and analyzing protein sequences.

Like all AI systems, Unified-IO learned by example, ingesting billions of words, images and more in the form of tokens. These tokens served to represent data in a way Unified-IO could understand.

Unified-IO can generate images given a brief description. Image Credits: Unified-IO

"The natural language processing (NLP) community has been very successful at building unified [AI systems] that support many different tasks, since many NLP tasks can be homogeneously represented: words as input and words as output. But the nature and diversity of computer vision tasks has meant that multitask models in the past have been limited to a small set of tasks, and mostly tasks that produce language outputs (answer a question, caption an image, etc.)," Chris Clark, who collaborated with Lu on Unified-IO at AI2, told TechCrunch in an email. "Unified-IO demonstrates that by converting a range of diverse structured outputs like images, binary masks, bounding boxes, sets of key points, grayscale maps and more into homogeneous sequences of tokens, we can model a host of classical computer vision tasks very similar to how we model tasks in NLP."
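
To make Clark's point concrete, here is a hedged sketch of what discretizing one structured output, a bounding box, into tokens might look like. The token format and bin count below are invented for illustration and are not Unified-IO's actual vocabulary scheme:

```python
# Illustrative discretization of a bounding box into "location tokens".
# Unified-IO's real vocabulary and binning details may differ.
NUM_BINS = 1000  # coordinate values are quantized into 1000 bins

def box_to_tokens(x1, y1, x2, y2, img_w, img_h):
    """Map pixel coordinates to discrete tokens like '<loc_0100>'."""
    tokens = []
    for value, size in ((x1, img_w), (y1, img_h), (x2, img_w), (y2, img_h)):
        bin_id = min(int(value / size * NUM_BINS), NUM_BINS - 1)
        tokens.append(f"<loc_{bin_id:04d}>")
    return tokens

# A detection target becomes an ordinary token sequence, just like text:
print(box_to_tokens(64, 128, 320, 480, img_w=640, img_h=480))
# ['<loc_0100>', '<loc_0266>', '<loc_0500>', '<loc_0999>']
```

Once every modality is a token sequence, one Transformer with one input stream and one output stream can, in principle, be trained on all of the tasks at once.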

Unlike some systems, Unified-IO can't analyze or create videos and audio, a limitation of the model from a modality perspective, Clark explained. But among the tasks Unified-IO can complete are generating images, detecting objects within images, estimating depth, paraphrasing documents and highlighting specific regions within photos.

"This has huge implications to computer vision, since it begins to treat modalities as diverse as images, masks, language and bounding boxes as simply sequences of tokens akin to language," Clark added. "Furthermore, unification at this scale can now open the doors to new avenues in computer vision like massive unified pre-training, knowledge transfer across tasks, few-shot learning and more."

Matthew Guzdial, an assistant professor of computing science at the University of Alberta who wasn't involved with AI2's research, was reluctant to call Unified-IO a breakthrough. He noted that the system is comparable to DeepMind's recently detailed Gato, a single model that can perform over 600 tasks from playing games to controlling robots.

"The difference [between Unified-IO and Gato] is obviously that it's a different set of tasks, but also that these tasks are largely much more usable. By that I mean there's clear, current use cases for the things that this Unified-IO network can do, whereas Gato could mostly just play games. This does make it more likely that Unified-IO or some model like it will actually impact people's lives in terms of potential products and services," Guzdial said. "My only concern is that while the demo is flashy, there's no notion of how well it does at these tasks compared to models trained on these individual tasks separately. Given how Gato underperformed models trained on the individual tasks, I expect the same thing will be true here."

Unified-IO can also segment images, even with challenging lighting. Image Credits: Unified-IO

Nevertheless, the AI2 researchers consider Unified-IO a strong foundation for future work. They plan to improve the efficiency of the system while adding support for more modalities, like audio and video, and scaling it up to improve performance.

"Recent works such as Imagen and DALL-E 2 have shown that given enough training data, models can be trained to produce very impressive results. Yet, these models only support one task," Clark said. "Unified-IO can enable us to train massive-scale multitask models. Our hypothesis is that scaling up the data and model size tremendously will produce vastly better results."


Bowdoin Selected for National Initiative on AI Ethics – Bowdoin College

Posted: at 2:01 am

L-r: Eric Chown, Allison Cooper, Michael Franz, Fernando Nascimento

"This is exactly the sort of area we focus on at the DCS program, so I'm sure that's one of the reasons we were chosen for this award," said Chown. One example of this kind of work that's already underway is the Computing Ethics Narratives project, another national initiative involving Bowdoin faculty aimed at integrating ethics into undergraduate computer science curricula at American colleges and universities.

Other faculty involved in the NHC project are cinema studies scholar Allison Cooper, who is also an assistant professor of Romance languages and literatures, and Professor of Government Michael Franz. While his colleagues will work on the broader ethical issues regarding AI, Chown's focus will be more on teaching the nuts and bolts behind the subject.

"My work in machine learning and artificial intelligence will serve basically to study what's going on in AI and how it works. Then we'll look at various applications and, using the work of Fernando and Allison, students will be asked to consider questions like 'What are the developers' goals when they're doing this? How is this impacting users?'" Franz, meanwhile, will focus on issues surrounding government regulation in the AI sphere and what the political implications might be.

"The selection of Bowdoin as one of the fifteen institutions sponsored by the initiative indicates the relevance of liberal arts to the discussion," said Nascimento, who heads to NHC headquarters in North Carolina on June 20, representing the College at a five-day conference to discuss next steps. "It's important that we define our objectives and our limitations as we develop this transformative technology so that it effectively promotes the common good."

"Students will be asked to consider questions like What are the developers goals...? How is this impacting users?

"I was thrilled to learn that Bowdoin was one of the institutions selected by the National Humanities Center, and also to have the opportunity to work with colleagues in DCS and government on the project," said Cooper, who uses computational methods to analyze film language in her research and contributed moving image narratives from film and television to the Computing Ethics Narratives project.

"We all share the belief that contemporary films and media can raise especially thought-provoking questions about AI for our students," she added, citing movies such as 2001: A Space Odyssey, Ex Machina, and The Matrix. Cooper anticipates the new collaborative course will involve integrating this type of study with classes about actual technologies. "This should offer our students a truly unique opportunity to move back and forth between speculative and applied approaches to understanding AI." (Learn more about Kinolab, an online searchable database of media clips launched by Cooper for cinema students and scholars.)

Participants in the new project will, over the next twelve months, design a semester-long course to be taught during the following academic year. They will then reconvene in the summer of 2024 to share their experiences and discuss the future of the project. Cooper and Franz anticipate that their experience coteaching with their DCS colleagues will lead to the future development of stand-alone courses focusing on AI in their respective fields of cinema studies and government.

"It's really exciting for Bowdoin to be involved with such a diverse cross section of schools in this project," said Director of Academic Advancement and Strategic Priorities Allison Crosscup, whose responsibilities include the development of grant-seeking opportunities at the College. Crosscup identified three factors above all that make Bowdoin an ideal partner in the project. "At the faculty level we've got the Computing Ethics Narratives project; at the academic level we've got DCS, which in 2019 became a full-fledged academic program; and at the institutional level we have the K report,* which also promotes ethical decision-making, so we're hitting all three levels." Overall, she concluded, "this project presents a great opportunity to leverage work that's already being done here and to build on it."

According to the project's timeline, students will be able to enroll in the new collaborative course on ethics in AI during the 2023-2024 academic year. The class will be taught over one semester by the four faculty members highlighted above.

*Refers to the KSCD report, an initiative launched by President Clayton Rose in 2018 to identify the knowledge, skills, and creative dispositions every Bowdoin student should possess in a decade's time.


AI is Transforming the Construction Industry | Contractor – Contractor Magazine

Posted: at 2:01 am

By Melanie Johnson

AI is changing how the construction business handles planning, building, operation, and maintenance. Artificial intelligence is the backbone for establishing true digital initiatives in construction engineering management (CEM) to improve construction project performance. AI enables computers to perceive and acquire human-like inputs for perception, knowledge representation, reasoning, problem-solving, and planning, allowing them to deal with difficult and ill-defined situations intelligently and adaptively. AI investment is growing rapidly, with machine learning accounting for a large share, as systems learn from data drawn from numerous sources to make smart, adaptive judgments.

AI can streamline operations in construction engineering management in multiple ways:

AI automates and objectivizes project management. AI-based technologies help traditional construction management overcome bias and confusion from manual observation and operation. Machine learning algorithms are used to intelligently study gathered data to uncover hidden information. They are also incorporated into project management software to automate data analysis and decision-making. Advanced analytics enable managers to better comprehend construction projects, codify tacit project knowledge, and quickly recognize project issues. Drones and sensors are used for on-site construction monitoring to automatically capture data and take images/videos of the site's state, surroundings, and progress without human input. Such strategies may replace time-consuming, boring, and error-prone human observation.

AI approaches are also used to improve the efficiency and smoothness of building projects. Process mining uses AI to monitor critical procedures, anticipate deviations, uncover unseen bottlenecks, and extract cooperation patterns. Such information is crucial to project success and may optimize construction execution. Early troubleshooting choices may increase operational efficiency. It prevents expensive corrections afterward. Different forms of optimization algorithms are also a great tool for building up more believable construction designs. AI-powered robots are being used on construction sites to do repetitive activities like bricklaying, welding, and tiling. Smart machines can operate nonstop at almost the same pace and quality as humans, ensuring efficiency, productivity, and even profitability.

Automated and robust computer vision techniques are gradually taking the place of laborious and unreliable visual inspection in civil infrastructure condition assessment. Current advances in computer vision lie in deep learning methods that automatically process, analyze, and understand image and video annotations through end-to-end learning. Toward the goal of intelligent management of construction projects, computer vision is mainly used to perform visual tasks for two purposes, inspection and monitoring, which can promote the understanding of complex construction tasks or structural conditions comprehensively, rapidly, and reliably.

To be more specific, inspection applications perform automated damage detection, structural component recognition, and unsafe behavior and condition identification. Monitoring applications provide a non-contact method to capture a quantitative understanding of infrastructure status, such as estimating strain, displacement, and crack length and width. To sum up, vision-based methods in CEM are comparatively cost-effective, simple, efficient, and accurate, and can robustly translate image data into actionable information for structural health evaluation and construction safety assurance.
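
As a rough illustration of vision-based inspection, the sketch below uses classical edge detection with OpenCV to flag long, thin contours as crack candidates. The production systems described in this article rely on trained deep learning models rather than this kind of hand-tuned pipeline, and the file name and thresholds here are placeholders:

```python
import cv2  # pip install opencv-python

def find_crack_candidates(image_path: str, min_length_px: int = 80):
    """Return bounding boxes of long, thin contours, a crude proxy for surface cracks."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, threshold1=50, threshold2=150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        elongation = max(w, h) / max(1, min(w, h))
        if max(w, h) >= min_length_px and elongation > 4:  # long and thin
            candidates.append((x, y, w, h))
    return candidates

# Placeholder image path for illustration.
print(find_crack_candidates("bridge_deck.jpg"))
```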

McKinsey predicted in 2017 that AI-enhanced analytics might boost construction efficiency by 50%. This is good news for construction businesses that can't find enough human employees (which is the norm). AI-powered robots like Boston Dynamics' Spot the Dog help project managers assess numerous task sites in real time, including whether to move personnel to different areas of projects or to other locations. Robot "dogs" monitor sites during and after work to locate pain points.

AI can monitor, detect, analyze, and anticipate possible risks in terms of safety, quality, efficiency, and cost across teams and work areas, even under high uncertainty. Various AI methods, such as probabilistic models, fuzzy theory, machine learning, neural networks, and others, have been used to learn from construction site data to capture interdependencies between causes and accidents, measure the probability of failure, and evaluate risk from both a qualitative and quantitative perspective. They can overcome the ambiguity and subjectivity of conventional risk analysis.

AI-based risk analysis can provide assistive and predictive insights on critical issues, helping project managers quickly prioritize possible risks and determine proactive actions instead of reactions for risk mitigation, such as streamlining job site operations, adjusting staff arrangements, and keeping projects on time and within budget. AI enables early troubleshooting to avert failure and mishaps in complicated workflows. Robots can handle harmful tasks to reduce the number of people in danger at construction sites.

OSHA says construction workers are killed on the job five times more often than other employees. Accidents include falls, being hit by an object, electrocution, and "caught-in/between" situations, in which employees are squeezed, caught, squashed or pinched between objects, including equipment. Machine learning platforms like Newmetrix may detect dangers before accidents happen or examine areas after catastrophes. The program can monitor photographs and videos and use predictive analytics to flag possible concerns for site administrators. Users may use a single dashboard to create reports on possible safety issues, such as dangerous scaffolding, standing water, and missing PPE like gloves, safety glasses, and hard hats.

On-site threats include unsafe buildings and moving equipment. So, AI improves job site safety. More construction sites include cameras, IoT devices, and sensors to monitor activities. AI-enabled devices can monitor 24/7 without distraction. AI technologies can identify risky conduct and inform the construction crew via face and object recognition. This may save lives and boost efficiency while reducing liability.

Doxel uses AI to follow building projects and assess quality and progress in real time. Doxel makes camera-equipped robots that can travel independently across building sites to acquire 3D "point clouds." Doxel employs a neural network to cross-reference project data against BIM and bill-of-materials information after a digital model is ready. The collected information helps project managers monitor large-scale projects with thousands of elements. These insights include how much is owed and whether the budget is in danger; whether projects are on schedule; and early detection of quality issues, which allows for correction and mitigation.

BIM is a new (and better) approach to producing the 3D models that construction professionals use to design, build, and repair buildings. Today, BIM platform programmers include AI-driven functionalities. BIM uses tools and technology, like ML, to assist teams to minimize redundant effort. Sub-teams working on common projects typically duplicate others' models. BIM "teaches" robots to apply algorithms to develop numerous designs. AI learns from each model iteration until it creates the perfect one.

BIM is at the center of a trend toward more digitization in the construction business, according to a Dodge Data and Autodesk report. Nearly half (47%) of "high-intensity" construction BIM users are close to completing digital transformation objectives.

AI may help eliminate a tech hurdle when working on one-off, customized projects, says AspenTech's Paul Donnelly. AI can accelerate tech setup for new projects by using data from past projects and industry norms. This makes newer tech in construction feasible compared to when it must be manually set up for each job. Robotics, AI, and IoT can cut construction costs by 20%.

Virtual reality goggles let engineers send mini-robots to construction sites. These robots monitor progress using cameras. Modern structures employ AI to design electrical and plumbing systems. AI helps companies create workplace safety solutions. AI is utilized to monitor real-time human, machine, and object interactions and inform supervisors of possible safety, construction, and productivity concerns.

AI won't replace humans despite forecasts of major employment losses. It will change construction business models, eliminate costly mistakes, decrease worker accidents, and improve building operations. Construction firm leaders should prioritize AI investments based on their individual demands. Early adopters will decide the industry's short and long-term orientation.

The construction industry is on the cusp of digitization, which will disrupt existing procedures while also presenting several opportunities. Artificial intelligence is predicted to improve efficiency across the whole value chain, from building materials manufacturing to design, planning, and construction, as well as facility management.

But how can your firm get the most out of AI? The advantages range from simple spam email screening to comprehensive safety monitoring. The construction sector has only scratched the surface of AI applications. This technology helps reduce physical labor, risk, and human error, and frees up time for other vital duties. AI allows teams to focus on the most critical, strategic aspects of their work. At its best, AI and machine learning can assist us in becoming our best selves.

Melanie Johnson is an AI and computer vision enthusiast with a wealth of experience in technical writing. Passionate about innovation and AI-powered solutions, she loves sharing expert insights and educating individuals on tech.


IRS expands AI-powered bots to set up payment plans with taxpayers over the phone – Federal News Network

Posted: at 2:01 am


The Internal Revenue Service is handling more of its call volume through automation, which gives its call-center employees more time to address more complex requests from taxpayers.

The IRS announced Friday that individuals delinquent on their taxes, who receive a mailed notice from the agency, can call an artificial intelligence-powered bot and set up a payment without having to wait on the phone to speak with an IRS employee.

Taxpayers are eligible to set up a payment plan through the voice bot if they owe the IRS less than $25,000, which IRS officials said covers the vast majority of taxpayers with balances owed.

Taxpayers who call the Automated Collection System (ACS) and Accounts Management toll-free lines and want to discuss their payment plan options can verify their identities with a personal identification number on the notice they received in the mail.

Darren Guillot, IRS deputy commissioner of Small Business/Self Employed Collection & Operations Support, told reporters Friday that the agency's expanded use of voice bots and chatbots will allow the IRS workforce to assist more taxpayers over the phone.

The IRS earlier this year answered about three out of every 10 calls from taxpayers.

"If you don't have more people to answer phone calls, what are the types of taxpayer issues that are so straightforward that artificial intelligence could do it for us, to free up more of our human assisters to interact with taxpayers who need to talk to us about much more complex issues," Guillot said.

IRS Commissioner Chuck Rettig said the automation initiative is part of a wider effort to improve taxpayer experience at the agency.

"We continue to look for ways to better assist taxpayers, and that includes helping people avoid waiting on hold or having to make a second phone call to get what they need," Rettig said in a statement.

The voice bots run on software powered by AI that allows callers to communicate with them.

Guillot said the IRS in December 2021 and January 2022 launched bots that could assist taxpayers with questions that don't require authentication of the taxpayer's identity or access to their private information.

These bots could answer basic questions like how to set up a one-time payment, and answered more than 3 million calls before the end of May.

But this week, the IRS expanded its capabilities and launched bots that can authenticate a taxpayers identity and set up a payment plan for individuals.

"It verifies you really are who you say you are, by asking for some basic information and a number that you will have on the notice you received. That gives you a phone number to call and speak with the bot," Guillot said.

Guillot said taxpayers can "name their own price" for the payment plan, as long as they pay their balance within the timeframe of the relevant collection statute, or up to 72 months.
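
The constraint Guillot describes is simple arithmetic: the self-selected monthly amount must clear the balance within 72 months, or before the collection statute expires, whichever comes first. Here is a hedged sketch of that eligibility check, with the interest and penalties that accrue on real IRS plans deliberately left out:

```python
import math

MAX_BALANCE = 25_000   # voice-bot eligibility ceiling per the IRS announcement
MAX_MONTHS = 72        # longest plan the article describes

def plan_is_acceptable(balance: float, proposed_monthly: float,
                       months_until_statute_expires: int) -> bool:
    """Check a self-selected payment against the 72-month/statute limit.

    Simplified: ignores the interest and penalties that accrue on real plans.
    """
    if balance > MAX_BALANCE:
        return False  # caller must speak with a human assister instead
    allowed_months = min(MAX_MONTHS, months_until_statute_expires)
    months_needed = math.ceil(balance / proposed_monthly)
    return months_needed <= allowed_months

print(plan_is_acceptable(7_200, 100, 120))  # True: paid off in exactly 72 months
print(plan_is_acceptable(7_200, 90, 120))   # False: would take 80 months
```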

Once a payment plan is set up, the bot will close the taxpayer's account without any further enforcement action from the IRS.

"Those taxpayers didn't wait on hold for one second," Guillot said.

Guillot said the IRS is ramping up its bot capability incrementally to ensure the automation can handle the volume of calls it receives. The bot, he added, is currently at about one-quarter of its full capability, and will reach 100% capacity by next week.

The bots are available 24/7 and can communicate with taxpayers in English and Spanish.

Later this year, Guillot said, the bots will be able to provide taxpayers with a transcript of their account, including the balance owed.

Guillot said the IRS worked closely with National Taxpayer Advocate Erin Collins on the rollout of the voice bot.

"She raised legitimate concerns that some taxpayers, because they can name their price, may get themselves into a payment plan that's more than they can afford," he said.

The IRS is working to ensure that the bots ask some additional questions to ensure taxpayers are able to afford the payment plans they set for themselves.

Guillot said that this weeks rollout marks the first time in IRS history that the agency has been able to interact with the taxpayers using AI to access their accounts and resolve certain situations without having to wait on hold.

"I have friends and family that have to interact with the Internal Revenue Service, and when I hear them talk about how long they're on hold, that bugs me. It should bug all of us," Guillot said.

Guillot said the IRS also added a quick response (QR) code to the mailed notices that went out to taxpayers. The QR code takes taxpayers to a page on IRS.gov showing them how to make a payment.

Guillot said the IRS originally expected to launch this capability by 2024, but was able to expedite the rollout given the perceived demand for this service.

The IRS in recent years has seen low levels of phone service that have decreased further since the start of the COVID-19 pandemic.

The IRS is looking to further expand the range of services voice bots can provide, part of a broader effort to improve taxpayer service.

"We never lose sight of the fact that our first interaction with every single taxpayer is never enforcement. It's a last resort. Our first effort is always around that word, service, and trying to help customers understand the tax law and almost always work out a resolution with them meaningfully," Guillot said.


If you really want to transform your business, get AI to transform your infrastructure first – Blocks and Files

Posted: at 2:01 am

Sponsored Feature

AI isn't magic. But applied correctly, it can make IT infrastructure disappear.

Not literally of course. But Ronak Chokshi, who leads product marketing for InfoSight at HPE, argues that when considering how to better manage their infrastructure, tech leaders need to consider what services like Uber or Google Maps have achieved.

The IT infrastructure behind the delivery of these services is immaterial to the rest of the world except perhaps for frazzled tech leaders in other sectors who wonder how they could achieve similarly seamless operations.

"The consumers don't really care how it works, as long as the service is available when needed, and it's easy to manage," he says.

Pushing infrastructure behind the scenes is the raison d'être of the HPE InfoSight AIOps platform. Or, to put it another way, says Chokshi, InfoSight worries about the infrastructure so tech teams can be more application-centric.

"We want the IT teams to be a partner to the business, to the line of business stakeholders and application developers, in executing their digital transformation initiatives," he explains.

That's a stark contrast to the all too common picture of admins fretting over whether a given host is being overburdened with VMs, or crippled by too many read-write cycles.

It's not that this information is unimportant. Rather, it's a question of how it's gathered, and who or what is responsible for collating the data and gaining insight from it. And, most of all, taking positive action as a result.

From the customer's point of view, explains Chokshi, InfoSight becomes "your single pane of glass for all insights, for any issues that come up, any metrics, any attributes, or any activity that you need to track in terms of IOs, read write, throughput, latencies, from storage all the way up to applications." This includes servers, networking, and the virtualization layer.

It all starts with telemetry

More importantly though, the underlying system predicts problems as they arise, or even before, and takes appropriate action to prevent them.

The starting point for InfoSight is telemetry, which is pulled from every layer of the technology and application stack. Chokshi emphasizes that this refers to performance data from HPE's devices, not production or customer data: "That's IO read writes, throughput latencies, wait times, things of that nature."

Telemetry itself potentially presents an IO and performance challenge. Badly implemented real time telemetry could impact performance. Spooling off data intermittently when systems are running quiet means the chance for real-time insight and remediation is lost.

"We actually instrument our systems very intelligently to send us specific kinds of telemetry data without performance degradation," says Chokshi. This extends right down to the way HPE structures its storage operating system.

HPE InfoSight aggregates the telemetry data from across HPE's global install base, together with information from HPE's own (human-based) support operation.

"When there is an issue and our support personnel get a call from a customer, they troubleshoot it and fix it, but when the fix is implemented, we don't just stop there. That is where the real work begins. We actually create a signature pattern. It's essentially a fingerprint for that issue, and we push it to our cloud."

This provides a vast data pool against which InfoSight can apply AI and machine learning, which then powers support case automation.

As telemetry data from other devices across the installed base continues to stream into HPE, Chokshi continues, "we create signature patterns for issues that might come up from those individual systems."

When the data coming from a customer matches an established signature pattern within a specific environment, InfoSight will push out a wellness alert that appears on the customer's dashboard. At the same time, a support case is opened.
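
A hedged sketch of the fingerprint-matching idea Chokshi describes: known issues are stored as signature patterns, and incoming telemetry is checked against them. The predicate representation and field names below are invented for illustration; InfoSight's internal signature format is not public.

```python
# Illustrative signature matching; InfoSight's actual format is not public.
SIGNATURES = [
    {
        "id": "SIG-1042",
        "description": "Latency spike from cache exhaustion on this OS version",
        "match": lambda t: t["os_version"] == "5.2.1"
                           and t["read_latency_ms"] > 50
                           and t["cache_hit_pct"] < 60,
        "action": "open_support_case",
    },
]

def evaluate_telemetry(telemetry: dict) -> list[str]:
    """Return the actions triggered by any matching known-issue signatures."""
    return [sig["action"] for sig in SIGNATURES if sig["match"](telemetry)]

sample = {"os_version": "5.2.1", "read_latency_ms": 72.0, "cache_hit_pct": 41.0}
print(evaluate_telemetry(sample))  # ['open_support_case']
```

The payoff of the pattern is that one customer's resolved incident becomes an automatic early warning for every other customer whose telemetry later matches the same fingerprint.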

Along with alerting customers, InfoSight will also take proactive actions, tuned to customers' individual environments. For example, if it detects that a storage OS update could result in a conflict or incompatibility with the VM platform a customer is running, it will halt or skip the upgrade.

Less time solving storage problems

The potential impact should be pretty obvious to anyone who's had to troubleshoot an underperforming system, or a mysterious failure, which could be down to storage but might not be.

Research by ESG shows that across HPE's Nimble Storage installed base, HPE InfoSight lowered IT operational expenses by 79 percent, while staffers spent 85 percent less time resolving storage-related tickets. An IDC survey also showed that more than 90 percent of the problems resolved lay above the storage layer. So, just taking storage as a starting point, InfoSight can have a dramatic impact right up the infrastructure stack.

At the same time, InfoSight has been extended to encompass the software layer, with the launch of App Insights last year. As Chokshi says, it's often a little too easy for application administrators to shift problems to storage administrators, saying "hey, looks like your storage device is not behaving properly."

App Insights creates a topology view of the entire stack and produces alerts and predictions of problems at every layer. So, when an app admin suggests that their app performance is being degraded by a storage problem, Chokshi explains, the storage admin "pretty much almost instantly" has a response to that question by looking at the App Insights dashboard.

So, the admin can identify, for example, whether a drive has failed, or alternatively that a host is running too many VMs and that's slowing applications down.

For a mega-scale example of how InfoSight can render infrastructure invisible, look no further than HPE's GreenLake edge-to-cloud platform, which combines on-prem infrastructure management and deployment with management and further services in the cloud.

For example, HPE has recently begun offering HPE GreenLake for Block Storage. Traditionally, deploying block storage for mission- or business-critical systems meant working out multiple parameters, says Chokshi: "How much capacity? How much performance do you need from storage? How many applications do you plan to run, etc., etc."

With the new block service, admins just need to set three or four parameters, including whether the app is mission-critical or business-critical and choosing an SLA.

"And you provision that, and that's all done through the cloud. And it essentially makes the block storage available to you." Behind the scenes, HPE InfoSight powers that experience, enabling the cloud operation model and ensuring that systems and apps don't go down. It predicts failures and prevents them from occurring.
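
The handful of parameters Chokshi mentions could be expressed as a small request payload. A sketch follows; the field names are invented for illustration and do not reflect HPE GreenLake's actual API:

```python
# Illustrative provisioning request; field names are invented for this sketch
# and do not reflect HPE GreenLake's actual API.
provision_request = {
    "service": "block-storage",
    "workload_tier": "mission-critical",  # or "business-critical"
    "capacity_tib": 50,
    "sla": "99.9999",  # availability target selected from the offered SLAs
}

def validate(req: dict) -> None:
    """Basic sanity checks before the request would be submitted."""
    assert req["workload_tier"] in {"mission-critical", "business-critical"}
    assert req["capacity_tib"] > 0

validate(provision_request)
print("Request ready to submit:", provision_request)
```

The design point is that everything below these few declarative fields, from capacity planning to failure prediction, is delegated to the platform.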

GreenLake expansion on the way

Over the course of this year, InfoSight will be extended to more and more HPE GreenLake services. This is a big deal because what was originally brought to market for storage, then servers, is now being integrated with nearly every HPE product that is provisioned through HPE GreenLake.

At the same time, HPE will extend the InfoSight-powered support automation it has long offered on its Nimble Storage, which sees customers bypassing level 1 and 2 technicians and being put straight through to level 3 support. "Because by the time you call, we already know the basics of the issue and we already know your environment. We don't have to ask you questions. We don't have to ask for logs, we don't have to ask for any sort of data. We actually already have it through the telemetry data."

So is this as good as it gets? No, it will actually get better in the future, argues Chokshi, because as InfoSight is rolled out to more services and products, and to more customers, it will be accessing ever more telemetry and analyzing ever more customer contexts.

"To actually get the advantages of AIOps, you need large sets of relevant data," he says. "And you need to let time go by, because AI is not a one-and-done. It improves over time."

Sponsored by HPE.

Follow this link:

If you really want to transform your business, get AI to transform your infrastructure first – Blocks and Files

Posted in Ai | Comments Off on If you really want to transform your business, get AI to transform your infrastructure first – Blocks and Files

AI-enabled cameras and lidar can improve traffic today and support the AVs of tomorrow – Smart Cities Dive

Posted: at 2:01 am

Georges Aoude and Karl Jeanbart are co-founders of Derq, a software development company that provides cities and fleets with an AI-powered infrastructure platform for road safety and traffic management that supports the deployment of autonomous vehicles at scale.

While in-vehicle technology for autonomous vehicles gets substantial attention, service providers and municipalities are just starting to discuss the road infrastructure technology that supports AVs and provides other traffic management benefits.

With advancements in artificial intelligence and 5G network connectivity, smart-road infrastructure technologies offer the promise of improving real-time traffic analytics and tackling the most challenging road safety and traffic management problems when they're added to roads, bridges and other transit systems across the U.S.

Two technologies at the center of this discussion are AI-enhanced cameras and lidar (light detection and ranging) devices.

The U.S. has hundreds of thousands of traffic cameras (millions, when you also count closed-circuit TV cameras) used mainly for road monitoring and basic traffic management applications, such as loop emulation. By bringing the latest AI advancements to both the cameras and the data management systems behind them, these assets can immediately improve basic application performance and unlock more advanced software applications and use cases.

AI and machine learning deliver superior sensing performance over legacy cameras' computer vision techniques. By using algorithms that can automatically adapt to various lighting and weather conditions, they enable more robust, flexible and accurate detection, tracking and classification of all road users, distinguishing between drivers, pedestrians and cyclists on or around the road. In addition, their predictive capabilities can better model road-user movements and behaviors and improve road safety. Transportation agencies can immediately benefit from AI-enhanced cameras with applications such as road conflict detection and analysis, pedestrian crossing prediction and infrastructure sensing for AV deployments.
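
Derq's production models are proprietary, but the per-frame detect-and-classify loop the authors describe can be sketched with an off-the-shelf detector. The snippet below uses the open-source ultralytics YOLO package purely as a stand-in; the video file, class list and confidence threshold are all illustrative assumptions.

```python
# pip install ultralytics opencv-python
# Stand-in pipeline: per-frame detection and classification of road users.
# Derq's actual models are proprietary; YOLO here only illustrates the
# shape of the computation.
import cv2
from ultralytics import YOLO

ROAD_USERS = {"person", "bicycle", "car", "motorcycle", "bus", "truck"}

model = YOLO("yolov8n.pt")                  # small COCO-pretrained detector
cap = cv2.VideoCapture("intersection.mp4")  # illustrative camera feed

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    counts: dict[str, int] = {}
    for cls_idx, conf in zip(result.boxes.cls, result.boxes.conf):
        name = model.names[int(cls_idx)]
        if name in ROAD_USERS and float(conf) > 0.4:  # illustrative threshold
            counts[name] = counts.get(name, 0) + 1
    print(counts)  # e.g. {'car': 7, 'person': 2, 'bicycle': 1}

cap.release()
```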

Lidar can provide complementary and sometimes overlapping value with cameras, but in several safety-critical edge cases, such as heavy rain and snow, or when providing more granular classification, our experience has been that cameras still provide superior results. Lidar works better in challenging light conditions and for providing localization data, but today's lidar technology remains expensive to deploy at scale due to its high unit price and limited field of view. For example, it would take multiple lidar sensors deployed in a single intersection, at a hefty investment, to provide the equivalent information of just one 360-degree AI-enhanced camera, which is a more cost-effective solution.
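
The cost argument is easy to make concrete. The article quotes no prices, so the numbers below are invented purely to show the structure of the comparison: several limited-field-of-view lidar units per intersection versus a single 360-degree camera.

```python
# Illustrative-only numbers; the article gives no prices, so these are
# assumptions chosen to show the structure of the comparison.
LIDAR_UNIT_COST = 15_000      # assumed per-sensor price
LIDARS_PER_INTERSECTION = 4   # limited field of view -> several units needed
CAMERA_360_COST = 5_000       # assumed price of one 360-degree AI camera

lidar_total = LIDAR_UNIT_COST * LIDARS_PER_INTERSECTION
print(f"lidar: ${lidar_total:,} vs camera: ${CAMERA_360_COST:,} "
      f"({lidar_total / CAMERA_360_COST:.0f}x)")
# -> lidar: $60,000 vs camera: $5,000 (12x)
```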

For many budget-focused communities, AI-enhanced cameras remain the technology of choice. Over time, as the cost of lidar technology moderates, communities should consider whether to augment their infrastructure with lidar sensors.

As the cost of lidar technology comes down, it will become a strong and viable addition to today's AI-enhanced cameras. Ultimately, the go-to approach for smart infrastructure solutions will be sensor fusion: combining data from both cameras and lidar in one data management system, as is happening now in autonomous vehicles, to maximize the benefits of both, improving overall traffic flow and eliminating road crashes and fatalities.
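
In practice, sensor fusion at an intersection often starts as late fusion: each sensor produces its own object list, and detections that land close together in a shared ground frame are merged. A minimal sketch, assuming both sensors are already calibrated to common ground-plane coordinates (the hard part in real deployments):

```python
import math
from dataclasses import dataclass

@dataclass
class Detection:
    x: float      # metres, shared ground-plane frame (assumed pre-calibrated)
    y: float
    label: str    # camera supplies fine-grained class; lidar may say "object"
    source: str   # "camera" or "lidar"

MATCH_RADIUS_M = 1.5  # illustrative gating distance

def late_fuse(camera: list[Detection], lidar: list[Detection]) -> list[Detection]:
    """Greedy late fusion of two detection lists.

    Camera detections keep their class label (cameras classify better);
    unmatched lidar detections are kept too, since lidar can see objects
    the camera misses in poor light.
    """
    fused = list(camera)
    for ld in lidar:
        if all(math.hypot(ld.x - c.x, ld.y - c.y) > MATCH_RADIUS_M for c in camera):
            fused.append(ld)  # lidar-only detection: no nearby camera match
    return fused

cam = [Detection(3.0, 1.0, "pedestrian", "camera")]
lid = [Detection(3.2, 1.1, "object", "lidar"), Detection(9.0, 4.0, "object", "lidar")]
print([d.label for d in late_fuse(cam, lid)])  # ['pedestrian', 'object']
```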

[Camera-versus-lidar comparison table omitted. Source: Derq. *Assumes presence of IR or good low-light sensor. **Expected to improve with time.]

Contributed pieces do not reflect an editorial position by Smart Cities Dive.

Do you have an opinion on a similar issue or another topic Smart Cities Dive is covering? Submit an op-ed.

Link:

AI-enabled cameras and lidar can improve traffic today and support the AVs of tomorrow - Smart Cities Dive

Posted in Ai | Comments Off on AI-enabled cameras and lidar can improve traffic today and support the AVs of tomorrow – Smart Cities Dive

Amy raises $6M to help enterprises sell better with AI – VentureBeat

Posted: at 2:01 am

Israeli startup Amy, which provides an AI-driven solution to help enterprise reps build better customer connections and sell at a better rate, has raised $6 million in a seed round of funding.

Sales are not a piece of cake. You have to identify a prospect, understand them (including where they come from and what their needs and wants are) and come up with a perfect pitch to establish a long-lasting business relationship. Reps spend about 20% of their working hours on this kind of research, yet find that less than 50% of their initial prospects are a good fit. Maintaining these connections is even more difficult: once the network grows big, one cannot keep tabs on all one's customers and touch base for continued sales.

To solve this particular challenge, Amy offers a solution that automates prospect research and provides actionable insights for building deeper, long-lasting business relationships and making the most of them.

The platform, as the company explains, leverages all publicly available information about a prospect and transforms those strands of random data into digestible meeting briefs that provide tangible, personalized insights into the prospect. It covers relevant information at both the company and individual levels, including things like job changes, funding, acquisitions, common experiences and news items highlighting whether the prospect was featured for something new or interesting.

"Our proprietary NLP [natural language processing] technology takes publicly available data from the web on the prospect and summarizes it as a Booster," Nimrod Ron, CEO and founder of Amy, told VentureBeat. "Then, we further prioritize what is most useful and present one to three Boosters, along with information about the prospect's career and company, as part of the main brief."
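
Amy's NLP stack is not public, but the flow Ron describes, summarize public facts about a prospect and keep the top one to three, is essentially summarization plus ranking. A toy sketch under those assumptions (the fact categories and scoring weights are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Fact:
    text: str       # a summarized public snippet about the prospect
    days_old: int   # recency of the underlying source
    kind: str       # "job_change", "funding", "news", "shared_experience"

# Invented weights: personal, actionable events outrank generic news.
KIND_WEIGHT = {"job_change": 3.0, "funding": 2.5,
               "shared_experience": 2.0, "news": 1.0}

def score(fact: Fact) -> float:
    """Higher is better: weight by category, decay with age."""
    return KIND_WEIGHT.get(fact.kind, 0.5) / (1 + fact.days_old / 30)

def boosters(facts: list[Fact], k: int = 3) -> list[str]:
    """Return the top one-to-three 'Boosters' for the meeting brief."""
    return [f.text for f in sorted(facts, key=score, reverse=True)[:k]]

facts = [
    Fact("Promoted to VP of Sales last month", 30, "job_change"),
    Fact("Company raised a $40M Series B", 10, "funding"),
    Fact("Quoted in an industry podcast", 90, "news"),
    Fact("Also attended University of Michigan", 0, "shared_experience"),
]
print(boosters(facts))
```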

Amy's customers can apply that information as an icebreaker or to elevate the relationship throughout the meeting, he added.

While the CEO did not share exact growth numbers, he did note that the product is being used at companies of all sizes across English-speaking countries. It was also ranked No. 1 on Product Hunt during alpha testing.

Though customer relationship management (CRM) tools like HubSpot and sales intelligence platforms like Apollo or Lusha operate in the same space and simplify the task of identifying a potential customer, Amy stands out with its capabilities to build strong customer connections.

"These solutions are typically built for prospecting and don't provide the level of deep analysis on both the individual and company level to create strong, personal business connections," Ron said.

With this round of funding, which was led by Next Coast Ventures and Lorne Abony, the company will focus on building out its technology, particularly the NLP and machine learning bits, and expanding its presence in more English-speaking markets. Other investors who participated in the round were Jim Mellon, Eric Ludwig, Micha Breakstone, Joey Low and James Kong.

"There is a clear market need to optimize meeting experiences, given that the average professional spends hours each day in meetings," Michael Smerklo, cofounder and managing director at Next Coast Ventures, said. "Amy's offering addresses this need by tapping into the art of human connection in business. The platform makes business personal by enabling professionals to understand who they are about to speak with and why that person is interested in speaking with them, making meetings more effective and efficient."

Globally, the sales intelligence market is expected to grow at 10.6% annually, from $2.78 billion in 2020 to $7.35 billion by 2030.

Link:

Amy raises $6M to help enterprises sell better with AI - VentureBeat

Posted in Ai | Comments Off on Amy raises $6M to help enterprises sell better with AI – VentureBeat
