Two UWF teams place in top 5 in national artificial intelligence competition – University of West Florida Newsroom – UWF Newsroom

Hosted by the Naval Information Warfare Center Pacific and the Naval Science, Technology, Engineering, and Mathematics Coordination Office, the competition drew 32 teams from public and private institutions across the country. Various research institutions, including those from the Ivy League, Historically Black Colleges and Universities, and Hispanic-Serving Institutions, competed in the $200,000-prize challenge.

UWF graduate computer science students Tobias Jacob, Raffaele Galliera and Muddasar Ali placed third, winning $35,000 for the UWF Department of Computer Science. The students participated in the competition as members of UWF's AI and Data Analytics (AIDA) Research Group. Dr. Thomas Reichherzer, chair of computer science, served as the sponsor and Dr. Sikha Bagui, professor in computer science, served as the faculty advisor for the AIDA Research Group. UWF computer science major Zach Mueller, a machine learning intern at Novetta Solutions LLC, mentored the students.

"When we found out we finished third, we couldn't believe it," said Galliera, who, like Ali, is pursuing dual master's degrees from UWF and Ferrara University in Italy. "That whole morning, we talked about the things we went through during the challenge. We were so happy and so proud of what we accomplished."

The second UWF team, ArgoTracks, finished fifth and secured $20,000 for the Department of Intelligent Systems & Robotics. The team consisted of Bhavyansh Mishra, a doctoral student in intelligent systems & robotics, and mechanical engineering majors Brendon Ortolano and Luke Fina. Dr. Hakki Erhan Sevil, intelligent systems & robotics assistant professor, served as their sponsor and faculty advisor. The students are members of the Sevil Research Group at UWF. UWF alumnus Carson Wilber, a research associate at Florida Institute for Human & Machine Cognition, mentored the students.

ArgoTracks formed its team a month after the challenge started. Mishra learned about the challenge from Wilber and then contacted Sevil who assisted him in finding teammates. The students put in long hours to catch up and submitted their entry just a few hours before the deadline.

"Computer vision is my bread and butter, so I just hopped onto it as soon as Carson told me," Mishra said. "Considering the fact we only had one month compared to most teams that had two months, we felt good about where we placed."

Each team was tasked with developing a computer vision system capable of plotting the tracks of shipping traffic exclusively using the passive sensing capability of a single onboard camera. Current traffic avoidance software relies on an automatic identification system and radar for tracking other craft and avoiding collisions. In a contested environment, emitting radar energy presents a vulnerability to detection by adversaries.

Organizers provided each team a dataset consisting of recorded camera imagery of vessel traffic along with the recorded GPS track of a vessel of interest that was seen in the imagery. Submitted solutions were evaluated against additional camera data correlated to recorded vessel tracks. The same vessel and the same instrumentation were used in both the competition dataset and the judging dataset. Judging was based on track accuracy and overall processing time.
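The article doesn't publish the organizers' exact scoring formula, but the track-accuracy half of the judging can be sketched as comparing a predicted track against the recorded GPS truth point by point. The function names, the haversine-based metric, and the sample coordinates below are illustrative assumptions, not the competition's actual code:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius, metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def mean_track_error_m(predicted, truth):
    """Mean positional error in metres between two time-aligned (lat, lon) tracks."""
    errors = [haversine_m(p[0], p[1], t[0], t[1]) for p, t in zip(predicted, truth)]
    return sum(errors) / len(errors)

# A predicted track that drifts slightly east of the recorded GPS track.
truth = [(32.70, -117.23), (32.70, -117.22), (32.70, -117.21)]
pred = [(32.70, -117.2301), (32.70, -117.2202), (32.70, -117.2102)]
print(mean_track_error_m(pred, truth))  # mean offset, in metres
```

A lower mean error wins on accuracy; the processing-time criterion would simply be wall-clock time over the judging imagery.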

The two UWF teams were among only five that submitted solutions that worked. Ten of the teams submitted solutions that only partially worked or failed to work, and the remaining teams failed to submit solutions.

"It was really difficult because the data they gave us wasn't preprocessed, the camera wasn't calibrated and we didn't have a lot of data," said Jacob, who is pursuing a master's degree at UWF after earning his bachelor's degree from RWTH Aachen University in Germany. "We had many moments where we thought, 'OK, this isn't going to work,' and then we always found a way to make it work."

For more information on the AI Tracks at Sea Challenge, visit challenge.gov/challenge/AI-tracks-at-sea/. For more information on UWF's Department of Computer Science, visit uwf.edu/computerscience. For more information on UWF's Department of Intelligent Systems and Robotics, visit uwf.edu/isr.


Off The Menu: Artificial intelligence lends hand in recipe development – MassLive.com

Among the most significant technological advances of the last few decades, artificial intelligence (AI) and robotics have the potential to revolutionize the restaurant industry. Already, automation is making its way into fast food kitchens, where it's taking on repetitive tasks such as flipping burgers and working the fry station.

AI, the smart technology that powers robocalls and helps forecasting models predict the weather, may also soon play a role in the food service industry, not just by taking on simple tasks but also by dealing with higher-order responsibilities like ordering food and writing menus.

OpenAI, a San Francisco-based software company working toward artificial general intelligence (AGI), recently put its GPT-3 software to the test. GPT-3 is a third-generation, deep learning language model trained on a vast swath of internet text, which it draws upon to develop answers to user queries.

OpenAI put GPT-3 to the test by asking it to develop recipes based on simple language requests like "beef bourguignon" and "Mexican lasagna." The recipes GPT-3 compiled were then prepared and evaluated by a group of volunteers. Well-known recipes developed by the likes of Julia Child, Wolfgang Puck, and Rachael Ray served as benchmarks for the evaluators.

Though GPT-3 produced some interesting results, its recipes, with one exception, were not scored as high as those developed by human chefs.

Nonetheless, the study illustrated AI's potential to take over higher-order tasks like menu development and recipe creation. Thus the day might not be very far off when the product development chef at a restaurant chain is actually a piece of AI software.

For the full report on OpenAI's "AI vs. Famous Chef Recipes" culinary challenge, go to refluxgate.com/ai-vs-famous-chef-recipes.

Winter is a season during which restaurants have traditionally promoted game dinners. This year, given the unique circumstances under which we are all living our lives, those sorts of events aren't easy to put together.

Delaney's Market has developed a strategy by which the Log Cabin-Delaney Group can deliver a socially-distanced game dinner experience.

Delaney's Market locations will be featuring a Game Dinner at Home this month. The four-course meal includes bison meatballs, a venison hunter's stew, a wild pheasant turnover, and a wildberry cobbler with whipped cream.

Each take-home package is designed to serve two, and Delaney's Market is providing a cooking video to help those receiving the package finish the meal preparation.

Contact one of the three Delaney's Market locations (Longmeadow, Westfield, or Wilbraham) on Wednesday, Feb. 17 to order the Game Dinner package, which will be ready for pickup on Saturday, Feb. 20.

Pancake Sundaes Diner and Bakery in Westfield, a family-owned breakfast and lunch restaurant, has been turning out its own unique style of morning food since it opened in 2015.

Run by the husband-and-wife team of Frank and Shelly Baldwin, Pancake Sundaes is currently limiting its operation to Saturdays and Sundays from 7 a.m. to 1:30 p.m.

Pandemic-constrained operating hours aren't curbing Frank Baldwin's creativity, however. Every weekend he puts together an inventive menu of breakfast specialties to supplement Pancake Sundaes' basic repertoire.

Offerings can include the likes of bacon-chocolate chip pancakes, apple crisp French toast, and Baldwin's "Dirty Philly" omelet that's filled with shaved ribeye, sauteed onions, and fried peppers.

There's usually an exhaustive list of eggs Benedict variations; homemade corned beef hash and crispy Homies are menu regulars. Each weekend's specials can be found on Pancake Sundaes' Facebook page, Facebook.com/pancakesundaes.

The restaurant, which is currently offering limited indoor dining as well as contactless to-go service, answers at (413) 572-6832.

Max's Tavern at the Naismith Memorial Basketball Hall of Fame in Springfield will be presenting its winter food and wine pairing dinner, "Cabs & Slabs," on Thursday, Feb. 25.

The dinner this year is different from past such events. In addition to Napa Valley Cabernet varietals, the Cabs include a Washington State vintage by Canvasback Winery of Red Mountain, WA.

Max's Tavern chef Nathaniel Waugaman's menu for the evening has a game-dinner sensibility, featuring Wagyu beef tartare, braised wild boar shank, Denver lamb ribs, and a grilled bison strip loin.

The five-course menu will also include a chocolate raspberry mousse bar for dessert.

Reservations for the dinner are available from 5:30 p.m. on through the evening. Cost to attend is $115 per person, not including tax or gratuity.

Call (413) 746-6299 for reservations.

The Munich Haus German Restaurant in Chicopee will be featuring a Valentine's Dinner menu on Feb. 12-14.

Available as either a dine-in or a take-home option, the menu includes an appetizer, a choice of any two schnitzel or chicken entrees with side dishes and salad, and a house-made dessert (either red velvet cheesecake or a Black Forest cake heart) to share.

Upgrades are available, including a sausage sampler, salmon filet, or filet mignon option. Selected wines by the bottle are also available.

Reservations for on-premises dining are required, and take-home packages must be pre-ordered. Contact the Munich Haus German Restaurant at (413) 594-8788 for more information.

February limited-time offerings at participating Dunkin' locations are, not surprisingly, Valentine's Day-themed.

The chain is offering two heart-shaped donut selections: a brownie batter donut filled with brownie-flavored buttercream, and a "Cupid's Choice" donut filled with Bavarian kreme and iced with pink, strawberry-flavored icing.

Featured beverages this month include a mocha macchiato and a pink velvet macchiato that features red velvet flavoring. Both drinks are available either hot or iced.

Participating McDonald's restaurants are spicing up mid-winter by bringing back Spicy Chicken McNuggets, a menu item that was last featured in fall 2020.

Mighty Hot Sauce (spicy, garlicky, and slightly sweet) will also be available for the duration of this limited-time offering.

The Spicy McNuggets feature a tempura-style coating enlivened with cayenne and chile pepper. Pricing is the same as for the chain's regular McNuggets.

Partners Restaurant in Feeding Hills will be hosting dinner by candlelight on Valentine's Day weekend. For dine-in guests, Mark and Sue Tansey have put together a special prix-fixe, four-course dinner for Friday and Saturday evenings, Feb. 12 and 13. Three dinner-to-go packages will also be available.

The dine-in menu, which will be available from 5 p.m. to 9 p.m. both evenings, includes a choice from among five entree options: braised short ribs, grilled salmon, chicken saltimbocca, ricotta ravioli, and filet mignon Oscar. Reservations are required for socially-distanced, on-premises dining.

Finish-at-home dinners include short ribs, chicken saltimbocca, or seafood casserole; sides, salad, and a dessert selection are included. Takeout orders must be placed by Thursday, Feb. 11.

More details on these special Valentine's Day offerings can be found at the restaurant's Facebook page, facebook.com/Partners.RestaurantCatering.

Partners Restaurant answers at (413) 786-0975.

Chez Josef in Agawam is offering a delivery or pickup date night this year in the form of a Valentine's Dinner for Two To-Go.

The all-inclusive, heat-at-home package includes a selection of hors d'oeuvres, a salad course, and a choice of two entrees.

Main course selections include filet mignon, parmesan chicken breast, seared sea bass, or lentil-stuffed sweet pepper. A surf and turf upgrade is also available. Dessert is part of the take-home package, as is a bottle of house wine.

In addition, Chez Josef is offering individual meal selections as well as a brunch box that can be customized to serve either two or four.

Curbside pickup is available at Chez Josef's Agawam location; local delivery is also available. An online ordering platform is available at linktr.ee/chez2go; questions about menus, pricing, and delivery area can also be phoned in to (413) 355-5393.

Hugh Robert is a faculty member in Holyoke Community College's hospitality and culinary arts program and has nearly 45 years of restaurant and educational experience. Robert can be reached online at OffTheMenuGuy@aol.com.


Artificial intelligence must not be allowed to replace the imperfection of human empathy – The Conversation UK

At the heart of the development of AI appears to be a search for perfection. And it could be just as dangerous to humanity as the one that came from philosophical and pseudoscientific ideas of the 19th and early 20th centuries and led to the horrors of colonialism, world war and the Holocaust. Instead of a human ruling master race, we could end up with a machine one.

If this seems extreme, consider the anti-human perfectionism that is already central to the labour market. Here, AI technology is the next step in the premise of maximum productivity that replaced individual craftsmanship with the factory production line. These massive changes in productivity and the way we work created opportunities and threats that are now set to be compounded by a fourth industrial revolution in which AI further replaces human workers.

Several recent research papers predict that, within a decade, automation will replace half of the current jobs. So, at least in this transition to a new digitised economy, many people will lose their livelihoods. Even if we assume that this new industrial revolution will engender a new workforce that is able to navigate and command this data-dominated world, we will still have to face major socioeconomic problems. The disruptions will be immense and need to be scrutinised.

The ultimate aim of AI, even narrow AI which handles very specific tasks, is to outdo and perfect every human cognitive function. Eventually, machine-learning systems may well be programmed to be better than humans at everything.

What they may never develop, however, is the human touch: empathy, love, hate, or any of the other self-conscious emotions that make us human. That's unless we ascribe these sentiments to them, which is what some of us are already doing with our Alexas and Siris.

The obsession with perfection and hyper-efficiency has had a profound impact on human relations, even human reproduction, as people live their lives in cloistered, virtual realities of their own making. For instance, several US and China-based companies have produced robotic dolls that are selling out fast as substitute partners.

One man in China even married his cyber-doll, while a woman in France married a robo-man, advertising her love story as a form of "robo-sexuality" and campaigning to legalise her marriage. "I'm really and totally happy," she said. "Our relationship will get better and better as technology evolves." There seems to be high demand for robot wives and husbands all over the world.

In the perfectly productive world, humans would be accounted as worthless, certainly in terms of productivity but also in terms of our feeble humanity. Unless we jettison this perfectionist attitude towards life that positions productivity and material growth above sustainability and individual happiness, AI research could be another link in the long chain of self-defeating human inventions.

Already we are witnessing discrimination in algorithmic calculations. Recently, a popular South Korean chatbot named Lee Luda was taken offline. She was modelled after the persona of a 20-year-old female university student and was removed from Facebook Messenger after using hate speech towards LGBT people.

Meanwhile, automated weapons programmed to kill are carrying maxims such as "productivity" and "efficiency" into battle. As a result, war has become more sustainable. The proliferation of drone warfare is a very vivid example of these new forms of conflict. They create a virtual reality that is almost beyond our grasp.

But it would be comical to depict AI as an inevitable Orwellian nightmare of an army of super-intelligent Terminators whose mission is to erase the human race. Such dystopian predictions are too crude to capture the nitty-gritty of artificial intelligence, and its impact on our everyday existence.

Societies can benefit from AI if it is developed with sustainable economic development and human security in mind. The confluence of power and AI that pursues, for example, systems of control and surveillance should not substitute for the promise of a humanised AI that puts machine learning technology in the service of humans, and not the other way around.

To that end, the AI-human interfaces that are quickly opening up in prisons, healthcare, government, social security and border control, for example, must be regulated to favour ethics and human security over institutional efficiency. The social sciences and humanities have a lot to say about such issues.

One thing to be cheerful about is the likelihood that AI will never be a substitute for human philosophy and intellectuality. To be a philosopher, after all, requires empathy, an understanding of humanity, and our innate emotions and motives. If we can programme our machines to understand such ethical standards, then AI research has the capacity to improve our lives which should be the ultimate aim of any technological advance.

But if AI research yields a new ideology centred around the notion of perfectionism and maximum productivity, then it will be a destructive force that will lead to more wars, more famines and more social and economic distress, especially for the poor. At this juncture of global history, this choice is still ours.


Artificial intelligence and the Gamestonk blowback – The Next Web

Surrounded by rallies of "power to the people," a rag-tag group of scrappy underdogs recently managed to bring Wall Street to its knees through a dazzling display of disobedient investing that saw Gamestop stock rocket moonward. This unprecedented seizure of power by the proletariat has been lauded far and wide as a smack in the mouth for the establishment. Some say it's a warning shot to the financial kings and queens of the Earth.

The Gamestonk legend will be told for years to come; Hollywood's already making sure of that. But the story is far from done. As any fan of epic cinema knows: anytime there's a New Hope, you'd better believe the Empire will Strike Back.

I'm not being subtle here because the story's pretty simple: rebel investors beat Wall Street at its own game this time. If you need an explainer on what happened, here's a great in-depth one.

Here are the bullet points:

And here's a brief description of shorting stocks from TNW's Ivan Mehta:

A vague definition of shorting is betting against a company in the share market. Shorters borrow a company's stock from someone, sell it to another investor, and wait for that stock's price to go down. Then the shorter will buy them back for a lesser amount, return these stocks to borrowers, and keep the difference.
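Mehta's definition boils down to simple arithmetic, sketched below. The share counts, prices, and fee are hypothetical, purely to illustrate the mechanics and the unbounded downside that a squeeze exploits:

```python
def short_profit(shares, sell_price, buyback_price, borrow_fee=0.0):
    """Profit from a short sale: sell borrowed shares high, buy them back low.

    A negative result is a loss -- and because the buyback price is
    unbounded, so is the potential loss (the 'short squeeze' risk).
    """
    return shares * (sell_price - buyback_price) - borrow_fee

# The bet pays off when the price falls...
print(short_profit(100, 20.0, 5.0))    # 1500.0
# ...but a squeezed price of $300 turns the same position into a rout.
print(short_profit(100, 20.0, 300.0))  # -28000.0
```

That asymmetry is why coordinated buying that pushes the price up, as with Gamestop, is so devastating to a heavily shorted position.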

The big deal here is that the Redditors know Gamestop stock is basically worthless. The company is failing and no amount of continued investment is likely to see it thrive under its current business model. So why did people (not hedge funds) decide to throw their hard-earned money at a failing stock?

The answer is that it's not really about Gamestop: they figured out that big leaguers were shorting it and saw a way to legally exploit that to make some money. We're talking big money too; one investor turned $50K into over $40 million (as of two days ago).

In response, the financial institutions (including Robinhood, the app most of the Redditors traded on) blocked further investors from purchasing new stock. This seems like a rather small reaction, especially when you consider that hedge funds have lost somewhere around $5 billion (yes, that's billion with a B).

So, aside from leaning on the scale and/or paying the US government to enact more big league-friendly laws, how do the hedge funds and financial institutions fight back? The answer is to get ahead of these rallies using brute force AI.

A team of researchers from the University of Göttingen recently converted an algorithmic approach to fighting fake news into a method for detecting online market manipulation.

Per a university press release:

In order to detect false information (often fictitious data that presents a company in a positive light), the scientists used machine learning methods and created classification models that can be applied to identify suspicious messages based on their content and certain linguistic characteristics. "Here we look at other aspects of the text that makes up the message, such as the comprehensibility of the language and the mood that the text conveys," says Professor Jan Muntermann from the University of Göttingen.

The relevance here is that Gamestonk didn't happen as a result of small-time investment firms fighting against their bigger cousins. Gamestonk was a meme on a message board.

Hedge funds are the result of market-savvy companies throwing around enough money to get rich in the margins. Gamestonk was a direct assault on that strategy and a catastrophe-level event for everyone in the hedge fund community.

The big deal here is that the collective will of the online financial-meme community is a serious threat to hedge funds and other areas of the market that can be influenced near-equally by large groups of investors and whales. Traditionally, the little person hasn't had the ability to influence the market so quickly, but brokerless, real-time, automated trading apps have shifted the balance of power towards whoever can get the word out fastest.

And that's where AI comes in. In combating fake news, AI systems look for keywords and phrases and then flag those for human investigators.

The problem with this approach, especially if you want to apply it to finding fake news being used to manipulate markets, is that those propagating the message can simply identify what keywords and phrases are triggering the AI. Once they know what the machines are looking for they can remove and replace those words and slip past detection.

The Göttingen team's work seeks to thwart these kinds of efforts by searching for the semantics and context surrounding the phrases and keywords that identify a post as fake news. In other words: the AI looks for tell-tale signs surrounding fake statements, as well as the statements themselves.
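As a toy illustration of that idea (this is not the Göttingen team's actual model; the keyword list, feature set, and weights below are invented for the example), a scorer that mixes keyword hits with stylistic signals is harder to evade by word substitution alone, because swapping out trigger words leaves the tone and typography intact:

```python
import re

# Hypothetical pump-rhetoric keywords -- purely illustrative.
PUMP_KEYWORDS = {"undervalued", "moon", "squeeze", "yolo"}

def style_features(text):
    """Extract signals that survive simple keyword substitution."""
    words = re.findall(r"[A-Za-z']+", text)
    n = max(len(words), 1)
    return {
        "keyword_hits": sum(w.lower() in PUMP_KEYWORDS for w in words),
        "caps_ratio": sum(w.isupper() and len(w) > 1 for w in words) / n,
        "exclaim_density": text.count("!") / n,
    }

def suspicion_score(text):
    """Combine keyword and stylistic evidence into a single score (weights invented)."""
    f = style_features(text)
    return f["keyword_hits"] + 5 * f["caps_ratio"] + 10 * f["exclaim_density"]

hype = "GME is UNDERVALUED!!! TO THE MOON! Massive squeeze incoming!"
plain = "The quarterly report shows declining revenue at the retailer."
print(suspicion_score(hype) > suspicion_score(plain))  # True
```

A real classifier would learn such weights from labelled data rather than hand-set them, but the principle is the same: context and style, not keywords alone, drive the decision.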

It's important to understand that, in the case of something like the Gamestonk market manipulation, when we're talking about "fake news," we're simply referring to the meme that Gamestop was undervalued. The usefulness of a system like this would be in uncovering memes and calls-to-investment as they occur on social media so hedge fund managers could adjust before the markets are manipulated. This could lead to counter-shorters losing their entire stakes while the billionaires regulate the market again.

You can read the whole paper here. Though it's not specifically aimed at situations like Gamestonk, it's easy to see how it could be used to suss out such manipulations as quickly as possible. It might feel like a game of cat-and-mouse right now, but when billionaires fire back they tend to use every possible method at their disposal.

Wall Street has entered the meme wars, and it's bringing AI to the party.

Published February 1, 2021 20:07 UTC


Artificial Intelligence in Policing Is the Focus of Encode Justice – Teen Vogue

Nijeer Parks was bewildered when he was arrested and taken into custody in February 2019. Apparently, he'd been accused of shoplifting and attempting to hit a police officer with a car at a Hampton Inn, as the New York Times reported. But Woodbridge, New Jersey, where the crime had taken place, was 30 miles from his home, and Parks had neither a car nor a driver's license at the time, according to NBC News. Court documents indicated that he had no idea how he'd been implicated in a crime he knew he didn't commit until he discovered that the case against him was based solely on a flawed facial-recognition match. According to a December report by the Times, this was the third known instance of a wrongful arrest caused by facial recognition in the U.S. All three of those victims were Black men.

Algorithms failed Parks twice: First, he was mistakenly identified as the suspect; then, he was robbed of due process and jailed for 10 days at the recommendation of a risk assessment tool used to assist pretrial release decisions. These tools have been adopted by courts across the country despite evidence of racial bias and a 2018 letter signed by groups like the ACLU and NAACP cautioning against their use. At one point, Parks told the Times, he even considered pleading guilty. The case was ultimately dropped, but he's now suing the Woodbridge Police Department, the city of Woodbridge, and the prosecutors involved in his wrongful arrest.

These are the costs of algorithmic injustice. We're approaching a new reality, one in which machines are weaponized to undermine liberty and automate oppression with a pseudoscientific rubber stamp; in which opaque technology has the power to surveil, detain, and sentence, but no one seems to be held accountable for its miscalculations.


U.S. law enforcement agencies have embraced facial recognition as an investigative aid in spite of a 2018 study from MIT that discovered software error rates ranging from 0.8% for light-skinned men to 34.7% for dark-skinned women. In majority-Black Detroit, the police chief approximated a 96% error rate in his department's software last year (though the company behind the software told Vice they don't keep statistics on the accuracy of its real-world use), but he still refuses a ban.

Artificial intelligence (AI) works by supplying a computer program with historical data so it can deduce patterns and extrapolate from those patterns to make predictions independently. But this often creates a feedback loop of discrimination. For example, so-called predictive policing tools are purported to identify future crime hot spots and optimize law enforcement resource allocation, but because training data can reflect racially disparate levels of police presence, they may merely flag Black neighborhoods irrespective of a true crime rate. This is exactly what Minority Report warned us about.
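That feedback loop can be made concrete with a deliberately simplified simulation (the model and numbers are invented for illustration, not drawn from any real predictive-policing system): if recorded crime scales with both a district's true rate and how heavily it is patrolled, and patrols are then reallocated in proportion to recorded crime, an initial skew never corrects itself.

```python
def simulate_patrols(true_rates, initial_patrols, rounds=50):
    """Toy model of a predictive-policing feedback loop.

    Recorded crime in a district scales with both its true rate and its
    patrol level (more officers on the ground record more incidents).
    Each round, patrols are reallocated in proportion to recorded crime,
    so an arbitrary initial skew is locked in even when the underlying
    true rates are identical.
    """
    patrols = list(initial_patrols)
    n = len(patrols)
    for _ in range(rounds):
        recorded = [rate * p for rate, p in zip(true_rates, patrols)]
        total = sum(recorded)
        patrols = [n * r / total for r in recorded]
    return patrols

# Two districts with identical true crime rates; district 0 starts with
# heavier patrol coverage, and the model never corrects the imbalance.
final = simulate_patrols(true_rates=[1.0, 1.0], initial_patrols=[1.2, 0.8])
print(final)  # [1.2, 0.8] -- the skew is self-perpetuating
```

The recorded data "confirms" the allocation that produced it, which is precisely the loop the training-data critique describes.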

Princeton University sociologist Ruha Benjamin has sounded the alarm about a "new Jim Code," a reference to the Jim Crow laws that once enforced segregation in the U.S. Others have alluded to a "tech-to-prison pipeline," making it crystal clear that mass incarceration isn't going away; it's just being warped by a sophisticated, high-tech touch.

That's not to say that AI can't be a force for good. It has revolutionized disease diagnosis, helped forecast natural disasters, and uncovered fake news. But the misconception that algorithms are some sort of infallible silver bullet for all our problems ("technochauvinism," as data journalist Meredith Broussard put it in her 2018 book) has brought us to a place where AI is making high-stakes decisions that are better left to humans. And in the words of Silicon Valley congressman Ro Khanna (D-CA), the technological illiteracy of most members of Congress is "embarrassing," precluding effective governance.


‘Artificial Intelligence’ Integrated PET-CT launched at Yashoda Hospitals, Hyderabad on the occasion of World Cancer Day 2021 – PR Newswire India

"This year's World Cancer Day's theme, 'I Am and I Will', is all about you and your commitment to act. The new state-of-the-art artificial intelligence integrated PET-CT scanner at Yashoda Hospital Somajiguda is one more step towards our commitment to early detection of Cancer. The new scanner is now two times faster than the old generation scanners primarily due to the advanced technology known as 'Time of Flight'. The scanner provides best quality images with reduced scanning duration and lesser radiation dose," said Dr. G. Srinivasa Rao, Director of Public Health & Family Welfare, Government of Telangana.

Yashoda Hospitals Somajiguda is well equipped with a comprehensive Nuclear Medicine set up providing services like PET-CT, Gamma camera imaging and radionuclide therapy under one roof. Apart from the newly upgraded imaging of FDG PET-CT, the department provides advanced and rare imaging like Ga-68 DOTA, Ga-68 PSMA, 18F DOPA PET-CTs, DAT imaging & WBC scans, apart from routine Gamma imaging like bone scan & renal scintigraphy.

"Yashoda Hospitals Somajiguda is one of the busiest and highest-volume centres of radionuclide therapies for thyroid cancer, neuroendocrine tumours, and prostate cancer. The Centre also provides rare therapies like radiosynovectomy for inflammatory joint disease. Patients not only from Telangana and Andhra Pradesh, but across India, visit us for these rare therapies. NextGen PET-CT is effective in the diagnosis of Cancer, Endocrine Abnormalities and Neurodegenerative Disease," said Dr. Lingaiah Amidayala, Director - Medical Services, Yashoda Hospitals Group, Hyderabad.

The Combined PET-CT Scan at Yashoda Hospitals, Somajiguda merges PET and CT images, providing detailed information about the size and shape of lesions and accurately differentiating cancerous lesions from normal structures. It is a diagnostic examination that combines two state-of-the-art imaging modalities and produces three-dimensional (3D) images of the body based on the detection of radiation from the emission of positrons. It helps in the early detection of cancer and a variety of other conditions by revealing how tissues and organs are functioning.

Dr. Hrushikesh Aurangabadkar and Dr. A Naveen Kumar Reddy, Consultants in Nuclear Medicine while explaining about the PET-CT said, "The cancer cells require a great deal of sugar, or glucose, to have enough energy to grow. PET scanning utilizes a radioactive molecule that is similar to glucose, called fluorodeoxyglucose (FDG). FDG accumulates within malignant cells because of their high rate of glucose metabolism. Once injected with this agent, the patient is imaged on the whole body PET scanner to reveal cancer growth, which are usually difficult to characterize by conventional CT, X-Ray, or MRI."

With this new technology, motion artifacts caused by respiration can be decreased and accurate diagnosis achieved.

The use of PET scans will also help the doctors to more accurately detect the presence and location of new or recurrent cancers.

Relevant Links: https://www.yashodahospitals.com/location/somajiguda/

Nuclear Medicine: https://www.yashodahospitals.com/specialities/nuclear-medicine-hospital-in-hyderabad/

About Yashoda Hospitals Hyderabad

Yashoda Group of Hospitals has been providing quality healthcare for 3 decades for people with diverse medical needs. Under astute leadership and a strong management, Yashoda Group of Hospitals has evolved as a centre of excellence in medicine providing the highest quality standards of medical treatment. Guided by the needs of patients and delivered through revolutionary technology, even for rare and complex procedures, the Yashoda Group hosts medical expertise and advanced procedures, offering sophisticated diagnostic and therapeutic care in virtually every specialty and subspecialty of medicine and surgery. The group currently operates 3 independent hospitals in Secunderabad, Somajiguda and Malakpet, with an upcoming hospital (currently under development) in Hi-Tech City, Telangana, which is expected to be one of the largest medical facilities in India, spread over 20 lakh sq. ft. with a capacity of 2,000 beds. With a constant and relentless emphasis on quality, excellence in service, and empathy, Yashoda Group provides world-class healthcare services at affordable costs.

Photo: https://mma.prnewswire.com/media/1433696/AI_PET_CT_Launched_Yashoda.jpg

SOURCE Yashoda Hospitals Hyderabad

More:

'Artificial Intelligence' Integrated PET-CT launched at Yashoda Hospitals, Hyderabad on the occasion of World Cancer Day 2021 - PR Newswire India

SLAS Technology Special Collection on Artificial Intelligence in Process Automation Available Now – Newswise

Newswise (Oak Brook, IL) The February edition of SLAS Technology is a special collection of articles focused on Artificial Intelligence in Process Automation by Guest Editor Cenk Ündey, Ph.D. (Amgen, Thousand Oaks, CA, USA).

This SLAS Technology special collection targets the use of artificial intelligence (AI) techniques and technologies as applied specifically to drug discovery, automated gene editing and machine learning. As AI becomes increasingly prevalent in research, medicine and even everyday life, laboratory automation has gone beyond hardware advancements toward new levels of precision and complexity. Beyond research, AI serves as a powerful tool for clinicians diagnosing and treating patients in a medical setting. The AI advancements presented in this issue highlight the wide spectrum of medical AI breakthroughs.

This month's issue of SLAS Technology also celebrates the top 10 most-cited articles in the journal's history. Over the past decade, the publication's priority has been to provide a platform for researchers to share technological advancements as well as a resource to continually share the impact of technology on life sciences and biomedical research.

The February issue of SLAS Discovery includes nine articles of original research in addition to the cover article.

Access to February's SLAS Technology issue is available at http://journals.sagepub.com/toc/jlad/26/1.

For more information about SLAS and its journals, visit www.slas.org/journals. Access a behind-the-scenes look at the latest issue with the SLAS Technology Authors Talk Tech podcast. Tune into February's episode by visiting https://slastechnology.buzzsprout.com/.

*****

SLAS (Society for Laboratory Automation and Screening) is an international professional society of academic, industry and government life sciences researchers and the developers and providers of laboratory automation technology. The SLAS mission is to bring together researchers in academia, industry and government to advance life sciences discovery and technology via education, knowledge exchange and global community building.

SLAS Discovery: Advancing the Science of Drug Discovery, 2019 Impact Factor 2.195. Editor-in-Chief Robert M. Campbell, Ph.D., Twentyeight-Seven Therapeutics, Boston, MA (USA).

SLAS Technology: Translating Life Sciences Innovation, 2019 Impact Factor 2.174. Editor-in-Chief Edward Kai-Hua Chow, Ph.D., National University of Singapore (Singapore).

###

Excerpt from:

SLAS Technology Special Collection on Artificial Intelligence in Process Automation Available Now - Newswise

How to Build a Modern Workplace with Artificial Intelligence and Internet of Things – BBN Times

1. Automating Tasks

Workplaces involve several routine, mundane tasks, such as scheduling meetings. Usually, employees send emails back and forth to several other employees to enquire about an open slot on their calendars. This process can be tedious and time-consuming for employees.

Business leaders can adopt AI in the workplace to enhance employee productivity. Organizations can deploy AI-powered personal assistants for scheduling, cancelling and rescheduling meetings. AI-enabled assistants can analyze an employee's schedule and suggest time slots to other employees based on their availability. When a time slot gets decided, an AI assistant will notify all participants of the meeting. Also, AI can be used to automatically transcribe meetings and create a text file. Furthermore, the introduction of AI in the workplace can also automate other tasks such as sorting and categorizing emails.
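
The back-and-forth described above reduces to a simple interval-overlap search over attendees' calendars. Here is a minimal sketch, assuming hour-granular calendars represented as lists of busy (start, end) blocks; all names are hypothetical illustrations, not any particular assistant's API:

```python
def find_open_slot(calendars, day_start=9, day_end=17, length=1):
    """Return the first hour-aligned start time free for every attendee, or None."""
    for hour in range(day_start, day_end - length + 1):
        busy = any(
            b_start < hour + length and hour < b_end   # standard interval-overlap test
            for cal in calendars
            for b_start, b_end in cal
        )
        if not busy:
            return hour
    return None

alice = [(9, 10), (13, 15)]   # busy 9-10 and 13-15
bob = [(9, 12)]               # busy all morning
print(find_open_slot([alice, bob]))  # 12: the first hour both are free
```

A production assistant would layer time zones, preferences and notifications on top, but the core suggestion step is this overlap check.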

HR executives usually resolve employee queries related to various workplace policies. They also have other core tasks, such as managing payroll, recruiting talent and onboarding new employees. Similarly, IT professionals can be caught up in employee queries alongside their core tasks. Hence, the productivity of the HR and IT departments can be severely affected.

The deployment of AI in the workplace can enable organizations to resolve employee queries without interrupting their HR or IT departments. Several organizations are deploying AI-powered chatbots in the workplace. Similarly, every organization can deploy AI-enabled chatbots that can answer different employee queries accurately. Employees can submit queries using emails, text messages and online messengers, and the AI will respond accordingly. If the chatbot is unable to answer a query, it will assign a request to the concerned personnel who can resolve it. With this approach, AI-enabled chatbots can also learn how to respond to various queries. Additionally, the deployment of AI in the workplace will allow employees to communicate in various languages, as AI can translate their queries to English.
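
The query-routing pattern described above (answer when confident, otherwise escalate to a human and learn from the resolution) can be sketched in a few lines. The FAQ entries and function names below are invented for illustration, not a real product's API:

```python
FAQ = {
    "vacation policy": "Employees accrue 1.5 vacation days per month.",
    "password reset": "Use the self-service portal on the IT intranet page.",
}

escalation_queue = []  # queries handed off to HR/IT personnel

def handle_query(query):
    """Answer from the knowledge base if possible, otherwise escalate."""
    for topic, answer in FAQ.items():
        if topic in query.lower():
            return answer
    escalation_queue.append(query)  # route to the concerned personnel
    return "Your question has been forwarded to the relevant team."

def learn(topic, answer):
    """Record a human resolution so the next similar query is answered automatically."""
    FAQ[topic] = answer

print(handle_query("What is the vacation policy?"))  # answered directly
print(handle_query("How do I file a patent?"))       # escalated to a human
```

Translating non-English queries, as mentioned above, would simply be a preprocessing step before the lookup.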

The adoption of AI in the workplace can streamline the onboarding process. AI systems can automate various tasks such as generating offer letters, sending documents, and walking new employees through various company-wide policies. Additionally, AI can coach new employees by observing and analyzing how they conduct various tasks. Then, AI tools can suggest ways to improve their efficiency. For instance, AI systems can analyze the sales calls of multiple employees and offer tips to improve their performance. For this purpose, AI systems can record sales calls and generate statistics for each employee. Using this approach, AI systems can offer suggestions based on every employee's data. Likewise, AI can also train customer service executives to help them deliver better services. With this approach, AI can provide personalized training for each employee.

In the digital age, running a competitive business without data is almost impossible. Businesses collect different types of data, such as social media data, customer data and operational data, for various applications. However, the obtained data is not useful unless it is used to generate analytics. Hence, deploying AI in the workplace can enable business leaders to generate valuable insights from the acquired data. For this purpose, AI systems will aggregate data collected from various sources, such as social media and customers' personal information, and store it in a centralized location. Then, AI systems will analyze the collected data to offer profound insights that can help business leaders predict industry trends, identify anomalies and generate detailed reports.

The introduction of IoT in the workplace can benefit organizations in the following manner:

Every employee prefers a different temperature on the thermostat, and this disagreement can be a topic of conflict in the workplace. Business leaders can install smart thermostats and temperature sensors in the workplace to automate thermostat settings. Smart thermostats learn from employee temperature preferences and set the temperature accordingly. Similarly, business leaders can install several IoT-powered appliances, such as smart lights, smart air conditioners and coffee machines, that can be operated using a smartphone.

Organizations can install IoT sensors in the workplace to notify employees about empty conference rooms. These sensors will monitor all conference rooms and display their status as available or busy in a centralized location. With this approach, employees can effortlessly find empty conference rooms.

Business leaders can introduce effective security measures and access control in the workplace with the help of IoT. Conventional keys, badges, and passes can be easily forgotten or duplicated. Hence, organizations can deploy smart locks that can be effortlessly unlocked using a smartphone. Such locks can also enable access control for certain rooms. For instance, only a few employees will have access to rooms that contain crucial paperwork and confidential data. With the help of IoT, business leaders can offer a granular approach to access control in the workplace. Also, smart locks can integrate with existing security systems in an organization.

The US consumes around 23% of the world's energy. Such statistics are worrying in light of depleting energy reserves, overpopulation and climate change. Energy in the form of electricity is used extensively in the workplace for several business procedures. One cause of excessive energy consumption may be the inability to track energy utilization in the workplace. Hence, business leaders can deploy IoT sensors that monitor energy consumption in real time and present the data to concerned parties. Concerned personnel can analyze the acquired data and take the necessary steps to reduce energy consumption. IoT sensors and smart appliances can also help control energy usage. For instance, smart lights have IoT sensors that can detect people in a room. If a room is empty, the lights shut off and turn back on when someone enters. With this approach, organizations can monitor and control energy consumption and conserve energy.
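
The occupancy rule described for smart lights (shut off once a room has been empty past a grace period) is easy to express as a small state machine. This is a hypothetical sketch; real building-automation systems expose this logic through their own APIs:

```python
GRACE_SECONDS = 300  # keep lights on for 5 minutes after the last motion event

class SmartLight:
    """Tracks the last motion-sensor event and turns off after the grace period."""
    def __init__(self):
        self.on = False
        self.last_motion = None

    def motion_detected(self, now):
        self.last_motion = now
        self.on = True

    def tick(self, now):
        # Called periodically by the controller with the current timestamp (seconds).
        if self.last_motion is None or now - self.last_motion > GRACE_SECONDS:
            self.on = False

light = SmartLight()
light.motion_detected(now=0)
light.tick(now=120)    # 2 minutes of no motion: still within the grace period
print(light.on)        # True
light.tick(now=600)    # 10 minutes of no motion: the room is treated as empty
print(light.on)        # False
```

The same pattern, timestamped sensor events plus a timeout, underlies the energy-usage monitoring described above.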

The introduction of IoT and AI in the workplace will help businesses deliver more efficient operations and workflow, leading to a better ROI. Also, IoT and AI can significantly improve employee experience, which can help organizations in attracting and retaining the best talent. Additionally, AI and IoT can work together to make the existing applications more advanced. Hence, business leaders must invest in these modern technologies to reap their benefits and gain a competitive edge.

Read the rest here:

How to Build a Modern Workplace with Artificial Intelligence and Internet of Things - BBN Times

Artificial Intelligence Will Change The Way We Work Once We Get Back To The Office – Forbes
Have you given any thought to how the post-pandemic workplace might look and function? Our working practices have changed dramatically since the pandemic struck, undergoing one of the biggest shake-ups since the industrial revolution and bringing the wheel full circle, with many of us returning to home working.

Overnight, many of us turned from being office-based commuters into stay-at-home knowledge workers. Now we liaise remotely with colleagues as we work on projects, deliver great customer service or simply manage the work of others. We've done all this remotely using our webcams and laptops. It's a massive change that would ordinarily have taken a decade to achieve but which has now happened in an instant because of a virus.

Thankfully, effective vaccines are being rolled out and some sort of end is in sight. The pandemic has shown large corporations that remote working is viable and, in many cases, can increase productivity. Many of us prefer this new way of working. As a result, some companies will downsize offices and rely on hot-desking to make the best use of available space. Of course, there will still need to be some sort of central hub for workers to meet, because some people enjoy office life and face-to-face contact is still essential for some kinds of work.

Remote working has worked well, especially among colleagues who already had a working relationship before the pandemic struck. All they needed to do was continue those relationships remotely. Unfortunately, it's not so easy for workers who have joined companies since the pandemic began. Many are working alongside team members who already know each other. This is where video conferencing can help, and why some companies encourage workers to hold after-work social activities online using Zoom or Microsoft Teams.

Another change triggered by the pandemic is the shutting down of business travel. We now depend on video conferencing to liaise with overseas colleagues. The days of hopping on a plane to make a presentation halfway around the world are at an end. In my own world of journalism, most trade shows, product launches and interviews are now conducted by video. I haven't had to travel this past year, and it's done wonders for my productivity and carbon footprint.

Eventually, though, the day will come when we return to a more blended way of working, with some time spent at the office and the rest working from home. How will we adapt to this more mixed way of working when we're not in front of our personal webcams? This is a question Logitech thinks it has an answer to. The webcam and mouse manufacturer also makes high-end video conferencing equipment. It's teamed up with Zoom, developer of one of the most popular video conferencing applications of the past year, to shake up the market.

Logitech is using AI to make video conferencing as a group easier and more personal by tying in equipment with software packages like Zoom Rooms and Microsoft Teams Rooms.

Both companies have collaborated to develop a new style of video conferencing. They've realized that when we return to our offices, we're going to be using video much more, because we've become so used to using it at home. We find it a much better way of communicating than a voice call.

Because we're now used to making webchats in solo mode instead of being grouped around a conference table, it's clear our approach to video conferencing will have to change in the office. In the past, big corporations had video conferencing rooms filled with expensive equipment and reserved for senior management. Now we're all used to being able to see as well as hear the people we're calling.

The old exclusive way of video conferencing has its origins in the days of expensive 128kbps ISDN lines and costly international calls. In the past, the video conference room was too expensive for just anyone to use and was largely reserved for senior management. Now the coronavirus has democratized the whole process and that change must be embraced in the post-pandemic office.

To address this challenge, Logitech has been researching artificial intelligence for use in its video conferencing equipment. In future, video conferencing will be less boring and not limited to static, wide-angle views of a meeting room showing people sitting around a table. We're now used to seeing people's faces close up, registering facial expressions and hearing voices clearly. The old style of video conferencing simply won't cut it.

New video conferencing using AI will zoom in on speakers' faces and dynamically alter the volume and microphone sensitivity for a more enjoyable experience.

There also need to be changes in the software we use for video conferencing. Zoom's single-user mode needs to adapt to the office environment of teams. This is where Logitech and Zoom have been working together to create a new experience. Recently, I spoke with Scott Wharton, Logitech's Vice President and General Manager for video conferencing, and Oded Gal, Chief Product Officer at Zoom. They explained how both companies see the professional end of the video conferencing market changing as we come out of the pandemic and some of us return to the office.

Scott Wharton explained: "It's no longer good enough to see people sitting around a conference table in a static wide-angle shot. We need to see people's faces and reactions. We need to see them close up and hear them clearly as well. That can be a challenge in a larger space."

Zoom's Oded Gal agreed: "The video conference room has to mirror the experience workers have had at home with Zoom calls."

The companies collaborated on a new product portfolio of video equipment called Rally, including the Rally Bar, Rally Bar Mini and RoomMate. Each of the new products has been purpose-built to work with Zoom Rooms. Logitech's native appliance integration for Zoom Rooms has been designed to enhance the user experience while still delivering enterprise-grade video and voice quality, as well as the robust security that companies demand.

The latest Logitech Rally videoconferencing products are designed to work with Zoom and Microsoft Teams to bring a more personal experience to the video conferencing room.

The idea of the new portfolio of products is to harness AI to identify participants in a meeting, if required, and to zoom in on the face of the person talking, so the experience feels more like using an individual webcam at home. Logitech has also been working on the sound side of things, with microphones that can home in on a voice and ensure that someone at the back of the room is heard as clearly as someone sitting at the front.

This is a major shift in how we communicate, a practice that has been stuck for so long in a static group-shot mode. The new Rally products can blend two images on one screen to add context to a video call. Wharton says: "It's a bit like having your own TV studio director in the conference room, adjusting the framing shots while altering sound levels dynamically to create a less static and a more enjoyable meeting."

The new products offer video resolutions up to 4K and the cameras have motorized pan-and-tilt heads, as well as optical zoom lenses, enabling crisp images and perfect framing. The lens on the Rally Bar has a 5x optical zoom that can be extended to 15x digital for larger spaces.

Both video bars also feature ultra-low-distortion speakers that can fill a room with sound, while an adaptive beamforming mic array picks up voices for much clearer audio quality. The microphone array can focus on the active speaker and auto-adjust for louder and softer voices while suppressing unwanted background noise in a room, such as HVAC or other persistent sounds. Logitech has also included a patented anti-vibration suspension system so that speaker vibrations are not carried through walls, stands or tables.

The new video bars are equipped with Logitech RightSense technology and an AI Viewfinder. This viewfinder works like a second camera but is dedicated purely to computer vision: it detects human figures and tracks where faces are in the room in real time. This AI technique enhances the precision of the auto-framing feature and the camera control. Participants are always in focus, whether they are late joining the meeting or moving about while making a presentation.

How will we manage the transition from using video conferencing at home to a more collaborative group setup when we are back in the office?

Logitech has worked closely with Zoom and Microsoft to ensure that the software and hardware work seamlessly together. The company has also developed RoomMate, a dedicated Android computer designed solely to work with the Rally Bar and Rally Bar Mini, as well as USB conferencing cameras such as Logitech's older Rally Plus, turning them into an appliance. Alternatively, users can bring their own Mac or PC to a meeting and plug it into the video bar.

It's interesting to see Logitech partnering with big names in the video conferencing software space to improve people's experience by tying the hardware in more tightly. The combination of hardware and AI can anticipate shots and framing as well as adjust sound levels for a more enjoyable and productive meeting experience.

However, the tech doesn't just address the audio and visual side of things. Both Wharton and Gal say facial recognition could eventually be developed to count the participants in a meeting room and then post a notice to an active display screen outside the room indicating whether the meeting room is full or in use. Extra participants could be directed to another meeting room where they can join in with the conference. This could be a game-changing feature in an era of social distancing.

Facial recognition also has downsides, and both Wharton and Gal were at pains to point out that privacy is at the forefront of their minds and something both companies are determined to protect. My suggestion that facial recognition could be used to take a register of attendees or send a reminder to anyone who is late for the meeting was noted but not endorsed. That, they said, would have to be a decision for the companies employing the technology and the employees using it.

It was fascinating to get a brief glimpse into the future of video conferencing from these two giants in the field. It looks like AI could change the way we use video conferencing when we get back to our workplaces. The hardware is ready and waiting in the form of the Logitech Rally Bar, Rally Bar Mini and RoomMate. The software integration with Zoom is also ready and waiting to go. Now, all we need is to get our vaccinations and wait for the call to return to the office.

Pricing and Availability: Logitech Rally Bar is the first from the next-generation appliance portfolio and will be broadly available at the end of this quarter. Rally Bar also comes with built-in support for Microsoft Teams Rooms on Android and Zoom Rooms Appliances, with Zoom available immediately. Rally Bar Mini and Logitech RoomMate availability will follow. Pricing for Rally Bar starts at $3,999; Rally Bar Mini starts at $2,999; and Logitech RoomMate starts at $999. Logitech's portfolio will also work with GoTo, Pexip and RingCentral.

More info: www.logitech.com

Link:

Artificial Intelligence Will Change The Way We Work Once We Get Back To The Office - Forbes

Chemistry and computer science join forces to apply artificial intelligence to chemical reactions – Princeton University

In the past few years, researchers have turned increasingly to data science techniques to aid problem-solving in organic synthesis.

Researchers in the lab of Abigail Doyle, Princeton's A. Barton Hepburn Professor of Chemistry, have developed open-source software that provides them with a state-of-the-art optimization algorithm to use in everyday work, folding what's been learned in the machine learning field into synthetic chemistry.

Princeton chemists Benjamin Shields and Abigail Doyle worked with computer scientist Ryan Adams (not pictured) to create machine learning software that can optimize reactions using artificial intelligence to speed through thousands of reactions that chemists used to have to labor through one by one.

Photo by

C. Todd Reichart, Department of Chemistry

The software adapts key principles of Bayesian Optimization (BO) to allow faster and more efficient syntheses of chemicals.

Based on Bayes' theorem, a mathematical formula for determining conditional probability, BO is a widely used strategy in the sciences. Broadly defined, it allows people and computers to use prior knowledge to inform and optimize future decisions.
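
As a concrete illustration of that conditional-probability update (not taken from the paper; all numbers are invented for the example), consider a cheap screening test for high-yield reaction conditions:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B).
# Suppose 1% of conditions in a space are high-yield (A), and a quick screen (B)
# flags 90% of high-yield conditions but also 10% of everything else.
p_a = 0.01                                     # prior: P(high-yield)
p_b_given_a = 0.90                             # screen sensitivity
p_b = p_b_given_a * p_a + 0.10 * (1 - p_a)     # total probability of a flag
p_a_given_b = p_b_given_a * p_a / p_b          # posterior after a positive flag
print(round(p_a_given_b, 3))                   # 0.083: belief rises from 1% to ~8%
```

Bayesian Optimization repeats exactly this kind of update: each new experiment revises the model's beliefs about where the best conditions lie.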

The chemists in Doyle's lab, in collaboration with Ryan Adams, a professor of computer science, and colleagues at Bristol-Myers Squibb, compared human decision-making capabilities with the software package. They found that the optimization tool yields greater efficiency than human participants and less bias on a test reaction. Their work appears in the current issue of the journal Nature.

"Reaction optimization is ubiquitous in chemical synthesis, both in academia and across the chemical industry," said Doyle. "Since chemical space is so large, it is impossible for chemists to evaluate the entirety of a reaction space experimentally. We wanted to develop and assess BO as a tool for synthetic chemistry, given its success for related optimization problems in the sciences."

Benjamin Shields, a former postdoctoral fellow in the Doyle lab and the paper's lead author, created the Python package.

"I come from a synthetic chemistry background, so I definitely appreciate that synthetic chemists are pretty good at tackling these problems on their own," said Shields. "Where I think the real strength of Bayesian Optimization comes in is that it allows us to model these high-dimensional problems and capture trends that we may not see in the data ourselves, so it can process the data a lot better."

"And two, within a space, it will not be held back by the biases of a human chemist," he added.

The software started as an out-of-field project to fulfill Shields' doctoral requirements. Doyle and Shields then formed a team under the Center for Computer Assisted Synthesis (C-CAS), a National Science Foundation initiative launched at five universities to transform how the synthesis of complex organic molecules is planned and executed. Doyle has been a principal investigator with C-CAS since 2019.

"Reaction optimization can be an expensive and time-consuming process," said Adams, who is also the director of the Program in Statistics and Machine Learning. "This approach not only accelerates it using state-of-the-art techniques, but also finds better solutions than humans would typically identify. I think this is just the beginning of what's possible with Bayesian Optimization in this space."

Users start by defining a search space of plausible experiments to consider, such as a list of catalysts, reagents, ligands, solvents, temperatures and concentrations. Once that space is prepared and the user defines how many experiments to run, the software chooses initial experimental conditions to be evaluated. Then it suggests new experiments to run, iterating through a smaller and smaller cast of choices until the reaction is optimized.
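
That loop, propose from a model, run the experiment, update the model, can be caricatured in a few lines. This toy sketch is not the Doyle-lab package: it swaps in a crude nearest-neighbour surrogate with an exploration bonus for the Gaussian-process machinery a real BO package would use, and every name and number is invented for illustration:

```python
def true_yield(temp):
    # Stands in for physically running a reaction at this temperature.
    return 80 - (temp - 70) ** 2 / 100

search_space = list(range(20, 101, 5))   # candidate temperatures (deg C)
observed = {}                            # condition -> measured yield

def acquisition(temp):
    # Predicted yield from the nearest observed point, plus a bonus for
    # conditions far from anything tried yet (exploration vs. exploitation).
    nearest = min(observed, key=lambda t: abs(t - temp))
    return observed[nearest] + 0.2 * abs(nearest - temp)

observed[60] = true_yield(60)            # initial experiment
for _ in range(7):                       # experiment budget
    candidate = max((t for t in search_space if t not in observed),
                    key=acquisition)
    observed[candidate] = true_yield(candidate)   # "run" the chosen experiment

best = max(observed, key=observed.get)
print(best, observed[best])              # the loop homes in on the optimum at 70
```

Only 8 of the 17 candidate conditions are ever "run", which is the whole appeal when each experiment is a day in the lab.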

"In designing the software, I tried to include ways for people to kind of inject what they know about a reaction," said Shields. "No matter how you use this or machine learning in general, there's always going to be a case where human expertise is valuable."

The software and examples for its use can be accessed at this repository. GitHub links are available for the following: software that represents the chemicals under evaluation in a machine-readable format via density-functional theory; software for reaction optimization; and the game that collects chemists decision-making on optimization of the test reaction.

"Bayesian reaction optimization as a tool for chemical synthesis," by Benjamin J. Shields, Jason Stevens, Jun Li, Marvin Parasram, Farhan Damani, Jesus I. Martinez Alvarado, Jacob M. Janey, Ryan P. Adams and Abigail G. Doyle, appears in the Feb. 3 issue of the journal Nature (DOI: 10.1038/s41586-021-03213-y). This research was supported by funding from Bristol-Myers Squibb, the Princeton Catalysis Initiative, the National Science Foundation under the CCI Center for Computer Assisted Synthesis (CHE-1925607), and the DataX Program at Princeton University through support from the Schmidt Futures Foundation.

Editor's note: You can read the unabridged version of this story on the Department of Chemistry homepage.

Read the original here:

Chemistry and computer science join forces to apply artificial intelligence to chemical reactions - Princeton University

Morality Poses the Biggest Risk to Military Integration of Artificial Intelligence – The National Interest

Finding an effective balance between humans and artificial intelligence (AI) in defense systems will be the sticking point for any policy that distances humans from the loop. Within this balance, we must accept some deviations when considering concepts such as the kill chain. How would a progression of policy look within a defense application? Addressing the political, technological and legal boundaries of AI integration would allow the benefits of AI, notably speed, to be incorporated into the kill chain. Recently, former Secretary of Defense Ash Carter stated, "We all accept that bad things can happen with machinery. What we don't accept is when it happens amorally." Certainly, humans will retain the override ability and accountability without exception. Leaders will be forever bound by the actions of AI-guided weapon systems, perhaps no differently than they would be responsible for the actions of a service member in combat, and for upholding ethical standards which the AI has yet to grasp.

The future of weapon systems will include AI guiding the selection of targets, gathering and processing information, and, ultimately, delivering force as necessary. Domination on the battlefield will not come through traditional means; rather, conflicts will be dominated by AI with competing algorithms. The normalcy of a human-dominated decisionmaking process does provide allowances for AI within the process, but not in a meaningful way. At no point does artificial intelligence play a significant role in making actual decisions about lethal action. Clearly, the capability and technology supporting integration have far surpassed the tolerance of our elected officials. We must build confidence with them and the general public through a couple of fundamental steps.

First, information gathering and processing can be controlled primarily by the AI with little to no friction from officials. This integration, although not significant by way of added capability from a research and development (R&D) perspective, will aid in building confidence and can be completed quickly. Developing elementary protocols for the AI to follow for individual systems such as turrets, easy at first and then slowly increasing in difficulty, would allow the progression of technology from an R&D standpoint while incrementally building confidence and trust. The inclusion of recognition software in the weapon system would allow specific target selections, whether civilians or terrorists, which could be presented, prioritized and then given to the commander for action. Once a system is functioning confidently within a set of defined parameters, the number of systems can be increased for overlapping coverage. A human can sit at the intersection of all the data via a command center supervising these systems with a battle management system, effectively being a human on the loop with the ability to stop any engagement as required or to limit AI roles based on individual or mission tolerance.

This process must not be encapsulated solely within an R&D environment. Rather, it requires transparency: the public and elected officials alike must know about it and accept it. Yes, these steps seem elementary; however, they are not being done. Focus has been concentrated on capability development without a similar concern for associated policy development, when both must progress together. Small, concrete steps with sound policy and oversight are crucial. Without such an understanding, decisionmakers cannot in good conscience approve, instead defaulting to the safe and easy answer: no. Waiting to act on AI integration into our weapon systems puts us behind the technological curve required to effectively compete with our foes. It would be foolish to believe our adversaries and their R&D programs are being held up on AI integration by moral and public-support requirements; the Chinese call it "intelligentized war" and have invested heavily. Having humans on the loop during successful testing and fielding will be the bridge to additional AI authorities and the public support necessary for the United States to continue to develop these technologies as future warfare will dictate.

John Austerman is an experienced advisor to senior military and civilian leaders, focusing on armaments policy, primarily within research and development. He has experience with more than fifty countries, including the Levant, hostile-fire areas and war zones.

Image: Reuters.

See the article here:

Morality Poses the Biggest Risk to Military Integration of Artificial Intelligence - The National Interest

Canon Medical expands reach of its MRI artificial intelligence programs – FierceBiotech

Canon Medical is expanding the clinical reach of its artificial intelligence programs designed to improve MRI image quality, saying it can now be used in 96% of all scanning procedures.

The company's Advanced intelligent Clear-IQ Engine, or AiCE, aims to sharpen scans taken by lower-dose, 1.5 Tesla MRIs to bring their image quality up to par with 3.0 Tesla machines.

The system was previously cleared by the FDA for certain brain- and knee-focused indications using Canon Medical's Vantage Orian 1.5 Tesla system. Now its applications span all joints, as well as cardiac, abdomen, spine and pelvic scans.


"In today's environment, making images easy to read and acquire is more important than ever, and this is the latest demonstration of our commitment to offering accessible AI that clinicians can use to make the greatest impact on patient care," said Jonathan Furuyama, managing director of Canon Medical's MR business unit.

RELATED: Canon gets FDA nod for high-resolution CT system

The expansion follows two recent FDA clearances for Canon Medical in December and January, including an AI-equipped, large-bore CT scanner and software designed to boost 3D MRI imaging times.

The company's Speeder software, also for its Vantage Orian 1.5 Tesla system, was cleared to help accelerate surgical planning and orthopedic applications by reconstructing full-resolution images from under-sampled data. This allows technicians to perform a scan at least twice as fast, the company said. The software also includes an application to help clinicians quantify fatty liver disease.

The company's Aquilion Exceed large-bore CT system, meanwhile, uses AiCE technology to provide more distinct images through an opening nearly one meter wide, with an extended field of view of 90 centimeters.

View post:

Canon Medical expands reach of its MRI artificial intelligence programs - FierceBiotech

AI-Powered Cooperation Will Transform US Military Operations Homeland Security Today – HSToday

Between the kickoff of a new presidential administration, a seasoned (and recent) wartime leader as the new secretary of Defense and a government-wide modernization push, change is under way at the Pentagon. And as 2021 shifts into gear, the U.S. military is operating against a backdrop of not just these changes, but also an evolving threat landscape.

Chiefs across the service branches over the past year made clear their sharp focus on technology-driven dominance over near-peer adversaries, including China and Russia. One result of this emphasis is a collaborative effort to operate jointly and more effectively across battle domains and between branches, levels and areas of responsibility. This mindset has given rise to the concept of (Combined) Joint All-Domain Command and Control, or CJADC2/JADC2.

"We'll have to have capabilities that allow us to present a credible threat, a credible deterrent to China in the future," retired Army Gen. Lloyd Austin, the new Defense secretary, said in his Jan. 19 Senate confirmation hearing. "We'll have to make some strides in the use of quantum computing, the use of AI, the advent of connected battlefields, the space-based platforms. Those kinds of things I think can give us the types of capabilities that we'll need to be able to hold large pieces of Chinese military inventory at risk."

That's the goal behind CJADC2, which is currently underpinned by a handful of key programs focused on training, experimentation and expansion. These include the Army-led Project Convergence, which will return for an expanded run in 2021 with Air Force participation after a successful 2020 exercise, and the Navy's Project Overmatch, a comprehensive effort targeting the buildout of networks, infrastructure, data architecture, tools and analytics. The Air Force's Advanced Battle Management System, as well as weapons systems and platforms from across the services, will also be essential to the end-to-end success of the concept in execution.

But at the center of it all is artificial intelligence: the driving force that provides machine-speed situational awareness and informs decision-making up and down the chain of command.

"This is about your insight into the battlefield. This is about your processes for generating that insight. This is about your ability to quickly game potential outcomes and be able to pick what's best," Lt. Gen. Mike Groen, director of the Defense Department's Joint Artificial Intelligence Center, told Breaking Defense. "JADC2 is a construct where we start bringing capabilities and implementing them now, and then start stitching them together. Every defense agency will develop its own unique AIs."

Making Battlefield AI into a Reality

Leaders from across the military have made clear that a new race is truly on: the U.S. focus on and application of AI will need to edge out that of China or Russia. Capturing that edge and maintaining a decisive advantage requires superiority in deployed compute power, data integrity, modeling and training of AI algorithms, and the ability to collect, synthesize and process data into intelligence on the ground.

While the introduction of AI into battlefield operations may be a newer development, the impetus behind it, particularly the need for real-time collaboration across coalition partners, is not new. Manufacturers have already seen this demand signal and have been working to design and provide high-performance computing solutions capable of incorporating AI in tactical environments while still meeting strict Defense Department requirements.

In the operational theater, those requirements include more CUDA cores and more small, rugged devices used in a variety of applications for specific purposes. For example, applications could focus on detecting imagery, tracking electronic warfare signatures, or executing intelligence, surveillance and reconnaissance. Engineers are currently designing and manufacturing dedicated embedded packages, a significant shift away from the large, power-hungry devices of yesterday.

Instead of combining dozens of sensors for situational awareness, operators will shift to feeding data from fewer sensors into AI engines that scan images frame to frame and detect threats or changes, whether to quickly identify a hypersonic missile or a certain type of plume. The dedicated packages allow a smaller footprint to do this work and share intelligence at the tactical edge, a crucial element the military is proving out.
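At its core, frame-to-frame change detection reduces to differencing successive sensor frames and flagging pixels whose intensity moved past a threshold. A minimal sketch of the idea on invented toy data (real systems run trained neural networks on rugged embedded hardware):

```python
import numpy as np

def detect_change(prev_frame, curr_frame, threshold=30):
    """Flag pixels whose intensity changed by more than `threshold` between frames."""
    # Widen to int16 so the subtraction of uint8 values cannot wrap around.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold
    return mask, mask.mean()  # changed-pixel mask and fraction of pixels changed

prev = np.zeros((4, 4), dtype=np.uint8)   # an empty 4x4 "frame"
curr = prev.copy()
curr[0, :] = 200                          # a bright streak appears in the new frame
mask, frac = detect_change(prev, curr)
```

Here one row out of four changed, so a quarter of the pixels are flagged; a downstream classifier would then decide whether the flagged region is a threat.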

"We used ground robots paired with small UAVs to digitally map and transmit that map over the network so that it could be aggregated and then sent across the force," Brig. Gen. Ross Coffman, director of the Army's Next Generation Combat Vehicle Cross Functional Team, said in a Sept. 23 Project Convergence media roundtable. "We used artificial intelligence to autonomously conduct a ground reconnaissance, employ sensors and then pass that information back. We used artificial intelligence in [air defense artillery] target recognition, and machine learning to train algorithms on various types of enemy forces."

Project Convergence, along with similar exercises aimed at integrating high-powered battlefield computing and improved collaboration, will help define the military's path into a new era. With new leadership emphasizing the strides already under way, the combination of experimentation, training and cutting-edge technology promises to transform the future of warfare. Whether it's offense or defense, networks or weapons systems, ships or ground vehicles or aircraft, or even space and cyberspace, the mandate is clear: AI will shape tactical operations, starting now.


Read the rest here:

AI-Powered Cooperation Will Transform US Military Operations Homeland Security Today - HSToday

Skinopathy Files Provisional Patent for Artificial Intelligence and Augmented Reality Powered Technology that will Guide Skin Cancer Surgeries – Yahoo…

TORONTO, Feb. 02, 2021 (GLOBE NEWSWIRE) -- Skinopathy, a Canadian medical company founded in 2020, has filed a provisional patent with the United States Patent and Trademark Office (USPTO) for Artificial Intelligence (AI) and Augmented Reality (AR) technology that will help guide surgeons when performing skin cancer excisions.

Healthcare practitioners will be able to use the AI technology, which determines, pixel by pixel, the boundaries of cancerous skin tissue, by simply taking a picture with their smartphones. The AR technology will then provide surgeons with an overlay of those boundaries on the screen.
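Determining a lesion boundary pixel by pixel amounts to extracting the edge of a per-pixel segmentation mask. A minimal sketch of that final overlay step, assuming the AI model has already produced a binary mask (the model itself, and the patented system, are far more involved):

```python
import numpy as np

def lesion_boundary(mask):
    """Given a binary per-pixel lesion mask, return the edge pixels:
    lesion pixels with at least one non-lesion 4-neighbor."""
    padded = np.pad(mask, 1)  # pad with False so edges of the image count as outside
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &   # up and down neighbors
                padded[1:-1, :-2] & padded[1:-1, 2:])    # left and right neighbors
    return mask & ~interior   # lesion pixels that are not fully surrounded

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True            # a 3x3 "lesion" in a 5x5 image
edge = lesion_boundary(mask)     # the 8 perimeter pixels of the 3x3 block
```

These edge pixels are what an AR layer would draw over the camera image as the excision guide.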

Next-level healthcare

Cancerous skin tissue can sometimes extend beyond the measurable lesion and is typically unseen by even the most eagle-eyed surgeon.

That is why some surgeons choose to be overly cautious and remove more skin than might be necessary to prevent the need for further surgery, which can lead to visible scarring and other disfigurements. Conversely, it is possible that cancerous tissue remains following an excision due to the vagaries of the human body, quality-of-life considerations, or experience of the surgeon. This can potentially lead to continued growth and additional excisions in the future.

Once ready, surgeons will have access to cutting-edge technology that will lead to more informed medical decisions and significantly reduce the hardships felt by the patient and the strain levied on the healthcare system.

"This will revolutionize skin cancer treatment," says Alexander Shevchenko, Lead Engineer at Skinopathy. "We are providing surgeons with an additional skin cancer fighting tool they can carry in their pockets every day."

Preventing advanced stages of skin cancer

Skin cancer is more prevalent than colon, lung, breast, and prostate cancers combined and typically presents in very unspectacular ways.

Moles, skin tags, and rashes are rarely viewed as causes for concern and are often dismissed until the discomfort becomes greater than the hassle of seeing a doctor. But it is during that time of latency where dangerous conditions can fester and become deadly. When people finally take action, they are often subject to long wait times or need to travel hundreds (if not thousands) of kilometres to access physicians in a major urban centre.


Using this technology, people will be able to take pictures of their skin lesion and get an immediate and accurate analysis that will advise on the severity of their condition and provide online access to healthcare practitioners in a matter of days, sometimes even hours.

"This is a tremendous milestone for skin cancer," says Dr. Colin Hong, Co-Founder and Chief Medical Officer of Skinopathy. "We are using technology to streamline the medical bureaucracy to ensure no one slips through the cracks."

It now takes weeks, sometimes months, to see a skin cancer specialist, and during that time cancerous tissue can grow rapidly and spread to other organs. Making matters worse is the ongoing COVID-19 pandemic which has added more delays and obstacles.

Geographically agnostic technology

Skinopathy is using the same kind of technology used for the reCAPTCHA security feature found on many websites. However, instead of using AI to determine the difference between a fire hydrant and a bus, Skinopathy is using it to determine the minuscule differences between a mole and a cancerous lesion.

Using real-world images taken by physicians and beta users using their smartphones, the Skinopathy prototypes have yielded 87% accuracy for nine different skin conditions, such as keratosis, and performed even more impressively for basal cell carcinoma, squamous cell carcinoma, and melanoma with accuracy rates ranging from 87% to 96%.

"We are very excited about these results," says Dr. Rakesh Joshi, Lead Data Scientist at Skinopathy. "There are very subtle nuances in how skin lesions present on the skin, and our models are able to detect the smallest of variances."

Since the technology being developed is geographically agnostic, it can be deployed anywhere in the world and bring needed medical care to under-serviced regions. You can learn more about the technology here.

Skin cancer facts and stats

The Canadian Skin Cancer Foundation states that 1 in 3 cancers diagnosed worldwide is skin cancer and that skin cancers outnumber lung, breast, prostate, and colon cancers combined.

Data from the Public Health Agency of Canada suggests the costs associated with skin and subcutaneous tissue diseases were over $2 billion in 2010.

Research suggests there is a skin cancer epidemic in the elderly.

About Skinopathy

Founded in 2020, Skinopathy is a medical company creating Artificial Intelligence (AI), Augmented Reality (AR), and automation technology to ensure people and healthcare practitioners receive convenient, reliable, state-of-the-art skin cancer mitigation tools. Its first service, GetSkinHelp.com, is already helping patients connect with specialists through its virtual platform.

Contact
Keith Loo
Co-Founder & Chief Executive Officer
(833) 272-7546 x700
keith@skinopathy.com

Here is the original post:

Skinopathy Files Provisional Patent for Artificial Intelligence and Augmented Reality Powered Technology that will Guide Skin Cancer Surgeries - Yahoo...

Harnessing the Power of Artificial Intelligence with High-Performance Computing: Dell Technologies – CRN – India – CRN.in

Artificial intelligence (AI) is poised to change lives. From fueling medical discoveries to smart collars that can decipher and display the emotions of our household pets, this emerging technology is enabling organisations to innovate. According to IDC[1], by 2025, AI-powered enterprises will see a 100% increase in productivity and new product introduction success rates, higher than those of their non-intelligent peers. By being able to anticipate the market and operational changes with AI, organisations will respond much faster than their competitors. They will be agile enough to adapt to changes in the market and innovate.

Hence, accelerating AI solutions for businesses should be a focus for organisations in India. For example, HPC workloads are becoming more data-centric, adding AI technologies to advance the capabilities of traditional HPC modeling and simulation. In the next few years, HPC technologies, such as HPC-enabled machine learning training, will move from experimentation to production. As CTOs and CIOs in India look to create an enterprise infrastructure that provides robust performance and scalability for large and highly complex AI models while keeping deployment costs low, the answer may lie in HPC, for three key reasons:

Data analytics: Businesses relying heavily on data analytics generate new insights which improve efficiencies and give them a competitive edge. However, when it comes to analysing large sets of unstructured data that are exponentially increasing in volume and velocity, traditional IT infrastructure is often hamstrung due to slow storage speeds. To adapt to significantly larger data sets and compute-intensive analytics processes, researchers are looking to exascale computing systems, capable of performing one quintillion calculations per second. Powered with HPC, these advanced performance systems are expected to have a profound impact in the faster identification of pandemics, discovery of effective medication and indication of hazardous weather conditions before they happen.

Acceleration of AI deployments: According to a study, almost 70% of organisations in India are currently investing in artificial intelligence to deliver better business results. As HPC and AI continue to converge and evolve across more use cases and industries, the possibilities are nearly endless. Organisations are increasingly looking for pre-designed and pre-validated solutions to generate value instead of constructing IT infrastructure themselves. To help companies overcome the roadblocks involved in setting up an AI system, large vendors are also beginning to introduce reference architectures together with HPC infrastructure for data scientists and researchers. With HPC and software combined, launching new AI applications is becoming easier and faster. Technology vendors are helping data scientists focus less on maintaining AI systems and more on experimenting, exploring and uncovering insights. Organisations do not need to walk their AI journey alone: by collaborating with end-to-end technology infrastructure providers, they can co-design and customise unique HPC infrastructures to meet AI research, development and model deployment needs.

Growing support for HPC globally and in India: The business benefits of AI are being increasingly recognised, with infrastructure providers now helping the research community and customers capitalise on HPC, expanding it from a niche market to a broader audience. More than ever, customers and partners have access to guidance from technology vendors on how to kick-start their AI initiatives including design, installation, maintenance and most importantly, delivering tangible business outcomes for their organisations. Organisations are focused on promoting open collaboration, bringing together the wide-ranging experience and knowledge of technology developers, service providers and end-users in a worldwide forum that promotes the advancement of innovative, powerful HPC and AI solutions.

If you have an interesting article / experience / case study to share, please get in touch with us at [emailprotected]

Read the original post:

Harnessing the Power of Artificial Intelligence with High-Performance Computing: Dell Technologies - CRN - India - CRN.in

How Artificial intelligence is Transforming the Apparel Industry – BBN Times

Trend Spotting

Taking into account the fast-changing nature of fashion, it goes without saying that anticipating fashion trends is not only tricky but also time-consuming. Manually researching previously popular styles, social media fashion trends, and customer preferences, analysts were expected to spot upcoming trends. The guesswork done by these professionals may or may not be accurate. Besides the hassles of manual work, spotting fashion trends can also pose cost issues for fashion brands if not forecasted rightly. Instead, if brands invest in leveraging AI, they can cut down on all of these problems quickly.

The AI tool, trained with data of sufficient quality and quantity, will analyze past fashion data, check customer demand and preferences, gauge competitors' moves, and identify market trends. After processing the data, the AI tool will give accurate details on trendy styles and designs within minutes. With AI, fashion brands can bolster their apparel business by tracking the latest fashion trends in minutes, work that would otherwise take days or even months.

Realizing the potential of AI in design, many tech giants are already making big moves by integrating the technology for their benefit. For instance, a group of professionals at Amazon developed an AI tool that is capable of analyzing and learning from the images it is fed, and then generating an altogether new fashion design by itself. Amazon has also developed another AI application that can analyze the pictures it is given and conclude whether a particular style will look trendy or not. Beyond Amazon, dozens of other tech giants have already embarked on their AI journey, streamlining their design creation process completely. IBM, in collaboration with Tommy Hilfiger and The Fashion Institute of Technology (FIT), is using AI to empower designers in boosting the pace of the product development lifecycle.

With customers becoming restless, irritated, and grumpy when they do not receive quick assistance or service, fashion retailers face constant pressure to offer what customers want almost instantaneously. Several industry giants have already come up with technology-powered applications that promote an enhanced customer experience, one that goes beyond personalized ads, notification alerts on price drops, or chatbot assistance. Using this sophisticated technology, fashion brands strive to put customization at the forefront of the customer's buying journey. There are AI-powered personal stylist apps on the market that allow users to browse clothes online or to take pictures of their own clothes. Given these images as inputs, the app will recommend the best style according to the user's body type, complexion, and preferences while keeping fashion trends in mind. From personalized advertisement notifications, to price-drop alerts, to chatbots that answer doubts or queries, to now acting as a personal stylist with instant outfit suggestions, fashion brands can meet their aim of elevating the customer experience with the help of AI. With AI able to act as both a design assistant for designers and a personal stylist for consumers, it is clear that the impact of the technology is greater than we ever imagined.

The emergence of trend-setting technology like AI has changed the way businesses carry out their processes, and the discussion we've had proves that the apparel industry is no exception. With a majority of big fashion brands already tapping into the benefits and applications of AI, it is undeniable that the technology will soon become mainstream in medium-sized companies and startups as well. So for garment companies that haven't yet planned to adopt AI, the right time to plan and kick-start their digital transformation journey is today. After all, no one wants to be left behind in the digital race.

View post:

How Artificial intelligence is Transforming the Apparel Industry - BBN Times

How Airbus And Boeing Are Using Artificial Intelligence To Advance Autonomous Flight – Simple Flying

Pilot-less jetliners may still be far off in the future for several reasons, public trust in automated systems not being the least of them. However, this does not mean the software technology to support such operations has not developed in leaps and bounds. While there are several start-ups in tech-driven unmanned airborne vehicles, let's take a look at how the two main aircraft manufacturers use artificial intelligence in the quest for safe autonomous flight.

Artificial Intelligence (AI) is a divisive subject. Some herald it as the key solution to everything from Alzheimer's and cancer to food shortages and climate change. Others, more pessimistically or dystopically inclined, say it will be the end of humanity or, at the very least, take most of our jobs.

One thing is for certain, though; AI is here to stay, and it will have a massive impact on our everyday lives in the future. Aviation is often critiqued for having been slow off the mark when it comes to AI. However, things have begun to change, and AI's various applications will transform the industry in the decades to come.

Sophisticated data-driven algorithms will revolutionize everything from ticket pricing, air traffic control, and crew and maintenance schedules to aircraft assembly and natural language processing in the cockpit. And, of course, AI will have an enormous impact on more advanced technology such as autonomous vision-based navigation, or pilot-less planes, if you will.


A little over a year ago, on January 16th, 2020, Airbus completed the first fully automatic vision-based take-off and landing within the framework of its Autonomous Taxi, Take-Off and Landing (ATTOL) project. Rather than relying on an Instrument Landing System (ILS), the AI-controlled take-off was governed by image-recognition software installed on the aircraft.

Image recognition is software's ability to identify people, places, objects, and the like in images. You are involved in it every time you respond to a prompt to identify yourself as a human online by clicking on all the images containing a crosswalk, traffic light, or motorcycle. In the video below, it is clearly distinguishable how the software reads the visual input of the aircraft's surroundings to perform the take-off procedure.
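At its simplest, image recognition scores an input against known categories and picks the best match. A deliberately tiny sketch of that idea on invented 3x3 "images" (production systems like the one in ATTOL use trained deep networks, not template matching):

```python
import numpy as np

def classify(image, templates):
    """Toy image recognition: score an image against labeled templates and
    return the best-matching label."""
    scores = {label: float((image * t).sum()) for label, t in templates.items()}
    return max(scores, key=scores.get)

# Two 3x3 "images": a vertical bar and a horizontal bar.
vbar = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
hbar = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]])
templates = {"vertical": vbar, "horizontal": hbar}

label = classify(vbar, templates)  # the vertical bar matches its own template best
```

A neural network replaces the fixed templates with millions of learned parameters, but the output is the same kind of decision: which known category best explains the pixels.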

The ATTOL project was completed in June last year. However, Airbus has stated that its goal is for autonomous technologies to improve flight operations and overall performance, not to reach autonomous flight as a target in itself. Pilots, the planemaker says, will remain at the heart of operations.

Over in the other corner, in December 2020, Boeing completed a series of test flights exploring how high-performance uncrewed aircraft can operate together, controlled by AI using onboard command and data sharing. Aircraft were added one by one over a period of ten days until five operated as an autonomous unit, reaching speeds of up to 167 miles per hour.

"The tests demonstrated our success in applying artificial intelligence algorithms to teach the aircraft's brain to understand what is required of it," Emily Hughes, director of Phantom Works, Boeing's prototyping arm for its defense branch, said in a statement shared with Vision Systems Design at the time.

"With the size, number and speed of aircraft used in the test, this is a very significant step for Boeing and the industry in the progress of autonomous mission systems technology," Hughes continued.

While December's test flights were part of its defense business, Boeing stated that the technologies developed through the program would not only inform its developmental Airpower Teaming System (ATS) but apply to all future autonomous aircraft.

Meanwhile, Boeing's subsidiary Aurora Flight Sciences, part of Boeing NeXt, is building smaller autonomous flight vehicles. These include the Centaur, configured for autonomous flight and featuring radar-supported detect-and-avoid technology.

How soon would you get on a crewless aircraft? Are you excited about the prospects of autonomous flight? What do you consider to be the main issues? Let us know in the comments.

Here is the original post:

How Airbus And Boeing Are Using Artificial Intelligence To Advance Autonomous Flight - Simple Flying

Artificial Intelligence Robots Market to Witness Huge Growth by Top Key Players | Welltok, Inc., Intel Corporation, Nvidia Corporation And More KSU |…

By using the Artificial Intelligence Robots Market research report, organizations can gain vital information about competitors, economic shifts, demographics, current market trends and customer spending traits. This global marketing report puts forth real-world research solutions for every industry sector, along with meticulous data collection from non-public sources, to better equip businesses with the information they need most. The report comprises the scope, size, disposition and growth of the industry, including the key sensitivities and success factors. The Artificial Intelligence Robots Market report also covers five-year industry forecasts, growth rates and an analysis of the industry's key players and their market shares.

To steer clear of organizational slip-ups and to make critical business decisions, adequate market research is essential, and this Artificial Intelligence Robots Market research report is a prerequisite. While formulating the report, research analysts conduct smart, resourceful and engaging surveys designed to produce better results. By leveraging smart strategies and formats, the report helps businesses gain more conversions. With high-level skills and expertise, the DBMR team provides clients with a top-notch market research report. The Artificial Intelligence Robots Market report is also highly beneficial for growing a customer base, as it helps identify various hidden opportunities.

Get Free Sample Report at https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-artificial-intelligence-robots-market&yog

Artificial Intelligence Robots Market Drivers, Restraint and Key Development:

A focus on developing robots for special application cases that work and add value presents an opportunity.

The long time needed to commercialize robots is one of the challenges faced by the artificial intelligence robots market.

Asia-Pacific will dominate the artificial intelligence robots market because of the increasing adoption of deep learning and NLP technologies for retail and security applications in this region in the forecast period of 2020-2027.

Artificial Intelligence Robots Market Key Competitors:

The major players covered in the artificial intelligence robots market report are Welltok, Inc., Intel Corporation, Nvidia Corporation, Google Inc., IBM, Microsoft Corporation, General Vision, Enlitic, Inc., Next IT Corporation, iCarbonX, Amazon Web Services Inc., Apple Inc., Facebook Inc., Siemens, General Electric, Micron Technology, Samsung, Xillinx, Iteris, Atomwise, Inc., Lifegraph, Sense.ly, Inc., Zebra Medical Vision, Inc., Baidu, Inc., H2O ai, Enlitic, Inc. and Raven Industries, among other domestic and global players.

Artificial Intelligence Robots Market Analysis:

The artificial intelligence robots market is expected to grow at a CAGR of 20.38% in the forecast period of 2020 to 2027. The Data Bridge Market Research report on the artificial intelligence robots market provides analysis and insights regarding the various factors expected to be prevalent throughout the forecast period, along with their impacts on the market's growth.
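A constant 20.38% CAGR compounds quickly: assuming seven annual compounding periods from 2020 to 2027, it multiplies the base value roughly 3.7-fold. The arithmetic, as a quick sketch:

```python
def project(base, cagr, years):
    """Compound a base value at a constant annual growth rate (CAGR)."""
    return base * (1 + cagr) ** years

# Growth factor implied by a 20.38% CAGR over 2020 -> 2027 (seven periods).
growth_factor = project(1.0, 0.2038, 7)  # about 3.66x
```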

For Detailed Inquiry Contact us at https://www.databridgemarketresearch.com/inquire-before-buying/?dbmr=global-artificial-intelligence-robots-market&yog

Some of the major highlights of the TOC: Global Artificial Intelligence Robots Market

Chapter 1: Methodology & Scope

Definition and forecast parameters

Methodology and forecast parameters

Data Sources

Chapter 2: Artificial Intelligence Robots Market Executive Summary

Business trends

Regional trends

Product trends

End-use trends

Chapter 3: Artificial Intelligence Robots Market Industry Insights

Segmentation

Industry landscape

Vendor matrix

Technological and innovation landscape

Chapter 4: Artificial Intelligence Robots Market, By Region

Chapter 5: Artificial Intelligence Robots Market Company Profile

Business Overview

Financial Data

Product Landscape

Strategic Outlook

SWOT Analysis

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe or Asia.

Get Detailed Table of Content at https://www.databridgemarketresearch.com/toc/?dbmr=global-artificial-intelligence-robots-market&yog

Reason to Buy:

About Data Bridge Market Research

An absolute way to forecast what the future holds is to comprehend the trend today!

Data Bridge sets itself forth as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market. Data Bridge endeavors to provide appropriate solutions to complex business challenges and initiates an effortless decision-making process.

Data Bridge is adept at creating satisfied clients who rely on our services and our hard work with certitude. We are proud of our 99.9% client satisfaction rate.

Data Bridge Market Research

US: +1 888 387 2818

UK: +44 208 089 1725

Hong Kong: +852 8192 7475

Corporatesales@databridgemarketresearch.com

Visit link:

Artificial Intelligence Robots Market to Witness Huge Growth by Top Key Players | Welltok, Inc., Intel Corporation, Nvidia Corporation And More KSU |...

A crewless boat is recreating the Mayflower’s 400-year-old journey, with the help of artificial intelligence – CBS News

The Mayflower carried some of the first European settlers across the Atlantic Ocean to North America, 400 years ago this year.

To commemorate the anniversary, another vessel is recreating that voyage, with the help of artificial intelligence.

"We don't know how it's going to go. Is it going to make it across the Atlantic?" software engineer and emerging technology specialist Rosie Lickorish told CBS News' Roxana Saberi. "Fingers crossed that it does have a successful first voyage."

The vessel, docked in the harbor of Plymouth, England, will rely on the latest navigation technology when it sets out to sea, but it won't be carrying a crew or captain.

"We've got all sorts of cameras. We've got global positioning systems on either side," robotics expert Brett Phaneuf said.

What it won't have, he said, is "people space."

Instead the ship will be guided by artificial intelligence designed by IBM.

Phaneuf explained how the technology is supposed to work.

"It looks at its own cameras like eyes, it looks at the radar, it looks at all sorts of other sensors," he said. "Then it charts its own course and it can deal with unique situations without any human input."

Those situations include encountering other ships during the voyage, something software engineer Ollie Thompson is working hard to train the ship's programming to recognize, using more than a million different images.

"We're simulating what she's seeing," he said of the boat.

Programmers are also setting the ship's destination to Plymouth, Massachusetts, to retrace the Mayflower's four-century-old passage.

It took the wooden merchant ship 66 days to transport dozens of pilgrims across the Atlantic.

A replica sailed from England to Massachusetts in the 1950s, and is still docked there today.

But Phaneuf, who grew up near Plymouth, Massachusetts, wanted to mark the Mayflower's famous past by looking ahead instead.

"I thought, well, we should build a ship that speaks to the next 400 years. What the marine enterprise will look like then, as opposed to what it looked like 400 years ago," he said.

An international team turned his vision into the solar-and-wind-powered Mayflower autonomous ship. Its mission is to learn more about Earth's oceans by gathering data on plastic pollution, warming waters and their effects on marine life.

Software developer Rosie Lickorish said the autonomous ship is a more cost-effective way to perform the research.

"It's very expensive at the moment for scientists to actually go out on these research missions," she said. "So having autonomous vessels like the Mayflower Autonomous Ship is a really important step in actually enabling us to go out to these dangerous places and learn a lot more."

In addition to cost-saving, not having a crew means the size of the vessel can be compact, and there are no concerns over someone getting sick or hurt.

Brett Phaneuf said his biggest worry would be if something broke.

If the boat capsized, the team plans to track it via satellite and salvage it.

And if this voyage into the unknown succeeds, Phaneuf says it would commemorate history while charting a new path.

He said, "I want people to look back on this 400 years from now and think about how different this was from what other people were doing."

Link:

A crewless boat is recreating the Mayflower's 400-year-old journey, with the help of artificial intelligence - CBS News

Artificial Intelligence (AI): Is It All Just Costly Hype? – Dice Insights

Earlier this year, two partners at prominent venture-capital firm Andreessen Horowitz published an interesting blog post about artificial intelligence (A.I.). Specifically, is A.I. (and by extension, machine learning) capable of powering a sustainable business? Or is the tech industry infatuated with a technology that's just a lot of empty hype?

It's a worthy question as we close out 2020, considering how much money and how many resources companies are pouring into all things A.I.-related (often despite budget cutbacks related to the COVID-19 pandemic). Martin Casado and Matt Bornstein, the partners in question, conclude that A.I. is indeed viable, but that A.I.-centric businesses can't operate like traditional software firms.

Specifically, A.I. companies have lower gross margins (due to the need for lots of expensive and talented humans, as well as infrastructure expenses), scaling challenges (due to edge cases), and weaker defensive moats (because more A.I. tools and apps are becoming commoditized, among other issues).

"Training a single A.I. model can cost hundreds of thousands of dollars (or more) in compute resources," they wrote. "While it's tempting to treat this as a one-time cost, retraining is increasingly recognized as an ongoing cost, since the data that feeds A.I. models tends to change over time (a phenomenon known as 'data drift')."

If the A.I. model is training on something storage-intensive like video, things get even worse. Add on top of that the cost of humans to design and wrangle the models, and you can see how any hoped-for profits from an A.I. project could quickly evaporate.

The entire Andreessen Horowitz posting is worth reading, especially if you're debating whether to jump aboard an artificial intelligence startup. Amidst all the discussions of cloud-infrastructure costs and model complexity, though, one thing stands out: the overwhelming presence of human beings within A.I. systems that are supposedly becoming more and more automated.

It's not just a question of employing people who can build and continually maintain models. "For many tasks, especially those requiring greater cognitive reasoning, humans are often plugged into A.I. systems in real time," the posting added. "Social media companies, for example, employ thousands of human reviewers to augment A.I.-based moderation systems. Many autonomous vehicle systems include remote human operators, and most A.I.-based medical devices interface with physicians as joint decision makers."

And there's no end in sight to intervention: many problems, like self-driving cars, are too complex to be fully automated with current-generation A.I. techniques. Issues of safety, fairness, and trust also demand meaningful human oversight, a fact likely to be enshrined in A.I. regulations currently under development in the US, EU, and elsewhere.

We've seen these sorts of issues cropping up already among companies with artificial intelligence products. A few years ago, for example, Google rolled out Duplex, its automated voice assistant, which it predicted would revolutionize the process of making reservations and dealing with customer service. However, journalists quickly demonstrated there were relatively straightforward ways to stump Duplex. As of mid-2019, 25 percent of Google Duplex calls were supposedly made by human operators as opposed to an A.I.

Now consider all the A.I.-centric (or A.I.-hopeful, for those still trying to develop an application) businesses that don't have Google's talent or resources. The dream of building an artificial intelligence model that's fully capable of performing its assigned task without any sort of human intervention? Well, that's likely years away.

Andreessen Horowitz isn't the first firm to warn about this issue. In 2019, Arvind Krishna, IBM's senior vice president of cloud and cognitive software, warned that A.I. initiatives could implode once companies realize how much effort is truly necessary to prep the related data. "You run out of patience along the way, because you spend your first year just collecting and cleansing the data," he told the audience at The Wall Street Journal's Future of Everything Festival, according to the newspaper.

In a 2018 blog posting, A.I. researcher Filip Piekniewski listed all the ways in which the artificial intelligence hype wasn't matching reality, including a lack of progress at Google's DeepMind. Two years later, it's clear that A.I. is still grinding forward as a discipline, consuming lots of cash and talent as companies hope for incremental advances.

But at least artificial intelligence researchers are still making lots of cash. And, despite these challenges, keep in mind that automation is still a long-term risk to many professions.

Ultimately, A.I. and machine learning technologies that help companies handle customer personalization and communication, data analytics and processing, and a host of other applications will continue to grow, even if it takes longer than expected to achieve seamless automation. An IDC report found that three-quarters of commercial enterprise applications could lean on A.I. by next year, while an Analytics Insight report projects more than 20 million available jobs in artificial intelligence by 2023.

Whether you're a manager or a software developer, in other words, prepare for A.I. (even weaker A.I.) to change how you work. Make sure to review the 10 jobs that could be radically impacted by these technologies sooner than you think.


Visit link:

Artificial Intelligence (AI): Is It All Just Costly Hype? - Dice Insights