OpenAI aims to solve AI alignment in four years – Warp News

At its core, AI alignment seeks to ensure artificial intelligence systems resonate with human objectives, ethics, and desires. An AI that acts in harmony with these principles is termed 'aligned'. Conversely, an AI that veers away from these intentions is 'misaligned'.

The conundrum of AI alignment isn't new. In 1960, AI pioneer Norbert Wiener aptly highlighted the necessity of ensuring that machine-driven objectives align with genuine human desires. The alignment process encompasses two main hurdles: defining the system's purpose (outer alignment) and ensuring the AI robustly adopts this specification (inner alignment).

It is this unsolved problem that makes some people afraid of super-intelligent AI.

OpenAI, the organization behind ChatGPT, is spearheading this mission. Their goal? To devise a human-level automated alignment researcher. This means not only creating a system that understands human intent but also ensuring that it can keep evolving AI technologies in check.

Under the leadership of Ilya Sutskever, OpenAI's co-founder and Chief Scientist, and Jan Leike, Head of Alignment, the company is rallying the best minds in machine learning and AI.

"If youve been successful in machine learning, but you havent worked on alignment before, this is your time to make the switch", they write on their website.

"Superintelligence alignment is one of the most important unsolved technical problems of our time. We need the worlds best minds to solve this problem."

This is another example of why it is counterproductive to "pause" AI progress. AI gives us new tools to understand and create with. Out of that come tonnes of opportunities, like creating new proteins. But also new problems.

If we "pause" AI progress we won't get the benefits, but the problems will also be much harder to solve, because we won't have the tools to do that. Pausing development to first solve problems is therefore not a viable path.

One such problem was that we didn't understand exactly how tools like ChatGPT come up with their answers. But OpenAI used its latest model, GPT-4, to start explaining exactly that: the inner workings of earlier models.

Now OpenAI is repeating that approach to solve what some believe is an existential threat to humanity.

OpenAI's breakthrough in understanding AI's black box (so we can build safe AI)

OpenAI has found a way to solve part of the AI alignment problem, so we can understand and create safe AI.

WALL-Y is an AI bot created in ChatGPT.

Identity Security: A Super-Human Problem in the Era of Exponential … – Fagen wasanni

According to a new report by RSA, the exponential growth in the number of human and machine actors on the network, coupled with the increasing sophistication of technology, has made identity security a super-human problem. In this era where artificial intelligence (AI) can assess risks and respond to threats, human involvement becomes even more crucial in cybersecurity.

However, the report highlights significant gaps in respondents' knowledge regarding critical identity vulnerabilities and best practices for securing identity. Two-thirds of the respondents were unable to accurately identify the components needed for organizations to move towards zero trust. Similarly, many respondents failed to select the best-practice technologies for reducing phishing and lacked understanding of the full scope of identity capabilities that can improve an organization's security posture.

These findings align with third-party research, such as Verizon's report, which found that stolen credentials have become the most popular entry point for data breaches over the past five years. The gaps in users' identity knowledge provide cybercriminals with opportunities to exploit vulnerabilities.

Furthermore, personal devices pose security risks, as 72% of respondents believe that people frequently use them to access professional resources. Additionally, respondents expressed trust in technical innovations, like password managers and computers, to secure their information, as well as faith in artificial intelligence's potential to improve identity security.

The report also highlights the impact of fragmented identity solutions on costs and productivity. Nearly three-quarters of respondents underestimated the cost of a password reset, which can account for nearly half of all IT help desk costs. Additionally, inadequate identity governance and administration hindered organizational productivity, with 30% of respondents reporting weekly access issues.

These findings emphasize the need for organizations to invest in unified identity solutions and integrate artificial intelligence to keep pace with the evolving threat landscape. The report serves as a call to action for organizations to enhance their understanding of identity security and adopt comprehensive approaches to protect against cyber threats.

Will AI revolutionize professional soccer recruitment? – Engadget

Skeptics raised their eyebrows when Major League Soccer (MLS) announced plans to deploy AI-powered tools in its recruiting program starting at the tail end of this year. The MLS will be working with London-based startup ai.io, and its aiScout app to help the league discover amateur players around the world. This unprecedented collaboration is the first time the MLS will use artificial intelligence in its previously gatekept recruiting program, forcing many soccer enthusiasts and AI fans to reckon with the question: has artificial intelligence finally entered the mainstream in the professional soccer industry?

There's no doubt that professional sports have been primed for the potential impact of artificial intelligence. Innovations have the potential to transform the way we consume and analyze games from both an administrative and fan standpoint. For soccer specifically, there are opportunities for live game analytics, match outcome modeling, ball tracking, player recruitment, and even injury prediction; the opportunities are seemingly endless.

"I think that we're at the beginning of a tremendously sophisticated use of AI and advanced analytics to understand and predict human behaviors," Joel Shapiro, Northwestern University professor at the Kellogg School of Management said. Amid the wave, some experts believe the disruption of the professional soccer industry by AI is timely. Its no secret that soccer is the most commonly played sport in the world. With 240 million registered players globally and billions of fans, FIFA is currently made up of 205 member associations with over 300,000 clubs, according to the Library of Congress. Just days into the 64-game tournament, FIFA officials said that the Womens World Cup in Australia and New Zealand had already broken attendance records.

The need for more players and more talent taking on the big stage has kept college recruiting organizations like Sports Recruiting USA (SRUSA) busy. "We've got staff all over the world, predominantly in the US; everyone is always looking for players," said Chris Cousins, the founder and head of operations at SRUSA. Cousins said he is personally excited about the potential impact of artificial intelligence on his company and, in fact, he is not threatened by the implementation of predictive analysis impacting SRUSA's bottom line.

"It probably will replace scouts," he added, but at the same time, he said he believes the deployment of AI will make things more efficient. "It will basically streamline resources which will save organizations money." Cousins said that SRUSA has already started dabbling with AI, even if only in a modest way. It collaborated with a company called Veo that deploys drones that follow players and collect video for scouts to analyze later.

Luis Cortell, senior recruiting coach for men's soccer for NCSA College Recruiting, is a little less bullish, but still believes AI can be an asset. "Right now, soccer involves more of a feel for the player, and an understanding of the game, and there aren't any success metrics for college performance," he said. "While AI won't fully fill that gap, there is an opportunity to help provide additional context."

At the same time, people in the industry should be wary of idealizing AI as a godsend. "People expect AI to be amazing, to not make errors or if it makes errors, it makes errors rarely," Shapiro said. The fact is, predictive models will always make mistakes but both researchers and investors alike want to make sure that AI innovations in the space can make "fewer errors and less expensive errors" than the ones made by human beings.

But ultimately, Shapiro agrees with Cousins. He believes artificial intelligence will replace some payrolls for sure. "Might it replace talent scouts? Absolutely," he said. However, the ultimate decision-makers of how resources are being used will probably not be replaced by AI for some time. Contrary to both perspectives, Richard Felton-Thomas, director of sports sciences and chief operating officer at ai.io, said the technology being developed and used by the MLS will not replace scouts. "[They] are super important to the mentality side, the personality side," he said. "You've still got to watch humans behave in their sporting arena to really talent ID them."

When the aiScout app launches in the coming weeks and starts being deployed by the MLS later this year, players will be able to take videos of themselves performing specific drills. Those will then be uploaded and linked to the scout version of the app, where talent recruiters working for specific teams can discover players based on whatever criteria they choose. For example, a scout could look for a goalie with a specific height and kick score. Think of it as a cross between a social media website and a search engine. Once a selection is made, a scout would determine whether or not they should go watch a player in person before making any final recruitment decisions, Felton-Thomas explained.
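
To make that search step concrete, here is a minimal sketch of criteria-based filtering over player profiles. The data model, field names and scoring scale are hypothetical illustrations, not ai.io's actual schema:

```python
# Hypothetical sketch of the scout-side search described above.
# Field names and the "kick score" scale are invented for illustration;
# they are not ai.io's actual data model.
from dataclasses import dataclass

@dataclass
class PlayerProfile:
    name: str
    position: str
    height_cm: float
    kick_score: float  # benchmark score derived from an uploaded drill video

def search(players, position, min_height_cm=0.0, min_kick_score=0.0):
    """Return profiles matching a scout's chosen criteria."""
    return [
        p for p in players
        if p.position == position
        and p.height_cm >= min_height_cm
        and p.kick_score >= min_kick_score
    ]

players = [
    PlayerProfile("A. Keeper", "goalkeeper", 191, 82.5),
    PlayerProfile("B. Winger", "winger", 175, 77.0),
]

# e.g., "a goalie with a specific height and kick score"
for match in search(players, "goalkeeper", min_height_cm=185, min_kick_score=80):
    print(match)
```

In the workflow described above, the human decision still follows: a scout uses the matches only to decide whom to go watch in person.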

"The main AI actually happens less around the scoring and more around the video processing and the video tracking," Felton-Thomas said. "Sport happens at 200 frames per second type speeds, right? So you can't just have any old tracking model. It will not track the human fast enough." The AI algorithms that have been developed to analyze video content can translate human movements into what makes up a player's overall performance metrics and capabilities.

These performance metrics can include biographical data, video highlights and club-specific benchmarks that can be made by recruiters. The company said in a statement that the platform's AI technology is also able to score and compare the individual player's technical, athletic, cognitive and psychometric ability. Additionally, the AI can generate feedback based on benchmarked ratings from the range of the club trials available. The FIFA Innovation Programme, the experimental arm of the association that tests and engages with new products that want to enter the professional soccer market, reported that ai.io's AI-powered tools demonstrate a 97 percent accuracy level when compared to current gold standards.

Beyond the practical applications of AI-powered tools to streamline some processes at SRUSA, Cousins said that he recognizes a lot of the talent recruitment process is "very opinion based" and informed by potential bias. ai.io's talent recruitment app, because it is accessible to any player with a smartphone, broadens the MLS's reach to disadvantaged populations.

The larger goal is for aiScout to potentially disrupt bias by continuing to play a huge role in who gets what opportunity, or at least in the pre-screening process. Now, a scout can make the call to see a player in real life based on objective data related to how a player can perform physically. "The clubs are starting to realize we can't just rely on someone's opinion," Felton-Thomas said. Of course, it's not an end-all-be-all for bias, considering humans, with their own preferences, are the ones coding the AI. There is no complete expunging of favoritism from the equation, but it is one step in the right direction.

aiScout could open doors for players from remote or disadvantaged communities that don't necessarily have the means or opportunity to be seen by scouts in cups and tournaments. "Somebody super far in Alaska or Texas or whatever, who can't afford to play for a big club may never get seen by the right people but with this platform there, boom. They're going straight to the eyes of the right people," Cousins said about ai.io's app.

The MLS said in a statement that ai.io's technology "eliminates barriers like cost, geography and time commitment that traditionally limit the accessibility of talent discovery programs." Felton-Thomas said it is more important to understand that ai.io will democratize the recruiting process for the MLS, ensuring physical skills are the most important metric when leagues and clubs are deciding where to invest their money. "What we're looking to do is give the clubs a higher confidence level when they're making these decisions on who to sign and who to watch." By implementing the AI-powered app, recruitment timelines are also expected to be cut.

Silvia Ferrari, professor of mechanical and aerospace engineering at Cornell and associate dean for cross-campus engineering research, who runs the university's Laboratory for Intelligent Systems and Controls, couldn't agree more. AI has the potential to complement the expertise of recruiters while also helping "eliminate the bias that sometimes coaches might have for a particular player or a particular team," Ferrari said.

In a similar vein, algorithms developed in Ferrari's lab can predict the in-game actions of volleyball players with more than 80% accuracy. Now the lab, which has been working on AI-powered predictive tools for the past three years, is collaborating with Cornell's Big Red men's ice hockey team to expand the project's applications. Ferrari and her team have trained the algorithm to extract data from videos of games and then use that to make predictions about game stats and player performance when shown a new set of data.

"I think what we're doing is, like, very easily applicable to soccer," Ferrari said. She said the only reason her lab is not focused on soccer is because the fields are so large that her teams cameras could not always deliver easily analyzed recordings. There is also the struggle with predicting trajectory and tracking the players, she explained. However, she said in hockey, the challenges are similar enough, but because there are fewer players and the fields are smaller, so the variables are more manageable to tackle.

While the focus at Ferrari's lab may not be soccer, she is convinced that research in the predictive AI space has made it "so much more promising to develop AI in sports and made the progress much faster." The algorithms developed by Ferrari's lab have been able to help teams analyze different strategies and therefore help coaches identify the strengths and weaknesses of particular players and opponents. "I think we're making very fast progress," Ferrari said.

The next areas Ferrari plans to try to apply her lab's research to include scuba diving and skydiving. However, Ferrari admits there are some technical barriers that need to be overcome by researchers. "The current challenge is real-time analytics," she said. A lot of that challenge is based on the fact that the technology is only capable of making predictions based on historical data. Meaning, if there is a shortage of historical data, there is a limit to what the tech can predict. Beyond technical limitations, Felton-Thomas said implementing AI in the real world is expensive and without the right partnerships, like the ones made with Intel and AWS, it would not have been possible fiscally.

Felton-Thomas said ai.io anticipates tens of millions of users over the next couple of years. And the company attributes that expected growth to partnerships with the right clubs, like Chelsea FC and Burnley FC in the UK, and the MLS in the United States. And while aiScout was initially designed for soccer, the company claims that its core functionalities can be adapted for use in other sports.

But despite ai.io's projections for growth and all the buzz around AI, the technology is still a long way from being widely trusted. From a technology standpoint, Ferrari said there's still a lot of work to be done, and a lot of the need for improvement is not just based on problems with feeding algorithms historical data. Predictive models need to be smart enough to adapt to ever-changing variables in the current environment. On top of that, public skepticism of artificial intelligence is still rampant in the mainstream, let alone in soccer.

"If the sport changes a little bit, if the way in which players are used changes a little bit, if treatment plans for mid-career athletes change, whatever it is, all of a sudden, our predictions are less likely to be good," Shapiro said. But he's confident that the current models will prove valuable and informative. At least for a little while.

Meet the Maker: Developer Taps NVIDIA Jetson as Force Behind AI … – Nvidia

Goran Vuksic is the brain behind a project to build a real-world pit droid, a type of Star Wars bot that repairs and maintains podracers which zoom across the much-loved film series.

The edge AI Jedi used an NVIDIA Jetson Orin Nano Developer Kit as the brain of the droid itself. The devkit enables the bot, which is a little less than four feet tall and has a simple webcam for eyes, to identify and move its head toward objects.

Vuksic, originally from Croatia and now based in Malmö, Sweden, recently traveled with the pit droid across Belgium and the Netherlands to several tech conferences. He presented to hundreds of people on computer vision and AI, using the droid as an engaging real-world demo.

A self-described Star Wars fanatic, he's upgrading the droid's capabilities in his free time, when not engrossed in his work as an engineering manager at a Copenhagen-based company. He's also co-founder and chief technology officer of syntheticAIdata, a member of the NVIDIA Inception program for cutting-edge startups.

The company, which creates vision AI models with cost-effective synthetic data, uses a connector to the NVIDIA Omniverse platform for building and operating 3D tools and applications.

Named a Jetson AI Specialist by NVIDIA and an AI Most Valuable Professional by Microsoft, Vuksic got started with artificial intelligence and IT about a decade ago when working for a startup that classified tattoos with vision AI.

Since then, he's worked as an engineering and technical manager, among other roles, developing IT strategies and solutions for various companies.

Robotics has always interested him, as he was a huge sci-fi fan growing up.

"Watching Star Wars and other films, I imagined how robots might be able to see and do stuff in the real world," said Vuksic, also a member of the NVIDIA Developer Program.

Now, he's enabling just that with the pit droid project powered by the NVIDIA Jetson platform, which the developer has used since the launch of its first product nearly a decade ago.

Apart from tinkering with computers and bots, Vuksic enjoys playing the bass guitar in a band with his friends.

Vuksic built the pit droid for both fun and educational purposes.

As a frequent speaker at tech conferences, he takes the pit droid on stage to engage with his audience, demonstrate how it works and inspire others to build something similar, he said.

"We live in a connected world; all the things around us are exchanging data and becoming more and more automated," he added. "I think this is super exciting, and we'll likely have even more robots to help humans with tasks."

Using the NVIDIA Jetson platform, Vuksic is at the forefront of robotics innovation, along with an ecosystem of developers using edge AI.

Vuksic's pit droid project, which took him four months, began with 3D printing its body parts and putting them all together.

He then equipped the bot with the Jetson Orin Nano Developer Kit as the brain in its head, which can move in all directions thanks to two motors.

The Jetson Orin Nano enables real-time processing of the camera feed. "It's truly, truly amazing to have this processing power in such a small box that fits in the droid's head," said Vuksic.

He also uses Microsoft Azure to process the data in the cloud for object-detection training.

"My favorite part of the project was definitely connecting it to the Jetson Orin Nano, which made it easy to run the AI and make the droid move according to what it sees," said Vuksic, who wrote a step-by-step technical guide to building the bot, so others can try it themselves.
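
For readers who want the flavor of such a detect-and-steer loop, here is a rough sketch, not Vuksic's code (his guide has the real details). It uses OpenCV's stock face detector as a stand-in for whatever object-detection model the droid actually runs, and the motor functions are hypothetical placeholders:

```python
# Sketch of a webcam detect-and-steer loop like the one described above.
# The face detector stands in for the droid's object-detection model, and
# set_pan/set_tilt are placeholders for a hypothetical two-motor interface.
import cv2

face_model = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # the droid's webcam "eyes"

def set_pan(error_x):   # placeholder: drive the horizontal head motor
    pass

def set_tilt(error_y):  # placeholder: drive the vertical head motor
    pass

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detections = face_model.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(detections) > 0:
        x, y, w, h = detections[0]
        # Steer so the detected object drifts toward the frame center.
        set_pan((x + w / 2) - frame.shape[1] / 2)
        set_tilt((y + h / 2) - frame.shape[0] / 2)
```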

"The most challenging part was traveling with the droid: there was a bit of explanation necessary when I was passing security and opened my bag, which contained the robot in parts," the developer mused. "I said, 'This is just my big toy!'"

Learn more about the NVIDIA Jetson platform.

10 Jobs That Artificial Intelligence May Replace Soon – TechJuice

The emergence of artificial intelligence has changed the shape of the world: it is now hard to get through a day without bumping into it, and it will reshape the whole world in the coming years. Many compare its rise to the rise of automation over the last few decades, which saw entire industries reshaped by robotics and self-operating machines.

A recent Goldman Sachs report states that AI is expected to replace 300 million workers. Here we highlight 10 occupations that AI could make obsolete soon.

The next generation of AI models is highly capable and has the potential to take on a wide range of financial tasks. Instead of being trained on a broad sweep of data, such models can follow more targeted pathways, and one of the things companies are working on is training AI to perform basic accounting duties.

Multiple accounting companies, including Safe, are already using artificial intelligence to automate their processes. Once the bots are fully trained, they will steadily reduce the need for human input. Automation will replace entry-level accounting jobs, leaving organizations composed of auditors and overseers who simply monitor the AI's work.

Social platforms like Facebook and Twitter employ content moderators who must filter out offensive images, videos, and other unwanted content so that it won't become public. A large portion of this material is already pre-screened by AI algorithms, but the final decisions are made by human beings.

As these programs improve and grow more accurate, they will steadily reduce the demand for human moderators.

AI is performing well in many fields, but it is not yet clear whether it is good enough at judgment calls to go in front of judges. The lower echelons at law firms, however, might feel vulnerable: once lawyers see how competently AI and machine learning can produce citations and summaries, they may feel insecure.

AI has the potential to take over much of this routine legal work and produce results that are well-grounded in the facts.

ChatGPT is intelligent enough to generate readable text, poems, and prose, and AI can be a great proofreader, helping to find human errors. People already use software and applications to find errors in text: word processors and Google Docs are the best-known examples, and it's a short step from showing a red underline on a misspelled word to letting the computer correct it on its own. That role has traditionally fallen to humans, but with a large enough data set, a machine can handle it intelligently.
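
To make the red-underline-to-autocorrect step concrete, here is a toy sketch using only Python's standard library. Real AI proofreaders are far more sophisticated; this just shows the mechanical core of flagging an unknown word and substituting the closest known one:

```python
# Toy autocorrect: flag words not in the dictionary (the "red underline")
# and replace each with its closest known word. Real proofreading models
# use context, not just string similarity.
import difflib

DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def autocorrect(text: str) -> str:
    fixed = []
    for word in text.lower().split():
        if word in DICTIONARY:
            fixed.append(word)
        else:
            match = difflib.get_close_matches(word, DICTIONARY, n=1)
            fixed.append(match[0] if match else word)
    return " ".join(fixed)

print(autocorrect("the quikc brown fxo jumps"))
# -> "the quick brown fox jumps"
```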

AI can perform trading tasks very efficiently, and the algorithms behind it are well-equipped for the job. Entry-level stock traders at big banks, fresh out of business school, spend much of their precious time building predictive models in Excel and managing data.

Though these have been human tasks, AI can perform them more efficiently, reducing the possibility of error and opening the door to more complex comparative modeling. People higher up in management will still be needed, but those at lower levels should be ready for AI to make their roles obsolete soon.

Voice recognition algorithms and translators have been improving at a staggering pace over the last decades. Voice recognition systems first came on the market in 1952, but modern machine learning enables systems to understand language far more easily and to produce incredibly fast and accurate audio transcriptions.

Many occupations and organizations rely on transcription, including journalists and lawyers, so these tools can help generate the content they need.

AI models are advanced enough to produce images in a large array of styles. Graphic design is itself a very creative field, but it also rests on certain principles of color, contrast, composition, and readability; indeed, it's a perfect sandbox for a machine learning tool to play in.

Imagine giving text to an AI and watching it produce thousands of layouts for a billboard, a magazine, or other visual materials in just a few seconds, offering multiple designs to work from. Once the client chooses one, it is simple for the software to flow the content into a final print-ready file.

Text-to-speech software is an impressive way to answer queries. AI is much more accurate and easier to scale than a call center, and it helps companies cut agent salaries and unpredictable staffing costs. Many companies already use AI to answer customer queries, and it has proven itself a capable call agent.

The loss of brave soldiers is a heavy cost for any country to bear. AI has stepped into this field as well and made its mark: military agencies now use machines and technological weapons in defense of their countries.

Self-directed munitions are not new, but technological advancement has enabled drones and other weapons of war to make decisions and do battle without human oversight. Hence, some expect that World War III would largely be fought by machines.

AI is one of the best tools for producing relevant content, built on algorithms that generate authentic and accurate text.

ChatGPT and other bots are the best-known examples, able to write poems, prose, and other kinds of text.

Many companies are using ChatGPT to write articles and blog posts, and AI has the potential to replace writers and content creators as it generates increasingly accurate content.

The Future of Video Conferencing: How AI and Big Data are … – Analytics Insight

Video conferencing is now everywhere: on TV when newscasters talk to a reporter in a faraway land, or when you are FaceTiming with your friends. As technology rapidly evolves, several factors could really change the way we use it. Although it has its advantages and disadvantages, the technology has undeniably become a necessity for the modern world.

AI is one of the key features transforming the user experience in video conferencing. It increases the automation and efficiency of webinars, and it makes them more user-friendly through personalization and customization: AI algorithms analyze user behavior, preferences and historical data to deliver personalized experiences and recommendations. This increases customer satisfaction, boosts engagement and improves business outcomes.

The future of video conferencing will be shaped by advances in artificial intelligence (AI) and big data analytics. These technologies are revolutionizing remote collaboration by enhancing the user experience, improving video quality, enabling intelligent features, and providing valuable insights. Here are some of the ways AI and big data are transforming video conferencing.

AI algorithms can analyze video and audio streams in real time to improve the quality of video conferencing. They can remove background noise, refine image clarity, and adjust lighting conditions to give attendees a better experience.

AI-powered video conferencing platforms can create virtual backgrounds and apply augmented reality (AR) effects. With this feature, attendees can change the setting, add virtual objects, act as avatars, and more, making meetings more engaging and interactive. This feature is super useful because it lets people hide their real background during online meetings and maintain their privacy.

Natural language processing (NLP) algorithms play a central role in video conferencing by enabling real-time transcription and translation of conversations. These algorithms can automatically convert spoken words to text and even perform language translation on the fly, breaking down language barriers and improving communication and understanding among participants in multilingual meetings. Therefore it can play a crucial role in helping businesses gain worldwide recognition.

AI-powered virtual assistants can join video meetings and assist participants by taking meeting notes, summarizing discussions, and planning follow-up tasks. These assistants can also be integrated with other tools and applications to automate workflows and increase productivity.

AI-based facial recognition can be used to identify participants and automatically tag them with name, title, and other relevant information. Sentiment analysis algorithms analyze facial expressions to detect emotions and provide valuable insight into participant reactions and engagement.

Video conferencing platforms generate large amounts of data, such as usage patterns, meeting duration, participant engagement, and shared content. By analyzing this data, organizations can gain insights into meeting effectiveness, resource allocation and participant engagement. This helps improve collaboration and productivity by identifying opportunities to streamline workflows and make data-driven decisions.

AI-powered video conferencing platforms integrate seamlessly with other collaboration tools such as project management software, document sharing platforms, and customer relationship management systems. This integration enables more efficient remote collaboration.

VR and holographic technologies are still in their early stages, but they offer great potential for video conferencing. By combining AI algorithms with VR, you can create an immersive virtual meeting room where participants feel like they are physically in the same room. A holographic display can project a lifelike 3D representation of a remote participant, enhancing realism and interaction.

AI-powered video conferencing platforms are constantly evolving to address security and privacy concerns. Advanced encryption algorithms, biometrics, and AI-powered anomaly detection help ensure secure communications and protect sensitive data during remote collaboration.

The use of robotics, especially telepresence robots, in video conferencing offers several advantages and opportunities. Here is how robotics will improve video conferencing:

Telepresence robots allow individuals to have a virtual presence at a remote location instantly. This means that you can be on location without physically being there. Furthermore, telepresence robots go beyond a simple video conference call. The operator has full control over what they wish to see, eliminating the need for multiple people to adjust their positions to be seen on the video screen. This enhances communication and makes interactions more seamless.

Telepresence robots are designed to navigate large workspaces, event spaces, and retail spaces. Remote users can easily and safely navigate these environments for a more immersive experience. Telepresence robots have also been introduced to improve accessibility for people with speech and motor disabilities. These robots allow you to attend meetings and interact with others remotely, thus overcoming the traditional challenges of conference calls.

IoT (Internet of Things) can indeed enhance video conferencing in several ways. Here are a few examples:

Smart Cameras: IoT-enabled cameras can automatically detect and track participants during a video conference. They can adjust focus, zoom, and framing to ensure everyone is visible and well-positioned in the frame. Smart cameras can also use facial recognition to identify speakers and switch between different views accordingly.

Voice-Activated Controls: IoT devices equipped with voice recognition can allow participants to control various aspects of the video conferencing system using voice commands. For example, users can start or end a call, adjust volume, mute/unmute, or switch between different modes with voice-activated controls, making the conferencing experience more intuitive and hands-free.

Access to Real-Time Information: IoT combined with video conferencing allows for instant access to information without disrupting the flow of a meeting. This means that participants can retrieve relevant data, documents, or presentations in real-time, improving the quality and speed of video conferences.

Scalability: Video conferencing platforms require significant computing resources to handle the processing and transmission of audio, video, and data streams. Cloud computing provides scalability by allowing video conferencing providers to dynamically allocate resources based on demand. This ensures that the system can handle a large number of participants and deliver a consistent and high-quality experience.

Data Storage and Backup: Cloud computing provides scalable and secure storage solutions. Video conferencing platforms can leverage cloud storage to store recorded meetings, transcripts, and associated data. Additionally, cloud-based backup services ensure that critical meeting data is protected against data loss and can be easily recovered if needed.

Ease of Implementation and Updates: Cloud-based video conferencing solutions are typically easier to implement compared to on-premises solutions. Users can quickly set up and start using the service without extensive technical knowledge or infrastructure requirements. Cloud providers also handle software updates and maintenance, ensuring that users have access to the latest features and improvements.

The blockchain is a decentralized, distributed, and often public digital ledger consisting of records called blocks that are used to record transactions across many computers so that any involved block cannot be altered retroactively.

This technology provides the high level of security and trust required for modern digital transactions, which can benefit video conferencing platforms in the following ways, among others.

Decentralized Video Conferencing Platforms: Blockchain technology can enable the development of decentralized video conferencing platforms. These platforms can operate on a peer-to-peer network, utilizing blockchain for identity verification, encryption, and data integrity. Decentralized platforms can offer increased privacy, censorship resistance, and resilience to single points of failure.

Tokenized Rewards and Incentives: Video conferencing platforms can leverage blockchain-based tokens to reward participants for their contributions. For instance, active participants who contribute valuable insights or provide technical support can receive tokens as incentives. This gamification approach encourages engagement and fosters a collaborative environment during video conferences.

Recording and Intellectual Property Protection: Blockchain technology can be utilized to securely store and verify recordings of video conferencing sessions. By timestamping and hashing the recordings on the blockchain, participants can prove the authenticity and integrity of the content. This can be particularly useful for legal or intellectual property purposes.
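
As a minimal sketch of that timestamp-and-hash idea (committing the resulting record to an actual chain is out of scope here, and the file name is invented):

```python
# Fingerprint a meeting recording so its integrity can be proven later.
# Any blockchain that stores this record immutably would serve; the
# anchoring step itself is omitted from this sketch.
import hashlib
import json
import time

def fingerprint_recording(path: str) -> dict:
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            sha256.update(chunk)
    return {
        "file": path,
        "sha256": sha256.hexdigest(),
        "timestamp": int(time.time()),
    }

record = fingerprint_recording("meeting-2023-08-04.mp4")  # hypothetical file
print(json.dumps(record, indent=2))
# Later, re-hashing the file and comparing digests proves the recording
# was not altered after the timestamped record was committed.
```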

By leveraging IoT technology, organizations can create smarter, more intuitive, and optimized video conferencing experiences for their users. Overall, using robots in video conferencing, especially remote presentation robots, provides a more immersive and interactive experience, improves communication and collaboration, and saves time. Together, AI and big data are revolutionizing video conferencing by improving communication quality, enabling intelligence, delivering valuable insights, and enhancing collaboration experiences from across the globe. As these technologies evolve, we can expect more innovative applications to emerge, revolutionizing the way we communicate and collaborate remotely.

Denial of service threats detected thanks to asymmetric behavior in … – Science Daily

Scientists have developed a better way to recognize a common internet attack, improving detection by 90 percent compared to current methods.

The new technique developed by computer scientists at the Department of Energy's Pacific Northwest National Laboratory works by keeping a watchful eye over ever-changing traffic patterns on the internet. The findings were presented on August 2 by PNNL scientist Omer Subasi at the IEEE International Conference on Cyber Security and Resilience, where the manuscript was recognized as the best research paper presented at the meeting.

The scientists modified the playbook most commonly used to detect denial-of-service attacks, where perpetrators try to shut down a website by bombarding it with requests. Motives vary: Attackers might hold a website for ransom, or their aim might be to disrupt businesses or users.

Many systems try to detect such attacks by relying on a raw number called a threshold. If the number of users trying to access a site rises above that number, an attack is considered likely, and defensive measures are triggered. But relying on a threshold can leave systems vulnerable.

"A threshold just doesn't offer much insight or information about what it is really going on in your system," said Subasi. "A simple threshold can easily miss actual attacks, with serious consequences, and the defender may not even be aware of what's happening."

A threshold can also create false alarms that have serious consequences themselves. False positives can force defenders to take a site offline and bring legitimate traffic to a standstill -- effectively doing what a real denial-of-service attack, also known as a DOS attack, aims to do.

"It's not enough to detect high-volume traffic. You need to understand that traffic, which is constantly evolving over time," said Subasi. "Your network needs to be able to differentiate between an attack and a harmless event where traffic suddenly surges, like the Super Bowl. The behavior is almost identical."

As principal investigator Kevin Barker said: "You don't want to throttle the network yourself when there isn't an attack underway."

Denial of service -- denied

To improve detection accuracy, the PNNL team sidestepped the concept of thresholds completely. Instead, the team focused on the evolution of entropy, a measure of disorder in a system.

Usually on the internet, there's consistent disorder everywhere. But during a denial-of-service attack, two measures of entropy go in opposite directions. At the target address, many more clicks than usual are going to one place, a state of low entropy. But the sources of those clicks, whether people, zombies or bots, originate in many different places -- high entropy. The mismatch could signify an attack.

In PNNL's testing, 10 standard algorithms correctly identified on average 52 percent of DOS attacks; the best one correctly identified 62 percent of attacks. The PNNL formula correctly identified 99 percent of such attacks.

The improvement isn't due only to the avoidance of thresholds. To improve accuracy further, the PNNL team added a twist by not only looking at static entropy levels but also watching trends as they change over time.

Formula vs. formula: Tsallis entropy for the win

In addition, Subasi explored alternative options to calculate entropy. Many denial-of-service detection algorithms rely on a formula known as Shannon entropy. Subasi instead settled on a formula known as Tsallis entropy for some of the underlying mathematics.

Subasi found that the Tsallis formula is hundreds of times more sensitive than Shannon at weeding out false alarms and differentiating legitimate flash events, such as high traffic to a World Cup website, from an attack.

That's because the Tsallis formula amplifies differences in entropy rates more than the Shannon formula. Think of how we measure temperature. If our thermometer had a resolution of 200 degrees, our outdoor temperature would always appear to be the same. But if the resolution were 2 degrees or less, like most thermometers, we'd detect dips and spikes many times each day. Subasi showed that it's similar with subtle changes in entropy, detectable through one formula but not the other.
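
To illustrate the idea (a toy sketch, not PNNL's published algorithm; the window split and thresholds are arbitrary choices for the example), here is how the two entropy formulas and the asymmetry check might look:

```python
# Entropy-asymmetry sketch: during a DOS attack, destination-address
# entropy falls (traffic converges on one target) while source-address
# entropy rises (requests come from everywhere). Not PNNL's algorithm.
import math
from collections import Counter

def shannon_entropy(counts: Counter) -> float:
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def tsallis_entropy(counts: Counter, q: float = 2.0) -> float:
    # Tsallis generalization; amplifies small differences more than Shannon.
    total = sum(counts.values())
    return (1.0 - sum((c / total) ** q for c in counts.values())) / (q - 1.0)

def looks_like_dos(packets, entropy=tsallis_entropy, dst_drop=0.5, src_rise=1.5):
    """packets: list of (src_ip, dst_ip) pairs; compares halves of a window."""
    half = len(packets) // 2
    earlier, later = packets[:half], packets[half:]
    dst_trend = (entropy(Counter(d for _, d in earlier)),
                 entropy(Counter(d for _, d in later)))
    src_trend = (entropy(Counter(s for s, _ in earlier)),
                 entropy(Counter(s for s, _ in later)))
    # Opposite-direction trends, rather than a raw volume threshold.
    return (dst_trend[1] < dst_drop * dst_trend[0]
            and src_trend[1] > src_rise * src_trend[0])
```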

The PNNL solution is automated and doesn't require close oversight by a human to distinguish between legitimate traffic and an attack. The researchers say that their program is "lightweight" -- it doesn't need much computing power or network resources to do its job. This is different from solutions based on machine learning and artificial intelligence, said the researchers. While those approaches also avoid thresholds, they require a large amount of training data.

Now, the PNNL team is looking at how the buildout of 5G networking and the booming internet of things landscape will have an impact on denial-of-service attacks.

"With so many more devices and systems connected to the internet, there are many more opportunities than before to attack systems maliciously," Barker said. "And more and more devices like home security systems, sensors and even scientific instruments are added to networks every day. We need to do everything we can to stop these attacks."

The work was funded by DOE's Office of Science and was done at PNNL's Center for Advanced Architecture Evaluation, funded by DOE's Advanced Scientific Computing Research program to evaluate emerging computing network technologies. PNNL scientist Joseph Manzano is also an author of the study.

3 Cheap Machine Learning Stocks That Smart Investors Will Snap … – InvestorPlace

Machine learning stocks represent publicly traded firms specializing in a subfield of artificial intelligence (AI). The terms AI and machine learning have become synonymous, but machine learning is really about making machines imitate intelligent human behavior. Semantics aside, machine learning and AI have come to the forefront in 2023.

Generative AI has boomed this year, and the race is on to identify the next must-buy shares in the sector. The firms identified in this article aren't cheap in an absolute sense. Their price can be quite high. However, they are expected to provide strong returns, making them a bargain for investors currently and cheap in a relative sense.

Let's begin our discussion of machine learning stocks with ServiceNow (NYSE:NOW). The firm offers a cloud computing platform utilizing machine learning to help firms manage their workflows. Enterprise AI is a burgeoning field that will only continue to grow as firms integrate machine learning into their workflows.

As mentioned in the introduction, ServiceNow is not cheap in an absolute sense. At $563 a share, there are a lot of other equities that investors could buy for much cheaper. However, Wall Street expects ServiceNow to move past $600 and perhaps $700. The metrics-oriented website Gurufocus believes ServiceNow's potential returns are even higher, pegging its value at $790.

The firm's Q2 earnings report, released July 26, gives investors a lot of reason to believe that share prices should continue to rise. The firm exceeded revenue growth and profitability guidance during the period, which gave management the confidence to raise subscription revenue and margin guidance for the year.

Q2 subscription revenue reached $2.075 billion, up 25% year-over-year (YOY). Total revenues reached $2.150 billion in the quarter.

AMD (NASDAQ:AMD) and its stock continued to be overshadowed by its main rival, Nvidia (NASDAQ:NVDA). The former has almost doubled in 2023, while the latter has more than tripled. It's basically become accepted that AMD is far behind its competition in all things AI and machine learning. However, the news is mixed, making AMD particularly interesting as Nvidia shares are continually scrutinized for their price levels.

An article from early 2023 noted that the comparison between AMD and Nvidia isn't unfair. It concluded that Nvidia is better all around. However, that article also touched on the notion that AMD could potentially optimize its cards through software capabilities inherent to the firm.

That was the same conclusion MosaicML came to when testing the two firms head-to-head several months later. AMD isn't very far behind Nvidia, after all, and it has a chance to make up ground via its software prowess. That's exactly why investors should consider AMD currently, given its relatively cheaper price.

CrowdStrike (NASDAQ:CRWD) operates at the intersection of growing fields: cybersecurity and machine learning directed toward identifying IT threats. It provides endpoint security and recently took top honors at the SC Awards Europe 2023 for the second consecutive year. The company is well-regarded in its industry and is growing very quickly.

The company also has strong fundamentals. In Q1, revenues increased by 61% YOY, reaching $487.8 million. CrowdStrike's net loss narrowed from $85 million to $31.5 million during the period YOY. The firm generated $215 million in cash flow, leaving a lot of room to maneuver overall.

Furthermore, CrowdStrike announced it is partnering with Amazon (NASDAQ:AMZN) to work with AWS on generative AI applications to increase security. CrowdStrike is arguably the best endpoint security stock available overall, and its strong inroads into AI and machine learning have set it up for even greater growth moving forward.

On the date of publication, Alex Sirois did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Alex Sirois is a freelance contributor to InvestorPlace whose personal stock investing style is focused on long-term, buy-and-hold, wealth-building stock picks. Having worked in several industries from e-commerce to translation to education and utilizing his MBA from George Washington University, he brings a diverse set of skills through which he filters his writing.

A comparative study of predicting the availability of power line … – Nature.com

Preventing Bias In Machine Learning – Texas A&M Today – Texas A&M University Today

"Based on data, machine learning can quickly and efficiently analyze large amounts of information to provide suggestions and help make decisions. For example, phones and computers expose us to machine learning technologies such as voice recognition, personalized shopping suggestions, targeted advertisements and email filtering."

Dr. Na Zou

Texas A&M Engineering

Machine learning impacts extensive applications across diverse sectors of the economy, including health care, public services, education and employment opportunities. However, it also brings challenges related to bias in the data it uses, potentially leading to discrimination against specific individuals or groups.

To combat this problem, Dr. Na Zou, an assistant professor in the Department of Engineering Technology and Industrial Distribution at Texas A&M University, aims to develop a data-centric fairness framework. To support her research, Zou received the National Science Foundation's Faculty Early Career Development Program (CAREER) Award.

She will focus on developing a framework from different aspects of common data mining practices that can eliminate or reduce bias, promote data quality and improve modeling processes for machine learning.

"Machine learning models are becoming pervasive in real-world applications and have been increasingly deployed in high-stakes decision-making processes, such as loan management, job applications and criminal justice," Zou said. "Fair machine learning has the potential to reduce or eliminate bias from the decision-making process and avoid making unwarranted implicit associations or amplifying societal stereotypes about people."

According to Zou, fairness in machine learning refers to the methods or algorithms used to address the phenomenon whereby machine learning algorithms naturally inherit or even amplify the bias in the data.

"For example, in health care, fair machine learning can help reduce health disparities and improve health outcomes," Zou said. "By avoiding biased decision making, medical diagnoses, treatment plans and resource allocations can be more equitable and effective for diverse patient populations."

Additionally, users of machine learning systems can enhance their experiences across various applications by mitigating bias. For instance, fair algorithms can incorporate individual preferences in recommendation systems or personalized services without perpetuating stereotypes or excluding certain groups.

To develop unbiased machine learning technologies, Zou will investigate data-centric algorithms capable of systematically modifying datasets to improve model performance. She will also look at theories that facilitate fairness through improving data quality, while incorporating insights from previous research in implicit fairness modeling.

The challenge of developing a fairness framework lies in problems within the original data used in machine learning technologies. In some instances, the data may lack quality, leading to missing values, incorrect labels and anomalies. In addition, when the trained algorithms are deployed in real-world systems, they usually face problems of deteriorated performance due to data distribution shifts, such as a covariate or concept shift. Although the data can be incomplete, it is used to make impactful decisions throughout various fields.

"For example, the trained models on images from sketches and paintings may not achieve satisfactory performance when used on natural images or photos," Zou said. "Thus, the data quality and distribution shift issues make detecting and mitigating models' discriminative behavior much more difficult."

If successful, Zou believes the outcome of this project will lead to advances in facilitating fairness in computing. The project will produce effective and efficient algorithms to explore fair data characteristics from different perspectives and enhance generalizability and trust in the machine learning field. This research is expected to impact the broad utilization of machine learning algorithms in essential applications, enabling non-discriminatory decision-making processes and promoting a more transparent platform for future information systems.

"Receiving this award will help me achieve my short-term and long-term goals," Zou said. "My short-term goal is to develop fair machine learning algorithms through mitigating fairness issues from computational challenges and broadening the impact through disseminating research outcomes and a comprehensive educational toolkit. The long-term goal is to extend the efforts to all aspects of society to deploy fairness-aware information systems and enhance society-level fair decision-making through intensive collaborations with industries."

View post:

Preventing Bias In Machine Learning - Texas A&M Today - Texas A&M University Today

Platform Reduces Barriers Biologists Face In Accessing Machine … – Bio-IT World

August 1, 2023 | A group of scientists at the Wyss Institute for Biologically Inspired Engineering at Harvard University and MIT are convinced that automated machine learning (autoML) is going to revolutionize biology by removing many of the technical barriers to using computational models to answer fundamental questions about sequences of nucleic acids, peptides, and glycans. Machine learning can be complicated, but it doesn't have to be, and sometimes simpler is better, according to graduate student Jackie Valeri, a big believer in the power of autoML to solve real-world problems.

AutoML is a machine learning concept that helps users transfer data to training algorithms and automatically search for the best ML architecture for a given issue, lowering the demand for expert-level computational knowledge that currently outpaces the supply. It can also be pretty competitive with even the best manually designed ML models, which can take months if not years to develop, says Valeri, as she and her colleagues recently demonstrated in a paper published in Cell Systems (DOI: 10.1016/j.cels.2023.05.007).

The article showcased the potential of their novel BioAutoMATED platform which, unlike other autoML tools, accommodates more than one type of ML model and is designed to accept biological sequences. Its intended users are systems and synthetic biologists with little or no ML experience, says Valeri, who works in the lab of Jim Collins, Ph.D. at the Wyss Institute.

The all-in-one BioAutoMATED platform modifies three existing AutoML tools (AutoKeras, which searches for optimal neural networks; DeepSwarm, which looks for convolutional neural networks; and TPOT, which hunts for a variety of other, simpler modeling techniques such as linear regression and random forest classifiers) to come up with the most appropriate model for a user's dataset, she explains. Standardized output results are presented as a set of folders, each associated with one of those search techniques, revealing the best performing model in graphic and text file format.
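To give a flavor of the kind of search the third of those tools performs, here is a minimal sketch of a plain TPOT run on synthetic data (assuming the classic TPOT Python API; this illustrates the underlying autoML step, not BioAutoMATED's own interface):

# Minimal TPOT run on synthetic data; illustrates the autoML search style
# that BioAutoMATED wraps, not BioAutoMATED's own interface.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# TPOT evolves whole scikit-learn pipelines (preprocessing, model, hyperparameters)
tpot = TPOTClassifier(generations=5, population_size=20, cv=5, random_state=0, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")  # writes the winning pipeline as plain Python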

"The tool is very meta," says Valeri, in that it is "learning on the learning." Model selection is often the part of research projects that requires a lot of computational expertise biologists generally do not possess, and the task can't easily be passed to an ML specialist, even if one can be found, because domain knowledge is needed in the model-building process.

Overall, biological researchers are excited about using machine learning but until now have been stymied by the amount of coding needed to get started, she says, noting that it is not uncommon for ML models to have a codebase of over 750 lines. "The installation of packages alone can be a huge barrier."

Interest in ML has skyrocketed over the past year thanks largely to the introduction of ChatGPT with its user-friendly interface, but people have also quickly discovered they can't trust everything the large language model has to offer, says Valeri. Similarly, BioAutoMATED is useful but not a magic bullet that erases data problems and, like ML in general, should be approached with a healthy amount of skepticism to ensure it is learning what's intended.

BioAutoMATED will in the future likely be used together with ChatGPT, predicts Wyss postdoctoral fellow Luis Soenksen, Ph.D., co-lead author on the Cell Systems paper. Researchers will simply articulate what they want to do and be presented with the best questions, required data, and ML models to get the job done.

When put to the test, BioAutoMATED not only outperformed other autoML tools but also some of the models created by a professional ML expert, and did it in under 30 minutes using only 10 lines of input code from the user. The required coding is for the basics, says Valeri, to specify the target folder for results, the file name where input data can be found, the column name where sequences can be found within that file, and run times for these extensions.

Users are instructed to first install Docker on their computer, if they have not done so already, and are walked through the process of doing that, she adds. The open software platform sets up its own environment for running applications, requiring only two lines of code to access the Jupyter notebooks preloaded on BioAutoMATED that contain everything needed to run the autoML tool. "It's a quick start for most people accustomed to using a computer."

With a bit more coding, users can access some of the embedded extras, says Valeri. These include the outputs from scrambled control tests where BioAutoMATED generates sequences by shuffling the order of nucleotides, answering the frequently asked question of whether models are picking up on real order- and sequence-specific biology.

"Half of the battle in biological research is knowing how to ask the right questions," says Soenksen. The platform helps users do that as well as provides insights leading to new questions, hypotheses, models, and experiments.

Users can also opt for data saturation tests where BioAutoMATED sequentially reduces the dataset size to see the effect on model performance, Valeri says. "If you can say the models do great with 20,000 sequences, maybe you don't have to go to the effort of collecting 50,000 or 100,000 sequences, which is a real impactful finding for a biologist actually doing the experiments."

Two of the most exciting outputs from the tool, in Valeri's mind, are the interpretation and design results. Interpretation results indicate what a model is learning (e.g., nucleotides of elevated importance), including sequence logos where the larger the size of the letter in the sequence, the more important it is to whatever function of interest is being examined. Sequence logos of the raw data can also be done to facilitate comparisons across ML tools.

Biologists using BioAutoMATED in this way can expect some actionable outputs, says Valeri. They might want to pay more attention to a motif that pops up through all these sequence logos, for example, or do a deep mutational scanning of a targeted region of the sequence that appears to be most important.

The other key output is a list of de novo design sequences that are optimized for whatever function the model has been trained on, she says. For the newly published study, this focused on the downstream efficiency of a ribosome binding site to translate RNA into protein in E. coli bacteria.

BioAutoMATED was also used to identify areas of the sequence most important in determining translation efficiency, and to design new sequences that could be tested experimentally. Further, the platform generated highly accurate information about amino acids in a peptide sequence most critical in determining an antibodys ability to bind to the drug ranibizumab (Lucentis), as well as classified different types of glycans into immunogenic and non-immunogenic groups based on their sequences.

Finally, the team had the platform optimize the sequences of RNA-based toehold switches. This informed the design of new toehold switches for experimental testing with minimal input coding required.

The time it takes to obtain results from BioAutoMATED depends on several factors, including the question being asked and the size of the dataset for model training, says Valeri. "We've found the length of the sequence is a really big factor... and the compute resources you have available."

The maximum user-allowed time for obtaining results is another important consideration, adds Soenksen. The platform can search for hours or days, as circumstances dictate. Time constraints are routinely employed when training ML models as a matter of practicality.

Soenksen and Valeri both use BioAutoMATED as a benchmark for their own custom-built models, and friends that have tested the platform on different machines are enthusiastic about its potential, they say. In the manuscript, the platform also had good performance on many different datasets, including ones specific to sequence lengths and types.

"I have personally used it for some quick paper explorations, trying to see what data are available... [without] having to take the time to code up my own machine learning models," says Valeri. Although it is too soon to know how the tool will be used by biologists elsewhere, it is already being used regularly by a handful of scientists at Harvard investigating short DNA, RNA, peptide, and glycan sequences.

BioAutoMATED is available to download from GitHub. "If we get a lot of traction [with it], and I think we will, our team will probably put more resources into the user interface," notes Soenksen, a serial entrepreneur in the science and technology space. The long-term goal is to make the tool usable by clicking buttons to further lower barriers to access.

"If you're a machine learning expert, you'll probably be able to beat the output of BioAutoMATED," adds Valeri. "We are just trying to make it easy for people with limited machine learning expertise to [quickly] get to a pretty good model."

Complicated neural networks and big language models, which have a lot of parameters and require large amounts of data, are not always best, she says. The simple-model techniques identified by TPOT can be quite well suited to the often-limited datasets biologists have available and can perform as well as if not better than systems with more advanced ML architecture.

Continue reading here:

Platform Reduces Barriers Biologists Face In Accessing Machine ... - Bio-IT World

Apple’s Commitment to Generative AI and Machine Learning – Fagen wasanni

In recent months, there has been a surge in the development of artificial intelligence (AI) chatbots, with ChatGPT making waves in the industry. Tech giants like Microsoft and Google have also announced their plans to integrate AI technology into their products. Amidst this flurry of activity, Apple has been relatively quiet, leading some to believe that generative AI technology is not a priority for the company, and that it may be falling behind its competitors.

However, those familiar with Apple's approach know that the company does not make bold proclamations about its projects until it has a tangible product to showcase. Apple CEO Tim Cook addressed this misconception in an interview with Reuters following the company's quarterly earnings call. He emphasized that generative AI research has been a long-standing initiative for Apple, and the company has been investing billions of dollars in research and development, with a significant portion allocated to AI.

While Apple may not be as overt in its AI initiatives as its rivals, Cook pointed out that AI will be integrated into Apple's products to enhance user experiences, rather than being offered as standalone AI offerings like ChatGPT. For instance, he highlighted features such as Live Voicemail Transcription in the upcoming iOS 17 as examples of how AI will power new functionalities in Apple products.

Apple has been incorporating machine learning features into its devices for years, utilizing the Neural Engine in its A-series and M-series chips. The company has made significant advancements in areas like computational photography, voicemail transcription, visual lookup, language translation, and augmented reality. This progress is exemplified by Apple's hiring of Google's former Head of AI, John Giannandrea, as its Senior Vice President of Machine Learning and Artificial Intelligence Strategy.

Undoubtedly, AI has the potential to greatly enhance Apple's voice assistant, Siri. Despite starting earlier than competitors like Alexa and Google Assistant, Siri lost its early lead, but Apple has been working to rectify that in recent years. It is likely that Giannandrea has been tasked with bolstering Siri's capabilities.

Cook emphasized that while Apple may not have the same breadth of AI-centric services as other companies, AI and machine learning are fundamental core technologies embedded in every Apple product. Apple's commitment to generative AI and machine learning remains strong, as evidenced by its substantial investments in research and development, the incorporation of AI features into its products, and the strategic hiring of top AI talent.

Read the original here:

Apple's Commitment to Generative AI and Machine Learning - Fagen wasanni

Richmond could become AI and machine learning tech hub – The Daily Progress

Can Richmond be the capital of artificial intelligence?

A local group is pushing to turn the region into an innovation hub for artificial intelligence and machine learning in the coming years. Many of the companies and experts pushing the limits of these technologies could be based in the Richmond area should a grant be awarded to the group.

The recent emergence of AI tools like ChatGPT and Bard AI has been touted as a revolutionary leap in human technology, with the ability to impact nearly every field and the need for all companies to become fluent in AI.

The Richmond Technology Council, branded as rvatech, is a member-driven association of companies actively trying to grow Richmond's tech-based economy. The group is applying for a federal grant worth between $50 million and $70 million that would establish Richmond as one of about 20 tech hubs around the country.

The U.S. Economic Development Administration's Technology Hub Grant Program is targeting 10 areas of technology. Some are in fields like robotics, advanced computing and semiconductors, advanced communications, biotechnology and advanced energy, like nuclear. rvatech submitted an application specifically for AI and machine learning.

"We're trying to position Richmond as the leading edge of artificial intelligence and machine learning so that if you're a company that is in that space, this is a good place to find talent and to headquarter here," said Nick Serfass, CEO of rvatech. "If you want to enter the space, it's a good place to come and learn and be exposed to thought leaders and other practitioners who are in the space."

The application process for these grants is expected to be competitive, with regions across the country keen on raising their profile in tech and AI. By this fall, applicants will be narrowed to 20 regions.

"It's transformative in terms of what it could do for a metropolitan area," Serfass said. "We don't know of any other artificial intelligence and machine learning applications going out as of now."

The terms AI and machine learning are often used interchangeably, though machine learning is really a subcategory of AI. The field of AI essentially creates computers and robots that can both mimic and exceed human capabilities. It can be used to automate tasks without the need for human input or to intake massive amounts of information and make decisions.

Machine learning is a pathway to artificial intelligence. It uses algorithms that recognize patterns in data to make increasingly better decisions.

These tools can be applied to fields like manufacturing, banking, health care and customer service. AI can recognize errors and malfunctions in equipment before they happen, or detect and prevent cybersecurity attacks. Everyday people are also using AI to do household tasks like planning workouts or meals, sending emails and making music playlists.

The bulk of the funding from the federal grant would go towards workforce and talent development through higher education, workforce programs at mid-career leadership levels or talent attraction, bringing in top professionals from other areas.


More companies and workers in the space could later lead to more physical changes, like lab and research facilities.

Serfass says the Richmond tech scene is well-positioned to be transformed into a hub. A report from commercial real estate and investment firm CBRE listed Richmond in the top 50 tech talent markets nationwide.

A high density of Fortune 500 companies are headquartered in the city and its surrounding counties. Many of those rely either entirely on tech, or have tech-focused sides of their businesses that would benefit from AI and machine learning.

Serfass also cited Richmond's status as the seat of state government as an asset, and Dominion's presence in the area as an entity that could revolutionize infrastructure through the use of AI. There is also a major presence of data centers from Meta and QTS in Henrico's White Oak Technology Park, which are a critical asset to digital businesses.

"Several different elements highlight the merit of the city and why it could be a great tech hub. It's really the fact that we have such a 360-degree set of resources and assets here in town that could help us thrive."

Richmond has also grown a startup fostering and acceleration scene, largely through Capital One's Michael Wassmer Innovation Center in Shockoe Bottom. Programs like Startup Virginia and accelerator Lighthouse Labs have helped countless young companies grow, many with focuses in tech.

"Richmond has a history of providing focused tech solutions, including data analysis, AI and machine learning in niche markets. As a tech-focused acceleration program, we are always on the lookout for startups utilizing these new technologies, and we've seen more and more apply each cycle," said Art Espy, managing director for Lighthouse Labs. "We love it when a local or regional company is a fit for our program; building a hub here would give us even more homegrown tech startups to accelerate, while adding even more vibrancy to our thriving startup ecosystem."

rvatech is currently writing the mission statement for its grant application, which could also include the need to bring underserved populations into the industry. Serfass said tech lends itself well to certifications in lieu of college degrees, which offer accessible entry into the field.

A second group in Richmond is pursuing a grant from the Tech Hubs Program. The Alliance for Building Better Medicine, which has an application in the area of advanced pharmaceutical manufacturing, has been a national leader in pharma development, seeking to onshore medicine making from overseas and create a more robust U.S. supply chain for medications.

"The tech hubs initiative is an exciting opportunity for Greater Richmond and we are supporting not one, but two applications from our region this year," said Jennifer Wakefield, president and CEO of the Greater Richmond Partnership. "Both community partners, rvatech and the Alliance to Build Better Medicine, see the promise of elevating Greater Richmond and its assets, which greatly benefits economic development and business attraction to the area."

Sean Jones (804) 649-6911

sjones@timesdispatch.com

Twitter: @SeanJones_RTD


Read the rest here:

Richmond could become AI and machine learning tech hub - The Daily Progress

Postdoctoral Fellowship: Pathogenesis of High Consequence … – Global Biodefense

A research opportunity is currently available with the U.S. Department of Agriculture (USDA), Agricultural Research Service (ARS), located in Frederick, Maryland.

The Agricultural Research Service (ARS) is the U.S. Department of Agriculture's chief scientific in-house research agency, with a mission to find solutions to agricultural problems that affect Americans every day, from field to table. ARS will deliver cutting-edge scientific tools and innovative solutions for American farmers, producers, industry, and communities to support the nourishment and well-being of all people; sustain our nation's agroecosystems and natural resources; and ensure the economic competitiveness and excellence of our agriculture. The vision of the agency is to provide global leadership in agricultural discoveries through scientific excellence.

Research Project: We are seeking doctoral-level (Ph.D., M.D., D.V.M.) scientists passionate about researching zoonotic and emerging diseases that impact human and animal health. Under the guidance of a mentor, the postdoctoral fellows will conduct advanced machine learning-based research to analyze histopathology slides from high-consequence viral infections, such as Crimean Congo Hemorrhagic Fever, Nipah, Hendra, Ebola, and Marburg viruses, with the ultimate goal of understanding mechanisms of pathogenesis.

Learning Objectives: Fellows will learn to build pipelines for training machine learning models and for slide analysis. Fellows will gain expertise in histopathology and in machine learning using TensorFlow, U-Net, and QuPath. Under the guidance of a mentor, fellows will be expected to develop a scientific project and publish in peer-reviewed publications. As a result of participating in this fellowship, participants will enhance their:

Projects will be jointly performed with Dr. C. Paul Morris at the Integrated Research Facility at Fort Detrick in Frederick, Maryland. Fellows may be required to undergo background investigations to obtain access to facilities.

Mentor(s): The mentor(s) for this opportunity is Lisa Hensley (lisa.hensley@usda.gov). If you have questions about the nature of the research, please contact the mentor(s).

Anticipated Appointment Start Date: 2023. Start date is flexible and will depend on a variety of factors.

Appointment Length: The appointment will initially be for one year, but may be renewed upon recommendation of ARS and is contingent on the availability of funds.

Level of Participation: The appointment is full-time.

Participant Stipend: The participant will receive a monthly stipend commensurate with educational level and experience.

Citizenship Requirements: This opportunity is available to U.S. citizens, Lawful Permanent Residents (LPR), and foreign nationals. Non-U.S. citizen applicants should refer to the Guidelines for Non-U.S. Citizens Details page of the program website for information about the valid immigration statuses that are acceptable for program participation.

ORISE Information: This program, administered by ORAU through its contract with the U.S. Department of Energy (DOE) to manage the Oak Ridge Institute for Science and Education (ORISE), was established through an interagency agreement between DOE and ARS. Participants do not become employees of USDA, ARS, DOE or the program administrator, and there are no employment-related benefits. Proof of health insurance is required for participation in this program. Health insurance can be obtained through ORISE.

Questions: Please visit our Program Website. After reading, if you have additional questions about the application process, please email ORISE.ARS.Plains@orau.org and include the reference code for this opportunity.

Qualifications:

The qualified candidate should have received a doctoral degree in one of the relevant fields or be currently pursuing the degree with completion before the start of the appointment. The degree must have been received within the past three years.

Preferred Skills:

Eligibility Requirements:

Degree: Doctoral degree received within the last 36 months or currently pursuing.
Disciplines: Communications and Graphics Design; Computer, Information, and Data Sciences; Life, Health, and Medical Sciences; Mathematics and Statistics.

Apply for USDA-ARS-P-2023-0162 by 22 Dec 2023.

View original post here:

Postdoctoral Fellowship: Pathogenesis of High Consequence ... - Global Biodefense

Johns Hopkins makes major investment in the power, promise of … – The Hub at Johns Hopkins

By Hub staff report

Johns Hopkins University today announced a major new investment in data science and the exploration of artificial intelligence, one that will significantly strengthen the university's capabilities to harness emerging applications, opportunities, and challenges presented by the explosion of available data and the rapid rise of accessible AI.

At the heart of this interdisciplinary endeavor will be a new data science and translation institute dedicated to the application, understanding, collection, and risks of data and the development of machine learning and artificial intelligence systems across a range of critical and emerging fields, from neuroscience and precision medicine to climate resilience and sustainability, public sector innovation, and the social sciences and humanities.

The institute will bring together world-class experts in artificial intelligence, machine learning, applied mathematics, computer engineering, and computer science to fuel data-driven discovery in support of research activities across the institution. In all, 80 new affiliated faculty will join JHU's Whiting School of Engineering to support the institute's pursuits, in addition to 30 new Bloomberg Distinguished Professors with substantial cross-disciplinary expertise to ensure the impact of the new institute is felt across the university.


The institute will be housed in a state-of-the-art facility on the Homewood campus that will be custom-built to leverage a significant investment in cutting-edge computational resources, advanced technologies, and technical expertise that will speed the translation of ideas into innovations. AI pioneer Rama Chellappa and KT Ramesh, senior adviser to the president for AI, will serve as interim co-directors of the institute while the university launches an international search for a permanent director.

"Data and artificial intelligence are shaping new horizons of academic research and critical inquiry with profound implications for fields and disciplines across nearly every facet of Johns Hopkins," JHU President Ron Daniels said. "I'm thrilled this new institute will harness our university's innate ethos of interdisciplinary collaboration and build upon our demonstrated capacity to deliver impactful research at the forefront of this critical age of technology."

The creation of a data science and translation institute, supported through institutional funds and philanthropic contributions, will represent the realization of one of the 10 goals identified in the university's new Ten for One strategic plan: to create the leading academic hub for data science and artificial intelligence to drive research and teaching in every corner of the university and magnify our impact in every corner of the world.

The 21st century is already being defined by an explosion of available data across an almost incomprehensible array of subject areas and domains, from wearables and autonomous systems, to genomics and localized climate monitoring. The International Data Corporation, a global leader in market intelligence, estimates that the total amount of digital data generated will grow more than fivefold in the next few years, from an estimated 33 trillion gigabytes of information in 2021 to 175 trillion gigabytes by 2025.

"It's not hyperbole to say that data and AI to help us make informed use of that information have vast potential to revolutionize critical areas of discovery and will increasingly shape nearly every aspect of the world we live in," said Ed Schlesinger, dean of the Whiting School of Engineering. "As one of the world's premier research institutions, and with our existing expertise in foundational fields at the Whiting School, Johns Hopkins is uniquely positioned to play a lead role in determining how these transformative technologies are developed and deployed now and in the future."

Johns Hopkins has met the moment with several data-driven initiatives and investments, building on long-standing expertise in data science and AI to launch the AI-X Foundry earlier this year. Created to explore the vast potential of human collaboration with artificial intelligence to transform medicine, public health, engineering, patient care, and other disciplines, the AI-X Foundry represents a critical first step toward the creation of a data science and translation institute.

Additional JHU programs that will contribute to the new institute include:

Johns Hopkins is also home to the renowned Applied Physics Laboratory, the nation's largest university-affiliated research center, which has for decades conducted leading-edge research in data science, artificial intelligence, and machine learning to help the U.S. address critical challenges.

But there remains significant untapped potential to use data, artificial intelligence, and machine learning to expand and enhance research and discovery in nearly every area of the university, particularly in fields where the power of data is only now being realized. As Johns Hopkins Bloomberg Distinguished Professor Alex Szalay, an astrophysicist and pioneering data scientist, has said: "The most impactful research universities of the future will be those with scholars who possess meaningful depth in data and another domain, and are equipped with the ability to bridge between these disciplines."

To that end, the new institute will be a hub for interdisciplinary data collaborations with experts in divisions across Johns Hopkins, with affiliated faculty, graduate students, and postdoctoral fellows working together to apply big data to pressing issues. Their work will be supported by the latest techniques and technologies and by experts in data translation, data visualization, and tech transfer, shortening the path from discovery to impact and fostering the development of future large-scale data projects that serve the public interest, such as the award-winning Johns Hopkins Coronavirus Resource Center.

"The Coronavirus Resource Center is just one example of the power of data science and translation and its capacity to guide lifesaving decisions," said Beth Blauer, associate vice provost for public sector innovation and data lead for the CRC. "Our ability to harness data and connect it not just to public policy and innovation but to guide the deeply personal decisions we make every day speaks to the magnitude of this investment and its potential impact. There is no other institution more poised than Johns Hopkins University to guide us."

Johns Hopkins will develop this new institute with a commitment to data transparency and accessibility, highlighting the need for trust and reproducibility across the research enterprise and making data available to inform policymakers and the public. The institute will support open data practices, adhering to standards and structures that will make the university's data easier to access, understand, consume, and repurpose.

Additionally, institute scholars will partner with faculty from across the institution in fields including bioethics, sociology, philosophy, and education to support multidisciplinary research that helps academia and industry alike understand the societal and ethical concerns posed by artificial intelligence, the power and limitations of these tools, and the role for, and character of, appropriate government policy and regulation.

"As both data and the tools for harnessing data have become widespread, artificial intelligence and data-driven technologies are accelerating advances that will shape academic and public life for the foreseeable future," said Stephen Gange, JHU's interim provost and senior vice president for academic affairs. "The investment will ensure Johns Hopkins remains on the forefront of research, policy development, and civic engagement."

Excerpt from:

Johns Hopkins makes major investment in the power, promise of ... - The Hub at Johns Hopkins

Predicting BRAFV600E mutations in papillary thyroid carcinoma … – Nature.com

Patients

A retrospective analysis was performed on PTC patients who had undergone preoperative thyroid US elastography, BRAFV600E mutation diagnosis, and surgery at Jiangsu University Affiliated People's Hospital and the Traditional Chinese Medicine Hospital of Nanjing Lishui District between January 2014 and 2021. The enrollment process is displayed in Fig. 1. In total, 138 PTCs from 138 patients (mean age, 41.63 ± 11.36 [range, 25–65] years) were analyzed in this study. The patients were divided into a BRAFV600E mutation-free group (n = 75) and a BRAFV600E mutation group (n = 63). Using a stratified sampling technique at a 7:3 ratio, all patients were randomly assigned to either the training group (n = 96) or the validation group (n = 42). The following criteria were required for inclusion: postoperative pathology indicating PTC; preoperative thyroid US elastography evaluation; related US images and diagnostic outcomes; maximum nodule diameter > 5 mm and < 5 cm; and a unilateral, single focal lesion. The exclusion criteria included a maximum nodule diameter of > 5 cm and indistinct US imaging of nodules caused by artifacts. The clinical details of the enrolled patients were documented, including age, sex, nodule diameter, nodule location, nodular echo, nodule boundary, nodule internal and peripheral blood flow, nodule elastic grading, calcification, CLNM, and BRAFV600E mutation results. The Jiangsu University Affiliated People's Hospital and the Traditional Chinese Medicine Hospital of Nanjing Lishui District Ethics Committee approved this study. Because it was retrospective in nature, it did not require written informed consent.

Schematic diagram of the patient selection. PTC, papillary thyroid carcinoma.

There were two ultrasonic devices used: the Philips Q5 (Philips Healthcare, Eindhoven, the Netherlands) and the GE LOGIQ E20 (GE Medical Systems) (L12-5 linear array probe, frequency: 10–14 MHz).

To acquire longitudinal and transverse images of the thyroid nodules, continuous longitudinal and transverse scanning was done while the patients were supine. Blood flow in and around the nodule, strain elastic grading of the nodule, calcification, and CLNM were all visible on the coexisting diagram, which also included the nodule diameter, location, echo, and boundary.

The position and size of the sampling frame were adjusted on the cross-sectional image, and the strain elastic imaging mode was activated. With an ROI that was larger than the nodule (generally more than two times its size), the nodule was placed in the middle of the elastic imaging zone. Pressure was applied steadily (range 1–2 mm, 1–2 times/s) while the probe was held perpendicular to the nodule. When the linear strain hint graph (green spring) suggested stability, the freeze key was pressed to obtain an elastic image; the ROI's color changed (green indicated soft; red indicated hard), and the nodule's hardness was determined based on elasticity. The elastic image was graded according to the following criteria: one point, a nodular area that alternates between red, green, and blue; two points, a nodule that is partially red and partially green (mostly green, green area > 90%); three points, a nodule area that is primarily green, with surrounding tissues visible in red; four points, a nodule area that is primarily red, with the red area > 90%; and five points, a nodule area that is completely covered in red.

One week prior to surgery, thyroid US exams were conducted. US image segmentation was done manually. Using the ITK-SNAP program (http://www.itksnap.org), the ROIs were manually drawn on each image (Fig. 2). The grayscale images were used to create a sketch outline of the tumor regions in the elastography US images.

(A) Conventional B-mode ultrasound image of papillary thyroid carcinoma. (B) Corresponding ultrasound elastography image, with the circle labeled A indicating the lesion region and the circle labeled B indicating a reference area. (C) Corresponding image after the region-of-interest (ROI) segmentation step.

Radiomic features were extracted using PyRadiomics (https://github.com/Radiomics/pyradiomics). A total of 479 radiomic features were recovered from each ROI's elastography US images. These included first-order, Gray Level Co-occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Size Zone Matrix (GLSZM), Gray Level Dependence Matrix (GLDM), and Neighbouring Gray Tone Difference Matrix (NGTDM) features, as well as features derived from wavelet-filtered images containing first-order, GLCM, GLRLM, GLSZM, GLDM, and NGTDM features.
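As a rough illustration of this extraction step, the sketch below runs PyRadiomics on a single image/mask pair with wavelet-filtered variants enabled (the file paths are placeholders, and the default extractor settings are an assumption, not the authors' exact configuration):

# Minimal PyRadiomics sketch: extract features from one image/mask pair,
# with wavelet-filtered image copies enabled. Paths are placeholders.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()               # first-order, GLCM, GLRLM, GLSZM, GLDM, NGTDM
extractor.enableImageTypeByName("Wavelet")  # adds wavelet-filtered copies of the image

features = extractor.execute("elastography_us.nii.gz", "roi_mask.nii.gz")
for name, value in features.items():
    if not name.startswith("diagnostics"):  # skip extractor metadata entries
        print(name, value)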

The retrieved features were normalized using a standard scaler to reduce bias and overfitting in the study. The dataset was divided into training and validation cohorts. To make each characteristic substantially independent, the row spatial dimension of the feature matrix was reduced using the Pearson correlation coefficient (PCC). Every pair of features with a PCC of more than 0.80 was deemed redundant.
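A minimal sketch of such a redundancy filter, assuming the features sit in a pandas DataFrame (this illustrates the described PCC step, not the authors' code):

# Drop one feature from every pair with |Pearson r| > 0.80, sketching the
# PCC redundancy filter described above (not the authors' implementation).
import numpy as np
import pandas as pd

def drop_correlated(features: pd.DataFrame, threshold: float = 0.80) -> pd.DataFrame:
    corr = features.corr(method="pearson").abs()
    # Keep only the upper triangle (k=1) so each pair is inspected once
    upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return features.drop(columns=to_drop)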

After PCC filtering, recursive feature elimination (RFE) was applied to the whole dataset using the Scikit-learn Python module [24] to choose representative features for the training cohort. During the RFE procedure, the following parameters were used: cross-validation was set to StratifiedKFold with 10 splits, the random state was set to 101, the minimum number of features to select was set to 3, and accuracy was employed for the scoring.
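Mapped onto scikit-learn, those settings correspond roughly to the sketch below (the linear-SVM ranking estimator and the synthetic data are assumptions for illustration; the paper does not state which estimator drove the elimination):

# RFE with stratified 10-fold cross-validation using the reported settings:
# n_splits=10, random_state=101, min_features_to_select=3, accuracy scoring.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=96, n_features=50, random_state=101)

selector = RFECV(
    estimator=SVC(kernel="linear"),  # ranking needs coef_ or feature_importances_
    min_features_to_select=3,
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=101),
    scoring="accuracy",
)
X_selected = selector.fit_transform(X, y)
print("features kept:", selector.n_features_)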

Support Vector Machine with a linear kernel (SVM_L), Support Vector Machine with a radial basis function kernel (SVM_RBF), Logistic Regression (LR), Naïve Bayes (NB), K-Nearest Neighbors (KNN), and Linear Discriminant Analysis (LDA) classifiers were used to build the prediction models using the RFE-selected key features. All six algorithms were implemented using the Scikit-learn machine learning library [24].
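For reference, the six classifiers named above map directly onto scikit-learn estimators; a compact sketch (default hyperparameters and synthetic data are assumptions, since the paper does not list the exact settings):

# The six classifier families above, instantiated from scikit-learn.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=96, n_features=10, random_state=101)

models = {
    "SVM_L": SVC(kernel="linear", probability=True),
    "SVM_RBF": SVC(kernel="rbf", probability=True),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    model.fit(X, y)
    print(name, "training accuracy:", model.score(X, y))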

The same feature sets were chosen and fed into the model during the validation process. Standard clinical statistics like the area under the curve (AUC), sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), and accuracy (ACC) were used to evaluate the model's performance on the training and validation datasets.
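All of those statistics can be read off a single 2×2 confusion matrix plus the model's continuous scores; a minimal sketch of the evaluation step (not the authors' code):

# Compute ACC, sensitivity, specificity, PPV, and NPV from the confusion
# matrix, and AUC from continuous scores; a sketch of the metrics above.
from sklearn.metrics import confusion_matrix, roc_auc_score

def binary_metrics(y_true, y_pred, y_score):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "Sensitivity": tp / (tp + fn),  # true positive rate (recall)
        "Specificity": tn / (tn + fp),  # true negative rate
        "PPV": tp / (tp + fp),          # positive predictive value (precision)
        "NPV": tn / (tn + fn),          # negative predictive value
        "AUC": roc_auc_score(y_true, y_score),
    }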

Python (version 3.7, https://www.python.org/, accessed 8 July 2021) and IBM SPSS Statistics for Windows, version 26.0 (IBM Corp., Armonk, New York, USA) were used for statistical analyses. Pearson's chi-square and Fisher's exact tests were used to compare the differences in categorical characteristics. The independent-sample t-test was used for continuous factors with a normal distribution, whereas the Mann-Whitney U test was used for continuous factors without a normal distribution.

A two-sided P < 0.05 indicated statistically significant differences. PyRadiomics (version 2.2.0, https://github.com/Radiomics/pyradiomics, accessed 10 August 2021) and scikit-learn version 1.2 [24] were used to extract radiomic features and build the prediction models. Each prediction model's AUC, sensitivity, specificity, ACC, NPV, and PPV were calculated.

MedCalc Statistical Software was used to calculate the six models' AUCs and evaluate the predictions. The DeLong method was used to compare the AUCs of the six machine learning classifiers. To create calibration curves, scikit-learn version 1.2 [24] was used. R software (version 3.6.1, https://www.r-project.org) was used to perform the decision curve analysis.

The study was conducted in accordance with the Declaration of Helsinki and approved by the Jiangsu University-Affiliated People's Hospital and Traditional Chinese Medicine Hospital of Nanjing Lishui District Ethics Committee.

Patient consent was waived by the Jiangsu University-Affiliated People's Hospital and Traditional Chinese Medicine Hospital of Nanjing Lishui District Ethics Committee due to the retrospective nature of the study.

Originally posted here:

Predicting BRAFV600E mutations in papillary thyroid carcinoma ... - Nature.com

Machine learning prediction and classification of behavioral … – Nature.com

2013 TSA cohort traits

The traits scored in the cohort represent measures of confidence/fear, quality of hunting-related behaviors, and dog-trainer interaction characteristics [19,20]. The traits Chase/Retrieve, Physical Possession, and Independent Possession were measured in both the Airport Terminal and Environmental tests, whereas five and seven other traits were specific to each test, respectively (Table 1). The Airport Terminal tests include the search for a scented towel placed in a mock terminal and observation of a dog's responsiveness to the handler. This represents the actual odor detection work expected of fully trained and deployed dogs. Because the tasks were consistent between the time periods, the Airport Terminal tests demonstrate improvements of the dogs with age. All trait scores except for Physical and Independent Possession increased over time, with the largest increase between the 6- and 9-month tests (Fig. 1a). This may be due to puppies having increased possessiveness and lack of training at younger ages. The general improvement over time could be due to the increased age of the dogs or to the testing experience gained. Compared to accepted dogs, those eliminated from the program for behavioral reasons had lower mean scores across all traits.

(a) Radar plots of the mean scores for each of the traits for the airport terminal tests. (b) Radar plots of the mean scores for each of the traits in the environmental tests; M03=BX (gift shop), M06=Woodshop, M09=Airport Cargo, M12=Airport Terminal.

Environmental tests involved taking dogs on a walk, a search, and playing with toys in a noisy location that changed for each time point. The traits measured a variety of dog behaviors as they moved through the locations, and their performance while engaging with toys. Accepted dogs had both higher and more consistent scores across the tests (Fig.1b). The largest separation of scores between accepted dogs and those eliminated for behavior occurred at 6-months, at the Woodshop. That suggests this test and environment combination might best predict which dogs will be accepted into the training program. Among the traits that showed the greatest separation between the two outcomes were Physical and Independent Possession, and Confidence.

Three different classification machine learning algorithms, chosen for their ability to handle binary classification, were employed to predict acceptance: Logistic Regression, Support Vector Machines, and Random Forest. Data were split into training (70%) and testing (30%) datasets with equivalent ratios of success and behavioral elimination status as the parent dataset. Following training of the model, metrics were reported for the quality of the model as described in the Methods. Prediction of success for the Airport Terminal tests yielded consistently high accuracies between 70 and 87% (Table 2). The ability to predict successful dogs improved over time, with the best corresponding to 12-months based on F1 and AUC scores. Notably, this pattern occurred with an overall reduction in both the number of dogs and the ratio of successful to eliminated dogs (Supplemental Table 1). The top performance observed was for the Random Forest model at 12-months: accuracy of 87%, AUC of 0.68, and F1 (the harmonic mean of recall and precision) of 0.92 and 0.53 for accepted and eliminated dogs, respectively. The Logistic Regression model performed marginally worse at 12-months. Taking the mean of the four time points for accuracy, AUC, and accepted and eliminated F1, Logistic Regression was slightly better than Random Forest for the first three elements and vice versa for the fourth. The Support Vector Machines model had uneven results, largely due to poor recall for eliminated dogs (0.09 vs. 0.32 and 0.36 for the other models).
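A minimal sketch of this split-and-evaluate loop in scikit-learn (synthetic stand-in data with an imbalanced class ratio, not the TSA cohort; hyperparameters are assumptions):

# Stratified 70/30 split plus the three classifier families used here,
# scored with accuracy, per-class F1, and AUC. Data are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=8, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(probability=True)),
                  ("RF", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    score = clf.predict_proba(X_te)[:, 1]
    print(name,
          "ACC=%.2f" % accuracy_score(y_te, pred),
          "F1 per class:", f1_score(y_te, pred, average=None),
          "AUC=%.2f" % roc_auc_score(y_te, score))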

Prediction of success from the Environmental tests yielded worse and more variable results (Table 2). A contributing factor for the poorer performance may have been the smaller mean number of dogs with testing data compared to the Airport Terminal test (56% vs. 73% of the cohort). Overall, the Logistic Regression model was most effective at predicting success based on F1 and AUC scores. That model showed a pattern of improving performance with advancing months. At 12-months, accuracy was 80%, the AUC was 0.60, and F1 were 0.88 and 0.36 for accepted and eliminated dogs, respectively. The best scores, seen at 12-months, coincided with the lowest presence of dogs eliminated for behavioral reasons. Support Vector Machines had extremely low or zero F1 for eliminated dogs at all time points. All three models had their highest accuracy (0.820.84) and the highest or second highest F1 for accepted dogs (0.900.91) at 3-months. However, all three models had deficient performance in predicting elimination at 3-months (F10.10).

To maximize predictive performance, a forward sequential predictive analysis was employed with the combined data. This analysis combined data from both the Airport Terminal and Environmental tests at the 3-month timepoint and ran the three ML models, then added the 6-month timepoint, and so on. The analysis was designed to use all available data to determine the earliest timepoint for prediction of a dog's success (Table 3). Overall, the combined datasets did not perform much better than the individual datasets when considering their F1 and AUC values. The only instances where the combined datasets performed slightly better were M03 RF over the Environmental M03, M03+M06+M09 LR over both Environmental and Airport Terminal M09, all data SVM over Airport Terminal M12, and all data LR over Environmental M12. The F1 and AUC scores for the instances where the combined sequential tests did not perform better showed that the ML models were worse at distinguishing successful and eliminated dogs when the datasets were combined.

Two feature selection methods were employed to identify the most important traits for predicting success at each time point: Principal Components Analysis (PCA) and Recursive Feature Elimination using Cross-Validation (RFECV). The PCA was performed on the trait data for each test, and no separation was readily apparent between accepted and eliminated dogs in the plot of Principal Components 1 and 2 (PC1/2). Scree plots were generated to show the percent variance explained by each PC, and heatmaps of the top 2 PCs were generated to visualize the impact of the traits within those. Within the heatmaps, the top- or bottom-most traits were those that explained the most variance within the respective component. RFECV was used with Random Forest classification for each test with 250 replicates, identifying at least one feature per replicate. In addition, 2500 replicates of a Naïve Bayes classifier (NB) and Random Forest model (RF) were generated to identify instances where RF performed better than a naïve classification.
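As a rough sketch of how these two selection steps look in scikit-learn (illustrative only; the replicate scheme and the NB-versus-RF comparison are omitted, and the data are synthetic):

# PCA for variance/loading inspection plus RFECV with a Random Forest,
# mirroring the two feature-selection methods described above.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_std = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
pca.fit(X_std)
print("variance explained by PC1/PC2:", pca.explained_variance_ratio_)
print("trait loadings (heatmap input):", pca.components_)  # rows are PCs

rfecv = RFECV(estimator=RandomForestClassifier(random_state=0), cv=5)
rfecv.fit(X_std, y)
print("selected traits (boolean mask):", rfecv.support_)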

Scree plots of the Airport Terminal tests showed a steep drop at PC2, indicating most of the trait variance is explained by PC1. The variance explained by the top two PCs ranged from 55.2 to 58.2%. The heatmaps (Fig. 2a) showed the PC1/2 vectors with the strongest effects were H1/2 at 3- and 6-months, and PP at 9- and 12-months, both of which appeared in the upper left quadrant (i.e., negative in PC1 and positive in PC2). Several traits showed temporal effects within PCs: (i) at 3-months, PC1 had lower H1 than H2 scores, but that reversed and its effect increased at the other time points; (ii) at 3- and 6-months, PC2 had positive signal for H1/2, but both became negative at 9- and 12-months; (iii) at 3-months, HG was negative, but that effect was absent at other time points; (iv) at 3- and 6-months, PC2 had negative signal for PP, but it changed to strongly positive at 9- and 12-months. When the RFECV was run on the same Airport Terminal test data, a similar pattern of increasing number of selected traits with advancing time points was observed as in the PCA (Table 4). Like the PCA results, H2 was among the strongest at all time points except for the 6-month, although it first appeared among the replicates at 9-months. Means of the NB and RF models were compared (Supplemental Table 2) and showed the M06 and M12 results were the most promising for classification. This suggested that shared traits such as all possession traits (MP, IP, and PP) and the second hunt test (H2) are the most important in identifying successful dogs during these tests; however, the distinct nature of the assessment in each time point does not allow for a longitudinal interpretation.

Principal Component Analysis (PCA) results for airport terminal (a) and environmental (b) tests. Each time point displays a heatmap displaying the relative amount of variance captured by each trait within the top 2 components.

The PCA results for the Environmental tests yielded scree plots that had a sharp drop at PC2 for all time points except 9-months (Fig. 2b). The amount of variation explained by the top two components decreased with increasing time points, from 62.7% to 49.8%. The heatmaps showed the PC1/2 vector with the strongest effect was for the toy possession trait IP, which appeared in the upper left quadrant at all time points (CR and PP had a similar effect at reduced magnitudes). Within-PC observations included the following: (i) in PC1, Confidence and Initiative were negative at all time points, and (ii) in PC2, Concentration and Excitability were positive at 3-months, and increased at 6- and at 9- and 12-months. When the RFECV was run on the Environmental test scores (Table 4), all traits for both 9- and 12-months were represented in the results. At 3-months, only Confidence and Initiative were represented, and at 6-months, only those and Responsiveness. Means of the NB and RF models were also compared (Supplemental Table 2) and demonstrated M03 and M12 were the most significant for classification. These tests correspond to the earliest test at the gift shop and the last test at an active airport terminal. Primary shared traits include confidence and initiative, with possession-related and concentration traits being most important at the latest time point.

Original post:

Machine learning prediction and classification of behavioral ... - Nature.com

Photonic Neural Networks: Revolutionizing Machine Learning and AI – Fagen wasanni

Researchers at Politecnico di Milano have made a significant breakthrough in the field of photonic neural networks. These networks, inspired by the human brain, have the potential to revolutionize machine learning and artificial intelligence systems.

Neural networks analyze data and learn from past experiences, but they are energy-intensive and costly to train. To overcome this obstacle, the researchers have developed photonic circuits that are highly energy-efficient. These circuits can be used to build photonic neural networks that utilize light to perform calculations quickly and efficiently.

The team at Politecnico di Milano has developed training strategies for these photonic neurons, similar to those used for conventional neural networks. This means that the photonic neural network can learn quickly and achieve precision comparable to traditional neural networks, but with considerable energy savings. The energy consumption of these networks grows much more slowly compared to traditional ones.

The researchers have created a photonic accelerator in the chip that allows calculations to be carried out very quickly and efficiently. Using a programmable grid of silicon interferometers, the calculation time is incredibly fast, equal to less than a billionth of a second.

The implications of this breakthrough extend beyond machine learning and AI. The photonic neural network can be used for a wide range of applications, such as graphics accelerators, mathematical coprocessors, data mining, cryptography, and even quantum computers. The technology has the potential to revolutionize these fields by providing high computational efficiency.

The energy efficiency, speed, and accuracy of photonic neural networks make them a powerful tool for industries seeking digital transformation and AI integrations. With this technology, businesses can approach machine learning and artificial intelligence in a more cost-effective and efficient manner. The future holds great potential for photonic neural networks to shape the development of artificial intelligence and quantum applications.

Original post:

Photonic Neural Networks: Revolutionizing Machine Learning and AI - Fagen wasanni

Predictive Analytics And Machine Learning Market: A … – Fagen wasanni

The Predictive Analytics And Machine Learning Market is the subject of a new study by MarketsandResearch.biz. The report presents a detailed analysis of the market, including crucial determinants such as product portfolios and application descriptions. It incorporates the trends, restraints, drivers, and various opportunities that shape the market's future status.

The report provides vital details about the market flow and its forecasted performance during the 2022-2028 period. It also examines raw materials, downstream demand, and present market dynamics. The report profiles frontline players in the Predictive Analytics And Machine Learning market, with a focus on their technology, applications, and product types.

Key market features are highlighted in the report to provide a comprehensive view of the global market. These include revenue size, average regional price, capacity utilization rate, production rate, gross margins, consumption, import & export, demand & supply, cost benchmarking, market share, annualized growth rate, and periodic CAGR. The report also covers supply chain facets, economic factors, financial data particulars, and analysis of various acquisitions & mergers, as well as present and future growth opportunities and trends.

Major companies operating in the global market are profiled in the report, including Schneider Electric, SAS Institute Inc., MakinaRocks Co., Ltd., Globe Telecom, Inc., Qlik, RapidMiner, IBM, Alteryx, Alibaba Group, Huawei, Baidu, and 4Paradigm.

Market segmentation based on product type includes General AI and Decision AI. By end-users/application, the report covers segments such as Financial, Retail, Manufacture, Medical Treatment, Energy, and Internet.

The report further provides regional segmentation, focusing on current and projected demand for the market in North America, Europe, Asia-Pacific, South America, and the Middle East & Africa.

The report aims to estimate the market size for the global Predictive Analytics And Machine Learning market on a regional and global basis. It also identifies the major segments in the market and evaluates their market shares and demand. The report includes a competitive scenario and the major developments made by key companies in the historical years.

In conclusion, the report provides comprehensive analysis for new entrants and existing competitors in the industry. It also delivers a detailed analysis of each application/product segment in the global market.

For customized reports that meet specific needs, clients can contact the sales team at MarketsandResearch.biz.

View original post here:

Predictive Analytics And Machine Learning Market: A ... - Fagen wasanni

Growing Concerns Over Bias in Powerful AI and Machine Learning … – Fagen wasanni

The rise of powerful artificial intelligence (AI) and machine learning (ML) tools has sparked concern about the presence of bias in these technologies. Sam Altman, CEO of OpenAI, acknowledges that there will never be a universally unbiased version of AI. As these tools become more prevalent across industries, bias has become a critical topic for lawmakers. Some countries, like France, have even banned the use of AI tools in certain sectors to prevent the commercialization of tools that predict judicial decision-making patterns.

One major concern with AI tools is the potential for biases to undermine the neutrality of the legal system. Predictive analysis tools that process vast amounts of data can produce unsettlingly accurate results. This raises questions about justice when an AI tool predicts guilt or innocence based on which judge or magistrate is handling the case, rendering individual guilt almost irrelevant.

The issue of bias extends beyond the legal system. Industries such as healthcare and finance are increasingly embracing AI technology. Pfizer, for example, experimented with IBM Watson to accelerate drug discovery efforts in oncology. While IBM Watson fell short of expectations, the emergence of more powerful AI tools has renewed excitement in the industry. However, biases introduced during the data collection and algorithm development processes can lead to inequitable outcomes in patient treatment or financial decision-making.

Biases can enter datasets through factors like sampling bias, confirmation bias, and historical bias. To address bias, Altman highlights the importance of representative and diverse datasets. The quality of data directly affects the potential for bias in AI models.
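
As a concrete illustration of the sampling-bias point, the sketch below compares subgroup proportions in a hypothetical training set against a reference population. All group names and numbers are invented for illustration; a real audit would use domain-appropriate reference statistics and subgroup definitions.

from collections import Counter

# Invented reference shares for three subgroups in the target population.
population_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

# Invented training-set labels: group_c is heavily under-sampled.
training_labels = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50

counts = Counter(training_labels)
total = sum(counts.values())
for group, expected in population_share.items():
    observed = counts[group] / total
    ratio = observed / expected  # < 1 means the group is under-sampled
    flag = "  <-- under-represented" if ratio < 0.8 else ""
    print(f"{group}: observed {observed:.2f}, expected {expected:.2f}, "
          f"ratio {ratio:.2f}{flag}")

Checks like this catch only representation gaps in the data itself; confirmation bias and historical bias require scrutiny of how the data was collected and labeled in the first place.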

The responsibility for addressing bias falls on policymakers as AI continues to impact society and individual lives. The proliferation of AI systems holds a mirror to society, revealing uncomfortable truths that might necessitate ethical guidelines and frameworks to ensure fairness and accountability in the use of AI technology.

More here:

Growing Concerns Over Bias in Powerful AI and Machine Learning ... - Fagen wasanni