
Category Archives: Cloud Computing

Best Cloud Tools of 2024: Unleash Maximum Productivity – Simplilearn

Posted: February 26, 2024 at 12:17 am

In our world, data and data management are of prime importance. The more data we store, the better we can function and progress towards success. Good data management ensures we have all we need for optimum performance. And let's face it, every company these days wants to function well and reach its potential as a team running like a bullet train.

Being a cloud computing tool, AWS carries out most of the functions that a generic cloud computing tool would.

That is, it stores data and helps you manage and organize data and resources over the internet, to your liking and at your convenience. On top of this, AWS offers services like databases, storage, and much more. AWS stands out for being flexible, scalable, and reliable, making it the first choice of cloud tool for many users.

As a digital toolbox, this cloud tool can benefit many companies, for it offers useful services such as data storage, running applications and software, IT management, and more.

One may call Microsoft Azure a place to store data, tap into compute power, and gain access to various other resources when you need them. You need not worry about physical management, as Azure handles that for you. As cloud data storage software with useful services that vary depending on your needs, it is like having your own cloud in the sky that you can tap into.

Like the other above-mentioned cloud tools, Oracle Cloud is also a digital space for storing data, running apps, and managing IT. If there is one place where you can find all the tools you need to manage your precious data, this is it.

Besides managing and storing data, it allows companies and individuals to collaborate with others in the process without any hardware hassle.

Like many other cloud tools, this one is owned by a big technology company, IBM. With its tools, services, and features, one may consider IBM Cloud a virtual hub for all things cloud: a place to store, manage, and access data when needed, without tackling physical hardware.

CloudHub is a connecting unit that links software and web applications. With this cloud tool, data is sent seamlessly back and forth between parts of your business. By allowing seamless data integration, CloudHub adds an efficiency factor to how your business functions.

It is useful to have a tool around to tally your business profits and losses. Like AWS and MS Azure, it simplifies data gathering and usage from a single point. With Cloudability, businesses can keep track of their expenses and usage patterns and ensure maximum value for the investments made.

The fact that it helps manage the financial side of cloud tools makes it a strong contender among the others.

With this cloud tool, you can build your applications without worrying about the underlying infrastructure. Investing time in digital infrastructure can give a good yield; however, it requires constant attention to servers, scaling, and maintenance. Unlike other cloud tools, Google App Engine lets you focus on your coding while Google looks after most of the rest on your behalf.

One tool that helps companies control and maximize their cloud computing expenses is Cloudyn. It monitors their cloud service spending, finds cost-saving opportunities, and offers information to support them in making wise decisions about cloud usage. Helping businesses maximize their cloud investment, it functions rather like a broker for cloud computing.

CloudZero is a cloud tool that helps companies understand and control their cloud computing expenses. It monitors the amount of money an organization spends on cloud services, such as processing power or storage, and offers information about where that cash is being spent. Through the analysis of usage data, CloudZero assists companies in making the most of their cloud computing investment, finding ways to cut costs, and optimizing their cloud resources. It functions as a financial expert for cloud expenses, helping companies control costs and make more informed decisions.

Lacework functions like a security guard in the digital realm. To ensure that your computer systems, networks, and cloud services are safe from cyber threats, it closely monitors everything that occurs in them. Its real-time data analysis searches for any odd behavior or possible threats, and it notifies you if it notices anything unusual so you can implement the necessary cybersecurity precautions to safeguard your data and maintain the security of your systems.

Software provider Informatica assists companies in efficiently managing and analyzing their data. It offers solutions and the best cloud computing tools to gather, arrange, and convert data from diverse sources into insightful knowledge. In essence, it aids companies in making the most of their information to enhance operations and make better decisions.

IT operations teams use Chef to streamline the organizing and setting up of computer systems. It lets users specify the ideal state of their systems in code, and it then applies those configurations automatically to multiple servers or machines to guarantee consistency and dependability.

Puppet is a configuration management tool for computer systems, much like Chef. It helps users manage substantial installations, maintain consistency, and expedite system administration tasks by allowing them to define and enforce the desired state of their IT systems using code.

An expert cloud migration tool in the best cloud computing tools category, Cloudsfer makes transfers between numerous cloud storage services easy and hassle-free.

It offers butter-smooth transfers of files, folders, and other data that users can manage themselves, and they will find it extremely handy for arranging digital property in a safe and secure cloud environment.

One tool that aids in task automation for IT operations is Ansible. It enables you to control and configure numerous PCs or servers from a single location, making it simpler to install programs, manage configurations, and coordinate difficult tasks without manual intervention.

Carbonite is your storage room, where you can restore whatever data you wish, securing it from loss. In the event of a malware attack, accidental deletion, or some other calamity, a tool like this acts as a safe that can back up, restore, and preserve your precious data anytime.

Read more here:

Best Cloud Tools of 2024: Unleash Maximum Productivity - Simplilearn

Posted in Cloud Computing | Comments Off on Best Cloud Tools of 2024: Unleash Maximum Productivity – Simplilearn

Real-time Analytics News for the Week Ending February 24 – RTInsights

Posted: at 12:17 am

In this week's real-time analytics news: AWS announced that two Mistral AI models will soon be available on Amazon Bedrock.

Keeping pace with news and developments in the real-time analytics and AI market can be a daunting task. Fortunately, we have you covered with a summary of the items our staff comes across each week. And if you prefer it in your inbox, sign up here!

Amazon Web Services (AWS) announced that two high-performing Mistral AI models, Mistral 7B and Mixtral 8x7B, will be available soon on Amazon Bedrock. Mistral AI is the 7th foundation model provider on Amazon Bedrock, joining models from AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon.

With these two Mistral AI models, users have the flexibility to choose the optimal, high-performing LLM for their specific use case to build and scale generative AI applications using Amazon Bedrock. The models and their capabilities include:

Salesforce announced the general availability of Tableau Pulse, making it easier for customers to make better, faster decisions with trusted generative AI and analytics. Tableau Pulse uses generative AI to surface insights in both a natural language and visual format, making it easier to discover important metrics, gain insights, ask questions, and tie data to a real-world business context. And, with AI capabilities running through the Einstein Trust Layer, a secure AI architecture natively built into the Salesforce Platform, Pulse allows teams to use generative AI without compromising their customer data. (Tableau Pulse is included for free with all Tableau Cloud editions and Embedded Analytics solutions.)

DataStax announced its out-of-the-box retrieval augmented generation (RAG) solution, RAGStack, is now generally available powered by LlamaIndex as an open source framework, in addition to LangChain. RAGStack with LlamaIndex offers a comprehensive solution tailored to address the challenges encountered by enterprise developers in implementing RAG solutions. Key benefits include a curated Python distribution available on PyPI, ensuring smooth integration with Astra DB, DataStax Enterprise (DSE), and Apache Cassandra, and a live RAGStack test matrix and proven GenAI app templates.

Arcitecta announced enhancements to its Mediaflux Livewire offering that address the challenges of transmitting data over low-bandwidth and unreliable network connections. The software solves the problems of large-scale data transmission over slow, undependable network connections. With the latest Mediaflux Livewire, customers can securely and reliably transfer massive file volumes, fostering collaboration and allowing users to focus on their data, not the management of the data.

Avaamo released Avaamo LLaMB, a new low-code framework for building generative AI applications in the enterprise safely, securely, and faster than anything else in the market. Avaamo LLaMB addresses the key challenges in enterprise adoption of Generative AI: eliminating hallucinations, integrating disparate enterprise content, automating cross-enterprise workflows, and accommodating any large language model (LLM) the enterprise chooses.

CitiusTech announced the launch of a solution for healthcare organizations to help address the reliability, quality, and trust requirements for Generative AI (Gen AI) solutions. The CitiusTech Gen AI Quality & Trust solution helps organizations design, develop, integrate, and monitor quality and facilitate trust in Generative AI applications, providing the confidence needed to adopt and scale Gen AI applications enterprise-wide. The solution offering is a software-based framework coupled with consulting, implementation, and support services.

Exasol announced Espresso AI with the launch of three new artificial intelligence (AI) capabilities designed to help enterprises approach data analytics in a faster, more cost-efficient, and flexible manner. With these new features, Espresso AI provides organizations with the tools needed to harness the power of their data for advanced AI-driven insights and decision-making. By leveraging Espresso AI, data teams are equipped to address business-critical needs such as demand forecasting, fraud detection, and churn prediction.

Gathr Data announced the launch of Gen AI fabric, an integrated platform approach to building Gen AI solutions over a unified visual canvas. The new Gen AI capabilities enable enterprises to streamline operations, workforce training, and customer interactions. Gathr offers data engineering, machine learning, action analytics, and process automation capabilities. With Gen AI fabric, enterprises can now leverage Gathr's data engineering and Gen AI capabilities to process large data volumes efficiently and create AI applications over a single pane.

Google introduced Gemma, a new generation of open models to assist developers and researchers in building AI responsibly. Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Accompanying the model weights, Google is also releasing tools to support developer innovation, foster collaboration, and guide the responsible use of Gemma models.

Hammerspace unveiled a high-performance NAS architecture to address the requirements of broad-based enterprise AI, machine learning, and deep learning (AI/ML/DL) initiatives and the widespread rise of GPU computing both on-premises and in the cloud. This new category of storage architecture, Hyperscale NAS, is built on the tenets required for large language model (LLM) training and provides the speed to efficiently power GPU clusters of any size for GenAI, rendering, and high-performance computing.

Ikigai Labs announced that its AI platform is now available on the SAP Store, the online marketplace for SAP and partner offerings. Ikigai is built on top of its three proprietary foundation blocks developed from years of MIT research: aiMatch for data reconciliation, aiCast for prediction, and aiPlan for scenario planning and optimization. The Ikigai platform integrates with SAP to deliver AI-enabled data reconciliation, time series forecasting, and scenario planning across a broad range of industry and horizontal use cases.

Lightning AI announced it has signed a Strategic Collaboration Agreement (SCA) with Amazon Web Services, Inc. (AWS). By collaborating with AWS, Lightning AI is able to offer a powerful, enterprise-grade, cloud-based platform for building and deploying AI products. To that end, the SCA allows Lightning AI to leverage AWS compute services to power generative AI services and to provide support for Amazon Elastic Compute Cloud (Amazon EC2) Trn1 instances, powered by AWS Trainium accelerators, directly within the platform.

ManageEngine, a division of Zoho Corporation, announced the release of an ML-powered exploit triad analytics feature in its SIEM solution, Log360. Now, enterprises can knowledgeably trace the path of adversaries and mitigate breaches, as the feature provides complete contextual visibility into the exploit triad: users, entities, and processes.

Nasuni announced Nasuni IQ, data intelligence capabilities to help enterprises manage, assess, and prepare their unstructured data environment for AI. With Nasuni IQ, businesses can quickly monitor usage patterns, make proactive data management decisions, and better enable the delivery of intelligent insights. The Nasuni File Data Platform supports the next generation of data initiatives by consolidating unstructured data within a single global namespace.

Reltio introduced its latest product release featuring Reltio's LLM-powered, pre-trained machine learning feature for entity resolution and a new AI capability designed to simplify matching and make digital content searches easier. Key enhancements in the Reltio Connected Data Platform 2024.1 include Reltio's Flexible Entity Resolution Networks (FERN) for rule-free matching, Reltio Intelligent Assistant (RIA), a product data domain for the Reltio for Life Sciences velocity pack, and more.

Sonatype announced artificial intelligence and machine learning (AI/ML) component detection, available as part of Sonatype Lifecycle. This technology enables companies to accelerate software development while effectively managing the risks associated with AI. Key functions of AI/ML component detection include AI/ML usage monitoring and component categorization, AI usage management, and internal detection of AI models.

Tabnine announced new product capabilities that enable organizations to get more accurate and personalized recommendations based on their specific code and engineering patterns. Engineering teams can now increase Tabnine's contextual awareness and quality of output by exposing it to their organization's environment, both their local development environments and their entire code base, to receive code completions, code explanations, and documentation that are tailored to them.

If your company has real-time analytics news, send your announcements to [emailprotected].

In case you missed it, here are our most recent previous weekly real-time analytics news roundups:

Excerpt from:

Real-time Analytics News for the Week Ending February 24 - RTInsights

Posted in Cloud Computing | Comments Off on Real-time Analytics News for the Week Ending February 24 – RTInsights

Synadia Raises $25 Million Series B Funding to Meet Massive Demand for Multi-cloud and Edge Computing Driven by AI – PR Newswire

Posted: at 12:17 am

Investment led by Forgepoint Capital; Next-generation platform developer will use funds to accelerate the development of NATS.io for edge AI and multi-cloud applications

SAN MATEO, Calif., Feb. 22, 2024 /PRNewswire/ -- Synadia Communications, Inc., creators and maintainers of NATS.io (NATS), the open-source cloud and edge-native messaging system for high-performance data streaming, today announced it has closed a $25 million Series B financing round. Led by Forgepoint Capital, the investment round includes participation from existing investors and new investors Singtel Innov8, LDVP and 5G Open Innovation Lab. Forgepoint Managing Director Ernie Bio will join Synadia's board of directors.

The round brings Synadia's total funding to $51 million raised to date and further validates the company's holistic approach to solving the increasingly complex and pervasive challenges of distributed computing, which is driving rapid growth in edge computing investments. The recent Worldwide Edge Spending Guide from International Data Corporation (IDC) predicts that the worldwide edge computing market will reach $250.6 billion in 2024 with a compound annual growth rate (CAGR) of 12.5% from 2019 to 2024.

The latest round of funding will enable Synadia to further accelerate the growth of NATS.io, the connective platform used by tens of thousands of companies daily to extend their digital services to the edge. From leading enterprises such as Rivian and Walmart to rapid-growth startups like Replit and Personal.AI, NATS is becoming the de facto platform for delivering newly developed edge applications where data and microservices need to be accessed closer to the user, whether for performance or to reduce escalating cloud costs. These new applications span areas such as AI/ML, real-time customer experience, immersive retail, and industrial IoT.

"Companies are increasingly shifting beyond the cloud to the edge to better anticipate and address customer and operational needs of speed and data management. As they do this, legacy technologies are not built for distributed cloud architecture and optimizing user experience," said Bio. "Synadia has experienced rapid growth and widespread customer adoption due to the flexibility and reliability of its Adaptive Edge platform and its 'secure by design' architecture which meets the growing industry demand to improve performance."

Founded in 2017 by CEO Derek Collison, the creator of NATS, Synadia entered the market to address the challenges of edge and multi-cloud computing. It has established itself as a leader in developing and supporting highly available, low-latency applications and microservices for multi-cloud and edge environments.

"Building on the rapid success of the NATS open-source project, our Synadia Cloud and Synadia Platform offerings continue to drive exceptional value," said Collison. "With the support of Forgepoint Capital and our other investors, Synadia is well-positioned to provide real-time and secure access to services and data in any geography, with any cloud provider, and at any edge location. This funding further validates our product-market fit, accelerates our momentum, and underscores the significant opportunity in front of us to further power the proliferation of modern distributed systems."

"By partnering with Synadia to run NATS, we have seen great improvements in performance over our existing communication layer, while reducing costs," said Franco Sabini, Head of IT, Trading Online at Fineco, the number one bank in Italy according to the Forbes World's Best Banks List 2022. "NATS provided us with superior performance over competitive systems. When we factored in that it also has a key-value (KV) store built in, and does not require us to run load balancers or service meshes, we have reduced complexity and thereby costs by nearly three times."

Synadia Products and Resources

About Synadia
Synadia empowers developers and enterprises to accelerate the delivery of their edge applications strategy in a highly secure and cloud-agnostic way. We are the creators and maintainers of the award-winning NATS.io open-source platform powering thousands of applications globally. Founded in 2017, Synadia is backed by leading VCs and strategic investors, including Forgepoint Capital, True Ventures, Bold Capital Partners, LDVP, Singtel Innov8, Accenture, and Samsung Next. Synadia's diverse customer base ranges from Global 500 enterprises in Finance, Retail, Automotive, and Industrial Manufacturing to innovative startups across FinTech, AI, Green Energy, and Gaming. Learn more at https://www.synadia.com/.

About Forgepoint Capital
Forgepoint Capital is a leading cybersecurity and digital infrastructure software venture capital firm that invests in transformative companies protecting the digital future. With $1B+ AUM, the largest sector-focused investment team, and a portfolio of nearly 40 companies, the firm brings over 100 years of proven company-building experience and its Advisory Council of more than 90 industry leaders to support entrepreneurs advancing innovation globally. Founded in 2015 and headquartered in the San Francisco Bay Area, Forgepoint is proud to help category-defining companies reach their market potential. Learn more at https://www.forgepointcap.com.

SOURCE Synadia

Originally posted here:

Synadia Raises $25 Million Series B Funding to Meet Massive Demand for Multi-cloud and Edge Computing Driven by AI - PR Newswire

Posted in Cloud Computing | Comments Off on Synadia Raises $25 Million Series B Funding to Meet Massive Demand for Multi-cloud and Edge Computing Driven by AI – PR Newswire

CEO Outlook 2024: 20 Solution Providers On The Cloud Moment – CRN

Posted: at 12:17 am

The CEOs of 3rd Element, Conduent and Wipro Americas are among the tech leaders to weigh in on cloud in 2024.

Multi-cloud. Hybrid cloud. Cloud-based applications, cloud optimization and cloud-connected mainframes.

This long into the cloud revolution, solution providers continue to find more areas in which to expand their cloud practices while opening up new opportunities in cloud and security, cloud and artificial intelligence, and other areas that stretch use of the technology.

Observations on what 2024 will mean for cloud in this year's CRN CEO Outlook 2024 come courtesy of written responses from 3rd Element's Dawn Sizer, Conduent's Cliff Skelton, Wipro Americas' Srini Pallia and other solution provider leaders.

[RELATED: CEO Outlook 2024]

In some cases, customers seek a solution provider who can provide a variety of technology services. Adapture President Brian Kirsch said that his company will expand cloud offerings because clients are looking for a single source for cloud sourcing and support services who can also offer robust cybersecurity solutions and talent sourcing.

Trace3 CEO Rich Fennessy said that the underpinning of a viable AI strategy is a mature data, security and cloud strategy, all areas of expertise for the solution provider.

"We're ready to help our clients design strategies to tackle complex automation objectives, navigate human capital constraints, and address growing data intelligence demands," Fennessy said.

Read on for what some of the top cloud solution provider CEOs had to say in CRN's 2024 CEO Outlook.

And of course, see the full CEO Outlook project for more thoughts from these and other tech CEOs, including thoughts from some of the biggest networking CEOs.

3rd Element

CEO

What do you see as the toughest challenges facing your customers in 2024?

Most end users are really just embracing cloud technologies and real cybersecurity.

Now they will want to jump ahead and implement AI because they believe there isn't a ramp-up process or a need to first secure the data they have.

There will be any number of breaches this year due to AI and unsecured or improperly secured data.

Adapture

President

What is the biggest market opportunity your company will tackle in 2024?

We are expanding our cloud offerings and strategically investing in our team.

Our clients are looking for a single source for cloud sourcing and support services who can also offer robust cybersecurity solutions and talent sourcing.

Our one-stop approach sets us apart and makes environments more visible and manageable.

Advizex Technologies

CEO

What are the key technology investments you plan to make in 2024?

Over the past year, our strategic focus and investment in consumption technology, multi-cloud solutions and AI have proven to be pivotal in propelling our organization forward. Embracing consumption technology has enhanced our ability to adapt to evolving customer needs, ensuring seamless accessibility and a better ability to scale.

Our commitment to multi-cloud environments provides our customers with unparalleled scalability, resilience and flexibility. Moreover, our strides in AI and analytics have revolutionized customer operations, empowering data-driven decision-making and fostering innovation across all facets of their business.

These investments have not only optimized our customer operations but have also positioned us at the forefront of technical advancement, setting the stage for sustained growth and competitive advantage in today's market.

Blue Mantis

CEO

What do you see as the toughest challenges facing your customers in 2024?

Without question, we have entered a brand new world with respect to data management and privacy from a regulatory perspective. The new SEC mandate to disclose data breaches more quickly and transparently intensifies their already complex environments. Data resiliency is another major challenge.

Customer data is under continuous attack by an increasingly sophisticated universe of adversaries. Clients' risk profiles are expanding. Further compounding these two issues is what I call the growing "tech debt": budgets have gotten tighter across the board in the past few years.

"Do more with less" is now the rule, not the exception, in most enterprises. The availability of skilled tech talent in IT, cybersecurity, and other key areas remains constrained. Most enterprises are grappling with these issues, and they require guidance and direction on how best to prioritize projects in the areas of cloud, cybersecurity and digital transformation.

Caylent

CEO

What is the biggest market opportunity your company will tackle in 2024?

There is no doubt most companies are looking at this, but AI is a large market for us. Not just AI development, but also all the work customers need to do with their data and systems to be ready to successfully leverage the benefits of AI.

We are also seeing significant opportunities as major players in key industries are looking to innovate in the cloud. In this modern cloud era, self-managed infrastructures and VMs are things of the past.

Companies are starting to adopt services managed by the cloud provider, building next-gen infrastructures to enable their talent to build IP instead of managing IT.

AWS provides the best technologies for the future of business. Its significant investment in data, analytics and GenAI creates a unique opportunity for Caylent to provide innovation to our customers by tailoring the AWS product portfolio to meet their specific business needs and strengthen our position as a disrupter in the systems integration space.

Cloudsufi

President, CEO

What is the biggest market opportunity your company will tackle in 2024?

For Cloudsufi, based on our innovation, we are focused on handling the following opportunities, which are key pillars of data monetization for us.

AI-Powered Analytics: Integrating artificial intelligence (AI) and machine learning into analytics processes will continue to be a major opportunity for Cloudsufi.

This includes predictive analytics, natural language processing (NLP), and computer vision to automate data analysis, discover insights and improve industry decision-making.

Supply Chain and Logistics: Post-pandemic disruptions have highlighted the need for supply chain resilience. Analytics will be crucial in optimizing supply chain operations, reducing risks and improving agility.

Edge Analytics: With the proliferation of IoT devices and sensors, there's an opportunity for analytics at the edge. Edge analytics involves processing data locally on IoT devices, reducing latency, and enabling quicker responses and insights, particularly in manufacturing, logistics and health-care industries.

Ethical and Responsible AI: With increased awareness of bias and fairness concerns in AI, there will be opportunities for analytics solutions that audit and monitor AI systems for ethical and responsible use.

Clutch Solutions

Founder, CEO

What is the biggest market opportunity your company will tackle in 2024?

Far and away the market attention will continue to focus on artificial intelligence, but what is really being highlighted is the continued rapid adoption of digital transformation. In this case it's moved beyond traditional IT workload efforts of old, to the more general (read: generative AI) and creative aspects of the way people work and engage with technology.

To enable this, we are seeing our clients increase their focus on data management, hardware optimization, hybrid cloud environments, SaaS migration, managed services, cybersecurity, and process optimization.

Conduent

President, CEO

What is the biggest market opportunity your company will tackle in 2024?

Conduent sees significant opportunity in 2024 and beyond in accelerating business process outcomes and quality through the strategic use of AI, including Generative AI and utilizing our cloud-based and mobile-enabled Business Process as a Service solutions (BPaaS) to enhance customer experience, performance and value for our clients.

For instance, our Digital Integrated Payments Hub streamlines payment processes, reducing processing times from up to 10 days to just minutes. This centralized platform provides secure, efficient and cost-effective payment options for businesses, government and transportation agencies, with transparent transaction tracking.

In the government sector, our cloud-based Medicaid suite (CMdS) automates health-care claims administration and our BenePath Suite streamlines eligibility and enrollment for social services and government aid programs.

We offer technology solutions with AI to modernize client business processes and are collaborating with technology providers to reimagine and enhance value in business processes through generative AI. For example, we are working with a large health-care insurance provider to explore how AI can identify patient or medical provider claims that will be rejected due to missing information or other errors.

Another example is using AI to summarize and analyze large page-volume claim protest submissions, including handwritten documents, to provide insurance providers with relevant, digestible information faster and at less expense than manual analysis.

By solving these challenges, we can save health-care insurance clients resources and costs, while eliminating hassles and delays for medical providers and patients.

ConRes

CEO

What is the No. 1 thing you want your vendor partners to do to support your business in 2024?

In the upcoming year, our primary expectation from vendor partners is the establishment of robust partner programs.

These programs should recognize and appreciate solution providers who contribute substantial value to their offerings, while also carving out opportunities for partner profitability within the collaborative framework.

Additionally, with manufacturers increasingly transitioning to the cloud, our revenue model has undergone a significant shift towards services, amplifying the need for vendor partnerships that align with this evolving landscape.

e360

CEO

What impact do you expect AI to have on your business in 2024?

Over the last 12 months, e360 has been integrating AI into our service offerings. We are focused on creating services that ensure governance, risk and compliance around cybersecurity and protecting customers from unwanted attacks, including those that utilize AI.

We are also working on helping customers implement AI, which has created many opportunities for our organization.

We see some companies trying to do anything and everything when it comes to AI, even in areas where they don't excel.

At e360, we are committed to focusing on our core strengths, which include cloud computing, cybersecurity, digital workplace and modern infrastructure, and delivering the best possible service to our customers in these practice areas.

We plan to eventually integrate AI across the board to make our service offerings even better, and we will accomplish this using various tools, technologies and software partners.

Ensono

CEO

What is the biggest market opportunity your company will tackle in 2024?

Our focus will be on collaborating closely with technology, consultancy and VAR partners to deliver innovative hybrid cloud solutions seamlessly integrating complex legacy, mission-critical systems to modern cloud environments.

An example of this approach is our modern, cloud-connected mainframe enabled by our full ecosystem of partners.

We differentiate ourselves in our ability to modernize even the most complex legacy environments, whether it be making the mainframe a first-class citizen with the rest of a client's hybrid cloud estate or converting legacy code to enable modern cloud-based solutions that allow data to be leveraged in new ways.

With the growing complexity of IT ecosystems, there is an increasing need for managed services that can provide continuous monitoring, security, optimization and innovation. Ensono and our partners will work together to offer managed services supported by the best technology and innovation in the market, ensuring that our clients can focus on their core business objectives while relying on us for their hybrid cloud operational needs from mission-critical to cloud.

ePlus

President, CEO

View original post here:

CEO Outlook 2024: 20 Solution Providers On The Cloud Moment - CRN

Posted in Cloud Computing | Comments Off on CEO Outlook 2024: 20 Solution Providers On The Cloud Moment – CRN

DigitalOcean beats expectations under the helm of new CEO Paddy Srinivasan – SiliconANGLE News

Posted: at 12:17 am

Shares of the cloud computing infrastructure firm DigitalOcean Holdings Inc. traded higher in extended trading today after it delivered earnings, revenue and guidance for the current quarter that came in above expectations.

The company reported earnings before certain costs such as stock compensation of 44 cents per share, nicely ahead of Wall Street's target of 37 cents per share. Revenue growth was a little sluggish at just 11% more than a year earlier, but the company's $181 million in reported sales still came in ahead of the analysts' consensus estimate of $178.1 million.

Investors were likely also pleased to see the company boosting its profitability. It reported net income for the quarter of $15.9 million, rising from a loss of $10.3 million one year earlier.

The company also announced its full-year fiscal 2023 results, with revenue growing by 20% from the previous year, to $693 million.

The results clearly pleased investors, as DigitalOcean's stock rose more than 6% in extended trading, reversing a 3% decline that occurred in the regular trading session.

DigitalOcean stands out as a competitor to Amazon Web Services Inc. and Microsoft Corp. in the public cloud infrastructure market. Instead of competing against those giants head on, it has carved a niche for itself serving small businesses with its developer cloud that makes it easy for small teams of developers to create modern applications.

With the DigitalOcean App Platform, developers can deploy application code in production with a few clicks, in line with the company's stated aim of keeping cloud computing simple. DigitalOcean's pitch is that it takes care of the cloud infrastructure and deployment side of things, so developers can maintain a focus on their code.

The company was formerly led by Yancey Spruill, but last month it announced that Paddy Srinivasan (pictured) was taking over as its new chief executive, concluding a leadership transition plan that was outlined last summer.

Taking part in his first earnings call as the company's new CEO, Srinivasan said he's eager to help the company invest in transformational new AI solutions. Indeed, the company has ambitious plans to play a role in the growing artificial intelligence industry. While most of the headlines around AI are focused on bigger players like Google LLC and Meta Platforms Inc., there are plenty of smaller companies hoping to tap into the power of generative AI, and it's these that DigitalOcean is hoping to cater to with its recently acquired Paperspace platform.

Last month, the company announced that it's making Nvidia Corp.'s most powerful H100 graphics processing units available to small and medium-sized businesses via Paperspace, enabling them to access the critical hardware needed to power AI workloads. It said at the time that it sees big demand for the offering, because the large cloud providers such as AWS and Microsoft Azure have largely optimized their GPU offerings to serve bigger enterprises.

The company's efforts in AI appear to have been well-received so far, for DigitalOcean reported 11% growth in its annual revenue run rate, which ended the quarter at $730 million. It also saw its average revenue per customer increase to $92.63, up 6% from a year earlier, while Builders and Scalers, which refers to customers that spend at least $50 per month on its offerings, increased 8% from a year earlier.

Holger Mueller of Constellation Research Inc. said DigitalOcean did well to swing back to profitability, showing good growth over the full year. However, he said investors may be concerned to see that the company's growth slowed in the final quarter. "One of the most urgent jobs for the new CEO Paddy Srinivasan will be to find a way to rekindle its growth, and it will be interesting to see how he does that," the analyst said. "From his comments, it seems AI will likely be a key strategy, and we'll hopefully learn more about its plans and prospects in the coming quarters."

Looking to the coming quarter, DigitalOcean said it sees earnings of between 37 and 39 cents per share, ahead of the Street's forecast of 37 cents. It also expects revenue of between $182 million and $183 million, ahead of Wall Street's target of $181.4 million.

For fiscal 2024, the company is targeting earnings of between $1.60 and $1.67 per share, versus the consensus estimate of $1.62. For revenue, it's looking at a range of $755 million to $775 million, versus the analysts' $765.9 million target.


Read the original:

DigitalOcean beats expectations under the helm of new CEO Paddy Srinivasan - SiliconANGLE News

Posted in Cloud Computing | Comments Off on DigitalOcean beats expectations under the helm of new CEO Paddy Srinivasan – SiliconANGLE News

Securing Kubernetes in a Cloud Native World – The New Stack

Posted: at 12:17 am

Kubernetes has revolutionized the way cloud native applications are deployed and managed, but how can you mitigate those weak links in cloud environments?

Simply put, cloud native means building, deploying and managing your applications in cloud computing environments. Applications that are born to live in the cloud tend to be resilient, portable, easily scalable to meet the ups and downs of demand, and easy to update as needs change. Indeed, being cloud native means apps can be changed and updated quickly and frequently, with no impact on service delivery. Apps can be developed and optimized quickly, and then undergo continuous improvement based on user feedback, all at the speed of business.

As the adoption of cloud native applications increases, Kubernetes has emerged as the go-to container orchestrator for many organizations. It automates the deployment, scaling and management of containerized applications, making it an essential part of modern DevOps environments. However, as powerful and prevalent as Kubernetes is, ensuring its security is a non-trivial task. With built-in security features and a growing market of third-party tools, creating a secure Kubernetes deployment requires careful planning, diligent implementation and ongoing management.

Securing your Kubernetes deployments requires a holistic and integrated approach from the earliest stages in the development process. Begin by hardening your infrastructure and host operating system to minimize potential attack vectors. Container images should always be vetted and secure before they are deployed.

Kubernetes includes an array of native security features, including role-based access control (RBAC), network policies and secrets management. RBAC is a fundamental tool that allows administrators to define roles and bind them to users or groups of users, allowing granular control over who can access and modify resources within the cluster. Network policies offer another layer of protection, providing control over how pods communicate with each other and other network endpoints. Secrets management helps in securely storing and managing sensitive information like passwords, tokens and API keys, and allows secrets to be stored and managed centrally within Kubernetes.
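To make the RBAC idea concrete, here is a minimal sketch (not from the original article) of creating a least-privilege Role with the official Kubernetes Python client; the role name, namespace, and permissions are placeholder assumptions.

```python
# A minimal sketch of creating a least-privilege RBAC Role with the official
# Kubernetes Python client. "pod-reader" and "default" are illustrative
# placeholders, not values taken from the article.
from kubernetes import client, config


def create_pod_reader_role(namespace: str = "default") -> None:
    # Load credentials from the local kubeconfig; inside a cluster you would
    # call config.load_incluster_config() instead.
    config.load_kube_config()

    role = client.V1Role(
        metadata=client.V1ObjectMeta(name="pod-reader", namespace=namespace),
        rules=[
            client.V1PolicyRule(
                api_groups=[""],                 # "" selects the core API group
                resources=["pods"],
                verbs=["get", "list", "watch"],  # read-only access to pods
            )
        ],
    )

    rbac_api = client.RbacAuthorizationV1Api()
    rbac_api.create_namespaced_role(namespace=namespace, body=role)


if __name__ == "__main__":
    create_pod_reader_role()
```

Once a role like this exists, a RoleBinding can attach it to specific users or service accounts, keeping access as granular as the article describes.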

Regular and continuous scanning of container images for vulnerabilities is critical to preemptive threat management. To maintain the integrity of containerized applications, signing and verification processes before deployment are also essential.

As the methods of malicious actors evolve, real-time threat detection systems can act as the last line of defense. These systems let you continuously monitor your Kubernetes environment to instantly identify and respond to threats, ensuring that your containerized landscape stays secure.

Successfully navigating Kubernetes security isn't just about setting up your security program correctly; it's an ongoing commitment. The path is riddled with challenges, such as properly configuring Kubernetes, securing container images, managing secrets and ensuring runtime monitoring. Perhaps the most demanding aspect is the need for continuous visibility over the full life cycle of Kubernetes deployments to detect misconfigurations and vulnerabilities promptly.

To achieve this, runtime container security requires agentless scanning across the full stack, including the container, cloud and workloads. Image scanning of running containers and container image registries is vital in this process.

Ensuring long-term security for Kubernetes deployments underscores the need for robust strategies. Regular updates, correct configuration, vulnerability scanning and strict adherence to best security practices are the cornerstones of a secure Kubernetes environment. Likewise, understanding and monitoring industry and regulatory rules is vital for Kubernetes security, ensuring compliance and avoiding data privacy issues.

Changing security regulatory standards make it vital for organizations to keep their Kubernetes deployments compliant. This eliminates various risks including security vulnerabilities, noncompliance penalties and system inefficiencies.

Despite its importance, maintaining compliance is not without challenges. First, the dynamic nature of Kubernetes deployments makes it difficult to track and manage all resources effectively. Second, a lack of visibility into configurations can result in noncompliant setups. Third, manual compliance checks are tedious, error-prone and don't scale well with the increase in Kubernetes clusters.

To meet these challenges head-on, there are several strategies. Automating compliance checks saves time and reduces errors, while introducing uniform policy enforcement across all deployments ensures better control and traceability.

Integrating compliance into the CI/CD pipeline allows for early detection of noncompliance issues, and thus easier remediation. Using these strategies ensures compliance and helps optimize the overall performance of your deployments.
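As a concrete illustration of an automated compliance check that could run in a CI/CD pipeline, the sketch below (not from the article) uses the Kubernetes Python client to flag pods that run privileged containers; the specific policy is an assumed example, and a real pipeline would encode your organization's own rules or a policy engine's output.

```python
# A sketch of an automated compliance check: flag pods that run privileged
# containers. The policy itself is an illustrative assumption.
from kubernetes import client, config


def find_privileged_pods() -> list:
    config.load_kube_config()
    core = client.CoreV1Api()

    violations = []
    for pod in core.list_pod_for_all_namespaces().items:
        for container in pod.spec.containers:
            sc = container.security_context
            if sc is not None and sc.privileged:
                violations.append(f"{pod.metadata.namespace}/{pod.metadata.name}")
    return violations


if __name__ == "__main__":
    offenders = find_privileged_pods()
    for name in offenders:
        print(f"Non-compliant (privileged container): {name}")
    # A non-zero exit code lets the pipeline fail the build on violations.
    raise SystemExit(1 if offenders else 0)
```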

Your organization must watch over your containerized applications, which are vulnerable to all kinds of exploits and threats. Identity and access management are your responsibility, along with all the various configurations, encryption, network traffic protection, segmentation and other details. Adopting industry-grade security best practices can significantly enhance your Kubernetes security profile. The following 10 best practices should guide your Kubernetes security program:

Kubernetes security is a complex but manageable challenge. Organizations can navigate the cloud native world securely by starting with a strong foundation, correctly implementing isolation and multitenancy, securing containers throughout their life cycle and fostering a culture of security.

Continuous monitoring and using the right tools further ensure that the Kubernetes environment remains resilient against evolving threats. As cloud native technologies continue to advance, staying informed and adaptable is key to maintaining a secure Kubernetes ecosystem.

To learn more about Kubernetes and the cloud native ecosystem, join us at KubeCon + CloudNativeCon Europe, in Paris, on March 19-22.


Go here to read the rest:

Securing Kubernetes in a Cloud Native World - The New Stack

Posted in Cloud Computing | Comments Off on Securing Kubernetes in a Cloud Native World – The New Stack

How to Build a Chat Interface using Gradio & Vultr Cloud GPU SitePoint – SitePoint

Posted: at 12:17 am

This article was created in partnership with Vultr. Thank you for supporting the partners who make SitePoint possible.

Gradio is a Python library that simplifies the process of deploying and sharing machine learning models by providing a user-friendly interface that requires minimal code. You can use it to create customizable interfaces and share them conveniently using a public link for other users.

In this guide, you'll be creating a web interface where you can interact with the Mistral 7B large language model through the input field and see model outputs displayed in real time on the interface.

On the deployed instance, you need to install some packages for creating a Gradio application. However, you don't need to install packages like the NVIDIA CUDA Toolkit, cuDNN, and PyTorch, as they come pre-installed on the Vultr GPU Stack instances.

Follow the next steps for populating this file.
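The guide's original code listings did not survive in this excerpt, so the block below is a hedged reconstruction of the import step; module names follow the public Gradio and Hugging Face Transformers APIs and are assumptions rather than the guide's verbatim code.

```python
# Reconstructed import block (assumed, not verbatim from the original guide).
from threading import Thread

import gradio as gr
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    StoppingCriteria,
    StoppingCriteriaList,
    TextIteratorStreamer,
)
```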

The above code snippet imports all the required modules in the namespace for inferring the Mistral 7B large language model and launching a Gradio chat interface.
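Again as a reconstruction rather than the guide's verbatim code, the initialization step might look like the following; the checkpoint ID mistralai/Mistral-7B-Instruct-v0.1 is an assumption, and the guide may use a different Mistral 7B variant.

```python
# Assumed checkpoint; the original guide may reference a different model ID.
model_id = "mistralai/Mistral-7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit comfortably on a single GPU
)

# Move the model to the GPU when CUDA is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```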

The above code snippet initializes the model and tokenizer and enables CUDA processing.
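A hedged sketch of the stopping-criteria class follows; stopping on the tokenizer's end-of-sequence token is an assumption, and the guide may use additional stop ids.

```python
# Sketch of a custom stopping criterion. The stop-token list is an assumption.
class StopOnTokens(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor, **kwargs) -> bool:
        stop_ids = [tokenizer.eos_token_id]
        return int(input_ids[0][-1]) in stop_ids
```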

The above code snippet defines a new class named StopOnTokens that inherits from the StoppingCriteria class.
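A possible shape for the next snippet, which begins the predict() function, is sketched below; the &lt;human&gt;/&lt;bot&gt; tags are illustrative placeholders and may differ from the guide's exact formatting.

```python
# Sketch of the first half of predict(): flatten the paired chat history into
# a single tagged prompt string. Tags here are illustrative placeholders.
def predict(message, history):
    stop = StopOnTokens()

    # Append the new user message with an empty bot reply, then format each
    # pair with tags marking who said what.
    history_with_message = history + [[message, ""]]
    messages = "".join(
        "\n<human>: " + user_msg + "\n<bot>: " + bot_msg
        for user_msg, bot_msg in history_with_message
    )
    # ...continued in the next snippet: tokenize `messages` and stream tokens.
```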

The above code snippet defines a variable for the StopOnTokens() object and another for storing the conversation history. It formats the history by pairing each message with its response and adding tags that indicate whether it came from a human or the bot.

The code snippet in the next step is to be pasted inside the predict() function as well.
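A hedged sketch of that continuation, using Transformers' TextIteratorStreamer, is shown below; the generation parameter values are illustrative defaults rather than the guide's exact settings.

```python
    # Continuation of predict(): stream tokens back to the UI as they are
    # generated. Parameter values below are illustrative defaults.
    model_inputs = tokenizer([messages], return_tensors="pt").to(device)
    streamer = TextIteratorStreamer(
        tokenizer, timeout=10.0, skip_prompt=True, skip_special_tokens=True
    )
    generate_kwargs = dict(
        model_inputs,
        streamer=streamer,
        max_new_tokens=1024,
        do_sample=True,
        top_p=0.95,
        top_k=50,
        temperature=0.7,
        stopping_criteria=StoppingCriteriaList([stop]),
    )
    # Run generation in a background thread so tokens can be yielded as they arrive.
    Thread(target=model.generate, kwargs=generate_kwargs).start()

    partial_message = ""
    for new_token in streamer:
        partial_message += new_token
        yield partial_message
```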

The streamer requests new tokens from the model and receives them one by one, ensuring a continuous flow of text output.

You can adjust the model parameters such as max_new_tokens, top_p, top_k, and temperature to manipulate the model response. To know more about these parameters you can refer to How to Use TII Falcon Large Language Model on Vultr Cloud GPU.
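The final step wires predict() into Gradio's chat component and launches the app. The sketch below assumes a Gradio version that ships gr.ChatInterface; passing server_name="0.0.0.0" is also an assumption, made so the app is reachable from a browser outside the instance.

```python
# Wire predict() into Gradio's built-in chat UI. launch() serves on port 7860
# by default; server_name="0.0.0.0" exposes the app on all interfaces.
gr.ChatInterface(predict).launch(server_name="0.0.0.0", server_port=7860)
```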

Gradio uses the port 7860 by default.

Executing the application for the first time can take additional time for downloading the checkpoints for the Mistral 7B large language model and loading it on to the GPU. This procedure may take anywhere from 5 mins to 10 mins depending on your hardware, internet connectivity and so on.

Once it executes, you can access the Gradio chat interface via your web browser by navigating to:

The expected output is shown below.

In this guide, you used Gradio to build a chat interface and infer the Mistral 7B model by Mistral AI using Vultr GPU Stack.

This is a sponsored article by Vultr. Vultr is the world's largest privately held cloud computing platform. A favorite with developers, Vultr has served over 1.5 million customers across 185 countries with flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. Learn more about Vultr.

See the article here:

How to Build a Chat Interface using Gradio & Vultr Cloud GPU SitePoint - SitePoint

Posted in Cloud Computing | Comments Off on How to Build a Chat Interface using Gradio & Vultr Cloud GPU SitePoint – SitePoint

Microsoft to invest $2.1bn in cloud and AI infrastructure in Spain – DatacenterDynamics

Posted: at 12:17 am

Microsoft has committed to investing $2.1 billion in cloud computing and artificial intelligence (AI) infrastructure in Spain over the next two years.

The planned investment was shared by the vice chair and president of Microsoft Brad Smith via a post on X following a meeting with the Prime Minister of Spain, Pedro Sanchez.

"Im thrilled to announce that we will expand our AI and cloud infrastructure in Spain by $2.1bn in the next two years," Smith said. "Our investment is beyond just building data centers, its a testament to our 37-year commitment to Spain, its security, and development and digital transformation of its government, businesses, and people."

Prime Minister Sanchez added: "In addition, we have analyzed cooperation opportunities to strengthen cybersecurity and promote artificial intelligence in Public Administration. Public-private collaboration is essential to successfully face the challenges of digital transformation."

Microsoft has previously invested in AI in Spain with its 2021 R&D hub in Barcelona which is dedicated to artificial intelligence, machine learning, and deep learning.

The company first announced plans for a Spanish Azure cloud region in early 2020, and will be the last of the major providers to set up there. The company is currently developing data centers in Madrid and the Aragon region.

The former will be constructed by Ferrovial as confirmed in January 2023. Microsoft ultimately plans to operate three data centers in Madrid.

The Aragon data center development, announced in October 2023, is reported to comprise three sites around the city of Zaragoza: a 63-acre site in the Recycling Technology Park (PTR), as well as land in La Puebla de Alfindén to the east of Zaragoza and La Muela to the west. Build-out is reportedly expected over seven years.

Google was the first to launch a Spanish region, opening one in Madrid in May 2022. Amazon launched an AWS cloud region in Aragon in November 2022. Oracle opened a Madrid region in September with plans for a second in the works.

The latest investment commitment comes shortly after Microsoft announced that it was committing $3.44bn to doubling Germany's AI infrastructure and cloud computing capacity.

As with the Spanish investment, this will be delivered over two years, with some of the money dedicated to training 1.2m people in Germany with AI skills.

Read more here:

Microsoft to invest $2.1bn in cloud and AI infrastructure in Spain - DatacenterDynamics

Posted in Cloud Computing | Comments Off on Microsoft to invest $2.1bn in cloud and AI infrastructure in Spain – DatacenterDynamics

Stannah looks to enterprise cloud software to lift IT systems – ComputerWeekly.com

Posted: at 12:16 am

Stairlift and lift manufacturer Stannah Group said it has made a significant investment in a cloud-based enterprise software roll-out to modernise its IT systems and prepare for the company's future growth.

Stannah is one of the UK's leading engineering companies, and remains an independent, family-owned and run business that has sold over 700,000 stairlifts worldwide. It now operates in more than 40 countries, and has subsidiaries in 12 countries, including France, Italy and the US.

The company will be rolling out IFS Cloud for enterprise resource planning (ERP), field service management (FSM), planning and scheduling optimisation (PSO), and enterprise asset management (EAM).

The implementation of IFS Cloud will enable Stannah to replace an existing legacy system and 67 edge systems with the new enterprise offering, which will provide real-time visibility across company departments.

Stannah said that, for the first time, all of its employees across 13 countries will be aligned on a modern and cohesive solution that consolidates all of Stannah's operational functions, from manufacturing to field service management. When fully rolled out, the new system will support over 2,500 users.

Nick Stannah, joint CEO of Stannah Home Accessibility, said this is a major project for Stannah, and a keystone of a new IT strategy to modernise the Stannah Group's whole IT estate.

"Our legacy systems have served us well, but we have grown considerably over the past decade and plan to build on that," he said. "We have a lot of technical debt in many legacy systems, and infrastructure that is not joined up and no longer fit for purpose," he told Computer Weekly.

Stannah said the group also has a lot of international businesses running separate systems and needs to move to a globally integrated enterprise-wide IT platform that can better support its business growth plans.

"Our new IT strategy aims to ensure that IT is an enabler for the Group to achieve its future business objectives," he said. "Moving to cloud services, and modernising our whole infrastructure, enables us to achieve this, providing scalability, flexibility and security for our growing business."

Stannah said the company had selected IFS as its cloud ERP provider after a very thorough selection process during 2023. "IFS's evergreen cloud solution means we can keep modernising our platform as technology evolves," he said, adding that IFS is a strong fit for Stannah because it can provide the range of functionality the company requires, "providing very capable solutions for our manufacturing and field service operations."

The company is at the very start of this project, and is currently at the planning stage. "This will be a considerable undertaking," he said.

It has created a team of dedicated people from across the business to see the project through, and will be working through audits of current systems to assess business needs and processes before working on training every member of staff on the system.

Phase one of the roll-out is likely to go live around January 2026, when Stannah will deploy it across its manufacturing plants as well as all of its UK and France trading businesses. Phase two will see the roll-out of the system across all of the company's other international businesses over the following two years.

Among the broader business benefits Stannah aims to get will be standardised business processes that will deliver significant efficiency improvements across the business. The project will also lead to improved user experience for employees using the business systems, and improved service experience for customers.

It should also offer real-time business reporting and integrated data to improve business decisions, and improved planning and scheduling operations across Stannah's manufacturing and field services.

As the project only kicked off in January, Stannah said it's still early days. "A key focus is not only developing the solution, but also communicating our strategy and plans to everyone in the business to prepare our people for the changes the system will bring," he said. "This will be a huge change for our people, and we need to ensure we provide the necessary change management, training and support for everyone to succeed."

The move to the cloud-based system will drive efficiencies and productivity for Stannah by enabling it to run a wide range of operational functions on a single platform with a single data model, the companies said. Initially, these will include customer relationship management, sales, planning, supply chain, manufacturing, human capital management, finance, asset and field service management.

By streamlining business processes, the new system will allow for more responsiveness in the service offered to customers, thereby driving enhanced productivity and efficiencies including first-time fix rates, route planning and more, said Stannah.

IFS said its AI-driven PSO (planning and scheduling optimisation) engine will also streamline Stannah's field service operations, helping to implement cost-saving route planning and solve complex scheduling issues. Using enterprise asset management (EAM) and PSO in tandem will ensure Stannah can optimise asset uptime and maintenance time, IFS said. Stannah will also use IFS Success Services, which support customers from adoption and engagement through to software support, and has invested in IFS Implementation Services, which IFS offers its clients to help them deploy its software.
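
IFS does not detail how PSO works in the announcement, so the sketch below is only a toy illustration of the class of problem such a scheduling engine tackles: ordering a field engineer's jobs to reduce travel. The job names, coordinates and the greedy nearest-neighbour heuristic are all invented for the example and bear no relation to IFS's actual algorithms.

```python
# Toy illustration only: a greedy nearest-neighbour route planner, not IFS PSO.
# Job names, coordinates and the depot location are invented for the example.
import math

def plan_route(depot, jobs):
    """Visit the nearest unvisited job each time; return the visiting order."""
    route, current, remaining = [], depot, dict(jobs)
    while remaining:
        name, pos = min(remaining.items(), key=lambda kv: math.dist(current, kv[1]))
        route.append(name)
        current = pos
        del remaining[name]
    return route

jobs = {"stairlift service A": (3, 4), "lift install B": (1, 1), "repair C": (6, 2)}
print(plan_route((0, 0), jobs))
# -> ['lift install B', 'stairlift service A', 'repair C']
```

A production scheduler would also weigh time windows, engineer skills and parts availability, which is why such systems treat this as a constrained optimisation problem rather than a simple greedy pass.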

Cloud adoption by enterprises continues to grow at a rapid rate. According to tech analyst Gartner, cloud computing is in the process of shifting from being a technology disruptor to a necessity. The analyst recently predicted that by 2028, modernisation efforts will culminate in 70% of workloads running in a cloud environment, up from 25% in 2023. It said that this year, worldwide user spending on public cloud services is expected to reach $679bn and is likely to exceed $1tn in 2027.
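
For context, growing from $679bn in 2024 to more than $1tn in 2027 implies a compound annual growth rate of roughly 14%. The short calculation below shows the arithmetic implied by the quoted figures; it is an illustration, not Gartner's own methodology.

```python
# Implied growth rate from the Gartner forecasts quoted above (illustration only).
spend_2024 = 679e9   # forecast worldwide public cloud spend, 2024
spend_2027 = 1e12    # forecast worldwide public cloud spend, 2027
years = 2027 - 2024

cagr = (spend_2027 / spend_2024) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~13.8%
```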

Read the original:

Stannah looks to enterprise cloud software to lift IT systems - ComputerWeekly.com


AI vendor finds opportunity amid AI computing problem – TechTarget

Posted: at 12:16 am

With the growth of generative AI, a big problem enterprises and vendors are concerned with is computing power.

Generative AI systems such as ChatGPT suck up large amounts of compute to train and run, making them costly.

One AI vendor trying to address the massive need for compute is Lambda.

The GPU cloud vendor, which provides cloud services including GPU compute as well as hardware systems, revealed it had achieved a valuation of more than $1.5 billion after raising $320 million in a Series C funding round.

The vendor was founded in 2012 and has focused on building AI infrastructure at scale.

As a provider of cloud services based on H100 Tensor Core GPUs from its partner Nvidia, Lambda gives AI developers access to infrastructure for training, fine-tuning and running inference on generative AI models and large language models (LLMs).
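
As an illustration of the kind of workload such a service hosts, the snippet below runs a single training step of a small model on whatever GPU is available. It uses plain PyTorch rather than any Lambda-specific tooling, and the model is a stand-in; a real fine-tuning job would load pretrained LLM weights instead.

```python
# Generic GPU training step in plain PyTorch; not tied to Lambda's own tooling.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")

# Tiny stand-in network; a real fine-tune would load pretrained LLM weights.
model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 512, device=device)   # dummy batch of activations
loss = model(x).pow(2).mean()             # dummy objective for illustration
loss.backward()
optimizer.step()
print(f"One training step completed, loss = {loss.item():.4f}")
```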

One of the early investors, and a participant in the latest funding round, is Gradient Ventures.

Gradient Ventures first invested in the AI vendor in 2018 and then did so again in 2022.

The investment fund became interested in Lambda at a time when the vendor was trying to build AI models without the workstations and infrastructure it needed, a problem that led Lambda to start building AI hardware that researchers can use.

"That's why we were excited is that we saw this sort of challenge to the development," said Zachary Bratun-Glennon, general partner at Gradient Ventures. "Since then, we've been excited as the product has developed."

Lambda grew from building workstations to hosting servers for customers that had bigger compute needs and budgets and then to offering a cloud service with which users can point and click on their own desktop without needing to buy a specialized workstation.

"Our excitement is just seeing them meet the developer and the researcher where they are with what they need," Bratun-Glennon said.

Lambda's current fundraising success comes as the vendor continues to take advantage of the demand for computing in the age of generative AI, Futurum Group research director Mark Beccue said.

"I really think the fundraise ... has got to do with that opportunistic idea that AI compute is in high demand, and they're going to jump on it," he said.

As a vendor with experience building on-premises GPU hardware for data centers, Lambda appeals to investors because of the options it brings to enterprises, he added.

Lambda also enables enterprises to get up and running quickly with their generative AI projects, Constellation Research founder R "Ray" Wang said.

"GenAI on demand is the best way to look at it," Wang said. "Lambda labs basically says, 'Hey, we've got the fastest, the best, and not necessarily the cheapest but a reasonably priced ability to actually get LLMs on demand.'"

"What people are rushing to deliver is the ability to give you your compute power when you need it," he continued.

However, as generative AI evolves, the compute problem could ease somewhat.

Over the past year and a half, generative AI systems have evolved from large models with up to 40 billion parameters to smaller models with as few as 2 billion parameters, Beccue said.

"The smaller the language models are, the less compute you have to use," he said.

Moreover, while Nvidia is known for providing powerful AI accelerators like GPUs, competitors including Intel and AMD have also released similar offerings in the last few months, Beccue added.

For example, Intel's Gaudi2 is a deep-learning processor comparable to Nvidia's H100.

In December, AMD introduced MI300X Accelerators. The chips are designed for generative AI workloads and rival Nvidia H100s in performance.

"The models are getting better, and the chips are getting better and we're getting more of them," Beccue said. "It's a short-term issue."

For Lambda, the challenge will be how to extend beyond solving the current AI computing challenge.

"They're not necessarily going to be competing head-to-head with the cloud compute people," Beccue said. He noted that the major cloud computing vendors -- the tech giants -- are deep-pocketed and have vast financial resources. "I'm sure what they're thinking about is, 'Okay, right now, there's kind of a capacity issue that we can fill. How do we extend over time?'"

As an investor in AI companies, Bratun-Glennon said he thinks generative AI will produce thousands of language models, requiring different amounts of compute.

"Even if there are models that have lower compute requirements, the more use cases people will find to apply them to, the lower the cost that creates so the more ubiquitous that becomes," he said. "Even as models get more efficient, and more companies can use them that expands the amount of compute that is required."

AI compute is also a big market, and it helps that Lambda serves developers -- a different audience from the one other cloud providers target, he added. Hyperscale cloud providers focus on selling to large enterprises and winning large workloads.

"Lambda is the AI training and inference cloud," Bratun-Glennon said. "The thing that has carried through the six years I've been working with them is the AI developer mindset."

Lambda is not the only vendor working to meet the demand for AI compute.

On February 20, AI inference vendor Recogni revealed it had raised $102 million in Series C funding co-led by Celesta Capital and GreatPoint Ventures. Recogni develops AI inference systems to address the AI compute problem.

The latest Lambda round was led by Thomas Tull's U.S. Innovative Technology fund, with participation, in addition to Gradient Ventures, from SK Telecom, Crescent Cove and Bloomberg Beta.

Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.

See the original post here:

AI vendor finds opportunity amid AI computing problem - TechTarget

