Monthly Archives: May 2023

How Digital Technology Helps F&B Manufacturers Become More … – Food Industry Executive

Posted: May 8, 2023 at 5:14 pm

Article sponsored by ECI Software Solutions

The COVID pandemic acted as a forcing function for the food and beverage industry to speed up its digital transformation. It was a perfect storm that revealed all of the cracks in the industry's foundation: the rigidity of supply chains and the precariousness of the labor situation, to name just two. The rules of the game changed practically overnight, and many manufacturing operations broke because they weren't flexible enough to bend.

As a result, manufacturers started taking a close look at their operations and rethinking them with an eye toward increasing agility and resilience. Now, macroeconomic uncertainty is reinforcing the need for businesses to prepare for the unexpected. This article takes a deep dive into what agility and resilience mean for the F&B industry, as well as how digital technologies can help manufacturers achieve these goals and, in doing so, future-proof their organizations.

It was very easy during the pandemic to see what happens when food and beverage operations aren't agile or resilient: grocery shelves were empty. A USDA study found that median stock-out rates of fixed-weight items increased roughly 130% following March 15, 2020. The categories with the highest stock-out rates included meat and poultry products, convenience and frozen foods, baby formula, and carbonated beverages. The reasons for these stock-outs ranged from ingredient and packaging materials shortages to a lack of truck drivers to transport product, to consumers stockpiling non-perishables, to manufacturers that normally supply foodservice being unable to transform their operations for retail.

But what does it mean to be agile and resilient?

These two qualities are often talked about in the same breath, as if they're the same thing. But there are some key differences.

The National Institute of Standards and Technology defines them like this:

Steve Banker, VP of supply chain services at ARC Advisory Group, offers a colorful description:

Agility is an action, or the ability to take action; resilience is a strategy, or a business mindset. You have to be agile in order to be resilient.

So, in practical terms, what does an agile and resilient food and beverage manufacturer look like?

Here are some key hallmarks:

Digital transformation is the key to food and beverage manufacturers becoming agile, resilient organizations. Used correctly, almost any digital technology can contribute to this effort. Here, we'll look at a few of the critical solutions F&B companies should consider.

A digital twin is a virtual replica of a physical system, such as a manufacturing plant or an entire supply chain. By using digital twins, manufacturers can simulate different scenarios and test the impact of different decisions before implementing them in the real world. This can help manufacturers make more informed decisions, reduce the risk of costly mistakes, and become more responsive to changing market conditions.

In a whitepaper on "Building Operational Resilience and Agility," Deloitte outlined how an India-based dairy cooperative used a digital twin to increase its revenue by $98 million during COVID lockdowns:

First, they had partnered with a technology provider to create a digital twin of their supply chain ecosystem, including supplier locations, plants, and customers / retailers. Using this system, they could see the capacity at which plants were operating, how many trucks were operational, and any idle capacity. They could also easily track customer demand, and they saw the demand for cheese and condensed milk increase sharply.

To meet this demand, the company started operating at 115% capacity. It used the digital twin to identify idle capacity and distribute the load. It then started using rail transport, which was considered an essential service, and secured partnerships with multiple e-commerce retailers. In addition, they increased advertising.

According to Deloitte, the digital twin allowed the company to "reconfigure its agile supply chain ecosystem" to sell more products and manage crises better.

When I started writing about the food industry back in 2015, many people I spoke with were hesitant about cloud computing. They were concerned with security risks and protecting their proprietary information. Today, the cloud is widely considered to be more secure than on-premises software.

EY calls cloud computing "foundational for companies that are ambitious about moving and transforming at speed, meeting elevated customer expectations, and future-proofing their business." This is largely because of its speed and ability to connect multiple systems, providing the real-time end-to-end visibility that is a hallmark of an agile and resilient operation.

Digital twins and cloud computing fall under the umbrella of Industry 4.0. There are several other technologies in this category, and a group of researchers recently performed an analysis to identify which technologies have a direct impact on various supply chain resilience elements such as flexibility, redundancy, visibility, robustness, and information sharing.

Their conclusion? All of them contribute to resilience. For agility specifically, these are the technologies they identify: Internet of Things (e.g., sensors), cloud computing, big data analytics, artificial intelligence, digital twins, blockchain, industrial robotics, and additive manufacturing.

ERP is one of the best tools in a food and beverage manufacturer's toolbox for informing day-to-day operations because it supports automation and processes across the entire business and provides real-time data to inform decision-making. According to ECI Software Solutions' 2021 State of Manufacturing Digital Transformation survey, nearly 95% of manufacturers that use ERP say it helped them manage the impacts of the pandemic.

In a blog post, ECI Software Solutions highlighted four ERP trends that help manufacturers stay resilient:

It's clear that food and beverage manufacturers need to harness the power of digital technologies to future-proof their organizations. So, how do you do it? Just like with ERP, there's no one-size-fits-all answer. Every company's journey will be different.

Here are five recommendations from Deloitte for companies to target their digital transformation efforts:

Becoming truly agile and resilient requires food and beverage manufacturers to think about their operations a little differently than they did 10 or 20 years ago. By looking at their entire business ecosystem with an eye toward preparing for the unexpected, companies can ensure the ability to quickly adapt to changing circumstances (agility), which will support future-proofing, no matter what the future might bring (resilience). Embracing the digital transformation is the only way the industry will be able to reach this lofty goal.

View original post here:

How Digital Technology Helps F&B Manufacturers Become More ... - Food Industry Executive


Sourcepass Acquires Proxios to Expand Mid-Atlantic Reach – ChannelE2E

Posted: at 5:14 pm

by Sharon Florentine May 8, 2023

Sourcepass has acquired Proxios, a Virginia-based technology and solution provider. Financial terms of the deal were not disclosed.

This is technology M&A deal number 140 that ChannelE2E and MSSP Alert have covered so far in 2023. See more than 2,000 technology M&A deals for 2022, 2021, and 2020 listed here.

Sourcepass, founded in 2021, is based in New York, New York. The company has 195 employees listed on LinkedIn. Sourcepass' areas of expertise include IT services and consulting, SaaS, robotic process automation and AI.

Proxios, founded in , is based in Midlothian, Virginia. The company has 50 employees listed on LinkedIn. Proxios' areas of expertise include Business IT Transformation, IT Security, Agile Coaching & Transformation, Cloud Computing, Hybrid Cloud Computing, Infrastructure Services, Virtual Desktop, Unified Communications, Network Management, Disaster Recovery, IT Managed Services, DFARS Compliance, Network Assessment, CISO, CIO, Project Management, Backup and Recovery, Office365, Data Recovery, Data Backup, IT Budget, IT Budget Planning, IT Infrastructure Assessment, Data Security Assessment, Security Assessment, IT Budget Assessment, Azure, Office 365, Microsoft, and Citrix.

The acquisition is Sourcepass' seventh within the last 12 months and will further the company's growth strategy, enabling it to expand its physical presence in the mid-Atlantic states while deepening its portfolio of clients in the healthcare, legal, and non-profit sectors, the company said. More than 50 Proxios employees will join the Sourcepass team and add expertise in cloud and network management services.


Chuck Canton, founder and CEO at Sourcepass, commented on the news:

"After many conversations, it became evident that there was a strong value alignment between our two organizations. In Proxios, we found a partner who further enhances our capabilities and is as committed to providing a high-quality experience to clients as Sourcepass."

Patrick Butler, CEO of Proxios, added:

"In addition to many shared values, there were some exciting complementary service offerings between Sourcepass and Proxios. Our clients will now have significantly more resources to leverage and a broader set of services from which to choose. Given these benefits, we knew joining Sourcepass was in the best interests of both our employees and clients."

This is the seventh acquisition for Sourcepass. In December 2022, Sourcepass acquired CCSI.

Other key deals include:

In addition, the company recently raised an additional $65 million in funding, bringing its total funding to $135 million. This investment will be used to further the company's strategic initiatives, including the launch of Quest, an AI-powered digital transformation platform. Sourcepass also said it plans to continue its expansion with a series of strategic acquisitions throughout 2023 aimed at increasing its vertical market reach and bolstering its portfolio of services and talent resources.

View original post here:

Sourcepass Acquires Proxios to Expand Mid-Atlantic Reach - ChannelE2E


AI and cloud tech can enhance the "Netflix like experience" for … – TechGraph

Posted: at 5:14 pm

Speaking to TechGraph, JPS Kohli, Founder & Chairman of SkillUp Online, said, "In line with the Netflix experience, I believe that AI and cloud technology can provide invaluable insights into learner behavior and preferences."

Read the interview:


TechGraph: Could you give a sense of how far SkillUp Online has come in its five years of existence?

JPS Kohli: When I started SkillUp in 2016, my 25 years of experience in cloud-based learning, which included being Director of Online Training at Microsoft, revealed a problem in the industry. Though education was shifting from face-to-face learning, which limits numbers, to online self-paced learning, which is available to anyone, anytime, anywhere, completion rates for online courses were very low. I believed the human touch was missing and created SkillUp to solve this problem.

The business has grown based on a learner-centric training methodology that balances a technology-driven ethos with a human-centered approach. We focus on future skills training that keeps professionals relevant in this changing world.

Our course catalog includes fields such as artificial intelligence, data science, machine learning, cloud computing, cybersecurity, etc.

However, we also offer human skills training that enables professionals to develop the leadership, team building, time management, collaboration, and problem-solving skills that businesses desperately need.

To further the SkillUp vision, we then launched our dedicated training platform, SkillUp Online, in 2019. SkillUp Online is a hybrid learning platform that focuses on learning outcomes (namely skills, practical experience, and certifications) and learner outcomes, where improved skills and practical experience facilitate access to better career opportunities, higher salaries, and future-proof jobs. SkillUp now operates in the US, Europe (based in Portugal), and India. The company has partnered with top organizations, including Microsoft, IBM, and Google Cloud for technical learning content, and NASSCOM and Pacific Lutheran University (USA) for extended reach.

TechGraph: How is SkillUp Online facilitating the entire learning process digitally?

JPS Kohli: The SkillUp Online platform offers a mix of online self-paced learning, peer-to-peer interaction, and online instructor-led sessions. We utilize a flipped learning methodology that blends the best that instructor-led training and self-paced learning have to offer.

This includes high-quality technical training content developed with industry partners such as Microsoft, IBM, and Google, substantial hands-on learning, practical projects, soft skills coaching, and real-world capstone projects. And the outcome is learners develop job-ready tech and human skills that are aligned with the specific roles employers are desperately seeking to fill. This, in turn, gets learners better jobs, faster, than if they'd trained elsewhere.

Additionally, we also provide a valuable human touch to ensure our learners fly. For example, we offer 1-to-1 online mentoring with subject experts who are there to provide support, spot when learners need a nudge, answer questions, and personalize their learning.

Recent projects with NASSCOM and IBM Skills Build are outstanding examples of the success of our approach. Through these initiatives alone, the business has trained over 125,000 learners and achieved 67%+ completion rates. That alone is impressive, but what we're proud of is the fact that we've achieved this by focusing specifically on learner outcomes: higher employability, better salaries, and improved promotion opportunities.


TechGraph: How has the response been so far to your online courses?

JPS Kohli: Our focus on career-aligned skills has prompted a very good response. Our learners are building skills that their traditional degree hasn't covered, and they're achieving great results quickly.

But the stats speak for themselves. With 67%+ completion rates, in comparison to MOOC completion rates of 3%-5%, it's clear that the business is not only enabling deep learning at scale but also helping learners to build successful careers quickly.

To build on this success, we've launched a unique collection of cutting-edge programs that enable learners with minimal tech experience to get job-ready and hit the ground running. For example, our popular TechMaster Certificate Programmes cover data science, artificial intelligence, and cloud computing. The strapline for these new programs, "Designed to get you hired!", perfectly encapsulates our focus on learner outcomes. And learners are welcoming the opportunity to invest their time and money to get a better job and higher salary quickly.

TechGraph: What are the new trends in AI & Machine Learning and the Big Data space?

JPS Kohli: The opportunities appearing through AI, machine learning, and big data are very exciting for edtech. The strides being taken in natural language processing (NLP) are highlighting the genuine possibility of AI instructors now. Though, at SkillUp the human touch will always remain an important aspect of our offering too.

Generative AI is also exploding, with ChatGPT and Bard taking the world by storm. This may soon facilitate new and up-to-date content being accessed and presented automatically in courses through machine learning. It will most likely start with text-based updates being made. But I see no reason why images, video, and even music won't follow shortly too. Content will always need to be moderated by a human being, of course, to check its accuracy. However, the process of updating content will be sped up considerably.

Edge computing also deserves a mention here, with AI processes managing applications locally, as opposed to centrally, to reduce latency. And I'm also excited about explainable AI (XAI), where soon, for example, powerful algorithms will provide descriptions and explanations for why decisions were made in a scenario.

TechGraph: SkillUp Online has been collaborating with different industry experts and technology partners to enhance the students' learning experience. Going forward, do you see more such engagements?

JPS Kohli: Absolutely. Learning from industry experts is critical to the learner's outcome once they've completed a course. Ensuring that the right expertise and support are in place is core to the philosophy of SkillUp. Our approach to learning blends traditional expert input with self-paced learning; achieving the right balance is very important to us. We've found that learners need experts to show them how something is done, so they can then go and practice it themselves.

However, in addition to using skilled instructors and industry experts in our programs, we also ensure we provide high-quality technical training content developed with industry partners such as Microsoft, IBM, and Google. Our technology partners enable us to include a lot of practical, hands-on material in our courses.

As learners work through a course, they get their hands dirty working on labs created by our industry partners. Plus, they are supported by proactive 1-to-1 mentoring from SkillUp Online technical experts. And with this approach, we make it easier and quicker for learners to gain the employable skills backed by real practical experience they need to get a better job.

TechGraph: What is the state of online skill development and learning platforms in the Indian market, especially for Gen Z?

JPS Kohli: Gen Z learners in general, including in India, are the digital natives of our world. They intuitively know how to work an app without training. They are happy to consume bite-size chunks of content. And they've got the confidence to keep up with the fast pace of change in a digital environment. Their challenge, though, is that a give-it-a-go attitude isn't enough: they need employable skills.

This is where online courses come to the fore. The young professionals of today are very comfortable learning online via videos, sound bites, and well-defined learning paths. Online courses facilitate quick and easy access to such modular learning. And because, yes, learners still need nudging, skills-building courses that provide good mentoring support are achieving results much faster than traditional training.

Gen Z's predisposition to consuming digital content, coupled with the clear need for upskilling in hot fields such as artificial intelligence, data science, and machine learning, is creating a match made in heaven. And this means that learning platforms that don't just shell out certificates, but instead ensure learners are armed with employable skills and practical experience to get a job quickly, will make the difference.

TechGraph: How do technologies, namely AI, machine learning, and cloud, have relevance in online courses? What will the future look like?

JPS Kohli: People have gotten used to technology making recommendations based on their user profile; Netflix and Amazon are excellent examples. Edtech learners are no different, and they are now expecting this tailored experience in the learning arena too.

In line with the Netflix experience, I believe that AI, machine learning, and cloud technology can provide invaluable insights into learner behavior, preferences, and needs. And by making good use of the power of machine learning, SkillUp is planning to use technology to tailor custom learning plans for individuals and offer them choices in how they learn and how they are supported to learn.

This personalized approach also has another aspect, though. To speed up deep skilling at scale, future-focused learning platforms like SkillUp Online will need to harness AI and machine learning to power continual learning. And I anticipate this will include adaptive learning, where the progression of the course content will vary depending on the pace of the learner's uptake in understanding.

Generative learning, powered by content created by algorithms akin to ChatGPT, will also soon be working hand in glove with explainable AI (XAI). This will take personalized, continual learning to new heights, with online courses always providing the relevant skills of the moment.

TechGraph: What has the response so far been to code learning courses on your platform?

JPS Kohli: With the explosion of AI and data science across the industry, it's no surprise that our coding courses are popular. A great example is Python. Python is used across all industries, and Python skills learned with SkillUp Online can be applied in many ways. This ensures our learners have transferable skills that will enhance their career opportunities considerably.

Because of this, we've made a point of integrating core Python skills into our flagship TechMaster Certificate portfolio. So, not only are learners developing specific Python skills, but depending on the field they are training for, be it AI, data science, cloud computing, etc., they are also learning how to use their Python skills with other coding competencies. And this approach ensures they do have the skills they need to get hired.


Follow this link:

AI and cloud tech can enhance the "Netflix like experience" for ... - TechGraph


Hyperautomation Market to Grow at CAGR of 16.5% through 2032 … – GlobeNewswire

Posted: at 5:14 pm

Newark, May 08, 2023 (GLOBE NEWSWIRE) -- The Brainy Insights estimates that the hyperautomation market, currently valued at USD 36.46 Billion, will reach USD 168 Billion by 2032. Rising demand for hyperautomated technological solutions that reduce business operational costs is expected to propel the growth of the hyperautomation market globally. Such solutions save not only time but also energy, labour, and money for the organization. For instance, in May 2019, Thermax Limited adopted hyperautomation to replace a manual chemical mixing process, reducing the work required from 40 man-days to 15 man-days and consequently lowering its operational costs.
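
As a quick sanity check of the headline numbers (assuming a 2022 base year and a roughly ten-year horizon to 2032, which the release does not state explicitly), the stated 16.5% CAGR is consistent with the projected market size:

```latex
36.46 \times (1 + 0.165)^{10} \approx 36.46 \times 4.61 \approx 168 \quad \text{(USD Billion)}
```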

Request to Download Sample Research Report - https://www.thebrainyinsights.com/enquiry/sample-request/13435

Report Coverage Details

North America is expected to account for the largest market size during the forecast period, at 47% of the total market, while Asia Pacific is expected to be the fastest-growing region over the forecast period.

North America emerged as the largest regional market for hyperautomation, owing to increasing adoption of these technologies and the entry of new market players in the region. The Asia Pacific region is anticipated to exhibit the highest growth rate over the period, owing to rising investment in IT infrastructure from countries such as India, China, and Japan. Furthermore, increasing demand for cloud computing in these countries has also contributed to the growth of the hyperautomation market in the region.

Machine Learning (ML) dominated the market, with the most significant segment revenue of USD 40 Billion in 2022.

Machine learning dominated the market. Machine learning is a branch of artificial intelligence that primarily uses algorithms and models to uncover critical insights and other focus areas. Hence, rising adoption of machine learning globally will boost the overall growth of the hyperautomation market.

IT & Telecom accounted for the largest share of the market, with a market revenue of USD 45.23 Billion in 2022.

The IT & Telecom segment has dominated the hyperautomation market and is also expected to be the fastest-growing segment globally, owing to increased adoption of integrated Robotic Process Automation (RPA). RPA ultimately helps simplify operational tasks and provides long-term revenue generation opportunities over the forecast period.

Procure Complete Research Report - https://www.thebrainyinsights.com/report/hyperautomation-market-13435

Latest Development:

In April 2022, Juniper Networks entered into a partnership agreement with PP Telecommunication Sdn Bhd (PPTEL). The main objective of this partnership was to provide the company with solutions that will help it strengthen and build out its growth plan. With this, the company will be able to fulfil the demands and needs of its end users and provide high-quality network communication facilities over the forecast period.

In February 2022, IBM and SAP entered into a partnership agreement. The main objective of this agreement was to provide consulting services in the area of hyperautomation technology. Further, the agreement will also provide hybrid cloud solutions and extend critical SAP-related offerings to various regulated and unregulated industries.

Market Dynamics

Drivers: Digitization of traditional manufacturing plants

Rising digitization and automation of traditional manufacturing plants is one of the major factors boosting the growth of the hyperautomation market. To solve complex data problems and minimize unnecessary manual effort, various organizations have adopted hyperautomation to reduce their Operating Expenditure (OPEX) and enhance their productivity and efficiency levels.

Restraint: Scarcity of skilled workers

With constantly evolving technology, there is higher demand for skilled and trained professionals who can manage and streamline workflows effectively and efficiently. Therefore, a lack of skilled workers may hamper the growth of the hyperautomation market over the forecast period.

Opportunity: Increased demand of Hyperautomation to lower overall business operational costs

Rising demand for hyperautomated technological solutions that reduce business operational costs is expected to propel the growth of the hyperautomation market globally. Such solutions save not only time but also energy, labour, and money for the organization. For instance, in May 2019, Thermax Limited adopted hyperautomation to replace a manual chemical mixing process, reducing the work required from 40 man-days to 15 man-days and consequently lowering its operational costs.

Challenge: Higher installation and maintenance costs

High installation and maintenance costs are one of the major challenges that organizations and other high-tech firms face in the current market scenario. With ongoing advancements in hyperautomation and increasing demand for it, companies are now aware of the complex procedural requirements that must be managed. Thus, only large corporate firms are able to absorb the substantial installation and maintenance costs of these solutions, which largely keeps MSMEs from taking advantage of hyperautomated technology.

Interested to Procure the Research Report? Inquire Before Buying - https://www.thebrainyinsights.com/enquiry/buying-inquiry/13435

Some of the major players operating in the Hyperautomation market are:

UiPath, Wipro Ltd., Tata Consultancy Services Ltd., Mitsubishi Electric Corporation, OneGlobe LLC, SolveXia, Appian, Automation Anywhere Inc., Allerin Tech Pvt. Ltd., PagerDuty, Inc., and Honeywell International Inc.

Key Segments cover in the market:

By Type:

Biometrics, Machine Learning, Context-Aware Computing, Natural Language Generation, Chatbots, and Robotic Process Automation

By End-User:

BFSI, Retail, IT & Telecom, Education, Automotive, Manufacturing, and Healthcare & Life Science

Have Any Query? Ask Our Experts:https://www.thebrainyinsights.com/enquiry/speak-to-analyst/13435

About the report:

The global Hyperautomation market is analysed based on value (USD trillion). All the segments have been analysed on a worldwide, regional, and country basis. The study includes the analysis of more than 30 countries for each part. The report offers an in-depth analysis of driving factors, opportunities, restraints, and challenges for gaining critical insight into the market. The study includes porter's five forces model, attractiveness analysis, raw material analysis, supply, demand analysis, competitor position grid analysis, distribution, and marketing channels analysis.

About The Brainy Insights:

The Brainy Insights is a market research company, aimed at providing actionable insights through data analytics to companies to improve their business acumen. We have a robust forecasting and estimation model to meet the clients' objectives of high-quality output within a short span of time. We provide both customized (clients' specific) and syndicate reports. Our repository of syndicate reports is diverse across all the categories and sub-categories across domains. Our customized solutions are tailored to meet the clients' requirements whether they are looking to expand or planning to launch a new product in the global market.

Contact Us

Avinash D
Head of Business Development
Phone: +1-315-215-1633
Email: sales@thebrainyinsights.com
Web: http://www.thebrainyinsights.com

Visit link:

Hyperautomation Market to Grow at CAGR of 16.5% through 2032 ... - GlobeNewswire


UP Board modernises computer learning in schools, introduces basics of AI, drone technology – Organiser

Posted: at 5:14 pm

Now, students from government schools in Uttar Pradesh will study and read about e-governance, artificial intelligence, cryptocurrency, drone technology, and information technology (IT) advancements.

According to the board secretary Dibyakant Shukla, the Prayagraj-headquartered Uttar Pradesh Madhyamik Shiksha Board has updated the syllabus in accordance with the National Education Policy 2020 for classes 9 to 12 and uploaded it on its official website for the convenience of students. The changes are specifically made in the curriculum for computer learning, which is taught in 28,000 schools of the UP board.

The syllabus is revised with the guidance and approval of subject experts. It's a significant change as it doesn't follow the current course prescribed by the National Council of Educational Research and Training (NCERT). The experts have replaced traditional computer programming languages such as C++ and HTML with Python and Java for class 11 and 12 students. This decision was made because the HTML and C++ languages are not widely used these days; instead, Core Java, Robotics and Drone Technology are introduced in the class 12 syllabus. The class 11 students will study the Internet of Things (IoT), artificial intelligence, blockchain technology, augmented and virtual reality, 3-D printing and cloud computing.

Apart from HTML and C++, the board has also removed chapters on computer generations, history, and types of computers because of their irrelevance. The class 10 students will study ways to avoid hacking, phishing and cyber fraud. They will also be taught about artificial intelligence, drone technology and cyber security. Students will also study e-governance as part of their curriculum.

Now class 9 students will be taught programming techniques, computer communication and networking, which class 10 students earlier studied.

While talking about the recent changes in the syllabus, Biswanath Mishra said, "UP Board has made important changes in the syllabus of computer as a subject for students of classes 9 to 12. Students will now be taught modern topics like cryptocurrency, drone technology, artificial intelligence, hacking, phishing and cloud computing. This will prepare them as per the requirement of modern times." He teaches computers at Shiv Charan Das Kanhaiya Lal Inter College, Attarsuiya, Prayagraj.

The rest is here:

UP Board modernises computer learning in schools, introduces basics of AI, drone technology - Organiser


Banking on Thousands of Microservices – InfoQ.com

Posted: at 5:14 pm

Key Takeaways

In this article, I aim to share some of the practical lessons we have learned while constructing our architecture at Monzo. We will delve into both our successful endeavors and our unfortunate mishaps.

We will discuss the intricacies involved in scaling our systems and developing appropriate tools, enabling engineers to concentrate on delivering the features that our customers crave.

Our objective at Monzo is to democratize access to financial services. With a customer base of 7 million, we understand the importance of streamlining our processes and we have several payment integrations to maintain.

Some of these integrations still rely on FTP file transfers, many with distinct standards, rules, and criteria.

We continuously iterate on these systems to ensure that we can roll out new features to our customers without exposing the underlying complexities and restricting our product offerings.

In September 2022, we became direct participants in the Bacs scheme, which facilitates direct debits and credits in the UK.

Monzo had been integrated with Bacs since 2017, but through a partner who handled the integration on our behalf.

Last year we built the integration directly over the SWIFT network, and we successfully rolled it out to our customers with no disruption.

This example of seamless integration will be relevant throughout this article.

A pivotal decision was to build all our infrastructure and services on top of AWS, which was unprecedented in the financial services industry at the time. While the Financial Conduct Authority was still issuing initial guidance on cloud computing and outsourcing, we were among the first companies to deploy on the cloud. We have a few data centers for payment scheme integration, but our core platform runs on the services we build on top of AWS with minimal computing for message interfacing.

With AWS, we had the necessary infrastructure to run a bank, but we also needed modern software. While pre-built solutions exist, most rely on processing everything on-premise. Monzo aimed to be a modern bank, unburdened by legacy technology, designed to run in the cloud.

The decision to use microservices was made early on. To build a reliable banking technology, the company needed a dependable system to store money. Initially, services were created to handle the banking ledger, signups, accounts, authentication, and authorization. These services are context-bound and manage their own data. The company used static code generation to marshal data between services, which makes it easier to establish a solid API and semantic contract between entities and how they behave.

Separating entities between different database instances is also easier with this approach. For example, the transaction model has a unique account entity but all the other information lives within the account service. The account service is called using a Remote Procedure Call (RPC) to get full account information.
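
To make the shape of that contract concrete, here is a minimal, hypothetical sketch of what a statically generated client call from one Go service to an account service could look like. The type, interface, and function names below are illustrative assumptions, not Monzo's actual generated code, and the in-memory fake stands in for whatever transport the platform routes the RPC over.

```go
// Hypothetical sketch of a generated request/response contract and client
// for an "account" service; names and shapes are illustrative assumptions.
package main

import (
	"context"
	"fmt"
)

// Generated request and response types define the semantic contract
// between services and handle marshalling under the hood.
type AccountReadRequest struct {
	AccountID string
}

type AccountReadResponse struct {
	AccountID string
	OwnerID   string
	Currency  string
}

// AccountClient stands in for a generated RPC client; the transport
// (RabbitMQ in the early days, HTTP via the mesh later) is hidden from callers.
type AccountClient interface {
	Read(ctx context.Context, req *AccountReadRequest) (*AccountReadResponse, error)
}

// EnrichTransaction looks up full account information for a transaction,
// which only stores the account ID locally.
func EnrichTransaction(ctx context.Context, accounts AccountClient, accountID string) error {
	resp, err := accounts.Read(ctx, &AccountReadRequest{AccountID: accountID})
	if err != nil {
		return fmt.Errorf("reading account %s: %w", accountID, err)
	}
	fmt.Printf("transaction belongs to account %s owned by %s (%s)\n", resp.AccountID, resp.OwnerID, resp.Currency)
	return nil
}

// fakeAccounts is an in-memory stand-in so the sketch runs without a real service.
type fakeAccounts struct{}

func (fakeAccounts) Read(_ context.Context, req *AccountReadRequest) (*AccountReadResponse, error) {
	return &AccountReadResponse{AccountID: req.AccountID, OwnerID: "user_42", Currency: "GBP"}, nil
}

func main() {
	if err := EnrichTransaction(context.Background(), fakeAccounts{}, "acc_0000012345"); err != nil {
		panic(err)
	}
}
```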

During the early days of Monzo, before the advent of service meshes, RPC was used over RabbitMQ, which was responsible for load balancing and deliverability of messages, with a request queue and a reply queue.


Figure 1: RabbitMQ in Monzo's early days

Today, Monzo uses HTTP requests: when a customer makes a payment with their card, multiple services get involved in real-time to decide whether the payment should be accepted or declined. These services come from different teams, such as the payments team, the financial crime domain team, and the ledger team.


Figure 2: A customer paying for a product with a card

Monzo doesn't want to build separate account and ledger abstractions for each payment scheme, so many of the services and abstractions need to be agnostic and able to scale independently to handle different payment integrations.

We made the decision early on to use Cassandra as our main database for services, with each service operating under its own keyspace. This strict isolation between keyspaces meant that a service could not directly read data from another service.


Figure 3: Cassandra at Monzo

Cassandra is an open-source NoSQL database that distributes data across multiple nodes based on partitioning and replication, allowing for dynamic growth and shrinking of the cluster. It uses timestamps and quorum-based reads to provide stronger consistency, making it an eventually consistent system with last-write wins semantics.

Monzo set a replication factor of 3 for the account keyspace and defined a query with a local quorum to reach out to the three nodes owning the data and return when the majority of nodes agreed on the data. This approach allowed for a more powerful and scalable database, with fewer issues and better consistency.
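
As a rough illustration of such a local-quorum read, here is a minimal sketch using the open-source gocql driver; the driver choice, node addresses, keyspace, table, and column names are assumptions for the example rather than Monzo's internal library or schema.

```go
// Minimal sketch of a LOCAL_QUORUM read against an "account" keyspace using
// the open-source gocql driver. Hosts, keyspace, table, and columns are
// illustrative assumptions, not Monzo's actual setup.
package main

import (
	"fmt"
	"log"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("cassandra-1", "cassandra-2", "cassandra-3")
	cluster.Keyspace = "account"
	// LOCAL_QUORUM: return once a majority of the three replicas agree,
	// trading a little latency for stronger read consistency.
	cluster.Consistency = gocql.LocalQuorum

	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatalf("connecting to cassandra: %v", err)
	}
	defer session.Close()

	var ownerID, currency string
	if err := session.Query(
		`SELECT owner_id, currency FROM accounts WHERE account_id = ?`,
		"acc_0000012345",
	).Scan(&ownerID, &currency); err != nil {
		log.Fatalf("reading account: %v", err)
	}
	fmt.Println("owner:", ownerID, "currency:", currency)
}
```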

In order to distribute data evenly across nodes and prevent hot partitions, it's important to choose a good partitioning key for your data. However, finding the right partitioning key can be challenging as you need to balance fast access with avoiding duplication of data across different tables. Cassandra is well-suited for this task, as it allows for efficient and inexpensive data writing.

Iterating over the entire dataset in Cassandra can be expensive and transactions are also lacking. To work around these limitations, engineers must be trained to model data differently and adopt patterns like canonical and index tables: data is written in reverse order to these tables, first to the index tables, and then to the canonical table, ensuring that the writes are fully complete.

For example, when adding a point of interest to a hotel, the data would first be written to the pois_by_hotel table, then to the hotels_by_poi table, and finally to the hotels table as the canonical table.


Figure 4: Hotel example, with the hard-to-read point of interests table
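
A minimal sketch of that write ordering, again assuming the gocql driver and an illustrative schema based on the hotel example (including a poi_ids list column on the canonical table), might look like this:

```go
// Sketch of the index-tables-first, canonical-table-last write pattern from
// the hotel example. The schema (including a list<text> poi_ids column) and
// error handling are simplifying assumptions.
package main

import (
	"log"

	"github.com/gocql/gocql"
)

func addPointOfInterest(session *gocql.Session, hotelID, poiID, poiName string) error {
	// 1. Write the index tables first...
	if err := session.Query(
		`INSERT INTO pois_by_hotel (hotel_id, poi_id, poi_name) VALUES (?, ?, ?)`,
		hotelID, poiID, poiName,
	).Exec(); err != nil {
		return err
	}
	if err := session.Query(
		`INSERT INTO hotels_by_poi (poi_id, hotel_id) VALUES (?, ?)`,
		poiID, hotelID,
	).Exec(); err != nil {
		return err
	}
	// 2. ...and only then the canonical table, so a row in `hotels` implies
	// the index entries pointing at it are already in place.
	return session.Query(
		`UPDATE hotels SET poi_ids = poi_ids + ? WHERE hotel_id = ?`,
		[]string{poiID}, hotelID,
	).Exec()
}

func main() {
	cluster := gocql.NewCluster("cassandra-1")
	cluster.Keyspace = "hotel"
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatalf("connecting: %v", err)
	}
	defer session.Close()

	if err := addPointOfInterest(session, "hotel_123", "poi_456", "Science Museum"); err != nil {
		log.Fatalf("adding point of interest: %v", err)
	}
}
```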

Although scalability is beneficial, it also brings complexity and requires learning how to write data reliably. To mitigate this, we provide abstractions and autogenerated code for our engineers. To ensure highly available services and data storage, we have utilized Kubernetes since 2016. Although it was still in its early releases, we saw its potential as an open-source orchestrator for application development and operations. We had to become proficient in operating Kubernetes, as managed offerings and comprehensive documentation were unavailable at the time, but our expertise in Kubernetes has since paid off immensely.

In mid-2016, the decision was made to switch to HTTP and use Linkerd for service discovery and routing. This improved load balancing and resiliency properties, especially in the event of a slow or unreliable service instance.

However, there were some problems, such as the outage experienced in 2017 when an interaction between Kubernetes and etcd caused service discovery to fail, leaving no healthy endpoints. This is an example of teething problems that arise with emerging and maturing technology. There are many stories of similar issues on k8s.af, a valuable resource for teams running Kubernetes at scale. Rather than seeing these outages as reasons to avoid Kubernetes, they should be viewed as learning opportunities.

We initially made tech choices for a small team, but later scaled to 300 engineers, 2500 microservices, and hundreds of daily deployments. To manage that, we have separate services and data boundaries and our platform team provides infrastructure and best practices embedded in core abstractions, letting engineers focus on business logic.


Figure 5: Shared Core Library Layer

We use uniform templates and shared libraries for data marshaling, HTTP servers, and metrics, providing logging, and tracing by default.

Monzo uses various open-source tools for their observability stacks such as Prometheus, Grafana, OpenTelemetry, and Elasticsearch. We heavily invest in collecting telemetry data from our services and infrastructure, with over 25 million metric samples and hundreds of thousands of spans being scraped at any one point. Every new service that comes online immediately generates thousands of metrics, which engineers can view on templated dashboards. These dashboards also feed into automated alerts, which are routed to the appropriate team.
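
As a small illustration of the kind of default instrumentation a shared server library can provide, here is a sketch using the open-source Prometheus Go client (prometheus/client_golang); the metric name, labels, and handler are assumptions for the example, not Monzo's actual library.

```go
// Sketch of default per-handler metrics a shared server library could
// register, using the open-source Prometheus Go client. Metric and label
// names are illustrative assumptions.
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var requestDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name: "service_request_duration_seconds",
		Help: "Latency of handled requests, labelled by handler and status.",
	},
	[]string{"handler", "status"},
)

// instrument wraps a handler so every service gets latency metrics for free.
func instrument(name string, h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		h(w, r)
		// Real middleware would capture the response code; "ok" is a placeholder.
		requestDuration.WithLabelValues(name, "ok").Observe(time.Since(start).Seconds())
	}
}

func main() {
	prometheus.MustRegister(requestDuration)
	http.HandleFunc("/account/read", instrument("account.read", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"account_id":"acc_0000012345"}`))
	}))
	// Every service exposes /metrics so Prometheus can scrape it and feed the
	// templated dashboards and alerts described above.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```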

For example, the company used telemetry data to optimize the performance of the new customer feature Get Paid Early. When the new option caused a spike in load, we had issues with service dependencies becoming part of the hot path and not being provisioned to handle the load. We couldn't statically encode this information because it continuously shifted, and autoscaling wasn't reliable. Instead, we used Prometheus and tracing data to dynamically analyze the services involved in the hot path and scale them appropriately. Thanks to the use of telemetry data, we reduced the human error rate and made the feature self-sufficient.

Our company aims to simplify the interaction of engineers with platform infrastructure by abstracting it away from them. We have two reasons for this: engineers should not need to have a deep understanding of Kubernetes and we want to offer a set of opinionated features that we actively support and have a strong grasp on.

Since Kubernetes has a vast range of functionalities, it can be implemented in various ways. Our goal is to provide a higher level of abstraction that can ease the workload for application engineering teams, and minimize our personnel cost in running the platform. Engineers are not required to work with Kubernetes YAML.

If an engineer needs to implement a change, we provide tools that will check the accuracy of their modifications, construct all relevant Docker images in a clean environment, generate all Kubernetes manifests, and deploy everything.


Figure 6: How an engineer deploys a change

We are currently undertaking a major project to move our Kubernetes infrastructure from our self-hosted platform to Amazon EKS, and this transition has also been made seamless by our deployment pipeline.

If you're interested in learning more about our deployment approach, code generation, and our service catalog, I gave a talk at QCon London 2022 where I discussed the tools we have developed, as well as our philosophy towards the developer experience.

The team recognizes that distributed systems are prone to failure and that it is important to acknowledge and accept it. In the case of a write operation, issues may occur and there may be uncertainty as to whether the data has been successfully written.


Figure 7: Handling failures on Cassandra

This can result in inconsistencies when reading the data from different nodes, which can be problematic for a banking service that requires consistency. To address this issue, the team has been using a separate service running continuously in the background that is responsible for detecting and resolving inconsistent data states. This service can either flag the issue for further investigation or even automate the correction process. Alternatively, validation checks can be run when there is a user-facing request, but we noticed that this can lead to delays.


Figure 8: Kafka and the coherence service

Coherence services are beneficial for the communication between infrastructure and services: Monzo uses Kafka clusters and Sarama-based libraries to interact with Kafka. To ensure confidence in updates to these libraries and Sarama, coherence services are continuously run in both staging and production environments. These services utilize the libraries like any other microservice and can identify problems caused by accidental changes to the library or Kafka configuration before they affect production systems.
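
As a highly simplified sketch of that idea, the snippet below performs a produce-and-consume round trip with the open-source Sarama client; the topic name, broker address, and single-partition layout are simplifying assumptions, and a real coherence service would run this continuously and alert on failure rather than exiting.

```go
// Highly simplified sketch of a Kafka coherence check with the open-source
// Sarama client: produce a message, then confirm it can be read back from
// the same partition and offset. Topic and broker are assumptions.
package main

import (
	"log"

	"github.com/IBM/sarama"
)

func main() {
	brokers := []string{"kafka-1:9092"}

	cfg := sarama.NewConfig()
	cfg.Producer.Return.Successes = true // required by the sync producer

	producer, err := sarama.NewSyncProducer(brokers, cfg)
	if err != nil {
		log.Fatalf("creating producer: %v", err)
	}
	defer producer.Close()

	partition, offset, err := producer.SendMessage(&sarama.ProducerMessage{
		Topic: "coherence.check",
		Value: sarama.StringEncoder("ping"),
	})
	if err != nil {
		log.Fatalf("produce failed: %v", err)
	}

	consumer, err := sarama.NewConsumer(brokers, cfg)
	if err != nil {
		log.Fatalf("creating consumer: %v", err)
	}
	defer consumer.Close()

	pc, err := consumer.ConsumePartition("coherence.check", partition, offset)
	if err != nil {
		log.Fatalf("consume failed: %v", err)
	}
	defer pc.Close()

	msg := <-pc.Messages()
	if string(msg.Value) != "ping" {
		// In a real coherence service this would page the owning team.
		log.Fatalf("unexpected payload: %q", msg.Value)
	}
	log.Println("round trip OK: library and cluster configuration look healthy")
}
```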

Investment in systems and tooling is necessary for engineers to develop and run systems efficiently: the concepts of uniformity and "paved road" ensure consistency and familiarity, preventing the development of unmaintainable services with different designs.

From day one, Monzo focuses on getting new engineers onto the "paved road" by providing a documented process for writing and deploying code and a support structure for asking questions. The onboarding process is defined to establish long-lasting behaviors, ideas, and concepts, as it is difficult to change bad habits later on. Monzo continuously invests in onboarding, even having a "legacy patterns" section to highlight patterns to avoid in newer services.

While automated code modification tools are used for smaller changes, larger changes may require significant human refactoring to conform to new patterns, which takes time to implement across services. To prevent unwanted patterns or behaviors, Monzo uses static analysis checks to identify issues before they are shipped. Before making these checks mandatory, we ensure that the existing codebase is cleaned up to avoid engineers being tripped up by failing checks that are not related to their modifications. This approach ensures a high-quality signal, rather than engineers ignoring the checks. The high friction to bypass these checks is intentional to ensure that the correct behavior is the path of least resistance.

In April 2018, TSB, a high-street bank in the UK, underwent a problematic migration project to move customers to a new banking platform. This resulted in customers being unable to access their money for an extended period, which led to TSB receiving a large fine, paying nearly £33 million in compensation to customers, and suffering reputational damage. The FCA report on the incident examines both the technological and organizational aspects of the problem, including overly ambitious planning schedules, inadequate testing, and the challenge of balancing development speed with quality. While it may be tempting to solely blame technology for issues, the report emphasizes the importance of examining organizational factors that may have contributed to the outage.

Reflecting on past incidents and projects is highly beneficial in improving operations: Monzo experienced an incident in July 2019, when a configuration error in Cassandra during a scale-up operation forced a stop to all writes and reads to the cluster. This event set off a chain reaction of improvements spanning multiple years to enhance the operational capacity of the database systems. Since then, Monzo has invested in observability, deepening the understanding of Cassandra and other production systems, and we are more confident in all operational matters through runbooks and production practices.

Earlier I mentioned the early technological decisions made by Monzo and the understanding that it wouldn't be an easy ride: over the last seven years, we have had to experiment, build, and troubleshoot through many challenges, and this process continues. If an organization is not willing or able to provide the necessary investment and support for complex systems, this must be taken into consideration when making architectural and technological choices: choosing the latest technology or buzzword without adequate investment is likely to lead to failure. Instead, it is better to choose simpler, more established technology that has a higher chance of success. While some may consider this approach to be boring, it is ultimately a safer and more reliable option.

Teams are always improving tools and raising the level of abstraction. By standardizing on a small set of technological choices and continuously improving these tools and abstractions, engineers can focus on the business problem rather than the underlying infrastructure. It is important to be conscious when systems deviate from the standardized road.

While there's a lot of focus on infrastructure in organizations, such as infrastructure as code, observability, automation, and Terraform, one theme often overlooked is the bridge between infrastructure and software engineers. Engineers don't need to be experts in everything and core patterns can be abstracted away behind a well-defined, tested, documented, and bespoke interface. This approach saves time, promotes uniformity, and embraces best practices for the organization.

Showing different examples of incidents, we highlighted the importance of introspection: while many may have a technical root cause, it's essential to dig deeper and identify any organizational issues that may have contributed. Unfortunately, most post-mortems tend to focus heavily on technical details, neglecting the organizational component.

It's essential to consider the impact of organizational behaviors and incentives on the success or failure of technical architecture. Systems don't exist in isolation and monitoring, and rewarding the operational stability, speed, security, and reliability of the software you build and operate is critical to success.

See the rest here:

Banking on Thousands of Microservices - InfoQ.com


Cyber Security vs. Data Science Which Is the Right Career Path? – Analytics Insight

Posted: at 5:14 pm

Here is a comparison between two of the most in-demand fields: Cyber Security vs. Data Science

Today's IT-intensive environment has taught us two important lessons: we need solutions to transform tidal surges of data into something that organizations can utilize to make educated decisions, and we must safeguard that data and the networks on which it is stored.

As a result, we have the fields of data science and cyber security. So, which is the better job path? You won't get far if you approach the debate between cyber security and data science in terms of which field is more in demand. Both fields are in desperate need of a workforce.

Cyber security is the discipline of securing data, devices, and networks against unauthorized use or access while assuring and maintaining information availability, confidentiality, and integrity. A career in cybersecurity entails entering a thriving industry with more available positions than qualified applicants.

Data science combines domain knowledge, programming abilities, and mathematical and statistical knowledge to generate usable, relevant insights from massive amounts of unstructured data, often known as Big Data.

A career in data science includes carrying out data processing responsibilities; data scientists often use algorithms, processes, tools, scientific methods, techniques, and systems, and then apply the derived insights across multiple domains.

Data science and cyber security are inextricably linked, since the former demands the defences and protection that the latter supplies. To obtain their conclusions and ensure the security of the resultant processed information, data scientists require clean, uncompromised data. As a result, the field of data science looks to cyber security to help protect information in any form.

For someone interested in a career in one of the more intriguing and busy IT disciplines, cyber security and data science present fantastic chances. The career trajectories in both fields are comparable.

Experts in cyber security often begin their careers with a bachelor's degree in computer science, information technology, cyber security, or a related profession. Aspirants in the field of cyber security should also be proficient in fundamental subjects like programming, cloud computing, and network and system administration.

The prospective cyber security specialist joins a corporation as an entry-level employee after graduating. After a few years of work experience, it's time to apply for a senior position, which normally calls for a master's degree and certification in a variety of cybersecurity-related fields.

Cyber security experts choose career paths like security analyst, ethical hacker, chief information security officer, penetration tester, security architect, and IT security consultant.

Data science demands more formal education than cyber security. A master's or even a bachelor's degree isn't required for cybersecurity professionals, though having those credentials helps. A bachelor's degree in data science, computer science, or a similar branch of study is required for most data science professions. After a few years in an entry-level role, the ambitious data scientist should seek a master's degree in data science, reinforced by a few relevant certifications, and apply for a position as a senior data analyst.

Data science experts choose career paths like data engineer, marketing manager, data leader, product manager, and machine learning leader.

According to Glassdoor, the average yearly salary for cyber security specialists in the United States is US$94,794, whereas this figure is 110,597 in India.

In the field of data science, Indeed reports that US-based data scientists make an average salary of US$124,074 annually, while their Indian counterparts earn an average salary of US$830,319 annually.

Depending on demand, the hiring of certain individuals, and the location, these numbers frequently change.

Read the original post:

Cyber Security vs. Data Science Which Is the Right Career Path? - Analytics Insight


DIGITAL PROMISE: Amazon pledges further R30bn SA investment … – Daily Maverick

Posted: at 5:14 pm

Amazon's cloud service, Amazon Web Services (AWS), has announced plans to invest a further R30.4-billion in its cloud infrastructure in South Africa by 2029. It has already invested R15.6-billion in the country.

In a new economic impact study outlining Amazon's investment in its AWS Africa (Cape Town) region since 2018, the group estimates its total investment of R46-billion between 2018 and 2029 will add at least R80-billion in gross domestic product to the South African economy. It will also help to support about 5,700 full-time equivalent (FTE) jobs at local vendors each year.

The FTE jobs are supported across the data centre supply chain, such as telecommunications, non-residential construction, electricity generation, facilities maintenance and data centre operations.

AWS provides cloud computing, or on-demand delivery of IT resources over the internet, which allows customers to access computing power, data storage and other services with pay-as-you-go pricing, as opposed to the traditional contract-based IT model.

Many of South Africa's public sector institutions make use of AWS.

GovChat, SA's largest citizen-government engagement platform, provides a conversational interface that integrates voice and text into applications and provides a unified platform that citizens can use to connect with the government.

Wits University, SA's largest research university, has adopted a cloud-first approach to its IT strategy, using technology to enhance all its core processes.

Other AWS clients include Absa, Investec, Medscheme, MiX Telematics, Old Mutual Limited, Pick n Pay, Standard Bank, Pineapple and Travelstart.

Amazon is also steaming ahead with its retail marketplace in South Africa, with an expected launch towards the end of the year.

On 28 April 2023, Bloomberg reported that Amazon had warned that growth in its cloud computing business was continuing to cool.

AWS revenue rose 16% to $21.4-billion in the first quarter, as Amazon reported stronger-than-expected profits and sales in the period.

Last week, Amazon executives jolted investors by admitting that sales growth in the cloud computing unit had slowed. Some analysts have speculated that as companies seek to trim technology costs, AWS growth could sink to single digits, according to the report.

Amazon's chief financial officer, Brian Olsavsky, told reporters that AWS was less profitable now than it was a year ago, partly owing to discounts offered in exchange for longer-term contracts. BM/DM

Original post:

DIGITAL PROMISE: Amazon pledges further R30bn SA investment ... - Daily Maverick

Posted in Cloud Computing

There’s a Secret Way to Get to Absolute Zero. Scientists Just Found It. – Popular Mechanics

Posted: May 6, 2023 at 3:24 pm

We're not getting to absolute zero anytime soon. Absolute zero is the temperature at which all energy in an object drops to zero, and our inability to reach it is enshrined in the third law of thermodynamics.

One version of the law states that in order to reach absolute zero, we'd have to have either infinite time or infinite energy. That's not happening any time soon, so out the window go our hopes of achieving a total lack of energy.

Or do they?

A team from the Vienna University of Technology in Austria wanted to see if there was an alternate route to absolute zero. And they found one in an interesting place: quantum computing.

The researchers set out with the intent of formulating a version of the third law of thermodynamics that jibes cleanly with quantum physics, because the regular version that so many physicists know and love doesn't quite fit nicely into the quantum world.

Disagreements between classical and quantum physics happen all the time; it's why so much time and effort goes into trying to find a unified theory of physics that encompasses both sets of rules. That doesn't mean classical physics is wrong; it just means it's limited in ways that we didn't expect when we first were figuring out how the universe works.

The third law of thermodynamics, despite how fundamental it is, is one of those surprisingly limited aspects of classical physics. In saying that we can't reach absolute zero without infinite time or infinite energy, it doesn't fully take a fundamental aspect of quantum physics, information theory, into account.

A principle of information theory called the Landauer principle states that there is a minimum, and finite, amount of energy that it takes to delete a piece of information. The catch here is that deleting information from a particle is the exact same thing as taking that particle to absolute zero. So, how is it possible that it takes a finite amount of energy to delete information and an infinite amount of energy to reach absolute zero, if those two things are the same?
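For a concrete sense of scale, the textbook form of the Landauer bound is k_B × T × ln(2) joules per erased bit, where k_B is the Boltzmann constant and T is the temperature. The short calculation below simply evaluates that standard formula at a few temperatures; it is a general illustration, not a detail taken from the Vienna team's paper.

import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_limit(temperature_k: float) -> float:
    """Minimum energy, in joules, required to erase one bit at temperature T."""
    return K_B * temperature_k * math.log(2)

# Room temperature, a liquid-helium cryostat, and a dilution-refrigerator regime.
for temperature in (300.0, 4.2, 0.01):
    print(f"T = {temperature:>6} K  ->  E_min = {landauer_limit(temperature):.3e} J per bit")

The cost shrinks as the temperature drops but never reaches zero for any T above absolute zero, which is exactly the tension with the third law that the passage describes.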

It's not a total paradox: you could take an infinitely long time. But that doesn't tell the whole story. The team discovered a key parameter that would get it done a whole lot faster: complexity. It turns out that if you have complete, infinite control over an infinitely complex system, you can fully delete information from a quantum particle without the need for infinite energy or infinite time.

Now, is infinite complexity with infinite control more achievable than infinite time or infinite energy? No. We're still dealing with infinities here.

But this discovery does emphasize known limitations in the functionality of quantum computers. Namely, once we start saving information on those things, we're never going to be able to fully scrub the information from the quantum bits (known as qubits) making up our information storage centers.

According to experts, that's not going to present a practical issue. Machines that operate absolutely perfectly already don't exist, so there's no reason to hold quantum computers to an unreachable standard. But it does teach us a bit more about exactly what building and operating these futuristic machines is going to take.

When it comes to quantum, we're just getting started.


Visit link:

There's a Secret Way to Get to Absolute Zero. Scientists Just Found It. - Popular Mechanics

Posted in Quantum Physics

Photon Precision: How Quantum Physicists Shattered the Bounds of Sensitivity – SciTechDaily

Posted: at 3:24 pm

A team at the University of Portsmouth has achieved unprecedented measurement precision through a method involving quantum interference and frequency-resolving sampling measurements. This breakthrough could enhance imaging of nanostructures and biological samples, and improve quantum-enhanced estimation in optical networks.

A team of researchers has demonstrated the ultimate sensitivity allowed by quantum physics in measuring the time delay between two photons.

By measuring their interference at a beam splitter through frequency-resolving sampling measurements, the team has shown that unprecedented precision can be reached with current technology, with an estimation error that can be reduced further by decreasing the photonic temporal bandwidth.

This breakthrough has significant implications for a range of applications, including more feasible imaging of nanostructures, such as biological samples and nanomaterial surfaces, as well as quantum-enhanced estimation based on frequency-resolved boson sampling in optical networks.

The research was conducted by a team of scientists at the University of Portsmouth, led by Dr. Vincenzo Tamma, Director of the University's Quantum Science and Technology Hub.

Dr. Tamma said: "Our technique exploits the quantum interference occurring when two single photons impinging on the two faces of a beam splitter are indistinguishable when measured at the beam-splitter output channels. If, before impinging on the beam splitter, one photon is delayed in time with respect to the other by going through or being reflected by the sample, one can retrieve in real time the value of such a delay, and therefore the structure of the sample, by probing the quantum interference of the photons at the output of the beam splitter.

"We showed that the best precision in the measurement of the time delay is achieved when resolving such two-photon interference with sampling measurements of the two photons in their frequencies. Indeed, this ensures that the two photons remain completely indistinguishable at the detectors, irrespective of their delay, at any value of their sampled frequencies detected at the output."

The team proposed the use of a two-photon interferometer to measure the interference of two photons at a beam splitter. They then introduced a technique based on frequency-resolving sampling measurements to estimate the time delay between the two photons with the best possible precision allowed by nature, with sensitivity that increases as the photonic temporal bandwidth decreases.
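The sketch below is a deliberately simplified numerical illustration of this general idea, not a reproduction of the estimator in the Physical Review Applied paper. It uses the textbook two-photon interference result for Gaussian-spectrum photons, in which a pair detected with frequencies ω1 and ω2 exits opposite beam-splitter ports with probability (1 − cos((ω1 − ω2)·τ))/2, and then recovers the delay τ by maximum likelihood over simulated frequency-resolved outcomes. All parameter values are arbitrary choices for the toy model.

import numpy as np

rng = np.random.default_rng(7)

# Toy parameters (arbitrary units): spectral bandwidth and the delay to estimate.
SIGMA = 1.0        # standard deviation of each photon's frequency distribution
TRUE_DELAY = 0.3   # delay imposed on one photon
N_PAIRS = 20_000   # number of simulated photon pairs

# Each photon's detected frequency is sampled from a Gaussian spectrum.
omega1 = rng.normal(0.0, SIGMA, N_PAIRS)
omega2 = rng.normal(0.0, SIGMA, N_PAIRS)
delta = omega1 - omega2

# Textbook frequency-resolved Hong-Ou-Mandel statistics: given the detected
# frequencies, the pair exits through different ports ("coincidence") with
# probability (1 - cos(delta * tau)) / 2, and bunches otherwise.
p_coinc = 0.5 * (1.0 - np.cos(delta * TRUE_DELAY))
coincided = rng.random(N_PAIRS) < p_coinc

def log_likelihood(tau: float) -> float:
    """Log-likelihood of the observed port outcomes, using the recorded frequencies."""
    p = 0.5 * (1.0 - np.cos(delta * tau))
    p = np.clip(p, 1e-12, 1.0 - 1e-12)  # avoid log(0)
    return float(np.sum(np.where(coincided, np.log(p), np.log(1.0 - p))))

# Grid-search maximum-likelihood estimate of |tau| (the sign is not observable here).
taus = np.linspace(0.0, 1.0, 2001)
estimate = taus[np.argmax([log_likelihood(t) for t in taus])]
print(f"true delay = {TRUE_DELAY},  ML estimate = {estimate:.3f}")

Keeping the per-event frequency difference in the likelihood, rather than only counting coincidences, is the rough intuition behind why frequency-resolved sampling can extract more information about the delay than plain coincidence counting.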

Dr. Tamma added: "Our technique overcomes the limitations of previous two-photon interference techniques, which do not retrieve the information on the photonic frequencies in the measurement process.

"It allows us to employ photons of the shortest duration experimentally possible without affecting the distinguishability of the time-delayed photons at the detectors, thereby maximizing the precision of the delay estimation with a remarkable reduction in the number of required pairs of photons. This allows a relatively fast and efficient characterization of the given sample, paving the way to applications in biology and nanoengineering."

The applications of this research are significant. It has the potential to markedly improve the imaging of nanostructures, including biological samples and nanomaterial surfaces. Additionally, it could lead to quantum-enhanced estimation based on frequency-resolved boson sampling in optical networks.

The findings of the study are published in the journal Physical Review Applied.

Reference: "Ultimate Quantum Sensitivity in the Estimation of the Delay between two Interfering Photons through Frequency-Resolving Sampling" by Danilo Triggiani, Giorgos Psaroudis and Vincenzo Tamma, 24 April 2023, Physical Review Applied. DOI: 10.1103/PhysRevApplied.19.044068

More:

Photon Precision: How Quantum Physicists Shattered the Bounds of Sensitivity - SciTechDaily

Posted in Quantum Physics