You likely can’t go a day without seeing some reference to the cloud. We use cloud-based services daily, at work, in our homes, and at the stores where we shop. But developing an understanding of how this technology works for your business is crucial to ensuring you’re making the right decisions for your organization, both now and for the long term.
The problem when learning about the cloud is often that there is so much information available, and it’s typically presented in a “tech-heavy” way that is difficult for businesses to apply to their own situation. Or, even worse, it’s oversimplified and the value is lost. PiF recently hosted a webinar reviewing the basics of the cloud, breaking the important information into digestible bites, and we cover the same content in this post. The most important thing to understand about the cloud is that adopting it isn’t like flipping a switch. There’s a process to implementing cloud hosting, but with the right knowledge, tools, and partner, you can succeed in bringing all the data you need into the cloud.
Defining Cloud Computing
Before we dive into the technical details, let’s start at the beginning and define exactly what the cloud is. Cloud computing is the practice of using a network of remote servers hosted on the Internet to store, manage, and process data, rather than a local server. The term is used broadly to describe data centers that are available to users over the internet. A common consumer-facing example is Apple’s iCloud, which many iPhone users rely on to store photos and data from all of their Apple devices in one place.
There are many public cloud service providers; some of the top ones include Amazon Web Services (AWS) and Microsoft Azure. The difference between public and private cloud computing is that a private cloud runs on servers dedicated to a single organization, while with the public cloud you drive shared infrastructure through APIs in an on-demand marketplace.
We’ll stick to the public cloud and AWS for this conversation. Specifically for Cloud Hosting with Amazon Web Services (AWS), there are features that make this solution desirable to business owners and IT professionals. AWS gives you the ability to work from any location with an internet connection and provides access to the same information, including software programs, data, and documents, that you’d typically access from your office desktop. Though this data is accessible from anywhere in the world, it maintains the same security and identity management you’d have on your On-Premise server. Not only does AWS provide remote access to your data, over the long term it can also deliver significant savings by lowering your IT infrastructure support costs, also known as total cost of ownership (TCO).
Cloud Buzzwords you need to know
There are plenty of buzzwords that are associated with the cloud. It’s crucial to know exactly what they mean and how they play into your entire cloud solution.
IT Infrastructure
All hardware, software, networks, and data center facilities used to operate, monitor, manage, and support information technology services.
Applications
Software programs for end users, including email, Word, Excel, and functional or industry-specific programs for Accounting, HR, Manufacturing, etc.
Network
Multiple linked computers and servers that allow electronic communication, typically connected through cables or Wi-Fi.
Storage
Components and recording media used to retain digital data.
Backup
The computing function of making copies of data to enable recovery from data loss.
Encryption
Translates data into another form or code so that only people with access to a secret key or password can read it. Data encryption is used to protect the confidentiality of digital data as it is stored on computer systems and transmitted over the internet.
Firewall
A network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. A firewall establishes a barrier between a trusted internal network and an untrusted external network, i.e., the Internet.
Multi-Tenancy
When a software application provides every tenant a dedicated share of the instance, including its data, configuration, and user management.
Instance
The specific realization of any object. The object may be varied in a number of ways, and each realized variation of that object is an instance. Every time a program runs, it is an instance of that program.
Total Cost of Ownership (TCO)
The total cost of using and maintaining IT investments over time. A combination of direct costs (hardware, software, and operations) and indirect costs (administrative, end-user operations, and downtime).
Why now is the right time to move to the cloud
They say that timing is everything, and while just about any time is a good time to decide to move to the cloud, right now you can see an immediate impact once you make the move.
With many businesses following stay-at-home orders and utilizing a primarily remote workforce, migrating your data to the cloud can provide much-needed access for remote employees. Employees working from home need to access their data just like they are in the office and maintain as much business continuity as possible during this time, adjusting to the “new normal.”
An additional part of this is developing an understanding of Disaster Avoidance vs. Disaster Recovery. Many businesses take a Disaster Recovery (DR) posture, assuming they have a good system in place and that nothing bad could ever happen to their organization. That assumption is risky and could destroy an organization that doesn’t have a sound DR plan. Whether data is negatively impacted by accident or by a business interruption such as a fire, flood, hurricane, Nor’Easter, or other event, a cloud storage system with automated backups can ensure data isn’t gone for good.
Understanding the concept of Failover is also important to your Disaster Avoidance planning. Failover provides the ability to switch to a standby server, system, or network when the server in use terminates without warning (i.e., a disaster). Tied to this is fault tolerance, which lets a system recover immediately and continue operating through a given failure; both are a major part of the cloud hosting process. For AWS, this means replicating data and documents at a data center out of region, where a machine image can be created and recovered within minutes should a disaster occur.
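The failover pattern described above can be sketched in a few lines of Python. This is an illustrative sketch, not AWS-specific code; the server names and the health check are hypothetical stand-ins for whatever your environment uses.

```python
def first_healthy(servers, is_healthy):
    """Return the first server in preference order that passes a health check.

    `servers` lists the primary first, then standbys; failover simply means
    falling through to the next entry when the preferred one is down.
    Raises RuntimeError if every server is unavailable (a total outage).
    """
    for server in servers:
        if is_healthy(server):
            return server
    raise RuntimeError("all servers unavailable")

# Example: the primary is down, so traffic fails over to the out-of-region standby.
healthy = {"standby-oregon"}                  # hypothetical set of servers that are up
servers = ["primary-ohio", "standby-oregon"]  # preference order: primary first
print(first_healthy(servers, lambda s: s in healthy))  # standby-oregon
```

In a real deployment the health check would be a network probe and the switchover would update DNS or a load balancer; the point here is only the decision logic.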
A growing trend impacting small businesses and mid-market organizations is ransomware. We’ve spoken to dozens of business owners whose On-Premise servers were hit by ransomware in the past 18 months alone, and those numbers will continue to grow among organizations that don’t invest in a more secure way to store their data.
Where does the data go?
Organizations tend to feel that their On-Premise servers are secure, but when it comes to the cloud, they are often unsure exactly where their data is stored. AWS is among the most secure hosting providers and relies on multiple Regions and Availability Zones to both store and back up data.
In PiF’s case, customer data is stored in the AWS Ohio and Virginia data centers because PiF’s customer base is mostly located in the Northeast. The failover location for PiF customers is purposely in Oregon, to ensure region separation.
There are a variety of products used to host your organization’s data in AWS, including Elastic Compute Cloud (EC2), Elastic Block Store (EBS), and Simple Storage Service (S3). EC2 provides Windows or Linux virtual servers in AWS. EBS holds “in process” documents and installed applications; this block storage is slightly more expensive. S3 holds image and document files and is considered “object only” storage; it’s not built to run full applications. Overall, you have many storage options for your data, all of which can be customized based on your needs.
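As a rough rule of thumb, the EC2/EBS/S3 split above can be captured in a small decision helper. The workload categories and mapping below are our simplification for illustration, not an AWS API.

```python
def storage_for(workload):
    """Map a workload type to the AWS storage product discussed above.

    Block storage (EBS) backs running applications and "in process" files;
    object storage (S3) holds finished images and documents. The category
    names are hypothetical labels, not AWS terms.
    """
    rules = {
        "installed_application": "EBS",  # needs a block device attached to EC2
        "in_process_document": "EBS",
        "archived_document": "S3",       # object-only storage, cheaper at rest
        "image_file": "S3",
    }
    return rules[workload]

print(storage_for("image_file"))             # S3
print(storage_for("installed_application"))  # EBS
```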
Capacity Planning: why does it matter?
Your IT team relies on On-Premise capacity planning to determine the resources they’re going to need in the future. One challenge with this method is that On-Premise hardware purchases are typically made with “future” capacity in mind (4 to 5 years, the lifecycle of the server) rather than reflecting an organization’s current needs. What’s more surprising is that many data center servers are less than 50% utilized, meaning the investment is never fully realized.
A huge draw of AWS is that it completely eliminates the need to “over-spec” servers to handle unknown future demand; it’s flexible enough to handle increased or decreased demand. There may be an immediate need to increase application or storage capacity due to explosive growth, or, as we have seen with COVID-19, a need to reduce capacity just as quickly.
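The elasticity described here boils down to a simple control rule: add capacity when utilization runs hot, remove it when demand falls. The thresholds below are illustrative assumptions; real AWS Auto Scaling policies are configurable and include cooldowns.

```python
def scale_decision(utilization, servers, low=0.30, high=0.70):
    """Toy elastic-scaling rule.

    Scale out by one server when average utilization exceeds `high`,
    scale in when it drops below `low` (never below one server),
    otherwise hold steady. The thresholds are illustrative, not AWS defaults.
    """
    if utilization > high:
        return servers + 1  # scale out to absorb growth
    if utilization < low and servers > 1:
        return servers - 1  # scale in to cut cost
    return servers

print(scale_decision(0.85, servers=4))  # 5 (explosive growth)
print(scale_decision(0.10, servers=4))  # 3 (reduced demand)
```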
Beyond the shortfalls of traditional capacity planning, there are other challenges of On-Premise hardware that you can avoid by moving to AWS. Applying continuous software upgrades, patches, and bug fixes, and supporting legacy or End-of-Life hardware and software, can drain your IT team of valuable time and resources. Depending on your size and the scale of your IT infrastructure, you may be spending a lot on overhead costs such as power, cooling, and physical space, in some cases without realizing how much. Additionally, all of your On-Premise servers, network devices, and software need to be managed, either by internal IT or by a third-party IT company. Either can be costly, especially for small to medium-sized organizations.
Part of mitigating these costs is understanding Operating Expenditure (OpEx) versus Capital Expenditure (CapEx). Capital expenditure is incurred when a business acquires assets that could be beneficial beyond the current tax year, such as buying new equipment or upgrading an existing asset. Operational expenditure consists of the expenses a business incurs to keep day-to-day operations running smoothly.
It’s important to review your existing IT expenses and determine whether they are OpEx or CapEx. Part of what can influence that decision is the varying costs often associated with IT. In the context of AWS, the “Pay As You Go” model lets you pay for only what you need and provides on-demand, “elastic” response when more resources are required.
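The CapEx/OpEx difference is easiest to see with numbers. The figures below are hypothetical: a one-time server purchase amortized over its lifecycle versus a pay-as-you-go bill that tracks actual usage.

```python
# Hypothetical figures for illustration only.
CAPEX_SERVER = 12_000   # upfront server purchase (CapEx), $
LIFECYCLE_MONTHS = 60   # 5-year server lifecycle

monthly_usage = [150, 150, 220, 400, 180, 160]  # pay-as-you-go bills (OpEx), $

capex_per_month = CAPEX_SERVER / LIFECYCLE_MONTHS
opex_avg = sum(monthly_usage) / len(monthly_usage)

print(f"On-Premise: ${capex_per_month:.2f}/month, fixed regardless of usage")
print(f"Cloud:      ${opex_avg:.2f}/month on average, scaling with demand")
```

In this made-up case the averages are similar, but the cloud bill falls automatically in slow months, while the amortized server cost never does.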
Data Procurement and Consumption
AWS offers a few purchase models to meet customer-specific needs: On Demand, Reserved, and Spot.
On Demand is best for organizations that have unknown usage and need compute resources quickly without any long-term commitments.
Reserved is the best option if you can commit for at least 1 year. There are two types, Standard and Convertible. The reservation term lasts 1 or 3 years, with flexible payment options: all upfront, partial upfront, or no upfront.
Spot is utilized for extremely short-term compute resources that are priced per second at a steep discount and can be interrupted at a moment’s notice. An example of usage would be the PiF Docstar Cloud, which utilizes Spot instances to render images when executing conversions.
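To make the three models concrete, here is a back-of-the-envelope monthly cost comparison. The hourly rates are invented for illustration; actual AWS prices vary by region, instance type, and commitment.

```python
HOURS_PER_MONTH = 730  # average hours in a month

# Hypothetical hourly rates for one instance type (not real AWS prices).
rates = {
    "on_demand": 0.100,     # no commitment, pay as you go
    "reserved_1yr": 0.062,  # discounted for a 1-year commitment
    "spot": 0.030,          # deep discount, can be interrupted
}

def monthly_cost(model, hours=HOURS_PER_MONTH):
    """Cost of running one instance around the clock under a purchase model."""
    return rates[model] * hours

for model in rates:
    print(f"{model}: ${monthly_cost(model):.2f}/month")
```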
Total Cost of Ownership
Total Cost of Ownership (TCO) is the total cost of using and maintaining an IT investment over time. To calculate it accurately, you should include both Direct Costs and Indirect Costs. Direct Costs encompass things such as hardware, software, and operations, whereas Indirect Costs include administration, end-user operations, and downtime. A problem many organizations face is that TCO is often miscalculated, overlooked, or unbudgeted, which makes it far less useful.
A common flaw is that many organizations believe their direct costs end at the point of purchase. In reality, the cost of onsite IT infrastructure includes much more than hardware and licensing. Research shows that the base price of hardware typically represents less than 20% of its TCO, with technical support, maintenance, and labor accounting for the remaining 80%. This is due to the constant configuration and maintenance that successful IT infrastructure requires. Ongoing costs related to security measures, software updates, and hardware maintenance are unavoidable and often go uncounted.
So, what other costs are overlooked?
- Salary for IT personnel or Expense for Outsourced IT (or a portion thereof)
- Power and cooling
- Backup & Replication software & process for Disaster Recovery
- Software (MS SQL)
- Security Software
- Remote Access Software
Simplifying your IT infrastructure and management processes will increase efficiency and significantly lower your Total Cost of Ownership.
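Using the 20/80 rule of thumb above, you can sanity-check a hardware quote against its likely total cost. This is a rough heuristic for illustration, not a substitute for a full TCO analysis.

```python
def estimate_tco(hardware_price, hardware_share=0.20):
    """Estimate total cost of ownership from the hardware price alone.

    Applies the rule of thumb that hardware is roughly 20% of TCO, with
    support, maintenance, and labor making up the remaining 80%.
    """
    return hardware_price / hardware_share

# A $10,000 server implies roughly $50,000 of total cost over its life.
print(estimate_tco(10_000))  # 50000.0
```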
Why PiF and AWS?
There are plenty of cloud hosting providers out there, but PiF made an educated decision to partner with Amazon Web Services. One factor in this decision was their data centers in multiple regions, each with multiple Availability Zones, which let us best support our customers in the Northeast. AWS also excels at Security & Identity Management and at replication for high availability and Fault Tolerance, with 99.99999% reliability. With the largest cloud partner network in the world, it was an easy decision for PiF to make.
Coupled with Amazon Web Services’ robust offerings, our vast experience makes for a successful long-term partnership for your business. We have 24 years of software and managed services support experience and 10 dedicated AWS Consultants who will help your organization with the migration; they’ve already assisted the 200+ PiF customers who have migrated to the AWS cloud in the past 3 years.
We have cloud hosting application expertise with all of our solutions, as well as ERP/CRM cloud hosting expertise with systems such as Sage, Microsoft Dynamics NAV, and SYSPRO.
Our system relies on separate SQL Servers for scalability, is HIPAA compliant, and offers off-hours throttling and auto scaling, replication across regions and availability zones, and isolation using a Virtual Private Cloud (VPC).
The next steps
Ready to take the plunge and move to the cloud? Great! We’ve outlined the steps we go through to ensure every project goes smoothly.
- Consult with PiF on getting started
- Identify a first workload to start testing
- Focus on Dev/Test, Website and Non-Production servers as a first project
- PiF installs the AWS Application Discovery Service agents to collect statistics
- PiF provides a proposal on approximate costs and discusses TCO
After running the AWS Application Discovery Agent, we know your requirements for memory, disk, network, and CPU utilization, which helps us determine your approximate costs and estimate your TCO more accurately.
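Once utilization statistics are in hand, right-sizing is a matter of matching peak demand plus headroom to an instance size. The size table and headroom factor below are hypothetical; real recommendations map to specific EC2 instance types.

```python
def recommend_size(peak_mem_gb, headroom=1.25):
    """Toy right-sizing rule based on peak memory utilization.

    Leaves ~25% headroom over the observed peak, then picks the smallest
    size that fits. The size names and capacities are made up for
    illustration, not AWS instance types.
    """
    needed = peak_mem_gb * headroom
    for name, capacity_gb in [("small", 4), ("medium", 8), ("large", 16)]:
        if needed <= capacity_gb:
            return name
    return "xlarge"

# A server that peaked at 6 GB needs 7.5 GB with headroom -> "medium".
print(recommend_size(peak_mem_gb=6))  # medium
```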
These results provide PiF contextual data points to answer questions that will determine your needs, such as:
- Actual utilization of existing compute, storage and network resources
- Which compute procurement model works: On Demand, Reserved, or Spot?
- What is the Labor effort to migrate on-prem resources to the Cloud?
- What level of Security and Configuration are required (VPN, security groups)?
- What are your Remote Access to resources options (Terminal Services, WorkSpaces)?
- What type of Authentication do you need (Microsoft AD, LDAP, SAML)?
When these questions are answered, you gain a more in-depth understanding of what your cloud system will look like and a more detailed view of what your long-term TCO could be.
We would love to have a discussion with you about how the AWS cloud can impact your organization and how PiF can implement this solution. Schedule a demo with us to help take the first step.