- JHC Technology, Inc.
- JHC Technology is a Service Disabled, Veteran-Owned, Small Business based in the Washington, DC Metro area. Our primary focus is to offer customized solutions and IT consulting to our Commercial and Government clients. Our experts have broad experience delivering and managing Microsoft Enterprise applications and Cloud and Virtualization solutions, as well as mobilizing Enterprise data.
Wednesday, August 21, 2013
All businesses maintain some sort of disaster recovery model that includes regular backups of their email systems. Using a third-party tool like Symantec NetBackup with Microsoft Exchange, an entire email database containing user mailbox data can be restored, and the information can then be imported back into the user’s mailbox. The steps are simple:
1. Obtain as much information as possible about the user’s missing emails: in particular, which folders are missing mail (e.g., Inbox, Sent Items) and the date range the emails are missing from. This is very important because the restore point you select will be based on the date the Exchange server was backed up.
2. Create a Recovery Database (RDB) on your Exchange server in preparation for the restore process. The RDB will house the email database and log file information for the mailbox you want to restore. The database must be dismounted and set to allow it to be overwritten by a restore; both options can be configured once the RDB is created.
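This step can be scripted in the Exchange Management Shell. A minimal sketch — the server name, database name, and file paths below are examples, not requirements:

```powershell
# Create the recovery database; it is created in a dismounted state
New-MailboxDatabase -Recovery -Name "RDB01" -Server "EXCH01" `
    -EdbFilePath "D:\Recovery\RDB01\RDB01.edb" `
    -LogFolderPath "D:\Recovery\RDB01"

# Allow the database files to be overwritten by the restore
Set-MailboxDatabase -Identity "RDB01" -AllowFileRestore $true
```

Leave the database dismounted; the backup tool will overwrite its files during the restore.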
3. Use your third-party backup tool to display a timeline of the backup jobs performed on the Exchange server. Select the most recent backup taken before the emails went missing. Then start the restore process to pull the email database from its stored location (backup tapes, disk, etc.). The database is usually a rather large file, so make sure you have enough drive space to hold the data. The data is placed into the RDB based on a predetermined path that tells NetBackup where to put the restored files.
4. After the restore completes, use the Exchange Management Shell to run a simple script that populates the missing emails directly back into the user’s mailbox, with no user intervention required.
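A sketch of that final step, assuming a recovery database named RDB01 and a user mailbox jdoe (both names are illustrative):

```powershell
# Mount the recovery database once NetBackup has populated it
Mount-Database -Identity "RDB01"

# Merge the restored copy of jdoe's mailbox back into the live mailbox;
# -IncludeFolders limits the merge to the folders identified in step 1
New-MailboxRestoreRequest -SourceDatabase "RDB01" -SourceStoreMailbox "jdoe" `
    -TargetMailbox "jdoe" -IncludeFolders "#Inbox#"

# Monitor progress until the request shows as Completed
Get-MailboxRestoreRequest | Get-MailboxRestoreRequestStatistics
```

Once the request completes, the recovered items appear in the user’s mailbox and the RDB can be dismounted and removed.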
It’s that simple. You have now gone through the process of restoring missing email files from a recovery database.
Jeronna Freeman is the Cloud Administrator for JHC Technology. She can be reached at jfreeman (at) jhctechnology.com or connect with her on LinkedIn.
Wednesday, August 14, 2013
Last month, I took a look at some of the things the cloud isn’t, but I wanted to expand on those a little as we move forward, because there’s truly a misunderstanding about what the cloud can provide. Some of that stems from traditional IT vendors that have spent billions of dollars building their own infrastructure to host agencies and commercial clients. Other misunderstandings come from the standard competition between cloud vendors, so take that for what it’s worth.
Among the biggest misconceptions out there is that the Cloud is simply one big data center somewhere, and everyone’s livelihood is based on it always being up and running. One of my biggest pet peeves is headlines screaming that the “cloud goes down.” Take this instance, for example, about EC2 outages in AWS’ US East Region. The headline reads “Amazon Cloud Goes Down Friday Night, Taking Netflix, Instagram And Pinterest With It.” But do they really mean the Amazon Cloud?
The Amazon Cloud is 30 services across nine distinct geographical regions around the globe. Did that go down? No.
The Amazon Cloud is a minimum of two individual, geographically isolated availability zones in each region around the world. Did all of those go down? No.
In fact, did the entire US East Region go down? Did all of the services within the US East Region fail? No.
So why is it that the “Amazon Cloud” went down? It sounds to me like components of a network experienced a failure.
What actually happened is that there was a service failure of roughly 90 minutes (for Netflix, at least) that affected virtual machine instances and some of the block storage components within a single Availability Zone in the US East Region. No data loss, no viruses, no exposures.
Additionally, saying that the “Cloud” goes down, taking others with it, is also slightly disingenuous. Did Netflix cease to operate for 90 minutes? Apparently not, because the tweet quoted at the article’s opening notes that some users were experiencing problems, not that the entire service was down.
Forbes notes in the article that its Flipboard content wasn’t updating reliably 24 hours after the problem. Here’s my question: What do we know about the architecture of these services? A follow-up article notes Instagram’s response to the outage, which had been caused by unnaturally violent storms that affected the East Coast:
“As of Friday evening of June 29, 2012, Instagram is experiencing technical difficulties. An electrical storm in Virginia has affected most of our servers, and our team of engineers is working hard to restore service.” -- Instagram, per Forbes.com
Here’s what I make of that statement: Instagram put all of its eggs in one basket. Most of its servers are in US East? Why no balance? Why no servers in US West? Why not in Dublin? Cloud architecture should be designed for failure, just like traditional architecture. Here’s what else is noted in the follow-up article: no problems for The Guardian, HootSuite, UrbanSpoon, or EngineYard (PaaS). Interesting, no? Clearly, the Amazon Cloud didn’t go down, because these guys kept operating.
Cloud infrastructure is super cheap compared to traditional racking and stacking. In most instances, you’re not even charged for servers that aren’t running. There are load balancing tools, DNS routing services, auto scaling features, and on and on and on.
The truth of the matter is that the cloud didn’t go down. Companies affected long-term may not have properly architected their cloud infrastructure. We don’t know. To imply that AWS is the cause of Flipboard being slow to recover, or of Instagram going down entirely, may lay entirely too much blame at the feet of the cloud provider and too little at the feet of the company.
Until the Amazon Cloud, or any other, experiences a total and complete loss of service, it would behoove everyone to gain a much better understanding about what happens when there’s a service outage. Let’s scale back on the ominous “Cloud Goes Down,” because that’s simply not the case.
Matt Jordan is the Cloud Services Manager for JHC Technology. He can be reached at mjordan (at) jhctechnology.com, @matt_jhc, or connect with him on LinkedIn.
Wednesday, August 7, 2013
I have decided to deviate from my blog series about Non-Technical Cloud Barriers and talk about some of the solution architecture work JHC is performing for our Federal clients moving to Amazon Web Services. One of the major design hurdles the Federal Government has to take into consideration when moving into the Cloud is how to implement the Trusted Internet Connection (TIC) initiative. What is Trusted Internet Connection? The Department of Homeland Security describes TIC as an initiative to:
“…optimize and standardize the security of individual external network connections currently in use by federal agencies, including connections to the Internet. The initiative will improve the federal government's security posture and incident response capability through the reduction and consolidation of external connections and provide enhanced monitoring and situational awareness of external network connections.” (You may also refer to OMB Memorandum M-08-05).
My understanding is that currently, no public Cloud offerings have the capability to natively provide TIC for their federal clients. In most cases, internet traffic is routed back to the federal government datacenter and out a TIC router provided by a vendor through the vendor’s Managed Trusted Internet Protocol Services (MTIPS). Currently, the following vendors are the only MTIPS providers available under the Networx contract:
- CenturyLink (formerly Qwest)
- Verizon Business
For Federal Agencies looking to expand and/or move all infrastructure operations into the Cloud, maintaining a physical datacenter solely to house a vendor-provided TIC router is not cost effective, and from a networking perspective it is inefficient. Using AWS features, JHC has been able to design a TIC solution that removes the requirement for Agencies to maintain physical datacenters for TIC compliance, while providing a TIC solution that is highly available and has built-in disaster recovery. Below is a high-level overview and sample architecture of the TIC solution:
- Utilize AWS Regions in US East and/or GovCloud.
- Deploy Virtual Private Cloud (VPC) within the AWS Region and associate subnets across Availability Zones.
- Within your VPC deploy EC2 virtual routers and EC2 web content filters across Availability Zones for high availability and disaster recovery.
- Establish a VPN connection between your agency and the EC2 virtual routers.
- (Optional) For additional high availability and disaster recovery, connect your AWS Regions via the EC2 virtual routers and load balance user internet traffic across the US.
- Use the AWS Direct Connect feature to route your internet traffic to an Equinix facility in either Seattle, WA or Ashburn, VA, utilizing an AWS Virtual Private Gateway.
- Drop the TIC provider router into the Equinix facility and connect the AWS Direct Connect router to the TIC router.
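The first few bullets above can be sketched with the AWS Tools for PowerShell. This is only an illustration of the VPC layout, not the full design; the CIDR ranges and Availability Zone names are assumptions:

```powershell
# Create the VPC that will house the EC2 virtual routers and web content filters
$vpc = New-EC2Vpc -CidrBlock "10.0.0.0/16"

# Associate one subnet per Availability Zone for high availability
$subnetA = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.0.1.0/24" `
    -AvailabilityZone "us-east-1a"
$subnetB = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.0.2.0/24" `
    -AvailabilityZone "us-east-1b"

# Virtual Private Gateway used for the Direct Connect / VPN attachment
$vgw = New-EC2VpnGateway -Type "ipsec.1"
```

From there, the EC2 virtual routers and content filters are launched into each subnet, and the Virtual Private Gateway is attached to the VPC and associated with the Direct Connect circuit to the Equinix facility.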