About Me

JHC Technology is a Service Disabled, Veteran-Owned, Small Business based in the Washington, DC Metro area. Our primary focus is to offer customized solutions and IT consulting to our Commercial and Government clients. Our experts have broad experience delivering and managing Microsoft Enterprise applications and Cloud and Virtualization Solutions, as well as mobilizing Enterprise data.

Monday, September 22, 2014

High Availability (HA) for NAT Instances

While working on a project that utilized an Amazon Linux NAT (network address translation) instance for outbound connections in redundant Availability Zones, I realized that a single NAT making egress requests for two Availability Zones (AZs) introduces a single point of failure.

A whitepaper written by Jinesh Varia outlines the steps required to implement a two-way monitoring, high-availability (HA) failover NAT solution. He provides a script along with a guide on how to replace the necessary variables so that each NAT instance has visibility into the other.

However, I wanted to provide insight into an issue I faced while testing this configuration. I found that when I stopped a NAT instance, it would not restart. The next step was to see whether the intended routing failover was occurring, which in fact it was. Using the command below, I was able to see the logs of nat_monitor.sh:

tail /tmp/nat_monitor.log

Troubleshooting led me to find that the active NAT instance was unable to see the downed instance’s state for two reasons.

The first is that the instances only had public IPs, not Elastic IPs (EIPs). EIPs stay associated with an instance when it is stopped and remain visible to the API, so calls made against a box that is turned off can still find it.
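For instance, with the AWS command line tools you can allocate an EIP and attach it to each NAT instance. Here is a minimal sketch using the AWS CLI (the whitepaper itself uses the older EC2 API tools, and the IDs below are placeholders):

# Allocate a new Elastic IP in the VPC scope
aws ec2 allocate-address --domain vpc

# Associate the returned allocation with the NAT instance (placeholder IDs)
aws ec2 associate-address --instance-id i-1a2b3c4d --allocation-id eipalloc-1a2b3c4d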

The second is specified in a note Jinesh made in his whitepaper (in Step 7). He makes it clear that the script works with tools version 1.6.12.2 2013-10-15. He points out that if NAT_STATE isn’t updating, you should change "print $4;" on line 77 to "print $5;". This is because different versions of the tools format the ec2-describe-instances output differently. Here’s the original line 77:

NAT_STATE=`/opt/aws/bin/ec2-describe-instances $NAT_ID -U $EC2_URL | grep INSTANCE | awk '{print $4;}'`

This article was written January 30, 2014, and the tools have since been upgraded. After opening a ticket with AWS to help troubleshoot the script’s output, the support engineer recommended changing "print $5;" to "print $6;", and the change produced the outcome I’d been seeking.
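For reference, here is a sketch of what the adjusted line 77 looks like with the awk field bumped to $6, per the support engineer's recommendation:

NAT_STATE=`/opt/aws/bin/ec2-describe-instances $NAT_ID -U $EC2_URL | grep INSTANCE | awk '{print $6;}'`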

The script uses the API to see if the NAT box is "stopped". If it is, then it will start it. If it's not stopped, it will try to stop it and then loop back to the previous attempt to start it.
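As a rough illustration (a simplified sketch, not the actual nat_monitor.sh), that check looks something like this:

# Simplified sketch of the failover check described above
NAT_STATE=`/opt/aws/bin/ec2-describe-instances $NAT_ID -U $EC2_URL | grep INSTANCE | awk '{print $6;}'`
if [ "$NAT_STATE" == "stopped" ]; then
    # The other NAT instance is stopped, so bring it back up
    /opt/aws/bin/ec2-start-instances $NAT_ID -U $EC2_URL
else
    # Otherwise force a stop, then loop back and attempt the start again
    /opt/aws/bin/ec2-stop-instances $NAT_ID -U $EC2_URL
fi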

You can test this functionality by stopping an instance within the console and observing it restart automatically once the threshold in the nat_monitor.sh configuration is met.


Rory Vaughan is a Cloud Engineer with JHC Technology. He can be reached at rvaughan (at) jhctechnology.com or connect with him on LinkedIn.

Tuesday, September 2, 2014

Security OF the Cloud vs. Security IN the Cloud

It’s been another fun day of “blame the cloud” around the media universe, and only a few of those media outlets are smart enough to understand what they’re actually looking at.  Word has spread that a hacker, or group of hackers, was able to crack user accounts in Apple’s iCloud and gain access to intimate photos of various celebrities.

The headline of the Washington Post indicates that this raises “more questions around the security of the cloud”.  What the Washington Post doesn’t get is that it’s not the security OF the cloud, it’s the security IN the cloud.  According to most reports, it seems that hackers were able to gather email addresses and passwords, or use tricks with those email addresses to reset the passwords.  Another apparent way in was through hacking an Apple service that helped open a door to the user data on iCloud.

Let’s be very clear that none of these methods means that “the cloud” was compromised.  Whether your data is in a cloud, on a server under your desk, or in your corporate datacenter, if a malicious user gains access to your user name and/or password, they’re going to be able to exploit your account(s).  If a user gains access to a service such as “Find My iPhone” that has connectivity to your data but has a security flaw, they’ll be able to exploit that.  Again, this has no bearing on where your data rests, cloud or otherwise.  A key sentence from this story by DataCenter Knowledge:  “Cloud … is only as safe as the services that rest upon [it].”

Cloud infrastructure operates mainly under a shared responsibility model.  This means that the cloud provider is generally responsible for the security of its systems up to the servers on which your data resides; beyond that, from the operating system on up, the user or company is responsible for security.  As an example, an infrastructure (cloud) provider such as Amazon Web Services will provide the servers on which you can run your website or host your files; it generally isn’t responsible for what you use that server for. If you don’t bother to (or don’t know to) put the necessary firewalls in place on that server to limit access, you’re running the risk of your data being exposed.  If you don’t bother to (or don’t know to) limit access to certain ports for traffic to your server, you’re opening major holes for exploitation.  That’s not a fault of the cloud provider; that’s user error.
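As a simple illustration of what “limiting access to certain ports” means in practice, here is a hypothetical AWS CLI sketch that opens only HTTPS on a server’s security group and closes an overly broad SSH rule (the group ID is a placeholder):

# Allow inbound HTTPS only
aws ec2 authorize-security-group-ingress --group-id sg-1a2b3c4d --protocol tcp --port 443 --cidr 0.0.0.0/0

# Remove a rule that left SSH open to the entire Internet
aws ec2 revoke-security-group-ingress --group-id sg-1a2b3c4d --protocol tcp --port 22 --cidr 0.0.0.0/0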

Cloud and application providers have taken steps over the years to increase security not only of their own infrastructure and data, but also to help users protect themselves.  These methods include multi-factor authentication (MFA) and rotating passwords; some services require you to rotate passwords on a regular basis without reusing previous ones.  While seemingly inconvenient to the end user, this provides an important step in trying to stay ahead of the game.  Users should take advantage of these features.
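On AWS, for example, password rotation can be enforced at the account level. Here is a minimal sketch using the AWS CLI IAM password policy (the specific numbers are just examples):

# Require 12+ character passwords, rotate every 90 days, and block reuse of the last 5
aws iam update-account-password-policy --minimum-password-length 12 --require-numbers --require-symbols --max-password-age 90 --password-reuse-prevention 5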

I suggest using MFA for all accounts, and I adhere to this myself whenever it is offered.  For those unfamiliar with MFA, examples include setting your email provider or Twitter account to text you a code that you enter before you can log into the account.  Despite the overly ominous headline, this article from Entrepreneur offers the same advice:  take advantage of MFA.

The breach of iCloud is not a testament to cloud security.  It is more a testament to vulnerabilities in the applications or end users that have access to data stored in the cloud.  It is incumbent on us to take advantage of the security measures offered so we can all do our part.


Matt Jordan is the Cloud Services Manager for JHC Technology. He can be reached at mjordan(at)jhctechnology.com, @matt_jhc, or connect with him on LinkedIn.

Monday, July 21, 2014

GSA IT Schedule 70 Offers Amazon Web Services (AWS) Cloud Infrastructure with SDVOSB Status

The United States General Services Administration (GSA), an agency devoted to the efficient acquisition of services for the United States Federal Government, awarded JHC Technology, Inc. a contract schedule as an approved vendor of Information Technology services to federal, state, and local government agencies. Although JHC Technology has worked with all levels of government in the past, the GSA award serves as a formal contract vehicle for government customers to directly procure and leverage JHC Technology's services, including Amazon Web Services (AWS) Cloud Infrastructure products.

As an expert services integrator and Service Disabled, Veteran-Owned Small Business (SDVOSB), JHC Technology is proud to have been awarded its GSA IT Schedule 70 Contract with Special Item Number 132-51 providing Professional IT Services that include Cloud Engineering and Administration categories; and Special Item Number 132-52 providing AWS Cloud Services to Federal, State and Local governments.


For more information, please contact:

Ms. Wendy Dueri
Director of Business Operations

About JHC
JHC Technology, Inc. is a Service Disabled, Veteran-Owned, Small Business (SDVOSB) that provides Engineering, Architecture, and Subject Matter Expert level services in Microsoft, Citrix, Amazon Web Services, and mobility, applying intelligent technology solutions to a broad range of business needs. Our primary focus is to streamline business processes, securely increase the mobility of end users, and effectively provide customers with highly scalable environments while ensuring necessary computing power and infrastructure by leveraging on-demand, utility-based computing and next generation solutions.
 
JHC Technology is an Amazon Web Services Authorized Government Reseller Partner, Advanced Consulting Partner, and Channel Reseller. In addition, JHC Technology holds partnerships with Citrix Systems and Microsoft Corporation.  For more information on JHC Technology, please visit http://www.jhctechnology.com.

Tuesday, July 8, 2014

When Public Cloud Isn’t Public

One of the key misnomers in cloud technology today is the idea of “public cloud”.  In our work with clients, and especially when discussing Infrastructure as a Service providers such as Amazon Web Services, we invariably have to walk some potential clients off the “public cloud” ledge.  Companies such as AWS are immediately labeled “public” simply because the public can access them.

In fact, we recently worked with a client that asked if AWS could meet the NIST definition of “private cloud.”  The answer is emphatically yes.  NIST defines private cloud as:

[C]loud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
NIST Publication 800-145, The NIST Definition of Cloud Computing, at pg. 3.

It is a simple two-sentence definition, so let us look at what is there and why AWS can qualify as a private cloud.  Quite simply, the use of the AWS Virtual Private Cloud provides the exclusivity that is required for private cloud status.  Per the AWS web site, VPC:

…lets you provision a logically isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.

The client, of course, does not directly own the physical AWS hardware but the logical isolation afforded by the use of VPC allows the deployed AWS infrastructure to be exclusive to the client.
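To make that concrete, here is a rough AWS CLI sketch of carving out such a logically isolated network: you pick the IP range, the subnets, and the route tables yourself (the IDs and CIDR blocks below are placeholders):

# Create a VPC with an address range of your choosing
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Carve a subnet out of that range and create a route table for it (placeholder VPC ID)
aws ec2 create-subnet --vpc-id vpc-1a2b3c4d --cidr-block 10.0.1.0/24
aws ec2 create-route-table --vpc-id vpc-1a2b3c4d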

The key component of the second sentence is the term “combination”.  In the case of the Federal Government, the combination is key.  As we encourage all of our clients to do, they should own their own account, meaning the Government owns the AWS infrastructure (above the hypervisor), manages that infrastructure, and operates the infrastructure.  If it chooses, a third-party provider, such as JHC, can also handle the management and operation – the “third party” NIST identifies.

At the end of the day, cloud knowledge continues to filter down, and we are always happy to provide as much of it as we can.  I hope that we will quickly dispel the misnomer of AWS and others as public clouds simply because the public can use the infrastructure.  Once deployed correctly, AWS and others transition directly into private clouds.

Matt Jordan is the Cloud Services Manager for JHC Technology. He can be reached at mjordan(at)jhctechnology.com, @matt_jhc, or connect with him on LinkedIn.


Wednesday, June 11, 2014

AWS and Azure Pushing The Pace


Gartner Magic Quadrant for Infrastructure as a Service Cloud
During the past few years, Amazon Web Services has been the dominant player in the cloud infrastructure space, with very little in the way of significant competition. However, a quick spin around the interwebs shows that this dominance is starting to face some competition. At JHC we’ve been deploying solutions on AWS – from small scale to large scale – for more than four years. During that time, we’ve also kept an eye on Microsoft Azure, a platform JHC principals considered a few years ago for a key project.

As Microsoft moves rapidly forward in its cloud efforts, it’s becoming a bigger player in the space, and certainly a strong alternative to industry-leading AWS. Just recently, in fact, a customer requested that JHC use Azure to host its new global web site, and we were happy to oblige. JHC is a Microsoft Certified Partner, and Microsoft has always been a key component of our solutions, AWS or otherwise. We use Microsoft products internally and for our clients, hosting SharePoint 2007, 2010, and 2013 solutions on AWS. We also have a hybrid Office 365 and AWS solution that works well for our customers.

It all puts us in a great position as the latest Gartner Magic Quadrant report for Cloud Infrastructure as a Service came out, positioning both AWS and Azure in the Leaders quadrant. We’re thrilled at the continued success of both and look forward to leveraging AWS and Azure infrastructure to provide innovative and secure solutions for ourselves and our clients moving forward.

Matt Jordan is the Cloud Services Manager for JHC Technology. He can be reached at mjordan(at)jhctechnology.com, @matt_jhc, or connect with him on LinkedIn.

Tuesday, April 22, 2014

How to Protect Yourself from the Heartbleed Bug

• Change your passwords on a more regular basis.
• Sites such as Yahoo, GitHub, Netflix, Amazon, PayPal, and CloudFront have issued new SSL certificates for their sites (so these sites should be good to go).
• Contact the sites to which you provide some of your most sensitive/private information (financial or not) and ask questions about the Heartbleed Bug.
    • Also make sure you ask what you can do to further protect yourself.
Engineers and admins with systems on AWS, please refer to the following websites for more information (a quick patch-check sketch follows the links):

* Red Hat: https://rhn.redhat.com/errata/RHSA-2014-0376.html
* Ubuntu: http://www.ubuntu.com/usn/usn-2165-1/
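As a rough sketch of what checking and patching might look like on those platforms (package names follow the advisories above; exact commands may vary by version):

# Red Hat / Amazon Linux: check the installed package and update it
rpm -q openssl
sudo yum update openssl -y

# Ubuntu: upgrade openssl and the shared library, then restart affected services
sudo apt-get update && sudo apt-get install --only-upgrade openssl libssl1.0.0

# Confirm the reported build date reflects the patched release
openssl version -a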

For more information about this vulnerability, please visit:
* AWS Security Bulletin page: https://aws.amazon.com/security/security-bulletins/
* OpenSSL’s official advisory: https://www.openssl.org/news/secadv_20140407.txt
* The Heartbleed Bug: http://heartbleed.com/

Tuesday, March 4, 2014

Amazon Web Services VPN Gateway

I ran into an interesting conflict last week with the AWS VPN (Virtual Private Network) Gateway.  I know there is a limitation on your AWS account that you are not allowed to have multiple customer gateways within a region with the same IP address. Lifting that limitation would be an extremely nice feature, because we would be able to connect multiple VPCs (Virtual Private Clouds) inside the same region to a single VPN device outside of AWS.  There are a lot of use cases for having multiple VPCs within a region connect via VPN to a single customer gateway device:
  • Logical separation of Development and Production environments
  • Logical separation of data at different classification levels for industry compliance and regulatory restrictions.
  • Customer segregation

Based on this information, I thought I would be clever and create two customer gateways within the same region, but separated across two AWS accounts. I was able to successfully create the customer gateway and the VPN connection in the first AWS account.  I then went into the second AWS account and was able to create the customer gateway successfully; however, when I went to create the VPN connection I received a conflict error with the customer gateway.  As it turns out, regardless of AWS account separation, you cannot successfully create VPN connections with the AWS VPN gateway if the customer gateway address is already in use somewhere else within a single AWS Region.
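For anyone who wants to reproduce the behavior, here is a rough AWS CLI sketch of the two calls involved (the public IP and IDs are placeholders). Creating the customer gateway succeeds in both accounts; the conflict only surfaces when the second VPN connection is created.

# Create a customer gateway pointing at the on-premises VPN device (placeholder IP)
aws ec2 create-customer-gateway --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000

# Create the VPN connection; in the second account this is where the conflict error appears
aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-1a2b3c4d --vpn-gateway-id vgw-1a2b3c4d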


The way to work around this issue is to separate VPCs and customer gateways across different AWS regions, regardless of whether you have one or multiple AWS accounts.

James Hirmas is the CEO for JHC Technology.  He can be reached at jhirmas(at)jhctechnology.com, @JHC_JamesHirmas, or connect with him on LinkedIn.

Tuesday, February 18, 2014

Why use the Configuration Logging feature in XenApp 6.5?

The Configuration Logging feature allows you to keep track of administrative changes made to your server farm environment. By generating the reports that this feature makes available, you can determine what changes were made to your server farm, when they were made, and which administrators made them. This is especially useful when multiple administrators are modifying the configuration of your server farm. It also facilitates the identification and, if necessary, reversion of administrative changes that may be causing problems for the server farm.

Below I have provided step-by-step instructions on how to configure this feature in XenApp 6.5.

Step 1: Left click on "Start (push button)" in "Start"

Step 2: Left click on "Citrix AppCenter (menu item)" in "Start menu"



Step 3: Left click on "SOC Farm (outline item)" in "Citrix AppCenter" 


Step 4: Right click on "SOC Farm (outline item)" in "Citrix AppCenter"



Step 5: Left click on "Farm properties (menu item)"



Step 6: Left click on "Configuration Logging (outline item)" in "SOC Farm - Farm Properties"



Step 7: Left click on "Configure Database... (push button)" in "SOC Farm - Farm Properties"




Step 8: Left click on "Server name: (This will be the name of your SQL server)" in "Configuration Logging Database"


Step 9: The username specified must have db_owner permissions over the database in SQL.

Citrix recommends using Windows authentication, as it is more secure than SQL authentication.


Step 10: Left click on "password (editable text)" in "Configuration Logging Database"




Step 11: Left click on "password (editable text)" in "Configuration Logging Database"



Step 12: Enter the password for the Windows account.




 Step 13: Left click on "Next > (push button)" in "Configuration Logging Database"





Step 14: Left click on "specify the database (editable text)" in "Configuration Logging Database"



Step 15: Specify the database created in SQL





Step 16: Left click on "Next > (push button)" in "Configuration Logging Database"




Step 17: Left click on "Open (push button)" in "Configuration Logging Database"



Step 18: Left click on "No (list item)"


Step 19: Left click on "Next > (push button)" in "Configuration Logging Database"



Step 20: Left click on "Test Database Connection (push button)" in "Configuration Logging Database"



Step 21: Left click on "OK (push button)" in "AppCenter"



Step 22: Left click on "Finish (push button)" in "Configuration Logging Database (4/4)"



Step 23: Left click on "Apply (push button)" in "SOC Farm - Farm Properties"



Step 24: Left click on "OK (push button)" in "SOC Farm - Farm Properties"


Step 25: Left click on "History (outline item)" in "Citrix AppCenter"




Step 26: Left click on "History (outline item)" in "Citrix AppCenter"





Step 27: Left click on "Yes (push button)" in "No Filters Specified"


Step 28: Once History is selected, choose “Get Log” under the Action column on the right hand side of the window.



Step 29: As you can see, now that Configuration Logging has been enabled, changes made within the AppCenter, such as publishing apps or changing the permissions of applications or policies, will be logged under the History option located within your Farm under XenApp.


David Cuevas is a Jr. Citrix Engineer for JHC Technology.  He can be reached at dcuevas (at) jhctechnology.com.