Week 5 (2nd Sem) – Preliminary Research

For identifying misconfigurations, we have been doing some preliminary research into how to tell whether a particular service is insecure. With memcached, for instance, if one is able to connect to an instance via telnet, then we know that that memcached instance is open and listening to anyone in the world. MySQL, on the other hand, is a bit more difficult: it has several possible security vulnerabilities, and we need to determine which of them to classify as misconfigurations.
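
As a rough illustration, the memcached check could be as simple as the sketch below (the host is a placeholder, and it assumes netcat is installed; a real probe would iterate over candidate IPs):

# memcached answers the plain-text "stats" command on its default port (11211).
# If STAT lines come back without any authentication, the instance is open.
HOST=203.0.113.10
if printf 'stats\r\nquit\r\n' | nc -w 3 "$HOST" 11211 | grep -q '^STAT'; then
    echo "$HOST: open memcached, readable by anyone"
else
    echo "$HOST: no open memcached detected"
fi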

Week 4 (2nd Sem) – Timing Results

We were able to run a controlled experiment in EC2 and gather response timing data via the timing information returned by cURL. As we suspected, the response times from the load balancer versus its back-end instance were not significantly different.
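
The timing data cURL can report comes from its write-out variables; a measurement loop along these lines would capture it (a sketch; the two URLs are placeholders for the load balancer and the back-end instance in our test setup):

# cURL's --write-out variables report per-request timings without extra tooling.
FMT='dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n'
for i in $(seq 1 20); do
    curl -s -o /dev/null -w "$FMT" http://my-elb.example.com/    # through the load balancer
    curl -s -o /dev/null -w "$FMT" http://backend.example.com/   # direct to the instance
done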

For our second idea of identifying web deployment misconfigurations, we made a list of ports we could probe in order to determine how prevalent those services are in the cloud, including FTP (21), SSH (22), MySQL (3306), memcached (11211), Redis (6379), NFS (2049), OpenVPN (1194), HDFS datanode (50075), and MapReduce task tracker (50060). This is the first step in helping us decide which services are worth exploring, which includes figuring out if and how they are misconfigured and insecure. We intend to look at using WhoWas to conduct the probing in the following weeks.
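
A first cut at the probing could be as simple as the sketch below (the host is a placeholder; at scale we would rely on WhoWas or a dedicated scanner rather than netcat):

# Ports of interest from the list above.
PORTS="21 22 1194 2049 3306 6379 11211 50060 50075"
HOST=203.0.113.10
for p in $PORTS; do
    # -z: scan without sending data, -w 2: two-second timeout per port
    nc -z -w 2 "$HOST" "$p" 2>/dev/null && echo "$HOST:$p open"
done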

Week 3 (2nd Sem) – Vetting Ideas

Last week we were able to critically examine the ideas from the previous week and narrow them down to the two best options, as summarized below:

1) Timing Measurements of Request-Response Packets through a Load Balancer versus Direct-to-Machine Communication (in a Cloud)

This idea would involve measuring the time it takes to receive a response packet from a VM behind a load balancer in the cloud versus communicating directly with that machine. The original hypothesis is that the timing information in the two situations will be similar and therefore indistinguishable, but if the response times are dissimilar, it would open up more room for exploration. The plan is to conduct the testing in a controlled environment in the EC2 cloud. Since the methods (packet capture and inspection) are quite similar to our work earlier in the semester, this component shouldn’t take long to complete.

2) Determine Prevalence of Misconfigurations in Cloud Deployments

The WhoWas paper included a case study of the “software ecosystem of web services running in the cloud,” which included identifying the web servers and templates, back-end techniques, and tracking behavior. As a continuation of that inquiry, we thought it would be interesting to study misconfigurations in these software ecosystems. Our ideas include:

1- Identifying setups where we are able to communicate directly with a VM that is supposed to be hidden behind a load balancer.

2- Identifying unprotected memcached instances. Memcached is an in-memory object caching system used to speed up websites, but it purportedly has little built-in security, which could lead to unwanted leakage of data. Determining the extent of memcached usage and protection would provide insight into the security of some software deployments in the cloud.

Week 2 (2nd Sem) – New Questions

In our search for new research questions, we contacted one of the co-authors of the WhoWas paper and held a productive brainstorming session. The WhoWas paper was a measurement study of web deployments in the EC2 and Azure clouds using light active probing over an extended period of time. We looked at the unresolved questions from the study and were able to come up with several ideas for further research.

For example, the study only probed ports 80 and 443, so we could try other ports and see how they affect the results. We could also conduct measurements to see if there is a timing difference between directly accessing a VM versus accessing a VM via a load balancer. Additionally, we could study misconfigured deployments in the cloud, such as instances with an unprotected memcached.

At the end, we made plans to meet in the following weeks and narrow down which problem we intend to focus on.

Week 1 (2nd Sem) – Goal Reexamination

For our first meeting of the new semester, we took time to evaluate our research goals to determine if they were attainable or if we should look at a slightly different research problem. While our original idea was to try to determine how much content served by CDNs originated in the cloud, we decided that reverse engineering the behavior of the CDNs might prove too complex and difficult. We decided to reexamine the WhoWas paper and look for related research questions that would be more feasible.

Week 16 – Cookie Monster

After modifying the script to fetch cookies from our list of Alexa’s top 10,000 websites, I was able to identify a larger number of sites using Amazon’s ELB sticky sessions (roughly 144) compared to the number of sites from our diff results. In my pre-research before writing the script, I did note that while some sites had the AWSELB cookie, it did not show up in a cURL command-line request. Upon closer examination, the AWSELB cookie belonged to another domain. For example, when examining the Heroku landing site using Chrome developer tools, I found the AWSELB session cookie, but it belonged to the pixel.prfct.co domain. When I checked, pixel.prfct.co did not have a web server.
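
At its core, the check just fetches each landing page and looks at the cookie jar for an AWSELB entry; a simplified version of what the script does (the domain-list file name is a placeholder) might look like:

# For each domain, fetch the landing page and dump the cookie jar to stdout;
# an AWSELB cookie set by the domain itself indicates ELB sticky sessions.
# (As noted above, browser tools can also reveal AWSELB cookies set by
# third-party domains embedded in the page, which cURL alone will not see.)
while read -r domain; do
    if curl -s -L -o /dev/null -c - "http://$domain/" | grep -q 'AWSELB'; then
        echo "$domain"
    fi
done < alexa-top-10000.txt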

I have several questions regarding this behavior that I hope to investigate, including resolving whether Heroku actually uses the ELB or whether it is used for some other content on the page. We will also discuss the results of this search to determine whether they are relevant to our original goal or at least provide some new insights.

Week 15 – Wrap Up

After discussing the results of the search, we have decided to wrap up this portion of the experiment. From what we were able to find, the current method of inquiry has proved insufficient for determining the number of back-ends behind a load balancer. Understandably, most websites we surveyed appear to have taken steps to hide this information on the front-end, as it is a potential security risk.

One last search we plan to try is checking for Amazon ELB sticky session cookies. Amazon ELB sticky sessions involve passing a cookie to the client that routes it back to the same back-end server for a period of time. I plan to modify our bash script to filter for webpages that have a cookie named AWSELB, which indicates a sticky server session. If again we see no significant results, we will turn our attention to our second research goal: identifying the percentage of webpage content hosted within versus outside cloud-based CDNs.

Week 14 – Processing Results

I was able to run those scripts and obtain the list of websites that were using Amazon’s ELB. Then I used grep to look for strings that indicated Amazon EC2 instances, including pattern matching on IP addresses, searching for terms such as server, aws, and ami, and pattern matching instance names via regular expressions. Out of the subset of websites examined (roughly 8,500), only a marginal number identified the back-end instance in the HTML of the landing page.

The format of such identifying strings usually included a generic name-number string, such as “aws1qatweb3” or “aws-web02”, although one site did reveal the internal IPs of the instances in question. Still, in total there were fewer than seven unique domains positively identified.
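
The searches were along these lines (a rough reconstruction rather than the exact expressions we used; the directory of saved pages is a placeholder):

# Look in the saved landing pages for EC2-style identifiers:
#  - generic name-number strings such as aws-web02 or aws1qatweb3
#  - internal addresses and hostnames (10.x.x.x, ip-10-x-x-x)
#  - instance IDs (i-xxxxxxxx) and AMI IDs (ami-xxxxxxxx)
grep -rEil \
    'aws[-_]?[a-z]*[0-9]+|ip-10(-[0-9]+){3}|\b10(\.[0-9]+){3}\b|i-[0-9a-f]{8}|ami-[0-9a-f]{8}' \
    saved-pages/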

Week 12 & 13 – Bash Scripting

I spent most of these two weeks developing scripts to send multiple requests to domains that use Amazon ELB and appear on Alexa’s Top Websites list, and to diff the resulting HTML pages. The goal is to examine the diff’d results to see if those pages contain any indicators of the origin server, like those of Netflix.

Halfway through, I chose to switch from Python to bash scripting since most of the processing relied on native shell commands. Since the list of websites is quite big, we looked for ways to speed up the processing. To that end, we partitioned the file into same-size chunks and used the GNU Parallel tool to launch multiple jobs that could process each chunk concurrently.
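
In outline, the per-domain work looked something like the sketch below (simplified; the script and file names are placeholders):

#!/bin/bash
# fetch_and_diff.sh: request a domain's landing page twice and report whether
# the HTML differs; differences may expose per-instance identifiers.
domain="$1"
a=$(mktemp); b=$(mktemp)
curl -s "http://$domain/" -o "$a"
curl -s "http://$domain/" -o "$b"
diff -q "$a" "$b" >/dev/null || echo "$domain differs"
rm -f "$a" "$b"

The domain list was then split into equal-size chunks with something like "split -n l/8 elb-domains.txt chunk_", and GNU Parallel ran one job per chunk, each looping over the domains in its chunk and calling the script above.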

P.S. Happy Belated Thanksgiving Weekend!

Week 11 – Testing Results

Unfortunately, we discovered that we would not be able to use TCP timestamps. Aaron was able to run some preliminary measurements in a controlled test environment with a few back-end instances, a load balancer, and a client. After graphing the results, we discovered a discrepancy between the clock skew calculated on the back-end instances and the clock skew calculated from the client. While each back-end instance had its own unique clock skew, our client observed clock skews in a single range. This is because Amazon’s ELB (Elastic Load Balancer) terminates TCP connections and re-issues requests to the server/client machines when configured for HTTP/HTTPS or TCP with SSL. Only for pure TCP (without SSL) does the ELB leave the header unmodified.
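
For context, the raw data behind the skew estimates is the TSval field of the TCP timestamp option; one way to pull it out of a packet capture for graphing (a sketch, assuming tshark is installed; the capture file name is a placeholder) is:

# Emit arrival time and the peer's TCP timestamp value for every packet that
# carries the option; clock skew is the drift of TSval against arrival time.
tshark -r capture.pcap -Y 'tcp.options.timestamp.tsval' -T fields \
    -e frame.time_epoch -e tcp.options.timestamp.tsval > tsvals.txt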

On another track, while writing the script to diff (the Linux command for comparing files) websites using Amazon’s ELB, I discovered that Reddit uses sticky session cookies for Amazon ELB. During small QA testing of the scripts, I noticed that we would consistently receive the same instance identifier for a Reddit page over time, unlike Netflix’s server ID, which changes per request. Examining the Reddit page with Chrome’s developer tools, I found the AWSELB and JSESSIONID cookies, indicating a sticky session. This is another possible path for exploration we hope to look into.