### Akka Clustering and Remoting: Application Scaling

In my previous post (Akka Clustering and Remoting: The Experiment), I defined a ping-pong application and deployed it into a local JVM cluster. In this post, I want to examine how we can scale this application into a vendor's cloud (e.g. Amazon or Rackspace).

However, when pushing into the cloud, it is wise to avoid reliance upon any single cloud vendor. So, I'll also look at how that may be accomplished.

Tags: akka

### Akka Clustering and Remoting: The Experiment

This blog post is based on some concepts that I was using a few years ago to build clusters hosting distributed Actor applications. We will build an example application and scale it into a cloud environment.

In general, I'll focus on Amazon for our cloud deployment. However, I'll also consider what happens if our cloud vendor fails in some way. Whilst such events may be rare, they do occur. So, when we scale our Akka application, I will consider how we can avoid reliance upon any single cloud vendor.

Tags: akka

### Volatility Plugin Contest

Results are in for the 1st Annual Volatility Framework Plugin Contest, and I'm happy to say that I came joint fourth.

Tags: digital forensics, volatility

### Month of Volatility Plugins

I was just looking over the Month of Volatility Plugins posts published by the Volatility team, and stumbled across the post MoVP 4.4 Cache Rules Everything Around Me(mory) by AAron Walters.

Not only does it have a nice overview of the development of the Volatility 2.3 dumpfiles plugin, but (and very much to my surprise!) it has a very complimentary historical overview of my implementation work here.

Thanks to the Volatility team for providing such a great memory analysis framework. :-)

Tags: digital forensics, volatility

### Honeynet Reverse Engineering Challenge

Recently, I succeeded in coming joint first in Honeynet challenge 11: Dive Into Exploit by Georg Wicherski. Both of the winning answers are worth reading as they supply highly complementary analyses (Ruud rightly pipped me at the post here, as he managed to get gdlog to behave).

This was essentially a reverse engineering challenge with a serious piece of cryptography thrown in for good measure!

Tags: honeynet, digital forensics, reverse engineering

### Clock Descriptions

This is the final article in a series related to analysing the Honeynet Log Mysteries Challenge data set by applying the Scientific Method (see Casey2009 and Carrier2006) and utilising data visualisation (see Conti2007 and Marty2008).

In this final blog article, we shall use the logging events present in sanitized_log/apache2/www-*.log to build a reference clock description (see An Improved Clock Model for Translating Timestamps by Florian Buchholz). In doing so, we are able to provide date and time estimates for events in terms of a standard reference clock.
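The idea of translating host timestamps onto a reference clock can be sketched as follows. This is only a minimal illustration, assuming a simple linear model (a fixed base offset plus linear drift), which is a simplification of the richer model Buchholz describes; the offset and drift values here are invented for the example.

```python
from datetime import datetime, timedelta

def to_reference_time(host_time, base_offset, drift_per_day=0.0, epoch=None):
    """Translate a host-clock timestamp into reference-clock time.

    Assumes the host clock differs from the reference clock by a fixed
    base offset plus a drift that accumulates linearly over time.
    """
    if epoch is None:
        epoch = host_time
    elapsed_days = (host_time - epoch).total_seconds() / 86400.0
    correction = base_offset + timedelta(seconds=drift_per_day * elapsed_days)
    return host_time - correction

# Example: a host clock that runs 120 s ahead and gains 0.5 s per day.
epoch = datetime(2010, 4, 1)
host = datetime(2010, 4, 11)  # 10 days after the epoch
ref = to_reference_time(host, timedelta(seconds=120), drift_per_day=0.5, epoch=epoch)
print(ref)  # → 2010-04-10 23:57:55
```

With such a description in hand, every event time in the logs can be restated against the one reference clock, making events from different sources directly comparable.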

Tags: honeynet, digital forensics, data visualisation, clock descriptions

### Wordpress Versioning: Part 2

This is part of a series of articles related to analysing the Honeynet Log Mysteries Challenge data set by applying the Scientific Method (see Casey2009 and Carrier2006) and utilising data visualisation (see Conti2007 and Marty2008).

Using just the logging events present in sanitized_log/apache2/www-*.log, this article explores how we might provide probability estimates (via naive Bayesian classifiers) for the version numbers of WordPress and its plugins.
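The naive Bayesian approach can be sketched as follows. The version numbers, request paths, and likelihood values below are purely hypothetical stand-ins; in a real analysis the likelihoods would be derived from the files actually shipped in each WordPress release.

```python
import math

# Hypothetical likelihoods: P(request path observed | WordPress version).
likelihoods = {
    "2.8.4": {"/wp-content/themes/default/style.css": 0.9},
    "2.9.2": {"/wp-content/themes/default/style.css": 0.2},
}
priors = {"2.8.4": 0.5, "2.9.2": 0.5}

def version_posteriors(observed_paths):
    """Naive Bayes: P(version | paths) ∝ P(version) · Π P(path | version)."""
    log_scores = {}
    for version, prior in priors.items():
        score = math.log(prior)
        for path in observed_paths:
            # Small floor probability for paths unseen under this version.
            score += math.log(likelihoods[version].get(path, 1e-6))
        log_scores[version] = score
    # Normalise the scores into proper posterior probabilities.
    total = sum(math.exp(s) for s in log_scores.values())
    return {v: math.exp(s) / total for v, s in log_scores.items()}

posteriors = version_posteriors(["/wp-content/themes/default/style.css"])
print({v: round(p, 3) for v, p in posteriors.items()})  # → {'2.8.4': 0.818, '2.9.2': 0.182}
```

The "naive" independence assumption (each observed path contributes independently given the version) is what makes the per-path likelihood product tractable.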

In the final blog article in this series, we shall look at how the work of Florian Buchholz (e.g. see An Improved Clock Model for Translating Timestamps) can be used to measure logging event times relative to a suitable reference clock description.

Tags: honeynet, digital forensics, data visualisation, wordpress

### Estimating Apache2 Restarts

This is part of a series of articles related to analysing the Honeynet Log Mysteries Challenge data set by applying the Scientific Method (see Casey2009 and Carrier2006) and utilising data visualisation (see Conti2007 and Marty2008).

Using just the logging events present in sanitized_log/apache2/www-*.log, this article explores how we might estimate the Apache2 restart times by reconstructing the scoreboard worker thread data structure.
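The kind of inference involved can be illustrated with a toy sketch. This is not the reconstruction technique from the article itself: it simply assumes each logging event can be attributed to a counter that increases monotonically within one server generation (e.g. a slot-allocation sequence recovered from the scoreboard), so that a decrease in the counter marks a candidate restart.

```python
def estimate_restarts(events):
    """Return timestamps at which a restart plausibly occurred.

    events: iterable of (timestamp, counter) pairs, where the counter is
    assumed to increase monotonically within a single server generation.
    A decrease therefore signals a new generation, i.e. a restart.
    """
    restarts = []
    prev = None
    for ts, counter in events:
        if prev is not None and counter < prev:
            restarts.append(ts)
        prev = counter
    return restarts

events = [("10:00", 1), ("10:05", 2), ("10:09", 3),
          ("10:15", 1),  # counter reset → candidate restart
          ("10:20", 2)]
print(estimate_restarts(events))  # → ['10:15']
```

The hard forensic work, of course, is in recovering such a per-generation counter from the raw access logs in the first place, which is what the scoreboard reconstruction provides.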

In the final blog article in this series, we shall look at how the work of Florian Buchholz (e.g. see An Improved Clock Model for Translating Timestamps) can be used to measure logging event times relative to a suitable reference clock description.

Tags: honeynet, digital forensics, data visualisation, apache2, score board

### Wordpress Versioning: Part 1

This is part of a series of articles related to analysing the Honeynet Log Mysteries Challenge data set by applying the Scientific Method (see Casey2009 and Carrier2006) and utilising data visualisation (see Conti2007 and Marty2008).

Using just the logging events present in sanitized_log/apache2/www-*.log, this article explores how we might estimate the version numbers for WordPress and its plugins. A later article will explore how to add probabilistic certainties to such versioning estimates.

Tags: honeynet, digital forensics, data visualisation, wordpress

### Tagging and Timelines: Part 2

This is part of a series of articles related to analysing the Honeynet Log Mysteries Challenge data set by applying the Scientific Method (see Casey2009 and Carrier2006) and utilising data visualisation (see Conti2007 and Marty2008).

In Tagging and Timelines: Part 1 we introduced a tagging algorithm that utilised Debian's package tags (i.e. debtags). This blog post explores how we may use these tagging relationships, along with an interactive timeline (implemented using Protovis), to explore and analyse the auth.log sudo events.

Tags: honeynet, digital forensics, data visualisation, timeline

### Tagging and Timelines: Part 1

This is part of a series of articles related to analysing the Honeynet Log Mysteries Challenge data set by applying the Scientific Method (see Casey2009 and Carrier2006) and utilising data visualisation (see Conti2007 and Marty2008).

Using a tagging algorithm that utilises Debian's package tags (i.e. debtags), this blog post explores how we may quickly and objectively classify logging events. The next post introduces an interactive timeline (implemented using Protovis) that we shall use to further explore and analyse the auth.log sudo events.
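The flavour of the approach can be sketched as follows. The log line, the binary-to-package mapping, and the debtags excerpt below are all illustrative stand-ins; a real analysis would resolve binaries to packages via the dpkg database and load the full tag vocabulary from the debtags data.

```python
import re

# Hypothetical excerpt of a debtags-style mapping from package to tags.
DEBTAGS = {
    "coreutils": {"role::program", "scope::utility"},
    "apt": {"admin::package-management", "role::program"},
}
# Which package ships which binary (again, a simplified stand-in).
BINARY_TO_PACKAGE = {"/bin/chown": "coreutils", "/usr/bin/apt-get": "apt"}

SUDO_RE = re.compile(r"sudo:.*COMMAND=(\S+)")

def tag_sudo_event(log_line):
    """Map a sudo auth.log line to the debtags of the command's package."""
    match = SUDO_RE.search(log_line)
    if not match:
        return set()
    package = BINARY_TO_PACKAGE.get(match.group(1))
    return DEBTAGS.get(package, set())

line = "Apr 19 05:56:05 app-1 sudo: user : TTY=pts/2 ; COMMAND=/bin/chown"
print(tag_sudo_event(line))
```

Because the tags come from a curated, externally maintained vocabulary, the resulting classification of events is both quick and objective rather than ad hoc.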

Tags: honeynet, digital forensics, data visualisation, tagging

### Apache2 Version Analysis: Ubuntu Packaging

During a recent attempt at answering the Honeynet Log Mysteries Challenge, I wrote a series of reasoned analyses for the supplied Honeynet logging data. Unfortunately, teaching workloads stopped me from submitting any realistic challenge answer.

Inspired by the idea of applying the Scientific Method to Digital Forensics (see Casey2009 and Carrier2006) and using data visualisation (see Conti2007 and Marty2008), I set about attempting to apply the same principles to analysing the Log Mysteries data sets.

In the blog post Apache2 Version Analysis: Data Visualisation, we estimated that Apache2 is at a revision < 596448 (i.e. tag release ≤ 2.2.6) and, under minimal additional assumptions, that Apache2 is at a revision ≥ 420983 (i.e. tag release ≥ 2.2.3). Obviously, these revision and tag numbers are taken relative to the Apache2 Subversion repository and not the Ubuntu package repository.

As Ubuntu packages (like Debian packages) essentially consist of the original (pristine!) upstream source code (e.g. a tagged release straight from the Apache2 Subversion repository) plus a patch that is applied on installation, we clearly have some extra work to do here!

Tags: honeynet, digital forensics

### Apache2 Version Analysis: Data Visualisation

During a recent attempt at answering the Honeynet Log Mysteries Challenge, I wrote a series of reasoned analyses for the supplied Honeynet logging data. Unfortunately, teaching workloads stopped me from submitting any realistic challenge answer.

Inspired by the idea of applying the Scientific Method to Digital Forensics (see Casey2009 and Carrier2006) and using data visualisation (see Conti2007 and Marty2008), I set about attempting to apply the same principles to analysing the Log Mysteries data sets.

In the blog post Apache2 Version Analysis, we presented an argument that purported to provide an upper bound on the version of Apache2 present on the Log Mysteries web server. As noted in that post, this version estimate had a subtle error that needed to be located and fixed. In this article, we rectify the situation by using a timeline to correctly estimate that Apache2 is at a revision < 596448 (i.e. tag release ≤ 2.2.6). Under minimal additional assumptions, we can also deduce that Apache2 is at a revision ≥ 420983 (i.e. tag release ≥ 2.2.3).

Tags: honeynet, digital forensics, data visualisation, timeline

### Apache2 Version Analysis

During a recent attempt at answering the Honeynet Log Mysteries Challenge, I wrote a series of reasoned analyses for the supplied Honeynet logging data. Unfortunately, teaching workloads stopped me from submitting any realistic challenge answer.

Inspired by the idea of applying the Scientific Method to Digital Forensics (see Casey2009 and Carrier2006), I set about attempting to apply the same principles to analysing the Log Mysteries data sets.

Using just the apache2/www-* logs from the Log Mysteries Honeynet challenge, this blog post demonstrates how we can establish upper bounds on the version of Apache2 used and, more interestingly, recover data regarding Apache's worker threads. We also show how to obtain the log events with microsecond (instead of just second) timestamp accuracy.

Tags: honeynet, digital forensics