Secure Email On OSX

There are a few really good articles out there on how to send and receive secure email using Thunderbird and GPG. This small guide will show you how you can use the Mail app along with GPG Tools for the same result.

Install GPG Tools

First, head over to https://gpgtools.org/ and download the latest release. At the time of this article, the latest stable release is 2.1.

To start off on the right foot, before you install it, open up Terminal and verify that the package you've downloaded matches the following SHA-1 checksum: ac7a636bfee1027d8f43a12a82eea54e7566dcb8. This can be accomplished with the following commands:

$ cd ~/Downloads
$ shasum GPG\ Suite\ -\ 2013.10.22.dmg
ac7a636bfee1027d8f43a12a82eea54e7566dcb8  GPG Suite - 2013.10.22.dmg

Once you can verify that the dmg file that you've downloaded hasn't been tampered with during transfer, go ahead and open it and run through the install process. This will install the base GPG tools, a graphical key manager and a plugin for Mail.

Create A Key

Next you'll want to create a key. Open the newly installed "GPG Keychain Access" application and click the "New" button to create a key. You'll be prompted for your full name and email address, which you should fill in. Be sure to also check the box to have the public key uploaded once generated. Having accurate information is vital, especially if this is your first time going through this process. I strongly recommend setting the comment to your website or Twitter handle under the advanced options.

Next, you'll be prompted to set a password for your key. Choose a strong password. Depending on your system, it may take a few moments for the key to be generated after your password is accepted. Don't be alarmed.

Configure Mail

There shouldn't be anything extra needed to send and receive encrypted and/or signed email through the Mail app now. In the Mail app preferences there is a "GPGMail" section that should indicate that GPGMail is ready for use. It is set to encrypt/sign drafts and sign all new messages by default.

Test Sending Signed Mail

From Mail, create a new message to send to a loved one, friend, coworker or the like. Once you fill in the To, Subject, and Body, ensure that the message is signed by clicking the checkmark box button within the new mail window. If you have Mail configured to sign by default, you may be prompted within a few seconds to give the password for the key.

It is important to note that you can sign outbound email to anyone, but you can only encrypt email messages to people who have given you their public key. This is where the GPG Keychain Access app comes into play.

With the GPG Keychain Access app you can also import key files given to you and search for keys for people you may know. If someone sends you their public key you can use the "import" feature to load the key into your keyring. Alternatively, if you know the email address or name, you can attempt to search for keys associated with them on public key servers.
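
If you prefer the command line, both of those operations can also be done with gpg directly. This is just a sketch; the file name and address below are placeholders for whatever you were actually given:

$ gpg --import their-public-key.asc
$ gpg --search-keys friend@example.com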

When composing emails to addresses that have public keys associated with them, you'll have the option of encrypting the email messages being sent. If you don't have any other public keys in your key ring, you can test this by sending an encrypted email to yourself.
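
If you want to sanity check a key outside of Mail first, a quick command-line round trip works as well. A minimal sketch, with your own address standing in for you@example.com:

$ echo 'hello me' | gpg --armor --encrypt --recipient you@example.com > test.asc
$ gpg --decrypt test.asc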

Tips

Guard your private key. It is critical that you ensure your private key is safe and secure. For everyday use, keeping it on a personal, non-public computer is probably enough. If you feel that a computer that has your private key on it has been compromised, infected by a virus or malware, etc., you should revoke the key and create a new one.

Find a thumbdrive that you don't use and back up your keyring to it. This should also include a revocation certificate. A revocation certificate will allow you to revoke the key if the key is lost or compromised.
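
Producing both pieces only takes a couple of commands. A minimal sketch, with you@example.com standing in for your own key and the file names being arbitrary; the results are what you'd bundle into the backup.zip used below:

$ gpg --output revoke-cert.asc --gen-revoke you@example.com
$ gpg --export-secret-keys --armor you@example.com > secret-key-backup.asc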

When backing up your private key, consider using symmetric encryption using a password to encrypt the backup file. This can be done with GPG using the following command:

$ cd /Volumes/thumbdrive
$ gpg --output backup.zip.gpg --symmetric backup.zip

When you need to decrypt your backup, you can use the following command:

$ gpg --output backup.zip -d backup.zip.gpg

When publishing your key on your blog or website, you can export a plain text version of your key that can be read as text and imported easily using the following command:

$ gpg --armor --export person@wherever.place

The output of that command can be placed inside of a pre block as-is. It is the most direct way to share your key with someone viewing your blog or website. An alternative would be to create a small signature block telling people how to find your key.

$ gpg --fingerprint nick@gerakines.net
pub   4096R/4F96B2E4 2013-06-15
      Key fingerprint = 9530 23D8 48C3 5059 A2E2  4888 33D4 3D85 4F96 B2E4
...
$ gpg --clearsign

You need a passphrase to unlock the secret key for
user: "Nick Gerakines (http://ngerakines.me/) <nick@gerakines.net>"
4096-bit RSA key, ID 4F96B2E4, created 2013-06-15

9530 23D8 48C3 5059 A2E2  4888 33D4 3D85 4F96 B2E4
nick@gerakines.net

^D
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

9530 23D8 48C3 5059 A2E2  4888 33D4 3D85 4F96 B2E4
nick@gerakines.net

-----BEGIN PGP SIGNATURE-----
Version: GnuPG/MacGPG2 v2.0.22 (Darwin)
Comment: GPGTools - http://gpgtools.org

iQIcBAEBCAAGBQJTi8pOAAoJEDPUPYVPlrLkwZkP/3PxOQfNAlF0W5JVImPltVMr
9rqNK/9T07cU8qCugECX0U+CPsz5+fY9t6KuPb9XQv1SZT/s0Cdu0NoV83/zyTJe
VmCpnDwDYa1k8PsfiYHziM/BQ4N8HFlc/rNwsyfS+v9o2Pa2nEJA6OmU+jsVg25A
vyGfgH6fK/QeWRIlFIMfuh5b0+XSOA0E6/xTHFSNHdn3oYA4xjNsE6AajHekcYAS
l99uZZhqu+bnKLaCpxLHjZbTcjGuZcacIyTXNh20VcHtgZS0VvUWKyRvJ9PPZcwJ
oidbGTQkx5GJJJrXREoncHsh5uVt0SUJk/Cb2B43sICzTD1+5tENpK6kUnxlo2bi
O0rzEFSZRVme3GiDTZc5pV7DoWUS28EiJl6LLc7hU7d8lwsme69/3tV85mEdyDzJ
4OnFDQ39qIHfHhnswyumTAYnI/31GWrWfCl/UL3MOd4HKQhxsuQWi/zOWVAlvHJN
/lwIh3yiH5PGJsOUKs04XoOgNaZLC2A2vq9FUng+hi7WfGBzYkPc/RLgNxI9cU9H
dADC+Np4DRQ71YMSX9oYpUpybq6IdA68rrWbdjfDMc+ZQBDZz83zk7xRMLfws1ut
u2n6uzAVvYe/FjGjBaNXJ++yE8oIC38RDBG14nJDBK+cdqZpBP0Lxd+nGRB6VxcX
ZBOr7eKH1bpVjSbuOX1S
=OhTU
-----END PGP SIGNATURE-----
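
With either approach, someone who has your key ID or fingerprint can then pull the public key down from a keyserver. For example, using the key ID shown above (which keyserver is used depends on their gpg configuration):

$ gpg --recv-keys 4F96B2E4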

Chef Application Cookbooks

This is a follow up to the blog post Creating A Chef Cookbook. Since writing that blog post, I created the preview project and several cookbooks for it. Along the way, I've done a few things differently that I believe represent some notable trends in the Chef community.

Embed Application Cookbooks in Application Repositories

Instead of having a separate git repository for each cookbook, all of the cookbooks for the preview application are in the preview git repository in the 'cookbooks' directory.

Practically, this doesn't change anything for the cookbook itself. In projects that use berkshelf and reference this cookbook, I have to update the Berksfile to point to the project repository and the subdirectory that contains the cookbook. When a cookbook is uploaded to a chef server, the location is irrelevant.

There is one thing that I do want to make very clear: The application build process doesn't build or prepare the cookbooks. The cookbooks are independently built, tested and released. I've seen projects where the cookbook is "generated" as part of the build process for the application and I feel strongly against that.

Create Environment Cookbooks

To learn more about the environment cookbook pattern, read The Environment Cookbook Pattern written by Jamie Winsor.

In addition to the primary "preview" application cookbook, I created an environment cookbook called "preview_prod". This cookbook is used to represent the default configuration, files and actions needed to release the preview application into a production-like environment.

When looking at environment cookbooks, it is really important to note that these don't contain node information and attributes, but rather represent what a configuration of the application cookbook looks like in a given environment.

In the case of the preview_prod/metadata.rb file, I list several attributes that are required by the cookbook:

name             'preview_prod'
maintainer       'Nick Gerakines'
maintainer_email 'nick@gerakines.net'
license          'MIT'
description      'Installs/Configures preview_prod'
long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
version          '0.2.2'

depends 'preview'

supports 'centos'

recipe 'preview_prod::node', 'Configures and prepares a preview application node.'
recipe 'preview_prod::storage', 'Configures and prepares a storage node.'

attribute 'preview_prod/node_id',
  :display_name => 'The id of the preview node.',
  :required => 'required',
  :type => 'string',
  :recipes => ['preview_prod::node']

attribute 'preview_prod/cassandra_hosts',
  :display_name => 'The cassandra hosts used by the preview node.',
  :required => 'required',
  :type => 'array',
  :recipes => ['preview_prod::node']

attribute 'preview_prod/edge_host',
  :display_name => 'The base url used to request assets from the cluster.',
  :required => 'required',
  :type => 'string',
  :recipes => ['preview_prod::node']

attribute 'preview_prod/s3Key',
  :display_name => 'The S3 key used to store generated assets.',
  :required => 'required',
  :type => 'string',
  :recipes => ['preview_prod::node']

attribute 'preview_prod/s3Secret',
  :display_name => 'The S3 secret key used to store generated assets.',
  :required => 'required',
  :type => 'string',
  :recipes => ['preview_prod::node']

attribute 'preview_prod/s3Host',
  :display_name => 'The S3 host used to store generated assets.',
  :required => 'required',
  :type => 'string',
  :recipes => ['preview_prod::node']

attribute 'preview_prod/s3Buckets',
  :display_name => 'The S3 buckets used to store generated assets.',
  :required => 'required',
  :type => 'array',
  :recipes => ['preview_prod::node']

Even though the preview_prod::node and preview_prod::storage recipes describe how to create production-like preview cluster nodes separately, the preview_prod::default recipe exists to allow engineers to deploy to a single, full-stack node. This follows the idea that the default recipe's purpose should be to represent the most common and simplest use for engineers who are new to the cookbook.

In the preview_prod::node recipe, we use the required preview_prod attributes (which have no defaults of their own) to override attributes that do have default values in the preview cookbook:

node.override[:preview][:config][:common][:nodeId] = normal[:preview_prod][:node_id]
node.override[:preview][:config][:storage][:engine] = 'cassandra'
node.override[:preview][:config][:storage][:cassandraKeyspace] = 'preview'
node.override[:preview][:config][:storage][:cassandraHosts] = normal[:preview_prod][:cassandra_hosts]
node.override[:preview][:config][:simpleApi][:edgeBaseUrl] = normal[:preview_prod][:edge_host]
node.override[:preview][:config][:uploader][:engine] = "s3"
node.override[:preview][:config][:uploader][:s3Key] = normal[:preview_prod][:s3Key]
node.override[:preview][:config][:uploader][:s3Secret] = normal[:preview_prod][:s3Secret]
node.override[:preview][:config][:uploader][:s3Host] = normal[:preview_prod][:s3Host]
node.override[:preview][:config][:uploader][:s3Buckets] = normal[:preview_prod][:s3Buckets]

include_recipe 'preview::default'

Your mileage may vary in terms of what a production cookbook should look like. The preview project is open source and public, but for internal environment cookbooks you may have default values or databag references for attribute values.

Build Cookbook

This is another pattern that I'm using at work and really like: creating a cookbook to bootstrap a development environment. Again, this is another take on the environment cookbook pattern.

Specifically for this project, this cookbook installs the version of the golang compiler required to build the preview application, as well as supporting tools like git. In the preview_build/recipes/default.rb file, this looks like:

include_recipe 'golang::default'

node.default['go']['packages'] = ['github.com/gpmgo/gopm']

include_recipe 'golang::packages'

The preview project is open source and public, so I'm using travis-ci (https://travis-ci.org/ngerakines/preview) to compile the application and run the short tests. The build cookbook pattern is useful if you've got a build environment and CI that has a build agent. The cookbook would be applied to the build agent and the chef-client command executed at the beginning of the build agent run to ensure that it is up to date.

For a disposable build environment, we can use environment variables to create a GOPATH dynamically:

GOPATH=$PWD/gopath-`date +%s`
echo "export GOPATH=$GOPATH" > env-gopath

Then, your commands would look like:

$ . path/to/env-gopath
$ go get ./...
$ go build
$ go test ./... -test.short
$ rm -rfv $GOPATH env-gopath

Practically, it makes sense to use something like gopm to fetch specific versions of the packages used. The above script could be updated to use gopm instead.
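
As a rough sketch only, and assuming gopm's get and build subcommands behave the way its documentation describes, the fetch-and-build steps might then look something like this:

$ go get github.com/gpmgo/gopm
$ gopm get
$ gopm build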

Application Integration Test Cookbook

For this project, I took it one step further and created an additional cookbook called preview_test that contains recipes, configuration and files to run integration tests. This cookbook is still heavily in development as I'm using it to learn how to effectively use chef-metal and kitchen-metal. I'll put up another blog post when I've got something demonstrable.

Creating a Chef Cookbook

In May, I wrote a cookbook for the s3ninja project and wanted to share how I go about writing application cookbooks. This cookbook is primarily used to test another project that I'm working on, tram. In the tram cookbook, I include this cookbook for use in the cookbook integration tests, so this is an interesting use case for an application that can stand on its own as well as be included in an application stack.

Step 1: What are we creating here?

There are a few things that I wanted to get out of this:

  1. An application cookbook that can be used to release the s3ninja application
  2. Support for both Centos and Ubuntu
  3. Cookbook unit tests
  4. Cookbook integration tests

My local cookbook development environment is pretty simple. I've got Ruby 1.9.3 installed through RVM as well as the chef, berkshelf, foodcritic, test-kitchen, rspec and chefspec gems. I'm also using a somewhat recent version of VirtualBox.

For this project, chef and berkshelf are required for general cookbook development and testing. Foodcritic is used as a sanity checking tool to make sure my cookbooks don't contain anything that is too far from the generally accepted development patterns used by the community. For unit testing I'll be using chefspec. For integration testing I'll be using test-kitchen and serverspec to create test suites that can be executed against different OS configurations.
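
For reference, getting that toolchain in place is mostly a matter of installing the gems; a minimal sketch, assuming a working RVM Ruby 1.9.3:

$ gem install chef berkshelf foodcritic test-kitchen rspec chefspec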

Step 2: Creating the cookbook

With the development environment configured and ready, I started by creating a new cookbook using berkshelf:

$ cd ~/development/ngerakines
$ berkshelf cookbook s3ninja
$ mv s3ninja s3ninja-chef-cookbook

This cookbook exists outside of the s3ninja project for a few reasons, the primary one being that I'm not the maintainer of the s3ninja project and I'm not sure that they use chef. Alternatively, I would place the cookbook in the "cookbooks/s3ninja" directory within the s3ninja project repository.

The berkshelf cookbook s3ninja command creates a new directory with the cookbook's name and places a skeleton cookbook within it. Within that cookbook are a few key files to note and update:

name             's3ninja'
maintainer       'Nick Gerakines'
maintainer_email 'nick@gerakines.net'
license          'MIT'
description      'Installs/Configures s3ninja'
long_description IO.read(File.join(File.dirname(__FILE__), 'README.md'))
version          '0.1.0'

depends 'yum', '~> 3.2.0'
depends 'apt', '~> 2.3.10'
depends 'java', '~> 1.22.0'

supports 'centos', '>= 5.8'
supports 'ubuntu', '>= 12.04'

In the above metadata.rb file, you can see what the cookbook name is, who maintains it, the version and then what cookbooks it depends on and what operating systems it supports. This file is important because it is used to define and describe the cookbook.

In the attributes/default.rb file, I'm going to list all attributes specific to the cookbook and application. In this cookbook we just have one so far: the source location of the s3ninja package.

default[:s3ninja][:package_source] = "https://github.com/ngerakines/s3ninja/releases/download/latest/s3ninja.zip"

Next we have our Berksfile. This file is used by berkshelf to describe where and how the cookbook's dependencies are retrieved.

site :opscode

metadata

cookbook 'apt'
cookbook 'yum'
cookbook 'java'

This cookbook uses community cookbooks, so this file doesn't have to contain anything special.

As for recipes, the recipes/default.rb is going to be our entry point to the application cookbook and should provide everything that falls under the "sane defaults" category of cookbook work. In this case, that work would be to make sure the application's dependencies are installed, the application unpacked and services defined. When writing cookbooks, I write recipes to align with intent, so we'll break things out into "app" and "deployment" recipes.

With that, our recipes/default.rb file is going to simply include the app and deployment recipes:

include_recipe 's3ninja::app'
include_recipe 's3ninja::deployment'

The recipes/app.rb recipe does the heavy lifting of preparing the s3ninja application environment: it fetches the s3ninja package, unpacks it and configures it. The first thing it does is include the dependent recipes and set any attributes needed.

include_recipe 'apt::default'
include_recipe 'yum::default'

node.default['java']['jdk_version'] = 7

include_recipe 'java::default'

In this case, we include the apt, yum and java default recipes. Before the java::default recipe is included, we set the JDK version attribute to 7 because the s3ninja application package is compiled against Java 7. Even though this recipe is going to be running against both CentOS and Ubuntu environments, we are including both the apt and yum default recipes. We rely on them to intelligently exclude themselves from running if the node doesn't support them.

Next we want to create the s3ninja user and prepare the directories that house the unpacked application.

user 's3ninja' do
  username 's3ninja'
  home '/home/s3ninja'
  action :create
  supports ({ :manage_home => true })
end

group 's3ninja' do
  group_name 's3ninja'
  members 's3ninja'
  action :create
end

package 'unzip' do
  action :install
end

Next, we fetch the release package, unpackage it and then do any follow-up tasks. In this case, we want to make sure that permissions are correct for the application files.

remote_file "#{Chef::Config[:file_cache_path]}/s3ninja.zip" do
  source node[:s3ninja][:package_source]
end

bash 'extract_app' do
  cwd '/home/s3ninja/'
  code <<-EOH
    unzip #{Chef::Config[:file_cache_path]}/s3ninja.zip
    EOH
  not_if { ::File.exists?('/home/s3ninja/sirius.sh') }
end

execute 'chown -R s3ninja:s3ninja /home/s3ninja/'

file '/home/s3ninja/sirius.sh' do
  mode 00777
end

There are a few things going on here that aren't great. The first is that we install unzip and then use a bash block to unzip the downloaded archive. Ideally, we'd use a cookbook recipe that can unpack the zip file that contains the application. We then follow up with an execution of the chown command to ensure that everything inside the home directory is owned by the s3ninja user and group. The /home/s3ninja/sirius.sh file is also re-permissioned in case it was packaged or unpackaged in a way that loses the execute permission.

Next, the recipes/deployment.rb recipe file will create and place the init script as well as define the s3ninja service.

template '/etc/init.d/s3ninja' do
  source 's3ninja-init.erb'
  mode 0777
  owner 'root'
  group 'root'
end

service 's3ninja' do
  provider Chef::Provider::Service::Init
  action [:start]
end

Step 3: Unit tests with ChefSpec

ChefSpec is a set of rspec extensions that let cookbook authors quickly test that their cookbooks are doing everything as expected.

The chefspec test files reside in the spec/recipes directory within the cookbook project and have a file suffix of _spec.rb. What I like to do is have one test file for each recipe in the cookbook.

  • spec/recipes/default_spec.rb
  • spec/recipes/app_spec.rb
  • spec/recipes/deployment_spec.rb

Each test file includes platform version mocking, and ends up looking like this:

require 'chefspec'
require 'chefspec/berkshelf'
ChefSpec::Coverage.start!

platforms = {
  "ubuntu" => ['12.04', '13.10'],
  "centos" => ['5.9', '6.5']
}

describe 's3ninja::recipe' do
  platforms.each do |platform_name, platform_versions|
    platform_versions.each do |platform_version|
      context "on #{platform_name} #{platform_version}" do

        let(:chef_run) do
          ChefSpec::Runner.new(platform: platform_name, version: platform_version) do |node|
            node.set['lsb']['codename'] = 'foo'
          end.converge('s3ninja::recipe')
        end

        ## Test code goes here.

      end
    end
  end
end

For the spec/recipes/default_spec.rb file, we want to make sure that it is simply including the s3ninja::app and s3ninja::deployment recipes with the following test code:

it 'Includes dependent recipes' do
  expect(chef_run).to include_recipe('s3ninja::app')
  expect(chef_run).to include_recipe('s3ninja::deployment')
end

The spec/recipes/app_spec.rb file is a bit longer, but includes all of the actions of the app recipe:

it 'includes dependent recipes' do
  expect(chef_run).to include_recipe('apt::default')
  expect(chef_run).to include_recipe('yum::default')
  expect(chef_run).to include_recipe('java::default')
end

it 'creates the user and groups' do
  expect(chef_run).to create_user('s3ninja')
  expect(chef_run).to create_group('s3ninja')
end

it 'installs required packages' do
  expect(chef_run).to install_package('unzip')
end

it 'downloads and unpacks the application package' do
  expect(chef_run).to create_remote_file('/var/chef/cache/s3ninja.zip')
  expect(chef_run).to run_bash('extract_app')
  expect(chef_run).to run_execute('chown -R s3ninja:s3ninja /home/s3ninja/')
  expect(chef_run).to create_file('/home/s3ninja/sirius.sh')
end

The spec/recipes/deployment_spec.rb has similar code, but again, verifies the actions of the deployment recipe:

it 'places the init script and starts the service' do
  expect(chef_run).to create_template('/etc/init.d/s3ninja')
  expect(chef_run).to start_service('s3ninja')
end

The tests can be run using the rspec command:

$ rspec
........................

Finished in 1.65 seconds
24 examples, 0 failures

ChefSpec Coverage report generated...

  Total Resources:   9
  Touched Resources: 9
  Touch Coverage:    100.0%

You are awesome and so is your test coverage! Have a fantastic day!

Step 4: Integration tests with ServerSpec

Even though this cookbook is going to be used as a component of another cookbook's tests, I still need to make sure that everything is set up and working properly. With test-kitchen, we can configure different operating systems (platforms) and test suites, and it will execute each permutation.

The first thing to do is update the .kitchen.yml file with the platforms that we want the integration tests to run on. In this case, we want to ensure that the cookbook works on Ubuntu 12.04, Ubuntu 13.10, CentOS 6.5 and CentOS 5.8.

---
driver:
  name: vagrant

provisioner:
  name: chef_solo

platforms:
  - name: ubuntu-12.04
  - name: ubuntu-13.10
  - name: centos-6.5
  - name: centos-5.8
    driver:
      box_url: https://dl.dropbox.com/u/17738575/CentOS-5.8-x86_64.box

suites:
  - name: default
    run_list:
      - recipe[s3ninja::default]
    attributes:

Then we create some test files to execute. In this project, I have all of the integration test logic in the test/integration/default/serverspec/localhost/s3ninja_spec.rb file:

require 'spec_helper'

describe 's3ninja' do

  describe 'app' do

    describe file('/home/s3ninja') do
      it { should be_directory }
    end

    describe file('/home/s3ninja/sirius.sh') do
      it { should be_file }
      it { should be_executable }
    end

  end

  describe 'service' do

    describe file('/etc/init.d/s3ninja') do
      it { should be_file }
    end

    describe port(9444) do
      it { should be_listening }
    end

  end

end

In it, we ensure that the application directory and startup script both exist. Then we ensure that the init script used to start the service exists, that the service is listening on the default port and several test commands complete successfully. Personally, I like doing minimal application testing within the cookbook integration test to ensure everything is working as expected.

To run integration tests, I use the kitchen command to view and run them.

$ kitchen list
Instance             Driver   Provisioner  Last Action
default-ubuntu-1204  Vagrant  ChefSolo     <Not Created>
default-ubuntu-1310  Vagrant  ChefSolo     <Not Created>
default-centos-65    Vagrant  ChefSolo     <Not Created>
default-centos-58    Vagrant  ChefSolo     <Not Created>
$ kitchen test
-----> Starting Kitchen (v1.2.1)
-----> Cleaning up any prior instances of <default-ubuntu-1204>
-----> Destroying <default-ubuntu-1204>...
       Finished destroying <default-ubuntu-1204> (0m0.00s).
-----> Testing <default-ubuntu-1204>
-----> Creating <default-ubuntu-1204>...
       Bringing machine 'default' up with 'virtualbox' provider...
       [default] Importing base box 'opscode-ubuntu-12.04'...
...
s3ninja       
  app       
    File "/home/s3ninja"       
      should be directory       
    File "/home/s3ninja/sirius.sh"       
      should be file       
      should be executable       
  service       
    File "/etc/init.d/s3ninja"       
      should be file       
    Port "9444"       
      should be listening       

       Finished in 0.04707 seconds
5 examples, 0 failures       
       Finished verifying <default-centos-58> (0m1.46s).
-----> Destroying <default-centos-58>...
       [default] Forcing shutdown of VM...
       [default] Destroying VM and associated drives...
       Vagrant instance <default-centos-58> destroyed.
       Finished destroying <default-centos-58> (0m2.36s).
       Finished testing <default-centos-58> (17m46.48s).
-----> Kitchen is finished. (28m30.02s)

Tying things off

There are a few additional files used by the cookbook and tests, so take a look at the s3ninja-chef-cookbook to see a complete picture of what it looks like. To see how this cookbook is being used, check out the tram-chef-cookbook. In it, I have this cookbook being referenced in an embedded test cookbook for integration testing.

How I Work

I love reading the how-i-work posts on Lifehacker. They aren't the first to do that sort of series, but I love the mix of writers, industry leaders, software engineers and designers that contribute to it. With that, this is my contribution.

Current gig: Software Engineer
Location: Centerville, Ohio, USA
Current mobile device: Google Glass, Samsung S4, Nexus 10
Current computer: Apple Macbook Pro, System76 Ratel Performance
One word that best describes how you work: Furiously

What apps/software/tools can't you live without?

A solid Linux install that keeps itself clean and up to date so that I don't have to deal with that. Currently using Ubuntu but have a long history with Red Hat and Gentoo. I use Sublime Text whenever I'm not using Vim. I've been using Chrome for a few years now, although I'm considering moving back to Firefox with all of the improvements they've made.

For Java development, I really couldn't live without IntelliJ. I use it for both professional and personal projects and gladly paid for a license to use at home.

To manage the constant flood of information, I use pinboard and Digg reader. Being a former del.icio.us engineer, I've got well over 5k bookmarks and add new links and notes daily. When Google Reader went away, I hopped between a few different feed readers but settled on the one that Digg created.

When gaming, I either stick with my PS3 or games purchased through Steam. The two games that I've been playing the most recently are Starbound and Hearthstone.

What's your workspace setup like?

I have a home office and work remotely. I have a sturdy glass desk with a two-monitor arm that I picked up off of Amazon a while back. One monitor runs off of my laptop and the other is for my desktop. I use Synergy to share a single keyboard and mouse across the two. My current keyboard is the Razer BlackWidow Ultimate with blue switches and my mouse is a Logitech M500.

Next to my desk is a large bookshelf that I use for keepsakes, books and misc storage. I've got a bunch of signed games from working at Blizzard and a few from when I was at EA. Most everything else was either made or gifted.

What do you listen to while you work?

I'm a big supporter of Pandora and have been for a while. A few years ago I copied all of my music into Amazon Cloudplayer and use the two services regularly. Every now and then I try Spotify, but don't find it compelling. I used to spend a lot of time managing playlists and whatnot, but over the years came to the conclusion that I listen to music to help me think and code (for the most part) so I don't really care what it is.

What's your best time-saving trick?

Doing one thing at a time. I put a lot of effort into not multi-tasking. If I need to change or context switch, I'll close everything out so that the next thing has 100% of my attention and focus.

What's your favorite to-do list manager?

At work I use Jira. At home, I stick to a todo.txt file.

Besides your phone and computer, what gadget can't you live without?

Google Glass, without question. Although I don't use it for taking pictures or recording videos that much, I find the notifications and information really helpful.

Amazon, you've got competition

tl;dr I'm using a Chromecast and the Google Play store to purchase and rent movies instead of Amazon Prime Instant Video. This is a change from what I've been doing, and unless Amazon makes it easier, faster or cheaper, they'll probably continue to lose business from me.

A few months ago I picked up a Nexus 10. I love it, it is great. A few weeks ago I made my first non-app purchase in the Play store: some movies to watch on a flight to SFO. Watching movies on the tablet was a great experience. The purchase was quick, downloading them to the device was fairly simple and the battery life on the device let me watch several movies with plenty of juice left over.

I've also been an Amazon Prime member for a while and the whole family has been using Amazon's streaming video service through our PS3 for a few years now. In fact, between Amazon Prime and Netflix, we didn't have cable for several years. The selection of movies and TV shows on Amazon is great and the purchase/rent flow is really easy to use.

With the release of the Chromecast, Amazon has been put on notice. During the Super Bowl, I let Vanessa pick out a movie from the Google Play store on my tablet and, with the Chromecast, she watched it on a TV that doesn't have any other capability to stream Amazon or Netflix. Using the Chromecast to watch a video purchased and streamed from my Nexus 10 was super easy, and my 7-year-old figured it out pretty quickly.

What the Chromecast does for me is make streaming something that I can do anywhere within my house. I don't have to have the PS3 set up, I don't have to run any special streaming software or have video files on my Linux box, I don't have to deal with clunky Time Warner Cable software, and most of all I don't have to have a device that is fixed to each and every TV. I have one Chromecast and it is small and light enough to move from TV to TV without being a big problem. I'd also rather own 2-3 of them than spend several hundred dollars to have a Roku/PS3 on each TV.

Amazon is great and I love the selection of movies they have, but with my Nexus 10, our multiple computers, the Google Play store and my Chromecast, I'll probably continue purchasing movies elsewhere. I like the portability and versatility.