
A day in the life of a GoCardless software engineer

I’m Tim, and I’m a software engineer at GoCardless. I’ve been here for about four and a half years. I work on our UX team, building customer-facing bits of GoCardless, such as our dashboard and developer API. My team focuses on making them as powerful and easy to use as possible.

The early days

I first joined the team back in 2012, just a few months after GoCardless had launched into beta. I was attracted by the boldness of what the company was trying to do: making life better for small businesses and disrupting banks’ traditional monopoly. Since then I’ve worked in a variety of roles across the company, from setting up and running our customer support operation, to leading our partnerships team, to where I am now.

On a typical day, I travel in from my flat south of the Thames and arrive in the office at about 8:20am. I’ll usually spend about 40 minutes catching up with my emails and doing little things on my to-do list while it’s still quiet in the office. Then I go to the gym across the road for a run or to lift some weights. Keeping fit is really important to me and I notice the difference in my productivity.

Once I get back to the office, the UX team starts its day with a “stand-up”. There, we spend about ten minutes as a team sharing what we were working on yesterday, what we’re planning to work on today and raising any areas where help would be useful or where something is blocking us from making progress. Next, I grab some breakfast (there’s a big choice provided by the company!) and a cup of tea before getting back to work.

We usually work on projects in small teams of three or four people - this means we can bounce ideas off each other, pair program (where two people work together on the same piece of code), review one another’s work and make progress as quickly as possible.

Connecting and communicating

Most recently, I’ve been leading a project improving our CSV exports. Our customers, especially larger ones, rely on these for their reporting and reconciliation. As a small project team, we scope out the work, then divide it up between us, working in two-week “sprints” towards our goal.

I’ve been writing the backend code in Ruby, which builds the CSVs in just the right format. One of our brilliant interns, Henri, has been working on the frontend interface in JavaScript and HTML, which our customers see, getting lots of support from me to get up to speed on our codebase and make the right design decisions.

Throughout the project, I’ve been working with others across the company (usually over a coffee!) - for example, I’ve been in regular contact with one of our salespeople, Michael, and one of our Support team, James, to work out exactly how the CSVs should look to meet our customers’ needs. I’ve also been working with our Product Lead, Duncan, to work out how we should communicate the changes we’re making to users. Henri has stayed connected with our Design team to make sure the customer experience looks and feels as good as possible.

I hugely enjoy interacting with people across the company. The people are brilliant and it’s interesting to hear about what others are working on. It’s also good to think about how a little investment from engineers like myself could make day-to-day working lives much more efficient for people here. We’re always looking to improve processes and reduce manual work, so people have more time to do what they do best.

Flexible working and side projects

I’ll usually power through work from 10am until about 3pm, then grab a late lunch (I’m invariably too engrossed in my work to eat any earlier!). Once I’ve eaten, I’ll get back to work for the last few hours of the day before heading home at about 6pm (although sometimes I’ll stick around if there’s something I’m particularly enjoying and keen to keep working on!).

I love working at GoCardless. I’m incredibly fortunate to work in a company packed full of smart people, to have the flexibility to work how I want to (whether that means heading to a coffee shop to work sometimes, going to the gym in the middle of the day or working lying on a sofa) and to get to constantly improve a product used by tens of thousands of people.

If you want to be a software engineer somewhere as great as GoCardless, I’d recommend one big thing: play around with side projects. I’ve worked on lots of fun projects in my spare time, from a tool to help people use their air miles more effectively to an open-source API library for Rap Genius. These kinds of things are great fun, give you lots more experience of writing code and look great on your CV.

Growth and…we’re hiring!

GoCardless is growing fast - and we need a lot of great people to help us continue that growth. Our Engineering team is hiring, so if you like what you've been reading, why not take a look at our careers page?


What makes an awesome company culture?

The world of work has come a long way over the last decade.

Where once employees were expected to suit up and bring their most serious work face to the office, these days it’s more about artisanal coffee, flexible working, and regular company events.

Call it a hangover from the tech startups of Silicon Valley, but whatever the source, employee expectations of work have changed. There’s no going back.

Our parents and grandparents grew up in a world where your studies led you into a job for life. If you were lucky, your boss would present you with a gold watch when you retired. In those days people used to think that security was the most important element of a job.

But today’s workforce sees things differently. Driven by wave after wave of technological advance and global opportunity, employees now expect companies to offer more than simple job security. They’re looking for meaning. And they find it in the culture of the company they work for.

A good company culture can have many benefits, both for the company itself and for its employees. What’s more, the two are intricately linked.

Culture defines a company; it’s the beating heart that keeps the company alive. Culture is the values and attitudes that drive the company. Ideally, these should be aligned with the values and attitudes of the people who work there.

When culture is unhealthy, it shows in the way people work together. They tend to work purely for the salary and benefits. They may have less loyalty to the company and be constantly on the lookout for something new.

On the other hand, a company with a healthy culture values everyone on the team, supports their goals, and develops an environment of inclusivity and collaboration.

In return, employees feel valued. They feel that the company has their best interests at heart. This fosters team spirit and boosts performance naturally. Great culture also brings many benefits for employees, from boosting motivation to enhancing their quality of life.

After all, we spend a large chunk of our lives in the office - it has a massive effect on our quality of life. What’s more, happy employees are quick to become company advocates, which can help us to hire more great people!

For the company, building a strong and positive culture brings great benefits too.

Having an awesome culture helps companies to:

  • Build a positive reputation
  • Develop good productivity
  • Keep the quality of their service or product high
  • Attract and retain the best talent!

At GoCardless, culture is at the heart of everything we do.

"The culture is super trusting, collaborative and open. As soon as you start at GoCardless, you’re given the freedom to work flexibly, and the trust to do it well. We care immensely about enabling people to produce work that they’re proud of, rather than checking what time they walk into the office,” says Jess Summerfield, Head of People.

The hiring process at GoCardless focuses not only on your skills, but on the characteristics you bring to the table as an individual.

If we hire you, we’ll do everything we can to improve your skillset, all the way from supporting additional training to encouraging you to get involved in all kinds of cross-departmental projects. We also encourage employees to pursue their interests outside of work, from computer science to Mandarin Chinese!

GoCardless cares about doing brilliant work, and to enable this, we need to give people an awesome place to do just that.

That doesn't just mean sorting out the aesthetics. A nice office is great, but culture means letting people learn and develop in the best ways for them, giving them autonomy, ownership and a sense of purpose.

Jess says: “The culture fit side of the interview process is as important as, if not more important than, the role fit side of it. The top piece of feedback that we get from the team is that it's the people who make GC an awesome place to work, and we want to keep it that way!

“We don’t want to hire people that are the same as us - far from it - but rather people who care about the same things that we do. I've seen the team grow from 35 people to 85 people over the past two years, and it's been a complete privilege to work with such a big group of amazing, talented people.”

At GoCardless, diversity is key. We have a real mix of employees from a range of backgrounds, but the one thing we all have in common is our company culture. We also have a high percentage of female employees for a tech company - 23% and rising - and we’re focused on growing that number even further.

To see some of our team talking about life at GoCardless, check out our video on Zealify.

Like what you see and read? We’re hiring! Why not take a look at our latest vacancies?


How to get paid more quickly this summer

The grey skies of London don’t look especially promising as August dawns today. But here at GoCardless HQ, we hope to brighten your day just a little by exploring ways to get your incoming payments under control. Wouldn’t it be great to have extra cash in the pipeline, just in time for summer?

For many small businesses, late paying customers are a constant headache.

Chasing them wastes valuable time and manpower. But even worse, late payments affect your cash flow. This may cause your business to miss out on important opportunities simply because of a lack of available cash. It’s frustrating to send yet another payment reminder on an overdue invoice, only to be met with an excuse, or even worse, with complete silence.

At GoCardless, we specialise in taking payments by Direct Debit, so naturally we believe it’s the most effective method for collecting recurring payments. Direct Debit allows you to take control of your payments and get paid on time. Here are some additional tips to handle late-paying customers and recover the money you’re owed.

Tips to get paid more quickly

  • Make sure that your payment terms are clear right from the start. State your terms in the contract and repeat them on every invoice you send to customers. This should include the full amount, the due date, and any late payment penalties (yes, we’ll get to those shortly).

  • Send your invoice as soon as you complete the work. Some people do forget, especially those lacking an automatic invoicing setup. There’s no room for delay when it comes to getting paid.

  • State late payment penalties and make sure you use them. You’re covered by UK law for this, so don’t forget your rights.

  • Chat to your customer about possible reasons for late payment. You’d be surprised at the difference a phone call can make. You may well discover an unexpected problem and can then work with the customer to resolve it and get paid.

  • Make it easy for your customers by letting them choose their preferred payment method. Then set up a system where they can pay you instantly if they want to, send them payment reminders before payment is due, and make sure your payment information is displayed clearly and accurately on every invoice.

  • Try out credit control solutions such as Satago or Chaser. These can be integrated into your accounting workflow. They automatically email customers when an invoice becomes overdue. Businesses that use them report getting paid up to 23 days faster.

  • Consider using invoice factoring to fund your cash flow. Currently used by 45,000 UK businesses, invoice factoring involves selling invoices at a discount to a third party, either a bank or an independent factoring provider such as MarketInvoice. These services unlock funds tied up in unpaid invoices so that your business gets paid without waiting for customers to pay first.

And finally (you knew it was coming!) why not encourage your customers to set up automatic payments by Direct Debit? It’s especially useful for regular recurring payments, as well as for automatically taking payment on invoices, and it makes sure that you never miss a payment again. That keeps your cash flow in good shape.

At GoCardless we use advanced technology to make Direct Debit accessible to businesses of all sizes, not just large organisations. Now even sole traders can take advantage of the benefits of the UK’s most reliable payment system.

We hope you can use some of our payment tips to improve your business situation this summer. Using GoCardless lets you take full control of your payments. To learn more about how we can help you, click here.


A day in the life of our Head of Legal

I’m the Head of Legal at GoCardless. My role is really varied so there’s no such thing as a typical or predictable day for me - I get involved in all kinds of tasks from designing a new contract management system through to reviewing foreign law advice on our international expansion.

Starting the day strong (and a little bit bruised)

I love to exercise and try to fit it in each morning - I think it’s a key part of my day and helps me feel energised. For the past three months I’ve been taking part in CrossFit at CrossFit CityRoad - it’s great but a real challenge; I’m using muscles I didn’t even know I had, and the ones I knew about seem useless!

After my workout, I’ll head into the office where I’ll make breakfast before checking my emails. There’s a pretty amazing selection of breakfast foods, with (literally) dozens of choices of granola. I try not to succumb to granola temptation, and typically have scrambled eggs on a bagel.

I receive a regular stream of email updates on the latest legal news and will read those over breakfast. Payments law is a relatively niche area, and not one in which I had experience before joining GoCardless, so I’m enjoying sharpening my regulatory skills.

After breakfast I look at my priorities for the week, which are set in a stand-up with the operations team every Monday. I’ll prepare a schedule for the day thinking about how to achieve those weekly goals while fitting them in around my various meetings.

Keeping ahead of hot topics

In terms of what I'm working on, there's always long-term proactive thinking about how the legal landscape is changing and what we need to do to put GoCardless in the best possible position. One super-hot topic right now is data protection.

Following the Snowden revelations, people's concerns have really intensified over who can access their data and what they might do with it. There have been a number of knock-on effects, such as the European Court of Justice declaring the EU-US Safe Harbor agreement invalid, making it far harder to transfer data out of Europe to the US. At GoCardless, we’re striving to set a great example when it comes to customer data - we think a good benchmark is treating that data as if it were our own personal data.

It’s such an important issue and takes a lot of work to get right - immediate internal work, but also trying to predict what will happen in the future. With data protection law and technology constantly changing we need to make sure we are ahead of the game.

Another area of focus is payments regulation, which is also changing rapidly as regulators and the legislative process rush to keep up with the pace of financial innovation. The UK and the EU are forward-thinking and progressive in allowing innovative structures such as ours to prosper, but they don’t always get things right off the bat. As a result, there’s a lot of lobbying to be done, and a lot of fresh thinking around how the services we provide fit within regulations that don’t necessarily contemplate our particular offering.

Finally, Brexit brings its own unique set of challenges. I can say that we’re committed to ensuring that we continue to serve our European customers without interruption. That might mean becoming regulated in another EU member state - a challenge we’re well equipped to rise to.

It’s all about collaboration

On top of those on-going projects, day-to-day I normally have multiple meetings with the guys and girls in the sales team to help them close deals. The open-plan structure of the office makes it really easy to chat with those from other departments - whether that’s sales, marketing or the engineering team. There are sofas and seating areas dotted around if we fancy getting away from our desks, and little booths if you need to get your head down and focus.

As Head of Legal it’s crucial that I know what’s going on and have a good overview of the business, so having sofa meetings and grabbing one too many coffees (there are many coffee options available!) with different departments is a vital part of my role. Only by knowing what’s going on can I be effective in planning and negotiating.

I’ll sometimes hop on a conference call with a merchant who’s looking to use us - it’s then that I’m grateful that we have put in the hard work around customer data protection, as there’s nearly always intense discussion on the point and having a strong story really helps alleviate any concern.

Often my colleagues from other departments come over for a chat. For example, the marketing team might need some legal advice on a new promotional idea, or perhaps I’ll be asked to chime in on a funky collaboration that the partnerships team have come up with. It’s never boring!

The longer commercial negotiations, for example where we want to partner with another payment provider or financial institution, tend to be the most complicated ones as there are so many moving parts, and at least two regulated entities with their own legal concerns. Throw different countries in the mix, and you have a good amount of complexity to work through.

Having this high level of daily interaction at work is really important for me - I really love working with super-smart people, and there are a lot of them at GoCardless. That being said, there’s a fair amount of teaching to be done when you’re an in-house lawyer.

The law isn’t always crystal clear, meaning that you need to simplify complex issues and relate abstract ideas to the business issues in hand, rather than firing off an email attaching a 10-page advice note that draws a purely legal conclusion.

Growth and… we’re hiring!

I also spend a lot of time planning for future needs, and expanding the team. GoCardless is going through an exciting growth phase at the moment, and we have a lot of great people to find to help us continue that growth - the legal team is no different! If you like what you hear, why not take a look at our careers page?


From idea to reality: containers in production at GoCardless

As developers, we work on features that our users interact with every day. When you're working on the infrastructure that underpins those features, success is silent to the outside world - and failure is very public indeed.

Recently, GoCardless moved to a container-based infrastructure. We were lucky, and did so silently. We think that our experiences, and the choices we made along the way, are worth sharing with the wider community. Today, we're going to talk about:

  • deploying software reliably
  • why you might want a container-based infrastructure
  • what it takes to reliably run containers in production

We'll wrap up with a little chat about the container ecosystem as it is today, and where it might go over the next year or two.

An aside - deployment artifacts

Before we start, it's worth clearing up which parts of container-based infrastructure we're going to focus on. It's a huge topic!

Some people hear "container" and jump straight to the building blocks - the namespace and control group primitives in the Linux kernel [1]. Others think of container images and Dockerfiles - a way to describe the needs of their application and build an image to run it from.

It's the latter we're going to focus on today: not the Dockerfile itself, but on what it takes to go from source code in a repository to something you can run in production.

That "something" is called a build artifact. What it looks like can vary. It may be:

  • a jar for an application running on the JVM
  • a statically-linked native binary
  • a native operating system package, such as a deb or an rpm

To deploy the application, the artifact is copied to a bunch of servers, the old version of the app is stopped, and the new one is started. If it's not okay for the service to go down during deployment, you use a load balancer to drain traffic from the old version before stopping it.

Some deployment flows don't involve such concrete, pre-built artifacts. A popular example is the default Capistrano flow, which is, in a nutshell:

  • clone the application's source code repository on every server
  • install dependencies (Ruby gems)
  • run database schema migrations
  • build static assets
  • start the new version of the application

We're not here to throw shade at Capistrano - a lot of software is deployed successfully using this flow every day. We used it ourselves for over 4 years.
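
For anyone who hasn't used it, the sketch below shows roughly what that looks like - a minimal Capistrano 3 setup for a hypothetical Rails app (the names and servers are made up, and the capistrano/bundler and capistrano/rails plugins are what supply the gem, migration and asset steps):

# config/deploy.rb - hypothetical app and servers
set :application, "example_app"
set :repo_url, "git@github.com:example/example_app.git"

server "app1.example.com", roles: %w{app web db}
server "app2.example.com", roles: %w{app web}

# Running `cap production deploy` then performs the flow above on every
# server: clone the repo, install gems, migrate, build assets, restart.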

It's worth noting what's missing from that approach. Application code doesn't run in isolation. It needs a variety of functionality from the operating system and shared libraries. Often, a virtual machine is needed to run the code (e.g. the JVM, CRuby). All these need to be installed at the right version for the application, but they are typically controlled far away from the application's codebase.

There's another important issue. Dependency installation and asset generation (JavaScript and CSS) happen right at the end of the process - during deployment. This leaves you exposed to failures that could have been caught or prevented earlier2.

It's easy to see, then, why people flocked to Docker when it showed up. You can define the application's requirements, right down to the OS-level dependencies, in a file that sits next to the application's codebase. From there, you can build a single artifact, and ship that to each environment (e.g. staging, production) in turn.
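
As an illustration, a Dockerfile for a Ruby application might look something like this (a hypothetical, minimal example rather than one of our real ones):

# Hypothetical Dockerfile for a Ruby web application
FROM ruby:2.3

# OS-level dependencies are pinned in the image, next to the code that needs them
RUN apt-get update && apt-get install -y libpq-dev && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install gems first, so Docker can cache this layer between builds
COPY Gemfile Gemfile.lock ./
RUN bundle install --deployment

COPY . .

CMD ["bundle", "exec", "unicorn", "-c", "config/unicorn.rb"]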

For us, and - I think - for most people, this was what made Docker exciting. Unless you're running at huge scale, where squeezing the most out of your compute infrastructure really matters, you're probably not as excited by the container primitives themselves.

What mattered to us?

You may be thinking that a lengthy aside on deployment artifacts could only be there to make this section easy, and you'd be right. In short, we wanted to:

  • have a uniform way to deploy our applications - to reduce the effort of running the ones we had, and make it easier to spin new ones up as the business grows
  • produce artifacts that can reproducibly be shipped to multiple environments [3]
  • do as much work up-front as possible - detecting failure during artifact build is better than detecting it during deployment

And what didn't matter to us?

In a word: scheduling.

The excitement around containers and image-based deployment has coincided with excitement around systems that allocate machine resources to applications - Mesos, Kubernetes, and friends. While those tools certainly play well with application containers, you can use one without the other.

Those systems are great when you have a lot of computers, a lot of applications, or both. They remove the manual work of allocating machines to applications, and help you squeeze the most out of your compute infrastructure.

Neither of those are big problems for us right now, so we settled on something smaller.

What we built

Even with that cut-down approach, there was a gap between what we wanted to do, and what you get out-of-the-box with Docker. We wanted a way to define the services that should be running on each machine, and how they should be configured. Beyond that, we had to be able to upgrade a running service without dropping any requests.

We were going to need some glue to make this happen.

Step one: service definitions

We wanted to have a central definition of the services we were running. That meant:

  • a list of services
  • the machines a service should run on
  • the image it should boot
  • the environment variable config it should be booted with
  • and so on

We decided that Chef was the natural place for this to live in our infrastructure [4]. Changes are infrequent enough that updating data bags and environment config isn't too much of a burden, and we didn't want to introduce even more new infrastructure to hold this state [5].

With that info, Chef writes a config file onto each machine, telling it which applications to boot, and how.
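
To make that concrete, here's a sketch of what one service's definition might hold, rendered as a JSON data bag item (the field names are illustrative, not our actual schema):

{
  "id": "gocardless_app_production",
  "machines": ["app1.example.com", "app2.example.com"],
  "image": "registry.example.com/gocardless_app",
  "instances": 2,
  "env": {
    "RAILS_ENV": "production"
  }
}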

Step two: using those service definitions

So we have config on each machine for what it should run. Now we need something to take that config and tell the Docker daemon what to do. Enter Conductor.

Conductor is a single-node orchestration tool we wrote to start long-lived and one-off tasks for a service, including interactive tasks such as consoles.

For the most part, its job is simple. When deploying a new version of a service, it takes a service identifier and git revision as arguments:

conductor service upgrade --id gocardless_app_production --revision 279d9035886d4c0427549863c4c2101e4a63e041

It looks up that identifier in the config we templated earlier with Chef, and uses the information there to make API calls to the Docker daemon, spinning up new containers with those parameters and the given git SHA. If all goes well, it spins down any old containers and exits. If anything goes wrong, it bails out and tells the user what happened.

For services handling inbound traffic (e.g. API requests), there's a little more work to do - we can't drop requests on the floor every time we deploy. To make deploys seamless, Conductor brings up the new containers, and waits for them to respond successfully on a health check endpoint. Once they do, it writes out config for a local nginx instance with the ports that the new containers are bound to, and issues a reload of nginx. Before exiting, it tells the old containers to terminate gracefully.
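
In Ruby-ish pseudocode, that zero-downtime path looks something like the sketch below. It isn't Conductor's actual source - the helper methods stand in for Docker API calls and config templating:

require "net/http"

# Sketch of a zero-downtime upgrade in the spirit of Conductor
def upgrade(service, revision)
  old_containers = running_containers(service)

  new_containers = service.instances.times.map do
    start_container(service.image, revision, service.env)
  end

  # Don't route traffic to a container until it passes its health check
  new_containers.each { |container| wait_until_healthy(container) }

  # Point nginx at the new containers' ports and reload; nginx keeps
  # serving in-flight requests from the old workers while it does so
  write_nginx_config(service, new_containers.map(&:port))
  system("nginx", "-s", "reload")

  # Finally, let the old containers finish their current requests and exit
  old_containers.each(&:stop_gracefully)
end

def wait_until_healthy(container, timeout: 60)
  deadline = Time.now + timeout
  while Time.now < deadline
    response = Net::HTTP.get_response("127.0.0.1", "/health", container.port) rescue nil
    return if response && response.code == "200"
    sleep 1
  end
  raise "#{container.id} never became healthy"
end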

In addition to long-running and one-off tasks, Conductor supports recurring tasks. If the application supplies a generate-cron script, Conductor can install those cron jobs on the host machine. The application's generate-cron script doesn't need to know anything about containers. The script outputs standard crontab format, as if there was no container involved, and Conductor wraps it with the extra command needed to run in a container:

# Example job to clean out expired API tokens
*/30 * * * *  /usr/local/bin/conductor run --id gocardless_cron_production --revision 279d9035886d4c0427549863c4c2101e4a63e041 bin/rails runner 'Jobs::CleanUpApiTokens.run'
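
For comparison, the application's own generate-cron script would have emitted that same job as a plain crontab entry, with no container details in sight:

*/30 * * * *  bin/rails runner 'Jobs::CleanUpApiTokens.run'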

Step three: triggering Conductor on deploys

There's one small piece of the puzzle we've not mentioned yet - we needed something to run Conductor on the right machines during deployment.

We considered a couple of options, but decided to stick with Capistrano, just in a reduced capacity. Doing this made it easier to run these deployments alongside deployments to our traditional app servers.

Unlike the regular Capistrano flow, which does most of the work in a deployment, our Capistrano tasks do very little. They invoke Conductor on the right machines, and leave it to do its job.

One step beyond: process supervision

At that point, we thought we were done. We weren't.

An important part of running a service in production is keeping it running. At a machine level this means monitoring the processes that make up the service and restarting them if they fail.

Early in the project we decided to use Docker's restart policies. The unless-stopped and on-failure options both looked like good fits for what we wanted. As we got nearer to finishing the project, we ran into a couple of issues that prompted us to change our approach.

The main one was handling processes that failed just after they started [6]. Docker will continue to restart these containers, and neither of those restart policies makes it easy to stop this. To stop the restart policy, you have to get the container ID and issue a docker stop. By the time you do that, the process you're trying to stop has exited, been swept up by Docker, and a new one will soon be started in its place.

The on-failure policy does have a max-retries parameter to avoid this situation, but we don't want to give up on a service forever. Transient conditions such as being isolated from the network shouldn't permanently stop services from running.
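
For reference, these policies are set per container when it's started - the image name here is just a placeholder:

docker run -d --restart unless-stopped example/image
docker run -d --restart on-failure:5 example/image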

We're also keen on the idea of periodically checking that processes are still able to do work. Even if a process is running, it may not be able to serve requests. You don't see this in every process supervisor [7], but having a process respond to an HTTP request tells you a lot more about it than simply checking it's still running.

To solve these issues, we taught Conductor one more trick: conductor supervise. The approach we took was:

  • check that the number of containers running for a service matches the number that should be running
  • check that each of those containers responds to an HTTP request on its health check endpoint
  • start new containers if either of those checks fail
  • do that no more frequently than every 5 seconds to avoid excessive churn
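
Sketched in Ruby, that approach boils down to a loop like this (an illustration rather than Conductor's real code - services and the helper methods are assumed):

# Sketch of the conductor supervise loop
loop do
  services.each do |service|
    # "healthy" here means running AND responding on the health check endpoint
    running = healthy_containers(service)
    missing = service.instances - running.size

    # Replace anything that has died or stopped responding
    missing.times { start_container(service.image, service.revision, service.env) }
  end

  # Avoid excessive churn: wait between passes
  sleep 5
end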

So far, this approach has worked well for us. It picks up containers that have fallen over, and we can tell conductor supervise to stop trying to pick up a service if we need to.

That said, it's code we'd love not to maintain. If we see a chance to use something else, and it's worth the time to make the switch, conductor supervise won't live on.

The road to production

So that's the setup, but moving our apps into that stack didn't happen overnight.

Our earliest attempts were at the end of last year (September/October 2015). We started with non-critical batch processes, giving ourselves space to learn from failure. Gradually, we were able to ramp up to running more critical asynchronous workers. By December we were serving a portion of live traffic for some of our services from the new stack.

We spent January and February porting the rest of our services over [8], and adjusting our setup as we learned more [9].

By early March we had everything working on the new stack, and on the 11th we shut down the last of our traditional app servers. 🎉

Many roads to Rome

So here we are, 3 months after completing the move to the new infrastructure. Overall, we've been happy with the results. What we built hits the mark on the goals we mentioned earlier. Since the move, we've seen:

  • more frequent upgrades of Ruby - now that the busy-work is minimal, people have been more inclined to make the jump to newer versions
  • more small internal services deployed - previously we'd held back on these because of the per-app operational burden
  • faster, more reliable deployments - now that we do most of the work up-front, in an artifact build, deployment is a simpler step

So should you rush out and implement something like this? It depends.

The world of deployment and orchestration is moving rapidly right now, and with that comes a lot of excitement and blog posts. It's very easy to get swept along and feel that you need to do something because a company you respect does it. Maybe you would benefit from a distributed scheduler such as Mesos. Perhaps the container-based systems are too immature and fast-moving, and you'd prefer to use full-on virtual machine (VM) images as your deployment primitive. It's going to vary from team to team.

Even if you decide that you want broadly similar things to us, there are multiple ways to get there. Before we finish, let's look at a couple of them.

A VM option

There are plenty of hosting providers that support taking a snapshot of a machine, storing it as an image, and launching new instances from it. Packer is a tool that provides a way to build those images from a template and works with a variety of providers (AWS, DigitalOcean, etc - it can even build Docker images now).

Once you have that, you need something to bring up those VMs in the right quantities, and update load balancers to point to the right places. Terraform is a tool that handles this, and has been gaining a lot of popularity recently.
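
To give a flavour, a minimal Terraform config in that style might look like this (the AMI ID is a hypothetical image baked by Packer):

# Three app servers booted from a Packer-built image
resource "aws_instance" "app" {
  count         = 3
  ami           = "ami-0abc1234" # hypothetical Packer-built image
  instance_type = "t2.medium"
}

# A load balancer pointed at those instances
resource "aws_elb" "app" {
  name               = "app-elb"
  availability_zones = ["eu-west-1a"]
  instances          = ["${aws_instance.app.*.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}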

With this approach you sidestep the pitfalls of the rapidly-changing container landscape, but still get the benefits of image-based deployments.

A different container option

Docker has certainly been centre stage when it comes to container runtimes, but there are others out there. One which provides an interesting contrast is rkt.

Docker, with its daemon model, assumes responsibility for parenting, supervising, and restarting container processes if they fail. In contrast, rkt doesn't have a daemon. The rkt command line tool is designed to be invoked and supervised by something else [10].

Lately, a lot of Linux distributions have been switching to systemd for their default init process [11]. systemd brings a richer process supervision and notification model than many previous init systems. With it comes a new question of boundaries and overlap - is there a reason to supervise containerised processes in a different way to the rest of the processes on a machine? Is Docker's daemon-based approach still worthwhile, or does it end up getting in the way? I think we'll see these questions play out over the next year or two.

There's less contrast when it comes to images. There's the acbuild tool if you want to build rkt-compatible images directly, and rkt has also cleverly added support for Docker images: conversion is built in via the docker2aci tool, which means you can continue to use Docker's build tools and Dockerfiles.
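
For instance, converting a public Docker image into an ACI that rkt can run is a one-liner (using busybox as a stand-in image):

docker2aci docker://busybox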

So...what's next?

We mentioned earlier that deployment and orchestration of services are fast-moving areas right now. It's definitely an exciting time - one that should see some really solid options stabilise over the next few years.

As for what to do now? That's tough. There's no one answer. If you're comfortable being an early-adopter, ready for the extra churn that comes with that, then you can go ahead and try out some of the newer tooling. If that's not for you, the virtual machine path is more well-established, and there's no shame in using proven technology.

To sum up:

  • start by thinking about the problems you have and avoid spending time on ones you don't have
  • don't feel you have to change all of your tooling at once
  • remember the tradeoff between the promise of emerging tools and the increased churn they bring

If you'd like to ask us questions, we'll be around on @GoCardlessEng on Twitter.

Thanks for reading, and good luck!


  1. If the kernel's own docs are more your thing, you can read the man pages for unshare (namespaces) and cgroups.

  2. There might be transitory issues with the gem server, or worse, the gem version you depend on might have been yanked. 

  3. To give an example of how our existing deployments weren't like this, we'd encountered situations where upgrading a native library would cause a bundle install to fail on a dependency with native extensions. The simplest way out was to move the existing bundle away, and rebuild the bundle from scratch - an inconvenient, error-prone workaround.

  4. We were already using Chef, and didn't feel a strong need to introduce something new. 

  5. If this changes, we'll likely introduce one of etcd, Consul, or Zookeeper as a central store. 

  6. For example, if a Rails initialiser requires an environment variable to be present, and bails out early if it's missing. 

  7. And perhaps this shouldn't be part of Docker's responsibility. While we'd like it, it's completely fair that they've not added this. 

  8. Previously, our apps used templated config rather than environment variables, and didn't know how to log to anything other than a file. There was a fair amount of work in preparing our apps for a new, more 12-factor-like world. It ain't all glamorous! 

  9. conductor supervise didn't exist until the start of February. 

  10. The rkt docs have a section contrasting their approach to that of other runtimes. 

  11. The latest long-term support edition of Ubuntu, 16.04, ships with it, and many other distributions have also made the switch. 
