A day in the life of our Head of Legal

I’m the Head of Legal at GoCardless. My role is really varied so there’s no such thing as a typical or predictable day for me - I get involved in all kinds of tasks from designing a new contract management system through to reviewing foreign law advice on our international expansion.

Starting the day strong (and a little bit bruised)

I love to exercise and try to fit it in each morning - I think it’s a key part of my day and helps me feel energised. For the past three months I’ve been taking part in CrossFit at CrossFit CityRoad - it’s great but a real challenge; I’m using muscles I didn’t even think I had, and the ones I have seem useless!

After my workout, I’ll head into the office where I’ll make breakfast before checking my emails. There’s a pretty amazing selection of breakfast foods, with (literally) dozens of choices of granola. I try not to succumb to granola temptation, and typically have scrambled eggs on a bagel.

I receive a regular stream of email updates on the latest legal news and will read those over breakfast. Payments law is a relatively niche area, and not one in which I had experience before joining GoCardless, so I’m enjoying sharpening my regulatory skills.

After breakfast I look at my priorities for the week, which are set in a stand-up with the operations team every Monday. I’ll prepare a schedule for the day thinking about how to achieve those weekly goals while fitting them in around my various meetings.

Keeping ahead of hot topics

In terms of what I'm working on, there's always long-term proactive thinking about how the legal landscape is changing and what we need to do to put GoCardless in the best possible position. One super-hot topic right now is data protection.

Following the Snowden revelations, people's concerns have really intensified over who could access their data and what they might do with it. There have been a number of knock-on effects, such as the European Court of Justice finding EU-US Safe Harbor invalid, making it far harder to transfer data out of Europe to the US. At GoCardless, we’re striving to set a great example when it comes to customer data - we think a good benchmark is treating that data as if it were our own personal data.

It’s such an important issue and takes a lot of work to get right - immediate internal work, but also trying to predict what will happen in the future. With data protection law and technology constantly changing we need to make sure we are ahead of the game.

Another area of focus is payments regulation, which is also changing rapidly as regulators and legislators rush to keep up with the pace of financial innovation. The UK & the EU are forward-thinking and progressive in allowing innovative structures such as ours to prosper, but they don’t always get things right off the bat. As a result, there’s a lot of lobbying to be done, and a lot of fresh thinking around how the services we provide fit within regulations that don’t necessarily contemplate our particular offering.

Finally, Brexit brings its own unique set of challenges. I can say that we’re committed to ensuring that we continue to serve our European customers without interruption. That might mean becoming regulated in another EU member state - a challenge we’re well equipped to rise to.

It’s all about collaboration

On top of those on-going projects, day-to-day I normally have multiple meetings with the guys and girls in the sales team to help them close deals. The open-plan structure of the office makes it really easy to chat with those from other departments - whether that’s sales, marketing or the engineering team. There are sofas and seating areas dotted around if we fancy getting away from our desks, and little booths if you need to get your head down and focus.

As Head of Legal it’s crucial that I know what’s going on and have a good overview of the business, so having sofa meetings and grabbing one too many coffees (there are many coffee options available!) with different departments is a vital part of my role. Only by knowing what’s going on can I be effective in planning and negotiating.

I’ll sometimes hop on a conference call with a merchant who’s looking to use us - it’s then that I’m grateful that we have put in the hard work around customer data protection, as there’s nearly always intense discussion on the point and having a strong story really helps alleviate any concern.

Often my colleagues from other departments come over for a chat. For example, the marketing team might need some legal advice on a new promotional idea, or perhaps I’ll be asked to chime in on a funky collaboration that the partnerships team have come up with. It’s never boring!

The longer commercial negotiations, for example where we want to partner with another payment provider or financial institution, tend to be the most complicated ones as there are so many moving parts, and at least two regulated entities with their own legal concerns. Throw different countries in the mix, and you have a good amount of complexity to work through.

Having this high level of daily interaction at work is really important for me - I really love working with super-smart people, and there’s a lot of them at GoCardless. That being said, there’s a fair amount of teaching to be done when you’re an in-house lawyer.

The law isn’t always crystal clear, meaning that you need to simplify complex issues and relate abstract ideas to the business issues in hand, rather than firing off an email attaching a 10-page advice note that draws a purely legal conclusion.

Growth and… we’re hiring!

I also spend a lot of time planning for future needs, and expanding the team. GoCardless is going through an exciting growth phase at the moment, and we have a lot of great people to find to help us continue that growth - the legal team is no different! If you like what you hear, why not take a look at our careers page.


From idea to reality: containers in production at GoCardless

As developers, we work on features that our users interact with every day. When you're working on the infrastructure that underpins those features, success is silent to the outside world - and failure is anything but.

Recently, GoCardless moved to a container-based infrastructure. We were lucky, and did so silently. We think that our experiences, and the choices we made along the way, are worth sharing with the wider community. Today, we're going to talk about:

  • deploying software reliably
  • why you might want a container-based infrastructure
  • what it takes to reliably run containers in production

We'll wrap up with a little chat about the container ecosystem as it is today, and where it might go over the next year or two.

An aside - deployment artifacts

Before we start, it's worth clearing up which parts of container-based infrastructure we're going to focus on. It's a huge topic!

Some people hear "container" and jump straight to the building blocks - the namespace and control group primitives in the Linux kernel [1]. Others think of container images and Dockerfiles - a way to describe the needs of their application and build an image to run it from.

It's the latter we're going to focus on today: not the Dockerfile itself, but on what it takes to go from source code in a repository to something you can run in production.

That "something" is called a build artifact. What it looks like can vary. It may be:

  • a jar for an application running on the JVM
  • a statically-linked native binary
  • a native operating system package, such as a deb or an rpm

To deploy the application, the artifact is copied to a bunch of servers, the old version of the app is stopped, and the new one is started. If it's not okay for the service to go down during deployment, you use a load balancer to drain traffic from the old version before stopping it.
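
As a concrete sketch - hostnames and package names are illustrative, and a real deploy would drain the load balancer around the restart - that flow might look like:

# Ship the pre-built artifact to a server and swap versions
scp myapp_1.2.3.deb deploy@app-server-1:/tmp/
ssh deploy@app-server-1 '
  sudo dpkg -i /tmp/myapp_1.2.3.deb &&  # install the new version of the app
  sudo service myapp restart            # stop the old version, start the new one
'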

Some deployment flows don't involve such concrete, pre-built artifacts. A popular example is the default Capistrano flow, which is, in a nutshell:

  • clone the application's source code repository on every server
  • install dependencies (Ruby gems)
  • run database schema migrations
  • build static assets
  • start the new version of the application

We're not here to throw shade at Capistrano - a lot of software is deployed successfully using this flow every day. We were using it for over 4 years.

It's worth noting what's missing from that approach. Application code doesn't run in isolation. It needs a variety of functionality from the operating system and shared libraries. Often, a virtual machine is needed to run the code (e.g. the JVM, CRuby). All these need to be installed at the right version for the application, but they are typically controlled far away from the application's codebase.

There's another important issue. Dependency installation and asset generation (JavaScript and CSS) happen right at the end of the process - during deployment. This leaves you exposed to failures that could have been caught or prevented earlier [2].

It's easy to see, then, why people rushed at Docker when it showed up. You can define the application's requirements, right down to the OS-level dependencies, in a file that sits next to the application's codebase. From there, you can build a single artifact, and ship that to each environment (e.g. staging, production) in turn.
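
As a sketch of what that looks like for a Ruby app - an illustrative Dockerfile, not our actual one - OS packages, the language runtime, and gem installation are all captured next to the code, and all happen at image build time:

# An illustrative Dockerfile for a Ruby app (not our production one)
FROM ruby:2.3                                         # pins the language runtime
RUN apt-get update && apt-get install -y libpq-dev    # OS-level dependencies
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install --deployment                       # dependency failures surface at build time
COPY . .
CMD ["bundle", "exec", "puma"]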

For us, and - I think - for most people, this was what made Docker exciting. Unless you're running at huge scale, where squeezing the most out of your compute infrastructure really matters, you're probably not as excited by the container primitives themselves.

What mattered to us?

You may be thinking that a lengthy aside on deployment artifacts could only be there to make this section easy, and you'd be right. In short, we wanted to:

  • have a uniform way to deploy our applications - to reduce the effort of running the ones we had, and make it easier to spin new ones up as the business grows
  • produce artifacts that can reproducibly be shipped to multiple environments [3]
  • do as much work up-front as possible - detecting failure during artifact build is better than detecting it during deployment

And what didn't matter to us?

In a word: scheduling.

The excitement around containers and image-based deployment has coincided with excitement around systems that allocate machine resources to applications - Mesos, Kubernetes, and friends. While those tools certainly play well with application containers, you can use one without the other.

Those systems are great when you have a lot of computers, a lot of applications, or both. They remove the manual work of allocating machines to applications, and help you squeeze the most out of your compute infrastructure.

Neither of those are big problems for us right now, so we settled on something smaller.

What we built

Even with that cut-down approach, there was a gap between what we wanted to do, and what you get out-of-the-box with Docker. We wanted a way to define the services that should be running on each machine, and how they should be configured. Beyond that, we had to be able to upgrade a running service without dropping any requests.

We were going to need some glue to make this happen.

Step one: service definitions

We wanted to have a central definition of the services we were running. That meant:

  • a list of services
  • the machines a service should run on
  • the image it should boot
  • the environment variable config it should be booted with
  • and so on

We decided that Chef was the natural place for this to live in our infrastructure [4]. Changes are infrequent enough that updating data bags and environment config isn't too much of a burden, and we didn't want to introduce even more new infrastructure to hold this state [5].

With that info, Chef writes a config file onto each machine, telling it which applications to boot, and how.
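
Conductor's config format isn't something we've published, so treat this as a purely hypothetical sketch of what the rendered file on a machine might contain:

{
  "gocardless_app_production": {
    "image": "registry.internal/gocardless_app",
    "instances": 2,
    "health_check_path": "/health_check",
    "env": {
      "RAILS_ENV": "production",
      "DATABASE_URL": "postgres://..."
    }
  }
}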

Step two: using those service definitions

So we have config on each machine for what it should run. Now we need something to take that config and tell the Docker daemon what to do. Enter Conductor.

Conductor is a single-node orchestration tool we wrote to start long-lived and one-off tasks for a service, including interactive tasks such as consoles.

For the most part, its job is simple. When deploying a new version of a service, it takes a service identifier and git revision as arguments:

conductor service upgrade --id gocardless_app_production --revision 279d9035886d4c0427549863c4c2101e4a63e041

It looks up that identifier in the config we templated earlier with Chef, and uses what it finds to make API calls to the Docker daemon, spinning up new containers with those parameters and the git SHA provided. If all goes well, it spins down any old container processes and exits. If anything goes wrong, it bails out and tells the user what happened.
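
In Docker terms, and with an invented image name and env-file path (Conductor actually talks to the daemon's API rather than shelling out), that upgrade amounts to roughly:

# Roughly what Conductor's Docker API calls amount to (names are illustrative)
docker pull registry.internal/gocardless_app:279d9035886d4c0427549863c4c2101e4a63e041
docker run --detach --publish-all \
  --env-file /etc/conductor/gocardless_app_production.env \
  registry.internal/gocardless_app:279d9035886d4c0427549863c4c2101e4a63e041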

For services handling inbound traffic (e.g. API requests), there's a little more work to do - we can't drop requests on the floor every time we deploy. To make deploys seamless, Conductor brings up the new containers, and waits for them to respond successfully on a health check endpoint. Once they do, it writes out config for a local nginx instance with the ports that the new containers are bound to, and issues a reload of nginx. Before exiting, it tells the old containers to terminate gracefully.
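
The nginx side of this is plain config. As a minimal, hypothetical fragment (Conductor's real template has more in it, and the names here are illustrative):

# Fragment rendered on each deploy - the ports are whatever the new containers bound
upstream gocardless_app_production {
  server 127.0.0.1:32768;
  server 127.0.0.1:32769;
}

server {
  listen 80;
  location / {
    proxy_pass http://gocardless_app_production;
  }
}

A reload (nginx -s reload) brings up workers with the new config while the old workers finish their in-flight requests, which is what makes the cutover seamless.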

In addition to long-running and one-off tasks, Conductor supports recurring tasks. If the application supplies a generate-cron script, Conductor can install those cron jobs on the host machine. The application's generate-cron script doesn't need to know anything about containers. The script outputs standard crontab format, as if there was no container involved, and Conductor wraps it with the extra command needed to run in a container:

# Example job to clean out expired API tokens
*/30 * * * *  /usr/local/bin/conductor run --id gocardless_cron_production --revision 279d9035886d4c0427549863c4c2101e4a63e041 bin/rails runner 'Jobs::CleanUpApiTokens.run'

Step three: triggering Conductor on deploys

There's one small piece of the puzzle we've not mentioned yet - we needed something to run Conductor on the right machines during deployment.

We considered a couple of options, but decided to stick with Capistrano, just in a reduced capacity. Doing this made it easier to run these deployments alongside deployments to our traditional app servers.

Unlike the regular Capistrano flow, which does most of the work in a deployment, our Capistrano tasks do very little. They invoke Conductor on the right machines, and leave it to do its job.

One step beyond: process supervision

At that point, we thought we were done. We weren't.

An important part of running a service in production is keeping it running. At a machine level this means monitoring the processes that make up the service and restarting them if they fail.

Early in the project we decided to use Docker's restart policies. The unless-stopped and on-failure options both looked like good fits for what we wanted. As we got nearer to finishing the project, we ran into a couple of issues that prompted us to change our approach.

The main one was handling processes that failed just after they started [6]. Docker will continue to restart these containers, and neither of those restart policies makes it easy to stop this. To stop the restart policy, you have to get the container ID and issue a docker stop. By the time you do that, the process you're trying to stop has exited, been swept up by Docker, and a new one will soon be started in its place.

The on-failure policy does have a max-retries parameter to avoid this situation, but we don't want to give up on a service forever. Transient conditions such as being isolated from the network shouldn't permanently stop services from running.
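
For reference, that cap is set when the container is started, e.g.:

# Docker gives up for good after five failed restarts - permanence we didn't want
docker run --detach --restart=on-failure:5 my-app-image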

We're also keen on the idea of periodically checking that processes are still able to do work. Even if a process is running, it may not be able to serve requests. You don't see this in every process supervisor [7], but having a process respond to an HTTP request tells you a lot more about it than simply checking it's still running.

To solve these issues, we taught Conductor one more trick: conductor supervise. The approach we took (sketched below) was:

  • check that the number of containers running for a service matches the number that should be running
  • check that each of those containers responds to an HTTP request on its health check endpoint
  • start new containers if either of those checks fail
  • do that no more frequently than every 5 seconds to avoid excessive churn
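
In shell-flavoured pseudocode - a sketch of the logic rather than Conductor's actual implementation, with ports invented for illustration - the loop looks like:

# A sketch of the supervise loop (illustrative - not Conductor's real code)
while true; do
  for port in 32768 32769; do    # ports the service's containers should be serving on
    if ! curl --silent --fail --max-time 2 "http://127.0.0.1:${port}/health_check" > /dev/null; then
      echo "container on port ${port} is missing or unhealthy; starting a replacement"
      # ...start a replacement container via the Docker API here...
    fi
  done
  sleep 5    # check no more frequently than every 5 seconds
done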

So far, this approach has worked well for us. It picks up containers that have fallen over, and we can tell conductor supervise to stop trying to pick up a service if we need to.

That said, it's code we'd love not to maintain. If we see a chance to use something else, and it's worth the time to make the switch, conductor supervise won't live on.

The road to production

So that's the setup, but moving our apps into that stack didn't happen overnight.

Our earliest attempts were at the end of last year (September/October 2015). We started with non-critical batch processes at first - giving ourselves space to learn from failure. Gradually, we were able to ramp up to running more critical asynchronous workers. By December we were serving a portion of live traffic for some of our services from the new stack.

We spent January and February porting the rest of our services over [8], and adjusting our setup as we learned more [9].

By early March we had everything working on the new stack, and on the 11th we shut down the last of our traditional app servers. 🎉

Many ways to get to Rome

So here we are, 3 months after completing the move to the new infrastructure. Overall, we've been happy with the results. What we built hits the mark on the goals we mentioned earlier. Since the move, we've seen:

  • more frequent upgrades of Ruby - now that the busy-work is minimal, people have been more inclined to make the jump to newer versions
  • more small internal services deployed - previously we'd held back on these because of the per-app operational burden
  • faster, more reliable deployments - now that we do most of the work up-front, in an artifact build, deployment is a simpler step

So should you rush out and implement something like this? It depends.

The world of deployment and orchestration is moving rapidly right now, and with that comes a lot of excitement and blog posts. It's very easy to get swept along and feel that you need to do something because a company you respect does it. Maybe you would benefit from a distributed scheduler such as Mesos. Perhaps the container-based systems are too immature and fast-moving, and you'd prefer to use full-on virtual machine (VM) images as your deployment primitive. It's going to vary from team to team.

Even if you decide that you want broadly similar things to us, there are multiple ways to get there. Before we finish, let's look at a couple of them.

A VM option

There are plenty of hosting providers that support taking a snapshot of a machine, storing it as an image, and launching new instances from it. Packer is a tool that provides a way to build those images from a template and works with a variety of providers (AWS, Digital Ocean, etc - it can even build Docker images now).
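
As an illustration, a minimal Packer template for building an AWS image might look like this - the AMI, region, and provisioning script are placeholders:

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "source_ami": "ami-abc12345",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "myapp-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "scripts/install_app.sh"
  }]
}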

Once you have that, you need something to bring up those VMs in the right quantities, and update load balancers to point to the right places. Terraform is a tool that handles this, and has been gaining a lot of popularity recently.

With this approach you sidestep the pitfalls of the rapidly-changing container landscape, but still get the benefits of image-based deployments.

A different container option

Docker has certainly been centre stage when it comes to container runtimes, but there are others out there. One which provides an interesting contrast is rkt.

Docker, with its daemon model, assumes responsibility for parenting, supervising, and restarting container processes if they fail. In contrast, rkt doesn't have a daemon. The rkt command line tool is designed to be invoked and supervised by something else [10].

Lately, a lot of Linux distributions have been switching to systemd for their default init process [11]. systemd brings a richer process supervision and notification model than many previous init systems. With it comes a new question of boundaries and overlap - is there a reason to supervise containerised processes in a different way to the rest of the processes on a machine? Is Docker's daemon-based approach still worthwhile, or does it end up getting in the way? I think we'll see these questions play out over the next year or two.

There's less contrast when it comes to images. There's the acbuild tool if you want to build rkt-compatible images directly, and rkt also supports Docker images: conversion is built in via the docker2aci tool, which means you can continue to use Docker's build tools and Dockerfiles.

So...what's next?

We mentioned earlier that deployment and orchestration of services are fast-moving areas right now. It's definitely an exciting time - one that should see some really solid options stabilise over the next few years.

As for what to do now? That's tough. There's no one answer. If you're comfortable being an early-adopter, ready for the extra churn that comes with that, then you can go ahead and try out some of the newer tooling. If that's not for you, the virtual machine path is more well-established, and there's no shame in using proven technology.

To sum up:

  • start by thinking about the problems you have and avoid spending time on ones you don't have
  • don't feel you have to change all of your tooling at once
  • remember the tradeoff between the promise of emerging tools and the increased churn they bring

If you'd like to ask us questions, we'll be around on @GoCardlessEng on Twitter.

Thanks for reading, and good luck!


  1. If the Kernel's own docs are more your thing, you can read the man pages for unshare (namespaces) and cgroups

  2. There might be transitory issues with the gem server, or worse, the gem version you depend on might have been yanked. 

  3. To give an example of how our existing deployments weren't like this, we'd encountered situations where upgrading a native library would cause a bundle install to fail on a dependency with native extensions. The simplest way out was to move the existing bundle away, and rebuild the bundle from scratch - an inconvenient, error-prone workaround. 

  4. We were already using Chef, and didn't feel a strong need to introduce something new. 

  5. If this changes, we'll likely introduce one of etcd, Consul, or Zookeeper as a central store. 

  6. For example, if a Rails initialiser requires an environment variable to be present, and bails out early if it's missing. 

  7. And perhaps this shouldn't be part of Docker's responsibility. While we'd like it, it's completely fair that they've not added this. 

  8. Previously, our apps used templated config rather than environment variables, and didn't know how to log to anything other than a file. There was a fair amount of work in preparing our apps for a new, more 12-factor-like world. It ain't all glamorous! 

  9. conductor supervise didn't exist until the start of February. 

  10. The rkt docs have a section contrasting their approach to that of other runtimes. 

  11. The latest long-term support edition of Ubuntu, 16.04, ships with it, and many other distributions have also made the switch. 


Our thoughts on Brexit

Two weeks ago today, Britain woke up to the news that we had collectively decided to part ways with the European Union. Whilst the dust is still settling on this shocking outcome, a lot remains unclear. There is turmoil in both of Britain’s main political parties, a vacuum of leadership, and huge uncertainty around how our country’s relationship with Europe will evolve.

This uncertainty won’t affect our operations in the near term — we will continue to serve our European customers without interruption. However, we’ve been considering how to proactively respond to these new circumstances and wanted to share our plans for supporting our customers across Europe.

We are 100% committed to our expansion in Europe, and see this outcome as an opportunity to turn further towards our European markets. Concretely, we will be doing this by:

  • Expanding our operations in Europe & exploring additional EU regulatory approvals ahead of any changes to the law.
  • Accelerating our expansion plans in France, Germany & Spain by doubling our teams and establishing a local presence in these markets.
  • Investing more heavily in our SEPA product to ensure our offering is the best way to accept recurring payments across Europe.

We fundamentally believe that our world is becoming more interconnected even if our politics aren’t. For us, the vote for Brexit underlines the importance of our vision to create a global bank-to-bank payment network. We will therefore be working hard to ensure you can continue to focus on business as usual without worrying about how to get paid.

An introduction to our API

The GoCardless API allows you to manage Direct Debit payments via your own website or software. When a customer signs up for your services they can give you a Direct Debit authorisation online. Your integration can then create and manage payments and subscriptions automatically - there’s no need to manually add a new customer to GoCardless. Our API provides you with full flexibility when it comes to payment creation, and we offer it to all of our merchants at no extra cost.

In this blog post we’ll guide you through the steps needed to use our API, from customer creation to taking your first payment.

Let’s look at how Direct Debit payments work and how the GoCardless API is organised. In order to charge a customer’s bank account, you will first need their authorisation to collect payments via Direct Debit. This can be via our secure online payment pages or, if you’re using GoCardless Pro, you can take complete control of the payment process by hosting the payment pages on your own website.

GoCardless

Using GoCardless the process of creating a new customer is as follows:

  1. You direct your customers to the GoCardless payment page, allowing them to complete the authorisation to take payments from their account.
  2. Once complete, we redirect your customers back to your website. We’ve called this the redirect flow. When the customer is returned to your website, the redirect flow will already have created a customer record on the GoCardless system. Associated with the customer record will be a customer bank account, which itself will be associated with a mandate.
  3. You can now create payments and subscriptions against this mandate.

GoCardless Pro

If you host your own payment pages your clients will never have to leave your website to give you a Direct Debit authorisation.

  1. You use our API to create a customer record, followed by a customer bank account which is linked to the customer.
  2. Next you create a mandate by referencing the customer bank account.
  3. You can now create payments and subscriptions against this mandate.
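
The rest of this post walks through the redirect flow, but to give a flavour of the Pro flow, step 1 is a single POST to the customers endpoint, using the same base URL and headers explained in the next section. The customer details here are illustrative - see the developer documentation for the full list of fields:

curl https://api-sandbox.gocardless.com/customers \
-H "Authorization: Bearer ZQfaZRchaiCIjRhSuoFr6hGrcrAEsNPWI7pa4AaO" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "GoCardless-Version: 2015-07-06" \
-d '{
  "customers": {
    "email": "user@example.com",
    "given_name": "Frank",
    "family_name": "Osborne",
    "address_line1": "27 Acer Road",
    "city": "London",
    "postal_code": "E8 3GX",
    "country_code": "GB"
  }
}'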

Example requests

Now that we’ve covered the basics let’s look at the actual requests to the API. In order for you to follow these steps you will need the following:

  • A GoCardless sandbox account - get one here
  • An access token to use the API - create one here

In order to send an HTTP request to our API you will first need to set the URL the request will be sent to. The base URLs for the GoCardless API are:

  • https://api.gocardless.com/ for live
  • https://api-sandbox.gocardless.com/ for sandbox

As we’re using the sandbox we’ll use https://api-sandbox.gocardless.com/, followed by the endpoint you want to send a request to. You will also need to specify whether you want to send a POST (sending information) or a GET (requesting information) request, and you will need to set the headers. Our API requires several headers to be set:

  • Authorization uses the access token you’ve created in the developer settings, preceded by the word Bearer
  • Accept tells the API that you’re expecting data to be sent in the JSON format. This needs to be application/json.
  • GoCardless-Version specifies which version of our API you’re using.

If you’re sending data to us, for example to create a new payment, you’ll also need to specify the content type:

  • Content-Type specifies the format of the content sent to the API (if any). This needs to be application/json.

An example request to our customers endpoint to list all customers on an account using curl would look like this:

curl https://api-sandbox.gocardless.com/customers \
-H "Authorization: Bearer ZQfaZRchaiCIjRhSuoFr6hGrcrAEsNPWI7pa4AaO" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "GoCardless-Version: 2015-07-06"

Creating a customer using the redirect flow

To send your customer to the GoCardless payment pages you will need to create a redirect flow. This will be a POST request, and the redirect flow endpoint takes three parameters - the first two are required:

  • session_token This is used as an identifier allowing you to link the redirect flow to the respective customer in your integration. You could use the customer's email address or generate a random ID for this - it’s how you will identify this customer when they’re returned to your site after authorising payments.
  • success_redirect_url This is the URL we redirect the customer to when they complete the payment pages.
  • description (optional) This will be shown to the customer when they’re on our payment page.

These parameters will need to be sent with the request in a JSON blob, wrapped in a redirect_flows envelope:

curl https://api-sandbox.gocardless.com/redirect_flows \
-H "Authorization: Bearer ZQfaZRchaiCIjRhSuoFr6hGrcrAEsNPWI7pa4AaO" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "GoCardless-Version: 2015-07-06" \
-d '{
  "redirect_flows": {
    "description": "Magazine subscription",
    "session_token": "session_ca853718-99ea-4cfd-86fd-c533ef1d5a3b",
    "success_redirect_url": "http://localhost/success"
  }
}'

The response from the API:

{  
   "redirect_flows": {  
      "id": "RE00005H8602K9J5C9V77KQAMHGH8FDB",
      "description": "Magazine subscription",
      "session_token": "session_ca853718-99ea-4cfd-86fd-c533ef1d5a3b",
      "scheme": null,
      "success_redirect_url": "http://localhost/success",
      "created_at": "2016-06-29T13:28:10.282Z",
      "links": {  
         "creditor": "CR000035V20049"
      },
      "redirect_url": "https://pay-sandbox.gocardless.com/flow/RE00005H8602K9J5C9V77KQAMHGH8FDB"
   }
}

The response shows the redirect_url for the newly created redirect flow. An HTTP 303 redirect (or an alternative redirect method) can be used to send your customer to our payment pages. This should be done immediately, as the redirect link expires after 30 minutes. The customer will then see the GoCardless payment page and can enter their details to authorise a Direct Debit to be set up.

Once the form is complete, we will redirect the customer back to the success_redirect_url you originally specified and append the parameter redirect_flow_id, like this: http://localhost/success?redirect_flow_id=RE00005H8602K9J5C9V77KQAMHGH8FDB.

In order for the API to know that the customer has been returned safely to your integration, you will need to complete the redirect flow by sending the following request to the API. This is a mandatory step - the customer won’t be set up if it is not completed.

curl https://api-sandbox.gocardless.com/redirect_flows/RE00005H8602K9J5C9V77KQAMHGH8FDB/actions/complete \
-H "Authorization: Bearer ZQfaZRchaiCIjRhSuoFr6hGrcrAEsNPWI7pa4AaO" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "GoCardless-Version: 2015-07-06" \
-d '{
  "data": {
    "session_token": "session_ca853718-99ea-4cfd-86fd-c533ef1d5a3b"
  }
}'

Notice that the ID of the redirect flow and the required action were appended to the URL, and the session_token (as set by your integration when creating the redirect flow) was sent in the body of the request.

The response from the API:

{  
   "redirect_flows": {  
      "id": "RE00005H8602K9J5C9V77KQAMHGH8FDB",
      "description": "Magazine subscription",
      "session_token": "session_ca853718-99ea-4cfd-86fd-c533ef1d5a3b",
      "scheme": null,
      "success_redirect_url": "http://localhost/success",
      "created_at": "2016-06-29T13:49:00.077Z",
      "links": {  
         "creditor": "CR000035V20049",
         "mandate": "MD0000TWJWRFHG",
         "customer": "CU0000X30K4B9N",
         "customer_bank_account": "BA0000TCWMHXH3"
      }
   }
}

The customer’s details have now been saved, and GoCardless will take care of setting up an authorisation to collect payments from their bank account. You’ll use the mandate ID (provided in the links) to create payments and subscriptions, so you’ll want to store that ID in your database. You may find it useful to store the other references to your customer's resources in your database as well.

Creating a payment will be just one more call to the API, using the payments endpoint. A quick look into the developer documentation shows the three required parameters:

  • amount The payment amount, given in pence/cents. So to take £10.00 the value would be 1000
  • currency The currency of the payment you’re taking
  • links[mandate] The mandate that should be charged

Another helpful parameter is charge_date, which specifies when the payment leaves the customer’s bank account. If no charge_date is provided, the payment will be charged on the earliest possible date.

curl https://api-sandbox.gocardless.com/payments \
-H "Authorization: Bearer ZQfaZRchaiCIjRhSuoFr6hGrcrAEsNPWI7pa4AaO" \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "GoCardless-Version: 2015-07-06" \
-d '{
  "payments": {
    "amount": 1000,
    "currency": "GBP",
    "links": {
      "mandate": "MD0000TWJWRFHG"
    }
  }
}
'

The response from the API:

{  
   "payments": {  
      "id": "PM0001G6V7BSN4",
      "created_at": "2016-07-01T09:27:52.352Z",
      "charge_date": "2016-07-06",
      "amount": 1000,
      "description": null,
      "currency": "GBP",
      "status": "pending_submission",
      "amount_refunded": 0,
      "reference": null,
      "metadata":{},
      "links": {  
         "mandate": "MD0000TWJWRFHG",
         "creditor": "CR000035V20049"
      }
   }
}

You have now set up a customer and taken your first payment using the GoCardless API!

The API offers you many more options, allowing you to integrate Direct Debit functionality into your existing website or software. If you’re using PHP, Java, Ruby or Python you can also make use of our client libraries.

Any API-related questions or feedback can be sent to our developer support team at [email protected].


A day in the life of GoCardless support

Kicking off the day

It’s 7am and I’m already up bright and early this morning as today is our pre-work morning football match. Luckily I live closer to work than most, so I get a little more of a lie-in than the others. I’d like to think this will translate into a performance comparable to Ronaldinho vs Peter Kay in those John Smith’s ads. Unfortunately my two left feet are likely to say otherwise.

A much sweatier version of myself makes it into the office at around 8.40am. Fortunately the GoCardless office has showers so there’s time to freshen up before work officially begins. A few of our team have been in for a while; with the recent rapid growth of the GoCardless customer base it’s fair to say that the support team are a hard-working bunch!

By 8.55am I have my computer on and my coffee in hand - now it’s time to officially start the day. As always there are several things that I need to follow up on before getting down to today’s workload. Whoever invented post-it notes was a genius… my desk is nothing short of a shrine to them!

Customer Happiness

As a team we split our days into segments, rotating between them depending on priorities, volume, any specific cases or projects we’re working on, and simply to avoid the monotony that may otherwise ensue. For the next few hours I’m on calls. Perfect, as I’ve just finished my second coffee and probably couldn’t keep quiet if I tried.

By around 12.45pm I’ve finished my last call of the morning. A lovely retiree needed some help navigating the dashboard to manage payments for her rambling club members – I love taking these calls!

Anyway, now it’s time to put a dent in that seemingly endless cycle of emails.

It’s a busy day for the team - our Slack channel notifies us that we are on high alert due to the amount of calls and emails coming in. Today is office lunch day so we grab something delicious from the huge buffet laid out on the long wooden table by the kitchen. The food is supplied by Cookoo, one of the startups working out of the GoCardless offices. It’s great to sit down and eat with my colleagues but as support is having a busy period I pick up a kebab wrap and salad and decide to eat it at my desk.

At around 2.30pm the phone calls begin to slow for a little while and so I take this opportunity to fit a proper break in. A colleague and I decide it’s time for a quick FIFA rematch on the huge work projector in the office. I still have yet to beat him (despite a shamefully large number of attempts), but I’m feeling confident that my luck is about to change.

15 minutes later… no change. With my tail between my legs, back to emails it is.

Time for a change of pace

I’ve got an accounting partner training session scheduled in with our three newest recruits - two new sales joiners and another in support. With GoCardless being integrated with a number of popular accounting platforms it’s good to know the fundamentals of how these packages operate, especially when in a customer-facing role. Having taken on the responsibility of learning these when I joined, I now get to pass on my ‘wisdom’ as and when required.

The further I go through it, the more I’m reminded that the presentation needs a bit of updating. Wunderlist is great for keeping notes on little tasks like this.

It’s 5pm and there’s an hour left of the working day, and our email queue is still looking a little larger than we’d like. Our team manager has decided to rally the troops in the best way possible – a competition! Who can respond to the most emails in the next hour? The rules: take the next oldest email each time, and don’t sacrifice quality for speed, obviously. Game on!

At 6pm it’s time for our “all-hands” team meeting. This is where the whole team gets together on the office bleachers and our CEO and company VPs ensure we’re up to speed with what’s going on across the company. Of course, there are office beers to help our focus.

Once the team meeting is over it’s time to finish off the last couple of emails I’d rather get sent this evening while the issues are fresh in my mind… Ah, one more beer won’t hurt to help see me through.

The emails are finished and I’ve made my notes for tasks first thing tomorrow morning. Now, time to go home and catch up on some Homeland!
