GraphQL is eating the world of web apps right now. Large organisations like Sainsbury’s and Sky are rolling it out into their products. Tech conferences of all languages and disciplines are booking talks on it. Companies like GitHub already have a full GraphQL API alongside their RESTful offering.
However, the majority of the literature available on the subject focuses on how to write front-end code which consumes GraphQL-enabled endpoints.
So what does the backend for a GraphQL-enabled web application actually look like? At Stuart, we’ve been building one with Elixir, and it’s been a delightful experience!
👍 It feels clean and explicit, and our dev team likes it a lot.
🥇 By using a GraphQL schema, the contract between frontend and backend is elevated to a first class citizen.
Let’s dig into how one goes about building a GraphQL backend using Elixir…
Primer: What is GraphQL?
GraphQL is an open source standard which specifies a method for querying data from a server. The special part is that the client tells the server what it wants the response document to look like, i.e. what data it contains and how it’s structured as JSON.
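For example, here’s the kind of query you might send (the field names here are illustrative, matching the description below):

```graphql
{
  zones(code: "Manchester") {
    code
    center {
      latitude
      longitude
    }
  }
}
```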
The above query says:
“I want the zones with a code matching ‘Manchester’, and for you to return the zone code and the geographical centre.”
The shape of the query informs the server on how to build the response content so the client only gets the data that they asked for.
What are the benefits of GraphQL?
- One API request per page: no more “waterfall requests” slowing down your page load time, where one API call is made and then others follow to retrieve extra details. You create a single query with all the fields you need, and the server takes care of returning it all as a single response.
- Since the client can request only the data it needs, responses are typically smaller because they don’t contain extra data that the client isn’t interested in. Smaller responses = faster responses!
- By using a GraphQL schema, the contract between frontend and backend is elevated to a first class citizen. Once a commitment is made to what will be in the schema, it’s easy for frontend developers to ensure they have all the fields they need and it’s easy for backend developers to know which fields they’re expected to be able to respond with.
- A great bonus of the previous point is that you can run frontend and backend development concurrently by creating a mock schema that returns static data; the backend no longer needs to be a step ahead in your development pipeline.
Getting Started with Absinthe
So we’re going to use the Absinthe library. It can be set up within a Phoenix app and provides a lot of useful macros for defining schemas and creating resolver functions that fetch the data.
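If you’re adding Absinthe to an existing app, the dependencies in mix.exs look something like this (version numbers are illustrative):

```elixir
defp deps do
  [
    # ... your existing Phoenix deps ...
    {:absinthe, "~> 1.7"},
    {:absinthe_plug, "~> 1.5"}
  ]
end
```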
Setting up Phoenix
If you don’t already have a Phoenix app set up, read this guide, then run:
mix phx.new graphql_example
Then open graphql_example/lib/graphql_example_web/router.ex and edit it as follows:
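Something along these lines (module names assume the generated app name above, plus a schema module we’ll define in a moment):

```elixir
defmodule GraphqlExampleWeb.Router do
  use GraphqlExampleWeb, :router

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/api" do
    pipe_through :api

    # Interactive GraphiQL web interface for trying out queries
    forward "/graphiql", Absinthe.Plug.GraphiQL,
      schema: GraphqlExampleWeb.Schema

    # All other /api requests go straight to Absinthe
    forward "/", Absinthe.Plug,
      schema: GraphqlExampleWeb.Schema
  end
end
```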
This lets Phoenix know that we want Absinthe to handle all API calls for requests that start with /api, and also sets up a testing tool called GraphiQL so we can access its web interface in the browser.
Now it’s time to decide what our schema should look like.
Absinthe provides a lot of really useful functions and macros to build up your schema. In the below example, we’ve built a schema to handle the example request we showed earlier. This schema lets us ask for zone data, filter those zones by their zone code, and include some geographical data so we know where each zone’s centre is.
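A sketch of such a schema (the object and field names are assumptions based on the example query):

```elixir
defmodule GraphqlExampleWeb.Schema do
  use Absinthe.Schema

  query do
    field :zones, list_of(:zone) do
      # Optional filter on the zone code
      arg :code, :string
      resolve &GraphqlExampleWeb.Resolvers.Zones.list_zones/3
    end
  end

  object :zone do
    field :code, :string
    field :center, :point
  end

  object :point do
    field :latitude, :float
    field :longitude, :float
  end
end
```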
The `query` section at the start defines the expected shape of the query, and the `field` macros help us flesh out the actual contents of the data.
All GraphQL queries must resolve to a set of primitive scalar types: `integer`, `float`, `string`, `boolean`, or `id`. This means all complex objects can eventually be broken down into one of these primitives.
The real grunt-work of an Absinthe application is done by resolver functions.
An Absinthe resolver function is passed the parent object, a map of arguments, and a resolution struct with more info about the request itself. It’s expected to return a two-element tuple where the first element is the atom :ok, indicating success (or :error with a reason, on failure).
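A minimal resolver for the zones field might look like this (module names are assumptions):

```elixir
defmodule GraphqlExampleWeb.Resolvers.Zones do
  alias GraphqlExample.Zones

  def list_zones(_parent, args, _resolution) do
    # Delegate the actual data fetching to a context module
    {:ok, Zones.list_zones(args)}
  end
end
```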
This in turn calls an Ecto context module, which does the actual database call itself.
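Sketching that context module, assuming a zones table with a code column:

```elixir
defmodule GraphqlExample.Zones do
  import Ecto.Query
  alias GraphqlExample.{Repo, Zone}

  # Filter by code when one is supplied, otherwise return everything
  def list_zones(%{code: code}) do
    Zone
    |> where([z], ilike(z.code, ^"%#{code}%"))
    |> Repo.all()
  end

  def list_zones(_args), do: Repo.all(Zone)
end
```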
Et voilà! Those are the bare bones of it. Now it’s just a case of building up the schema and resolver functions to fetch data from the relevant tables.
- This is a nice way of working — it feels clean and explicit, and our dev team likes it a lot.
- It’s non-trivial to write resolvers in such a way that you avoid N+1 queries in the database. Care needs to be taken to aggregate resolver calls in such a way that a single SQL query is issued. This reduces response time and DB load. DataLoader can help a lot with this.
- Monitoring the performance of your API is totally different now because there’s only one URL for all queries! This means resolvers themselves become the unit for which monitoring tools must be configured in order to give you a representative picture of performance.
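For reference, wiring Dataloader into an Absinthe schema to batch database lookups looks roughly like this (a sketch; the `:db` source name and repo module are assumptions):

```elixir
defmodule GraphqlExampleWeb.Schema do
  use Absinthe.Schema

  # Make a Dataloader instance available to every resolver,
  # so related records can be batched into single SQL queries
  def context(ctx) do
    loader =
      Dataloader.new()
      |> Dataloader.add_source(:db, Dataloader.Ecto.new(GraphqlExample.Repo))

    Map.put(ctx, :loader, loader)
  end

  def plugins do
    [Absinthe.Middleware.Dataloader] ++ Absinthe.Plugin.defaults()
  end

  # ... query and object definitions as before ...
end
```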
Time to Talk!
This blog post is based on a talk I gave at the GraphQL Meetup in Manchester in May. Apologies for the missing chunk of time in the middle: the camera ran out of disk space 🙈