Postgraphile for Beginners Free Mini Course

Author: James Moore

Video Transcript

"In this video, I'm going to show you how you can build the server side of your applications, using the excellent open source tool, Postgraphile.

So what's postgraphile, and why would you want to use it?

Well, postgraphile is an open source project for creating high-performance graphql servers that primarily use postgres databases, and the reason you might be interested in using postgraphile is that it lets you build the server side of your applications in a fraction of the time it would take using most modern programming languages and frameworks.

So developing with postgraphile is fast; you can get your server built really quickly. But you might be wondering, what is it about postgraphile that speeds up the development process?

To answer this, let me explain how postgraphile works in the simplest cases.

Basically, to use postgraphile, all you need to do is create a well structured database and point postgraphile at it. Postgraphile will then inspect the database schema and create a graphql server that your applications can perform queries against.

Basically what I'm saying is, with postgraphile, you might be able to create a server, without writing any server side code, which obviously can save you a ton of time.

Now, you'll definitely need to write some SQL to create tables, views, and maybe some SQL functions and a few other database-related things, but these are the kinds of things you'd already do in a conventional server project. With postgraphile, after you've written the SQL needed to create a well structured database, there's a very good chance your server is done and ready to be used with your application.

Alright, so how exactly do you use postgraphile?

Well, that's what we're going to cover in this video.

Now, there are a couple of soft prerequisites for getting the most out of this video.

You should be somewhat familiar with both SQL and GraphQL.

Now, I'm not saying you need to be an expert in both of these to watch this video, but it might be a bit challenging to follow along if you're a complete newbie with these technologies.

Ok, are you ready? Let's get started.

[End of face shot]

I'm going to show you how to use postgraphile, by building an actual postgraphile server, for a web application, that we'll look at right now.

So what's the web app we'll be building?

Well, it's essentially a flash card app, that you could use to help you learn things, but it's more than just a flash card app, because we're going to implement it to use spaced repetition.

So what's spaced repetition?

Well, hold that thought for now, I'll explain in just a moment, but first, let's take a look at some drawings of the web app that we're building the server for.

Let's start by looking at the cards page you see here.

This page lists all the flash cards for the signed in user.

From this page you can edit an existing card, archive a card, delete a card, and create a new flash card.

If you click on the 'Add Card' button, you'll be taken to this page, where you can enter a question and the corresponding answer.

Now, once you've created a few flash cards, the home page will show you how many cards need to be reviewed, and clicking the 'begin review' button, will take you to the review page, where you are prompted with a flash card question, like you see here.

Ok, as a user, when you see the question, you'll attempt to answer the question in your mind, then you'll click the 'show answer' button, to see if you remembered correctly.

Now, when you view the answer, you'll also see these 4 buttons, which allow you to score, or grade, how well you remembered the answer.

So for example, if you remembered it perfectly without thinking about it, you'd click 'easy', and on the other end of the spectrum, if you couldn't remember the answer at all, you'd click 'Again'.

Alright, you're probably wondering, what happens when you click these buttons?

Well, to answer this, I need to circle back to the concept of 'spaced repetition'.

So, what is spaced repetition?

Well, first of all, let's briefly talk about the human brain.

Our brains can hold a ton of information; in fact, it's estimated that our brains can store somewhere between 2 and 3 petabytes of data. That's enough storage for about 200 to 300 years of recorded video. So our brains can store a lot of information, but unfortunately our brains don't work like we might want them to.

I mean, if our brains have so much capacity, why are we always forgetting things, like where we put our car keys, or where we left our iPhones?

Well, there's a lot of reasons why you forget things, but I'll give you one big reason.

It's pretty healthy to forget some things.

For example, imagine what your life would be like if you remembered all the bad things that have happened to you since you were born.

Yeah, I'm totally ok forgetting the bad stuff.

Ok, so how does your brain decide if it should hold on to a memory, or let it go?

Well, this is where the concept of spaced repetition comes in.

Our brains have what's called a curve of forgetting, which is essentially a plot of memory retention over time.

Let me give you an example of how this curve of forgetting works.

Imagine I've got a rope, and I show you how to tie a special knot, then I hand you the rope and ask you to tie the same knot.

You might be able to tie the knot on your first try, or you might have to watch me do it one more time, but pretty quickly you'll be able to tie the knot.

Now, the question is, how long will you remember how to tie that knot you just learned?

Well typically, most people will remember how to tie the knot for around 24 hours, but soon thereafter, the memory will fade and you'll forget.

Now, an interesting thing happens if you practice the knot just before it falls out of memory, right around the 24 hour point.

Basically, by practicing the knot again, your mind says to itself, "hey, I'm using this memory again, so it must be kind of important", so the memory is strengthened, and you've got a new curve of forgetting, which is around 3 days.

Now, if you practice the knot again 3 days later, your memory is strengthened even more, and now you'll hold onto this memory for about a week. Every subsequent time you review that memory, just before you'd normally forget it, you're strengthening that memory, and the curve of forgetting grows and grows and grows.

So, the idea of spaced repetition is that you can sort of hack your brain into remembering things, by recalling a memory just before you'd typically forget it, and the nice thing about spaced repetition is that you can memorize things with the least amount of effort possible.

Ok, so now that you've got a sense of what spaced repetition is, do you have any ideas on what these buttons are for?

Well, basically, clicking these buttons will set the spacing interval, on when to review this card next.

So, if you click the 'Again' button, it means you totally forgot the answer, and you need to reset the spacing interval to zero, in other words, you need to practice this one again right now.

And, as you can imagine, clicking the other 3 buttons will set the spacing interval, to different points in the future.

Ok, so there are 2 takeaways from this little discussion on spaced repetition:

  1. First of all, you should use spaced repetition when you learn new things; check out Anki if you're interested in this.
  2. Secondly, the server we need to build isn't just a dumb CRUD interface; there's a bit of logic that we'll need to implement. But how do you implement server-side logic, if all you're doing is creating a database? Well, stay with me and you'll see.

Alright, there are 2 other pages I want to look at real quick: a page for creating an account, and a page for signing in.

So, basically, we'll need to somehow implement user registration, and authentication.

Ok, let's start coding.

I've got my editor open, and you'll notice that this project is mostly empty, except I initialized git and npm, and I've got an rc file that specifies what version of node.js to use, but for all intents and purposes, we're starting with an empty project.

Now, keep in mind, the vast majority of what we'll be doing, going forward in this video, is creating a well structured postgres database, that can be used by our flash card application.

So, how are we going to create the postgres database?

We'll be using a database migration tool named 'db-migrate', which exists in npm, so I'll go ahead and install it right now.

Of course, you don't have to use db-migrate, you can use whatever database migration tools that you prefer.

Next, I'll install the 'db-migrate-pg' library, which includes the appropriate drivers, so 'db-migrate' can talk to a postgres database.

Ok, I've already got postgres installed on my mac, via the excellent Postgres.app. You'll need to make sure a fairly recent version of postgres is installed on your computer; I'm using Postgres version 10 for this video.

Next, I'll create the database we'll use by keying, createdb, and I'll name the database, learn_dev.
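
So far, the terminal work amounts to something like this, run from the project root:

    npm install db-migrate      # the migration tool
    npm install db-migrate-pg   # the postgres driver for db-migrate
    createdb learn_dev          # create our development database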

Now I'll use atom, my editor, to create a new config file, for 'db-migrate', named, database dot json.

Next, I'll create a 'dev' key, which has an object literal with the configuration information.

The driver will be, pg, the postgres driver, the host is localhost, and the database will be the one we just created, learn_dev.

Now, keep in mind, depending on how you setup postgres on your computer, you may need to pass additional configuration options, such as a username and password, but in my case, where I'm using the postgres app, I don't need to pass any credentials along.
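
So database.json ends up looking like this; if your postgres setup requires credentials, you'd add "user" and "password" keys here too:

    {
      "dev": {
        "driver": "pg",
        "host": "localhost",
        "database": "learn_dev"
      }
    }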

Ok, our initial setup and config is done, and now we can begin the process of structuring our database, for our flashcard app.

So, to create a migration file, I'll pull up the terminal, and I'll key 'npx', a command line tool that comes with npm, the node package manager, then I'll key, 'db-migrate create', then I'll provide a file name of my choosing, which I'll call, 'create-schema', and lastly I'll add the option sql-file.

This sql-file option, will basically create 2 plain old sql files, that we'll use to specify how we'd like the up and down migrations to work, you'll see what I mean by this in just a moment.
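
So the full command is:

    npx db-migrate create create-schema --sql-file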

Ok, so this command completed successfully, and you can see that it created 3 files in a new 'migrations' folder.

These two files here and here, are the files we'll be modifying to construct our database.

Notice that the name of the file is essentially a combination of a timestamp, the name we specified here, and either up or down.

The up file will include the new things we want to add to the database, and the down file will define how to remove the things we add in the up file, essentially allowing us to revert the changes made in the up file.

Ok, Let's open up the new migration files, that are located under the migrations folder, and then within the sqls folder.

The first thing we'll be adding to our database, is a new schema named 'learn'. I'll do this by keying, create, schema, learn.

Now, there's a good chance many of you have worked with databases for many years, but never did anything with schemas like this, and you're probably wondering, what exactly is a schema in postgres, and why am I creating it?

You can think of a schema as just a namespace for now.

The namespace is kind of like a folder, that we'll be putting stuff in, such as the tables we'll be creating, in just a moment here.

Now, as many of you know, you don't need to create a custom schema, when using postgres, you could just use the default public schema, and if you are using the public schema, you probably don't even really think about it.

So why are we adding a custom schema?

Well, I'm not going to tell you why just yet, we need to cover a few more things still, but I'll give you a hint: the reason we're creating a custom schema has to do with security, or more specifically, it has to do with what information is made available from the postgraphile server, but more to come on schemas in a bit.

Next I'll open up the corresponding 'down' file, and I'll say, drop, schema, learn. This command essentially removes the schema, that gets created in the 'up' file.
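
So our first migration pair is as simple as it gets:

    -- up
    CREATE SCHEMA learn;

    -- down
    DROP SCHEMA learn;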

Ok, now let's apply our migration in the terminal, by keying, npx, db-migrate, up, which will run any commands in the 'up' file.

Alright, we see the sql command that was run here, and there were no error messages, so all is well, it worked.

Now, if we wanted to revert the migration we just ran, we'd simply run, npx, db-migrate, and... any guesses what comes next? Yep, down... and as you can see this sql command was executed, and our database no longer has the learn schema.
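
For reference, the two commands are:

    npx db-migrate up     # apply any pending 'up' migrations
    npx db-migrate down   # revert the most recent migration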

Of course, we do want the new 'learn' schema in our database, so I'll go ahead and run our migration again.

You know how I added this sql-file option right here? I don't want to have to add this option every time I create a new migration, but I do want all of our migrations to be plain old sql files, instead of javascript files, which is the default.

db-migrate, allows you to specify default options, by creating an rc file, or a "run command" file, which I'll do right now in the atom editor.

The name of the file will be dot, db-migrate, rc.

The rc file is JSON, and I'll specify that the sql-file option should be true.
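
So the entire .db-migraterc file is just:

    {
      "sql-file": true
    }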

Ok, I'll close all the files that are open in my editor, then in the terminal, I'll create a new set of migration files, and I'll name it, create, table, person, but this time I don't need to add the sql-file option.

Alright, we see the new migration files here, which I'll open up in my editor.

In this migration, I'm going to create our first table, called the person table, and I'll do this by keying, CREATE, TABLE, then I'll enter our custom schema name, 'learn', dot, the table name, which is 'person', then I'll add an opening and closing parenthesis.

Next I'll add the columns I'd like this table to have, starting with an 'id' column, of type serial, which is the primary key. Then I'll add a 'first name' column, of type text, and I'll say it can't contain null values. Similarly I'll add a last name column, and lastly I'll add a created_at column, of type timestamp with a timezone, and I'll set its default value to be the current time, or the time now.

Now I'll define how our 'down' migration should work, by saying, DROP, TABLE, learn, dot, person.
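
Here's the whole migration pair, as described:

    -- up
    CREATE TABLE learn.person (
      id         SERIAL PRIMARY KEY,
      first_name TEXT NOT NULL,
      last_name  TEXT NOT NULL,
      created_at TIMESTAMPTZ DEFAULT now()
    );

    -- down
    DROP TABLE learn.person;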

Ok, let's create our new person table by running our migration, and it was successful.

Alright, now that we've got a table in our database, we can start using postgraphile, so I'll go ahead and install postgraphile by keying, npm, install, postgraphile.

Ok, so the next question you probably have is, how do you actually start or run postgraphile?

Well, there's a couple of ways. You can run it as a nodejs middleware, with many of the popular nodejs servers like expressjs or koajs.

The other option for running postgraphile, is to simply run the postgraphile command, that was just installed with the postgraphile library.

In this video, we'll just run the postgraphile command, which we could do right here in the terminal. But we'll be specifying quite a few different options to the postgraphile command, so to make things easier on ourselves, I'm going to create a shell script named server.sh, and I'll make it executable.

Next, I'll open up our new script file in atom.

So, to start postgraphile, I'll key npx, postgraphile, then I'll extend the command onto the next line, and I'll add the database connection string it should use, by keying the dash c option, followed by a valid postgres connection string.

In my case, the connection string will be pretty minimal. I'll key postgres, colon, slash slash, and often you'll provide some credentials to use next, however in my case, I don't need to provide credentials, at least for now, so I'll add another slash, and I'll specify the database name, learn_dev.

Then I'll continue the command on the next line, where I'll add the schema option, and I'll tell postgraphile to use the custom 'learn' schema.
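
At this point, server.sh looks like this, give or take the shebang line:

    #!/bin/bash
    npx postgraphile \
      -c postgres:///learn_dev \
      --schema learn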

Do you remember a few moments ago, when I posed the question, "why are we creating a custom schema?".

I bet seeing this option here, is starting to answer that question in your mind.

Basically, we're telling postgraphile to only expose things it finds in the learn schema, so for example, if we create some tables in some other schema, postgraphile won't expose those tables.

The reasoning behind why we're using schemas will become even more clear, a bit later, when we setup user registration and authentication, so hang with me for now.

Ok, let's startup our postgraphile server, by keying dot, slash, server, dot sh, and as you can see from the response, it looks like it worked.

I'm going to click on this link right here, which will open up the, 'graphiql' tool, which we'll be using extensively through the remainder of this video.

Ok, so what is this 'graphiql' tool, and how do we use it?

Well, this is where we'll be writing and trying out graphql queries.

Now, if you're not super familiar with graphql queries, you can think of them sort of like sql queries; well, not exactly, but it's a reasonable analogy to start with.

Basically, we can write graphql queries to fetch data from things like tables, and we can write what are called mutations, to add new rows to tables, and do updates and deletes, plus a bunch of other operations, many of which we'll cover later in the video.

Let me give you an example of a query to fetch data from our new 'person' table.

So, to do this, I'll enter the keyword, 'query', followed by an operation name for this query, which I'll call 'PeopleQuery', then I'll add a set of curly braces, and I'll start by keying in the letter 'a', and we see that graphiql gives us some options for autocompleting what I started typing.

This first option, 'AllPeople', looks pretty relevant for the person table, so I'll finish adding that field name, then I'll add another set of curly braces, then I'll add the keyword, 'nodes', followed by a set of curly braces, and now I'll press the hotkey, 'control space', and we see a few fields listed, the last 4 of which match the column names in our person table.

I'll start by selecting the id field, then I'll select the firstName field, and the lastName field.

Ok, this is a valid graphql query, that is essentially saying, fetch me all the people, but only return each person's id, firstName and lastName.
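
Here's the finished query:

    query PeopleQuery {
      allPeople {
        nodes {
          id
          firstName
          lastName
        }
      }
    }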

I'll go ahead and run this query, by clicking the run button here, and we see the results of this query over here.

Basically, we get an empty array as the response, which makes sense, because we haven't actually added any people to the person table.

Let's do this, I'm going to pull up a sql tool I use named Postico, and I'll run the following sql command:

insert into learn, person, then I'll specify the first and last name columns, and I'll say the values to insert are a few first and last name combinations.
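
The statement looks something like this; the actual names are just placeholders I made up:

    INSERT INTO learn.person (first_name, last_name)
    VALUES ('Ada', 'Lovelace'),
           ('Alan', 'Turing'),
           ('Grace', 'Hopper');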

Now I'll run this command, which inserted 3 new rows, and we'll pull up graphiql again, and I'll run this query again, and this time, we see the 3 rows I just inserted, cool.

Ok, I didn't need to insert these person records with sql like I just did; we can also insert records into a table using graphql, let me show you how.

I'll get rid of this query, then I'll create what's called a mutation, and I'll name this mutation, CreatePerson, and I'll add a set of curly braces.

Next, I'll type the letter 'c', and notice what shows up in the autocomplete here.

It's a graphql field named 'createPerson', which does exactly what you'd expect, it allows you to create a new person.

Now, the next question you probably have is, how do you pass along the first and last name of the person to create?

Well, you do this by passing in some parameters to this createPerson field, so to do this I'll add a set of parentheses, and then we automatically see an autocomplete suggestion that includes this 'input' identifier.

I'll go ahead and use the input parameter, followed by a colon, and a set of curly braces, then I'll enter 'person', followed by a colon and a set of curly braces, and then we see the input fields for the person table.

Next I'll add the firstName of 'Joe' and the lastName of 'Smith', and I won't worry about the id and createdAt columns, as these are auto populated.

Ok, if you notice the red squiggly lines here, under 'createPerson', it's an indication that our mutation isn't valid.

We need to specify what information should be returned after creating a new person, so I'll add a set of curly braces, then I'll key in the person field, indicating, I want the newly created person to be returned, and I'll say, return the id, firstName and lastName.
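
So the complete mutation is:

    mutation CreatePerson {
      createPerson(input: { person: { firstName: "Joe", lastName: "Smith" } }) {
        person {
          id
          firstName
          lastName
        }
      }
    }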

Now, I'll run this mutation, and we know it worked, because there were no errors, and we see the newly inserted person right here, cool.

We can also do updates in graphql, let me show you an example. Let's say we'd like to change this name from 'Joe', to 'Joey'.

So, to update the name, I'll create a new mutation named UpdatePerson, then I'll use the field that postgraphile offers us, named, 'updatePersonById', and I'll specify the input should include the id of the record to update, 4, and I'll add the personPatch parameter, where I'll say to update the firstName to 'Joey', then I'll return the updated person's id, firstName and lastName.
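
Here's the whole thing:

    mutation UpdatePerson {
      updatePersonById(input: { id: 4, personPatch: { firstName: "Joey" } }) {
        person {
          id
          firstName
          lastName
        }
      }
    }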

Now I'll run the 'UpdatePerson' query, and as you can see over here, it worked.

We can also perform deletes with graphql, which I'll demonstrate right now.

I'll create a new mutation, which references the deletePersonById field, and its input includes the id to delete, then I'll specify what data to return, which in this case is the deleted person's id and first name.
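
It looks like this; I'm using id 4 here, the record we just updated:

    mutation DeletePerson {
      deletePersonById(input: { id: 4 }) {
        person {
          id
          firstName
        }
      }
    }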

Now I'll run the deletePerson mutation, and it worked.

Ok, so this hopefully gives you a general sense of what postgraphile gives you, but we've barely scratched the surface, so hang with me, as there's a lot more to see.

Now, by default, postgraphile uses some rather long names for the graphql fields, for example, a few moments ago, we ran this query, and the field name we used, was 'allPeople'.

I kind of wish it just said people here. Well, this simpler naming convention can be achieved by using a custom postgraphile plugin.

Let me show you how this works.

First I'll kill the server by pressing control-c.

Next I'll install the postgraphile plugin I'm referring to, by keying, npm, install, at-graphile-contrib, slash, pg-simplify-inflector.

Now, we've got to tell postgraphile to use this plugin. To do this, I'll pull up our 'server' shell script, and I'll continue the command on the next line, adding the option, append-plugins, and providing the plugin's module name.
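
In other words, the command in server.sh gains one more continuation line:

      --append-plugins @graphile-contrib/pg-simplify-inflector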

Alright, let's restart the server, then I'll head back to graphiql, then I'll click the history button and click 'PeopleQuery'.

So, this is the query we ran to fetch all the people, but notice what happens when I run this query again, we get an error that says it can't find the field "allPeople".

Well, that makes sense, because the plugin we just installed should now make the field name "people", not "allPeople".

I'll go ahead and get rid of the query, and I'll press control-space which gives us a list of all the fields we can query against.

Hmm, why do you think we still see 'allPeople' listed?

Well, the reason is because I haven't refreshed the browser since restarting the server, so the graphql metadata stored in the browser is out of date.

This is easy to fix, by just refreshing the page, and now when I press control-space, we see the field name we expect, which is 'people'.

So, I'll go ahead and choose the field, 'people', and I'll create a query similar to what we ran earlier, and when I run this query, we get the result we're expecting.

Ok, so here's the takeaway I want you to remember about what we just did, when we added the plugin.

Postgraphile offers a bunch of functionality, which covers the vast majority of use cases you'll ever need to handle, but if you happen to run into a scenario that postgraphile doesn't handle, you might be able to extend postgraphile with a community plugin, or you can write a plugin yourself. Honestly though, you probably won't run into any showstoppers that require you to write a plugin.

I want to enable a couple of additional postgraphile features, that will add some interesting capabilities to graphiql.

To enable these features, I'll add the 'enhance-graphiql' option, and the 'allow-explain' option.
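
That's two more continuation lines in server.sh:

      --enhance-graphiql \
      --allow-explain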

Next, I'll kill the server and restart it.

Now I'll pull up the browser and refresh the page, and as you can see, this page looks a bit different now.

In particular, these two buttons are new.

The Explorer allows you to see all the fields and parameters that are available to you, and lets you create queries and mutations by simply selecting what you want; for example, when I click the createdAt checkbox, you'll notice the field gets added to the query, over here.

The 'Explain' button shows the result of using postgres's explain keyword on a query.

Basically, it can help you to determine, how efficient the underlying SQL query is, and if the query is inefficient, you can address the issue with indexes, or other fixes.

Below the explain section, is the sql used to fetch this data up here.

Ok, I bet I know what many of you are thinking, as you're looking at this SQL... You're probably thinking, that doesn't look like any SQL I've ever seen.

Yeah, I get it, it's not what you're probably used to seeing, so what's going on here?

Well, keep in mind, these SQL queries are generated dynamically, based on the graphql query, and some optimizations are happening, to minimize the number of queries actually sent to the database.

In fact, for every graphql query tree, postgraphile generates just 1 database query, and it avoids the N+1 problem.

So, the sql might look a bit crazy at first glance, but it's quite efficient.

I also want to point out the documentation in graphiql, which you can open by clicking right here.

Basically, the docs provide a description of the entire GraphQL schema, in other words, it shows all the possible queries, mutations, fields, parameters, data types and so on.

So for example, I could search for 'Person' and I'll see all sorts of fields and types associated with our person table.

If I click on this Person type, we see the table columns listed here, however, notice that there is no description for the Person, and there are no descriptions for each of these fields.

If we'd like to improve the documentation that we see here, we can do so by simply, adding some comments to our database, let me show you how this works.

I'm going to go ahead and kill the server, then I'll create a new migration.

Now I'll open up our new migration files, and I'll add some comments to our database.

First I'll add a comment to the table, 'learn, person', which is, 'A user of the flashcard app'.

Next, I'll add comments for each of the columns in the table.

And, actually, I'm not going to bother to create a 'down' migration script for these comments, as it's not necessary.
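
The comments are plain sql statements; the table comment is the one from the video, and the column comment here is just an example of the pattern:

    COMMENT ON TABLE learn.person IS
      'A user of the flashcard app';

    COMMENT ON COLUMN learn.person.first_name IS
      'The person''s first name';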

Now I'll run our migration, and start up our server.

Next, I'll pull up the browser and refresh the page, then I'll open up the docs and search for the Person type, and check this out, the comments we added here, are now displayed here, nice.

Ok, it's getting kind of annoying having to kill and restart the server every time I make a change in the database, but this can be avoided if we enable another feature when we start the postgraphile server.

So, to do this, I'll extend the command to the next line, and I'll add the watch option.
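
That's one more continuation line in server.sh:

      --watch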

Now, anytime we apply migrations, the postgraphile server will notice the changes, automatically update the server, and push the metadata changes to graphiql in real-time.

Alright, I'll kill the server, and restart it with the new 'watch' option, and I'll open up another terminal session, where we can work with migrations.

It's time to add another table, the card table. So, I'll create a new migration, and I'll open the new up and down files.

First I'll create a table in the learn schema, named card.

Then I'll add the id field, then a question field of type text, an answer field, and a review after timestamp field.

Next, I'll add a spacing field, of type integer, and I'll add a check, which basically enforces that the spacing field must be greater than or equal to zero.

Now, if we were building a traditional server, using whatever your favorite language is, you'd typically need to validate values sent from users in your server code. But in the case of postgraphile, we're not writing typical server-side code, so we lean more on the features in postgres to validate or check values.

So in this case, we're having postgres validate the spacing column value right here.

Next, I'll add an ease_factor, which is a numeric type, and we're having postgres verify, or check that the value is >= 1.3.

Next I'll add a seq column.

Ok, you're probably wondering what these 4 columns are for. Well, they're going to be used to determine when a flash card should be reviewed next.

Essentially these 4 fields will be used by a simple spacing algorithm we'll implement in a bit here, and an important thing to know about these 4 fields is that they should never be manipulated by a user; they should be set exclusively by our spacing algorithm. So somehow we'll need to control access to these columns, but I don't want to deal with this just yet, we'll come back to it in a moment.

Next I'll add an archived column, followed by a person_id column, of type integer, which references the person table's id column.

This person_id column, will hold the unique id of the person who created and owns each flash card.

Lastly, I'll add a created_at column.

Next, I'll add in a few indexes for this table.

An index for the review_after column, the archived column and the foreign key, person_id.

Now I'll add some comments on the table, and on each column in the table.

In the 'down' migration file, I'll simply drop the table.
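
Putting the whole migration together, it looks roughly like this; the exact types of the seq and archived columns and the not-null constraints are my assumptions, since the video doesn't spell them out, and I've left the comments out for brevity:

    -- up
    CREATE TABLE learn.card (
      id           SERIAL PRIMARY KEY,
      question     TEXT NOT NULL,
      answer       TEXT NOT NULL,
      review_after TIMESTAMPTZ DEFAULT now(),
      spacing      INTEGER CHECK (spacing >= 0),
      ease_factor  NUMERIC CHECK (ease_factor >= 1.3),
      seq          INTEGER,
      archived     BOOLEAN DEFAULT false,
      person_id    INTEGER REFERENCES learn.person (id),
      created_at   TIMESTAMPTZ DEFAULT now()
    );

    CREATE INDEX ON learn.card (review_after);
    CREATE INDEX ON learn.card (archived);
    CREATE INDEX ON learn.card (person_id);

    -- down
    DROP TABLE learn.card;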

Next, I'll go ahead and apply the migration.

Ok, we don't need to restart the server, since we've added the watch option, and we don't need to refresh the browser either, so I'll just add a mutation to create a new card, and I'll run the mutation, and it worked, we added this flashcard data, cool.

Now, there's a problem with this mutation, do you see what the problem is?

Well, remember how I said these 4 fields shouldn't be manipulated by users? Well, I just manipulated the spacing value, I set it to 100, which will break our app.

There's another, similar but more subtle problem, which is the fact that I'm specifying the id of the person right here, but should we really trust the user to pass their id in this query?

If the user was devious, they might just send a query like this, with some other users id, effectively corrupting the other users set of flashcards.

Really, this personId should be set by the server, based on the logged in user, right?

Ok, so how can we fix these issues?

Well, in part we can address them with something called smart tags; let me show you how this works.

First I'm going to rollback the last migration for the card table.

Next, I'll re-arrange the comments a bit, and then I'm going to modify the comment for the 'review_after' column, adding a special comment called a smart tag, that is, 'at omit' the ability to create and update the values in this column, then I'll separate the smart tag from the rest of the comment with a newline.
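
Here's what the review_after comment ends up looking like; note the E'...' string syntax, so the \n newline is interpreted, and the description text here is just an example:

    COMMENT ON COLUMN learn.card.review_after IS
      E'@omit create,update\nWhen this card should be reviewed next';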

Now I'll do a similar thing for the other column comments.

Ok, so what does this do for us?

Well, it'll prevent these fields from being updated by users, via our postgraphile server, let me show you.

First I'll reapply our migration, then I'll pull up the browser, and check this out, right here.

graphiql is complaining about us trying to add the personId and the spacing, because that's not allowed anymore. Cool.

So, can we still view the spacing field and the personId field?

Let's create a query and see. So I'll add in a card query that includes the spacing and personId, and as you can see graphiql isn't complaining about them, so yep, we can query these fields.

Now, we still need to figure out a way to set the values on these fields, but we'll get to that in a bit here.

There's another field available to us in the card query, that I want to draw your attention to.

I'll go ahead and enter the letter 'p', and take a look at the fields that show up in the autocomplete list.

Checkout this first one, right here, person.

So, what's going on with this field? I mean, we don't have a person column in the card table, so what is it?

Well, since we added a proper foreign key to our card table, postgraphile automatically added the person field, which allows us to easily query information about the owner of this card.

So for example, if I want to include information about the owner of the card, I'll just add the person field, then I'll add a sub-selection of the fields I want to include from the person table.

I'll add the persons id, firstname and lastname.
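
The query now looks something like this:

    query {
      cards {
        nodes {
          id
          question
          spacing
          personId
          person {
            id
            firstName
            lastName
          }
        }
      }
    }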

Now, when I run this query, we don't get any results, because we just dropped the card table and re-created it, but for the moment, I'll do the following.

I'll insert a row into the card table, using this sql statement, then I'll run the query again, and as you can see, we were able to query our new card, and the person who owns the card, cool.

Now, it's worth noting that the smart tags were easily bypassed with this sql expression, but we'll address these types of security concerns more, in a bit here.

The next thing I want to do is add a new table, so I'll create a new migration, to create a response table.

This response table, is essentially a log of the response the user provides when reviewing a flash card.

So, I'll add in the appropriate columns for the response table, then I'll add some indexes as appropriate, and I'll add some comments.

Now, I don't want users to be able to create, update or delete rows in this table, well at least not directly.

So, to prevent postgraphile from performing creates, updates and deletes, I'll add a smart tag on the table comment, to omit the create, update and delete operations.
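
The table comment ends up looking something like this; the description text is mine:

    COMMENT ON TABLE learn.response IS
      E'@omit create,update,delete\nA log of the scores a person gives while reviewing their flash cards';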

Now I'll apply our migration.

Ok, to reinforce what we just accomplished with smart tags in the response table, I'll start a query, and if I start to type response, we see some response fields available to us, so we can read from the response table, which is how we want it.

But, look what happens when I start to create a mutation.

When we look at the available mutation fields, we don't see any fields to create, update or delete responses, which is what you'd expect, with these table level smart tags.

Oh, one more thing, we need to drop the response table in the down migration file.

Ok, the next thing we'll add is a couple of postgres functions to handle the process of scoring how well you remembered the answer, and setting the appropriate spacing, in other words, setting the next time the card should be reviewed.

So, I'll create a new migration, then I'll open up the new migration files.

I'm going to write this function offline, then I'll briefly walk you through it.

Ok, I'm not going to walk you through the implementation of this function, because it's not particularly interesting and it's really only relevant for my flashcard app, but I do want to point out the basics of creating a postgres function.

So, I created this score_response function in the learn schema, that takes 3 parameters, and it returns this type here, score_response, which is this custom composite type I created up here.

Now, the dollar dollar, you see here, delimits the start of the function body, and if we scroll down, we see another, dollar dollar, that delimits the end of the function body.

Then down here, we see the language the function was written in, and the function is marked as immutable, meaning it's a pure function that has no side effects, and lastly we see this security definer, which we're going to skip for the moment, but it's important, and we'll come back to it later.
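
To make that anatomy concrete, here's a skeleton of the declaration; the parameter names, the composite type's fields, and the body are all placeholders rather than the actual algorithm from the video, and the video doesn't name the language, so sql is shown here:

    -- a custom composite type for the function's return value
    CREATE TYPE learn.score_response AS (
      next_spacing     INTEGER,
      next_ease_factor NUMERIC
    );

    CREATE FUNCTION learn.score_response(spacing INTEGER, ease_factor NUMERIC, score INTEGER)
    RETURNS learn.score_response AS $$
      -- placeholder body; the real spacing algorithm goes here
      SELECT spacing, ease_factor;
    $$ LANGUAGE sql IMMUTABLE SECURITY DEFINER;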

Ok, you're probably wondering how this function gets used. Hold that thought, I'll show you in just a few moments here.

Next, I'll go ahead and add the down migration code to drop the function and the custom composite type.

Now I'll go ahead and apply our migration, and I'll create another migration for another function, that will handle scoring.

In other words, we're going to create a function, that will get called when these buttons get clicked in the app.

Next, I'll open our new up and down migration files, and I'll add a function offline, and I'll walk you through the basic logic in this function.

So, this handle_score function, takes 2 parameters, a card_id, and the score for how well the answer was remembered.

Basically, if the user clicks this button, the score is 0, clicking this button means the score was 1, the score for this button is 2 and this buttons score is 3.

The return value for this function, is a row from the card table.

Right here I defined a few variables, which get used in the body of our function.

Now, in this block, I'm querying a card based on the passed card_id, and storing the result in this card variable.

Then on this line, I'm calling that other function, we just finished creating, and I'm storing the returned value in the response variable.

Next, I'm setting a new review_after timestamp, to be the current review_after time, plus some spacing value in days.

Now in this block, I'm updating the card, setting the column values that the users aren't able to manipulate, and I'm returning the updated card row, into the card variable.

Then right here, I'm simply logging or inserting the users score into the response table, and lastly I'm returning the card.

Ok, after the body of the function, you see the language used in this function, and you see that this function isn't marked as IMMUTABLE, like we saw in the last function, it's marked as VOLATILE, because it's not a pure function, we've got side effects happening in this function.

And lastly, we see this, 'security definer' syntax again.
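
Pulling that walkthrough together, the function has roughly this shape; this is a sketch reconstructed from the narration, not the verbatim code, and the response table's column names and the score_response field names are assumptions carried over from the earlier sketch:

    CREATE FUNCTION learn.handle_score(card_id INTEGER, score INTEGER)
    RETURNS learn.card AS $$
    DECLARE
      card     learn.card;
      response learn.score_response;
    BEGIN
      -- look up the card being reviewed
      SELECT * INTO card FROM learn.card AS c
        WHERE c.id = handle_score.card_id;

      -- run the spacing algorithm from the previous migration
      response := learn.score_response(card.spacing, card.ease_factor, score);

      -- push review_after out by the new spacing, in days,
      -- and store the protected column values
      UPDATE learn.card AS c
        SET review_after = c.review_after + (response.next_spacing || ' days')::interval,
            spacing      = response.next_spacing,
            ease_factor  = response.next_ease_factor
        WHERE c.id = card.id
        RETURNING * INTO card;

      -- log the user's score in the response table
      INSERT INTO learn.response (card_id, score)
        VALUES (card.id, score);

      RETURN card;
    END;
    $$ LANGUAGE plpgsql VOLATILE SECURITY DEFINER;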

Now in the 'down' migration file, I'll drop the function.

Next, I'll apply our migration.

Ok, at this point you might be wondering, what do we do with this handle_score function, that I just created?

Well, you call the function when these buttons are pushed... which raises the question, how do you call this function with postgraphile?

Well, as you might have guessed, postgraphile automatically adds the 'handle_score' function as a graphql mutation that you can use, let me show you.

So, I'll create a new mutation, and I'll key the letter 'h', and check it out, we see our handleScore function, or, more accurately, what's called a field in graphql.

Alright, so to call it, you'd create a mutation that looks something like this.
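
For example, with a card id of 1 and a score of 2, it'd look something like this:

    mutation HandleScore {
      handleScore(input: { cardId: 1, score: 2 }) {
        card {
          id
          reviewAfter
          spacing
        }
      }
    }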

Ok, the next thing I want to work on is, creating a graphql query, to find the next flash card that should be reviewed.

In other words, when you're on this home page and you see that there are 3 cards to be reviewed, you click the 'Begin Review' button, which should take you to the review page. Well, we need to write a query that fetches the next card to review, in other words, this card we see right here.

But before we get started on the 'nextCard' query, I want to insert a few more card rows, so I'll run this insert command right here.

Ok, so we really need to write a 'next card' graphql query, that's sort of like this sql query you see here.

Basically, we need to query for cards that have a review_after timestamp, that is less than or equal to the current time, and the card shouldn't be archived, and the card must be for a given person, then we want to order the cards by the review_after timestamp, and only return the top card, but again, we need to do this type of query, but in graphql.
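
In plain sql, it's roughly this:

    SELECT *
      FROM learn.card
      WHERE review_after <= now()
        AND archived = false
        AND person_id = 1   -- the signed in person's id
      ORDER BY review_after
      LIMIT 1;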

Alright, we'll start with this NextCard query, which returns all these cards, but then we can add some sorting, filtering and limiting on the query, by adding some parameters to the cards field.

So I'll start by saying orderBy, the reviewAfter field in ascending order, and running this query works, but we've got more to do still.

Next I'll say, only return the first 1 card, which gets us closer to what we need, but we still need to filter the cards, to only show the cards that are past due, based on the reviewAfter field.
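
Here's where the query stands at this point:

    query NextCard {
      cards(orderBy: REVIEW_AFTER_ASC, first: 1) {
        nodes {
          id
          question
          answer
        }
      }
    }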

There is a condition parameter, where we can choose a field to filter with, but what would I put here?

Well, this condition is problematic, for what we're trying to do.

This condition parameter only allows us to do equality comparisons, so I could filter for a card with a specific timestamp, but that's not what we're trying to do. What we want to do is something like this, where we say the reviewAfter is less than or equal to some date, but this isn't valid syntax, so it won't work.

Ok, so are we stuck, or is there a way to get around this?

Yep, we're stuck, it can't be solved... Of course I'm kidding, this is easily solved, and there are a couple of solutions that come to mind; I'll show you both of them.

One possible solution, is to create a database view.

So, to create a view, I'll create a new migration, then I'll open up the new migration files, and in the up migration, I'll create a view in the learn schema named, next_card, then I'll add a select statement, where I select every column from the card table, where the review_after date is less than or equal to the current time, right now, and I'll order the results by the sequence, review_after and id.

Next, I'll handle the down migration, by dropping the view.
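
So the migration pair is:

    -- up
    CREATE VIEW learn.next_card AS
      SELECT *
        FROM learn.card
        WHERE review_after <= now()
        ORDER BY seq, review_after, id;

    -- down
    DROP VIEW learn.next_card;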

Now I'll apply the new migration.

So, how do we use this new view with postgraphile?

I bet you intuitively could guess the answer at this point.

Basically, postgraphile exposes the view as a query field, similar to tables.

So, let's create a new query that uses this view in graphiql.

I'll go ahead and press control, space, to see what fields are available to us, and as you can see right here, there's a nextCards field, so I'll go ahead and use it, and I'll create a query that includes the id, question and answer fields, then I'll run the query.

Ok, we still see more than one card, which isn't exactly what we want, but at least we know that they're cards that actually need to be reviewed, and they're in the order we want to review them in. Now all we need to do is modify this query so it only returns one card.

So, I'll add a parameter to the nextCards field, where I ask for the first 1, then I'll run the query again, and as you can see, it worked.
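
Here's the final version of the query:

    query NextCard {
      nextCards(first: 1) {
        nodes {
          id
          question
          answer
        }
      }
    }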

There's another way we can handle this nextCard query, which is to add a postgraphile plugin, that enhances the filtering capabilities.

Let me show you how to do this.

So, first I'll kill the server, then I'll install the postgraphile-plugin-connection-filter library, or plugin.

Next, I'll modify our server shell script, by appending another plugin right here, separating the first plugin from the next plugin with a comma, then including the plugin module name.

Now I'll go ahead and start the server again, then we'll try out our new filtering capabilities in graphiql.

I'll change the nextCards field to just cards, then I'll get rid of the current parameter, first, and I'll press control, space, to show the auto complete list.

So, do you see any new parameters in this list?

This filter parameter, is new, it's made available by the plugin we just installed.

I'll go ahead and select the filter parameter, then I'll choose the reviewAfter field, and then inside a new set of curly braces, I'll trigger the auto complete list, and checkout all these different filtering options.

In our case, I'd like to use the lessThanOrEqualTo option, and then I'll add a datetime value.

Now I'll add the first 1 parameter again, and of course we'd need to set the orderBy as well, but for now I'll skip that, and just try out our query, and as you can see we're now able to filter the date, as needed.
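
So the filtered query looks something like this; the timestamp is just an example value:

    query NextCard {
      cards(
        first: 1
        filter: { reviewAfter: { lessThanOrEqualTo: "2021-01-01T00:00:00Z" } }
      ) {
        nodes {
          id
          question
          answer
        }
      }
    }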

Ok, we've basically implemented the required tables, views and functions needed to implement this app, except we still need to handle some security related things, such as user registration and user authentication.

So, let's deal with this security related stuff.

The next thing we'll do is create a new migration, where we'll create a new schema.

Now I'll open up our migration files, and in the up file I'll create a new schema named learn_private, and in the down migration, I'll drop the learn_private schema.

Ok, you're probably wondering what this learn_private schema is for, well, hang with me, you'll see in just a moment here.

Next, I'll go ahead and run the migration, then I'll create a new migration, for a new table, called person, account.

Now I'll open the new, up and down migration files, and I'll create a new table in the new, learn_private schema, named person_account, then I'll add a person_id column, of type integer, which is the primary key, and it references the learn schema's person table, by the id column, and I'll say when this referenced relation is deleted, cascade the delete to this table.

Next I'll add an email column, of type text, that can't be null, and I'll add a simple check on the format of the email.

This validation isn't really sufficient, but I don't want to add a ridiculously long regular expression right here; I hope you get the gist of what's possible in the way of data validation.

Lastly, I'll add a password_hash column, of type text, which can't be null.
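
Here's the table, with the deliberately simple email check:

    CREATE TABLE learn_private.person_account (
      person_id     INTEGER PRIMARY KEY
                    REFERENCES learn.person (id) ON DELETE CASCADE,
      email         TEXT NOT NULL CHECK (email ~* '^.+@.+\..+$'),
      password_hash TEXT NOT NULL
    );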

Ok, so what's this person_account table for?

Well, you might have already guessed, it's for storing the sensitive information about a user account, the email and password hash.

Can you guess why we put this table in the new, learn_private, schema?

Well, because we don't want to expose this information via postgraphile, in other words, we don't want there to be graphql queries that can return this information.

Ok, so how is it that, putting a table in this, learn_private, schema, prevents users from querying it?

Well, let's look back at our server shell script, and in particular, look at this option right here, the schema option.

With this option, we're telling postgraphile to only expose tables, views, functions and so on if they reside in the 'learn' schema, so by putting our sensitive information in another schema, we're ensuring it won't be accidentally leaked.

Ok, one more thing in the up migration, I'll add an index for the email column, then in the down migration, I'll drop the person_account table.

Now I'll go ahead and apply our migration.

Ok, we're going to need to hash the users' passwords, as it's a best practice, and we'll do so using the bcrypt hashing algorithm, but to use bcrypt in postgres, we'll need to add an extension.

So to add the extension, I'll first create a new migration, then I'll open the up and down migration files, then in the up file, I'll create the extension, if it doesn't already exist, named 'pgcrypto'.

In the down file I'll drop the extension.
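
The whole migration pair:

    -- up
    CREATE EXTENSION IF NOT EXISTS pgcrypto;

    -- down
    DROP EXTENSION pgcrypto;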

Next I'm going to make a new migration, that creates a postgres function to register new users, then I'll open the new migration files, and in the up file I'll create a new function in the learn schema, named register_person, which takes the first name, last name, email and password.

This function will return a new person, then in the body of the function, I'll declare a person variable, and I'll insert into the person table, the first and last names, and I'll return the new row into the person variable.

Then I'll also insert into the learn_private schema's, person_account table, the columns, person_id, email and the password_hash.

The value for the person id, is in the person variable, using dot notation, the email is a passed parameter, and the password hash will be generated by calling the crypt function, passing in the password, and a salt, by calling gen_salt, and passing in the 'bf', value.

Lastly, I'll return the person variable, and close out the function body.

Then I'll include the function's language, I'll mark it as strict, and I'll add 'security definer'.
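
Here's the complete function, as described:

    CREATE FUNCTION learn.register_person(
      first_name TEXT,
      last_name  TEXT,
      email      TEXT,
      password   TEXT
    ) RETURNS learn.person AS $$
    DECLARE
      person learn.person;
    BEGIN
      INSERT INTO learn.person (first_name, last_name)
        VALUES (first_name, last_name)
        RETURNING * INTO person;

      INSERT INTO learn_private.person_account (person_id, email, password_hash)
        VALUES (person.id, email, crypt(password, gen_salt('bf')));

      RETURN person;
    END;
    $$ LANGUAGE plpgsql STRICT SECURITY DEFINER;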

Ok, as we've seen before, this function will become a graphql field, that we can essentially call with a graphql mutation, but let me ask you a question.

Is there anything in this function body, that bothers you, or seems problematic?

If you don't see anything that bothers you, let me give you a small hint.

Here's the hint: This register_person function will be called by random users on the internet when creating an account.

So, do you think both of these inserts will work, for random internet users?

In particular, what about this second insert, which is touching a table in the 'learn_private' schema?

Ok, it might seem concerning that an untrusted, random internet user can insert a value in the private schema, but it's actually desirable in this case, and it's possible because we're using this line right here, SECURITY DEFINER.

Basically, by adding SECURITY DEFINER, we're telling postgres, when it runs this function, regardless of who calls it, to run it with the permissions of the user who created the function; in other words, always run this function with the permissions of the user who applies this migration.

Well, there's a bit more to the security story, but we'll get to that in a moment here.

Ok, next I'll go ahead and create the down migration, which drops the function, then I'll apply the migration.

Alright, let's try out this new RegisterPerson function. So, here's a graphql mutation that effectively calls the registerPerson function, passing in the relevant input parameters, and it returns the new person's id, first and last names.
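
It looks like this; the email and password values are just placeholders:

    mutation RegisterPerson {
      registerPerson(input: {
        firstName: "Joe"
        lastName: "Smith"
        email: "joe@example.com"
        password: "secret"
      }) {
        person {
          id
          firstName
          lastName
        }
      }
    }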

Now I'll attempt to run this mutation, and it worked, cool.

Ok, now that our postgraphile server can register new users, the next task is to create a way for existing users to sign in, or authenticate. But before we deal with authentication, we need to create some database security roles, one of which we'll assign to users upon successful sign-in.

So, I'll make a new migration to create some roles, then I'll open the migration files, and in the up migration file, I'll start by creating a role named, learn, postgraphile, which is a role that can login with the password, 'secret, password'. This is the role that we'll have postgraphile use, in the server shell script, in just a moment.

Next I'll create a role named learn_anonymous, which is more like a security group, than a user account, and this role will get used for non-signed-in users, that make requests to the postgraphile server, you'll see exactly what I mean by this in just a moment.

Now I'll grant the role, learn, anonymous, to the role, learn, postgraphile.

You can think of this command as saying, anything this role can do, can also be done by this role. This will make more sense when we start granting privileges to these roles, in a few moments.

Next, I'll create a role named learn_person, which is the role that authenticated users will be a part of, and lastly we'll grant the, learn, person role, to the learn, postgraphile role.

Ok, let me quickly summarize what's going on in this migration.

On this line, we effectively created a database user, with a password.

Then on these two lines, we effectively created 2 security groups, one for signed in users, and one for anonymous users, then we said anything these security groups can do, can also be done by the learn, postgraphile role or user.

In the down migration, I'll drop all 3 roles.
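
So the migration pair looks like this; use whatever password you like in place of 'secret_password':

    -- up
    CREATE ROLE learn_postgraphile LOGIN PASSWORD 'secret_password';

    CREATE ROLE learn_anonymous;
    GRANT learn_anonymous TO learn_postgraphile;

    CREATE ROLE learn_person;
    GRANT learn_person TO learn_postgraphile;

    -- down
    DROP ROLE learn_person;
    DROP ROLE learn_anonymous;
    DROP ROLE learn_postgraphile;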

Next, I'll modify the server shell script, and in particular, I'll change the connection string to use the, learn, postgraphile account, with its corresponding password, and I'll specify that the server is running on localhost.

I'm also going to extend the command, and add the option, owner, connection, to which I'll pass a connection string that uses my local super user account.

The only reason I'm adding this connection string, is because this account here isn't going to have the necessary permissions to watch for schema changes.

Keep in mind, in production, you wouldn't actually use either of these options at the bottom here.

Next, I'll add another option, default, role, and I'll pass along the role, learn, anonymous.

Basically, this tells postgraphile, that if a non-authenticated user makes a request to the server, give them the permissions associated with this role.

Of course, we haven't actually assigned permissions to this role yet, but we'll do that in a moment.
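
With those changes, server.sh now looks something like this; the owner connection string reflects my local superuser setup, so yours may differ:

    #!/bin/bash
    npx postgraphile \
      -c postgres://learn_postgraphile:secret_password@localhost/learn_dev \
      --schema learn \
      --append-plugins @graphile-contrib/pg-simplify-inflector,postgraphile-plugin-connection-filter \
      --enhance-graphiql \
      --allow-explain \
      --watch \
      --owner-connection postgres://localhost/learn_dev \
      --default-role learn_anonymous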

Ok, I'll go ahead and apply our migration, then I'll restart our server, so it runs with all our new options.

We're about ready to deal with user sign ins, but there's one more thing we'll need to add first, a special postgres datatype, we'll use to generate JSON Web Tokens, or what's commonly called a JWT.

If you aren't familiar with JWTs, they're essentially cryptographically signed tokens that get sent to a user upon successfully signing in; then on every subsequent request the user makes, they'll send the JWT to the server, as a form of authentication.

You can think of a JWT as an alternative to cookies in the browser.

So, to define our new JWT data type, I'll create a new migration, then I'll open up the migration files, and in the up file, I'll create a type in the learn schema, named JWT, which is a composite type, that includes a role field, of type text, a person_id field of type integer, and an expiration field of type big integer.

So, basically this type here is a representation of a security token, that will get sent back to the user when they successfully sign in; you'll see this in action, in just a moment.
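
Here's the type; I'm calling the expiration field 'exp', since that's the standard JWT claim name:

    CREATE TYPE learn.jwt AS (
      role      TEXT,
      person_id INTEGER,
      exp       BIGINT
    );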

In the down migration file, I'll drop the jwt type.

Now, to make use of this jwt type, we've got to pass a couple new options to postgraphile.

First, we've got to tell postgraphile what composite type to use when generating the JWT. To do this, I'll pass the jwt, token, identifier option, then I'll pass the name of the new type we just created.

Next, we need to provide a secret passphrase, that postgraphile will use, when it signs the tokens, upon user signin, and it'll use this secret to verify the tokens haven't been tampered with, when the server receives graphql queries from authenticated users.
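
So server.sh gains two more continuation lines; use a long random secret of your own:

      --jwt-token-identifier learn.jwt \
      --jwt-secret 'some_long_random_secret'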

Ok, I'll go ahead and run our migration, then I'll kill the server and restart it.

Now we can finally create a function that handles user sign in, or authentication, so I'll create a new migration, then I'll open up the migration files.

Next, in the up migration file, I'll create a function in the learn schema named authenticate, which gets passed a users email and password.

The function will return the jwt type we just created, then I'll declare an account variable of type person, account.

Now I'll select star into the account variable, from the person, account table, where the email equals the email passed as a parameter here.

You'll notice that I'm qualifying the email, with the function name here, to avoid ambiguity between the email parameter name, and the email column name, in the person_account table.

Next, we'll check to see if the provided password is correct, by saying, if the account's password_hash equals the value returned by calling crypt, passing in the password and the account's saved password_hash, then we'll return a new JWT type, which is created by keying an open parenthesis, followed by the role of 'learn, person', followed by the person id, followed by the expiration time as a unix timestamp, which is the current time plus 30 days, and lastly I'll cast this new composite type as a JWT type.

However, if the password isn't valid, we'll just return null.
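
Here's the function, reconstructed from that walkthrough; I've carried over the strict and security definer markings from register_person, since the function has to read the private table:

    CREATE FUNCTION learn.authenticate(email TEXT, password TEXT)
    RETURNS learn.jwt AS $$
    DECLARE
      account learn_private.person_account;
    BEGIN
      SELECT a.* INTO account
        FROM learn_private.person_account AS a
        WHERE a.email = authenticate.email;

      IF account.password_hash = crypt(password, account.password_hash) THEN
        -- role, person_id, and a unix timestamp 30 days out
        RETURN ('learn_person',
                account.person_id,
                extract(epoch FROM now() + interval '30 days')::bigint)::learn.jwt;
      ELSE
        RETURN NULL;
      END IF;
    END;
    $$ LANGUAGE plpgsql STRICT SECURITY DEFINER;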

Next, in the down migration file, I'll drop the authenticate function.

Ok, I'll go ahead and run our migration, and now I'll create a new migration, to assign permissions to the roles we created a few minutes ago.

In the up migration file, I'll start by altering the default privileges to revoke the execute permission on functions from the public role.

In a moment, we'll selectively grant execute privileges on the functions we've created.

Next, I'll grant usage on the schema, 'learn', to the roles, 'learn, anonymous', and 'learn, person'.

Now I'll grant execute privileges on the function, 'register, person', to the role, learn_anonymous; in other words, we're saying it's ok for anonymous users to use the 'register, person' function.

Next, I'll do a similar thing for the authenticate function, allowing both the learn_anonymous and learn_person roles to call it.

Now I'll grant similar privileges on the handle_score function, to only allow usage by signed in users, which are people in the role learn_person.

Next, I'll grant all privileges on the table person, to the role learn_person. In other words, people who are in this role, can do inserts, updates, deletes and selects with this table.

Now I'll grant select and delete privileges on the table card, to the role learn_person, and then on the next line, I'll grant insert and update privileges on the id, question, answer and archived columns, of the card table, to the learn_person role.

I'm just going to add the final few grants, and I'll walk you through them.

So this line gives users in the learn_person role, the ability to use the sequence here. Basically, this sequence is what auto generates the unique ids when new cards are inserted.

This line lets signed in users do selects on the response table, and this line lets signed in users do selects on the next_card view.
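
Taken together, the grants described above come out something like this. I'm omitting the function argument lists, which recent versions of postgres allow when the function name is unique, and card_id_seq is my assumption based on the default sequence naming for serial columns:

    alter default privileges revoke execute on functions from public;

    grant usage on schema learn to learn_anonymous, learn_person;

    grant execute on function learn.register_person to learn_anonymous;
    grant execute on function learn.authenticate to learn_anonymous, learn_person;
    grant execute on function learn.handle_score to learn_person;

    grant all on table learn.person to learn_person;

    grant select, delete on table learn.card to learn_person;
    grant insert (id, question, answer, archived),
          update (id, question, answer, archived)
          on table learn.card to learn_person;
    grant usage on sequence learn.card_id_seq to learn_person;

    grant select on table learn.response to learn_person;
    grant select on table learn.next_card to learn_person;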

Ok, hopefully the puzzle is starting to come together for you, on why we created roles.

Roles allow us to control access to all the different parts of the database.

Alright, in the down migration file, I'll essentially reverse everything I added in the up migration file.

Next, I'll run the migration, and now we can go ahead and try out our sign in functionality, which is to call the authenticate function.

So, in graphiql, I'll create a new mutation named AuthenticatePerson, but this time, I'm going to use graphql variables in the query, let me show you what I mean by variables.

After the name, AuthenticatePerson, I'll add a set of parentheses, then I'll define an email variable, that starts with a dollar sign, and I'll say it should be the graphql type String, and I'll add an exclamation point on the end, which means it's a required parameter, then I'll do a similar thing for the password.

Next, I'll enter the authenticate field name, and I'll add the input parameters, where the email parameter is the value in the email variable, and the password parameter is the value in the password variable.

Then I'll ask for the jwt to get returned.
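
Assembled, the mutation looks like this, using postgraphile's default naming for function-based mutations:

    mutation AuthenticatePerson($email: String!, $password: String!) {
      authenticate(input: { email: $email, password: $password }) {
        jwt
      }
    }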

So, are we done with this query?

Let's try it, and it failed as you probably expected, and we're getting an error response indicating the email and password variables weren't supplied, which of course is true, we didn't provide an actual email and password, but we can do that down here, in the query variable panel.

So, I'll click on the label, query variables, which expands the panel, then I'll pass in some JSON, that includes an email key, with the corresponding email address, followed by a password key, and the password we used earlier, when we registered this user.
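
The JSON takes this shape, where the values below are just hypothetical stand-ins for the real email and password:

    {
      "email": "jane@example.com",
      "password": "password123"
    }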

Now I'll run the query again, and it worked, and what you see right here is the JWT.

So, what's the point of using graphql variables? Well, with variables you can reuse a query with different inputs without having to do any sort of string concatenation.

Let's take a look at the JSON web token, that was returned here.

To help you better understand what this jwt is, I'm going to copy it to the clipboard, then I'll open the website jwt.io, then I'll paste the JWT into this input, then I want to draw your attention to this section over here.

Basically, this site decodes the jwt into its 3 component parts: the header, the payload, and the signature.

The interesting part is the payload here, or what's often called the claims, which as you can see, includes the role, the person_id, the expiration timestamp, and a few other claims that postgraphile added on its own.

Now, when a signed in user makes a request to the server, they'll send the jwt along with the request, typically as an authorization header, in http requests, then the server will verify the token hasn't been tampered with, and then the server can use these values in the payload as needed, for security purposes.
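
For reference, that header typically takes this form, with the placeholder standing in for the actual token:

    Authorization: Bearer <your-jwt-here>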

Ok, let's try using this JWT in graphiql.

To include the JWT with graphiql requests, I'm going to use the 'ModHeader' chrome extension, and I'm going to get rid of this other JWT that I was using for something else, and I'll paste in our new JWT, then I'll click the checkbox to add this authorization header.

Next I'll add a card query, and actually, let me remove the authorization header first, and I'll run the query, and as you can see from the response, permission was denied. So I'll go ahead and add the authorization header back and run the query again, and this time it worked, which is good, but do you notice any problems in this response?

Let me give you a hint, by adding in the personId field, and running the query again.

Do you see the problem now?

Basically, we're running this query as this signed in user, with a person id of 7, but we're seeing cards from other users, which isn't good.

Each user should only be able to see and manipulate their own cards.

However, this can be easily addressed by enabling a feature in postgres called row-level security.

I'll show you how this works, by creating a new migration, then in the up migration file, I'll alter the table, card, to enable row level security.

Next, I'll create a policy named select_card, on the card table, for select sql commands, and I'll apply this policy to the role learn_person, using the following boolean logic.

I'll say selects are only allowed, when the person_id of each row, equals the following expression.

Ok, so let me explain this expression.

When postgraphile receives a JWT, it temporarily saves the payload, or claims, of the JWT to the database connection session, and this current_setting call allows us to access the JWT's claim values, so basically, when this JWT is sent to the server, which has the person_id set to 7, we can access the person_id, like this.

So, essentially, with this policy, a given user can only see rows in the card table, that have their own person_id.

I'll add similar policies for, updates, deletes and inserts.

The only significant difference between these policies is that, the insert policy uses, 'With check', instead of 'using'.

The difference here is that selects, updates and deletes have existing saved row values to check against, so these checks happen before the corresponding SQL command occurs, but since inserts don't already have row values to check, what happens is the insert occurs, and then this boolean expression is checked, and if it fails, the insert fails.
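
Here's roughly what the full set of policies looks like. The select_card name came straight from the walkthrough, but the remaining policy names are placeholders I've made up:

    alter table learn.card enable row level security;

    create policy select_card on learn.card for select to learn_person
      using (person_id = current_setting('jwt.claims.person_id')::integer);

    create policy update_card on learn.card for update to learn_person
      using (person_id = current_setting('jwt.claims.person_id')::integer);

    create policy delete_card on learn.card for delete to learn_person
      using (person_id = current_setting('jwt.claims.person_id')::integer);

    -- inserts have no existing row, so 'with check' validates the
    -- new row's values after they're assembled
    create policy insert_card on learn.card for insert to learn_person
      with check (person_id = current_setting('jwt.claims.person_id')::integer);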

Now, in the down file, I'll simply reverse everything I did in the up migration file.

Ok, I'll go ahead and apply the migration, and I'll run this query again, and this time, the query works, but we don't see any of the cards, because the existing cards don't belong to the signed in user.

Alright, let's go ahead and add a new card, using the following mutation.
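
The mutation in question looks something like this, with hypothetical question and answer values, and postgraphile's default naming for table inserts:

    mutation {
      createCard(input: { card: { question: "What is a JWT?", answer: "A signed token" } }) {
        card {
          id
          question
          answer
        }
      }
    }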

Actually, do you think this mutation will work?

Let's try and see.

Hmm, it failed, and it's complaining about a violation in the row-level security policy.

So, here's the problem, when we're running this mutation, the personId isn't being added to the card, so the insert policy is failing.

Ok, so how do we get the personId column populated, when creating a new card?

Well, it's not a value we'd want to send as a parameter, because we just can't trust users on something like this. However, there is a way of handling this using database triggers, let me show you.

I'm going to create one final migration, then I'll open up the migration files, and in the up file, I'll create a function in the learn_private schema, named set_person_id, which returns a trigger, then in the body of the function, I'll use the NEW keyword, which represents the new row to be inserted, followed by a dot, person_id, which I'll set to the person_id from the JWT, which again, you can access by calling current_setting, and passing in a string representation of the claim value you want.

So, 'jwt.claims.person_id', then I'll just return new, and close out the function.

Next, I'll create a trigger named card_person_id, then I'll say, before insert on the table card, for each row, execute the procedure we created up here, learn_private.set_person_id.
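
Assembled, the function and trigger look like this, following the names given above:

    create function learn_private.set_person_id() returns trigger as $$
    begin
      -- NEW is the row about to be inserted; stamp it with the
      -- person_id saved to the session from the JWT claims
      new.person_id = current_setting('jwt.claims.person_id')::integer;
      return new;
    end;
    $$ language plpgsql;

    create trigger card_person_id
      before insert on learn.card
      for each row
      execute procedure learn_private.set_person_id();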

Now, in the down migration file, I'll simply drop the trigger and the function.

Ok, let's apply this migration, then we'll try running the create card mutation again, and this time it worked, cool.

Now let's see if we get any data from the CardQuery, and we do now. We get the one row, we just inserted, cool.

Ok, I think we're done.

So, the next question you might have is, how do we take graphql queries like these, and use them in applications, like our flashcard app, or a mobile app, or a native app?

Well, I haven't really pointed this out, but when you start postgraphile, you see the api endpoint it exposes right here. In our case, it's an http endpoint that accepts post requests, but if we were using the real-time features of postgraphile, it would also accept websocket connections. And if you use postgraphile as a nodejs middleware, you could potentially accept requests in other ways, such as with http get requests.

Now, since postgraphile is just using http for transport, you could send requests, similar to how you'd make rest api calls, by using the fetch api, or a library like axios. However, I'd suggest looking at some of the more popular graphql client libraries, like apollo, which adds all sorts of nice client-side features.
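
As a quick illustration, a bare-bones fetch call might look like the following. The endpoint is postgraphile's command line default, and the query and token are placeholders of my own:

    // a minimal sketch of posting a graphql query to postgraphile
    // (inside an async function)
    const response = await fetch('http://localhost:5000/graphql', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer <your-jwt-here>',
      },
      body: JSON.stringify({
        query: 'query { allCards { nodes { id question answer } } }',
      }),
    });
    const { data, errors } = await response.json();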

Similarly, for mobile apps and other types of native apps, you'd likely use one of their popular client-side libraries.

So what do you think about postgraphile?

I think it's a pretty clever use of postgres, that offers excellent functionality, and allows you to build your backend in a fraction of the time it would take with modern programming languages and frameworks, oh and keep in mind, there are tons of features I didn't cover, like the real-time capabilities of postgraphile.

So here's my challenge to you, the next time you're

  • in a hackathon,
  • or implementing a prototype,
  • or creating a side project,
  • or just starting on a new project at work,

consider using postgraphile, I'm pretty sure you'll save yourself a ton of time, and you'll probably really enjoy using it.

Additionally, if you end up using postgraphile, I'd urge you to strongly consider supporting the open source project financially, either personally or through your employer.

This project is mostly maintained by Benjie Gillam, and your financial support allows him to spend less time contracting, and more time working on postgraphile.

A couple final words, if you like my teaching style, you should check out my courses at knowthen.com, and of course it would be great if you subscribed to my youtube channel, and don't forget to click the notification bell.

Thanks for watching, see ya next time."
