Milliseconds vs Microseconds

I recently ran into an issue that I didn’t see coming. I was creating records, and then querying for them from the database.

I knew that they were created very close together in time, but they were created in two separate steps, so I knew that the timestamps would be different.

But when I queried for them and compared the timestamps in my tests, I got the same timestamp back on both records.

After some digging, I realized that the records had different timestamps in the database, where things are measured in microseconds. JavaScript, however, only tracks time down to the millisecond, so under certain circumstances one record's timestamp would round up and the other's would round down, leaving the two in-memory JavaScript objects with exactly the same timestamp, down to the millisecond.

I never would have expected to be in a situation where I needed or cared about more precision than 1/1000th of a second, but there you have it.
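To make the rounding concrete, here's a small sketch. The microsecond values are made up, and the rounding mimics what a driver might do when squeezing microsecond timestamps into millisecond-precision Dates:

```javascript
// Two database timestamps that differ only at the microsecond level
// (hypothetical values, expressed as microseconds since the epoch).
const recordAUs = 1700000000122900; // ...122.900 ms
const recordBUs = 1700000000123400; // ...123.400 ms

// Rounding to the nearest millisecond maps both to the same Date:
// the first rounds up, the second rounds down.
const toDate = (us) => new Date(Math.round(us / 1000));

const a = toDate(recordAUs);
const b = toDate(recordBUs);

console.log(a.getTime() === b.getTime()); // true
```

Two timestamps that are 500 microseconds apart in the database come out identical in JavaScript.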

JavaScript Milliseconds vs Postgres Microseconds

I recently ran into something that I never would have even considered thinking about before now.

I was trying to query Postgres looking for all records with a specific result in a field that was a timestamp.

I got zero results, which was a real head scratcher given that I could see the records in the database.

As it turns out, by default, Postgres saves timestamps with 6 digits to the right of the decimal (microseconds). On the other hand, when you convert a JavaScript Date object to its string equivalent, you only get 3 digits after the decimal (milliseconds).

So, when I passed in my query string, it was something like YYYY-MM-DD HH:mm:ss.333. Postgres would then look at that and say ‘I have YYYY-MM-DD HH:mm:ss.333125, but that is slightly after what you’re asking for, so I have no results to return to you’.

You can override the default precision for a timestamp in Postgres, limiting it to 3 digits past the decimal, at the time you create the table/field by defining the column as ‘timestamp(3)’.
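If changing the column type isn't an option, another way around it (a sketch of the idea; msWindow is a made-up helper, not from any library) is to drop the equality check and instead query the one-millisecond window that contains the JavaScript timestamp:

```javascript
// Given a JavaScript Date (millisecond precision), build the bounds of the
// one-millisecond window containing every microsecond value the database
// might actually be storing for that moment.
const msWindow = (date) => ({
  lower: new Date(date.getTime()),     // e.g. ...ss.333000
  upper: new Date(date.getTime() + 1), // e.g. ...ss.334000
});

const createdAt = new Date('2020-01-01T00:00:00.333Z');
const { lower, upper } = msWindow(createdAt);

// The equality check then becomes a range query, along the lines of:
// SELECT * FROM records WHERE created_at >= $1 AND created_at < $2
console.log(upper.getTime() - lower.getTime()); // 1
```

Any microsecond value that rounds or truncates to your millisecond falls inside that window.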

Postgres ‘pg’ NPM Connection Pools

I’m currently in the middle of reworking my side project to use the pg npm package. As part of that, I now have to manage the connection pool myself, something that was handled behind the scenes by the ORM I was using previously (the same ORM I was having trouble getting to play nicely with TypeScript).

Online, I saw a recommendation to create a pool like so:

import { Pool } from 'pg';

const pool = new Pool({
  host: 'localhost',
  user: 'database-user',
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
})

And then to execute database operations like so:

const wrapperFunc = async (): Promise<void> => {
  const client = await pool.connect(); // create a connection

  try {

    const query = `
    CREATE TABLE users (
        email varchar,
        firstName varchar,
        lastName varchar,
        age int
    );
    `;

    await client.query(query); // use the connection
  } catch (error) {
    // Deal with your catch error
  } finally {
    client.release(); // release the connection
  }
}

Releasing the connection is important because otherwise you end up with a bunch of zombie connections that your database is waiting on, but which aren’t live anymore. Eventually, if you don’t release the connections, then your database hits its max connection count and it starts refusing new connections.

I had the bright idea of changing up my code to look more like this, which I thought would work just as well:

const wrapperFunc = async (): Promise<void> => {
  const client = await pool.connect(); // create a connection

  try {

    const query = `
    CREATE TABLE users (
        email varchar,
        firstName varchar,
        lastName varchar,
        age int
    );
    `;

    await client.query(query); // use the connection
    client.release(); // release the connection
    return
  } catch (error) {
    client.release(); // release the connection
    // Deal with your catch error
  }
}

I thought that would still result in the client being released regardless of whether we threw an error or not, but quickly started getting errors while trying to run my unit tests.

I can’t explain why yet, but for some reason, with client.release() positioned like I’ve got it above, the clients don’t get released like they should, so I’ll be sticking with putting my client.release() calls in the finally block going forward.
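One way to stop worrying about where release goes at every call site (a sketch of the pattern, not something from the pg docs; withClient and the stub pool here are made up for illustration) is to wrap the checkout/release cycle once, with the release in a finally block:

```javascript
// Hypothetical helper: checks out a client, runs the callback, and always
// releases the client, whether the callback resolves, throws, or returns early.
const withClient = async (pool, fn) => {
  const client = await pool.connect();
  try {
    return await fn(client);
  } finally {
    client.release(); // runs on success, on error, and on early return
  }
};

// A stub standing in for a real pg Pool, just to show the flow.
const releases = [];
const stubPool = {
  connect: async () => ({
    query: async (sql) => `ran: ${sql}`,
    release: () => releases.push('released'),
  }),
};

withClient(stubPool, (client) => client.query('SELECT 1'))
  .then((result) => console.log(result, releases));
```

Every database operation then goes through withClient, so no call site can forget the release.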

Thoughts on TypeScript

I once worked at a company where there was an ongoing rivalry between the Java developers and the JavaScript/NodeJS developers.

To be honest, I didn’t know enough back in those days to have anything remotely resembling an informed opinion. I’m not sure that much has changed there, but one of the things that surprised me a bit was the way that one of the Java proponents would dismiss TypeScript as just an attempt to make JavaScript more like Java.

I ultimately chose to focus my learning efforts on JavaScript/NodeJS simply because I liked the idea of not having to master Java for the back end and JavaScript for the front end, but I always figured that someone in favor of strong typing would be in favor of bringing stronger typing to JavaScript.

I’ve now been working with TypeScript for a few months. There is still obviously a ton that I need to get my arms around, but my initial impression has largely been favorable, for all of the reasons people typically praise it.

I do occasionally miss being able to do crazy things with my variables, but by and large those are exactly the things likely to end up biting me at a future point. By giving that up, I get instant feedback from my code editor when I misremember the type of a variable and try to use it in a way that would have gotten me in trouble on the first test run of my new code.

Getting things set up with some of the packages I was previously using on my side project proved to be beyond my current abilities, but once the setup is done, I seem to be developing at a faster rate simply because I’m chasing down fewer typing errors.

TypeScript doesn’t magically make NodeJS something it isn’t. You still have to worry about blocking the event loop, but all in all (for what it’s worth), I think it’s a helpful extension to the base language.

Merge Sort

As discussed in my last post, I’ve been spending time learning new algorithms, and reminding myself how to use ones that I haven’t used in years.

Here is the evolution of my Merge Sort algorithm:

I did some searching online looking for a high-level discussion of what a merge sort is, and its pros and cons. tldr: it trades extra space for a guaranteed O(n log n) running time, whereas Quick Sort can degrade to O(n²) in the worst case.

Then I found this example by Tim Han.

I scanned through that quickly, and then worked on other stuff for several days. When I came back to take my first shot at writing my merge sort (crazily enough, this isn’t one that I encountered during my year and a half of computer science classes many, many years ago), all I remembered was that you divide the array, and then you return a merge call on the recursively divided halves. Oh, and something about a single leftover data point that needed to be tacked onto something else at the end of the merge.

Here is my first compilable attempt:

//Mergesort. Works by splitting array into half until getting arrays of size 1
//Then merges the arrays back together, sorting as they are merged.

const mergeSort = (array) => {
    const length = array.length;

    if(length <= 1) { //<= 1 so an empty input array doesn't recurse forever
        return array;
    }

    const midpoint = Math.floor(length / 2);

    let left = array.slice(0,midpoint);
    let right = array.slice(midpoint);

    return merge(mergeSort(left), mergeSort(right));
}

const merge = (array1, array2) => {
    if(!array2.length) {
        return array1;
    }

    let mergedArray = [];

    while(array1.length && array2.length) {
        if(array1[0] <= array2[0]) {
            mergedArray.push(array1.shift());
        } else {
            mergedArray.push(array2.shift());
        }
    }

    if(!array1.length) {
        while(array2.length) {
            mergedArray.push(array2.shift());
        }
    }

    if(!array2.length) {
        while(array1.length) {
            mergedArray.push(array1.shift());
        }
    }

    return mergedArray;
}

const rawArray = [9,1,2,3,4,5,7,88,77,5,5,66,3,8,7,49,59,62,73,24,81,67,32,5,3,2];

let sorted = mergeSort(rawArray);
console.log(sorted);

As a side note, the reason that the version I wrote on paper didn't work was two-fold.
1. I just merged left and right inside of "mergeSort" rather than merging what was returned from calling mergeSort on each of those two array halves.
2. I used a Math.ceil rather than a Math.floor to generate the midpoint, which resulted in an array of length two not being properly split.

While writing my initial algorithm, the merge is the part that gave me pause, primarily because I couldn't see how we could consistently guarantee that, once the while loop finished, there would be just one item left in one of the arrays. I could envision a scenario (for instance, an already-sorted array) where array1 gets emptied before any items are taken from array2.

After a few minutes, I just went with my gut, and created something that would deal with emptying an unknown number of items out of whichever of the arrays still had items in it.

Once I'd confirmed that my code was compiling and properly sorting the array, I went back to Tim Han's algorithm and compared it to what I'd come up with.

The biggest thing that stuck out to me was the fact that he was using pointers rather than shifting items off of the two arrays passed into the merge function.

The more experienced of you are already nodding your heads, and don't need me to tell you that his approach is superior, but in my defense, I'm used to thinking in terms of JavaScript arrays, which are different from arrays in most other languages.

In Java, an array is created with a pre-determined size and you can't grow or shrink that array after it is created. Instead, you have to create a new array of the desired size and copy the desired items from the first array into the new array.

The fact that Java arrays are a single contiguous block of memory likely makes them faster than JavaScript arrays, which, as I understand it, are just objects that can grow and shrink without creating a new object and destroying the old one.

Still, even running this in JavaScript, there isn't any reason that we have to shorten array1 and array2 while growing the mergedArray, unless memory was becoming a constraint and we needed to free the space occupied by array1 and array2 as mergedArray grows.

Still, though, that seems unlikely. Generally, you'd just run things on a machine with more memory, or use a streaming type approach to the algorithm.

The next thing I noticed was that I didn't need my if(!array2.length) statement. If array2 (or array1, for that matter) is empty, the code just skips the while loop and tacks the other array onto the mergedArray.

Lastly, the use of the concat function rather than a while loop was nice and slick. It's another reminder that I need to spend some time just reading through JavaScript's built-in array methods. I know a bunch of them by memory, but there are still tons that I don't use because I don't realize they're out there.
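As a small aside on that point, concat will also take multiple arguments, so the chained calls at the end of the merge could be collapsed into one (a minor style tweak on my version, not something from Tim's):

```javascript
const mergedArray = [1, 3];
const leftRemainder = [5, 7];
const rightRemainder = [];

// concat accepts any number of arrays in a single call.
const chained = mergedArray.concat(leftRemainder).concat(rightRemainder);
const single = mergedArray.concat(leftRemainder, rightRemainder);

console.log(chained); // [ 1, 3, 5, 7 ]
console.log(single);  // [ 1, 3, 5, 7 ]
```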

Here's my updated version:

//Mergesort. Works by splitting array into half until getting arrays of size 1
//Then merges the arrays back together, sorting as they are merged.
const mergeSort = (array) => {
    const length = array.length;

    if(length <= 1) { //split down as far as we can go (<= 1 also guards against an empty input). Time to return so we can merge.
        return array;
    }

    const midpoint = Math.floor(length / 2);

    let left = array.slice(0,midpoint);
    let right = array.slice(midpoint);

    return merge(mergeSort(left), mergeSort(right)); //split left and right again and the merge what comes back
}

const merge = (left, right) => { //merge the two arrays together into a sorted array
    let mergedArray = [];
    let leftIndex = 0;
    let rightIndex = 0;

    while(leftIndex < left.length && rightIndex < right.length) {
        //compare the two and put the smallest into mergedArray until we get to the end of one of them
        if(left[leftIndex] <= right[rightIndex]) {
            mergedArray.push(left[leftIndex]);
            leftIndex++;
        } else {
            mergedArray.push(right[rightIndex]);
            rightIndex++;
        }
    }

    //either left or right will still have values that can be tacked onto the end of
    //our sorted array
    let leftRemainder = left.slice(leftIndex);
    let rightRemainder = right.slice(rightIndex);

    return mergedArray.concat(leftRemainder).concat(rightRemainder);
}

const rawArray = [9,1,2,3,4,5,7,88,77,5,5,66,3,8,7,49,59,62,73,24,81,67,32,5,3,2];

let sorted = mergeSort(rawArray);
console.log(sorted);

Performance-wise, my algorithm now matches Tim's (which makes sense given that I read his algorithm several days before sitting down to write my own). I explicitly broke out the leftRemainder and rightRemainder because I think it's easier to follow what's going on that way.

I did confirm that while Tim's algorithm is great, he was wrong about there being just a single item left over after the while loop in the merge function. If you log leftRemainder and rightRemainder out, you'll see that they can contain more than one item.

A big thanks to Tim for putting such a clear, well-written algorithm out there!

Logging objects in NodeJS

I’ve logged objects out to the console previously and had them show up fine, but today I logged an object out and got “[Object]” instead of the key-value pairs I was expecting.

It turns out that console.log in NodeJS only prints objects two levels deep by default. If you want to log an object with more than two levels of nesting, then the best bet is to stringify it.

Like so:

console.log(JSON.stringify(obj, null, 2))

That will indent the levels for you and everything. Hat tip to https://nodejs.dev/how-to-log-an-object-in-nodejs for teaching me that.

Express App Guide

(This post was written back in October. I had it written, but didn’t get it edited before getting fired, so it’s just been sitting on my drive gathering digital dust. It’s still good information, but just keep in mind that the timing is off. Everything I’m talking about happening in the present actually happened almost half a year ago.)

Welcome to another week. This week was a little rough on the development front. One of the other, more experienced, developers is drowning in projects (actually, most of the more experienced developers are drowning in projects pretty much all of the time).

Being a naturally helpful sort, and possibly not nearly as smart as I like to think I am, I offered to give him a hand with what I thought was the easiest part of his task list: new rules in the rules engine that we used to provide all kinds of customization to the user experience.

As it turns out, the rules engine is pretty complicated because it’s called multiple times—from multiple different points inside of the code base—and therefore there is a lot of different context that needs to be understood in order to effectively write the rules.

I suspect that there are many things that the other developers are doing which are more complicated than the rules engine, but it’s feeling like I’m very much in over my head right now.

The week (when I wasn’t helping out with accounting tasks that haven’t fully transitioned away from me yet) was spent taking my best stab at writing new rules, sending them over to my manager (who is the one who wrote the rules engine) for approval, and having him kick them back to me with a verbal explanation as to why they wouldn’t work.

He’s not doing anything wrong—he’s been really patient and awesome, but it’s still a little wearing to continually come up short (inside the privacy of my own mind if not necessarily with regards to his expectations yet).

Added to the less awesomeness of the week is that I haven’t made very much progress on my side projects. So far, I’ve created a simple express app from scratch, got it working on my local box, and successfully loaded it up to Heroku (to serve as the cloud-based compute infrastructure).

I’ve also created a MySQL database using Google Cloud as my infrastructure provider, downloaded SSL certs, used those certs to connect from my local box to the database (using Navicat since that’s what I was using previously at work while doing the accounting).

That all feels like pretty good progress, but it all happened last week. Partially that’s because I’ve been putting in extra hours at work trying to get my arms around the rules engine, and partially that’s because I’ve been stuck on how to use Sequelize to connect to the database while using an SSL cert.

I think I’ve finally found a guide that is pointing me at the right direction as far as that’s going, so I’m hoping to make more progress later today on connecting my app to the database and making it stateful, but right now I need to be working on a blog post, and the logical thing to me would be to share the steps that I used to create my express app and push it up to Heroku.

There are tons of videos out there talking about how to do something like this, but my preferred way of consuming that kind of stuff is in written form. It saves me having to pause and repeat stuff all of the time as I’m trying to follow along with what the presenter is doing.

Putting my steps up on the blog has the benefit of checking the box on the blog post that I’m supposed to be writing today. Even better, it means that anyone who prefers written guides to video guides will have the ability to find my guide. And, for a third (admittedly minor in this day and age of cloud-based backups) benefit, it means that I’ll have my steps recorded for future use in case I need them.

On to the guide:
Preliminaries:
1. I’m working on a Mac (more regarding that in a future post)
2. I’ve installed the Heroku command line interface using Brew
3. I’ve created a Heroku account and a GitHub account
4. I’m using Intellij. You can use Sublime or another editor of your choice.

The actual process:
1. Create a new repository on GitHub.com (or with Git desktop)
a. Choose to initialize it with a README.
b. Clone your new repository from GitHub. (I use GitHub Desktop to pull it down to my local machine.)
c. I have a services directory, and I put each Git repository in a subfolder inside of services.
2. Create a .gitignore file at the root directory of your project.
a. For me this is in Services/Storage-App
b. At this point, I got a message from Intellij asking if I wanted to add .gitignore (if I wanted it to be part of the git repository). I said yes, but said no on all of the stuff that it asked about in the .idea/ folder.
c. I populated .gitignore as per this example:
d. https://github.com/expressjs/express/blob/master/.gitignore
# OS X
.DS_Store*
Icon?
._*

# npm
node_modules
package-lock.json
*.log
*.gz

.idea
.idea/

# environment variables
env*.yml
e. .idea and .idea/ are there because I’m using Intellij. The link above has options for windows or linux.
3. Run npm install express -g from the command line inside of the project folder.
a. This downloads and installs express globally. (Strictly speaking, the global install isn’t required; the local install in step 5 below is what your project actually uses.)
4. Run npm init from the command line inside of the project folder.
a. This creates the package.json file that lists your dependencies and tells Node which file to use as your entry point.
b. I used all the default options other than changing index.js to app.js. You can use either option.
5. Run npm install express --save from the command line inside of the project folder.
a. This brings a whole bunch of express dependencies into the project. (in the node_modules folder)
b. You should now have "express": "^4.16.3" (or an earlier or later version, depending on which version of express you installed) in your list of dependencies in package.json.
6. Create app.js inside of the root directory (same level as package.json and package-lock.json)
a. I did this via Intellij. You should in theory be able to just do it from inside the command line via touch app.js if you wanted to.
7. Inside of app.js add the following lines:
const express = require('express');
const app = express();

const normalizePort = port => parseInt(port, 10);
const PORT = normalizePort(process.env.PORT || 5000);

app.get('/', function(req, res) {
  res.send('Hello World');
}).listen(PORT);

console.log('Waiting for requests. Go to LocalHost:5000');

8. Inside of package.json, at the end of the "test" line, put a comma and then add a new line:
a. "start": "node app.js"
9. Inside the app directory, type npm start
a. (You should see “Waiting for requests. Go to LocalHost:5000” in the terminal)
10. Open a browser window and go to http://localhost:5000/
a. (you should see “Hello World” in the browser)
b. This means that you’ve successfully run the app on your local machine
11. Create a Procfile at the root level.
a. Put web: node app.js into the Procfile.
12. Push the app up to heroku
a. Change to the directory containing the app.
b. Type git init
c. heroku create $APP_NAME --buildpack heroku/nodejs
i. I left the app name blank and just let Heroku create a random name.
ii. That means my command was heroku create --buildpack heroku/nodejs
d. git add .
e. git commit -m "Ready to push to Heroku"
i. You should also be able to do the commit via GitHub Desktop.
f. git push heroku master
g. heroku open
i. This should open your browser and show you “Hello World”.
h. You’ve successfully pushed the app up to Heroku. Congratulations!

That’s it for this week. I’ll come back and add some additional detail as I get a better understanding of what some of these commands do.