Database Update Command

I’ve been reworking my side project to use Typescript and the pg npm package (so that I can use a Postgres database). In the course of doing that, I started out using ‘INSERT INTO’ statements (with ON CONFLICT ... DO UPDATE clauses, i.e. upserts) because they have the benefit of creating a new record if no record currently exists with that primary key, and updating the existing record if one does.
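
For reference, that kind of upsert looks something like this (the table and columns here are hypothetical):

INSERT INTO users (id, email)
VALUES ('user-abc', 'someone@example.com')
ON CONFLICT (id)
DO UPDATE SET email = EXCLUDED.email;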

That has by and large worked well, because the initial tables I’ve been rebuilding used a string as a primary key rather than an auto-incrementing integer (or, in Postgres parlance, a SERIAL).

Once I started trying to do INSERT statements against a table with a more ‘normal’ auto-incrementing primary key, though, I ran into problems around the key itself (with a SERIAL key, you don’t normally supply the value yourself, which undermines the whole upsert-on-the-primary-key approach).

Additionally, I came across a scenario where I needed to decrement a number in the database, and I didn’t want to do a query to get the current number and then an insert, because I suspected that would mean more load on my database and more latency for my application.

You may not run across the same issues, but if you do run into problems with the INSERT approach, it may be worth playing around with the UPDATE command.

Here is an example of an update query:

UPDATE users
SET instances = instances - 1
WHERE id = 102
RETURNING *;

That syntax is fairly easy to find, but I couldn’t find an example of parameterization with the pg npm package, so I had to do some experimenting with that. More on that next time.
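
In the meantime, here’s a minimal sketch of what parameterization generally looks like with pg (the pool setup and the users table here are assumptions, not my actual code):

import { Pool } from 'pg';

const pool = new Pool(); // connection settings omitted

const decrementInstances = async (userId: number) => {
  // $1 is a Postgres positional placeholder; pg substitutes the values
  // array for you, so nothing gets string-concatenated into the SQL
  const result = await pool.query(
    'UPDATE users SET instances = instances - 1 WHERE id = $1 RETURNING *',
    [userId]
  );
  return result.rows[0]; // the updated row, courtesy of RETURNING *
};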

Postgres ‘pg’ NPM Connection Pools

I’m currently in the middle of reworking my side project to use the pg npm package. As part of that, I now have to worry about managing the connection pool, something that was handled behind the scenes by the ORM I was using previously (the same ORM I was having problems getting to play nicely with Typescript).

Online, I saw a recommendation to create a pool like so:

import { Pool } from 'pg';

const pool = new Pool({
  host: 'localhost',
  user: 'database-user',
  max: 20, // maximum number of clients the pool will hold
  idleTimeoutMillis: 30000, // close a client after it sits idle for 30 seconds
  connectionTimeoutMillis: 2000, // give up after 2 seconds if no client is available
});

And then to execute database operations like so:

const wrapperFunc = async (): Promise<void> => {
  const client = await pool.connect(); // check out a client from the pool (pg creates a connection if needed)

  try {

    const query = `
    CREATE TABLE users (
        email varchar,
        firstName varchar,
        lastName varchar,
        age int
    );
    `;

    await client.query(query); // use the connection
  } catch (error) {
    // Deal with your catch error
  } finally {
    client.release(); // release the connection
  }
}

Releasing the client is important because otherwise you end up with a bunch of zombie connections that your database is keeping open, but which aren’t doing live work anymore. Eventually, if you don’t release them, the pool hits its max size (and the database its max connection count), and requests for new connections start failing.

I had the bright idea of changing up my code to look more like this, which I thought would work just as well:

const wrapperFunc = async (): Promise<void> => {
  const client = await pool.connect(); // check out a client from the pool (pg creates a connection if needed)

  try {

    const query = `
    CREATE TABLE users (
        email varchar,
        firstName varchar,
        lastName varchar,
        age int
    );
    `;

    await client.query(query); // use the connection
    client.release(); // release the connection
    return
  } catch (error) {
    client.release(); // release the connection
    // Deal with your catch error
  }
}

I thought that would still result in the client being released regardless of whether an error was thrown or not, but I quickly started getting errors when running my unit tests.

I can’t explain why yet, but for some reason, positioning client.release() the way I have above doesn’t release the clients like it should, so I’ll be sticking with putting my client.release() calls in a finally block going forward. If nothing else, a finally block runs on every exit path, including ones that get added later, so there’s only one release call to get right.
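
One pattern worth considering (a sketch of my own, not something from the pg docs): wrap the acquire/try/finally dance in a helper so it’s impossible to forget the release.

import { Pool, PoolClient } from 'pg';

const pool = new Pool();

// Checks out a client, runs the callback, and always returns the client
// to the pool no matter how the callback exits (return or throw).
const withClient = async <T>(fn: (client: PoolClient) => Promise<T>): Promise<T> => {
  const client = await pool.connect();
  try {
    return await fn(client);
  } finally {
    client.release();
  }
};

// Usage:
// const result = await withClient((client) => client.query('SELECT NOW()'));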

Thoughts on Typescript

I once worked at a company where there was an ongoing rivalry among the developers between Java and Javascript/NodeJS.

To be honest, I didn’t know enough back in those days to have anything remotely resembling an informed opinion. I’m not sure that much has changed there, but one of the things that surprised me a bit was the way that one of the Java proponents would dismiss Typescript as just being a way to try and make Javascript more like Java.

I ended up ultimately choosing to focus my learning efforts more on Javascript/NodeJS simply because I liked the idea of not having to master Java for the back end and Javascript for the front end, but I always figured that someone who was in favor of strong typing would be in favor of bringing stronger typing to Javascript.

I’ve now been working with Typescript for a few months. There is still obviously a ton that I need to get my arms around with Typescript, but my initial impression has largely been favorable, for all of the reasons people typically cite.

I do occasionally miss being able to do crazy things with my variables, but by and large those kinds of things are likely to end up biting me at some future point. By giving that up, I get instant feedback from my code editor when I mis-remember the type of a variable and try to use it in a way that would otherwise have gotten me in trouble on the first test run of my new code.
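
A trivial, made-up illustration of the kind of feedback I mean:

const userIds: number[] = [101, 102];

// In plain Javascript the line below would only blow up at runtime;
// in Typescript the editor flags it before the code ever runs:
// userIds.toUpperCase();
// => Property 'toUpperCase' does not exist on type 'number[]'.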

Getting things set up with some of the packages I was previously using on my side project proved to be beyond my current abilities, but once the setup is done, I seem to be developing at a faster rate simply because I’m chasing down fewer typing errors.

Typescript doesn’t magically make NodeJS something it isn’t. You still have to worry about blocking the event loop, but all in all (for what it’s worth), I think it’s a helpful extension to the base language.
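
To make that concrete, here’s a contrived sketch; the type annotations don’t stop this from stalling everything else:

// A synchronous busy-wait: while this runs, no other requests, timers,
// or callbacks get serviced, type annotations or no type annotations.
const blockFor = (ms: number): void => {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // spin
  }
};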

Logging Adventures

In ServiceNow you don’t really have a debugger that you can use to trace what’s happening in your code. Instead, most people spend a lot of time using logging statements.

I’ve gotten into the habit of using JSON.stringify(object, null, 2) to convert objects into something that can be logged out. I ran into an occasion recently where my logging statements were printing an object that was missing a bunch of the information I knew it contained.

Pro tip (/sarcasm): Maps don’t convert and print via JSON.stringify, so if you have an object that contains a Map, you’ll have to try something else :).
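
To illustrate (a contrived example, and the Object.fromEntries workaround assumes a runtime that supports it):

const settings = new Map([['retries', 3]]);

// A Map serializes to an empty object:
console.log(JSON.stringify({ settings }, null, 2)); // { "settings": {} }

// Converting the Map to a plain object first preserves the entries:
console.log(JSON.stringify({ settings: Object.fromEntries(settings) }, null, 2));
// { "settings": { "retries": 3 } }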

On Staffing

This is a potentially touchy subject.

Every time I’ve ever ‘settled’ when hiring someone onto my team, I’ve ended up wishing that I hadn’t settled.

This has held true both when I was the hiring manager (back when I was a controller) and when I’ve had input into hiring decisions without being the hiring manager.

Obviously, no candidate is perfect because no person is perfect. Beyond that, you’re unlikely to get a near-perfect senior developer for what the market is paying a junior developer. And if you somehow did, that person is going to leave if they ever figure out just how far under market you’ve got them, so you’re depending on them continuing to be uninformed, which isn’t a great place to put yourself.

Setting aside all of that, my position is that you should always do your very best to hire someone whom you’re really excited about, someone who is bright, driven, and conscientious.

If your budget is on the lower side, then that often means that you’re hiring someone with a bit less experience than you would have ideally wanted to bring in. That can be a tough combination because it means that you’re training someone to do a job that they are probably going to be outgrowing in a year or two.

Ideally, your department or team is growing too, and you can move them on to bigger and better things in your organization. Even if that isn’t the case, though, someone with the right mindset generally finds a way to make it worth having trained them before they move on. And just because they move on doesn’t mean that they won’t be back, or that you won’t work with them again at some other point.

Coming at things from the job seeker side, being turned down for a position that you’re sure you would have been great at is frustrating and incredibly disheartening. I get that; I’ve lived it, and the struggle is real.

Coming at things from the hiring manager side, I get that the cost of a bad hire is really high. You’re probably going to spend a lot of time trying to figure out if they are hitting the performance levels they should or not (the more complicated their job and skill set, generally the harder it is to determine that), then you’ve got to spend time coaching them, then you’ve got to go through the painful process of firing them, and then you’ve got to interview candidates again.

Given all of that, most organizations and hiring managers tend to be willing to pass on candidates who would ‘probably’ be awesome in the role and wait until they find one they’re ‘certain’ will be awesome in the role.

It’s definitely not a perfect system, and I understand why people (on both sides) get frustrated.

Sometimes it helps me to imagine a world where hiring managers don’t do that. Instead, they say “I’m going to fill this role in the next 10 days. I don’t care how many applicants I do or don’t interview during that time. I’ll give it 10 days and then just hire the best of the bunch…or the one applicant who is minimally qualified.”

Having seen some bad hires in my day, that sentiment would give me the creepy crawlies.

As I described above, you have to be reasonable in your expectations. You have to calibrate your expectations against the market you’re in (the pool of talent may be deep and vast, or a bit lacking) and against what you can afford to pay.

That calibration process can take some time, but I think you have to respect that process of calibrating yourself, making your expectations reasonable, and then finding someone who (inside the limitations you’re working with) makes you really excited to be bringing them onto your team.

Obviously, emergencies arise, and we don’t work in a perfect world, but not respecting that process would generally represent a failure of leadership.

Getters and Setters

I’m in the middle of a Typescript course. It’s been very informative; however, in a recent video the teacher created a class and then proceeded to reference one of its attributes directly rather than creating a getter to retrieve the value.

I proceeded to adjust my notes so that the class had a getter and then updated the calling code to use it.

All of which made me laugh a bit. It’s a minor point, and I get that the teacher is balancing how far to go into any one given topic, but it’s amusing to me that just a few years ago I was watching another developer program and couldn’t see the point of the getters and setters he was building into his class.

For anyone who isn’t convinced: getters and setters give you a layer of indirection, which means you’re buying yourself the ability to change the inner workings of your class, if necessary, without blowing up the rest of your code.
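
For example (a minimal sketch with made-up names):

class Account {
  private _balance = 0;

  get balance(): number {
    return this._balance;
  }

  set balance(value: number) {
    if (value < 0) {
      throw new Error('balance cannot be negative');
    }
    this._balance = value;
  }
}

const account = new Account();
account.balance = 50;         // runs the setter (and its validation)
console.log(account.balance); // runs the getter

// Because callers go through the accessors, the class is free to change
// how the balance is stored or computed without breaking any of them.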

Given some of my recent experiences, I’m a big fan of adding layers of indirection like that anywhere it makes sense. It saves a lot of time and effort in the long run.

Database Denormalization

I’ve been working on a side project for quite a while now. It’s been a great learning experience, and I hope to eventually have something that I can monetize.

One of the initial decisions revolved around what to use for my database solution.

I ended up with a normalized SQL database solution for various reasons, but I flirted quite heavily with the idea of a NoSQL solution.

As I was building out my SQL tables, though, I came to the conclusion that a fully normalized approach was the wrong answer.

I have certain tables where records get created and where some fields see edits afterward, but where other fields will never be edited after the initial write.

Under a very common use case, I need data from what would be two different tables under a fully normalized approach. That would mean that every time a request was made to that endpoint, the database would have to perform a join across those two tables.

I’m still getting my feet under me as it relates to the correct way to scale a system, but my sense is that your database generally ends up being the constraint that is the hardest to scale, so I’ve tried to build with an aim towards minimizing database load whenever possible.

A denormalized schema generally trades additional space consumption for reduced load via fewer joins, so that’s exactly the trade I ended up making.

I took a few fields from one table and am also storing them on the table that my endpoint is primarily reading from. The fields in question aren’t being updated after the initial write, so I’m not introducing load to keep the two tables in sync, and the elimination of those joins should drastically push back the point where I have to worry about database load being a problem.
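
As a hypothetical sketch of the shape of that change (none of these tables or columns are my real schema):

-- Fully normalized: the hot endpoint joins on every request.
SELECT o.id, o.status, u.first_name, u.last_name
FROM orders o
JOIN users u ON u.id = o.user_id
WHERE o.id = $1;

-- Denormalized: the never-edited user fields are copied onto orders
-- at insert time, so the same read touches a single table.
SELECT id, status, first_name, last_name
FROM orders
WHERE id = $1;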

You Can’t Make a Good Deal With a Bad Person

This is a post that I wrote quite some time ago. I’ve got several posts like this that could be considered inflammatory or controversial. Generally, rather than posting them as soon as I’ve written them, I let them sit in draft form, often for years.

Only after significant time and distance has passed from the incident or situation that caused them to be written, do I go ahead and publish them.

I have things that I’ve been learning lately that are more technical in nature, but the most pressing thing I can think to share relates to something that I read recently from Suzy Welch. She quoted Warren Buffett as saying: “You cannot make a good deal with a bad person.”

She then went on to indicate that the same principle is true when it comes to your career, that “You cannot build a good career with bad people.”

This echoes something that I heard so many years ago that I can’t be 100% sure where I heard it, but which I believe came from the lawyer who runs “The Passive Voice” blog. The takeaway, as I remember it, was that no contract, however well written, will be good enough to protect you from doing business with someone who is untrustworthy. When you suspect that the person you’re contracting with will try to cheat you, rather than trying to beef up the language in the contract, you’d be better served by simply walking away from the deal.

Suzy Welch’s article (which you can find here: https://www.cnbc.com/2019/01/18/warren-buffetts-career-advice-could-change-how-you-approach-your-job.html) crystallized some things for me.

It’s very easy to rationalize staying in a job where you’re dealing with bad people. Oftentimes, there are good people even at the worst companies. You may think that you just need to accomplish X or Y before you can change jobs, or that it’s too soon to move on once you realize that you’re working at a ‘bad’ company, but my advice is to begin looking for another position immediately and to move on as soon as you can.

There is a lot more that I could say about all of this–a lot more that I want to say about it, but that risks straying into dangerous territory. If you find that there are bad people at the top of your company, or your department, get out as quickly as you can, and don’t let yourself forget that anyone who stays working for a bad person for long enough will naturally begin to adopt their values, justifications and behaviors.

If you find a company that has good people at the top, that should weigh in your decisions to leave or stay much more than I realized 20 years ago when I first started my career.

New Posting Schedule

My blog posts have historically been meant to serve a few purposes.

  1. Serve as a positive signal for prospective employers.
  2. Document things I’ve learned so that I have them for future reference.
  3. Serve as an outlet through which I could explore and solidify my views on things.
  4. Serve as a positive signal for prospective consulting clients.

Given that I’ve recently accepted a position with an organization where I can potentially see myself staying for years, item #1 has become much less pressing.

Additionally, for various reasons, I’m not actively looking for another consulting engagement, so item #4 is less in play than it has been previously.

Given all of that, it makes sense to cut back a bit on the frequency of my posts. I’ll aim for every other week roughly on average until something changes. That should be more than sufficient to cover points #2 & #3 for now.

Orthogonality: An Example

“In computer programming, orthogonality means that operations change just one thing without affecting others.”
“Compactness and Orthogonality”, www.catb.org

Maybe you’re familiar with the concept of orthogonality. Maybe you’re not. Either way, I find that an example is often just the thing when it comes to solidifying a concept.

I was recently working on a fairly challenging Agile story, and as part of that, I built a function that took in an optional ‘options’ object. I was finding that under certain circumstances I wanted the function to log information to the user, and other times I didn’t.

So, I created the options object and had the function check for the log property on options. If it was true, I went ahead and changed the database and logged the change to the user. If it wasn’t true, I didn’t change the database and didn’t log anything to the user.

Simple, right? Except I had violated the rule of orthogonality, and didn’t even realize it until a few days later when I was in the middle of debugging my function.

My initial assumption was that if I logged, I also needed to update the database. If I didn’t log, I didn’t need to update the database.

Unfortunately, I promptly found myself in a situation where I needed to update the database without logging, and then proceeded to tie myself in knots trying to satisfy that need without changing my underlying assumption.

Ultimately, I did what I should have done from the start: I started checking for two properties on my options object, log and update. Like magic, my problems went away.
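
Something like this, in sketch form (the names are invented for illustration):

// Each property controls exactly one thing: 'update' decides whether the
// database changes, and 'log' decides whether the change gets logged.
interface ChangeOptions {
  log?: boolean;
  update?: boolean;
}

const applyChange = async (change: string, options: ChangeOptions = {}): Promise<void> => {
  if (options.update) {
    // ...write the change to the database (omitted in this sketch)...
  }
  if (options.log) {
    console.log(`Applied change: ${change}`);
  }
};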

Imagine that. 🙂