GET Requests

Coming from a non-traditional software engineering background (meaning I didn’t finish a college degree in computer science, or go to a bootcamp), I’ve had to piece together things from disparate sources of information.

I feel like this has largely worked for me, and has meant that I have a different, sometimes better, approach to certain aspects of development.

Unfortunately, I sometimes find out that I’ve got a huge blindspot in an area.

My initial experience with HTTP requests left me with the impression that GET requests are used to get information, and POST requests are used to pass information to the server.

For a variety of reasons, I’ve mostly needed to work with POST requests up to this point, but I do remember an instance where I was working with Google’s translation API, and I was shocked that making a call to translate some text was a POST call rather than a GET call. After all, I was requesting information (a translation) from their service.

As it turns out, there is a lot more going on there than I’d initially realized. GET requests are meant to be cacheable and idempotent. Because of this, it is generally bad form to pass information in the request body; everything is supposed to be contained in the URL. That in turn limits how much data you can pass, because there is a practical limit to the length of a URL (roughly 2,000 characters).
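To make that concrete, here’s a quick sketch of how a GET request carries its parameters (the endpoint URL here is made up for illustration):

```javascript
// With a GET request, the parameters ride along in the URL itself.
// URLSearchParams handles the encoding for us.
const params = new URLSearchParams({ q: 'hello world', target: 'de' });
const url = 'https://api.example.com/translate?' + params.toString();

console.log(url); // https://api.example.com/translate?q=hello+world&target=de

// Because everything lives in the URL, a long document to translate
// would quickly blow past the ~2,000 character practical limit.
console.log(url.length < 2000); // fine for this tiny example
```

A POST request, by contrast, carries its payload in the request body, which has no comparable length restriction.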

Google needed to be able to accept translation requests longer than 2,000 characters. Additionally, they don’t want common translations to be cached and then served out to multiple customers. They want to be able to charge each individual customer for each translation, no matter how common the translation is.

There are likely other factors, but those two alone pretty much ensured that using a GET for that translation endpoint was a bad idea.

I found this Medium post really helpful if you want a more detailed explanation.

Converting Database Objects via JSON (part 2)

My last post discussed using a toJSON method/static for converting the result from a database call. I’ve used that to make sure that sensitive information (like a hashed password) isn’t returned to users as the result of an API call.

Today I ran into another unexpected bit of behavior. For some reason, I thought that the toJSON method/static was just changing the string that was returned when JSON.stringify was called.

As it turns out (at least with Sequelize when working with Postgres), it actually changes the underlying object and then produces the corresponding JSON string for the changed object.

This tripped me up because I was trying to debug something, so I logged out the object. Then I promptly saw a bunch of other stuff break, because the subsequent logic expected the user object to have a password, and the object that had been returned from the database no longer had one.
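I haven’t dug into Sequelize’s internals, but the gotcha can be sketched with a plain JavaScript object whose toJSON mutates the instance (the names here are illustrative, not from my project):

```javascript
const user = {
    name: 'Ada',
    password: 'hashed-secret',
    // A mutating toJSON: it strips the field from the object itself
    // rather than returning a sanitized copy.
    toJSON() {
        delete this.password;
        return this;
    }
};

const json = JSON.stringify(user); // '{"name":"Ada"}' -- password stripped
console.log(user.password);        // undefined -- it's gone from the object too
```

A toJSON that returns a sanitized copy instead of mutating `this` wouldn’t have this side effect.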

This is a good reason for jumping into the debugger rather than using logging statements. That was never an option in ServiceNow, so I’ll have to dust off those skills and get back into the habit of using the debugger instead of just using log statements.

Converting Database Objects via JSON

I’ve spoken before about the two Andrew Meads Udemy classes I took a couple of years ago. That was where I was introduced to the concept of establishing a toJSON method or static on a database object.

In the class, we used the toJSON method to remove the password from the object. That way we could be sure that what was returned to the user after an API call wouldn’t have the password on it.

That meshes well with one of my pet theories, which is fixing stuff in the right spot. By putting the logic on the database model, then you don’t have to remember to strip it out each time you return a user via an API call.

I’ve used this now with Mongoose/MongoDB and Sequelize/Postgres.
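The core of the pattern looks something like this. I’m using a plain class as a stand-in for a real model, but with Mongoose or Sequelize the idea is the same: define toJSON on the model so JSON.stringify picks it up automatically:

```javascript
// A plain-JavaScript stand-in for a database model (names are illustrative).
class User {
    constructor(fields) {
        Object.assign(this, fields);
    }

    // JSON.stringify calls toJSON automatically, so every API response
    // built from this object has the password stripped in one place.
    toJSON() {
        const { password, ...safe } = this;
        return safe;
    }
}

const user = new User({ firstName: 'Ada', password: 'hashed-secret' });
console.log(JSON.stringify(user)); // {"firstName":"Ada"}
```

Note that this version returns a copy, so the original object keeps its password for any server-side logic that still needs it.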

Recently, however, I ran into some unexpected behavior. I needed to do some complex logic around a group of records returned from the database.

My approach was to create a new object with some additional information on it for each record that had been returned and then add the database object to this new object. The logic all worked splendidly, but when I converted the new object via JSON.stringify for logging purposes, a bunch of information was logged out that I expected to be removed via my toJSON method.

Apparently, when you add a database model/object to another object, stringifying that new object doesn’t take advantage of the logic defined on the model.

I went back to the logic and, instead of creating a new object, I just attached the additional information to each record I got back from the database. That did the trick quite handily, and I still get the advantage of the toJSON method declared on my database model, so that is the pattern I’ll use going forward.

Update

Apologies all round. My posting schedule has gotten really irregular, but I can share the reason now.

Back in July, I had a manager at one of the big tech companies reach out about the possibility of joining her team. I let her know that I was very much interested, but that I wasn’t prepared yet for the kind of white-boarding interviews that her company used for vetting candidates.

This led to a roughly 6-month process of reviewing data structures and algorithms as I prepared for the interviewing process at her company. When all was said and done with preparing and interviewing (~8 months after she first reached out), I failed the final round at her company, but got offers from 4 other tech companies of varying sizes.

The process has been exhausting, and between that and maintaining my performance at my job, there wasn’t much time left for anything else.

I’d like to say that being done with interviewing will mean that things will calm back down, but I suspect that I’ll have to put in some longer hours to get up to speed with my new job.

Be that as it may, I’m going to try to get back to a more regular schedule.

Here’s to hoping.

Regression Testing

I recently had a discussion with a delightful individual regarding my current side project.

I mentioned that I was using Jest for my regression testing, and she asked something along the lines of “If this is a side project and nobody is forcing you to write unit tests, then why are you writing unit tests?”

For context, I’m pretty sure that she writes unit tests on everything that she writes regardless of whether or not someone is ‘forcing’ her to write them. You could say that she was testing me. (See what I did there?)

She wanted to know my reason for writing them and see how it matched up with her reasons.

I told her that I’d heard a quote attributed to Robert Martin (Uncle Bob) that went something along the lines of “Anyone who thinks that they can move faster by not writing unit tests is smoking some pretty crazy stuff”.

That’s not an exact reproduction of what I heard, and I haven’t been able to find the original quote, but I think the sentiment is correct.

It can feel like you’re ‘wasting’ time writing unit tests when you could instead be creating something new, but the fact is that automated tests save a ton of time when it comes to debugging things down the road. If your unit tests help you avoid introducing a bug into production that would have cost hundreds of thousands of dollars (or more), then you’ve likely paid for the unit tests many times over.

So, I’m still going to keep on building unit tests even when nobody ‘makes’ me do it.

Minimum Viable Product (MVP)

My accounting background, combined with working for a startup previously means that I’m familiar with at least some of the advice given to tech founders.

Generally, the advice is to get your minimum viable product out the door as soon as possible. Make sure that people are actually willing to pay money for your service, and then you can start worrying about cleaning things up and about how your product needs to be architected in order to scale.

I can see the value of the suggestion. You don’t want to spend 10 years building something and then find out that you were solving a problem that nobody else thinks is a problem.

It’s far better to spend 6 months building an MVP, and then find out that your product isn’t going to be a go.

For certain personality types, it’s really hard to move forward on anything until it’s perfect. I don’t normally consider myself to be one of those people. I tend to think that perfect is the enemy of good enough, but I’m running across situations lately that give me a bit more sympathy for that mindset than I used to have.

It can be hard to know when you’re reinventing the wheel, vs. when you’re trying to gather enough foundational knowledge in an area to avoid tripping over something that is otherwise going to torpedo your entire idea.

It’s a tricky judgement call, made all the more tricky when it’s your first rodeo and you’re not 100% sure that you can build the application in the first place.

ServiceNow UI Policies

I really thought that I’d posted about this previously, but I couldn’t find the post, and it’s something that’s tripped me up a couple of times.

When building a UI Policy in ServiceNow, if you want to clear a variable’s value, you need to build the policy so that the clearing happens when the variable is hidden.

Or to put it another way, never build a UI Policy Action that has the ‘Clear the Variable Value’ box checked and which also has ‘Visible’ set to false.

If you do that, then when the variable is shown, the system will clear the value of the variable, which is never what you want to happen. You want the system to clear the value of the variable when the variable is hidden.

Pattern for Pushing API Call to Database

I’m working on a side project where I accept a POST call to an endpoint and then push that data into a user record.

My starting point was based on a Udemy class where the instructor walked through a project that tied an Express app into MongoDB. Here’s the user schema (../models/user.js):

const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
    firstName: {
        type: String,
        required: true,
        trim: true
    },
    lastName: {
        type: String,
        required: true,
        trim: true
    },
    entitlements: {
        type: String,
    },
}, {
    timestamps: true
});

const User = mongoose.model('User', userSchema);

module.exports = User;

So, a first name, a last name, and some kind of entitlements field that we don’t want filled out by the user; we want to apply some kind of business logic and fill it in ourselves.

Here is the user router (../routers/user.js):

const express = require('express');

const router = new express.Router();

// Import Model
const User = require('../models/user');

// Create a new user
router.post('/user', async (req, res) => {
    try {
        //Create the user
        const user = new User(req.body);
        await user.save();
        res.status(201).send(user);
    } catch (error) {
        res.status(400).send({Error: error.message});
    }
})

module.exports = router;

Here is my app.js (which is called by index.js):

const express = require('express');

require('./db/mongoose');

// Routers
const userRouter = require('./routers/user');
const app = express();


// Options
app.use(express.json());
app.use(userRouter);

module.exports = app;

Here is my ./db/mongoose.js file:

const mongoose = require('mongoose');

mongoose.connect(process.env.MONGODB_URL, {
    useNewUrlParser: true,
    useCreateIndex: true,
    useFindAndModify: false,
    useUnifiedTopology: true
});

Here is my index.js:

'use strict';

const app = require('./app');

const port = process.env.PORT || 3000;

app.get('/', (req, res) => {
    res.send("Nothing here, but it's up and working...");
});

app.listen(port, () => {
    console.log("Server is up on port " + port);
})

As you’ve no doubt guessed, this is a very trimmed-down version of my actual app. I believe that I’ve got all of the relevant pieces such that this would run, but if I’ve missed something, I apologize; the point of this post should still come through.

The issue I noticed is that with what I’ve got above, it’s actually possible for the user to pass an ‘entitlements’ property on the request, and it will push that through to the database. Obviously, exploiting that loophole requires that someone figure out that you’ve got an entitlements field on your user record. Then, they have to figure out how you’re representing elevated access in that string field (or however you’re storing the thing that you don’t want users setting themselves).

That’s all very unlikely, but it would be foolish to depend on it not happening.

As far as addressing the gap, my first instinct was to go through and delete the attributes on the request that I don’t want the user to be able to set. Something like this:

const express = require('express');

const router = new express.Router();

// Import Model
const User = require('../models/user');

// Create a new user
router.post('/user', async (req, res) => {
    try {
        //Remove any protected fields
        delete req.body.entitlements;

        //Create the user
        const user = new User(req.body);
        await user.save();
        res.status(201).send(user);
    } catch (error) {
        res.status(400).send({Error: error.message});
    }
})

That does the trick, but it’s asking for problems down the road. Each time I add a new ‘protected’ field that I don’t want users to be able to set, I’ve got to remember to come back here and delete it off of the request body. I can virtually guarantee that I’ll forget to do that at some point.

The better option is to make sure that I only submit ‘non-protected’ fields to the database. Something like this:

const express = require('express');

const router = new express.Router();

// Import Model
const User = require('../models/user');

// Create a new user
router.post('/user', async (req, res) => {
    try {
        //Copy over acceptable attributes so that call can't populate protected fields
        const userObject = {
            firstName: req.body.firstName,
            lastName: req.body.lastName
        }

        //Create the user
        const user = new User(userObject);
        await user.save();
        res.status(201).send(user);
    } catch (error) {
        res.status(400).send({Error: error.message});
    }
})

module.exports = router;

Obviously, that’s not perfect. If I add a new ‘non-protected’ field, I’ve got to remember to come back here and add it to the list of the fields that are being copied over.

The plus side is that failing to remember to do that will cause things to break right away, which will prompt me to come back and fix the bug. Failing to remember won’t result in a vulnerability.
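One way to cut down on that bookkeeping is a small helper that copies an explicit allowlist of fields, so each router only maintains a list of field names. This is just a sketch of the idea, not code from my project:

```javascript
// Copy only the allowed fields from an untrusted payload.
function pick(source, allowedFields) {
    const result = {};
    for (const field of allowedFields) {
        if (source[field] !== undefined) {
            result[field] = source[field];
        }
    }
    return result;
}

// A request body trying to sneak in entitlements:
const body = { firstName: 'Ada', lastName: 'Lovelace', entitlements: 'admin' };
const userObject = pick(body, ['firstName', 'lastName']);
console.log(userObject); // { firstName: 'Ada', lastName: 'Lovelace' }
```

The router then builds the model from `pick(req.body, [...])`, and anything not on the list simply never reaches the database.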

ServiceNow Impacting Scoped Records From Out of Scope

We stumbled across an interesting loophole today. The general rule is that you put things inside of a scoped app so that you can be sure code outside the app isn’t touching the records in your scoped app.

However, if your scoped table is a child of something in the global scope (like the task table), then you can actually make changes to the task-related fields from the global scope.

Something that might either trip you up, or which could be really useful at some point.

ServiceNow GlideRecord Bug

It turns out that if you use an invalid field name in a GlideRecord.addQuery() method call, it doesn’t fail (which is what I would have expected it to do). Instead, it runs through all of the records in the underlying table you’ve queried.

That’s annoying in a situation where you’re just querying for informational purposes and you get back several thousand times more records than you were expecting. It’s a lot worse if you’re making an update and you’ve got a typo in the field name: it will result in an update being made to every record in that table.

Something else to watch for. I always recommend running any update query with a count variable and the update piece commented out first. That lets you confirm that you’re getting the expected number of records back, after which you can go ahead and perform the updates.

A hat tip to my amazing co-worker, Jasmine, who helped figure this out.

Here’s an example of how not to do it:

var gr = new GlideRecord('incident');
gr.addQuery('descriptio', 'Something...');
gr.query();

while(gr.next()) {
	gr.setValue('short_description', 'Redacted');
	gr.update();
}

As you can see, I’ve used ‘descriptio’ instead of ‘description’ in line two.
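And here’s what the dry-run version of that same script might look like, with the updates commented out and a counter in their place. Run this first, sanity-check the count, and only then re-enable the update lines:

```javascript
var count = 0;
var gr = new GlideRecord('incident');
gr.addQuery('description', 'Something...');
gr.query();

while(gr.next()) {
	count++;
	// gr.setValue('short_description', 'Redacted');
	// gr.update();
}

gs.info('Matched ' + count + ' records');
```

If the count comes back as the entire table, you’ve probably got a typo in a field name.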