You Can’t Make a Good Deal With a Bad Person

This is a post that I wrote quite some time ago. I’ve got several posts like this that could be considered inflammatory or controversial. Generally, rather than posting them as soon as I’ve written them, I let them sit in draft form, often for years.

Only after significant time and distance have passed from the incident or situation that prompted them do I go ahead and publish them.

I have things that I’ve been learning lately that are more technical in nature, but the most pressing thing that I can think to share relates to something that I read recently from Suzy Welch. She quoted Warren Buffett as saying: “You cannot make a good deal with a bad person.”

She then went on to indicate that the same principle is true when it comes to your career, that “You cannot build a good career with bad people.”

This echoes something that I heard so many years ago that I can’t be 100% sure of the source, but which I believe came from the lawyer who runs “The Passive Voice” blog. The takeaway, as I remember it, was that no contract, however well written, will be good enough to protect you from doing business with someone who is untrustworthy. When you suspect that the other party to a contract will try to cheat you, rather than trying to beef up the language in the contract, you’d be better served by simply walking away from the deal.

Suzy Welch’s article (which you can find here: https://www.cnbc.com/2019/01/18/warren-buffetts-career-advice-could-change-how-you-approach-your-job.html) crystallized some things for me.

It’s very easy to rationalize staying in a job where you’re dealing with bad people. Oftentimes, there are good people in even the worst company. You may think that you just need to accomplish X or Y before you can change jobs, or that it’s too soon to move on once you realize that you’re working in a ‘bad’ company, but my advice is to begin looking for another position immediately and to move on as soon as you can.

There is a lot more that I could say about all of this, and a lot more that I want to say, but that risks straying into dangerous territory. If you find that there are bad people at the top of your company, or your department, get out as quickly as you can. And don’t let yourself forget that anyone who works for a bad person long enough will naturally begin to adopt their values, justifications, and behaviors.

If you find a company that has good people at the top, that should weigh much more heavily in your decision to leave or stay than I realized 20 years ago when I first started my career.

New Posting Schedule

My blog posts have historically been meant to serve a few purposes:

  1. Serve as a positive signal for prospective employers.
  2. Document things I’ve learned so that I have them for future reference.
  3. Serve as an outlet through which I could explore and solidify my views on things.
  4. Serve as a positive signal for prospective consulting clients.

Given that I’ve recently accepted a position with an organization where I can see myself potentially staying for years, item #1 has become much less pressing.

Additionally, for various reasons, I’m not actively looking for another consulting engagement, so item #4 is less in play than it has been previously.

Given all of that, it makes sense to cut back a bit on the frequency of my posts. I’ll aim for every other week roughly on average until something changes. That should be more than sufficient to cover points #2 & #3 for now.

Orthogonality: An Example

“In computer programming, orthogonality means that operations change just one thing without affecting others.”
– “Compactness and Orthogonality”, www.catb.org

Maybe you’re familiar with the concept of orthogonality. Maybe you’re not. Either way, I find that an example is often just the thing for solidifying a concept.

I was recently working on a fairly challenging Agile story, and as part of that, I built a function that took an optional ‘options’ object. Under certain circumstances I wanted to log information for the user, and at other times I didn’t want that function to log anything.

So, I created the options object and had the function check for the log property on it. If log was true, I went ahead and changed the database and logged the change for the user. If it wasn’t true, I didn’t change the database and didn’t log anything.

Simple right? Except I violated the rule of orthogonality, and didn’t even realize that I’d violated it until a few days later as I was in the middle of debugging my function.

My initial assumption was that if I logged, I also needed to update the database. If I didn’t log, I didn’t need to update the database.

Unfortunately, I promptly found myself in a situation where I needed to update the database without logging, and then proceeded to tie myself in knots trying to satisfy that need without changing my underlying assumption.

Ultimately, I did what I should have done from the start: I started checking for two properties on my options object, log and update. Like magic, my problems went away.

Imagine that. 🙂
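For what it’s worth, here’s a minimal sketch of the before and after. The function and field names are made up for illustration, and a simple flag stands in for the actual database write:

```javascript
// Non-orthogonal version: one flag controls two unrelated behaviors.
function saveRecordCoupled(record, options = {}) {
    if (options.log) {
        record.saved = true;                       // stand-in for the database update
        console.log('Updated record ' + record.id);
    }
    // No way to update without logging, or log without updating.
    return record;
}

// Orthogonal version: each flag changes exactly one thing.
function saveRecord(record, options = {}) {
    if (options.update) {
        record.saved = true;                       // stand-in for the database update
    }
    if (options.log) {
        console.log('Updated record ' + record.id);
    }
    return record;
}

// Updating without logging is now trivial:
saveRecord({ id: 42 }, { update: true });
```

The second version is orthogonal because each option changes exactly one behavior, so every combination of the two flags is expressible without contortions.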

Update

Apologies all round. My posting schedule has gotten really irregular, but I can share the reason now.

Back in July, I had a manager at one of the big tech companies reach out about the possibility of joining her team. I let her know that I was very much interested, but that I wasn’t prepared yet for the kind of white-boarding interviews that her company used for vetting candidates.

This led to a roughly 6-month process of reviewing data structures and algorithms as I prepared for the interviewing process at her company. When all was said and done with preparing and interviewing (~8 months after she first reached out), I failed the final round at her company, but got offers from 4 other tech companies of varying sizes.

The process has been exhausting, and between that and maintaining my performance at my job, there wasn’t much time left for anything else.

I’d like to say that being done with interviewing will mean that things will calm back down, but I suspect that I’ll have to put in some longer hours to get up to speed with my new job.

Be that as it may, I’m going to try to get back to a more regular schedule.

Here’s to hoping.

Regression Testing

I recently had a discussion with a delightful individual regarding my current side project.

I mentioned that I was using Jest for my regression testing, and she asked something along the lines of “If this is a side project and nobody is forcing you to write unit tests, then why are you writing unit tests?”

For context, I’m pretty sure that she writes unit tests on everything that she writes regardless of whether or not someone is ‘forcing’ her to write them. You could say that she was testing me. (See what I did there?)

She wanted to know my reason for writing them and see how it matched up with her reasons.

I told her that I’d heard a quote attributed to Robert Martin (Uncle Bob) that went something along the lines of “Anyone who thinks that they can move faster by not writing unit tests is smoking some pretty crazy stuff”.

That’s not an exact reproduction of what I heard, and I haven’t been able to find the original quote, but I think the sentiment is correct.

It can feel like you’re ‘wasting’ time writing unit tests when you could instead be creating something new, but the fact is that automated tests save a ton of time when it comes to debugging things down the road, and if your unit tests help you avoid introducing a bug into production that prevents the loss of hundreds of thousands (or more) of dollars, then you’ve likely just paid for the unit tests many times over.

So, I’m still going to keep on building unit tests even when nobody ‘makes’ me do it.

Minimum Viable Product (MVP)

My accounting background, combined with working for a startup previously means that I’m familiar with at least some of the advice given to tech founders.

Generally, the advice is to get your minimum viable product out the door as soon as possible. Make sure that people are actually willing to pay money for your service, and then you can start worrying about cleaning things up and about how your product needs to be architected in order to scale.

I can see the value in the suggestion. You don’t want to spend 10 years building something only to find out that you were solving a problem that nobody else feels is a problem.

It’s far better to spend 6 months building an MVP, and then find out that your product isn’t going to be a go.

For certain personality types, it’s really hard to move forward on anything until it’s perfect. I don’t normally consider myself to be one of those people. I tend to think that perfect is the enemy of good enough, but I’m running across situations lately that give me a bit more sympathy for that mindset than I used to have.

It can be hard to know when you’re reinventing the wheel, vs. when you’re trying to gather enough foundational knowledge in an area to avoid tripping over something that is otherwise going to torpedo your entire idea.

It’s a tricky judgement call, made all the more tricky when it’s your first rodeo and you’re not 100% sure that you can build the application in the first place.

ServiceNow UI Policies

I really thought that I’d posted about this previously, but I couldn’t find the post, and it’s something that’s tripped me up a couple of times.

When building a UI policy in ServiceNow, if you want to be able to clear the variable value, then you need to build it to hide the variable.

Or to put it another way, never build a UI Policy Action that has the ‘Clear the Variable Value’ box checked and which also has ‘Visible’ set to false.

If you do that, then when the variable is shown, the system will clear the value of the variable, which is never what you want to happen. You want the system to clear the value of the variable when the variable is hidden.

Pattern for Pushing API Call to Database

I’m working on a side project where I accept a POST call to an endpoint and then push that data into a user record.

My starting point was based on a Udemy class where the instructor walked through a project that tied an Express app into MongoDB. Here’s the user schema (../models/user.js):

const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
    firstName: {
        type: String,
        required: true,
        trim: true
    },
    lastName: {
        type: String,
        required: true,
        trim: true
    },
    entitlements: {
        type: String,
    },
}, {
    timestamps: true
});

// Compile the schema into a model and export it for the router to use
const User = mongoose.model('User', userSchema);

module.exports = User;

So, a first name, a last name, and some kind of entitlements field that we don’t want to have filled out by the user–we want to apply some kind of business logic and fill that in ourselves.

Here is the user router (../routers/user.js):

const express = require('express');

const router = new express.Router();

// Import Model
const User = require('../models/user');

// Create a new user
router.post('/user', async (req, res) => {
    try {
        //Create the user
        const user = new User(req.body);
        await user.save();
        res.status(201).send(user);
    } catch (error) {
        res.status(400).send({Error: error.message});
    }
})

module.exports = router;

Here is my app.js (which is called by index.js):

const express = require('express');

require('./db/mongoose');

// Routers
const userRouter = require('./routers/user');
const app = express();


// Options
app.use(express.json());
app.use(userRouter);

module.exports = app;

Here is my ./db/mongoose.js file:

const mongoose = require('mongoose');

mongoose.connect(process.env.MONGODB_URL, {
    useNewUrlParser: true,
    useCreateIndex: true,
    useFindAndModify: false,
    useUnifiedTopology: true
});

Here is my index.js:

'use strict';

const app = require('./app');

const port = process.env.PORT || 3000;

app.get('/', (req, res) => {
    res.send("Nothing here, but it's up and working...");
});

app.listen(port, () => {
    console.log('Server is up on port ' + port);
})

As you’ve no doubt guessed, this is a very trimmed down version of my actual app. I believe that I’ve got all of the relevant pieces such that this would run, but if I’ve missed something, I apologize; the point of this post should still come through.

The issue I noticed is that with what I’ve got above, it’s actually possible for the user to pass an ‘entitlements’ property on the request and it will push that through to the database. Obviously, that requires that the person trying to exploit the loophole figures out that you’ve got an entitlements field on your user record. Then, they have to figure out how you’re representing elevated access in that string field (or however you’re storing the thing that you don’t want users setting themselves).

That’s all very unlikely, but it would be foolish to depend on it not happening.

As far as addressing the gap, my first instinct was to delete the attributes off of the request that I don’t want the user to be able to set. Something like this:

const express = require('express');

const router = new express.Router();

// Import Model
const User = require('../models/user');

// Create a new user
router.post('/user', async (req, res) => {
    try {
        //Remove any protected fields
        delete req.body.entitlements;

        //Create the user
        const user = new User(req.body);
        await user.save();
        res.status(201).send(user);
    } catch (error) {
        res.status(400).send({Error: error.message});
    }
})

module.exports = router;

That does the trick, but it’s asking for problems down the road. Each time I add a new ‘protected’ field that I don’t want users to be able to set, I’ve got to remember to come back here and delete it off of the request body. I can virtually guarantee that I’ll forget to do that at some point.

The better option is to make sure that I only submit ‘non-protected’ fields to the database. Something like this:

const express = require('express');

const router = new express.Router();

// Import Model
const User = require('../models/user');

// Create a new user
router.post('/user', async (req, res) => {
    try {
        //Copy over acceptable attributes so that the call can't populate protected fields
        const userObject = {
            firstName: req.body.firstName,
            lastName: req.body.lastName
        }

        //Create the user
        const user = new User(userObject);
        await user.save();
        res.status(201).send(user);
    } catch (error) {
        res.status(400).send({Error: error.message});
    }
})

module.exports = router;

Obviously, that’s not perfect. If I add a new ‘non-protected’ field, I’ve got to remember to come back here and add it to the list of the fields that are being copied over.

The plus side is that forgetting to do that will cause things to break right away, which will prompt me to come back and fix the bug. Forgetting won’t result in a vulnerability.
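One way to keep that allow-list from being scattered around the codebase is to pull it into a small helper. This is just a sketch with hypothetical names, not something from the course or my actual app:

```javascript
// Fields a caller is allowed to set on a new user.
const ALLOWED_USER_FIELDS = ['firstName', 'lastName'];

// Copy only the permitted keys from the request body;
// anything else (like 'entitlements') is silently dropped.
function pickAllowed(body, allowedFields) {
    return Object.fromEntries(
        Object.entries(body).filter(([key]) => allowedFields.includes(key))
    );
}

// In the route handler this would become:
//     const user = new User(pickAllowed(req.body, ALLOWED_USER_FIELDS));
const sample = pickAllowed(
    { firstName: 'Ada', lastName: 'Lovelace', entitlements: 'admin' },
    ALLOWED_USER_FIELDS
);
console.log(sample); // { firstName: 'Ada', lastName: 'Lovelace' }
```

With this shape, adding a new non-protected field is a one-line change to the array, and the fail-fast property from above is preserved: a forgotten field breaks visibly rather than opening a hole.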

ServiceNow Impacting Scoped Records From Out of Scope

We stumbled across an interesting loophole today. The general rule is that you put things inside of a scoped app so that you can make sure other code isn’t touching the records in your scoped app.

However, if your scoped table is a child of something in the global scope (like the task table), then you can actually make changes to the task-related fields from the global scope.

Something that might either trip you up, or which could be really useful at some point.

ServiceNow GlideRecord Bug

It turns out that if you use an invalid field name in a GlideRecord.addQuery() method call, it doesn’t fail (which is what I would have expected it to do). Instead, the query silently matches every record in the underlying table.

That’s annoying in a situation where you’re just querying for informational purposes and you get back several thousand times more records than you were expecting. It’s a lot worse if you’re making an update and you’ve got a typo in the field name: you’ll end up updating every record in that table.

Something else to watch for. I always recommend running any update query with a count variable and the update piece commented out first. That lets you confirm that you’re getting the expected number of records back, after which you can go ahead and perform the updates.
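The dry-run pattern looks something like this (ServiceNow server-side script, runnable in a background script; the query values are placeholders):

```javascript
// Dry run: count the matches with the update piece commented out.
var count = 0;
var gr = new GlideRecord('incident');
gr.addQuery('description', 'Something...');
gr.query();

while (gr.next()) {
	count++;
	// Uncomment only after confirming the count looks right:
	// gr.setValue('short_description', 'Redacted');
	// gr.update();
}

gs.info('Matched ' + count + ' records');
```

If the count comes back as the entire table instead of the handful you expected, you’ve just been saved from a very bad day.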

A hat tip to my amazing co-worker, Jasmine, who helped figure this out.

Here’s an example of how not to do it:

var gr = new GlideRecord('incident');
gr.addQuery('descriptio', 'Something...');
gr.query();

while(gr.next()) {
	gr.setValue('short_description', 'Redacted');
	gr.update();
}

As you can see, I’ve used ‘descriptio’ instead of ‘description’ in line two.