Underlying Catalog Tables (ServiceNow) & An Update

In a past position, I went to the effort of hunting down and documenting the underlying tables that support catalog items, requests, request items, tasks, and their variables.

I’ve moved on from that position though, and no longer have access to those notes. As I started to re-pull that information, I found this graphic, which shows how the tables are related to each other.

I found it originally on this post, and it was originally created by Julian Hoch.

I thought I would pass it along in case any of my readers found themselves needing to know what tables to reference when coding against that area of ServiceNow.

In other news, the guided tour bug that I reached out to HI Support about has been turned into a problem so that they can get the right group working on resolving it.


Catalog UI Policy Bug?

A quick update on the guided tour bug that I reported a while ago. I’ve been back and forth with HI support several times, and during my last call with them we got them to the point of being able to re-create the bug on a consistent basis.

It appears that the bug (callouts not working on the submit button) only kicks in if you populate an intro and conclusion to the tour. It’s possible that just one of those two (intro & conclusion) is the issue. I didn’t test beyond that.

I found what feels like a bug, but which could just be me trying to use the system in a way that it wasn’t designed to be used.

If you go into a catalog UI policy, it gives you the option to change the catalog item that the UI policy applies to. I did an insert and stay on several UI policies, copying them from one catalog item to another catalog item that had the same variables.

I thought I was being clever and saving myself a bunch of time, but after doing that, none of the catalog UI policies (the UI policy actions) worked.

My best guess is that the variables on the two different catalog items had the same names, but different sys_ids. So they look like the same variables, but aren’t actually the same variables. I haven’t had a chance to test that though, so I’m not 100% sure that’s the cause.

Troubleshooting UI Policies

My last post covered some of the issues I’ve had to troubleshoot with variables lately, but didn’t cover the strategy for doing so.

When troubleshooting UI Policies, often the best thing to do is simply to deactivate all of the policies and then turn them on one at a time until you see the behavior that you’re trying to stop.

Sometimes, depending on what you’re seeing, turning them off one at a time until you see a particular behavior start or stop is the way to go.

Really, when bug hunting, it’s always best to look for strategies that allow you to pin the bug down to a specific section of code. Generally, if you don’t find the bug right away, then it relates to something that you either don’t understand well, or which you understand incorrectly. By eliminating big chunks of code, you reduce what you have to look at to something that is much more manageable.

That drastically improves your odds of figuring out what is driving the bug. It’s a strategy that I’ve learned, forgotten, and then relearned again. I tend to use it instinctively until I run into a new tool, application, or technology. Then, for some reason, I seem to forget to bring the principles with me that worked so well with previous tools and technologies.

A big hat tip to my coworkers Kim and Tatiana for reminding me the right way to go about debugging something. Hopefully it sticks for the next novel situation I find myself in.

ServiceNow Variables

I’ve been working a lot with variables lately. Here are some things that I’ve learned that either weren’t in my classes up to this point, or which didn’t stick for me when my classes covered them.

1. If a variable is refusing to be hidden, check to see if it is mandatory. I can’t say categorically that mandatory variables can’t be hidden, but I’ve definitely seen instances where a variable refused to hide until after it was no longer mandatory.

2. If you have something that isn’t behaving the way you are expecting it to, check whether you have a container that isn’t closed out. That can cause behavior that has been applied to the container to apply to variables that you don’t realize are part of the container.

2a. It’s implied by 2. above, but worth being called out specifically. I’m used to more specific rules trumping more general rules. That isn’t how containers and variables work. Instead, the less specific (UI Policies applied to a container) trumps the more specific (UI Policies applied to the variables inside of the container). If you make a container mandatory, you can’t set one of the variables inside the container to not be mandatory. You would have to set the container to be non-mandatory, and then individually set the other variables mandatory.
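On point 1, the ordering that has worked for me is to clear the mandatory flag before trying to hide the variable. Here’s a rough sketch of that ordering. The variable name is made up, and g_form is stubbed out here so the behavior can be seen outside of an instance (the stub just models the “mandatory variables refuse to hide” behavior I’ve seen; on a real instance you’d use the g_form object that catalog client scripts are given):

```javascript
// Stand-in for ServiceNow's client-side g_form API, modeling the
// behavior described above: a variable that is still mandatory
// refuses to be hidden.
const g_form = {
  state: { mandatory: true, visible: true },
  setMandatory(name, flag) {
    this.state.mandatory = flag;
  },
  setDisplay(name, flag) {
    if (this.state.mandatory) return; // still mandatory: the hide is ignored
    this.state.visible = flag;
  },
};

// Clear the mandatory flag first, then hide the variable.
g_form.setMandatory('business_justification', false);
g_form.setDisplay('business_justification', false);
// g_form.state.visible is now false
```

Calling setDisplay first (while the variable was still mandatory) is exactly the situation where I saw the variable refuse to hide.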


They Need To Feel The Pain

By nature, I don’t like to be mean to people, or make someone feel bad. That predisposes me not to write blog posts that call someone out on something that they are doing wrong.

That being said, there are some things (actions or behaviors) that need to be described so that others can avoid making mistakes that could cause serious harm to their careers.

In a past position I witnessed a terrible dynamic between a founder and their development manager. The founder would go to the development manager with a feature request. The development manager would agree that the feature was a good one, but would raise concerns and highlight problems with the founder’s desired method of implementing the feature.

They would go back and forth for a while, and then the founder would say something to the effect of “I’m the boss, this is my company, do it my way”.

Which is of course the founder’s prerogative even if it’s the wrong decision.

The development manager would go off and build, or have built, the feature using the founder’s methodology. Subsequent to the feature going live, the very problems that the development manager had warned about would begin to surface, and the product would begin to suffer.

I’m not privy to the full history between the development manager and founder. I have some suspicions as to the cause of this dynamic, but I don’t know for sure what led to the subsequent behavior. In my opinion, the correct action to take would be to go back to the founder and say something like:

“We’re seeing problem ‘x’, which is the result of the decision to do ‘y’ while designing this feature. What do you want us to do next?”

or maybe:

“The system is breaking down because of ‘q’. I think we need to do ‘r’.”

Rather than doing that, the development manager would confirm to themselves that the problem was arising because of the issues that they had warned the founder about, and then the development manager would go off and ‘fix’ the code by stripping out the founder’s design and coding it the way that the development manager had wanted to write it all along.

The pros to that course of action:

The development manager avoided a fight with the founder.

The technical problems were solved.

The cons:

The founder became convinced that he could ignore the advice of the development manager who was the actual domain expert when it came to developing software. Rather than making a decision to over-ride the domain expert and then feeling the pain from making a bad decision, the founder became convinced that there was no need to listen to subordinates who disagreed with the founder.

In effect, the founder was always right and everyone else was always wrong. The founder would be cautioned against something, do it, and then as nearly as the founder could tell there were never any consequences for having ignored the domain experts in the company.

As far as the founder could tell, the issues that the domain experts had cautioned against never materialized, which meant that the founder either had to lose faith in the domain experts, assume that he was somehow infallible, or do some combination of the two.

Secondary effects of this decision by the development manager included:

Lengthened development cycles (things were essentially being built twice).

Insufficient focus on technical debt (other problems that the founder had been warned against never materialized, therefore there was no reason to worry about big data problems or other technical debt).

My takeaway from watching this dynamic over an extended period of time is that you should always let people–especially people above you–feel the pain of their decisions.

I think that human beings who are drawing a paycheck have a moral obligation to warn our managers when we see a decision being made that will have negative consequences. How far you go on something like that is a judgement call based on your personal circumstances. The companies that most need someone to stand up and take a strong position against a bad decision are usually the companies that will make an employee suffer the worst consequences for taking that kind of position.

Depending on your role, seniority, and the consequences of the bad decisions being made, you may or may not want to get out of that company as soon as it becomes evident that the bad decision is going to be made in spite of your warnings.

If you stick around though, it is vital that you let people feel the pain from that bad decision. If you don’t, you’re undercutting your credibility and signing yourself up for more of the same.

ServiceNow Documentation Error For Inbound Email Actions

I recently came across an error in the inbound email action documentation from ServiceNow, and I thought I would share my finding in case it is tripping someone else up.

The relevant documentation is here:

https://docs.servicenow.com/bundle/london-servicenow-platform/page/administer/notification/concept/c_InboundEmailActions.html

As you’ll see, there are three types of inbound actions defined: Forward, Reply, and New.

On the Forward action, it indicates that:

“The system classifies an email as a forward only when it meets all these criteria:

  • The subject line contains a recognized forward prefix such as FW:.
  • The email body contains a recognized forward string such as From:.”

After some testing, I can confirm that the FW: needs to be at the start of the subject line. If you have something before the FW: for some reason, it will skip past the Forward rule and get picked up by one of the other two rules.
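To make the distinction concrete, here’s a sketch of the matching behavior I observed. The function name is my own and the real classification logic is internal to ServiceNow; this just illustrates start-of-subject matching versus anywhere-in-body matching:

```javascript
// Sketch of the observed classification: the forward prefix only
// counts at the very start of the subject line, while the forward
// string can appear anywhere in the body.
function classifiesAsForward(subject, body) {
  const hasForwardPrefix = subject.startsWith('FW:'); // not subject.includes('FW:')
  const hasForwardString = body.includes('From:');
  return hasForwardPrefix && hasForwardString;
}
```

So `classifiesAsForward('FW: Server down', 'From: someone@example.com')` comes back true, while a subject like `'[External] FW: Server down'` falls through to the other two rules.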

A relatively minor point admittedly, but one that caused one of my tests on a recent project not to function the way that I’d been expecting it to.

ServiceNow Guided Tour Bug

Just a quick post today to describe a bug that I found in ServiceNow relating to the guided tour functionality under Madrid.

It turns out that ServiceNow is struggling with callouts that are positioned on the ‘submit’ button on the incident form.

It’s possible–likely even–that the issues are with all of the UI Actions positioned up at the top of the header, but I didn’t test that.

What I can confirm is that if you create a guided tour under Madrid with a callout on the submit button on the incident form, it breaks. It never shows that callout, meaning that the tour never completes.

That functionality works on London, but creating the tour on a London instance and moving it over to a Madrid instance also results in the tour breaking at the submit callout.

Interestingly, if you have the guided tour built on London and then upgrade that instance to Madrid, the tour continues to work under Madrid.

I’ve submitted an HI ticket on this bug, so hopefully this is fixed in the near future, but in the meantime, if you have a guided tour that isn’t working, and the callout involves one of the UI Actions at the top of the header, you’re probably not doing anything wrong.

Hacker Rank Array Manipulation Problem

I ran into this problem on HackerRank:

Starting with a 1-indexed array of zeros and a list of operations, for each operation add a value to each of the array elements between two given indices, inclusive. Once all operations have been performed, return the maximum value in your array.

My first go at this works, but isn’t fast enough:

 function arrayManipulation(n, queries) {
    let myArray = [];

    for (let i = 0; i < n; i++) {
       myArray.push(0);
    }

    for (let i = 0; i < queries.length; i++) {
       let operationStart = queries[i][0] - 1;
       let operationEnd = queries[i][1];
       let action = queries[i][2];
       for (let j = operationStart; j < operationEnd; j++) {
          myArray[j] += action;
       }
    }

    // (Spreading a very large array into Math.max can overflow the
    // argument limit; it was fine at the sizes I was testing with.)
    return Math.max(...myArray);
 }

As I thought more about the problem, I realized that only the end points of each operation mattered. I tried a few different approaches, but my algorithms were still taking too long to execute on several of the tests.

I finally threw in the towel and read the discussion, which pointed out that you could treat it as a signal processing problem and only record the changes: in essence, an array where the operation’s value is added at the start point of the range and subtracted one spot after the end of the range.

For example:

If the operation is 2, 4, 100 (meaning add 100 to the 2nd, 3rd, and 4th spots in the array)

[0, 100, 100, 100, 0] could instead be treated as:

[0, 100, 0, 0, -100]
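The equivalence is easy to sanity-check: a running total (prefix sum) over the difference form reconstructs the fully-applied array. A quick sketch using the example above:

```javascript
// Difference-array form of the operation (2, 4, 100) on a 5-element array.
const diff = [0, 100, 0, 0, -100];

// A prefix-sum scan over the difference array reconstructs the
// fully-applied array.
const applied = [];
let runningTotal = 0;
for (const change of diff) {
  runningTotal += change;
  applied.push(runningTotal);
}
// applied is now [0, 100, 100, 100, 0]
```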

The approach being advocated in the comments essentially required n operations to create the array of zeros, then a set of operations to populate the changes, and then n more operations to run back through the array keeping a running total in order to figure out what the largest number is.

That made sense to me, but I wondered if there was a way to combine the two approaches and come up with something that required fewer operations.

My thought was that all you really needed was to record the points where the signal changed, and the magnitude of the change.

function arrayManipulationEdges(queries) {
   // Collect only the points where the running total changes.
   let endPoints = new Set();
   for (let i = 0; i < queries.length; i++) {
      endPoints.add(queries[i][0]);
      endPoints.add(queries[i][1] + 1);
   }

   let sortedEndPoints = Array.from(endPoints);
   sortedEndPoints.sort((a, b) => a - b);

   let values = [];
   for (let i = 0; i < sortedEndPoints.length; i++) {
      values.push(0);
   }

   for (let i = 0; i < queries.length; i++) {
      let leftIndex = sortedEndPoints.findIndex((element) => {
         return element === queries[i][0];
      });

      let rightIndex = sortedEndPoints.findIndex((element) => {
         return element === queries[i][1] + 1;
      });

      values[leftIndex] += queries[i][2];
      values[rightIndex] -= queries[i][2];
   }

   let maximum = 0;
   let runningTotal = 0;
   for (let i = 0; i < values.length; i++) {
      runningTotal += values[i];
      if (runningTotal > maximum) {
         maximum = runningTotal;
      }
   }

   return maximum;
}

The solution above came back as a fail on a bunch of tests (due to a time out) on HackerRank.

That really surprised me, and continued to not make sense to me until I went ahead and unlocked some of the test cases that had more data.

I had been envisioning problems that scaled up to something like a billion data points in the array and ten thousand add operations.

The test cases scaled up to 4k points in the array and 30k addition ranges or 10 million points in the array and 100k addition ranges.

With that type of data set, the overhead from sorting the array of edges grows very quickly, while the overhead from traversing the full array stays modest, because the arrays they were using are much smaller than the ones I’d been envisioning.

In the interest of proving my theory, I used their test data to create some test data that fit the profile I’d been envisioning.

The test data was as follows:

Array of 5 million places with 3 addition ranges.

Array of 4 million places with 3 addition ranges.

Array of 4 million places with 30 addition ranges.

Array of 4 million places with 30 addition ranges.

Array of 4 million places with 4,894 addition ranges.

Array of 4 million places with 4,994 addition ranges.

I then duplicated the test data 27 times and ran a comparison with a stopwatch.

On average, the method suggested by the users at HackerRank took ~8 seconds to run through that data on my machine and my algorithm took ~2.5 seconds to run through the same test set. The margin of error on something like that is probably half a second, so it’s not super accurate, but it does tend to support the idea that depending on the data you’re dealing with, the overhead of sorting the array of edge points can still end up being much less than the overhead of traversing the full array.

Here is the version that the guys and gals in the discussion for the problem on HackerRank suggested:

function arrayManipulation(n, queries) {
   // Two small fixes over the version as posted: the array gets n + 1
   // slots so that the decrement at index b (which can be as large as n)
   // stays in bounds, and the reduce is seeded with 0 so the first
   // element is included in the running-total check.
   let edgesArray = Array(n + 1).fill(0);
   queries.forEach(([a, b, k]) => {
      edgesArray[a - 1] += k;
      edgesArray[b] -= k;
   });
   let maximum = 0;
   let tempAccumulator = 0;
   edgesArray.reduce((acc, cur) => {
      tempAccumulator = acc + cur;
      if (tempAccumulator > maximum) {
         maximum = tempAccumulator;
      }
      return tempAccumulator;
   }, 0);
   return maximum;
}

At some point, I would like to spend some more time trying to tweak my ‘edges only’ solution to figure out a way to reduce the overhead involved in the sort. I’m thinking that putting the edge points into some sort of tree structure might reduce the sorting overhead and allow my solution to be more competitive across a broader set of test cases.

A better route still would be if I could figure out how to sort or partially sort the edge points as I put them into the set or array, but so far nothing is jumping out at me as to how I could make that happen.

As far as optimizing the algorithm from the people at HackerRank goes, I considered putting in a ‘leftMost’ index and a ‘rightMost’ index that could be used to trim the leftmost and rightmost parts of the array so that the final traversal could be accomplished more quickly, but that ends up introducing extra overhead on a problem set like the one they are using. If your operations tended to be clustered around one part of the set of possible locations in the array, it could be helpful on average. I can think of a few real-world situations where that might be the case: maybe calibrating some kind of sensor or machinery, where you know that once it’s mostly aligned, most of the data is going to hit the same part of the sensor, but on the first few runs you don’t know which parts of the sensor are going to be hit, so you have to watch the entire sensor.

It’s definitely an unlikely set of edge cases, but something that’s kind of fun to think about.


Database Structure

I received a bit of advice approximately one year ago with regards to designing database tables. It boiled down to “treat different things differently by putting them in separate tables that have been designed for that specific thing”.

I think that is great advice generally. One of the problems I saw at a past position was that they had one table that was storing three fairly different things. The end result was that the table was difficult to work with, and the code base was more complex than it needed to be in order to deal with the various different edge cases in that table.

In a recent project, I architected a solution that dealt with a number of different tables that all inherited from the ServiceNow task table. My proposal was to have a different custom table for each of the three tables that were children of task.

My boss countered by suggesting that we have just one custom table that dealt with all three of the stock ServiceNow tables, and add another column to it that had the name of the table that particular entry related to. He indicated that building the back end that way would be more scalable if additional tables needed to be covered by my project at a later date, and he was exactly right.

So, my addendum to the rule that I’ve been following for the last year or so is that you want to treat different things differently, and give them each their own table, but things that appear to be different at first glance might not actually be as different as you think. If you’ve got the same fields/columns across different tables, and they are all being populated, then you could probably replace the tables with a single table and use some kind of ENUM to categorize the records appropriately.
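As a toy sketch of the pattern (table and field names here are made up, not our actual schema): records for all three task types live in one collection, and a discriminator field says which table each one came from, so code paths stay generic instead of branching per table.

```javascript
// One consolidated tracking table with a discriminator column,
// instead of three near-identical custom tables.
const trackingRecords = [
  { taskTable: 'incident', number: 'INC0010001' },
  { taskTable: 'change_request', number: 'CHG0030002' },
  { taskTable: 'problem', number: 'PRB0040003' },
];

// Queries filter on the discriminator rather than picking
// a per-table code path.
const incidents = trackingRecords.filter(r => r.taskTable === 'incident');
// incidents.length is 1
```

Adding a fourth task type later is then a new discriminator value, not a new table and a new set of code paths, which is the scalability my boss was pointing at.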

All of which will tend to make your solution more scalable.

ServiceNow Focus & a Review of Learning Resources

I just wanted to give everyone a quick update. I started a new position in Nov 2018 with a company where I’m programming in the ServiceNow ecosystem. That means that my ‘finds’ over the next little while are likely to be focused on ServiceNow quirks and techniques.

However, before I get into that, I wanted to talk about learning to program. I’ve mostly been learning on my own, and I’ve realized that there is a lot of difference between resources that you can use.

I started out using CodeCademy.com. They have a workspace built right into the browser, which I really liked initially. It seemed like a great option because that meant I could get right to the business of programming.

Since then, I’ve spent some time in the Team Treehouse tech degree program, on Pluralsight, and on this Udemy course: https://www.udemy.com/modern-javascript/

Here are my thoughts:

1st Point: Learning syntax can be challenging, but once you’ve got your arms around that, an even bigger challenge is getting your development environment set up so that you can start working on something other than tutorials. I think that is a big part of why people end up moving from one tutorial to another, which is why I highly recommend picking a course where they start out by setting up your development environment.

That is something that I really liked about the Andrew Meads Udemy course that I linked above. Andrew runs you through installing Node, npm, Visual Studio, and a bunch of other really useful tools.

2nd Point: It can be really tempting to go with a free resource when money is tight. I’m not advocating spending money that you don’t have, but don’t discount the value of your time. If you choose resources that don’t do the trick and you double the time required to learn how to program, you’ll end up losing out on months of dev salary, which will cost you far more than a reasonably priced course of study.

3rd Point: The price of a course doesn’t necessarily correspond directly to the quality of the course. I quite liked what I saw of Pluralsight during the three days that I tried out their courses. My feeling is that I would have made much quicker progress if I’d started out with Pluralsight rather than starting out with CodeCademy.com’s free classes. However, Team Treehouse’s tech degree program, while costing $200 a month–much more than Pluralsight–came in behind Pluralsight for me.

In summary, out of all of the options that I’ve tried out so far, Andrew Meads’ JavaScript bootcamp course has been my favorite and I felt like the best value for the money. I liked the Team Treehouse tech degree in theory. You ‘graduate’ 3 months after starting with a tech degree that in theory makes you seem like less of a risk to prospective employers, and they have a Slack channel with moderators who can help answer your questions and get you through any difficulties you might have with the learning process. What I found was that the video courses were very uneven when it came to the quality of the teaching. I was studying Python during the month that I was enrolled in the tech degree program. I thought one of the instructors was really good. The other I found to be less skilled as a teacher, and some of his examples weren’t a very good match for the concept that he was trying to convey.

Likewise, I found the slack channel to be underwhelming. There were a lot of nice people, both moderators and other students, but when I asked questions, it seemed more often than not that the answer was something along the lines of ‘don’t worry about that now’.

I can’t speak to whether or not the tech degree makes someone more employable. It’s possible that there is enough value there to offset both the deficiencies I came across and the $200 per month price tag, but even with me spending 40 or more hours per week working through the tech degree, I found that I wasn’t able to maintain a pace that would allow me to get through the tech degree program in the 3-month minimum time frame, which leads me to my next point.

I started my Udemy class after beginning my new job in the ServiceNow ecosystem. That means that I don’t have anywhere near 40 hours per week to dedicate to JavaScript courses, but even so, my Udemy class–for the very low price of $10 or $11–has kept me busy for nearly 3 months, and I’m still not quite all of the way through the videos. The $600 or more likely $800 that I would have to spend in order to complete the Team Treehouse tech degree would pay for something like 60-80 Udemy courses, and keep me busy learning for years.

Similarly, while I really liked the npm course that I took from Pluralsight, and I can’t say enough good things about the Sequelize class that I started but didn’t get enough time to finish, I have a hard time right now justifying $35 per month for Pluralsight classes when one month of Pluralsight would enable me to buy 3 high-quality Udemy classes that could very possibly keep me busy for 8 or 9 months.

Given that, and the fact that I’ve got a backlog of 4 or 5 Udemy classes that I’ve purchased but not yet even started watching, I expect that the bulk of my money will continue to go to Udemy for the next little while. That being said, I don’t think Pluralsight is a terrible value, and there are a couple of scenarios where I think Pluralsight makes a lot of sense.

If you’re already working in development, and your time is extremely valuable, then a course that is even just slightly better could save you enough time to justify paying 5 or even 10 times as much for a course as what you might pay for something off of Udemy.

Likewise, if you’re a company, and your employees need to learn something while on the clock, then the potential time savings involved in classes that are more tightly focused on just what your developers need to learn could justify Pluralsight’s price point.

More importantly, because part of what Pluralsight ultimately offers is curation of their course catalog, you’re likely to find a very consistent level of quality across their offerings, which probably isn’t going to be the case with a course of study that is stitched together via Udemy classes from various instructors.

If you’ve got a lot of time to dedicate towards learning new skills, and some extra disposable income, then Pluralsight by all appearances can be a great way to go.

Otherwise, my suggestion is just to find a good Udemy class on the subject you’re wanting to learn (I highly recommend Andrew Meads’ class if you want to learn JavaScript). Even if you pick a bad one to start out with and have to purchase a second one, you probably still come out ahead compared to the other options–it’s just such an incredible value.

All of that being said, Pluralsight recently sent me an email with a limited-time offer of a year for $199. It was very hard for me to pass up that deal. I suspect that if I didn’t have a big backlog of Udemy classes that I’ve purchased but not yet completed, and if I had an extra few hours a week that I knew I would be able to dedicate to learning new skills, I would have jumped at that particular Pluralsight offer.