ServiceNow Documentation Error For Inbound Email Actions

I recently came across an error in the inbound email action documentation from ServiceNow, and I thought I would share my finding in case it is tripping someone else up.

The relevant documentation is here:

https://docs.servicenow.com/bundle/london-servicenow-platform/page/administer/notification/concept/c_InboundEmailActions.html

As you’ll see, there are three types of inbound actions defined: Forward, Reply, and New.

On the Forward action, it indicates that:

“The system classifies an email as a forward only when it meets all these criteria:

  • The subject line contains a recognized forward prefix such as FW:.
  • The email body contains a recognized forward string such as From:.”

After some testing, I can confirm that the FW: needs to be at the start of the subject line. If something appears before the FW: for some reason, the email will skip past the Forward rule and get picked up by one of the other two rules.

A relatively minor point admittedly, but one that caused one of my tests on a recent project not to function the way that I’d been expecting it to.
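Based on that testing, a minimal sketch of the classification behavior I observed might look like the following. The function name and regular expressions here are my own illustration, not ServiceNow's actual implementation:

```javascript
// Hypothetical illustration of the observed behavior: the forward prefix
// is only recognized at the very start of the subject line.
function classifiesAsForward(subject, body) {
    var hasForwardPrefix = /^fw:/i.test(subject.trim()); // must lead the subject
    var hasForwardString = /from:/i.test(body);          // body needs a From: string
    return hasForwardPrefix && hasForwardString;
}
```

With a subject of “FW: Outage” this returns true, but “Re: FW: Outage” fails the prefix test and would fall through to the Reply or New rules.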

ServiceNow Guided Tour Bug

Just a quick post today to describe a bug that I found in ServiceNow relating to the guided tour functionality under Madrid.

It turns out that ServiceNow is struggling with call outs that are positioned on the ‘submit’ button on the incident form.

It’s possible, likely even, that the issue affects all of the UI Actions positioned up at the top of the header, but I didn’t test that.

What I can confirm is that if you create a guided tour under Madrid with a call out on the submit button on the incident form, it breaks. It never shows that call out, meaning that the tour never completes.

That functionality works on London, but creating the tour on a London instance and moving it over to a Madrid instance also results in the tour breaking at the submit call out.

Interestingly, if you have the guided tour built on London and then upgrade that instance to Madrid, the tour continues to work under Madrid.

I’ve submitted an HI ticket on this bug, so hopefully it will be fixed in the near future. In the meantime, if you have a guided tour that isn’t working, and the call out involves one of the UI Actions at the top of the header, you’re probably not doing anything wrong.

Hacker Rank Array Manipulation Problem

I ran into this problem on HackerRank:

Starting with a 1-indexed array of zeros and a list of operations, for each operation add a value to each of the array elements between two given indices, inclusive. Once all operations have been performed, return the maximum value in your array.

My first go at this works, but isn’t fast enough:

let myArray = [];

for (let i = 0; i < n; i++) {
    myArray.push(0);
}

for (let i = 0; i < queries.length; i++) {
    let operationStart = queries[i][0] - 1;
    let operationEnd = queries[i][1];
    let action = queries[i][2];
    for (let j = operationStart; j < operationEnd; j++) {
        myArray[j] += action;
    }
}

return Math.max(...myArray);

As I thought more about the problem, I realized that only the end points of each operation mattered. I tried a few different approaches, but my algorithms were still taking too long to execute on several of the tests.

I finally threw in the towel and read the discussion, which pointed out that you could treat it as a signal processing problem and only record the changes: in essence, keep an array with a positive entry at the start point of the range being summed by an operation and a matching negative entry one spot after the end of the summation.

For example:

If the operation is 2, 4, 100 (meaning add 100 to the 2nd, 3rd, and 4th spots in the array), then:

[0, 100, 100, 100, 0] could instead be treated as:

[0, 100, 0, 0, -100]
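To make the equivalence concrete, running a prefix sum over the difference array reconstructs the fully-expanded array:

```javascript
// Summing a running total over [0, 100, 0, 0, -100]
// rebuilds the expanded array [0, 100, 100, 100, 0].
function prefixSum(diffs) {
    let runningTotal = 0;
    return diffs.map((change) => {
        runningTotal += change;
        return runningTotal;
    });
}
```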

The approach being advocated in the comments essentially required n operations to create the array of zeros, then a set of operations to populate the changes, and then n more operations to run back through the array keeping a running total in order to figure out what the largest number is.

That made sense to me, but I wondered if there was a way to combine the two approaches and come up with something that required fewer operations.

My thought was that all you really needed was to record the points where the signal changed, and the magnitude of the change.

let endPoints = new Set();
for (let i = 0; i < queries.length; i++) {
    endPoints.add(queries[i][0]);
    endPoints.add(queries[i][1] + 1);
}

let sortedEndPoints = Array.from(endPoints);
sortedEndPoints.sort((a, b) => a - b);

let values = [];

for (let i = 0; i < sortedEndPoints.length; i++) {
    values.push(0);
}

for (let i = 0; i < queries.length; i++) {
    let leftIndex = sortedEndPoints.findIndex((element) => {
        return element === queries[i][0];
    });

    let rightIndex = sortedEndPoints.findIndex((element) => {
        return element === queries[i][1] + 1;
    });

    values[leftIndex] += queries[i][2];
    values[rightIndex] -= queries[i][2];
}

let maximum = 0;
let runningTotal = 0;
for (let i = 0; i < values.length; i++) {
   runningTotal += values[i];
   if (runningTotal > maximum) {
      maximum = runningTotal;
   }
}

return maximum;

The solution above came back as a fail on a bunch of tests (due to a time out) on HackerRank.

That really surprised me, and it continued to not make sense until I went ahead and unlocked some of the test cases that had more data.

I had been envisioning problems that scaled up to something like a billion data points in the array and ten thousand add operations.

The test cases scaled up to 4k points in the array and 30k addition ranges or 10 million points in the array and 100k addition ranges.

With that type of data set, the overhead from sorting the array of edges gets really intensive really quickly, while the array being traversed is much smaller than the one I’d been envisioning, so traversing the full array is comparatively cheap.

In the interest of proving my theory, I used their test data to create some test data that fit the profile I’d been envisioning.

The test data was as follows:

Array of 5 million places with 3 addition ranges.

Array of 4 million places with 3 addition ranges.

Array of 4 million places with 30 addition ranges.

Array of 4 million places with 30 addition ranges.

Array of 4 million places with 4,894 addition ranges.

Array of 4 million places with 4,994 addition ranges.

I then duplicated the test data 27 times and ran a comparison with a stopwatch.

On average, the method suggested by the users at HackerRank took ~8 seconds to run through that data on my machine, and my algorithm took ~2.5 seconds to run through the same test set. The margin of error on something like that is probably half a second, so it’s not super accurate, but it does tend to support the idea that, depending on the data you’re dealing with, the overhead of sorting the array of edge points can still end up being much less than the overhead of traversing the full array.
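For anyone curious, the stopwatch comparison doesn't need to be anything fancy. A rough harness along these lines is all that's required (the shape of the test-case objects here is my own assumption):

```javascript
// Times how long `solve` takes to chew through a batch of test cases.
// `solve` stands in for either implementation: (n, queries) => maximum.
function timeSolution(solve, testCases) {
    let start = Date.now();
    for (let i = 0; i < testCases.length; i++) {
        solve(testCases[i].n, testCases[i].queries);
    }
    return Date.now() - start; // elapsed milliseconds
}
```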

Here is the version that the guys and gals in the discussion for the problem on HackerRank suggested:

let edgesArray = Array(n).fill(0);
queries.forEach(([a, b, k]) => {
    edgesArray[a - 1] += k;
    edgesArray[b] -= k;
});
let maximum = 0;
// Note: reduce needs an initial value of 0 here. Without it, the first
// element's running total is never compared against maximum, so a lone
// query like [1, 1, 100] would incorrectly return 0 instead of 100.
edgesArray.reduce((acc, cur) => {
    let runningTotal = acc + cur;
    if (runningTotal > maximum) {
        maximum = runningTotal;
    }
    return runningTotal;
}, 0);
return maximum;

At some point, I would like to spend some more time trying to tweak my ‘edges only’ solution to figure out a way to reduce the overhead involved in the sort. I’m thinking that putting the edge points into some sort of tree structure might reduce the sorting overhead and allow my solution to be more competitive across a broader set of test cases.

A better route still would be figuring out how to sort or partially sort the edge points as I put them into the set or array, but so far nothing is jumping out at me as to how I could make that happen.
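One direction along those lines, sketched here as an untested idea rather than a measured improvement, is binary-searching for each edge point's insertion spot so the array stays sorted as it is built:

```javascript
// Inserts `value` into an already-sorted array, keeping it sorted and
// skipping duplicates. The binary search finds the spot in O(log n),
// but splice still shifts elements, so this trades the final sort's
// cost for per-insert shift cost rather than eliminating overhead.
function insertSorted(sortedArr, value) {
    let lo = 0;
    let hi = sortedArr.length;
    while (lo < hi) {
        let mid = (lo + hi) >> 1;
        if (sortedArr[mid] < value) {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    if (sortedArr[lo] !== value) {
        sortedArr.splice(lo, 0, value);
    }
    return sortedArr;
}
```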

As far as optimizing the algorithm from the people at HackerRank, I considered adding a ‘leftMost’ index and a ‘rightMost’ index that could be used to trim the leftmost and rightmost parts of the array so that the final traversal could be accomplished more quickly, but that ends up introducing extra overhead on a problem set like the one they are using. If your operations tended to be clustered around one part of the set of possible locations in the array, it could be helpful on average. I can think of a few real-world situations where that might be the case: maybe calibrating some kind of sensor or machinery where you know that once it’s mostly aligned, most of the data is going to hit the same part of the sensor, but on the first few runs you don’t know which parts of the sensor are going to be hit, so you have to watch the entire sensor.

It’s definitely an unlikely set of edge cases, but something that’s kind of fun to think about.

 

Database Structure

I received a bit of advice approximately one year ago with regards to designing database tables. It boiled down to “treat different things differently by putting them in separate tables that have been designed for that specific thing”.

I think that is great advice generally. One of the problems I saw at a past position was that they had one table that was storing three fairly different things. The end result was that the table was difficult to work with, and the code base was more complex than it needed to be in order to deal with the various different edge cases in that table.

In a recent project, I architected a solution that dealt with a number of different tables that all inherited from the ServiceNow task table. My proposal was to have a different custom table for each of the three tables that were children of task.

My boss countered by suggesting that we have just one custom table that dealt with all three of the stock ServiceNow tables, and add another column to it that had the name of the table that particular entry related to. He indicated that building the back end that way would be more scalable if additional tables needed to be covered by my project at a later date, and he was exactly right.

So, my addendum to the rule that I’ve been following for the last year or so is that you want to treat different things differently, and give them each their own table, but things that appear to be different at first glance might not actually be as different as you think. If you’ve got the same fields/columns across different tables, and they are all being populated, then you could probably replace the tables with a single table and use some kind of ENUM to categorize the records appropriately.

All of which will tend to make your solution more scalable.
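As a toy illustration of that single-table design (the field and record names here are made up, and in ServiceNow you would be querying with GlideRecord rather than filtering an in-memory array):

```javascript
// One custom table's rows carry a discriminator column naming the
// task-derived table each row relates to, instead of three custom tables.
const customRecords = [
    { taskTable: 'incident', number: 'INC0001' },
    { taskTable: 'change_request', number: 'CHG0001' },
    { taskTable: 'incident', number: 'INC0002' }
];

// Pulling one "virtual table" out is just a filter on the discriminator,
// and covering a new task-derived table needs no schema change at all.
function recordsFor(tableName) {
    return customRecords.filter((rec) => rec.taskTable === tableName);
}
```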

ServiceNow Focus & a Review of Learning Resources

I just wanted to give everyone a quick update. I started a new position in Nov 2018 with a company where I’m programming in the ServiceNow ecosystem. That means that my ‘finds’ over the next little while are likely to be focused on ServiceNow quirks and techniques.

However, before I get into that, I wanted to talk about learning to program. I’ve mostly been learning on my own, and I’ve realized that there is a lot of variation among the resources you can use.

I started out using CodeCademy.com. They have a workspace built right into the browser, which I really liked initially. It seemed like a great option because that meant I could get right to the business of programming.

Since then, I’ve spent some time in the Team Treehouse tech degree program, on Pluralsight, and on this Udemy course: https://www.udemy.com/modern-javascript/

Here are my thoughts:

1st Point: Learning syntax can be challenging, but once you’ve got your arms around that, an even bigger challenge is getting your development environment set up so that you can start working on something other than tutorials. I think that is a big part of why people end up moving from one tutorial to another, which is why I highly recommend picking a course where they start out by setting up your development environment.

That is something that I really liked about the Andrew Meads Udemy course that I linked above. Andrew runs you through installing Node, npm, Visual Studio, and a bunch of other really useful tools.

2nd Point: It can be really tempting to go with a free resource when money is tight. I’m not advocating spending money that you don’t have, but don’t discount the value of your time. If you choose resources that don’t do the trick and you double the time required to learn how to program, you’ll lose out on months of dev salary, which will end up being much more expensive than the cost of a reasonably priced course of study.

3rd Point: The price of a course doesn’t necessarily correspond directly to the quality of the course. I quite liked what I saw of Pluralsight during the three days that I tried out their courses. My feeling is that I would have made much quicker progress if I’d started out with Pluralsight rather than starting out with CodeCademy.com’s free classes. However, Team Treehouse’s tech degree program, while costing $200 a month–much more than Pluralsight–came in behind Pluralsight for me.

In summary, out of all of the options that I’ve tried out so far, Andrew Meads’ JavaScript bootcamp course has been my favorite, and it felt like the best value for the money. I liked the Team Treehouse tech degree in theory. You ‘graduate’ 3 months after starting with a tech degree that in theory makes you seem like less of a risk to prospective employers, and they have a Slack channel with moderators who can help answer your questions and get you through any difficulties you might have with the learning process. What I found was that the video courses were very uneven when it came to the quality of the teaching. I was studying Python during the month that I was enrolled in the tech degree program. I thought one of the instructors was really good. The other I found to be less skilled as a teacher, and some of his examples weren’t a very good match for the concepts he was trying to convey.

Likewise, I found the Slack channel to be underwhelming. There were a lot of nice people, both moderators and other students, but when I asked questions, it seemed more often than not that the answer was something along the lines of ‘don’t worry about that now’.

I can’t speak to whether or not the tech degree makes someone more employable. It’s possible that there is enough value there to offset both the deficiencies I came across and the $200 per month price tag, but even with me spending 40 or more hours per week working through the tech degree, I found that I wasn’t able to maintain a pace that would allow me to get through the program in the 3-month minimum time frame, which leads me to my next point.

I started my Udemy class after beginning my new job in the ServiceNow ecosystem. That means that I don’t have anywhere near 40 hours per week to dedicate to JavaScript courses, but even so, my Udemy class, for the very low price of $10 or $11, has kept me busy for nearly 3 months, and I’m still not quite all of the way through the videos. The $600, or more likely $800, that I would have to spend in order to complete the Team Treehouse tech degree would pay for something like 60-80 Udemy courses, and keep me busy learning for years.

Similarly, while I really liked the npm course that I took from Pluralsight, and I can’t say enough good things about the Sequelize class that I started but didn’t get enough time to finish, I have a hard time right now justifying $35 per month for Pluralsight when one month of Pluralsight would enable me to buy 3 high-quality Udemy classes that could very possibly keep me busy for 8 or 9 months.

Given that, and the fact that I’ve got a backlog of 4 or 5 Udemy classes that I’ve purchased but not yet even started watching, I expect that the bulk of my money will continue to go to Udemy for the next little while. That being said, I don’t think Pluralsight is a terrible value, and there are a couple of scenarios where I think Pluralsight makes a lot of sense.

If you’re already working in development, and your time is extremely valuable, then a course that is even just slightly better could save you enough time to justify paying 5 or even 10 times as much for a course as what you might pay for something off of Udemy.

Likewise, if you’re a company, and your employees need to learn something while on the clock, then the potential time savings involved in classes that are more tightly focused on just what your developers need to learn could justify Pluralsight’s price point.

More importantly, because part of what Pluralsight ultimately offers is curation of their course catalog, you’re likely to find a very consistent level of quality across their offerings, which probably isn’t going to be the case with a course of study that is stitched together via Udemy classes from various instructors.

If you’ve got a lot of time to dedicate towards learning new skills, and some extra disposable income, then Pluralsight by all appearances can be a great way to go.

Otherwise, my suggestion is just to find a good Udemy class on the subject you’re wanting to learn (I highly recommend Andrew Meads’ class if you want to learn JavaScript). Even if you pick a bad one to start out with and have to purchase a second one, you’ll probably still come out ahead compared to the other options. It’s just such an incredible value.

All of that being said, Pluralsight recently sent me an email with a limited-time offer of a year for $199. It was very hard for me to pass up that deal. I suspect that if I didn’t have a big backlog of Udemy classes that I’ve purchased but not yet completed, and if I had an extra few hours a week that I knew I would be able to dedicate to learning new skills, I would have jumped at that particular Pluralsight offer.