Express App Guide

(This post was written back in October. I had it written, but didn’t get it edited before getting fired, so it’s just been sitting on my drive gathering digital dust. It’s still good information, but just keep in mind that the timing is off. Everything I’m talking about happening in the present actually happened almost half a year ago.)

Welcome to another week. This week was a little rough on the development front. One of the other, more experienced, developers is drowning in projects (actually, most of the more experienced developers are drowning in projects pretty much all of the time).

Being a naturally helpful sort, and possibly not nearly as smart as I like to think I am, I offered to give him a hand with what I thought was the easiest part of his task list: new rules in the rules engine that we use to provide all kinds of customization to the user experience.

As it turns out, the rules engine is pretty complicated because it’s called multiple times—from multiple different points inside of the code base—and therefore there is a lot of different context that needs to be understood in order to effectively write the rules.

I suspect that there are many things that the other developers are doing which are more complicated than the rules engine, but it’s feeling like I’m very much in over my head right now.

The week (when I wasn't helping out with accounting tasks that haven't fully transitioned away from me yet) was spent taking my best stab at writing new rules, sending them over to my manager (who is the one who wrote the rules engine) for approval, and having him kick them back to me with a verbal explanation of why they won't work.

He’s not doing anything wrong—he’s been really patient and awesome, but it’s still a little wearing to continually come up short (inside the privacy of my own mind if not necessarily with regards to his expectations yet).

Adding to the less-than-awesome week is that I haven't made very much progress on my side projects. So far, I've created a simple Express app from scratch, got it working on my local box, and successfully loaded it up to Heroku (to serve as the cloud-based compute infrastructure).

I’ve also created a MySQL database using Google Cloud as my infrastructure provider, downloaded SSL certs, used those certs to connect from my local box to the database (using Navicat since that’s what I was using previously at work while doing the accounting).

That all feels like pretty good progress, but it all happened last week. Partially that’s because I’ve been putting in extra hours at work trying to get my arms around the rules engine, and partially that’s because I’ve been stuck on how to use Sequelize to connect to the database while using an SSL cert.
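The thing I've been trying to get working is something along these lines. This is just a minimal sketch of my current understanding, assuming the mysql2 driver and the three certificate files that Google Cloud lets you download; the database name, credentials, host, and file paths are placeholders rather than my actual setup.

const fs = require('fs');
const Sequelize = require('sequelize');

// Placeholder database name, user, password, and Cloud SQL instance IP
const sequelize = new Sequelize('my_database', 'my_user', 'my_password', {
  host: '12.34.56.78',
  dialect: 'mysql',
  dialectOptions: {
    // Cert files downloaded from the Google Cloud console (placeholder paths)
    ssl: {
      ca: fs.readFileSync('certs/server-ca.pem'),
      cert: fs.readFileSync('certs/client-cert.pem'),
      key: fs.readFileSync('certs/client-key.pem'),
    },
  },
});

// Quick check that the connection (and the SSL handshake) actually works
sequelize.authenticate()
  .then(() => console.log('Connected over SSL'))
  .catch(err => console.error('Unable to connect:', err));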

I think I've finally found a guide that is pointing me in the right direction as far as that goes, so I'm hoping to make more progress later today on connecting my app to the database and making it stateful. Right now, though, I need to be working on a blog post, and the logical thing to me is to share the steps that I used to create my Express app and push it up to Heroku.

There are tons of videos out there talking about how to do something like this, but my preferred way of consuming that kind of stuff is in written form. It saves me having to pause and repeat stuff all of the time as I’m trying to follow along with what the presenter is doing.

Putting my steps up on the blog has the benefit of checking the box on the blog post that I'm supposed to be writing today. Even better, it means that anyone who prefers written guides to video guides will be able to track down my guide. And, for a third (admittedly minor in this day and age of cloud-based backups) benefit, it means that I'll have my steps recorded for future use in case I need them.

On to the guide:
Preliminaries:
1. I’m working on a Mac (more regarding that in a future post)
2. I’ve installed the Heroku command line interface using Brew
3. I’ve created a Heroku account and a GitHub account
4. I'm using IntelliJ. You can use Sublime or another editor of your choice.

The actual process:
1. Create a new repository on GitHub.com (or with GitHub Desktop)
a. Choose the option to initialize it with a README
b. Clone your new repository from GitHub. (I use GitHub Desktop to pull it down to my local machine.)
c. I have a services directory, and then put each Git repository as a subfolder inside of services.
2. Create a .gitignore file at the root directory of your project.
a. For me this is Services/Storage-App
b. At this point, I got a message from IntelliJ asking if I wanted to add .gitignore to the Git repository. I said yes, but said no to all of the stuff that it asked about in the .idea/ folder.
c. I populated .gitignore as per this example:
d. https://github.com/expressjs/express/blob/master/.gitignore
# OS X
.DS_Store*
Icon?
._*

# npm
node_modules
package-lock.json
*.log
*.gz

.idea
.idea/

# environment variables
env*.yml
e. .idea and .idea/ are there because I'm using IntelliJ. The link above has options for Windows or Linux.
3. Run npm install express -g from the command line inside of the project folder.
a. This downloads and installs Express globally. (You only have to run this once per machine, not once per project; the local install in step 5 below is what actually adds Express to this particular app.)
4. Run npm init from the command line inside of the project folder.
a. This creates the package.json file that lists your dependencies and tells Node what to use as your entry point.
b. I used the default options, other than changing the entry point from index.js to app.js. You can use either name.
5. Run npm install express --save from the command line inside of the project folder.
a. This brings Express and its dependencies into the project (in the node_modules folder).
b. You should now have "express": "^4.16.3" (or a later or earlier version, depending on which version of Express was installed) in your list of dependencies in package.json.
6. Create app.js inside of the root directory (same level as package.json and package-lock.json)
a. I did this via IntelliJ. You should also be able to do it from the command line via touch app.js if you wanted to.
7. Inside of app.js add the following lines:
const express = require('express');
const app = express();

// Use the port Heroku provides via the environment, or 5000 locally
const normalizePort = port => parseInt(port, 10);
const PORT = normalizePort(process.env.PORT || 5000);

// Respond to requests for the root URL, then start listening
app.get('/', function(req, res) {
    res.send('Hello World');
}).listen(PORT);

console.log('Waiting for requests. Go to LocalHost:5000');

8. Inside of package.json at the end of the “test” line, put a comma and then add a new line:
a. "start": "node app.js"
9. Inside the app directory, type npm start
a. (You should see “Waiting for requests. Go to LocalHost:5000” in the terminal)
10. Open a browser window and go to http://localhost:5000/
a. (You should see "Hello World" in the browser)
b. This means that you’ve successfully run the app on your local machine
11. Create a Procfile at the root level.
a. Put the following into the Procfile: web: node app.js
12. Push the app up to Heroku
a. Change to the directory containing the app.
b. Type git init
c. heroku create $APP_NAME --buildpack heroku/nodejs
i. I left the app name blank and just let Heroku create a random name.
ii. That means my command was heroku create --buildpack heroku/nodejs
d. git add .
e. git commit -m "Ready to push to Heroku"
i. You should also be able to do the commit via GitHub Desktop.
f. git push heroku master
g. heroku open
i. This should open your browser and show you “Hello World”.
h. You’ve successfully pushed the app up to Heroku. Congratulations!

That’s it for this week. I’ll come back and add some additional detail as I get a better understanding of what some of these commands do.

ServiceNow Documentation Error For Inbound Email Actions

I recently came across an error in the inbound email action documentation from ServiceNow, and I thought I would share my finding in case it is tripping someone else up.

The relevant documentation is here:

https://docs.servicenow.com/bundle/london-servicenow-platform/page/administer/notification/concept/c_InboundEmailActions.html

As you'll see, there are three types of inbound actions defined: Forward, Reply, and New.

On the Forward action, it indicates that:

“The system classifies an email as a forward only when it meets all these criteria:

  • The subject line contains a recognized forward prefix such as FW:.
  • The email body contains a recognized forward string such as From:.”

After some testing, I can confirm that the FW: needs to be at the start of the subject line. If you have something before the FW: for some reason, the email will skip past the Forward rule and get picked up by one of the other two rules.
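To illustrate the distinction (this is just a JavaScript illustration of the difference between "contains" and "starts with", not ServiceNow's actual classification code):

var subject = 'Status update FW: server outage';

// What the documentation seems to describe: the subject contains a forward prefix
console.log(/FW:/i.test(subject));   // true

// What my testing suggests actually matters: the subject starts with the prefix
console.log(/^FW:/i.test(subject));  // false, so this email would not be classified as a forward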

A relatively minor point admittedly, but one that caused one of my tests on a recent project not to function the way that I’d been expecting it to.

ServiceNow Guided Tour Bug

Just a quick post today to describe a bug that I found in ServiceNow relating to the guided tour functionality under Madrid.

It turns out that ServiceNow is struggling with call outs that are positioned on the ‘submit’ button on the incident form.

It’s possible–likely even–that the issues are with all of the UI Actions positioned up at the top of the header, but I didn’t test that.

What I can confirm is that if you create a guided tour under Madrid with a call out on the submit button on the incident form, it breaks. It never shows that call out, meaning that the tour never completes.

That functionality works on London, but creating the tour on a London instance and moving it over to a Madrid instance also results in the tour breaking at the submit call out.

Interestingly, if you have the guided tour built on London and then upgrade that instance to Madrid, the tour continues to work under Madrid.

I’ve submitted a HI ticket on this bug, so hopefully this is fixed in the near future, but in the meantime, if you have a guided tour that isn’t working, and the call out involves one of the UI Actions at the top of the header, you’re probably not doing anything wrong.

Hacker Rank Array Manipulation Problem

I ran into this problem on HackerRank:

Starting with a 1-indexed array of zeros and a list of operations, for each operation add a value to each of the array elements between two given indices, inclusive. Once all operations have been performed, return the maximum value in your array.

My first go at this works, but isn’t fast enough:

function arrayManipulation(n, queries) {
    // Build an array of n zeros
    let myArray = [];
    for (let i = 0; i < n; i++) {
        myArray.push(0);
    }

    // Apply each operation directly to every element in its range
    for (let i = 0; i < queries.length; i++) {
        let operationStart = queries[i][0] - 1;
        let operationEnd = queries[i][1];
        let action = queries[i][2];
        for (let j = operationStart; j < operationEnd; j++) {
            myArray[j] += action;
        }
    }

    return Math.max(...myArray);
}

As I thought more about the problem, I realized that only the end points of each operation mattered. I tried a few different approaches, but my algorithms were still taking too long to execute on several of the tests.

I finally threw in the towel and read the discussion, which pointed out that you could treat it as a signal processing problem and only record the changes: in essence, an array with a positive value at the start point of each operation's range and a negative value one spot after the end of the range.

For example:

If the operation is 2, 4, 100 (meaning add 100 to the 2nd, 3rd, and 4th spots in the array)

[0, 100, 100, 100, 0] could instead be treated as:

[0, 100, 0, 0, -100]

The approach being advocated in the comments essentially required n operations to create the array of zeros, then a set of operations to populate the changes, and then n more operations to run back through the array keeping a running total in order to figure out what the largest number is.

That made sense to me, but I wondered if there was a way to combine the two approaches and come up with something that required fewer operations.

My thought was that all you really needed was to record the points where the signal changed, and the magnitude of the change.

function arrayManipulation(n, queries) {
    // Collect only the points where the running value changes
    // (n itself isn't needed, since we never build the full array)
    let endPoints = new Set();
    for (let i = 0; i < queries.length; i++) {
        endPoints.add(queries[i][0]);
        endPoints.add(queries[i][1] + 1);
    }

    let sortedEndPoints = Array.from(endPoints);
    sortedEndPoints.sort((a, b) => a - b);

    // One slot per distinct end point, holding the net change at that point
    let values = [];
    for (let i = 0; i < sortedEndPoints.length; i++) {
        values.push(0);
    }

    for (let i = 0; i < queries.length; i++) {
        let leftIndex = sortedEndPoints.findIndex((element) => {
            return element === queries[i][0];
        });

        let rightIndex = sortedEndPoints.findIndex((element) => {
            return element === queries[i][1] + 1;
        });

        values[leftIndex] += queries[i][2];
        values[rightIndex] -= queries[i][2];
    }

    // Walk the changes in order, keeping a running total
    let maximum = 0;
    let runningTotal = 0;
    for (let i = 0; i < values.length; i++) {
        runningTotal += values[i];
        if (runningTotal > maximum) {
            maximum = runningTotal;
        }
    }

    return maximum;
}

The solution above came back as a fail on a bunch of tests (due to a time out) on HackerRank.

That really surprised me, and continued to not make sense to me until I went ahead and unlocked some of the test cases that had more data.

I had been envisioning problems that scaled up to something like a billion data points in the array and ten thousand add operations.

The test cases scaled up to 4k points in the array and 30k addition ranges or 10 million points in the array and 100k addition ranges.

With that type of data set, the overhead from sorting the array of edges grows really quickly, while the cost of traversing the full array stays relatively small, because the arrays they were using were much smaller than what I'd been envisioning.

In the interest of proving my theory, I used their test data to create some test data that fit the profile I’d been envisioning.

The test data was as follows:

Array of 5 million places with 3 addition ranges.

Array of 4 million places with 3 addition ranges.

Array of 4 million places with 30 addition ranges.

Array of 4 million places with 30 addition ranges.

Array of 4 million places with 4,894 addition ranges.

Array of 4 million places with 4,994 addition ranges.

I then duplicated the test data 27 times and ran a comparison with a stopwatch.

On average, the method suggested by the users at HackerRank took ~8 seconds to run through that data on my machine, and my algorithm took ~2.5 seconds to run through the same test set. The margin of error with a stopwatch is probably half a second, so it's not super accurate, but it does support the idea that, depending on the data you're dealing with, the overhead of sorting the array of edge points can still end up being much less than the overhead of traversing the full array.

Here is the version that the guys and gals in the discussion for the problem on HackerRank suggested:

function arrayManipulation(n, queries) {
    // n + 1 slots so that a range ending at position n can record its decrease
    let edgesArray = Array(n + 1).fill(0);
    queries.forEach(([a, b, k]) => {
        edgesArray[a - 1] += k;
        edgesArray[b] -= k;
    });

    let maximum = 0;
    let tempAccumulator = 0;
    // Seed the accumulator with 0 so the first element gets checked as well
    edgesArray.reduce((acc, cur) => {
        tempAccumulator = acc + cur;
        if (tempAccumulator > maximum) {
            maximum = tempAccumulator;
        }
        return tempAccumulator;
    }, 0);
    return maximum;
}

At some point, I would like to spend some more time trying to tweak my 'edges only' solution to figure out a way to reduce the overhead involved in the sort. I'm thinking that putting the edge points into some sort of tree structure might reduce the sorting overhead and allow my solution to be more competitive across a broader set of test cases.

A better route still would be if I could figure out how to sort or partially sort the edge points as I put them into the set or array, but so far nothing is jumping out at me as to how I could make that happen.

As far as optimizing the algorithm from the people at HackerRank, I considered adding a 'leftMost' index and a 'rightMost' index that could be used to trim the leftmost and rightmost parts of the array so that the final traversal could be accomplished more quickly, but that ends up introducing extra overhead on a problem set like the one they are using. If you tended to have test cases where the operations were clustered around one part of the set of possible locations in the array, it could be helpful on average.

I can think of a few real-world situations where that might be the case: maybe calibrating some kind of sensor or machinery, where you know that once it's mostly aligned most of the data is going to hit the same part of the sensor, but on the first few runs you don't know which parts of the sensor are going to be hit, so you have to watch the entire sensor.

It’s definitely an unlikely set of edge cases, but something that’s kind of fun to think about.

 

Database Structure

I received a bit of advice approximately one year ago with regards to designing database tables. It boiled down to “treat different things differently by putting them in separate tables that have been designed for that specific thing”.

I think that is great advice generally. One of the problems I saw at a past position was that they had one table that was storing three fairly different things. The end result was that the table was difficult to work with, and the code base was more complex than it needed to be in order to deal with the various different edge cases in that table.

In a recent project, I architected a solution that dealt with a number of different tables that all inherited from the ServiceNow task table. My proposal was to have a different custom table for each of the three tables that were children of task.

My boss countered by suggesting that we have just one custom table that dealt with all three of the stock ServiceNow tables, and add another column to it that had the name of the table that particular entry related to. He indicated that building the back end that way would be more scalable if additional tables needed to be covered by my project at a later date, and he was exactly right.

So, my addendum to the rule that I’ve been following for the last year or so is that you want to treat different things differently, and give them each their own table, but things that appear to be different at first glance might not actually be as different as you think. If you’ve got the same fields/columns across different tables, and they are all being populated, then you could probably replace the tables with a single table and use some kind of ENUM to categorize the records appropriately.

All of which will tend to make your solution more scalable.
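To make the idea concrete, here's roughly what that kind of consolidation might look like in Sequelize terms (since that's what my side project uses). The model name, columns, and ENUM values below are made up for illustration, not the actual tables from the project.

const Sequelize = require('sequelize');

// Placeholder connection details; assumes the mysql2 driver is installed
const sequelize = new Sequelize('my_database', 'my_user', 'my_password', {
  host: 'localhost',
  dialect: 'mysql',
});

// One table covering all three record types, instead of three near-identical tables.
// The ENUM column records which source table each row relates to.
const TrackedTask = sequelize.define('tracked_task', {
  sourceTable: {
    type: Sequelize.ENUM('incident', 'change_request', 'problem'),
    allowNull: false,
  },
  taskNumber: { type: Sequelize.STRING, allowNull: false },
  notes: { type: Sequelize.TEXT },
});

Querying by record type then becomes a simple WHERE clause on the ENUM column (for example, TrackedTask.findAll({ where: { sourceTable: 'incident' } })) rather than a different query against a different table.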

ServiceNow Focus & a Review of Learning Resources

I just wanted to give everyone a quick update. I started a new position in Nov 2018 with a company where I'm programming in the ServiceNow ecosystem. That means that my 'finds' over the next little while are likely to be focused on ServiceNow quirks and techniques.

However, before I get into that, I wanted to talk about learning to program. I’ve mostly been learning on my own, and I’ve realized that there is a lot of difference between resources that you can use.

I started out using CodeCademy.com. They have a workspace built right into the browser, which I really liked initially. It seemed like a great option because that meant I could get right to the business of programming.

Since then, I’ve spent some time in the Team Treehouse tech degree program, on Pluralsight, and on this Udemy course: https://www.udemy.com/modern-javascript/

Here are my thoughts:

1st Point: Learning syntax can be challenging, but once you’ve got your arms around that, an even bigger challenge is getting your development environment set up so that you can start working on something other than tutorials. I think that is a big part of why people end up moving from one tutorial to another, which is why I highly recommend picking a course where they start out by setting up your development environment.

That is something that I really liked about the Andrew Mead Udemy course that I linked above. Andrew runs you through installing Node, npm, Visual Studio, and a bunch of other really useful tools.

2nd Point: It can be really tempting to go with a free resource when money is tight. I'm not advocating spending money that you don't have, but don't discount the value of your time. If you choose resources that don't do the trick and you double the time required to learn how to program, you'll end up losing out on months of dev salary, which will be much more expensive than the cost of a reasonably priced course of study.

3rd Point: The price of a course doesn’t necessarily correspond directly to the quality of the course. I quite liked what I saw of Pluralsight during the three days that I tried out their courses. My feeling is that I would have made much quicker progress if I’d started out with Pluralsight rather than starting out with CodeCademy.com’s free classes. However, Team Treehouse’s tech degree program, while costing $200 a month–much more than Pluralsight–came in behind Pluralsight for me.

In summary, out of all of the options that I've tried so far, Andrew Mead's JavaScript bootcamp course has been my favorite and felt like the best value for the money. I liked the Team Treehouse tech degree in theory. You 'graduate' 3 months after starting with a tech degree that in theory makes you seem like less of a risk to prospective employers, and they have a Slack channel with moderators who can help answer your questions and get you through any difficulties you might have with the learning process. What I found was that the video courses were very uneven when it came to the quality of the teaching. I was studying Python during the month that I was enrolled in the tech degree program. I thought one of the instructors was really good. The other I found to be less skilled as a teacher, and some of his examples weren't a very good match for the concept that he was trying to convey.

Likewise, I found the Slack channel to be underwhelming. There were a lot of nice people, both moderators and other students, but when I asked questions, it seemed more often than not that the answer was something along the lines of 'don't worry about that now'.

I can't speak to whether or not the tech degree makes someone more employable. It's possible that there is enough value there to offset both the deficiencies I came across and the $200 per month price tag, but even with me spending 40 or more hours per week working through the tech degree, I found that I wasn't able to maintain a pace that would allow me to get through the program in the 3-month minimum time frame, which leads me to my next point.

I started my Udemy class after beginning my new job in the ServiceNow ecosystem. That means that I don't have anywhere near 40 hours per week to dedicate to JavaScript courses, but even so, my Udemy class, for the very low price of $10 or $11, has kept me busy for nearly 3 months, and I'm still not quite all of the way through the videos. The $600 or more likely $800 that I would have to spend in order to complete the Team Treehouse tech degree would pay for something like 60-80 Udemy courses, and keep me busy learning for years.

Similarly, while I really liked the npm course that I took from Pluralsight, and I can't say enough good things about the Sequelize class that I started but didn't get enough time to finish, I have a hard time right now justifying $35 per month for Pluralsight when one month of Pluralsight would enable me to buy 3 high-quality Udemy classes that could very possibly keep me busy for 8 or 9 months.

Given that, and the fact that I’ve got a backlog of 4 or 5 Udemy classes that I’ve purchased but not yet even started watching, I expect that the bulk of my money will continue to go to Udemy for the next little while. That being said, I don’t think Pluralsight is a terrible value, and there are a couple of scenarios where I think Pluralsight makes a lot of sense.

If you’re already working in development, and your time is extremely valuable, then a course that is even just slightly better could save you enough time to justify paying 5 or even 10 times as much for a course as what you might pay for something off of Udemy.

Likewise, if you're a company, and your employees need to learn something while on the clock, then the potential time savings involved in classes that are more tightly focused on just what your developers need to learn could justify Pluralsight's price point.

More importantly, because part of what Pluralsight ultimately offers is curation of their course catalog, you’re likely to find a very consistent level of quality across their offerings, which probably isn’t going to be the case with a course of study that is stitched together via Udemy classes from various instructors.

If you’ve got a lot of time to dedicate towards learning new skills, and some extra disposable income, then Pluralsight by all appearances can be a great way to go.

Otherwise, my suggestion is just to find a good Udemy class on the subject you're wanting to learn (I highly recommend Andrew Mead's class if you want to learn JavaScript). Even if you pick a bad one to start out with and have to purchase a second one, you'll probably still come out ahead compared to the other options; it's just such an incredible value.

All of that being said, Pluralsight recently sent me an email with a limited-time offer of a year for $199. It was very hard for me to pass up that deal. I suspect that if I didn't have a big backlog of Udemy classes that I've purchased but not yet completed, and if I had an extra few hours a week that I knew I would be able to dedicate to learning new skills, I would have jumped at that particular Pluralsight offer.

UI Actions In ServiceNow

A quick 'Pro Tip': if you're testing out a UI action inside of ServiceNow, and you've got two windows open, one where you're making changes to the UI action and another that has an incident open where the UI action is located, refresh the incident window before testing changes to the UI action.

I was testing out a UI action recently, and there were definitely times where the changes I made to the UI action didn’t propagate out to the window with the incident until after I did a hard refresh of the page. I wasted a bit of time there trying to figure out why my UI action wasn’t behaving as intended when the problem turned out to be that I was still running an earlier version of the UI action.

Choosing the Right Tools

(This post was written back in October. I had it written, but didn’t get it edited before getting fired, so it’s just been sitting on my drive gathering digital dust. It’s still good information, but just keep in mind that the timing is off. Everything I’m talking about happening in the present actually happened almost half a year ago.)

Hello, and welcome to another week of writing code.

I’m still trying to progress in my actual programming skills — and it’s not going all that spectacularly — but fortunately I have something else that I’ve been wanting to talk about for a little while now.

When the development manager here at work first approached me about making the switch over from accounting to development, one of the questions that came up was what computer I was going to end up programming on.

I had — have — a perfectly good HP Spectre 13, but the advice from the development manager was that I should go ahead and get a MacBook. Our company seems to have fully switched over to a bring-your-own-device policy for the development team, which meant I was looking at spending $2k to $3k on a MacBook if I was going to take his advice. That was a little hard to stomach for a number of reasons, but I went ahead and did it anyway, simply because at the time we had no Windows programmers at work. We had a whole bunch of programmers working on MacBooks or other Apple computers, and a couple of people working on Linux-based computers, but I would have been the one and only programmer trying to do what needed to be done on a Windows-based laptop.

There was a part of me that wanted to push forward using my Windows laptop out of sheer stubbornness, but I knew that choosing that option would just mean that I would either spend a lot of time on my own troubleshooting problems that nobody else had (with very little actual understanding of how to do what needed to be done), or I would constantly be going to him or one of the other developers asking for help troubleshooting problems that only I had.

Given all of the other things that I knew I was going to need to learn in order to be an adequate developer, and given that the development manager was already facing a pretty large investment to get me up and running to the point where I was adding more value for him personally than I was requiring in the way of training, the only smart decision was to go ahead and pick the platform that the majority of the other developers were using.

I've never actually read Stephen R. Covey's The 7 Habits of Highly Effective People, but it's my understanding that he relates a story about two guys out in a forest who are competing to see who can chop down the most trees, or something along those lines.

The story is told from the point of view of the one guy, who is convinced that he's going to win because his opponent keeps walking off into the trees for several minutes at a time on a regular basis. The first guy figures that there's no way his opponent can keep up with him given that his opponent is taking so many more breaks than he is.

Flash forward to the end of the story, and it turns out that the second guy was walking off into the trees so that he could sharpen his ax. So, even though the first guy spent more time chopping trees, the second guy was using a sharper ax (and therefore a better tool), and as a result managed to cut down a lot more trees than the first guy.

This isn't quite the same thing. Covey seems — from my secondhand understanding at least — to be advocating taking time off to improve your skills and your ability to do the work, but it's a close cousin. What I'm advocating is to be actively looking for tools that can simplify your life, make you more effective, or save you time.

The $2500 or so that I spent on my refurbished MacBook was a lot of money for me, but if my time is worth 50 bucks an hour, then at some point the time that I've saved by not having to troubleshoot my Windows PC in order to get it properly set up and keep it properly set up should more than offset the money that I had to spend on my MacBook.

Of course, I could've gone with a Linux computer, but what I was told by both the development manager and another developer who'd switched from Linux to a MacBook approximately a year ago was that while a Linux computer requires a lot less ongoing troubleshooting than a Windows PC, it still requires significantly more ongoing troubleshooting than an Apple product.

I’m sure that there are any number of people reading this who could provide perfectly reasonable arguments why going with an Apple laptop was the absolute worst thing I could have done, but regardless of the realities of everything, even if both the development manager and other developer I talked to were completely wrong in their appraisal of the situation, the simple fact that I’m using the same platform as the two of them should mean that they’ll be a lot more willing to help me as I run into problems with the set up on my box.

Again, this is a very specific instance and not extremely useful in and of itself for most of you, but there is a principle there that I do think is very valuable. Back when I was writing novels full-time, I figured out that I could write between 1000 and 2000 words per hour by typing on the keyboard.

I’m sure there are a lot of people out there that can type — and think — a lot faster than that, but I found that a typing speed in that neighborhood was enough to allow me to write a book in roughly 30 days. However, as time went on I started hearing reports from other self published authors that they were seeing really good results using the latest version of Dragon Naturally Speaking for voice recognition while they were writing their books.

I had actually used Dragon Naturally Speaking 10 or 15 years before that point while I was in college, and was never able to get it to work satisfactorily. Part of that was probably my poor enunciation, part of it was the fact that all of the microphones I tried were likely not up to spec for working with voice recognition, and part of it was the fact that Dragon Naturally Speaking wasn’t as good back then as it is now, but the result was that I relatively quickly stopped using Dragon Naturally Speaking and went back to typing (back in my college days).

As it turned out, with the right microphone and the new version of Dragon NaturallySpeaking, I found that I was able to routinely turn out 2600 words per hour via dictation, and occasionally even break 3000 words per hour.

Getting myself up and running with Dragon Naturally Speaking was a lot of work. I went through a couple of different microphones before I found one that really seemed to work well — even though my original microphone was highly rated by the company that makes Dragon NaturallySpeaking — and even more than that, it took some practice to get myself to the point where I was comfortable speaking my thoughts rather than just simply writing them out via my keyboard.

In spite of all of the (metaphorical) pain and effort involved, being able to increase my productivity by 30 to 50% was hugely helpful at that time in my life, and if there hadn’t been such a huge uptick of piracy when it came to my titles, that increase in productivity would’ve been enough to ensure that I made a very good living writing.

So, the moral of that story — or the principle that I'm trying to communicate — is this: don't be afraid to try new things that seem like they will have a significant impact on your life. Even things that have a small impact can end up making a large difference if you chain together enough things that each individually make only a small difference.

The 'competition' is going to end up using anything that is hugely helpful at some point, and if you let yourself get left behind from a productivity standpoint, you're just asking for problems at some point in your career or life. That being said, I don't advocate doing anything stupid. Don't risk your health, and don't spend money that you don't have in the pursuit of efficiencies in areas that you haven't proven you can make enough money at to eventually repay the investment.

A lot of times it's human nature to look for a magic bullet to solve all our problems, which results in a lot of people going into debt for what can only be described as get-rich-quick kinds of solutions. More often than not, though, if you stop and take a hard look at what you're doing, there's a productivity enhancement you could unlock that is much closer to what you're already doing, and therefore much more likely to pay off in a reasonable amount of time.

That’s it for me for the week, good luck with your endeavors in the coming week!

Skimming Code

(This post was written back in October. I had it written, but didn’t get it edited before getting fired, so it’s just been sitting on my drive gathering digital dust. It’s still good information, but just keep in mind that the timing is off. Everything I’m talking about happening in the present actually happened almost half a year ago.)

I can’t believe it’s been another week already! I’m afraid that I don’t have the exciting update I was hoping for, which means I’m still stuck on my side project, but I do have something that I hope will be useful to some of you.

I fully expected that transitioning from accounting to development was going to be tough — which it has been — but I’ve been consistently surprised at all of the things about it that I didn’t realize were going to be so tough.

I think that it's human nature to forget how hard it was when you started something once you're a decade or more down the road and have achieved a very high degree of mastery in your current field. You tend to think that you're just really smart, and that's why things come so easily to you, but in many instances I've been undervaluing all of the beneficial experience that went into getting me to where I currently am as an accountant.

One of the things that I've noticed this week is that I tend to skim through stuff when I'm working on a programming task, even though I really should know better. At first, I thought that I was just tired and overwhelmed, given some of the other accounting commitments that I'm still trying to satisfy while simultaneously attempting to get my feet under me as a developer. But as I've watched the development manager review code, he scans through stuff with incredible speed, and I'm starting to realize that I read quickly on my development tasks because I'm used to being able to read through stuff very quickly when it relates to accounting.

In accounting, I have sufficiently mastered the subject that it's generally easy for me to pick out the relevant points of whatever it is that I'm reviewing without having to slow down significantly. With development, that tendency to read through stuff quickly — relying upon a mastery of the subject that I don't have to make sure I pick out the key points of what I'm reading — will continue to get me in trouble until I finally manage to break myself of that habit.

In fairness, the other thing that I think I have working against me is the high degree of pressure that I’ve been under in this most recent role to turn things around quickly in an effort to keep up with everything that was changing on a monthly and sometimes even weekly basis.

Either way, I have made a renewed commitment to slow down and take the time needed to stop making so many stupid mistakes. For those of you who are making a similar kind of transition — whether it be from accounting or from a completely unrelated field — try to keep an eye out for the habits and assumptions that you developed in your previous roles. Yours might not be exactly like mine, but the chances are that sooner or later you'll come across something that you're going to have to unlearn in order to achieve your goals as a developer.

That’s it for this week from me, good luck with all of your efforts in the coming one.

Google Translate Inside of ServiceNow

I recently had a client (a sizable multinational with employees speaking 9 different languages) that needed real-time translation inside of ServiceNow.

The translation of labels on a given form–and of UI Actions–is already built in. It’s not real-time, but the translation plugins for the various languages already have translations defined for the baseline labels.

Translating things like description, short description and resolution notes (especially in real-time) is a whole different ball game.

I created some custom tables, set up an outgoing REST message, and then did a whole bunch of coding. I can’t give away any of the secret sauce, but here is the outcome:

First we start with a standard incident with the short description and description populated with English. You’ll also notice that we’ve got two new buttons at the top of the form, ‘Translate’ and ‘Translate Notes’.

Next, you can see a screenshot showing the same incident, but with the user’s language switched to French. You’ll notice that the labels are translated (‘Caller’ is changed to ‘Appelant’), which is standard with the French Translation plugin.

The ‘Translate’ and ‘Translate Notes’ buttons have been translated, but the values in the description and short description fields are still in English.

Then, after clicking the ‘Traduire’ button, French translations are added below the short description and the description fields. (In the blue field decorator.)

 

Then, if another user (with their language set to Spanish) opens up the same incident, the labels and buttons are both translated, but since the ‘Translate’ button hasn’t ever been clicked by a user with Spanish set as their language, we don’t have a translation for the description or short description.

Once the 'Traducir' button is clicked, the Spanish translation is displayed. Below is a screenshot of a new incident, once again in English, this time from the service portal.

Here is the same incident, this time as it would be viewed by the ITIL user with their language set to Spanish. The additional comments in English that were visible on the ESS view are visible here as well, untranslated.

Clicking the ‘Translate Notes’/’Traducir Notas’ button brings up a UI Page with the original value in English in the 4th column, and the Spanish translation in the 5th column.

Looking at this particular screenshot, I can see that I probably should have made the title “Work Notes & Additional Comments” and the work_notes/comments label in the second column dynamic. I should also probably duplicate the Spanish value from the 1st row, 4th column and push it into the 5th column. I’ll have to circle back around to the client and see if they want that change made.

Finally, here is the ESS view with an English translation for the Spanish additional comments response which was input by the ITIL User.

There were some other wrinkles with this particular engagement that I won’t go into here, but overall I’m very pleased with how everything came together.

It would have been very easy to set the description, short description, and resolution notes to translate automatically rather than requiring the user to click a button, but there was some concern about things being translated unnecessarily.

I did however set it up to re-translate anything that has been previously translated if the source value of the original field is changed. All of the translations are saved off so that the API doesn’t have to be called (and charges incurred) unless the value of a field changes, or a particular field hasn’t ever been translated.
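For anyone curious about the general shape of this kind of integration, here is a generic sketch of the outbound call using ServiceNow's RESTMessageV2 API against Google's Translate v2 endpoint. This is not the code from this engagement (that stays private), and the system property holding the API key is a placeholder.

function translateText(text, targetLanguage) {
    var request = new sn_ws.RESTMessageV2();
    // Placeholder system property for the Google API key
    request.setEndpoint('https://translation.googleapis.com/language/translate/v2?key=' +
        gs.getProperty('x_translate.api_key'));
    request.setHttpMethod('post');
    request.setRequestHeader('Content-Type', 'application/json');
    request.setRequestBody(JSON.stringify({ q: text, target: targetLanguage }));

    var response = request.execute();
    if (response.getStatusCode() == 200) {
        var body = JSON.parse(response.getBody());
        return body.data.translations[0].translatedText;
    }

    gs.error('Translation call failed with status ' + response.getStatusCode());
    return '';
}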

There is an app on the ServiceNow store with a price tag of $9,500 per month that has a few extra bells and whistles, but this replicates a significant chunk of the functionality, and the only ongoing charge is $20 per million characters translated.

I think this is a big win for everyone involved, and I’m happy to have been able to automate the translation process so that the ITIL users at that company can spend more time doing other, higher-value work instead of manually translating incidents.