Unfortunately, a bug that occurred once in a while (~5% of the time) prevented some users from completing one of the early missions in our game. At first I thought it was more serious, so I brought it up to the CEO; I didn’t want this bug to stop people from playing. I was then directed to one of the senior programmers, who would help determine whether it was patch-worthy. We figured it occurred ~5% of the time and decided it could wait a few days. I guess things could have been worse. I had the bug fixed in trunk, but never merged it into the branch: that would have taken additional QA resources and a patch, and the bug had never occurred in production. Until tonight, of course. The good thing is that I caught it early on and we were able to make a decision on its severity. The bug itself was simple: the value for a cache key was being set incorrectly. It was set correctly in one spot but not in another, and the code that read the value in the second spot threw an error when it used it.
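For illustration, here’s the general shape of that bug class: two writers to the same cache key, one of which stores the wrong kind of value, so a reader that trusts the key’s shape blows up. The names and data are made up for this sketch, not our actual code:

```python
# Hypothetical sketch of the bug class -- two writers, one cache key,
# and a reader that assumes the "correct" writer ran last.
cache = {}

def start_mission(user_id):
    # Writer 1: stores the value in the shape the reader expects.
    cache[f"mission_state:{user_id}"] = {"step": 1}

def resume_mission(user_id):
    # Writer 2 (the buggy spot): stores a bare int instead of a dict.
    cache[f"mission_state:{user_id}"] = 1

def advance_mission(user_id):
    state = cache[f"mission_state:{user_id}"]
    # Works after start_mission, but raises
    # TypeError: 'int' object is not subscriptable after resume_mission.
    state["step"] += 1

start_mission(42); advance_mission(42)   # fine
resume_mission(42); advance_mission(42)  # throws
```

Because only one of the code paths hit the bad writer, the error only showed up in the small fraction of sessions that went through it.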
I’m also working on performance issues. We have some slow API calls, and I’m trying to figure out what’s causing them. Is it locking in the DB? Network issues? Are the slow calls isolated, or do they cluster in time? Whatever the issue turns out to be, this is a great exercise.
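One cheap first check is the clustering question: bucket the slow calls by minute and see whether they spike together (which would point at something shared, like lock contention) or spread evenly (which would point at something per-request). A rough sketch, assuming a hypothetical log of (timestamp, duration) rows rather than our real log format:

```python
# Sketch: do the slow API calls cluster in time? (log format is assumed)
from collections import Counter
from datetime import datetime

SLOW_MS = 1000  # assumed threshold for "slow"

def slow_call_buckets(rows):
    """Count slow calls per minute. A few tall buckets suggests clustering
    (e.g. DB locking); an even spread suggests a per-request cause."""
    buckets = Counter()
    for ts, duration_ms in rows:
        if duration_ms >= SLOW_MS:
            buckets[ts.replace(second=0, microsecond=0)] += 1
    return buckets

rows = [
    (datetime(2010, 5, 3, 22, 14, 5), 1800),
    (datetime(2010, 5, 3, 22, 14, 40), 2300),
    (datetime(2010, 5, 3, 23, 2, 10), 1200),
]
for minute, count in sorted(slow_call_buckets(rows).items()):
    print(minute, count)
```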
Another issue we had was a large backlog in our message queue. Messages piled up, and then a while later the queue cleared itself out.
There’s so much to do at work.