I’ve been at my startup for a year and a half or so, and things are looking up, but I’m pretty tired and need some time to decompress. I’m currently rewriting a lot of code that had to be redone for multiple reasons. It’s not that fun, although I am doing some more interesting work with MongoDB: implementing ways to make writes to the db atomic and only happen once, even when the same write is attempted by multiple threads (i.e. race conditions, out-of-band processes, etc.).
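One way to get the “only happen once” behavior is to lean on MongoDB’s unique `_id` index: the first `insert_one` wins and every concurrent duplicate raises `DuplicateKeyError`. Here’s a minimal sketch of that idea; the real thing would use pymongo against a live server, so a tiny in-memory stand-in (`FakeCollection`, `record_event` — both made-up names) keeps the example self-contained:

```python
# Sketch of write-once-under-races using Mongo-style semantics.
# Real code: pymongo's insert_one on a collection, catching
# pymongo.errors.DuplicateKeyError. FakeCollection simulates the
# server-side atomic uniqueness check so this runs standalone.
import threading

class DuplicateKeyError(Exception):
    pass

class FakeCollection:
    """Minimal stand-in for a Mongo collection with its unique _id index."""
    def __init__(self):
        self._docs = {}
        self._lock = threading.Lock()  # models the server-side atomicity

    def insert_one(self, doc):
        with self._lock:
            if doc["_id"] in self._docs:
                raise DuplicateKeyError(doc["_id"])
            self._docs[doc["_id"]] = doc

def record_event(coll, event_id, payload):
    """Insert exactly once; concurrent duplicates are rejected atomically."""
    try:
        coll.insert_one({"_id": event_id, "payload": payload})
        return True   # this caller won the race
    except DuplicateKeyError:
        return False  # someone else already wrote it

coll = FakeCollection()
results = []
threads = [
    threading.Thread(target=lambda: results.append(record_event(coll, "evt-1", {"n": 1})))
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results.count(True))  # exactly one of the 8 writers succeeds
```

The same compare-and-swap flavor works for updates too (filter on the expected prior state so only one updater matches), but the unique-key insert is the simplest version of the pattern.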
We have a lot of slow APIs at work that return the same information the majority of the time. One API in particular takes 3–6 seconds to respond because of all the data it fetches from the db (e.g. client configuration data). That’s a disaster, and with enough concurrent requests it will grind your app to a halt. memcache is definitely helpful as well, but that can be another layer behind nginx caching, which will give you the best performance imo. Just know how and when to invalidate the nginx cache (e.g. you can send a request with a certain cookie set to get nginx to bypass and refresh the cache). Or you could make a cookie’s value part of the cache key. Cached responses served by nginx after that change show a duration of 0 seconds (I’m not sure why it records as 0.000, but hey, I’ll take it). The results are far better than 3–6 seconds! To prevent stampede scenarios, you could also warm nginx’s cache before serving live traffic (e.g. some time during deployment).
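A rough sketch of that nginx setup — the zone name, cookie names, and upstream are all made up for illustration, and your paths/timeouts will differ:

```nginx
# Hypothetical sketch of the caching described above.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=60m;

server {
    location /api/ {
        proxy_cache api_cache;
        # make a cookie's value part of the cache key
        proxy_cache_key "$scheme$request_method$host$request_uri$cookie_clientid";
        proxy_cache_valid 200 10m;
        # a request carrying this cookie skips the cached copy
        proxy_cache_bypass $cookie_refresh_cache;
        # collapse concurrent misses into one upstream request (anti-stampede)
        proxy_cache_lock on;
        proxy_pass http://app_backend;
    }
}
```

`proxy_cache_bypass` covers the “send a request with a certain cookie” invalidation trick, and `proxy_cache_lock` helps with stampedes even before any cache warming.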
I was raising Postgres’s max connections to 1000 and got:
The PostgreSQL server failed to start. Please check the log output:

2012-09-04 23:51:06 PDT FATAL: could not create shared memory segment: Invalid argument
2012-09-04 23:51:06 PDT DETAIL: Failed system call was shmget(key=5432001, size=53542912, 03600).
2012-09-04 23:51:06 PDT HINT: This error usually means that PostgreSQL’s request for a shared memory segment exceeded your kernel’s SHMMAX parameter. You can either reduce the request size or reconfigure the kernel with larger SHMMAX. To reduce the request size (currently 53542912 bytes), reduce PostgreSQL’s shared_buffers parameter (currently 3200) and/or its max_connections parameter (currently 1004). If the request size is already small, it’s possible that it is less than your kernel’s SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for. The PostgreSQL documentation contains more information about shared memory configuration.
The solution was to raise the kernel’s SHMMAX:
sudo sysctl -w kernel.shmmax=60000000
more info here: http://michael.otacoo.com/postgresql-2/take-care-of-kernel-memory-limitation-for-postgresql-shared-buffers/
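Note that `sysctl -w` only changes the running kernel; to make the setting survive a reboot, the same value (taken from the command above) can go in /etc/sysctl.conf:

```conf
# /etc/sysctl.conf
kernel.shmmax = 60000000
```

then reload with `sudo sysctl -p`.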
I’ve been emailing proxiesforrent for almost a year now…
Here’s my last email
PLEASE cancel my account, it’s been almost a year since I first tried to cancel…
Hi, I’ve been trying to cancel for almost a YEAR NOW… This has to be ILLEGAL. PLEASE PLEASE cancel my account. I can’t even log in to your website, and you don’t have my email address when I try to retrieve my password.
My email address for this account is firstname.lastname@example.org
This is the side project I’m working on now.

So far I can spin up X instances of a given AMI Y with whatever tags Z you want on them, plus some DB records of these instances.

After the instances are all running, my script copies a shell script over to each instance (the copying happens in parallel and in a non-blocking manner).

Then I connect to each instance over SSH and run the shell script I just copied over (parallel + non-blocking using gevent).

The shell script each instance runs is basically a set of commands to install Puppet and connect to a Puppet master.
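The copy-then-run fan-out above boils down to running one provisioning function per host concurrently. The real script uses gevent; this sketch swaps in stdlib `concurrent.futures` so it stands alone, and `provision()` is a hypothetical placeholder for the actual scp + ssh calls:

```python
# Minimal sketch of the parallel provisioning fan-out. The post's
# version uses gevent greenlets; ThreadPoolExecutor shows the same
# shape without third-party dependencies.
from concurrent.futures import ThreadPoolExecutor

def provision(host):
    # Placeholder for the real work per instance:
    #   scp bootstrap.sh user@host:
    #   ssh user@host 'bash bootstrap.sh'   (installs Puppet, points it
    #                                        at the Puppet master)
    return (host, "ok")

hosts = ["10.0.0.%d" % i for i in range(1, 4)]
with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
    # map fans out one provision() call per host; results come back
    # as the calls complete
    results = dict(pool.map(provision, hosts))
print(results)
```

With gevent the structure is the same, just with `gevent.spawn` per host and `gevent.joinall` instead of the executor.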
So far the scripts will download Puppet + dependencies, but I don’t normally keep a Puppet master running.

I’ll spend some time later setting up a Puppet master. To get more advanced, running masterless Puppet would be ideal, although I haven’t explored that at all or tried to learn it.

Setting up Puppet is pretty annoying, though, at least it was when I spent hours on it one weekend. I eventually got it running, but it took so long I lost interest.

I suppose the next step is to see Puppet through to managing role-based deployments.

I know there are simpler configuration management tools than Puppet, but from what I’ve read, Puppet is probably the most comprehensive tool out there.