Thursday, July 25, 2013

Kindle Paperwhite

Just got a Kindle Paperwhite today.

Initial setup experience: D. Not sure if it was because I got pulled away partway through and it went back to the sleep screen mid-setup, or because the Wi-Fi signal in our office was poor and it doesn't deal well with bad connectivity. The thing has one button and no manual, and still isn't as intuitive (or anywhere near as responsive) as an iPad, but being a techie I just screwed with it for half an hour until somehow I got it to reset and recover. Had that happened to a normal user with limited time and patience, I suspect there would have been at least a tech support call, perhaps a return. But once I finally fixed that, everything was fine.

UI/Controls: C+. I got some previous-gen Kindle a while back and returned it within the 30-day return period because it was so annoying. It looked and felt like something 5 years older than it was, and the physical keyboard took up a big chunk of real estate better spent on the primary use case (reading books). This is much improved, with a touchscreen and a single physical (lock) button. Still not as user-friendly as an iPad, but pretty good. Major gripe: it's slow as *$&%. Seriously, this thing is waaay less responsive than my 1st-gen iPad, which is now over 3 years old, and it doesn't even have to deal with complexities like color. And I don't think it's the eInk screen, it's the software. Turn a page...wait...do it again...oh! There it went--two pages! But you learn to accept that and it's fine. I'm getting very tired of accepting poor performance from gadgets, though. Every once in a while you step back and realize you've been trained by the device, and that someday somebody will watch video of you using it, amazed at how tolerant you were.

Screen: B+. Only dinged for touchscreen responsiveness and general slowness, which is probably more software than hardware. Text looks excellent, but it needs more use before I can really reach a verdict. The backlight is a very nice feature, and pretty well implemented in software. Like all Kindles, battery use from the eInk screen should be minimal, though how the backlight affects it remains to be seen. Highlighting text seems to work well (it's a disaster in the Kindle app on Microsoft Surface RT--when I read with it I actually do my highlighting separately on my iPhone, which is completely stupid), though of course there's no color. Haven't tried the dictionary, but it looks like it should work pretty well.

General form factor: A. I suspect the Kindle app running on a newer retina iPad would be just as good or better than the Paperwhite in many ways, but this is small, light, and much cheaper. (Not sure how it compares to Android tablets, though.) This is something that will be great on the nightstand, on a plane, traveling, etc. Also stupidly easy to lose on a plane, but I guess an iPad is too. :)

So...gonna keep this one, I think, unlike my previous Kindle. They've obviously come a long way and hopefully this one has room to grow with some software updates.

Saturday, January 23, 2010

Heroku: The Good, the Bad, and the Ugly

After over a year as a happy customer with A2 Hosting, having Kitsch'nware running on one of their inexpensive shared host plans, I decided it was time to move on. Nothing against A2--they've been great, service is solid, support is EXCELLENT, and the price is right. But it was really only ever meant as a staging/development server, for a few beta testers to use--I knew it wouldn't hold up under any sort of load. So I wanted to put it somewhere else, but had a few requirements:

  1. The new server needed to be dynamically scalable. Right now Kitsch'nware isn't being heavily used, but I'm almost ready to start opening it up to more users, and I want to be able to crank up the server power on the fly as necessary. The flip side is that right now it's still a low traffic site, so I don't want to pay a bunch of money for this flexibility until I actually need it. But when the time comes, I don't want to have to re-deploy or mess with anything. Heroku fit the bill--for Kitsch'nware's basic functionality, it may actually cost me a little less than I was paying at A2, which I didn't expect. (Another bonus is that New Relic's Bronze monitoring package is included for free, which would normally cost at least $50/month!) And scaling it up is as easy as going to their web page and moving a slider. Way cool!
  2. I didn't want to have to manage the server. I could have stuck with A2, and moved up to a virtual server, but I don't want to manage a Linux box on top of everything else. Kitsch'nware is a one man show right now, and I already have way too much work to do on it without having to worry about dealing with server details too. I just need to be able to deploy and run my Ruby on Rails app. Also moving to an A2 virtual server would have been a guaranteed higher cost, even if I wasn't using the extra capacity, and should things *really* take off later, there would have been another move to a dedicated server. Heroku makes this sort of thing really simple, and you don't pay for what you're not using.
  3. I wanted to have access to memcached. A2 doesn't offer that on their shared hosting, only on virtual/dedicated servers, which as I said, I didn't want. Heroku is currently beta-testing memcached, and it's supposed to be going into production soon, so I figured that would work. If I could get into the beta, cool (and I did), and if not I could wait a little longer for it.

So sorry, A2--you were great, but you just didn't offer what I needed.

I actually looked into Heroku when I originally put Kitsch'nware out into production. At the time (late 2008/early 2009?) it wouldn't run at all. I don't remember what the issue was, but I think I was using some RoR feature, or maybe some gem, that wasn't supported. There was also the problem that SSL wasn't supported, and I don't think custom domain names were supported either (but I could be wrong about that). At any rate, it didn't meet my needs. Last October when I was at Aloha On Rails, I helped out with their RoR class and got to play around with Heroku again. Their deployment process was really slick, but I wasn't sure if Kitsch'nware would run on it. During the conference, Heroku's Blake Mizerany (@bmizerany) gave a great talk on Heroku, and I made a note to myself to test it out. (BTW: A shout out to Blake to thank him for pinging me on Twitter when I posted that I was having Heroku issues.)

Time went on, and I didn't try Heroku, for a few reasons. First, because I was using Beanstalk's Subversion hosting for source control, and Heroku requires Git. The rest of the RoR world moved on to Git long ago, but I started with Subversion and Beanstalk for a number of reasons, and didn't have any compelling reason to switch. Second, I just hadn't needed to scale Kitsch'nware. So I plodded on--everything worked, no reason to mess with it. But recently I decided it was time to get Kitsch'nware ready for the next level, which meant making it scalable. Heroku looked like the best option, so step 1 was converting to Git. I set up a GitHub repo for Kitsch'nware, and after a few attempts got my SVN repo imported the way I wanted it. (Some articles I found via Google, such as this one, helped greatly, especially with correcting the author history.) Note: Beanstalk has been testing a Git system, but I didn't think it was available yet, so I jumped to GitHub. They contacted me on Twitter after that, telling me it was available if I wanted them to enable it on my account. If I'd known ahead of time, I might have done that, but now that I've already switched I'm staying (even though I now have to pay for GitHub, where Beanstalk was free for my little repo). Sorry, guys!
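
For reference, the conversion went something like this--the URLs, paths, and author mapping here are made up, not my actual setup, and this assumes the standard trunk/branches/tags layout:

# authors.txt maps each svn username to a git identity, one per line:
#   jdoe = John Doe <jdoe@example.com>
git svn clone --stdlayout --authors-file=authors.txt \
  https://svn.example.com/kitschnware kitschnware-git
cd kitschnware-git
git remote add origin git@github.com:example/kitschnware.git
git push origin master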

Once that was done, I tried deploying it. I had to fix some things, like getting gems set up the way they do it (really cool, actually), switching from the Whenever gem I'd been using for cron jobs to Heroku's cron system (also very cool), and some other minor config/deploy changes. To my shock and glee, it worked!
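
The cron switch boils down to defining a cron rake task, since Heroku's cron add-on just runs rake cron on your schedule. A minimal sketch--the task body here is hypothetical, not my actual jobs:

# lib/tasks/cron.rake
task :cron => :environment do
  # roughly emulates a daily Whenever job
  if Time.now.hour == 6
    Notifier.deliver_daily_digest
  end
end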

So the next step was to switch my domain over and get SSL set up--this is where things got ugly. To be fair, some of my headaches were caused by domain registrar issues, which aren't Heroku's fault. (Let me preface all this by saying Heroku could use some work on their documentation--I'm going to try to outline everything I came across here in case somebody there sees this and wants to fix it.) Here's more or less what happened:

  1. I followed Heroku's custom domain instructions to get kitschnware.com working. Their page could stand to be reorganized--right at the beginning they recommend using their Zerigo DNS plug-in, but the instructions to do that are at the bottom of the page. Minor detail, but it seems like the recommended solution should be front and center. :) Anyway, it turned out my registrar (A2 Hosting--I transferred the domain to them when I started using their hosting) has a really shitty system for changing DNS info. They just assume you'll be using their hosting if you're using them as a registrar, so what they allow you to do is minimal. I was at least able to put in the Zerigo DNS info, but it didn't seem to work. I think the issue in the end was DNS caching/propagation, as I noticed on a different machine the next day that I was suddenly able to hit kitschnware.com, even though I still couldn't from my MacBook Pro. A little time and rebooting the MacBook eventually fixed it.
  2. When I changed the DNS entries at A2, it apparently nuked extra DNS info I forgot was there, like MX entries. I use Google Apps for Kitsch'nware's email accounts so I get custom @kitschnware.com email addresses, but mail gets routed to Google rather than my web host. This was something I must have set up before moving to A2, and it just happened to keep working when I transferred the domain to them. (Previously I had it registered with 1&1, a registrar I've used for years that allows you to change whatever DNS settings you like.) I'm in the middle of transferring the domain back to 1&1, but things were slightly complicated because I'd set it up as privately registered. So hopefully that will be fixed soon, and I can get my custom Gmail up and running again, but for now all @kitschnware.com email has been dead going on a week, which pisses me off. Not Heroku's fault, though.
  3. I had things set up at A2 so you could go to either kitschnware.com or www.kitschnware.com, and either way you would get redirected to kitschnware.com (which is the domain my SSL certificate is for). When I don't have my SSL cert installed, both domains seem to resolve correctly--though I just removed the cert to try to repro this, and it didn't happen. :( Maybe this actually had something to do with my Heroku custom domains add-on settings? I switched it back and forth between www.kitschnware.com and kitschnware.com a few times while trying to get everything working. Right now it's set to kitschnware.com, because that's what my SSL cert is for, but it means that anybody trying to hit www.kitschnware.com just gets a lookup error. Lame. I'm hoping this can all be easily remedied by getting the DNS settings at 1&1 set up correctly (using an A record like Heroku mentions on their custom domains page), but until the domain transfer happens that domain is dead. It's possible adding www.kitschnware.com to the custom domains list might make the URL work, but login won't work because the SSL cert is for kitschnware.com. So not really Heroku's fault here, and I should be able to fix it once my domain is transferred, but it would be nice if Heroku had some sort of redirect setup where you could just automatically send 'www' (or maybe any subdomain) to the root, or vice-versa, to support a single SSL cert. (I also tried setting up the Heroku wildcard domains add-on, and it didn't work for me. Not sure if this was me trying to do something unsupported, me using it incorrectly, or the documentation being incomplete/wrong.)
  4. SSL. This turned into another head-scratching adventure. After digging up my cert and private key from A2, I used the Heroku command line tool to install the cert. At the time, I had my custom domain set up as 'www.kitschnware.com', so that caused some headaches. I finally got that sorted out, so the custom domain and the SSL cert match. I thought everything was good, until I tried to log in with Safari instead of Firefox. All of a sudden I got a security error saying the cert wasn't signed by a valid authority. WTF? The cert is from GoDaddy (cheap, and they work!) and I never had any issues with it on A2. So I figured the problem was with my Heroku setup, not the cert itself. Did some Googling and found a gotcha when using GoDaddy certs with nginx servers (like Heroku) and Safari. Applied the fix from that page (combining the two cert files you get directly from GoDaddy, who don't list nginx in their choice of servers when you download the cert) and bam! things started to work. (There's a sketch of the fix just after this list.) This is something that Heroku really needs to put on their SSL documentation page, so I'll see if I can at least make them aware of this bit.

    • Related note about Heroku SSL: Using SNI so you can run your own cert on their system means that XP users on IE will see a cert warning, which really sucks. I suspect Kitsch'nware will have more than a few XP/IE users, so I'm not sure how I'll handle that yet. It isn't Heroku's fault at all--it's a Microsoft problem, and there is a fix, but it costs $100 a month at Heroku. :( It just isn't an important enough issue (yet) for me to do that, but this is something that worked just fine at A2 that I'm having to give up by moving to Heroku.
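
For anybody hitting the same GoDaddy/nginx/Safari issue from #4 above, the fix boils down to concatenating your cert with GoDaddy's intermediate bundle before handing it to Heroku. Roughly this--the file names are illustrative, and the heroku invocation is from memory, so check heroku help for the exact command:

cat kitschnware.com.crt gd_bundle.crt > combined.crt
heroku ssl:add combined.crt server.key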


Some other notes on Heroku's documentation:

  • Their documentation pages that talk about 3rd-party services, like their New Relic page and Zerigo page, don't mention whether you need to create an account with that 3rd party. I'm pretty sure you don't, though I have accounts anyway (I already had accounts on New Relic and Exceptional). I wound up setting up a Zerigo account I didn't need, because when I couldn't get things working I thought maybe that was required. This would be a nice thing to add to the documentation just to avoid any confusion.
  • When you do a Git push to Heroku, you can only push from the master branch. I had set up a 'heroku' branch to keep my A2/Heroku versions separate during the transition, and kept trying to push it, but my changes just wouldn't show up on the live site! I don't remember what error, if any, the Git push command gave me, but I was totally stumped until I did a little digging through the Heroku docs. I don't remember where I found it, but there was some little thing somewhere that mentioned this. It should probably be a little more prominent in the documentation, as I'm sure I'm neither the first nor the last person to try this. :)


    UPDATE (01/24/10): According to @telinnerud: 'To push a branch other than master to Heroku, do git push heroku some_local_branch:master'. I haven't tried it myself--I'm just using master for Heroku now, and I don't actually need to maintain two branches anymore since I've switched over completely.


So that's my adventure with Heroku thus far. It's been frustrating, but not really their fault in the end. Things are very close to working, and hopefully will be totally fixed once my domain transfer finally goes through. Heroku is a cool service, and I'm looking forward to being a happy customer. :)

Monday, November 2, 2009

Doing deep copies of arrays of ActiveRecord objects in Ruby

So in Kitsch'nware I have a big collection of ActiveRecord objects I pull from the db, and in a loop I do some processing on the collection where I modify some of the AR objects. Then on the next iteration of the loop, I need the collection restored to a clean state. One option is to do another DB query to pull the same data, but that seemed silly and inefficient (I would think Rails' SQL caching would take care of it, but I couldn't verify that it actually did).

My first thought was to copy the original collection at the top of the loop, so I looked into the copying options in Ruby: dup and clone. dup is designed for shallow copies, and what I wanted was a total deep copy of the collection, which I thought clone would do. Alas, no. Did some more digging and found in the ActiveRecord API documentation that when AR does a clone, it skips the id but copies all the attributes. In most cases that makes sense--if you clone something to put back in the DB, you'd want to give it a new id. But in my case, I wanted an exact copy, id included.
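
To see the shallow-copy problem in plain Ruby (hashes standing in for AR objects here):

a = [{ :qty => 1 }]
b = a.dup                          # shallow: new array, same element objects
b.first[:qty] = 99
a.first[:qty]                      # => 99 -- the "copy" mutated the original

c = Marshal.load(Marshal.dump(a))  # deep: the elements get copied too
c.first[:qty] = 5
a.first[:qty]                      # => 99, untouched this time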

One option was to override clone in my AR classes, but I found what seemed to me a cleaner, easier-to-follow method here and here. It's a one-liner:

copy_of_your_array = Marshal.load(Marshal.dump(your_array))

and bam! you've got a deep copy of your array. This cut the number of SQL queries on the page (using a copy of the DB pulled from production, which is still pretty small) from almost 400 to about 70, and the time spent in MySQL went down about 80% (stats courtesy of the awesome query_reviewer plugin). A few more optimizations to prevent doing the same work over and over in the loop, and I should have a nice performance improvement. One (small) worry is the Ruby overhead this copying might incur, but from what I've seen so far it looks fine. Not sure if that will change as my DB gets much bigger.
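
In context, the pattern looks something like this (the model and method names here are made up, not my actual code):

originals = Ingredient.find(:all, :include => :conversions)
scenarios.each do |scenario|
  working = Marshal.load(Marshal.dump(originals))  # fresh deep copy each pass
  process(working, scenario)                       # mutate freely; originals stay clean
end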

Sunday, September 27, 2009

Xcode and Subversion

I'm getting started on some iPhone development because Project Unblowuppable will have an iPhone app as part of the whole system. I'm already using Subversion for my Ruby on Rails work, so as much as I loathe it, I decided to just stick with it for my Xcode work too. I found a page with good info on setting up SVN and Xcode (definitely check out the PDF it links to), so I just wanted to put that up here as a note to myself, if nothing else--I have no doubt I'll need to come back and look this over next time I want to set up an Xcode project in Subversion. That's how much of a PITA Subversion is. :)

Sunday, June 28, 2009

Updating multiple form items with Ajax.InPlaceEditor

I have a few places where I want to do in-place editing in my Rails app, so I tried the official Rails in_place_editing plugin, but my form setup was just too complex for it to work. (From some reading online, the plugin also has other issues and hasn't been maintained, which is why there are several alternative plugins out there.) I eventually found a solution I could adapt and use.

So after implementing that, simple in-place editing worked beautifully! Unfortunately, simple in-place editing isn't quite enough for my app. When a page item is updated, sometimes I need it to update another item on the page. The script.aculo.us Ajax.InPlaceEditor is only designed to update one page item--the one that was edited. I tried working around it by changing something like this (in my controller action where I handled the AJAX call):

render :text => new_value

To something like this:

render :update do |page|
  page.replace_html(original_page_id_to_update, :text => new_quantity)
  page.replace_html(secondary_page_id_to_update, :text => new_value)
end

That didn't quite work. It updated the secondary_page_id_to_update element correctly, but instead of putting new_quantity into original_page_id_to_update, it put in something like this:

try { Element.update("original_page_id_to_update", new_quantity); Element.update("secondary_page_id_to_update", new_value); } catch (e) { alert('RJS error:\n\n' + e.toString()); alert('Element.update(\"original_page_id_to_update\", new_quantity);\nElement.update(\"secondary_page_id_to_update\", new_value);'); throw e }

I was stumped on this for quite a while. Spent a lot of time scouring The Google for answers, digging into the script.aculo.us documentation for Ajax.InPlaceEditor, and examining the actual Ajax.InPlaceEditor code. What my web searches turned up was that I'm not the only person with this problem--I found several forum posts on different sites from people trying to do the same, or similar, things. Unfortunately, none of them had a solution. :(

So I went for a run to take a break, then came home to take a shower, and, like Archimedes, I had a "Eureka!" moment. (Though his was in the bath.) I realized that both of my replace_html calls were actually working, but as part of what Ajax.InPlaceEditor does, it takes the final value returned by the render call and sticks it into the field that was edited. This is just the default behavior, since it assumes you only want to update that one field.

Looking into the Ajax.InPlaceEditor code, I found this (the key lines are the new Ajax.Updater and new Ajax.Request calls):

handleFormSubmission: function(e) {
  var form = this._form;
  var value = $F(this._controls.editor);
  this.prepareSubmission();
  var params = this.options.callback(form, value) || '';
  if (Object.isString(params))
    params = params.toQueryParams();
  params.editorId = this.element.id;
  if (this.options.htmlResponse) {
    var options = Object.extend({ evalScripts: true }, this.options.ajaxOptions);
    Object.extend(options, {
      parameters: params,
      onComplete: this._boundWrapperHandler,
      onFailure: this._boundFailureHandler
    });
    new Ajax.Updater({ success: this.element }, this.url, options);
  } else {
    var options = Object.extend({ method: 'get' }, this.options.ajaxOptions);
    Object.extend(options, {
      parameters: params,
      onComplete: this._boundWrapperHandler,
      onFailure: this._boundFailureHandler
    });
    new Ajax.Request(this.url, options);
  }
  if (e) Event.stop(e);
}

What was happening was that it was creating a new Ajax.Updater, which is meant to update a single element with the return value from the URL it calls--and that return value turned out to be the chunk of Javascript above when I used the render :update block. The Javascript was being eval'd correctly, updating both my fields, but then Ajax.Updater re-updated the first field with the result of the render block, which was that chunk of Javascript. (BTW, I think this Rails bug is the same thing, but they didn't see that anything was being eval'd because they only had the one field being updated, so their guess was that the eval was failing.)

I tweaked the Ajax.InPlaceEditor code to replace the Ajax.Updater line with the Ajax.Request line, and suddenly everything worked! But that didn't seem like the right solution, so I looked at the logic in that block. It checks options.htmlResponse, so I tried setting :htmlResponse to false where I set up the Ajax.InPlaceEditor, and voila! It worked.
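
So the editor setup ends up looking something like this (the element id and URL here are illustrative, not my actual code):

new Ajax.InPlaceEditor('item_quantity', '/items/update_quantity', {
  htmlResponse: false  // use Ajax.Request and just eval the returned Javascript
});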

So hopefully this will be useful to people--it turns out to be very easy. Here's the summary:

  1. Use a render :update block with as many replace_html calls as you need (including one to update the in-place edit field itself).

  2. Set :htmlResponse to false when you set up your Ajax.InPlaceEditor.


And that's it!

Saturday, June 13, 2009

Beware trying to do an Array find() with an ActiveRecord collection, and other various lessons.

Wow, this is something that's been haunting me for a week or two, and I just couldn't figure it out. I've been on a major performance improvement spree on Project Unblowuppable, which has always been a very database intensive application. After building up a decent sized database in production through the private beta, I pulled it down to play around with locally and find the real bottlenecks. One particular page (the one that does by far the most work) for one user was doing over 1200 db queries! Ouch. I've been trying to optimize that for a few weeks by using :include (in several cases nested includes) in my original queries to eager load as much as possible up front, and by rearranging the code to do things like pull db queries out of loops where possible. On most of the pages this was a big win, but not on that One Evil Page. I used the same techniques there, and spent literally a few weeks (in my spare time) going over the code again and again, and looking at what was going on in a debugger (thank you, RubyMine!!). Nothing worked, but watching exactly what was happening in the debugger and doing lots of research on Google was very interesting.

If you watch what gets retrieved by an ActiveRecord find() call in a debugger, you'll see (of course) that by default it doesn't automatically retrieve any of the associated records (via belongs_to, has_many, etc.). If you use the :include option, you can tell Rails that you want to pull those associations at the same time, so you're not doing more queries later--quite possibly one at a time in a loop, which is terrible if db queries are a bottleneck for your app. An interesting thing, if you peek at the query result, is that the :included record(s) may not actually be visible in the model if you drill down through it. In fact, what you may see is a new attribute for an association you loaded that's nil! I kept seeing this and thinking "WTF?" But looking at the SQL in the server log, I could see that the associated records were indeed being loaded--so where did Rails put the data?
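
For reference, :include looks like this--the model names are made up, and this is Rails 2.x syntax:

# one up-front query per association instead of N+1 queries later:
recipes = Recipe.find(:all, :include => { :ingredients => :conversions })
recipes.first.ingredients   # already loaded -- no new SQL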

In most cases, using records with eager loaded associated data worked well, but not always. Eventually I was satisfied that the data was somewhere, even though I couldn't drill down to it and view it directly in the debugger. It turns out ActiveRecord does some method_missing magic to pull that data in. Ok, no problem--there's lots of PFM in AR. :) But on the One Evil Page, even though I was pre-loading the data, it was still doing a bunch of queries to pull it from the DB again. What I finally decided to do was switch from using an ActiveRecord find() (find_by() really) to a standard Ruby Array find(). Looking at the types in the debugger, my collection was an Array, so this should work, right? Not necessarily. Rails does some magic with the find() method on these collections (which are actually AssociationProxies), and instead of just doing an in-memory search through the array, it was still doing a db lookup. So I did some looking through the Pickaxe book and saw that on Enumerable collections, find() is actually just an alias for detect(). So I replaced 'find' with 'detect' and PRESTO: with all the eager loading set up, I dropped the number of db queries on the Evil Page from over 1200 to under 150! (And I'm still not done--I think I can cut it a lot more.)
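
The swap itself is tiny--the names here are made up, but this is the shape of it:

recipe.ingredients.find { |i| i.name == name }    # association proxy magic -- this was still hitting the db for me
recipe.ingredients.detect { |i| i.name == name }  # plain Enumerable -- searches the loaded array in memory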

So the moral of the story? I'm not sure. I don't know if this is a bug or a feature in AR, or just a side effect of my design, but I wanted to put this out there in case anybody else is having a similar issue. Do some playing around and see if using eager loading and Ruby's Array detect() can cut down your db queries. (Note: the query_reviewer plug-in is an essential tool for checking your db usage. Not only will it give you the total query count, it will show you duplicate queries and give you the stack trace of exactly where the problem is! It also shows when indices are missing or not being used, which may just be because MySQL decided there wasn't enough data yet to bother.) A related note about Array find_all(): according to this thread, it is also hijacked by AR, and the equivalent thing to do is use select() instead. But in my case, whether I use find_all() or select(), the effect is the same. Not sure why this seems to work differently than find()/detect(). Anyway, just be aware of this sort of thing and play around. Hopefully this isn't some bug that future versions of Rails will fix, breaking my app in the process. ;)

A similar thing that may help cut down db queries is using Array length() (or size()) instead of count() to get the number of records in a collection. This is great if you've already loaded the records, especially if the count is being done in a loop, which was my case. I was able to shave off a lot of SQL COUNT queries this way.
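
The difference, roughly, as I understand it (names made up again):

recipe.ingredients.count    # fires a SELECT COUNT(*) every time you call it
recipe.ingredients.length   # loads the association (if it isn't already) and counts in memory
recipe.ingredients.size     # uses the loaded array when available, otherwise falls back to SQL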

An aside about using :include: I guess Rails (pre 2.0?) used to do a big Cartesian join when you did includes, which I can imagine was pretty horrible. The head-scratching behavior I saw following my AR model objects in the debugger is probably a side effect of the fix for that, which is to load each association with its own separate (and simple) db query instead of one big join. Looking at the number of SQL queries, this looks very efficient, and like a very clever solution. I'm betting there was a lot of work to make this happen so transparently--my hat's off to the Rails team for this and all their other hard work!

Sunday, June 7, 2009

hpricot and Ruby 1.8.7

So the problems from my Ruby 1.8.7 upgrade aren't over yet! None of the Rails scripts (including script/server) will run from the terminal on my MacBook because of an hpricot incompatibility. I'd been ignoring it because everything still seems to work from RubyMine--I run the app there, and can do other things like script/generate. But I decided to try to fix it anyway.

Found a page that said yes indeed, hpricot and Ruby 1.8.7 don't get along, so I pulled down the git fork and tried to build and install it, but had build problems (it was trying to build for an old Darwin target, I think). So I tried pulling down and building Why's git repo per the instructions. Sure enough, that built and installed a new hpricot version--apparently the gem is just not up to date.

Some notes: to build and install hpricot you'll need ragel, which can be installed via MacPorts, and fast_xs, which is available as a gem.
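
The whole dance, roughly--the repo URL and the rake task are from memory and may have changed, so check the project's README:

sudo port install ragel
sudo gem install fast_xs
git clone git://github.com/why/hpricot.git
cd hpricot
rake install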