Damian Lettie - Thoughts On Software Development
http://www.lettie.id.au

Trustworthy Backups
Sun, 17 May 2015 17:54:00 +1000
http://www.lettie.id.au/blog/2015/05/trustworthy-backups/

Several years ago, one of my hard disks failed. Being the cautious type, I already had a backup routine in place, and therefore only had to suffer the small inconvenience of restoring from a recent backup rather than losing anything important.

At the time, I was looking up some instructions on the web and ended up reading several stories from people who’d been carefully making backups, thinking they were doing the right thing, only to find out at the worst possible time that their backups were incomplete, or couldn’t be recovered at all.

These days, my routine includes an extra step: occasionally verifying what’s in my backups, and that they can be successfully restored.
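That verification step lends itself to a little automation. As a rough sketch (not a substitute for an actual restore test, and the tree-walking approach is my own, not tied to any particular backup tool), a script can walk the original directory and a restored copy, comparing checksums:

```python
import hashlib
from pathlib import Path


def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


def compare_trees(source: Path, restored: Path) -> list[str]:
    """Report files that are missing from, or differ in, the restored copy."""
    problems = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = restored / src.relative_to(source)
        if not dst.is_file():
            problems.append(f"missing: {dst}")
        elif file_digest(src) != file_digest(dst):
            problems.append(f"differs: {dst}")
    return problems
```

An empty result from `compare_trees` doesn’t prove the backup is perfect, but a non-empty one is exactly the kind of early warning I wish I’d had.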

On the Mac side of things, I’ve always been happy with Time Machine. When we got a Windows machine at home, File History seemed like the logical equivalent. Which was fine, until I saw this in Event Viewer today:

File was not backed up due to its full path exceeding MAX_PATH limit or containing unsupported characters:
C:\Users\xxxxx\Pictures\xxxxx\xxxxx\xxxxx.JPG

If you want it to be protected, try using different directory and file names.

It’s equal parts amusing and frustrating that a first-party Microsoft product which didn’t exist until 2012 still suffers from the decades-old maximum path length limitation. In my case though, it was the “unsupported characters” part that turned out to be the problem. There was a Japanese symbol in some of the directory names that Windows Explorer was happy to use but File History didn’t like. None of the files in those directories were being backed up.
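If you want to hunt for other at-risk files before File History silently skips them, a script can flag candidates. File History’s exact rules aren’t documented as far as I can tell, so this sketch just flags paths at or over the classic 260-character MAX_PATH limit, or containing non-ASCII characters, and treats the result as a list for manual review (feed it strings like `str(p) for p in Path(root).rglob("*")`):

```python
MAX_PATH = 260  # classic Windows limit, counting the drive letter and terminator


def risky_paths(paths: list[str]) -> list[str]:
    """Flag paths that exceed MAX_PATH or contain non-ASCII characters.

    This is a heuristic: File History's actual rules aren't documented,
    so treat the output as candidates for review, not a definitive answer.
    """
    return [p for p in paths if len(p) >= MAX_PATH or not p.isascii()]
```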

I never saw the desktop notification that Microsoft claims is shown if File History detects a problem. I’m just glad I found the problem while nosing around in Control Panel before we lost any of those photos to a hard disk failure.

If you’re possibly in the same boat, the Event Viewer log to check is:

Applications and Services Logs
   Microsoft
      Windows
         FileHistory-Engine
            File History backup log

I guess I’ll have to start making partition backups more often. And for file-based backups, I’ll be giving Duplicati a go. (It’s a shame that CrashPlan doesn’t support backing up to network shares.)

Interaction Between iOS and OS X Devices
Tue, 22 Apr 2014 08:21:00 +1000
http://www.lettie.id.au/blog/2014/04/interaction-between-ios-and-osx-devices/

Apple already have computers, smart-phones, tablets, and a living room media player on the market. If the media player grows more interactive features, and if a wearable device (a watch, perhaps?) is thrown into the mix, then it becomes even more important that all these devices interact well with each other. And by “well”, I mean in the Apple of old “It just works” way.

Notifications

Enable push notifications on a device to find out quickly when something has happened. But enable push notifications on a second device, to find out when something has happened even if you’re not near the first device, and now every notification has to be dismissed twice. These devices need to communicate with each other. Don’t show me a notification on device A if I’ve already dismissed it on device B.

AirDrop

iOS AirDrop can transfer data easily between two iOS devices. OS X AirDrop can do the same between two Macs. But it’s not yet possible to transfer from iOS to OS X, or vice versa.

Mobile devices as peripherals

There are already many examples of this happening:

  • Apple’s Remote for controlling iTunes or Apple TV.
  • Apple’s AirPlay for audio / video streaming.
  • Third-party application-specific controller apps such as CTRL+Console and touchAble.
  • Third-party client / server apps for using an iPad as a Mac’s remote keyboard and trackpad.
  • etc.

Still, I can’t help but feel there’s a lot of untapped (no pun intended) potential here - like the direction Microsoft headed in with Xbox SmartGlass. What if OS X had native support for pairing with a mobile device as an input peripheral? What if the iOS and OS X SDKs included this functionality in an abstracted, easy-to-use way? Official, out-of-the-box support would drive inclusion in a larger variety of third-party apps.

Phone features on non-phone devices

This is an extension of the device-as-peripheral idea.

I might not have my phone by my side when a phone call or SMS message arrives, but I might be in front of a second device which has a display, microphone, speakers, (virtual) keyboard, etc. - everything it needs to handle that call or message except the mobile phone service. If those two devices are connected to each other by Bluetooth or WiFi then couldn’t you treat the phone as a proxy, or the second device as a set of wireless peripherals, and let me take the phone call or reply to the SMS message without dragging myself over to the actual phone?

Device proximity

Assuming that Apple does release a wearable device this year, and assuming it pairs with other devices using Bluetooth, I’d like to see some features developed that are triggered by Bluetooth proximity events. Automatically lock a Mac if the logged-in user’s watch moves too far away, or don’t require a PIN to unlock an iPad if the owner’s watch is nearby, for example.

Django on OpenShift: Beginner Miscellanea
Sun, 06 Apr 2014 08:24:00 +1000
http://www.lettie.id.au/blog/2014/04/django-on-openshift-beginner-miscellanea/

I’ve been working on a Django application lately, hosted on Red Hat’s PaaS, OpenShift. While getting started, I made notes about some areas that I thought weren’t well documented, or just weren’t obvious to a beginner. Below is an expanded version of those notes. Hopefully they’ll be of use to someone.

SQLite and the Admin User Record

The openshift/django-example repository uses SQLite as the default database engine. To summarise what happens when you deploy (git push) to OpenShift for the first time when using SQLite: On the server, the initial database file is copied from your git repository (app-root/runtime/repo/wsgi/openshift/sqlite3.db) to the application’s data directory (app-root/data/sqlite3.db). It then runs app-root/runtime/repo/.openshift/action_hooks/secure_db.py to set a random default admin password. Rather than use Django functionality for this, the script manually hashes the password and runs an SQL UPDATE to modify the admin user’s record directly.
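For illustration (this is my own sketch, not the actual secure_db.py code), the pre-1.4 scheme stores passwords as sha1$&lt;salt&gt;$&lt;hash&gt;, where the hash is the SHA-1 digest of the salt concatenated with the raw password. Hashing a password and writing it straight into the auth_user table looks roughly like this:

```python
import hashlib
import secrets
import sqlite3


def sha1_password(raw_password: str) -> str:
    """Hash a password in the pre-Django-1.4 'sha1$salt$hash' format."""
    salt = secrets.token_hex(4)
    digest = hashlib.sha1((salt + raw_password).encode()).hexdigest()
    return f"sha1${salt}${digest}"


def set_admin_password(db_path: str, raw_password: str) -> None:
    """Write the hashed password directly into auth_user, bypassing the ORM."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "UPDATE auth_user SET password = ? WHERE username = 'admin'",
        (sha1_password(raw_password),),
    )
    conn.commit()
    conn.close()
```

The salt length here is arbitrary; the important part is the `sha1$salt$hash` layout that Django’s older hashers expect.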

Unfortunately, the initial database file provided in OpenShift’s repository, and the above-mentioned password creation code, appear to have been created for a version of Django prior to 1.4. The Django 1.4 documentation says:

Django 1.4 introduces a new flexible password storage system and uses PBKDF2 by default. Previous versions of Django used SHA1, and other algorithms couldn’t be chosen.

In most cases this is fine, as Django is backwards-compatible with the format used by OpenShift. The only time it caused a problem for me was when I deleted the SQLite database file and recreated it from scratch, then tried to force the server to use that new file. I was no longer able to log in to the admin interface on the server.

The simple workaround here is: Don’t delete the database file.

Resetting the Admin Password

One of the misguided reasons why I tried to delete the database file was this: the README.md documents how to retrieve the default admin password that’s set on the server during deployment, but the local copy of the database file isn’t updated with that password, and I couldn’t find any documentation about what password the original database file contained. There was no data in my database yet, so I figured it would be easy to just delete the database file and use syncdb to recreate it, specifying my admin password of choice.

Of course, a saner person would have just checked the Django documentation and found that resetting a password is a trivial one-liner:

./manage.py changepassword admin

Database Migration

The other misguided reason why I tried to delete the database file was that I’d made some changes to the models after deploying the application, and there was still no important data in the database at the time, so I thought it would be easier to create a new database than to learn how to use South for database migration.

The only clue that I had at the time about how to use South on OpenShift was a terse mention in a post by Nate Aune about adding the migrate command to .openshift/action_hooks/deploy.

I did some more research later, and concluded that:

  • migrate should be called after syncdb, to ensure that the South database tables have been created first.
  • syncdb should not be skipped on the first deployment (which is what the default deploy script does), for the same reason.

With those two points in mind, I was able to get migrations working by changing my .openshift/action_hooks/deploy to look like this:

# --- snip ---

if [ ! -f "$OPENSHIFT_DATA_DIR/sqlite3.db" ]
then
	# Copy database file...
fi

echo "Executing 'python $OPENSHIFT_REPO_DIR/wsgi/openshift/manage.py syncdb --noinput'"
python "$OPENSHIFT_REPO_DIR"wsgi/openshift/manage.py syncdb --noinput

echo "Executing 'python $OPENSHIFT_REPO_DIR/wsgi/openshift/manage.py migrate --noinput'"
python "$OPENSHIFT_REPO_DIR"wsgi/openshift/manage.py migrate --noinput

# --- snip ---

Download sqlite3.db Using rhc

Once my server did have some useful data in the database, I found it was sometimes useful to download a copy locally, for testing. The fact that I’m using SQLite means that the entire database is stored in a single file, so it was welcome news from Red Hat when they added an scp command to the command-line OpenShift management tool, rhc.

To download the SQLite database file from the server:

rhc scp <appname> download . app-root/data/sqlite3.db

Upgrading Django

OpenShift’s django-example installs Django 1.4. It’s possible to upgrade to Django 1.6, but changing the install_requires value in setup.py alone is not enough. I submitted a pull request with the full set of changes required to use Django 1.6, based on the work of @suhailvs. However, you’ll need to merge it into your local repository as it hasn’t been merged into the master branch yet.

Update (31 Mar 2015): The latest OpenShift newsletter says the Django QuickStart has been updated to v1.7.7.

Treat Every Release As If It’s The Last
Tue, 14 May 2013 12:18:28 +1000
http://www.lettie.id.au/blog/2013/05/treat-every-release-as-if-its-the-last/

Your manager breaks the news to you - there’s been a change of direction. You and the rest of your team are needed for a great new project. Unfortunately, that means the current product you’ve been pouring so much effort into is going to be shelved. The product will still be available to customers, and existing support plans will continue, but starting next month all development work will be frozen. You have one month left to add new features, fix annoying bugs, or find some other way to add value. After that, you won’t ever be allowed to touch the code again. What are you going to spend that last month working on?

When looking at a long back-log of feature requests and issues for a product and trying to decide what to work on next, we need to estimate the value of each. As a bit of an experiment, I’m going to try a new way to get that process started:

Treat every release as if it’s the last.

Don’t take that too literally. For example, you’d have to be crazy to implement a major new feature in the twilight days of a project, and we don’t want to rule out working on new features. I’m sure there are many such holes in this idea. Regardless, my theory is that it can still help us focus on what’s important.

What’s that one feature that’s been a glaring omission from your product but keeps getting shoved into the too hard basket? What’s that one bug that’s been driving customers nuts but no-one’s ever been able to track down? Wouldn’t you want to implement / fix it this month, before the code repository is frozen in carbonite?

And then repeat the process next month?

Replacing Google Reader’s Sync API
Thu, 14 Mar 2013 23:42:50 +1100
http://www.lettie.id.au/blog/2013/03/replacing-google-readers-sync-api/

Google is retiring Google Reader. Fortunately, there’s no shortage of alternative feed readers: Desktop and mobile applications, cloud-based and self-hosted browser-based solutions. But when Google Reader shuts down on July 1st, we won’t just be losing a feed reader, we’ll also be losing a synchronisation API.

The unofficial API that Google Reader provides is accessed by many third-party applications. It’s what makes it possible for someone to start reading articles at home using a native application on their desktop computer, and continue where they left off using a hand-held device on their way to work. It synchronises which feeds you’re subscribed to, which articles you’ve read, which are your favourites, and so on.

Google Reader’s API was free (as in beer), allowed effectively unlimited usage, and was well understood. It became something of a de-facto standard. I tried dozens of different feed reader applications when I bought my first smart-phone, and while many didn’t have synchronisation, the majority of the ones that did relied on Google’s service to do so. When Google Reader stops working, so will the synchronisation features of those applications. That’s the reason behind Nick Bradbury’s decision to stop development of FeedDemon. Other developers will also be making the tough decision: To throw in the towel or scramble to implement an alternative before the deadline.

Brent Simmons saw all of this coming, way back in Oct 2011. In a great post about all this, called Google Reader and Mac/iOS RSS readers that sync, he said:

Smart developers are rightly uncomfortable relying on undocumented, unofficial, unsupported APIs.

Feedly have stepped up to the plate with an offer of assistance to stranded developers: A clone of the Google Reader API. It sounds like an easy transition, and must be a tempting thought for many developers out there. Then again, perhaps developers are asking themselves how long it will be before Feedly decides that offering this service isn’t profitable enough and starts driving prices up, shuts it down completely, or even just tweaks it in ways that break stuff.

Developers are a resourceful bunch though. No doubt there’s already someone hacking away on their own open source, reverse-engineered implementation of the Google Reader API server. Once ready, developers will be able to host their own synchronisation servers. Assuming most of the servers follow the de-facto standard, users of feed reading applications will also have a choice of which synchronisation server to use. That sounds like a good result for all.

But developers: Before you start on that, there’s one more Brent Simmons post I’d like you to read. It’s called Why “Just Store the App Data on Dropbox” won’t work for RSS readers, and it has some great ideas about how synchronisation could be implemented. And while you’re thinking about how to implement it, remember this: Google’s announcement has put a lot of developers in the same situation as you. There must be others out there thinking about the same problem. Perhaps even ready to collaborate. Perhaps even going so far as to think about progressing from a proprietary de-facto standard to a full-blown open standard.

What better result could you ask for from the death of a popular proprietary implementation than the birth of a timeless, open standard?

Great Micro-Interactions Through Minimum Viable Features
Wed, 13 Feb 2013 08:09:05 +1100
http://www.lettie.id.au/blog/2013/02/great-micro-interactions-through-minimum-viable-features/

On Don’t Make Crap, Chris Truman wrote:

Each micro-interaction increases or decreases the chance of faithful users. Make every interaction great from the button press to the backend reliability and you will have users who love your product. Don’t ever underestimate the benefit of paying attention to the micro-interactions.

Sounds like good advice, but to what stages of software development does it apply?

Minimum Viable Product

If your product is still just a fledgling idea and you’re not even sure if it’s ever going to be launched, micro-interactions should be the last thing on your mind. This is the time to remember: Perfect is the enemy of good. Focus on the big picture, because if you don’t get that right then no-one’s going to care how polished your implementation is.

Mature Product

On the other hand, if your product already has a well established audience, or is on its way there quickly, then yes, micro-interactions should be a priority. Strong sales and / or growth is a good sign that the amount of functionality in your product has hit a sweet spot. You can afford to focus on making people happier by improving their micro-interactions.

But on the front page of Don’t Make Crap, Chris appeals to us to “put quality first”, and that means starting on those micro-interactions before you reach feature complete.

Everything In-Between

If quality needs to take a back seat in the early days, and needs to play a central role in the later days, at what point in-between should the focus start shifting? What happens while you’re actively developing new features? One method might be to start concentrating on the details as soon as the MVP has proven your concept viable. Another method might acknowledge how rare it is for ideas to appear already fully formed, and allow a grace period after the MVP for concepts to be prototyped and refined without religious attention to detail, then shift into quality mode. What both these methods have in common is the assumption that anything you intend to use in the shipping product must pass the quality criteria before being made public.

In programming, there’s the concept of premature optimisation: Tweaking a bit of code to try and make it run more efficiently before thinking about whether you’re using the right data structure or algorithm, or perhaps even before asking yourself if the time spent optimising this bit of code might have been better spent working on something else. If we can talk about prematurely optimising for speed or size, can we also talk about prematurely optimising for quality? What if you invest heavily in polishing a new feature, only to find that customers didn’t want the feature in the first place?

Minimum Viable Feature

Prototyping isn’t only for whole products; we can prototype a single feature. Likewise, the Minimum Viable Product concept can be brought down to the small scale too: The Minimum Viable Feature. A bare-bones implementation of a single feature, put into the hands of our customers as early as possible, to get feedback as early as possible.

Of course, with the MVF mindset, ideas have to be tested early and tested often. Features that don’t make the grade have to be dropped or rethought as soon as possible. There’s no time for worrying about the micro-interactions here. At least, not until after a feature has proven itself worthy. Does that mean showing some semi-complete, totally unpolished features to your customers in the meantime? Sure. But it doesn’t have to be to all of your customers. Make it opt-in, make sure they know what they’re getting themselves in for, and make sure you keep open channels of communication with the ones who take up the offer.

Conclusion

Even if we acknowledge the importance of paying attention to micro-interactions, that doesn’t mean there has to be a single point in the development cycle where a master switch is flicked, after which nothing less than perfection is accepted. Think about exposing some of your customers to the bleeding edge, one small feature at a time, and use their feedback to discard or perfect each feature in turn. You could have the best of both worlds: Making an end product that has great micro-interactions, without the risk of wasting time perfecting things that no-one will ever use.
