After the recent upgrade to OS X Lion, I installed the latest release of
Xcode (version 4.2.1 when I’m writing these words).
I used to download big disk image files from the Apple Developer
site, but this time, given that I'm not a member of any Apple Developer
Program, I had to turn to the App Store… which downloaded Install
Xcode.app. Confident enough, I launched it, but the interface was too
simple and sleek, with no way to tweak the installation process.
Digging a bit into the application bundle, it turned out that the old
interface is still there: the new installer just wraps the meta package
and the product packages, so opening Xcode.mpkg, hidden in the
Contents/Resources folder, brought up the usual interface, which allows
one to deselect some optional components.
Doing it from the terminal is actually much, much easier:
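Something along these lines should do (a sketch; the path inside the bundle may differ between Xcode releases):

```shell
# Open the hidden meta package to get the classic installer UI back...
open "/Applications/Install Xcode.app/Contents/Resources/Xcode.mpkg"

# ...or skip the UI entirely and install from the terminal
# (installer(8) is the stock OS X command-line installer).
sudo installer -pkg "/Applications/Install Xcode.app/Contents/Resources/Xcode.mpkg" -target /
```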
Recently I had to upgrade to OS X Lion; given that I didn't want to lose all
the backups performed since my first days in OS X's world, back in April 2008,
I spent a while devising a plan that would allow me to move a Time Machine
disk across.
I know, I know: had I used Migration Assistant, there probably would have
been no issues at all, but I moved things manually and took the time to clean
up a few things. Anyway, moving such a disk is not difficult, really; it
just takes a lot of time.
What?!?!?
Before going on, however, it's better to clarify what I mean by
“move”: I don't mean moving the machine directory from a smaller disk to
a larger one (another easy operation: fire up Disk Utility and “restore” the
smaller disk onto the larger one; you'll end up with a “clone”, just bigger),
I mean using the same machine directory with a different machine, keeping the
history and list of every backup already performed.
With Lion, there's no need to fiddle with ACLs, extended attributes, MAC
addresses and so on, thanks to the addition of tmutil; nevertheless, there
will certainly be some trouble due to “insufficient” Spotlight indexes,
which will slow down backups if we don't improve the situation (would you
like it if your backups lasted hours and your logs filled up with tons of
Waiting for index to be ready (100)? No, I guess the answer is no).
Please note that the following operations are potentially destructive, will
prevent you from using the backup disk with the old Mac again, and should not
be performed if you're not comfortable with the terminal or don't know what
you're doing. Moreover, in no event shall I be liable for any claim, damages
or other liability. If you accept that the risks are all yours, go on.
How?!?!?
Let’s assume the following:
- the backup disk is called MyBackupDisk;
- the Mac is called MyMac;
- the old Mac's disk is called MyOldDisk.
So, without further ado:
- turn off automatic backups (sudo tmutil disable);
- connect the backup disk;
- associate that disk with the Mac:
- “reset” Spotlight’s indexes:
- reindexing everything will take a lot of time, so you’ll have to wait
patiently;
- turn on automatic backups (sudo tmutil enable).
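Put together, the whole procedure might look like this (a sketch: the volume names are the placeholders above, and the `associatedisk` invocation follows Lion's tmutil man page — double-check the backup path on your disk before running anything):

```shell
# stop automatic backups while we fiddle with the disk
sudo tmutil disable

# tell Time Machine that this Mac's startup volume corresponds to the
# old machine's backed-up disk (this rewrites the association on disk)
sudo tmutil associatedisk -a / \
  "/Volumes/MyBackupDisk/Backups.backupdb/MyMac/Latest/MyOldDisk"

# erase and rebuild Spotlight's indexes on the backup volume
sudo mdutil -E /Volumes/MyBackupDisk

# once indexing has finished, re-enable automatic backups
sudo tmutil enable
```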
With brand new Spotlight indexes, Time Machine won't have any problem
quickly backing up your machine, and all your old backups will be at your
disposal.
Before ending, let me repeat: I have personally executed this procedure
successfully, but it involves direct manipulation of your backups, so take
your time to think about it before doing anything and be ready to take the
consequences.
I like static site
generators: you need fewer resources to host
your sites (even one of EC2's almost useless micro instances could be
enough) and you can move them around more easily, even to other hosting
services, with very little fuss. Obviously, there are some inherent
limitations, but some of them can be overcome easily by using external
services, while others cannot be reliably defeated without help from the
hosting platform.
An example that intrigued me recently: your site is translated into multiple
languages and users should be offered the “best” language.
What’s the “best” language? The official one spoken in the users’ countries or
the system/browser language chosen by the users themselves?
In the first case, we should try to locate the users via some geolocation
mechanism, but is it really the right way to guess the users’ languages? I
mean, I live in Italy but for many years (and to some extent still these days)
I haven’t used Italian as the primary language of my computers; a good friend
of mine lives, at the moment, in a German-speaking country but I don’t think
he’s using his system in German. No, I think geolocation should not be used to
guess the users’ languages, but it can be useful if your site’s contents
are more tailored to countries than languages.
What’s left? Asking the users/browsers. Enter the Accept-Language header.
Good browsers send requests with an Accept-Language header listing the
preferred languages, each with an optional priority value, and usually
ordered according to these priorities. Users can modify this list at the
system or browser level (setting the system language is usually enough), so
assuming it reflects the languages our users would like to use is a safer
bet, I think.
Forgetting (unportable? unreliable? impossible?) attempts to get this list via
JavaScript, we must rely on the web server because it’s the only “active” part
of our system. My preferred one is Nginx, at the moment;
to have it parse the Accept-Language header we must use a third-party module,
the Nginx Accept Language
module (adding it is
not really difficult and thanks to a smart guy I’ve learnt a couple of ways to
automate the process).
The module’s doc already shows how to use the detected language value but here
are my two cents:
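Something like the following (a sketch: the `set_from_accept_language` directive comes from the module, while the location layout and the language list are my assumptions):

```nginx
server {
    listen 80;
    root /var/www/mysite;

    # $lang becomes the first supported language found in the
    # Accept-Language header, falling back to the first one listed
    set_from_accept_language $lang en it;

    # generic, "language-less" requests are redirected to the
    # language the browser asked for...
    location = / {
        return 302 /$lang/;
    }

    # ...while correctly namespaced requests are served as-is,
    # without forcing another language upon the user
    location ~ ^/(en|it)/ {
        try_files $uri $uri/ =404;
    }
}
```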
In this way, every generic, “language-less” request will have a chance to lead
users to a translated resource, but, assuming internal links are correctly
namespaced, requests for already translated resources won’t force another
language upon our users.
So, with a little help from the web server we can try to guess users’
preferred languages; we’re left with problems like how to organize the static
site with minimal duplication and how to reduce the effort to keep the
different languages in sync, but here Nginx cannot help us :-)
I’ve recently started to use nanoc more often;
I still like Jekyll, but it’s better to have more
options in one’s toolbox and I’ll probably try other generators in the not
so distant future.
Just today version
3.2 has been released, which allows you to create custom
commands! Super! Anyway, for the kind of “transformation” I needed, the
well-known filter mechanism was the essential ingredient, and it was already
available about a month ago, when I needed it :-)
I had an image gallery and wanted to remove every Exif tag from every photo.
It’s a menial task when you have the right tool, i.e. the wonderful
ExifTool, but I didn’t want to do it manually, and I wanted to keep the
original files untouched.
So, here’s what I came up with:
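It was along these lines (a sketch of a nanoc 3 binary filter wrapping ExifTool; the filter identifier and class name are assumptions, and exiftool must be on the PATH):

```ruby
# Build the exiftool invocation: "-all=" drops every metadata tag,
# while "-o" writes the cleaned copy to a new file, leaving the
# original untouched.
def exif_strip_command(source, destination)
  ['exiftool', '-all=', '-o', destination, source]
end

if defined?(Nanoc3::Filter)
  # A binary filter: nanoc hands us the source path and expects the
  # result at output_filename.
  class StripExif < Nanoc3::Filter
    identifier :strip_exif
    type       :binary

    def run(filename, params = {})
      system(*exif_strip_command(filename, output_filename))
    end
  end
end
```

Declaring `filter :strip_exif` in the compilation rule for the gallery items is then all it takes.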
It’s a really, really simple filter… I dare say it’s rather stupid :-)
and using it is a no-brainer.
Today marks the first month into my new job at
Shopify.
What a month! Not a single day has been tiresome: while from the outside
Shopify could seem a slowly moving entity, behind the scenes there’s a
constant flux of improvements, fixes and tweaks to the public-facing
application and a growing army of internal projects and tools.
It’s a wonderful business to work for.
I’ve been so lucky that Alex, Dale and Cody decided to hire me, and I’m so
happy to work with so many bright people that during the last month I woke up
at least a couple of times thinking I had dreamed it all up :-D
So, cucumber-rails 0.5.0 has been released today and, among other changes,
it deals the final blow to Webrat. Now the tortuosities I wrote
about some months
ago are no longer needed… but may lightning strike me right now if I’ve
been able to use it right out of the box!
To fix the all too frequent “undefined method `visit’” error message I had to
add a couple of require statements:
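In my case the file looked more or less like this (treat the exact requires as an assumption; they depend on the gems in your Gemfile):

```ruby
# features/support/capybara.rb
# Capybara's Rails integration plus its Cucumber DSL, which is what
# defines `visit` (and friends) inside step definitions.
require 'capybara/rails'
require 'capybara/cucumber'
```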
Dropped that file in the support directory and everything turned
green.
Here is a really simple extension for Sinatra to
further ease the use of Rack::Csrf
with this beautiful micro-framework/DSL. It’s totally untested, written on
the spot, and published as is.
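A minimal sketch of what I mean (the module and method names are mine; registering it only adds a class-level helper that mounts Rack::Csrf):

```ruby
require 'sinatra/base'
require 'rack/csrf'

module Sinatra
  # Registering this extension adds apply_csrf_protection, which
  # simply mounts Rack::Csrf with whatever options you pass in.
  module CsrfProtection
    def apply_csrf_protection(options = {})
      use Rack::Csrf, options
    end
  end

  register CsrfProtection
end
```

A classic-style app would then just call `apply_csrf_protection :raise => true` (or pass any other Rack::Csrf option) near the top of the file.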
Activation is delayed until apply_csrf_protection is called, to allow
passing options to Rack::Csrf.
If you use sinatra-contrib, then
you should not use Rack::Csrf.