The Future Is Now

Release: Asuka 120% Limit Over English Translation

Asuka 120% Limited was a 1997 fighting game for the Sega Saturn, the final1 game in a long-running series. The Asuka 120% games were always underdog favourites of mine; despite looking like throwaway anime fan-pandering, they were surprisingly deep, innovative games with unique mechanics that paved the way for later, more famous games like Guilty Gear and X-Men vs Street Fighter.

In 1999, an unofficial mod titled Asuka 120% Limit Over was released. Rumoured to have been made by the original developers, Limit Over is a surprisingly polished update with many refinements and new gameplay mechanics. It went ignored by the English internet for many years, until the patch was discovered in 2007 by Lost Levels forum posters; it’s now also circulating as a prepatched ISO.

Even though there isn’t much text, Limit Over is hard to play without knowing Japanese, so I’ve prepared a translation patch.

The patch

Asuka 120% Limit Over English patch, version 1.0

This patch is compatible with the final release of Limit Over for the Saturn2. In order to use it, you need to have already built or obtained a disc image of Limit Over. The patch includes two options: a PPF disc image patch, or individual patches for each of the changed files on the disc. Detailed installation instructions are included in the ZIP file.

For more on what’s been translated, and how it was done, read on.

Graphics

Limit Over contains very little text, almost all of it in English. Unfortunately, though, one critical thing is in Japanese: the character names. Since the barebones menus have no graphics, not even character portraits, it’s very difficult to actually play without knowing Japanese. Have any idea who you’re picking in the screenshot below? I don’t…

A few other minor bits of text are stored in Japanese: the “N characters beaten” text in ranking and deathmatch modes, and the round start and round end graphics. Their meaning is obvious without being able to read them, however, so I decided to leave them alone.

Finding the tiles

Like most games of this era, Asuka 120%’s graphics are stored as sets of fixed-size tiles using a fixed, indexed (non-true-colour) palette. Since these are generally stored as raw pixel data without any form of header, it can be tricky to figure out where the tiles are and how they’re stored; fortunately, however, there are many good tile editors available that simplify the task.

I used a free Windows tile editor called Crystal Tile 2, pictured above, which has some very useful features, including presets for a large number of common tile formats, support for arbitrary tile size, and the ability to import and export tiles to PNG. Via trial and error, and with help from information gleaned via the Yabause emulator’s debug mode, pictured right, I was able to locate three copies of the character name tiles in the TTLDAT.SSP3, V_GAGE.SSP and VS_SPR.SSP files. The latter two files are used to store the in-battle user interface and the menus, respectively.

Each character name is a 4 bits per pixel 72x24 tile4 and, fortunately, Crystal Tile’s “N64/MD 4bpp” preset supports them perfectly. After configuring the palette in Crystal Tile I exported every name to a set of 13 PNG files, like the tile to the left.
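If you’re curious what a tile editor is actually doing under the hood, unpacking raw 4bpp data is simple. Here’s a minimal Ruby sketch — not the tool I used (Crystal Tile handled all of this), just an illustration of the format, assuming linear packing with the high nibble first:

```ruby
# Sketch: unpack raw 4bpp tile data into rows of palette indices.
# Assumes linear packing, high nibble first -- one byte = two pixels.
def unpack_4bpp(data, width, height)
  pixels = []
  data.each_byte do |b|
    pixels << (b >> 4) << (b & 0x0F)
  end
  pixels.first(width * height).each_slice(width).to_a
end

# Two bytes = four pixels; each value indexes the 16-colour palette.
rows = unpack_4bpp("\xAB\xCD", 4, 1)  # => [[10, 11, 12, 13]]
```

A real tile would then be rendered by looking each index up in the palette; the only tricky parts in practice are guessing the tile dimensions and packing order, which is where an editor’s format presets save a lot of trial and error.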

Editing

I redrew the text using a highly-edited version of a font lovingly stolen from the Neo-Geo game Pochi & Nyaa. Compared to other fonts I looked at, it had the advantage of being both attractive and variable-width—which is important since the English names (which take up at least twice as many characters as the original Japanese) were very hard to fit in a width of 72 pixels. I also expanded the size of the characters compared to the original.

I briefly experimented with a thin variation of the character names for use in the game’s menus, but abandoned it after determining legibility was poor; the menus are 480i, and the flicker of an interlaced image on a CRT (or the softening of a deinterlaced one) made the thinner lines harder to read than necessary. To the right are the thick and thin variations of the main character’s name.

Text

The rest of the game’s menu text is stored as ASCII strings in the main executable, and is completely in English. I did, however, make several changes to the character names displayed during loading screens. The one American character’s name, Cathy, was misromanized as “Cachy”. This is an easy mistake to make, since her name was rendered in Japanese as “きゃしい” (Kyashii)5. I also changed the romanization of several characters' names from Kunrei-shiki (Sinobu, Tetuko, Genitiro) to the more familiar Hepburn (Shinobu, Tetsuko, Genichirou).

What’s new in Limit Over?

It’s been many years since I’ve played Limited, so this list is based on my imperfect memory.

  • Every character has been rebalanced, and every single one of the core game mechanics has been refined.
  • Every character now has three strengths of normals and special moves instead of two.
  • A dodge button has been added, allowing characters to sidestep out of attacks.
  • Many characters have new special or super moves.

  1. The original developer, Fill-in-Cafe, went bankrupt after Limited was released in 1997, but a mediocre sequel and PC port were released in 1999 by another company.

  2. The only version readily available on the internet is dated “12/31”; I’ve heard there were earlier versions, but I’ve never seen them.

  3. This file is probably unused in Limit Over, since there is no graphical title screen.

  4. Except the tile for Genichirou, which is so long it’s allocated two tiles.

  5. Cathy’s name is rendered in Hiragana despite being a foreign name; it would more normally be rendered as “キャシー”.

Attack the Vector

One downside of working with ancient OSs is coming across bugs that will never be fixed.

In the early days of Tigerbrew, I had just started experimenting with GCC 4.2 as a replacement for Xcode 2.5’s GCC 4.0. Everything was going great until I built something that uses Cocoa, which blew up with this message:

In file included from /System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/DriverServices.h:32,
                 from /System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/CarbonCore.h:125,
                 from /System/Library/Frameworks/CoreServices.framework/Headers/CoreServices.h:21,
                 from test.c:2:
/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/MachineExceptions.h:115: error: expected specifier-qualifier-list before ‘vector’

A syntax error coming not from within the package I was building, but from within… Carbon?

The error actually comes from the Vector128 union within MachineExceptions.h, located in CoreServices’s CarbonCore framework. The very first member of this union contains the offending code. Let’s play spot the bug:

Here’s what it looks like as of OS X 10.4.11:

typedef struct FPUInformationPowerPC    FPUInformationPowerPC;
union Vector128 {
#ifdef __VEC__
 vector unsigned int         v;
#endif
  unsigned long       l[4];
  unsigned short      s[8];
  unsigned char       c[16];
};

And here it is in OS X 10.7.51:

typedef struct FPUInformationPowerPC    FPUInformationPowerPC;
union Vector128 {
#ifdef __APPLE_ALTIVEC__
   vector unsigned int         v;
#endif
  unsigned long       l[4];
  unsigned short      s[8];
  unsigned char       c[16];
};

The bug comes from the use of the vector keyword. As pointed out in this MacPorts ticket, the issue is with the #ifdef that checks for __VEC__. __VEC__ is defined if AltiVec is on, without necessarily meaning that AltiVec syntactic extensions are enabled. The vector keyword is only available if either altivec.h is included, or the -mpim-altivec or -faltivec flags are passed. Since Tigerbrew always optimizes for the host CPU, G4 and G5 users were getting AltiVec enabled without forcing syntactic extensions. I fixed this in Tigerbrew by always passing -faltivec on PowerPC systems when building any package, regardless of whether AltiVec is being used.

As to why Apple never caught this, GCC 4.0’s behaviour seems to be different; it seems to enable AltiVec syntactic extensions whenever -maltivec is on. Apple did eventually fix the bug, as seen in the Lion header above. According to the MacPorts ticket linked previously, it was fixed in the CarbonCore header in 10.5 and in the 10.4u SDK packaged in Xcode 3 for Leopard. Since the 10.4u SDK fix was never backported to Tiger itself, Tiger users have to make do with the workaround.


  1. 10.7 may be Intel-only, but the relevant code’s still there. In fact, the same PowerPC unions and structs are still there as of 10.9.4.

Reevaluate

My least favourite backtrace is a backtrace that doesn’t include my own code.

Tigerbrew on OS X 10.4 uses Ruby 1.8.2, which shipped on Christmas Day, 2004, and has more than its fair share of interesting bugs. In today’s lesson we break Ruby’s stdlib class OpenStruct.

OpenStruct is a simple data structure that provides a JavaScript object-like interface to Ruby hashes. It’s essentially a hash that provides getter and setter methods for each defined attribute. For example:

os = OpenStruct.new
os.key = 'value'
os.key #=> 'value'

Homebrew uses OpenStruct instances in place of hashes in code which only performs reading and writing of attributes, without using any other hash features. For example, in the deps command, OpenStruct is used for read-only access to a set of attributes read from ARGV:

mode = OpenStruct.new(
  :installed?  => ARGV.include?('--installed'),
  :tree?       => ARGV.include?('--tree'),
  :all?        => ARGV.include?('--all'),
  :topo_order? => ARGV.include?('-n'),
  :union?      => ARGV.include?('--union')
)

if mode.installed? && mode.tree?
  # ...

The first time I ran brew deps in Tigerbrew, however, I was greeted with this lovely backtrace:

SyntaxError: (eval):3:in `instance_eval': compile error
(eval):3: syntax error
        def topo_order?=(x); @table[:topo_order?] = x; end
                       ^
(eval):3: syntax error
    from /usr/lib/ruby/1.8/ostruct.rb:72:in `instance_eval'
    from /usr/lib/ruby/1.8/ostruct.rb:72:in `instance_eval'
    from /usr/lib/ruby/1.8/ostruct.rb:72:in `new_ostruct_member'
    from /usr/lib/ruby/1.8/ostruct.rb:51:in `initialize'
    from /usr/lib/ruby/1.8/ostruct.rb:49:in `each'
    from /usr/lib/ruby/1.8/ostruct.rb:49:in `initialize'

Given that the backtrace includes only stdlib code and nothing I wrote, I wasn’t sure how to interpret this until I saw “(eval)”. It couldn’t be, could it…? Of course it was.

Accessors for attributes of OpenStruct instances are methods, and they are defined by OpenStruct a) whenever a new attribute is assigned, and b) when OpenStruct is initialized with a hash. This is achieved using the method OpenStruct#new_ostruct_member1, which was defined like this in Ruby 1.8.2:

def new_ostruct_member(name)
  unless self.respond_to?(name)
    self.instance_eval %{
      def #{name}; @table[:#{name}]; end
      def #{name}=(x); @table[:#{name}] = x; end
    }
  end
end

Yes: OpenStruct dynamically defines method names by interpolating the name of the variable into a string and evaluating the string in the context of the object. Unsurprisingly, this is very fragile. In our example, the attributes being defined end with a question mark; #installed? is a valid method name in Ruby, but #installed?= is not, and so a SyntaxError exception is raised inside eval.

This was eventually fixed2; Ruby 2.2.2’s definition uses #define_singleton_method instead. Methods defined via metaprogramming aren’t limited to the normal naming restrictions, so the unusual setters are defined properly3.

def new_ostruct_member(name)
  name = name.to_sym
  unless respond_to?(name)
    define_singleton_method(name) { @table[name] }
    define_singleton_method("#{name}=") { |x| modifiable[name] = x }
  end
  name
end

Thankfully, the definition of the method from modern versions of Ruby is fully compatible with Ruby 1.8.2, so Tigerbrew ships with a backported version of OpenStruct#new_ostruct_member.
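The difference between the two approaches is easy to demonstrate on a modern Ruby (the attribute name here is hypothetical):

```ruby
obj = Object.new

# String eval has to parse the name as literal Ruby syntax,
# so the illegal setter name blows up, just as in Ruby 1.8.2's OpenStruct:
begin
  obj.instance_eval "def foo?=(x); x; end"
rescue SyntaxError
  puts "eval raises SyntaxError"
end

# define_singleton_method treats the name as an opaque string,
# so the very same name is accepted without complaint:
obj.define_singleton_method("foo?=") { |x| x }
obj.send("foo?=", 42)  # callable only via #send, per footnote 3
```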


  1. This sounds like it should be a private method, and is documented as being “used internally”, but for some reason this was a public instance method right up until Ruby 2.0.

  2. Close to a year after Ruby 1.8.2 was released.

  3. These illegal method names can’t be called using the normal syntax, but they can be called via metaprogramming using the #send instance method, e.g. os.send "foo?=", "baz"

Widescreen Gaming in the 90s

Most people got their first taste of widescreen gaming with the Wii, Xbox 360, and PS3, but not a lot of people know that companies were experimenting with widescreen all the way back in the fifth console generation (PS1, Saturn, N64). A tiny number of games have full widescreen support, which looks great on modern widescreen TVs.

Anamorphic widescreen

(Skip to the next section if you just care about pretty pictures!)

Since there isn’t a widescreen resolution in the SDTV standards, all widescreen games used a technique called anamorphic widescreen1. In anamorphic widescreen, the game squeezes down a 16:9 scene into 4:3; the TV then stretches the 4:3 image back out to 16:9 for display. For example, take a look at this image from Christmas Nights:

You can see that the proportions on everything are too thin—it’s very noticeable on Claris. Here’s what it looks like stretched out:

This shows you a lot more of the game world than you’d get in the standard 4:3 mode, but you can see that all of the 2D elements in the scene are displayed with the wrong aspect ratio. This lack of aspect ratio correction for 2D elements is common to most widescreen games of that era. In Nights, for example, you can see that the interface elements and all 2D in-game elements (the tree, the stars in the upper left corner) are displayed at the wrong aspect ratio when playing in widescreen. This hurts some games more than others.2
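The arithmetic behind the distortion is simple enough to sketch: the horizontal squeeze is just the ratio of the two aspect ratios.

```ruby
# A 16:9 scene stored in a 4:3 frame is squeezed horizontally
# by (16/9) / (4/3) = 4/3 -- everything is drawn at 75% of its true width.
squeeze = (16.0 / 9.0) / (4.0 / 3.0)  # => 1.333...

# e.g. an object 100 pixels wide in the 16:9 scene
# occupies only 75 pixels in the stored 4:3 frame:
stored_width = (100 / squeeze).round  # => 75
```

The TV applies the inverse stretch on display, which is also why 2D art drawn directly at 4:3 proportions comes out a third too wide once stretched.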

Examples

Christmas Nights

Both Christmas Nights and Nights support native widescreen. This is probably one of the most famous widescreen games of the era.

Nights benefits enormously from a wider field of view, even though it uses a lot of sprites. Being able to see more of where you’re going makes for a much better game.

Panzer Dragoon Zwei

Like Nights, Panzer Dragoon Zwei keeps a similar field of view in either mode, with extra content displayed at the sides in widescreen.

Baroque

This is an obscure game, but Baroque’s spare, gritty low-poly nightmarish landscapes are some of the most beautiful and haunting I’ve ever seen. It reminds me a lot of Lilith’s dreamscapes, like Oneiric Gardens and Crypt Worlds.

Baroque makes heavy use of sprites; all of the game’s NPCs and enemies are sprites in a 3D space. Unfortunately, that makes it look worse in widescreen than the other games I’ve written about.

Virtua Fighter (32X)

This game’s almost entirely 3D, so it scales very well; the only major 2D element is the background.

More

A more complete list of widescreen games of this generation is available on the Sega-16 forums.


  1. This is the same technique used by widescreen DVDs and Wii games.

  2. I’ve focused on Sega games for this post; I haven’t checked to see whether PS1 or N64 games have the same aspect ratio issues with sprites.

Homebrew GCC Changes Coming

Big GCC changes are a-coming to Homebrew, which will make building your own software against Homebrew-provided GCCs much more reliable. There’s going to be a transition period, though, and any software built against GCC will need to be rebuilt against the new package to work. We’ll be pushing the changes on December 12th, 2014, and this post is here to help you get ready for it!

(This only affects software built using Homebrew’s GCC. Any software built using Clang, which is the compiler that Apple ships, will be unaffected. If you don’t know what this means: you’re probably fine.)

The problem

Since Apple provides many of the same libraries as GCC, Homebrew installs GCC to a nonstandard location in order to avoid shadowing those libraries. Homebrew currently installs GCC using the --enable-version-specific-runtime-libs option to sandbox its libraries and headers, which installs libraries into versioned locations like so:

/usr/local/lib/gcc/x86_64-apple-darwin13.4.0/4.9.2/libgfortran.3.dylib

Since the full version of GCC is embedded—including the minor version—along with the OS version, every minor release is installed to a new location; this breaks any software which has dynamically linked against a previous GCC version’s copies of these libraries.

What’s changing

The new GCC package we are shipping will install GCC libraries to a path containing only the series version of GCC. For example, libgfortran will now be installed to:

/usr/local/lib/gcc/4.9/libgfortran.3.dylib

This has several advantages:

  • New releases of GCC 4.9 will be installed to the same path, so software built using GCC 4.9.2 will work with software built using GCC 4.9.3.
  • The same changes will be applied to the gcc49 formula in the homebrew/versions repository, allowing gcc49 to provide the 4.9 libraries when gcc is eventually upgraded to 5.0.

What you need to do

If you’re a user

If you have built any software using the Homebrew-installed GCC, you will need to reinstall that software once the package is updated on the 12th.

If you provide binary packages built using the Homebrew-installed GCC

If you provide binary packages that were built using the Homebrew-installed GCC, you should rebuild them using the new formula and have them ready for your users on the 12th.

If you maintain Homebrew formulae that use GCC/GFortran

If you maintain Homebrew formulae that build using GCC or GFortran, you should consider bumping their revisions on the 12th to ensure that users rebuild them against the new GCC package.

The tl;dr version

On December 12, 2014, we will push a new GCC package that changes the install location of libraries. Any software you’ve built using the old package (for instance, C++ or Fortran software) will no longer work and will need to be reinstalled. If you build packages for distribution using Homebrew’s GCC package, make sure you’ve built new versions using our new package and have them ready to distribute at that date.

Thank You, Ada

Bess Sadler, Andromeda Yelton, Chris Bourg and Mark Matienzo have stepped forward to pledge to match up to $5120 of donations to the Ada Initiative, a non-profit organization that supports the participation of women in open technology and culture. I completely support this very generous act; the Ada Initiative does incredibly important work, and I’m extremely proud of my friends and of the library community for supporting them.

I’ve written before about how I stopped pursuing a career in tech in my late teens. I saw few female (or trans) role models in the tech industry; at a time when my self-image and self-identity were at their most fragile, I pivoted away from something I saw as too masculine, without room for me. The Ada Initiative’s conferences and advocacy work have done a lot to help make the open tech world a more welcoming space.

There are a lot of reasons why women don’t enter, or don’t stay, in the tech industry. The last few weeks, when harassment campaigns have targeted women to drive them out of the video game industry, have made me reflect on how important it is to work to make online communities and conferences safe spaces.

The Ada Initiative’s conference policy advocacy work and their example anti-harassment policy have been instrumental in helping many organizations and projects adopt their own policies. Both the Code4lib anti-harassment policy and the Homebrew code of conduct, for example, were inspired by and partially based on the Ada Initiative’s work. Seeing organizations adopt these policies has done a lot to make me feel comfortable, and given me confidence that both preventing and dealing with these forms of harassment is something that they see as important. My hope is that future generations of women will feel comfortable entering and interacting in these spaces in ways that others may not have in the past.

In just a few years, the Ada Initiative has helped make sure that these policies are becoming the norm and not the exception for conferences and online communities. I’m so grateful we have their advocacy; please consider donating to help them do even more great things.

Mind the Dust

Please excuse the sparseness! I’m in the process of migrating from Wordpress to Octopress; I haven’t had time to change the default theme or migrate over my older posts.

-no-cpp-precomp: The Compiler Flag That Time Forgot

I’m often surprised how long software can keep trying to use compatibility features ages after their best-by date.

Now that GCC 4.8 builds on Tiger1, I’ve been testing as much software with it as I can. When building ncurses using GCC 4.8.1, though, I came across a strange error message:

gcc-4.8: error: unrecognized command line option ‘-no-cpp-precomp’

It built fine with the gcc-4.0 that came with the OS. GCC rarely removes support for flags like this, so I assumed it must be an Apple-only flag. Unfortunately, it wasn’t listed in the manpage at all, and the internet was no help either – the search results were full of users confused about the same build failures, or trying to figure out what it does. All I could find was confirmation that it’s an obsolete Apple-only flag.

Not finding anything, I decided to find out straight from the horse’s mouth and try source-diving. Luckily Apple publishes the source code for all obsolete versions of their tools at their Apple Open Source site.

Recent versions of Apple GCC don’t include the flag anywhere in their source. The only place it’s still referenced is in a few configure scripts and changelogs, such as these:

# The spiffy cpp-precomp chokes on some legitimate constructs in GCC
# sources; use -no-cpp-precomp to get to GNU cpp.
# Apple's GCC has bugs in designated initializer handling, so disable
# that too.
stage1_cflags="-g -no-cpp-precomp -DHAVE_DESIGNATED_INITIALIZERS=0"

In several releases prior to that, for instance gcc-5493, the flag is explicitly mentioned as being retained for compatibility and is a no-op:

/* APPLE LOCAL cpp-precomp compatibility */
%{precomp:%ecpp-precomp not supported}%{no-cpp-precomp:}%{Wno-precomp:}\

The last time it was actually documented was in gcc-1765’s install.texi, shipped as a part of the WWDC 2004 Developer Preview of Xcode, which also provides a hint as to what the flag actually did:

It’s a good idea to use the GNU preprocessor instead of Apple’s @file{cpp-precomp} during the first stage of bootstrapping; this is automatic when doing @samp{make bootstrap}, but to do it from the toplevel objdir you will need to say @samp{make CC=’cc -no-cpp-precomp’ bootstrap}.

So this partially answers our question: Apple shipped an alternate preprocessor, and -no-cpp-precomp triggers the use of the GCC cpp instead. I can only assume this was a leftover that had yet to be excised, because the flag itself was still a no-op at that time. To actually find a version where the flag does something, we have to go all the way back to the December 2002 developer tools, whose gcc-937.2 actually has code that uses the flag. This particular build of GCC is Apple’s version of gcc-2.95, and it appears to be the very last where it had any effect. Interestingly, the #ifdef that guards this particular block of code is “#ifdef NEXT_CPP_PRECOMP” – suggesting that this dates back to NeXT, rather than Apple.

To actually find out what this means, O’Reilly’s Mac OS X for Unix Geeks, from September 2002, has a nice explanation in chapter 5:

Precompiled header files are binary files that have been generated from ordinary C header files and that have been preprocessed and parsed using cpp-precomp. When such a precompiled header is created, both macros and declarations present in the corresponding ordinary header file are sorted, resulting in a faster compile time, a reduced symbol table size, and consequently, faster lookup. Precompiled header files are given a .p extension and are produced from ordinary header files that end with a .h extension.

Chapter 4 also provides a nice explanation of why -no-cpp-precomp was desirable:

cpp-precomp is faster than cpp. However, some code may not compile with cpp-precomp. In that case, you should invoke cpp by instructing cc not to use cpp-precomp.

So there we have it – -no-cpp-precomp became somewhat widely used in Unix software as a compatibility measure to prevent Apple’s cpp-precomp feature from breaking their headers, and has stuck around more than a decade since the last time it’s actually done anything.


  1. More on that in a future blog post.

Software Archaeology: Apple’s Cctools

One of the things I’ve been working on in Tigerbrew is backporting modern Apple build tools. The latest official versions, bundled with Xcode 2.5, are simply too old to be able to build some software. (For example, the latest GCC version available is 4.0.)

In the process, I’ve found some pretty fascinating bits of history littered through the code and makefiles for Apple’s build tools. Here are some findings from Apple’s cctools1 package:

# MacOS X (the default)
#  RC_OS is set to macos (the top level makefile does this)
#  RC_CFLAGS needs -D\__KODIAK__ when RC_RELEASE is Kodiak (Public Beta),
#      to get the Public Beta directory layout.
#  RC_CFLAGS needs -D\__GONZO_BUNSEN_BEAKER__ when RC_RELEASE is Gonzo, 
#      Bunsen or Beaker to get the old directory layout. 
#  The code is #ifdef'ed with \__Mach30__ is picked up from <mach/mach.h> 
# Rhapsody 
#  RC_OS is set to teflon 
#  RC_CFLAGS needs the additional flag -D__HERA__ 
# Openstep
#  RC_OS is set to nextstep 
#  RC_CFLAGS needs the additional flag -D__OPENSTEP__ 

This comment from near the top of cctools’s Makefile lists some of the valid build targets, which includes:

  • Kodiak, which was the Mac OS X public beta from September, 2000
  • Gonzo (Developer Preview 4), Bunsen (Developer Preview 3), and Beaker (PR2)
  • Rhapsody (internal name for the OS X project as a whole), Hera (Mac OS X Server 1.0, released 1999), and teflon (unknown to me)
  • OPENSTEP, NeXT’s implementation of their own OpenStep API

From further down in the same Makefile:

ifeq "macos" "$(RC_OS)"
  TRIE := $(shell if [ "$(RC_MAJOR_RELEASE_TRAIN)" = "Tiger" ] || \
             [ "$(RC_MAJOR_RELEASE_TRAIN)" = "Leopard" ] || \
             [ "$(RC_RELEASE)" = "Puma"      ]  || \
             [ "$(RC_RELEASE)" = "Jaguar"    ]  || \
             [ "$(RC_RELEASE)" = "Panther"   ]  || \
             [ "$(RC_RELEASE)" = "MuonPrime" ]  || \
             [ "$(RC_RELEASE)" = "MuonSeed"  ]  || \
             [ "$(RC_RELEASE)" = "SUPanWheat" ] || \
             [ "$(RC_RELEASE)" = "Tiger" ]      || \
             [ "$(RC_RELEASE)" = "SUTiSoho" ]   || \
             [ "$(RC_RELEASE)" = "Leopard" ]    || \
             [ "$(RC_RELEASE)" = "Vail" ]       || \
             [ "$(RC_RELEASE)" = "SugarBowl" ]  || \
             [ "$(RC_RELEASE)" = "BigBear" ]    || \
             [ "$(RC_RELEASE)" = "Homewood" ]   || \
             [ "$(RC_RELEASE)" = "Kirkwood" ]   || \
             [ "$(RC_RELEASE)" = "Northstar" ]; then \
                echo "" ; \ 

A lot of familiar cats here, along with a couple of early iOS versions (SugarBowl, BigBear) and a lot of names I’m not familiar with. (Please leave a comment if you have any insight!) As far as I know “Vail” was the Mac LC III from 1993 with no NeXT connection, but I’m sure it must be referring to something else.

From elsewhere in the tree, there’s code to support various CPU architectures. Aside from the usual suspects (PPC, i386), there are some other interesting finds:

  • HP/PA, aka PA-RISC, a CPU family from HP; some versions of NeXTSTEP were shipped for this
  • i860, an Intel CPU used in the NeXTdimension graphics board for NeXT’s computers
  • M68000, the classic Motorola CPU family, used in the original NeXT computers
  • M88000, a Motorola CPU family; NeXT considered using this in their original hardware but never shipped a product using it
  • SPARC, a CPU family from Sun; some versions of NeXTSTEP were shipped for this

I find it fascinating that, even now, cctools still carries the (presumably unmaintained) code for all of these architectures Apple no longer uses.


  1. Apple’s equivalent of binutils.

Tiger’s `which` Is Terrible; or, Necessity Is the Mother of Invention

One of the most useful things about running software in unusual configurations is that sometimes it exposes unexpected flaws you never knew you had.

The which utility is one of those commandline niceties you never really think about until it’s not there anymore. While sometimes implemented as a shell builtin1, it’s also frequently shipped as a standalone utility. Apple’s modern version, which is part of the shell_cmds package and crystallized around Snow Leopard, works like this:

  • If the specified tool is found on the path, prints the path to the first version found (e.g., the one the shell would execute), and exits 0.
  • If the specified tool isn’t found, prints a newline and exits 1.

This version of the tool is really useful in shell scripts to determine a) if a program is present, and b) where it’s located, and until fairly recently Homebrew used it extensively. Unfortunately, early on in my work on Tigerbrew, I discovered that Tiger’s version was… deficient. It works like this:

  • If the specified tool is found on the path, prints the path to the first version found, and exits 0.
  • If the specified tool isn’t found, prints a verbose message to stdout, and exits 0.

The lack of a meaningful exit status and the error message on stdout are both pretty poor behaviour for a CLI app, and broke Homebrew’s assumptions about how it should work.

To work around this, I replaced Homebrew’s wrapper function with a two-line native Ruby method for Tigerbrew, like so:

require 'pathname'

def which cmd
  dir = ENV['PATH'].split(':').find {|p| File.executable? File.join(p, cmd)}
  Pathname.new(File.join(dir, cmd)) unless dir.nil?
end

As it turns out, not only does it work better on Tiger, but this method is actually faster2 than shelling out like Homebrew did; process spawning is relatively expensive. As a result, I ended up using the new helper in Homebrew even though it wasn’t strictly necessary.
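In use, the helper behaves like the modern which(1), returning nil rather than a bogus success. A quick sketch, restated with the require it needs so it runs standalone (the paths it finds will of course vary by system):

```ruby
require 'pathname'

# Scan each PATH entry for an executable with the given name;
# return a Pathname for the first hit, or nil if there is none.
def which cmd
  dir = ENV['PATH'].split(':').find {|p| File.executable? File.join(p, cmd)}
  Pathname.new(File.join(dir, cmd)) unless dir.nil?
end

which('sh')                  #=> a Pathname such as /bin/sh
which('not-a-real-tool-99')  #=> nil
```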

(And as for the commandline utility, Tigerbrew has a formula for the shell_cmds collection of utilities.)


  1. zsh does; bash doesn’t.

  2. On the millisecond scale, at least.