The Future Is Now

Review: Undertale Vinyl Soundtrack

I got my copy of the Undertale vinyl soundtrack in the mail the other day. While I’ve seen a lot of excitement for it, I haven’t seen many people talk about what it’s like, so I decided to write up my impressions.

First, to start with the good news: the art design is beautiful, and gorgeously printed. The red-on-black front cover is stylish, and the mostly-new pixel artwork is very well done. The gatefold opens to a house party scene with a ton of the game’s NPCs, which is adorable and fits the mood of the game really well. Most of the printing is matte, with a few details in glossy ink that really stand out—the Undertale logo on the cover, and the coloured lights in the party scene.

Unfortunately, the quality of the sleeve itself is subpar. The cardboard feels thin and flimsy compared to other double albums I own; it lacks weight. The gatefold hinge is also poorly folded, and doesn’t stay closed on its own when the records are in; it flops open awkwardly. The records also don’t slide comfortably into the sleeve when the gatefold is closed, which makes putting them back after a play more awkward than it has to be. (See right: the record is inserted as far as it gets when the gatefold is closed. The black thing poking out is the inner record sleeve.) If you instead open the sleeve to insert the records all the way, then they can’t be removed while the gatefold is closed, which is even more awkward. The overall feeling is surprisingly cheap for a $35 album.

The records themselves are also of mixed quality. The transparent red and blue coloured vinyl is classy, and the simple label design is attractive as well. It’s a minor detail, but the inner record sleeves are very attractive and easy to get the discs in and out of.

Unfortunately, my copy shipped with large scratches on sides A and B. (See the photo on the left—the scratches are very visible at full size.) Both discs were also covered in dust fresh out of the package. Everything I’ve seen suggests this isn’t an isolated issue—I know other people whose iam8bit records shipped with scratches before the first play, and from scanning their Twitter feed, it looks like this is a common complaint from other customers too. Needless to say, this is a huge issue, and I’m shocked it’s as common a problem as it is.

Given those defects, I was surprised that the discs sound great. The mastering quality is very good, and leaves very little to complain about if you’re lucky enough to get undamaged discs. I did notice a mastering error that clips off the first note of Snowy on side one, but that was the only discernible flaw.

Given just how huge the Undertale soundtrack is, the entire thing was never going to fit on two discs. The vinyl release curates a selection of tracks to fit the available running time. They generally made a good set of calls, hitting all the major notes and popular moments, though by necessity your favourite deep cut is probably missing. Given the time constraints, I don’t see much to complain about in the selection. Still, pushing the soundtrack to three discs would probably have been a good call, even if that would have pushed up the price.

Despite the good sound quality, it doesn’t feel like iam8bit actually has the experience it takes to produce the premium package Undertale was meant to be. It’s pretty clear they have big ambitions, and I hope they learn from their mistakes so that their quality eventually lives up to them, but they’re not there yet. The things they’re good at—art and graphic design—are offset by poor physical design and unreliable QC. I can’t really recommend this as-is, especially with the high chance of getting a dud.

Aside from the main album, preorders came with a special dogsong single. This one-sided 7" single has an extended version of dogsong, and is pressed on transparent vinyl with an image of the annoying dog on the other side. It’s very cute, and for what it is it’s well-produced and pretty-looking.

Elegance

A few months ago, I wrote decoders for the PCM formats used by the Sega CD and Sega Saturn versions of the game Lunar: Eternal Blue1. I wanted to write up a few notes on the format of the Sega CD version’s uncompressed PCM format, and some interesting lessons I learned from it.

All files in this format run at a sample rate of 16282Hz. This unusual number is derived from the base clock of the PCM chip. Similarly, samples are 8-bit signed rather than 16-bit because that’s the Sega CD PCM chip’s native format.

Each file is divided into a series of fixed-size frames: the header constitutes the first frame, and each group of samples constitutes a subsequent frame. Frames are 2048 bytes—chosen because this is the exact size of a Mode 1 CD-ROM sector, and hence the most convenient chunk of data that can be read from a disc when streaming. This is also why the header takes up 2048 bytes, even though the actual data stored within it uses fewer than 16 bytes.
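
As a concrete sketch of how a decoder consumes this layout (the names and structure here are mine, not from the game’s code):

#include <stdio.h>
#include <stdint.h>

#define FRAME_SIZE  2048   /* one Mode 1 CD-ROM sector */
#define SAMPLE_RATE 16282  /* derived from the PCM chip's base clock */

/* Read the next frame; the first frame read from a file is the header,
 * and every subsequent frame is sample data. */
int read_frame(FILE *f, uint8_t frame[FRAME_SIZE])
{
    return fread(frame, 1, FRAME_SIZE, f) == FRAME_SIZE;
}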

Each frame of content contains 1024 samples. Because each sample takes up one byte in the original format, only half of each frame is used for content; the actual content is interleaved with 0 bytes. Despite seeming like a pointless, inefficient use of space, this serves a purpose: it allows for oversampling. The PCM chip’s datasheet lists a maximum frequency of 19800Hz, but this trick allows for pseudo-32564Hz playback.
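
Continuing the sketch above, unpacking one content frame looks something like this. Note that whether the sample or its padding byte comes first within each pair is my assumption:

#define SAMPLES_PER_FRAME 1024

/* Extract the 1024 real 8-bit signed samples from a 2048-byte content
 * frame, skipping the interleaved padding bytes. */
void unpack_frame(const uint8_t frame[FRAME_SIZE],
                  int8_t samples[SAMPLES_PER_FRAME])
{
    for (int i = 0; i < SAMPLES_PER_FRAME; i++)
        samples[i] = (int8_t)frame[i * 2];
}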

The format used in Lunar: Eternal Blue supports stereo playback, though only two songs are actually encoded in stereo2. Songs encoded in stereo interleave their content not every sample, as in most PCM formats, but every frame; one 2048-byte frame will contain 1024 left samples, the next frame will contain the matching 1024 right samples, and so forth. This was likely chosen because the PCM chip expects samples for only one output channel at a time, and doesn’t itself implement stereo support.
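
De-interleaving a stereo stream therefore means pairing up whole frames rather than alternating samples; a sketch, reusing unpack_frame and the constants from above:

#include <stddef.h>

/* Split a run of stereo content frames into left and right channels.
 * Left frames come first in each pair, per the format description. */
void split_stereo(const uint8_t *frames, size_t frame_count,
                  int8_t *left, int8_t *right)
{
    for (size_t i = 0; i + 1 < frame_count; i += 2) {
        unpack_frame(&frames[i * FRAME_SIZE], left);
        unpack_frame(&frames[(i + 1) * FRAME_SIZE], right);
        left  += SAMPLES_PER_FRAME;
        right += SAMPLES_PER_FRAME;
    }
}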

The loop format is highly idiosyncratic. Loop starts are measured in bytes from the beginning of the header—that is, as a raw position to seek to. The loop end, however, uses a different unit: it’s measured in terms of the sample at which the loop ends, not the byte. Because one sample is stored for every two bytes, and because the left and right samples of a stereo pair are counted as a single sample, this count differs from the byte position at the end of the song by a factor of either 2 or 4. This is a quirk of how the PCM playback routine works: it’s more efficient to track the number of samples played than the number of bytes played, so storing the loop end in that unit means no extra math has to be performed to determine whether the end of a loop has been reached. Similarly, treating a left/right pair as a single sample was likely the simplest way for the PCM playback code to handle this condition.
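
To make the arithmetic concrete, here’s the conversion this implies, as a sketch with names of my own invention:

/* Convert the stored loop-end sample count into a byte count: each
 * sample occupies two bytes on disc (sample plus padding byte), and a
 * stereo left/right pair is counted as a single sample, so the factor
 * is 2 for mono and 4 for stereo. */
long loop_end_in_bytes(long loop_end_samples, int channels)
{
    return loop_end_samples * 2 * channels;
}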

Looking at these PCM files gave me a new appreciation for design, and for how important it is to understand the reasons behind design decisions. In a lot of ways this format feels like it should be “bad” design: it’s strange, it’s idiosyncratic, it’s internally inconsistent. But every single detail serves a purpose. Each of these idiosyncrasies was carefully chosen to solve a particular problem, or to ensure peak performance at a particular bottleneck. Given the restrictions of a 16-bit game console, all of these choices were necessary to support constantly streaming audio like this in the first place. Aligning every detail of a format or API with the job that needs to be done is its own kind of elegance—a kind of elegance I understand a little better now.


  1. I referenced a couple of open-source decoders for the Sega CD version’s format when writing my own (foo_lunar2 by kode54, and vgmstream), along with some notes given to me by kode54.

  2. The Pentagulia theme, and the sunken tower.

Flat Whites in Vancouver

This post is completely off-topic. Please indulge me.

I recently spent three lovely weeks in Melbourne, Australia. I’m a big coffee fan, and Melbourne is one of the best coffee cities in the world, so I spent a lot of time in cafes acquainting myself with the native coffee—specifically the flat white. Now that I’m back in Vancouver, I find myself craving flat whites nostalgically; sipping a flat white in a nice place reminds me of the wonderful time I spent there. (That nostalgia may have something to do with spending so many of those days out with a particular girl.) Finding a good flat white locally is hard, though, and while there isn’t a truly authentic one anywhere in the city, there are a few good places I keep going back to.

There are three things that define a good flat white: milk volume, milk texture, and espresso. The right combination isn’t always easy to find on this side of the Pacific.

Milk

Volume

In my experience in Melbourne, most flat whites are served in roughly 190mL (6.5oz) cups; this is smaller than the average North American small latte (235mL, 8oz), and larger than the New Zealand flat white (160mL, 5.5oz). The relatively low milk volume helps more of the espresso taste come through, which I find very nice.

Some Canadian coffeeshops I’ve been to serve their flat whites in much smaller cups—smaller than both North American lattes and Australian flat whites, presumably intended to be closer to the size of a New Zealand flat white. Or perhaps they’ve simply heard that flat whites are served smaller and, having only too-small and too-large cups to choose from, chose too-small.

Purely as a personal preference, I’ll take a bit too much milk over a bit too little, though 190mL as I had in Melbourne feels just right.

Foam

Flat whites are made using microfoam, a very fine, smoothly-textured, velvety foamed milk that gives the coffee the perfect texture. A good flat white also retains the crema from the espresso, merging the milk with the crema at the surface of the drink.

The espresso

Given the lower milk volume, it’s important for the coffee not to be overpowering. An espresso that’s too earthy or too bitter can ruin the drink for me; I like the flavour to be strong but not overly sharp. According to Wikipedia, the kiwi flat white is usually made with a ristretto shot, and I find that can help cut the bitterness of certain beans; I don’t feel it’s a requirement for a good one, however. That said, a few of the Vancouver shops I’ve been to served me overly earthy, bitter flat whites that didn’t work for me. I make my own flat whites at home using ristretto shots.

The rankings

  1. Old Crow (New Westminster)

    Old Crow serves their flat white in an 8oz cup, though they’ll also do it in a 5oz glass on request. (I find it works better in the 8oz cup, personally.) They steam the milk very nicely, producing a good microfoam. Old Crow uses Bows X Arrows’s St. 66 beans, which is one of my favourite espresso roasts—pleasant, low bitterness, wonderful flavour with good acidity. At my request they started making flat whites using ristretto shots, which works beautifully with these beans. The result is a velvety, smooth coffee with a lovely natural sweetness. If they had 190mL cups to serve this in instead of 8oz, this could easily be mistaken for a Melbournian flat white.

  2. Continental Coffee (Commercial Drive)

Continental also produces a very nice flat white. It comes in an 8oz cup, with decently-foamed milk and an excellent ristretto shot. The foam was a bit thick compared to Old Crow’s and Prado’s, but it was delicious.

  3. Prado (Commercial Drive)

    Just down the street from Continental, Prado also produces a nice flat white. The milk is textured beautifully, a bit better than Continental’s, though I found the coffee a bit too sharp in comparison.

  4. Milano (Gastown)

Milano made their flat white with a very earthy coffee, which overpowered everything else for me—even given the higher milk volume (8oz). This wasn’t a bad drink, but it wasn’t great. Next time I’d try ordering it with a different espresso to see if that’s any better, or ask them to do a ristretto shot.

  5. Nelson the Seagull (Gastown)

Nelson’s flat white suffers from basically the same problem as Milano’s, though it’s magnified by being served in a smaller cup. I also tried it a second time with their house almond milk; it has too much of a flavour of its own, and ended up competing with the coffee.

  6. Revolver (Gastown)

    Revolver serves their flat white in a small 5oz (150mL) glass, which from what I’ve read is presumably closer to the kiwi style. The milk is steamed well, and the espresso is excellent—not that I’d expect anything less from Revolver.

    This doesn’t resemble anything I had in straya, but it is tasty. I’m only rating this low for failing to rekindle my nostalgia; it’s delicious taken on its own.

  7. Delany’s (West End)

    Delany’s smallest size is a 12oz (355mL)—nearly twice the size of an Australian flat white. The huge quantity of milk makes it hard to think of this as being a flat white at all.

  8. Bump N Grind

    Bump N Grind is one of my favourite shops so I had high hopes, but I was very disappointed in their flat white.

Like Revolver, their flat white runs noticeably smaller than an Australian flat white; Bump N Grind serves theirs in a cappuccino cup. The coffee is nice, but the milk was steamed poorly and the foam at the surface was far too thick—not as thick as a cappuccino’s, but on the thick side even for a latte, much less a flat white. The result was okay, but I wouldn’t call it a flat white at all. It’s basically a cappuccino with less foam.

Release: Asuka 120% Limit Over English Translation

Asuka 120% Limited was a 1997 fighting game for the Sega Saturn, the final1 game in a long-running series. The Asuka 120% games were always underdog favourites of mine; despite looking like throwaway anime fan-pandering, they were surprisingly deep, innovative games with unique mechanics that paved the way for later, more famous games like Guilty Gear and X-Men vs Street Fighter.

In 1999, an unofficial mod titled Asuka 120% Limit Over was released. Rumoured to have been made by the original developers, Limit Over is a surprisingly polished update with many refinements and new gameplay mechanics. It went ignored by the English internet for many years, until the patch was discovered in 2007 by Lost Levels forum posters; it’s now also circulating as a prepatched ISO.

Even though there isn’t much text, Limit Over is hard to play without knowing Japanese, so I’ve prepared a translation patch.

The patch

Asuka 120% Limit Over English patch, version 1.0

This patch is compatible with the final release of Limit Over for the Saturn2. In order to use it, you need to have already built or obtained a disc image of Limit Over. The patch comes in two forms: a PPF disc image patch, or individual patches for each file on the disc that was changed. Detailed installation instructions are included in the ZIP file.

For more on what’s been translated, and how it was done, read on.

Graphics

Limit Over contains very little text, almost all of it in English. Unfortunately, though, one critical thing is in Japanese: the character names. Since the barebones menus have no graphics, not even character portraits, it’s very difficult to actually play without knowing Japanese. Have any idea who you’re picking in the screenshot below? I don’t…

A few other minor bits of text are stored in Japanese: the “N characters beaten” text in ranking and deathmatch modes, and the round start and round end graphics. Their meaning is obvious without being able to read them, however, so I decided to leave them alone.

Finding the tiles

Like most games of this era, Asuka 120%’s graphics are stored as sets of fixed-size tiles with a fixed, non-true-colour palette. Since these are generally stored as raw pixel data without any form of header, it can be tricky to figure out where the tiles are and how they’re stored; fortunately, there are many good tile editors available that simplify the task.

I used a free Windows tile editor called Crystal Tile 2, pictured above, which has some very useful features, including presets for a large number of common tile formats, support for arbitrary tile sizes, and the ability to import and export tiles as PNG. Through trial and error, and with help from information gleaned from the Yabause emulator’s debug mode, pictured right, I was able to locate three copies of the character name tiles in the TTLDAT.SSP3, V_GAGE.SSP and VS_SPR.SSP files. The latter two files store the in-battle user interface and the menus, respectively.

Each character name is a 72x24 tile4 stored at 4 bits per pixel and, fortunately, Crystal Tile’s “N64/MD 4bpp” preset supports them perfectly. After configuring the palette in Crystal Tile, I exported every name to a set of 13 PNG files, like the tile to the left.
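
Unpacking linear 4bpp tile data like this is straightforward. Here’s a sketch in C; the two-pixels-per-byte, high-nibble-first ordering matches the Mega Drive convention, but I haven’t verified the nibble order against this game, so treat it as an assumption:

#include <stdint.h>

/* Unpack 4bpp tile data (two palette indices per byte) into one index
 * per byte. For a 72x24 name tile, src is 72 * 24 / 2 = 864 bytes. */
void unpack_4bpp(const uint8_t *src, uint8_t *dst, int width, int height)
{
    for (int i = 0; i < (width * height) / 2; i++) {
        dst[i * 2]     = src[i] >> 4;    /* first pixel: high nibble */
        dst[i * 2 + 1] = src[i] & 0x0F;  /* second pixel: low nibble */
    }
}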

Editing

I redrew the text using a heavily-edited version of a font lovingly stolen from the Neo-Geo game Pochi & Nyaa. Compared to other fonts I looked at, it had the advantage of being both attractive and variable-width—important, since the English names (which run at least twice as many characters as the original Japanese) were very hard to fit within 72 pixels. I also expanded the size of the characters compared to the original.

I briefly experimented with a thin variation of the character names for use in the game’s menus, but abandoned it after determining that legibility was poor; the menus are 480i, and the flicker inherent in an interlaced image on a CRT (or the softness of a deinterlaced one) made the thinner lines too hard to read. To the right are the thick and thin variations of the main character’s name.

Text

The rest of the game’s menu text is stored as ASCII strings in the main executable, and is completely in English. I did, however, make several changes to the character names displayed during loading screens. The one American character’s name, Cathy, was misromanized as “Cachy”. This is an easy mistake to make, since her name was rendered in Japanese as “きゃしい” (Kyashii)5. I also changed the romanization of several characters' names from Kunrei-shiki (Sinobu, Tetuko, Genitiro) to the more familiar Hepburn (Shinobu, Tetsuko, Genichirou).

What’s new in Limit Over?

It’s been many years since I’ve played Limited, so this list is based on my imperfect memory.

  • Every character has been rebalanced, and every single one of the core game mechanics has been refined.
  • Every character now has three strengths of normals and special moves instead of two.
  • A dodge button has been added, allowing characters to sidestep out of attacks.
  • Many characters have new special or super moves.

  1. The original developer, Fill-in-Cafe, went bankrupt after Limited was released in 1997, but a mediocre sequel and PC port were released in 1999 by another company.

  2. The only version readily available on the internet is dated “12/31”; I’ve heard there were earlier versions, but I’ve never seen them.

  3. This file is probably unused in Limit Over, since there is no graphical title screen.

  4. Except the tile for Genichirou, which is so long it’s allocated two tiles.

  5. Cathy’s name is rendered in Hiragana despite being a foreign name; it would more normally be rendered as “キャシー”.

Attack the Vector

One downside of working with ancient OSs is coming across bugs that will never be fixed.

In the early days of Tigerbrew, I had just started experimenting with GCC 4.2 as a replacement for Xcode 2.5’s GCC 4.0. Everything was going great until I built something that uses Cocoa, which blew up with this message:

In file included from /System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/DriverServices.h:32,
                 from /System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/CarbonCore.h:125,
                 from /System/Library/Frameworks/CoreServices.framework/Headers/CoreServices.h:21,
                 from test.c:2:
/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/MachineExceptions.h:115: error: expected specifier-qualifier-list before ‘vector’

A syntax error coming not from within the package I was building, but from within… Carbon?

The error actually comes from the Vector128 union within MachineExceptions.h, located in CoreServices’s CarbonCore framework. The very first member of this union is the offending code. Let’s play spot the bug:

Here’s what it looks like as of OS X 10.4.11:

typedef struct FPUInformationPowerPC    FPUInformationPowerPC;
union Vector128 {
#ifdef __VEC__
 vector unsigned int         v;
#endif
  unsigned long       l[4];
  unsigned short      s[8];
  unsigned char       c[16];
};

And here it is in OS X 10.7.51:

typedef struct FPUInformationPowerPC    FPUInformationPowerPC;
union Vector128 {
#ifdef __APPLE_ALTIVEC__
   vector unsigned int         v;
#endif
  unsigned long       l[4];
  unsigned short      s[8];
  unsigned char       c[16];
};

The bug comes from the use of the vector keyword. As pointed out in this MacPorts ticket, the issue is with the #ifdef that checks for __VEC__: __VEC__ is defined whenever AltiVec is on, but that doesn’t necessarily mean that AltiVec’s syntactic extensions are enabled. The vector keyword is only available if altivec.h is included, or if the -mpim-altivec or -faltivec flags are passed. Since Tigerbrew always optimizes for the host CPU, G4 and G5 users were getting AltiVec enabled without the syntactic extensions being forced on. I fixed this in Tigerbrew by always passing -faltivec on PowerPC systems when building any package, regardless of whether AltiVec is being used.
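
A minimal reproduction looks something like the following; this is a sketch of the failure mode described above, not code from any real package:

/* With __VEC__ defined but AltiVec's syntactic extensions disabled,
 * this declaration fails to parse just like the Carbon header did;
 * the `vector` keyword only exists once altivec.h is included or
 * -mpim-altivec or -faltivec is passed. */
#ifdef __VEC__
vector unsigned int v;
#endif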

As to why Apple never caught this: GCC 4.0’s behaviour is different, and appears to enable AltiVec’s syntactic extensions whenever -maltivec is on. Apple did eventually fix the bug, as seen in the Lion header above. According to the MacPorts ticket linked previously, it was fixed in the CarbonCore header in 10.5 and in the 10.4u SDK packaged with Xcode 3 for Leopard. Since the 10.4u SDK fix was never backported to Tiger itself, Tiger users have to make do with the workaround.


  1. 10.7 may be Intel-only, but the relevant code’s still there. In fact, the same PowerPC unions and structs are still there as of 10.9.4.

Reevaluate

My least favourite backtrace is a backtrace that doesn’t include my own code.

Tigerbrew on OS X 10.4 uses Ruby 1.8.2, which shipped on Christmas Day, 2004, and has more than its fair share of interesting bugs. In today’s lesson we break Ruby’s stdlib class OpenStruct.

OpenStruct is a simple data structure that provides a JavaScript object-like interface to Ruby hashes. It’s essentially a hash that provides getter and setter methods for each defined attribute. For example:

os = OpenStruct.new
os.key = 'value'
os.key #=> 'value'

Homebrew uses OpenStruct instances in place of hashes in code which only performs reading and writing of attributes, without using any other hash features. For example, in the deps command, OpenStruct is used for read-only access to a set of attributes read from ARGV:

mode = OpenStruct.new(
  :installed?  => ARGV.include?('--installed'),
  :tree?       => ARGV.include?('--tree'),
  :all?        => ARGV.include?('--all'),
  :topo_order? => ARGV.include?('-n'),
  :union?      => ARGV.include?('--union')
)

if mode.installed? && mode.tree?
  # ...

The first time I ran brew deps in Tigerbrew, however, I was greeted with this lovely backtrace:

SyntaxError: (eval):3:in `instance_eval': compile error
(eval):3: syntax error
        def topo_order?=(x); @table[:topo_order?] = x; end
                       ^
(eval):3: syntax error
    from /usr/lib/ruby/1.8/ostruct.rb:72:in `instance_eval'
    from /usr/lib/ruby/1.8/ostruct.rb:72:in `instance_eval'
    from /usr/lib/ruby/1.8/ostruct.rb:72:in `new_ostruct_member'
    from /usr/lib/ruby/1.8/ostruct.rb:51:in `initialize'
    from /usr/lib/ruby/1.8/ostruct.rb:49:in `each'
    from /usr/lib/ruby/1.8/ostruct.rb:49:in `initialize'

Given that the backtrace includes only stdlib code and nothing I wrote, I wasn’t sure how to interpret this until I saw “(eval)”. It couldn’t be, could it…? Of course it was.

Accessors for attributes of OpenStruct instances are methods, and they are defined by OpenStruct a) whenever a new attribute is assigned, and b) when OpenStruct is initialized with a hash. This is achieved using the method OpenStruct#new_ostruct_member1, which was defined like this in Ruby 1.8.2:

def new_ostruct_member(name)
  unless self.respond_to?(name)
    self.instance_eval %{
      def #{name}; @table[:#{name}]; end
      def #{name}=(x); @table[:#{name}] = x; end
    }
  end
end

Yes: OpenStruct dynamically defines methods by interpolating the attribute name into a string and evaluating that string in the context of the object. Unsurprisingly, this is very fragile. In our example, the attributes being defined end with a question mark; #installed? is a valid method name in Ruby, but #installed?= is not, and so a SyntaxError exception is raised inside eval.

This was eventually fixed2; Ruby 2.2.2’s definition uses #define_singleton_method instead. Methods defined via metaprogramming aren’t limited to the normal naming restrictions, so the unusual setters are defined properly3.

def new_ostruct_member(name)
  name = name.to_sym
  unless respond_to?(name)
    define_singleton_method(name) { @table[name] }
    define_singleton_method("#{name}=") { |x| modifiable[name] = x }
  end
  name
end

Thankfully, the definition of the method from modern versions of Ruby is fully compatible with Ruby 1.8.2, so Tigerbrew ships with a backported version of OpenStruct#new_ostruct_member.


  1. This sounds like it should be a private method, and is documented as being “used internally”, but for some reason this was a public instance method right up until Ruby 2.0.

  2. Close to a year after Ruby 1.8.2 was released.

  3. These illegal method names can’t be called using the normal syntax, but they can be called via metaprogramming using the #send instance method, e.g. os.send "foo?=", "baz"

Widescreen Gaming in the 90s

Most people got their first taste of widescreen gaming with the Wii, Xbox 360, and PS3, but not a lot of people know that companies were experimenting with widescreen all the way back in the fifth console generation (PS1, Saturn, N64). A tiny number of games have full widescreen support, which looks great on modern widescreen TVs.

Anamorphic widescreen

(Skip to the next section if you just care about pretty pictures!)

Since there isn’t a widescreen resolution in the SDTV standards, all widescreen games used a technique called anamorphic widescreen1. In anamorphic widescreen, the game squeezes down a 16:9 scene into 4:3; the TV then stretches the 4:3 image back out to 16:9 for display. For example, take a look at this image from Christmas Nights:

You can see that the proportions on everything are too thin—it’s very noticeable on Claris. Here’s what it looks like stretched out:

This shows you a lot more of the game world than you’d get in the standard 4:3 mode, but all of the 2D elements in the scene are displayed at the wrong aspect ratio. This lack of aspect ratio correction for 2D elements is common to most widescreen games of the era; in Nights, for example, the interface elements and all of the 2D in-game elements (the tree, the stars in the upper left corner) are stretched when playing in widescreen. This hurts some games more than others.2
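
If you want the squeeze in numbers, it’s simple arithmetic (a toy sketch, names mine):

#include <stdio.h>

int main(void)
{
    /* A 16:9 scene drawn into a 4:3 frame is squeezed horizontally to
     * (4/3) / (16/9) = 0.75 of its width; the TV stretches it back out
     * by the inverse, 4/3. */
    double squeeze = (4.0 / 3.0) / (16.0 / 9.0);
    printf("squeeze: %.2f, stretch: %.2f\n", squeeze, 1.0 / squeeze);
    return 0;
}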

Examples

Christmas Nights

Both Christmas Nights and Nights support native widescreen. This is probably one of the most famous widescreen games of the era.

Nights benefits enormously from a wider field of view, even though it uses a lot of sprites. Being able to see more of where you’re going makes for a much better game.

Panzer Dragoon Zwei

Like Nights, Panzer Dragoon Zwei keeps a similar field of view in either mode, with more content displayed on the sides.

Baroque

This is an obscure game, but Baroque’s spare, gritty low-poly nightmarish landscapes are some of the most beautiful and haunting I’ve ever seen. It reminds me a lot of Lilith’s dreamscapes, like Oneiric Gardens and Crypt Worlds.

Baroque makes heavy use of sprites; all of the game’s NPCs and enemies are sprites in a 3D space. Unfortunately, that makes it look worse in widescreen than the other games I’ve written about.

Virtua Fighter (32X)

This game’s almost entirely 3D, so it scales very well; the only major 2D element is the background.

More

A more complete list of widescreen games of this generation is available on the Sega-16 forums.


  1. This is the same technique used by widescreen DVDs and Wii games.

  2. I’ve focused on Sega games for this post; I haven’t checked to see whether PS1 or N64 games have the same aspect ratio issues with sprites.

Homebrew GCC Changes Coming

Big GCC changes are a-coming to Homebrew, which will make building your own software against Homebrew-provided GCCs much more reliable. There’s going to be a transition period, though, and any software built against GCC will need to be rebuilt against the new package to work. We’ll be pushing the changes on December 12th, 2014, and this post is here to help you get ready for it!

(This only affects software built using Homebrew’s GCC. Any software built using Clang, which is the compiler that Apple ships, will be unaffected. If you don’t know what this means: you’re probably fine.)

The problem

Since Apple provides many of the same libraries as GCC, Homebrew installs GCC to a nonstandard location in order to avoid shadowing those libraries. Homebrew currently installs GCC using the --enable-version-specific-runtime-libs option to sandbox its libraries and headers, which installs libraries into versioned locations like so:

/usr/local/lib/gcc/x86_64-apple-darwin13.4.0/4.9.2/libgfortran.3.dylib

Since the full version of GCC is embedded—including the minor version—along with the OS version, every minor release is installed to a new location; this breaks any software which has dynamically linked against a previous GCC version’s copies of these libraries.

What’s changing

The new GCC package we are shipping will install GCC libraries to a path containing only the series version of GCC. For example, libgfortran will now be installed to:

/usr/local/lib/gcc/4.9/libgfortran.3.dylib

This has several advantages:

  • New releases of GCC 4.9 will be installed to the same path, so software built using GCC 4.9.2 will work with software built using GCC 4.9.3.
  • The same changes will be applied to the gcc49 formula in the homebrew/versions repository, allowing gcc49 to provide the 4.9 libraries when gcc is eventually upgraded to 5.0.

What you need to do

If you’re a user

If you have built any software using the Homebrew-installed GCC, you will need to reinstall that software once the package is updated on the 12th.

If you provide binary packages built using the Homebrew-installed GCC

If you provide binary packages that were built using the Homebrew-installed GCC, you should rebuild them using the new formula and have them ready for your users on the 12th.

If you maintain Homebrew formulae that use GCC/GFortran

If you maintain Homebrew formulae that build using GCC or GFortran, you should consider bumping their revisions on the 12th to ensure that users rebuild them against the new GCC package.

The tl;dr version

On December 12, 2014, we will push a new GCC package that changes the install location of libraries. Any software you’ve built using the old package (for instance, C++ or Fortran software) will no longer work and will need to be reinstalled. If you build packages for distribution using Homebrew’s GCC package, make sure you’ve built new versions using our new package and have them ready to distribute at that date.

Thank You, Ada

Bess Sadler, Andromeda Yelton, Chris Bourg and Mark Matienzo have stepped forward to match up to $5120 in donations to the Ada Initiative, a non-profit organization that supports the participation of women in open source and culture. I completely support this very generous act; the Ada Initiative does incredibly important work, and I’m extremely proud of my friends and of the library community for supporting them.

I’ve written before about how I stopped pursuing a career in tech in my late teens. I saw few female (or trans) role models in the tech industry; at a time when my self-image and self-identity were at their most fragile, I pivoted away from something I saw as too masculine, without room for me. The Ada Initiative’s conferences and advocacy work have done a lot to help make the open tech world a more welcoming space.

There are a lot of reasons why women don’t enter, or don’t stay, in the tech industry. The last few weeks, when harassment campaigns have targeted women to drive them out of the video game industry, have made me reflect on how important it is to work to make online communities and conferences safe spaces.

The Ada Initiative’s conference policy advocacy and their example anti-harassment policy have been instrumental in helping many organizations and projects adopt policies of their own. Both the Code4lib anti-harassment policy and the Homebrew code of conduct, for example, were inspired by and partially based on the Ada Initiative’s work. Seeing organizations adopt these policies has done a lot to make me feel comfortable, and has given me confidence that they see both preventing and dealing with these forms of harassment as important. My hope is that future generations of women will feel comfortable entering and interacting in these spaces in ways that others may not have in the past.

In just a few years, the Ada Initiative has helped make sure that these policies are becoming the norm and not the exception for conferences and online communities. I’m so grateful we have their advocacy; please consider donating to help them do even more great things.

Mind the Dust

Please excuse the sparseness! I’m in the process of migrating from Wordpress to Octopress; I haven’t had time to change the default theme or migrate my older posts.