Archive for the 'Macintosh' Category

TransGaming Cider's promises: fact or fiction?

Sorry about the title, I couldn’t resist. All the unnecessary sensationalism makes me feel like a real journalist! Will your porting kit kill you?? Find out at 11 (10 Central)!

Well, theoretically it could. You could be in the middle of a Doom-a-thon when a bug in the porting kit locks up your client, and your ex-best-friend frags you.

Tragic. Just tragic.

Anyway, this newfangled product from TransGaming, Cider, has me interested. From the article over at The Mac Observer, it sounds like it uses an “emulation” approach to porting to the Mac.

It’s interesting to me because, in my experience, attempting to “emulate” one platform API with another is pretty much the worst way to port anything. The APIs of Mac and Windows aren’t a one-to-one match, and some comparable technologies have vastly different architectures. That means trying to emulate a Windows API on the Mac might prove very difficult (e.g. the Windows model might assume polling, while the Mac model might assume callbacks). It also means trying to emulate certain APIs on another platform might be slow or inefficient. And that’s bad in a game, or so I am told.
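
To make that polling-versus-callbacks mismatch concrete, here’s roughly what one of those impedance-matching shims tends to look like. This is just a minimal sketch with made-up names (nothing from Cider itself): a callback-driven native input API gets wrapped so that Windows-style game code can keep polling for events.

    #include <queue>

    struct InputEvent { int key; bool down; };

    class PollingInputShim {
    public:
        // This is what gets registered with the (hypothetical) callback-based
        // native input API; it just queues the event for later.
        static void NativeInputCallback(int key, bool down, void* context) {
            PollingInputShim* self = static_cast<PollingInputShim*>(context);
            InputEvent event = { key, down };
            self->pending_.push(event);
        }

        // This is what the Windows-style game loop calls every frame,
        // expecting to poll; it just drains the queue the callback filled.
        bool PollNextEvent(InputEvent& out) {
            if (pending_.empty())
                return false;           // nothing has happened since the last poll
            out = pending_.front();
            pending_.pop();
            return true;
        }

    private:
        // Events delivered by the callback, waiting for the game to poll.
        // (A real shim would also need a lock if the callback can fire on
        // another thread.)
        std::queue<InputEvent> pending_;
    };

Multiply that kind of adapter by every place where the two platforms’ models disagree, and you can see why these layers get thick in a hurry (more on that below).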

Although I have to admit, by limiting the porting kit to games they dodge some of the bullets fired by using the “emulation” approach. Namely, all the UI differences between Mac and Windows. Games usually have a totally custom UI that is unique to the game, not to the platform. So the porting kit doesn’t have to worry about Windows icons not looking right on the Mac, or controls that exist on one platform but not the other, or menu items being in the wrong place. It’s all custom, so it doesn’t matter.

By limiting the kit to games, it also means they can concentrate on certain APIs. From the Cider FAQ:

TransGaming’s Cider implements common multimedia Windows APIs such as Direct3D, DirectInput, DirectSound and many others by mapping them to Mac equivalents.

Games will probably focus on DirectX and graphics APIs, and probably not so much on, say, CD-burning APIs. Which means Cider’s decision to focus on multimedia APIs was a wise one, in a bang-for-the-buck sense.

Of course, if not all the APIs are implemented, what are your options if your product uses one that’s not supported? Well, you could change your Windows code to use a different, supported API. But that’s no fun, and may not even be feasible. You could also just switch directly to using Mac APIs and #ifdef your code based on platform (there’s a sketch of what that looks like below). But that means you don’t get one codebase, which is one of the things TransGaming touts. It doesn’t sound like you’ll actually get the source to Cider, so implementing the Windows API on the Mac yourself might not be an option. From the Cider FAQ:

Cider works by directly loading a Windows program into memory on an Intel-Mac and linking it to an optimized version of the Win32 APIs.

It certainly sounds like they’re handing you a DLL you link in, and you pray that they implemented all the right APIs, and in the right way. If you were depending on an obscure side effect (in a Win32 API? Never!), and they didn’t implement it, you’re kind of screwed. Or if, heaven forbid, there was a bug in their implementation. Of course, TransGaming might fix it for you, for the right price.
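
Back to that #ifdef option for a second, for anyone who hasn’t lived it. A minimal sketch of what per-platform branching looks like (the sound functions are made up for illustration; _WIN32 and __APPLE__ are the compilers’ usual predefined macros):

    #include <cstdio>

    // Hypothetical per-platform sound routines; stand-ins for whatever native
    // API each platform would really use.
    static void PlaySoundWin32(const char* name) { std::printf("DirectSound: %s\n", name); }
    static void PlaySoundMac(const char* name)   { std::printf("Core Audio: %s\n", name); }

    // The cross-platform call site: one codebase, two code paths.
    void PlayExplosionSound()
    {
    #if defined(_WIN32)
        PlaySoundWin32("explosion.wav");
    #elif defined(__APPLE__)
        PlaySoundMac("explosion.wav");
    #else
        std::printf("no sound backend for this platform\n");
    #endif
    }

    int main() {
        PlayExplosionSound();
        return 0;
    }

It works, but every one of those blocks is a little fork in your “single” codebase, which is exactly what TransGaming is selling you out of.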

If they’re handing you a black-box DLL, I wonder how big it will be. In my experience, “emulation” porting layers tend to be pretty thick as far as code goes, so it could be a large DLL. If you have the source code you could hand-optimize out the functions you don’t need, or even let the linker do that for you. I doubt TransGaming would want to be giving .NET a run for its money in terms of download size.

The other interesting thing about Cider is that it seems to be derived indirectly from WINE. From the Mac Observer article:

Cider shares the same core technology as Cedega, which has its roots in WINE but branched from that technology in 2002.

This is interesting because WINE is covered under the LGPL. I know the LGPL isn’t as strict as the regular GPL, but doesn’t it still mean someone should be opening up their source? (If memory serves, WINE was under the more permissive X11/MIT license until early 2002, and TransGaming branched off before the switch to the LGPL, which would explain how they can keep their version closed.) I also wonder about the legal implications for the ported game. Perhaps I should leave that question to the GPL experts and to the law-talking guys.

Although Cider might surprise me, it doesn’t sound like it’s all it’s cracked up to be. Emulating an API is a poor way to port an application to a platform, even for a game. I truly wonder if they can pull off the performance they say they can. I also wonder how many games will be ported as easily as they imply:

TransGaming works with the game developers and publishers to optimize the game for Intel Mac but this process takes hours to a mere few days.

That prediction only seems plausible if the game is using the APIs that Cider supports. If not, then things will take a lot longer. They also seem to be forgetting about testing. The Cider porting kit isn’t Windows, even if it’s trying real hard to be. The game code might be the same, but the OS/porting kit code isn’t. You’ll have to spend time testing on the Mac as well.

Of course, this doesn’t keep their Founder/CTO, Mr. State, from being cocky. From the Mac Observer article:

Mr. State said: “We imagine that they [the traditional porting companies] are re-evaluating their business models. Our technology does revolutionize how games are brought to the Mac, which we believe will result in a paradigm shift in the Mac game publishing landscape.”

I don’t know, Mr. State. I’ve used and maintained “emulation” porting kits before, and even with the source they’re very hard to make work, and make work well. I don’t think anyone is re-evaluating their business model until Cider is proven.

In search of search

Spotlight has to be one of the most unused technologies on my computer. It’s not that I don’t need it — I need to search for things all the time. It’s not that the idea behind Spotlight isn’t sound; it is. It makes a lot of sense to index files and make them available for a quick search.

The problem is it doesn’t work.

First, it’s slower than Christmas in Tehran on my machine. I’m not talking about the “indexing” time that I see a lot of people complaining about. I actually rarely see it indexing. However, on the off chance I try to do a search, it brings my machine to its knees, and it’s a dual 2 GHz G5 with a gig of RAM. The Finder locks up until the search is done, and sometimes the disk activity it generates is so intense my whole machine locks up temporarily. I would assume it’s just this machine, but it happens on my PowerBook and my iMac/Intel as well.

I don’t get it. It always works for Steve.

And Lord help you if you use that stupid widget in the top right of your screen. Good grief. You’d better hope that the ten results it happens to show there are what you’re looking for (and they’re not), otherwise you have to open a real Spotlight window to get all the results. That means Spotlight has to run the entire query all over again, disk thrashing and all. It’s the model of efficiency, that Spotlight.

The other thing that’s wrong with search in general is that it just tells you your search term is somewhere in a given file. There ought to be a way to then double-click on the search result and have the application open up to where the result actually is. But perhaps that’s just a distant dream.

Of course, the only reason I even tried Spotlight is because Xcode search sucks so badly. It only wants to search the current project, and even then only the files in the actual project (not included files). Sure, Apple has added all kinds of options to the Find dialog, and maybe one day they’ll add one that makes it useful. Until then, searching an arbitrary folder is one of the most painful experiences within Xcode (and there are a lot). Here’s what you have to do in Xcode:

  1. Command-Shift-F to open the Find in Files dialog.
  2. Press the “Options” button to get the Find Options dialog. (Why are these in a separate dialog?!?)
  3. So you don’t overwrite one of Apple’s precious predefined sets, press the “Add” button.
  4. Type in a name for your set and hit enter.
  5. Press the + button at the bottom and add the folder you want to search. (Also remove any leftover folders from previous sets you don’t want.)
  6. Check the “Search in files and folders” box. (Why do I have to check this? I just added a folder, isn’t it obvious what I want to do??)
  7. Uncheck “Search in open documents” and “Search in open projects.”
  8. Close the Find Options dialog and/or go back to the Find dialog.
  9. Select your new Find set from the set popup.
  10. Type in the search term(s) you want to find, and press return.

Yes, in those short ten steps, you too can search for something in an arbitrary folder!

The problem is that these stupid find sets I have to create are there to increase flexibility. Undoubtedly, the engineer who created that whole mess thought, “Think of the power and flexibility I’m giving the user! They can search for anything in any way they want! Having them save find sets means all the find options are nicely encapsulated!” But I don’t want flexibility, I want speed. And I don’t mean raw search speed, I mean the speed of entering my search criteria and having Xcode find it. When I’m looking for something, I’m in a hurry. I don’t have time to create one of your stupid find sets, as architecturally nice as they might be. I’m sorry, ten steps to do anything is too many.

At the very least Apple should merge all the options into the Find dialog. And don’t force me to create a stupid find set. They could also add a default set that searches $SRC_ROOT.

The truth is I still keep CodeWarrior around, just so I can use its Find in Files dialog. It has a nice popup of all the previously searched folders, a text field I can type the path into, and a browse button I can use to go select the folder I want, all from the Find dialog. Apple, if you want to know how to make a decent search, look no further than CodeWarrior.

I hate searching on my Mac. Granted, I don’t have a dog asking me inane questions, but it’s always a painful ordeal. It doesn’t have to be. The technology is there, but they need to run it through some usability specialists, or at least a couple of real users.

wchar_t: Unsafe at any size

One of today’s fads in software engineering is supporting multiple languages. It used to be that each language or script had its own code point system (or encoding), with each code point representing a different character. For reasons of convenience, the various scripts were incompatible, they could not be identified by simply looking at the code points, and an identifier describing which script the text was in was not allowed in the same zip code. Sometimes this caused problems with engineers who had weak constitutions; was that a ‘c’ or a ‘¥’? Experienced programmers knew the correct answer was to cycle through all the known scripts, interpreting the text with each in turn, and ask the user to tell them when they could read it or saw the hidden picture. These were the earliest known captchas.

The Unicode Consortium was unhappy with this because they were not the cause of the mass confusion, as a result of being late to the party. They devised a scheme in which each character had its own unique code point. They also allocated enough code points to represent all the characters of a lot of different languages. They even added a unique byte sequence at the start of any Unicode text to mark it as Unicode. And thus, all was well and good as long as you didn’t mind having text that took four times more space than usual, and wasted three out of four bytes. The Unicode Consortium at first wasn’t interested in fixing this problem until they realized they could use it to add more “features” (read: confusion). The Consortium begat UTF-8 and UTF-16 in order to fill this need. UTF-8 encoding allowed most characters to be encoded in 8 bits, with the rest as multi-byte sequences, and UTF-16 allowed most characters to be encoded in 16 bits, with the rest as surrogate pairs.

Originally people implemented these types in C by using unsigned char (UTF-8), unsigned short (UTF-16), or unsigned int (UTF-32). At the time they adopted Unicode, both Win32 and the Mac Toolbox used UTF-16. It was a nice tradeoff between size and efficiency. For most characters they were only wasting one byte (as opposed to three bytes in UTF-32), but could still assume most characters were just 16 bits (as opposed to UTF-8, which needs multi-byte sequences for anything that isn’t ASCII). Life was good.
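
To put actual numbers on that tradeoff, here’s a small sketch showing how many bytes the same seven characters take in each encoding. (It uses C++11 char16_t/char32_t literals, which are far newer than anything else in this post; the expected output is in the comments.)

    #include <cstdio>

    int main() {
        // "Hello €": six ASCII characters plus the euro sign (U+20AC).
        const char     utf8[]  = "Hello \xE2\x82\xAC"; // euro sign written out as its 3 UTF-8 bytes
        const char16_t utf16[] = u"Hello \u20AC";       // every character fits in one 16-bit unit
        const char32_t utf32[] = U"Hello \u20AC";       // every character gets a full 32 bits

        // Subtract the terminating NUL so we only count the text itself.
        std::printf("UTF-8:  %lu bytes\n", (unsigned long)(sizeof(utf8)  - sizeof(utf8[0])));  // 9
        std::printf("UTF-16: %lu bytes\n", (unsigned long)(sizeof(utf16) - sizeof(utf16[0]))); // 14
        std::printf("UTF-32: %lu bytes\n", (unsigned long)(sizeof(utf32) - sizeof(utf32[0]))); // 28
        return 0;
    }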

Like most standards committees, the C/C++ standards committee was bent on death and destruction. They saw people were using this newfangled Unicode, and that it was almost sufficiently confusing. The standards committee wanted to encourage this confusion while adding even more. To achieve their demented objective, they introduced wchar_t and std::wstring. But which encoding of Unicode did it use: UTF-16 or UTF-32? BUHAHAHAHA! In their greatest show of leadership to date, the standards committee refused to say. It would be a surprise, and they would hate to spoil a surprise. wchar_t was defined to be more than a byte but no larger than a jet liner.
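
If you want to see the surprise for yourself, one line of code is enough. The sizes in the comments are the typical ones; nothing in the standard promises either answer:

    #include <cstdio>

    int main() {
        // The standard leaves the size of wchar_t implementation-defined,
        // so the answer depends entirely on your toolchain.
        std::printf("sizeof(wchar_t) = %lu\n", (unsigned long)sizeof(wchar_t));
        // Visual C++ / old CodeWarrior:  2  (UTF-16 sized)
        // GCC on Mac OS X or Linux:      4  (UTF-32 sized)
        return 0;
    }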

With this new edict in hand, compiler and library writers quickly got to work. Instead of following each other’s lead, they each implemented wchar_t and its supporting libraries as they saw fit. Some saw the benefit of making wchar_t UTF-16. Others wanted it to be UTF-32. And thus, the standards committee bided their time.

Since both Windows and Mac OS (Classic) had adopted UTF-16 already, the compiler makers on those platforms implemented wchar_t as UTF-16. But this was just a trap, meant to ensnare hard-working cross-platform engineers. Engineers who worked on software that ran on Windows and Mac OS started using wchar_t. It was easy and worked well. A little too well.

Meanwhile, Unix vendors had decided that wasting one byte was insufficient, and that wasting three bytes per character was definitely the way to go. Besides, it’s not like anyone on Unix was using Unicode for anything other than Klingon.

The trap was sprung in 1996 when Apple purchased NeXT and its Unix-based operating system. Like all good traps, no one realized what had happened for several more years. It wouldn’t be until 2001, when Mac OS X was released, that Steve Jobs started going after developers with cattle prods to get them to port to Mac OS X. Unfortunately for the standards committee, some developers continued to use the old developer tools, like CodeWarrior, and old executable formats, like CFM/PEF, that implemented wchar_t as UTF-16. But the standards committee was patient. They knew they would prevail in the end.

Apple would turn out to be the instrument of the standards committee. They continued to improve Xcode until it was good enough to actually build most of their own sample code. At the same time, Metrowerks finally won its game of Russian Roulette, and stopped development of CodeWarrior. Apple delivered the final blow when they announced they were moving to the Intel architecture and that they had the only compiler that supported it. A compiler with a secret.

There were screams of anguish when it dawned on engineers what a cruel trick Apple and the standards committee had played. Mac OS X, being a Unix variant, had implemented wchar_t as UTF-32! All the cross-platform code, code that used to work on Windows and Mac, no longer worked. Apple felt their pain, and issued this technical note, which essentially says: “instead of using wchar_t, which used to be cross-platform before we destroyed it, use CFStringRef, which is not cross-platform, has never been, and never will be. P.S. This is really your own fault for ever using wchar_t. Suckers.”

At the time this was happening, I was working for Macromedia (now Adobe). Macromedia being the most important company that implements Flash, some of the Apple execs came down and talked to its Mac engineers. When the appropriate time came, I sprang into action, demanding to know what would be done about wchar_t. There was stunned silence. “What’s wchar_t?” was the first answer. After I explained it, the next answer was “We don’t implement that.” After pointing them to their own documentation, the next answer was “Oh. Huh. Well, why did you use it? We don’t use that crap. Use CFString instead!” After slamming my head against the table, I attempted to explain that wchar_t was used everywhere in our codebase, and that CFString wasn’t cross-platform. “Sure it is! It works on both Mac OS 9 and Mac OS X!”

The solution, in the end, for those duped into using wchar_t is to go back and use unsigned short instead. Unfortunately, that means doing a lot of find and replace (find: wchar_t, replace: char16_t, where char16_t is typedef’d to unsigned short) and then re-implementing the wchar_t library (including wstring) for the new type. Yep. Reimplement the wchar_t library. The lucky jumped into a pit of rabid ice weasels, where they were torn limb from limb. The unlucky had to repurpose all the old CodeWarrior MSL code to re-implement the wchar_t library as a char16_t library.
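
For the morbidly curious, here’s a rough sketch of step one of that reimplementation: the 16-bit character typedef plus just enough of a traits class to get a std::basic_string out of it. (I’ve spelled the type utf16_char rather than char16_t, since char16_t has since become a real keyword in C++11; and the standard only guarantees char_traits for char and wchar_t, so you get to write your own.)

    #include <cstddef>
    #include <cstring>
    #include <string>

    typedef unsigned short utf16_char;   // the post's char16_t, under another name

    // The bare minimum character traits needed to instantiate basic_string.
    struct utf16_traits {
        typedef utf16_char char_type;
        typedef int        int_type;

        static void assign(char_type& dst, const char_type& src) { dst = src; }
        static bool eq(char_type a, char_type b) { return a == b; }
        static bool lt(char_type a, char_type b) { return a < b; }

        static std::size_t length(const char_type* s) {
            std::size_t n = 0;
            while (s[n] != 0) ++n;
            return n;
        }
        static int compare(const char_type* a, const char_type* b, std::size_t n) {
            for (std::size_t i = 0; i < n; ++i) {
                if (a[i] < b[i]) return -1;
                if (b[i] < a[i]) return 1;
            }
            return 0;
        }
        static const char_type* find(const char_type* s, std::size_t n, const char_type& c) {
            for (std::size_t i = 0; i < n; ++i)
                if (s[i] == c) return s + i;
            return 0;
        }
        static char_type* copy(char_type* dst, const char_type* src, std::size_t n) {
            std::memcpy(dst, src, n * sizeof(char_type));
            return dst;
        }
        static char_type* move(char_type* dst, const char_type* src, std::size_t n) {
            std::memmove(dst, src, n * sizeof(char_type));
            return dst;
        }
        static char_type* assign(char_type* dst, std::size_t n, char_type c) {
            for (std::size_t i = 0; i < n; ++i) dst[i] = c;
            return dst;
        }

        static char_type to_char_type(int_type i) { return static_cast<char_type>(i); }
        static int_type  to_int_type(char_type c) { return c; }
        static bool      eq_int_type(int_type a, int_type b) { return a == b; }
        static int_type  eof() { return -1; }
        static int_type  not_eof(int_type i) { return i == eof() ? 0 : i; }
    };

    // The drop-in replacement for std::wstring: always 16 bits per unit.
    typedef std::basic_string<utf16_char, utf16_traits> utf16_string;

    int main() {
        const utf16_char hello[] = { 'H', 'i', 0 };
        utf16_string s(hello);
        return s.size() == 2 ? 0 : 1;
    }

And that’s just the string class; the rest of the wide-character library (the printf family, the conversions, the stream support) is where the real pain lives.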

The moral of the story is: don’t trust the standards committee. Especially on standards that aren’t really defined, or when they start snickering behind your back. Usually that means they stuck a note on your back that says “Standardize me.” I’m not sure why that’s funny, but they think it’s hilarious. If you need to use a Unicode encoding, use UTF-8. You can just use char and std::string for that.

Besides, who doesn’t speak English?