
Managing Firefox History
Software likes hiding sensitive information and keeping it persistent :-(

Since version 3, Firefox has moved from flat files for tracking browsing history (history.dat) and bookmarks (bookmarks.html) to SQLite databases (places.sqlite). The changeover was needed because the old flat file formats were badly implemented and clunky, and could not handle the new demands of the location bar and browser history. A SQL database was the perfect fit for the complexity brought in by the new location bar and its dynamic searching of previous URLs: SQL is easy to implement, largely portable across SQL implementations (removing dependency on a single product), and powerful for cross-referencing lookups. In fact, most of the data Firefox keeps is now stored in SQLite databases.

SQLite was also a good choice for the SQL engine because it can be embedded directly into a product without a large install or a lot of bloat. While I like SQLite for this purpose and its ease of implementation, it lacks a lot of base SQL functionality that would be nice, like table JOINs inside DELETE statements, among many other language features. I wouldn't suggest it for large database-driven products that require heavy optimization, which I don't believe it can handle; it's meant as a simpler SQL implementation.


Anyways, I was very happy to see that when you delete URLs from the history in the newest version of Firefox, it actually deletes them from the database instead of just hiding them, like it used to. The history manager actually seems to do its job quite well now, but I noticed one big problem: after deleting all the URLs for a specific site through the Firefox history manager, there were still entries from that site in the SQLite database, which is a privacy problem.

After some digging, I realized that there are "hidden" entries inside the history manager. A hidden entry is created when a URL is loaded in a frame or IFrame that you did not directly navigate to. These entries cannot be viewed through the history manager, and because of this, they cannot easily be deleted without either editing the database directly or wiping the whole history.

At this point, I decided to look at the table structures for the history manager and figure out how they interact. Hidden entries are marked in places.sqlite with a value of 1 in the moz_places.hidden column. According to a Firefox wiki, "A hidden URL is one that the user did not specifically navigate to. These are commonly embedded pages, i-frames, RSS bookmarks and javascript calls." After figuring all of this out, I came up with some SQL commands to delete all hidden entries, which don't really do anything useful inside the database anyway. Do note that Firefox has to be closed while you work on the database so it is not locked.

sqlite3 places.sqlite
-- Remove the annotations, input history, and visit records tied to hidden entries first
DELETE FROM moz_annos WHERE place_id IN (SELECT id FROM moz_places WHERE hidden=1);
DELETE FROM moz_inputhistory WHERE place_id IN (SELECT id FROM moz_places WHERE hidden=1);
DELETE FROM moz_historyvisits WHERE place_id IN (SELECT id FROM moz_places WHERE hidden=1);
-- Then remove the hidden entries themselves
DELETE FROM moz_places WHERE hidden=1;
.exit

This could all be done in one SQL statement in MySQL, but again, SQLite is not as robust :-\. There is also a favicon table in the database that might keep an icon stored as long as a hidden entry for its domain still exists, but I didn't really look into it.
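For comparison, here is a rough sketch of what that single statement could look like if the same tables lived in MySQL (hypothetical, since places.sqlite is not a MySQL database):

DELETE moz_places, moz_annos, moz_inputhistory, moz_historyvisits
FROM moz_places
LEFT JOIN moz_annos ON moz_annos.place_id=moz_places.id
LEFT JOIN moz_inputhistory ON moz_inputhistory.place_id=moz_places.id
LEFT JOIN moz_historyvisits ON moz_historyvisits.place_id=moz_places.id
WHERE moz_places.hidden=1; -- MySQL's multi-table DELETE; SQLite has no equivalent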

Perl Magic
The language of obfuscation

I’ve been delving into the Perl language more lately for a job and have found some interesting things about it. Perl itself is a bit shrouded in mysticism, and it is often said that it runs on “magic”. The original Perl engine, written by Larry Wall, has never been duplicated due to its incredible complexity and hacked-together nature.


One funny little thing I noticed is that the fat arrow “=>” and the comma “,” are essentially synonymous in the language (the fat arrow also auto-quotes a bareword on its left). For example, this is how you SHOULD declare a hash and an array, because it looks better and follows proper coding standards:
@MyArray=('a',1,'b',2); #An array with values a,1,b,2
%MyHash=(a=>1, b=>2); #A hash with keys a,b that contain the values 1,2
but you can actually declare the exact same array and hash like this:
@MyArray=('a'=>1=>'b'=>2); #An array with values a,1,b,2
%MyHash=(a,1,b,2); #A hash with keys a,b that contain the values 1,2

It’s also easy to find the length of a non-referenced array in Perl, as follows:
print $#MyArray; #Index of the last element, so add 1 to get length
or
$ArrayLength=@MyArray;
print $ArrayLength;

There are two ways to do it with a referenced array:
$MyRefArray=[1,2,3];
print scalar @$MyRefArray;
print $#$MyRefArray; #Index of the last element, so add 1 to get length
Moral of the story: there are many ways to do things in Perl.

Now that I’ve delved a bit more into how Perl works, I still like PHP better as a quick scripting language. Oh well.

Sony eBook Readers
Why can you never find a product that has all the features you want!

To start off, Merry XMas, y’all! (And Happy Holidays, of course! [I’m actually Jewish by heritage, for those who don’t know me personally ^_^; ])


I decided to get an eBook reader as a holiday present for someone, so I tried out both the Sony PRS-505 and the Sony PRS-700. I went with the Sony readers for now, as they can handle most, if not all, of the main eBook formats. Here are the important things I discovered about both.


Sony PRS-505

This is a minor upgrade to the first eBook reader Sony released in September of 2006 (the PRS-500), and costs $300. It works as advertised and does everything I’d really want from a basic eBook reader.


Sony PRS-700

This is a major update to Sony’s eBook line, released in September of 2008, and costs $400. The most important new feature is the touch screen, which has some major pros and cons.


The main comparison points that I found between the 505 and 700 are as follows.
  • I immediately noticed upon comparing the two how much lighter and more reflective the screen is on the 700, making it much harder to read. After some quick research, I found the following here:
    Sony added a touch layer on top of the e-ink display and embedded LED side-lights into the frame that surrounds the display. Clever. But this comes at the expense of contrast and glare, and the Sony Reader PRS-700 looks more like a grayscale notebook screen than an eBook reader. The glare isn’t nearly as bad as the average PDA or gloss notebook display-- it’s on par with matte finish notebook displays.
    As far as I’m concerned, this very unfortunately makes the product completely worthless as an eBook reader. You might as well just use an LCD display instead of an eInk display for the quality and price!
  • The touch screen itself (which comes with a pointer pen, too) is a spectacular design, and would make the device far better than the 505 if it didn’t ruin the readability. Navigating the device is much easier, quicker, and more intuitive with the touch screen interface, which also allows for a lot of additional functionality, including a virtual keyboard and text selection.
  • The 700 “turns pages” about twice as fast, due to the processor being about twice as powerful.
  • The 700 also has many more zoom levels by default, which is a big plus for people who need an eBook device specifically because of bad eyesight. The “Large” zoom level on the 505 just doesn’t always satisfy what is needed in some eBooks, but the XL and XXL levels on the 700 definitely go that extra step. I was told by a rep at the Sony Style store that there is a way to download larger fonts to the system (possibly through the eBook files themselves), but I have not fully researched this yet.
  • The 700 allows for searching for text now because of the virtual keyboard. I find this to be an incredibly useful feature for a book reader.
  • The 700 also allows you to take notes and make annotations on pages, thanks to the virtual keyboard.
  • The 700 has side lights that can be turned on, which is kind of neat, but this is really just an extra luxury.

One unfortunate annoyance of both devices is that you cannot use them while they are plugged into the computer (for charging via USB or uploading new books).


After playing with both, I’d definitely recommend the 505 for now. If they could fix the contrast problem with the 700, it would be perfect and well worth the price.

I’d like to try the Amazon Kindle too, but it is so far backordered that I don’t feel like dealing with it for the time being. When I checked around the 23rd of this month, there was a 13-week wait to have the product shipped! The Kindle is also, unfortunately, more DRM-laden, with proprietary formats, though this can be bypassed.

OllyDbg 2.0
Reverse engineering is fun! :-D

OllyDbg is my favorite assembly-level debugging and editing environment for reverse engineering applications in Windows. I used it for all of my Ragnarok Online projects in 2002, and you can find a tutorial that uses it here (sorry, the writing in it is horrible x.x; ).

Ever since I started using it back then, the author has talked about his complete rewrite of the program, dubbed version 2.0, that was supposedly going to be much, much better, and I have been patiently waiting for it ever since :-). Rather randomly, I decided to check back on the website yesterday, after not having visited for over a year, and lo and behold, the first beta of version 2.0 [self-mirror] had been released that very day! :-D. Unfortunately, I’m not really doing any reverse engineering or assembly-level work right now, so I have no reason or need to test it :-\.


... So yes, just wanted to call attention to this wonderful program being updated, that’s all for today!

The IPod Touch
And IPhones

So I decided to go over to the evil side recently and get an IPod Touch. I originally wanted to just try one out in the Apple Store, but I couldn’t find out everything I wanted to know about it there, and I was getting highly annoyed by the completely ignorant sales reps hovering over my shoulder, who couldn’t answer any of my questions anyway. And, yes, I asked them a few questions, and neither they nor their managers had a clue. >:-(

However, all the sales reps I’ve been talking to lately at different stores, about the IPod Touch and other electronics I’ve been interested in, have been pushing me to just buy the products and return them if I’m not satisfied. This sales tactic is a bit new to me, and I don’t like buying something just to return it, but they suggest it, so I decided what the heck! I guess it’s assumed most people will buy something and either decide they like it, forget to return it, or be too lazy to return it. So I went to Fry’s to grab one (an IPod Touch 2G v2.2) for testing, and possibly keeping if I liked it, because the Apple Store was really uncool about a lot of things and also charges a hefty restocking fee on returns... jerks. The jury is still out on whether I’ll be keeping it, but I decided to share some of my findings.

When I talk about the IPod Touch here, I am also talking about the IPhone, because they are basically the same product. The IPhone just adds the camera and the phone features; the rest of the software is the same (they run the same OS). I also have a few IPhone-specific comments below, as a good friend of mine got one for XMas, and I helped him set it up and found out a few things about it at the same time. Whenever I refer to the IPod Touch from here on out, I am referring to both IPod Touches and IPhones.


First of all, as is advertised and highly touted, the IPod Touch has style. The design is wonderful, it has a lot of nifty features, and there are lots of useful applications in the App Store, many of them free. The product is by far the best thing I’ve tried on the market for music playing and general PDA (personal digital assistant) purposes.

The Blackberrys I tried out at a Verizon store (the Storm and Curve, IIRC) weren’t even in the same league as the IPod Touch. I also tried out a G1 (the Google phone) at a T-Mobile store, and initial impressions were not spectacular. However, I can’t make a solid judgment on the G1 because I didn’t spend as much time with it as I could have, since I knew I couldn’t use it anyway: I refuse to switch from the Verizon network, because the signal quality and customer support I have received from them are worlds better than anything I ever received from Cingular (now AT&T), AT&T, or Sprint.


Now that I’ve gotten the initial information out of the way, including why the IPod Touch is nice, on to all of the problems I’ve found with it.

  • Apple has horrible, draconian policies regarding what can be put on an IPod Touch. Applications can only (legally) be put on the IPod Touch from the App Store, and Apple specifically regulates what is in the store, only allowing what is “best for their interests”. This, of course, includes denying any App Store application that “duplicates functionality” of an Apple product. This is bad for many reasons.
    • First and foremost, it’s not Apple’s place (though they argue that it is) to say who can develop for the IPod Touch and what can be developed, as long as it is not malicious in any way.
    • Apple quite often specifically blocks products that would be excellent, with great functionality, because they “compete” with Apple’s generally inferior applications. Of course, one can unlock older IPod Touches, and I’m sure newer ones will be unlockable soon enough, so this problem can be bypassed. When a device is unlocked, it can theoretically be used on a compatible network (not just AT&T, in the IPhone’s case), and you can install any application you want on it for free (as long as you can find it). The legality of this is questionable, but it’s not really risky.
    • This can leave developers who have spent their time and effort building a good product with no access to the market, completely screwing them after the fact. Apple is not specific about what can be put on the store and is very subjective about the whole matter. Unfortunately, many developers have found themselves in this position after submitting their application to Apple for inclusion in the store.
    • Apple can decide to block a product after it has been released and people have bought it, deleting it from their devices without refund. I believe (but have no proof) that this has already happened when a product “duplicated the functionality” of an Apple application, or of a feature that Apple added after the fact.
  • The SMS (texting) interface on the IPhone is horrible. It only lets you see part of the message you are typing at any time (40 characters, as a hazy guess). This could easily be fixed through a third-party application, but Apple blocks any application that touches SMS, as that would be “duplicating” the functionality of something they built. See the above bullet for more information.
  • The keyboard correction on the IPod Touch leaves much to be desired, and there is no text prediction (suggesting words as you type).
  • The virtual keyboard itself, while far ahead of any other virtual keyboard on a cell phone I have tried as far as usability goes, also leaves a lot to be desired and can be quite annoying. I did get used to it pretty fast, but mistakes were frequent and easily made, and I do not believe one could ever type as fast on a virtual keyboard like the IPod Touch’s as on a physical keyboard, though I haven’t spent nearly enough time practicing on it to confirm this. The Google phones (at least the G1) solve this problem with a flip-out keyboard.
  • No multitasking. Period. The IPod Touch can do a few things at the same time (mainly play music), but two applications cannot run at the same time, and trying to make them is against the developer agreement. Apple did this to control the user experience, so that a user doesn’t run too many things at once and then blame the resulting lag on Apple. Granted, the IPod Touch isn’t that powerful, and it would be easy to bog down the system if too many things were running, but some things need to keep running in the background, with minimal processor time, to create a good experience.
    One of many examples of this is AIM (AOL Instant Messenger). When you start the application, it signs you on, and it keeps you online on AIM until you specifically sign off (or perhaps if you turn off the device, but I doubt it). This means that if you exit the AIM application after signing on, other people still see you as online and can send you messages, even though you aren’t receiving them. When you open the application back up, it retrieves all of the messages that were queued while the application was closed. How hard and taxing on the system would it be to pop up a notice that a new message has come in while you are in another application? Apparently too much, as Apple has to be black and white about the multitasking issue instead of allowing developers to petition for the right. Further, this queued-message system also tips one off to the fact that ALL AIM messages are sent through their servers to get to your IPod Touch, instead of your device connecting directly to the AIM servers, which is essentially an invasion of your private conversations.
  • Crashing. The IPod Touch itself crashed on me twice within the first two hours I used it. When this occurred, I could not start almost any of the applications, even after turning the IPod Touch off and on (all the way off, not standby mode). The only fix I found was installing a new application from the App Store, or updating an application that had a new version available. Go figure.
  • The IPhone can only take pictures, not video. While there are products that allow taking video on the IPhone, they can only be installed by unlocking the phone, as Apple will not allow them in the App Store (see the top bullet for more information).
  • No searching for text on the current page in the web browser (Safari). This really bugs me, as it is an essential feature I need in my web browser :-(.
  • I don’t trust installing Apple applications on my computers. I actually ended up using VMware just to run ITunes for this reason >:-(. ITunes likes embedding itself in your system in lots of places it shouldn’t, much like AOL since version 5.0, and I do not believe it uninstalls itself completely either. Also, when I tried uninstalling Bonjour (an Apple communication protocol; the program that runs it shares the name and used to be called Rendezvous), it didn’t even TRY to remove itself from my system. It just took the program off of a few lists and left all the files there. Even worse, I noticed that Bonjour was hooking a bunch of other processes it shouldn’t have been *sighs*.
  • I’ve saved my biggest complaint for last. All music on the IPod Touch (all IPods actually, and Zunes and Zens too) is organized by the MP3s’ ID3 tags into genre/album/artist/etc., with no way to organize the music in folder-based structures. While for most people this is not a problem, it is a big one for me. It is not a problem for people “new” to the MP3 player scene who buy their music straight from the ITunes Store, as that music is already organized for them, with proper tags, just how they want it. My collection, like many other people’s, has been built up for well over a decade (mostly from CDs that I or friends ripped ourselves) and is not all tagged very well, as it never mattered. While I could go through my whole directory and tag everything properly, this would take upwards of hundreds of hours and would be a waste of my time. Even so, I feel that organizing by directory can be easier to navigate than straight genre/album/artist listings. This is very basic functionality of every MP3 player I have had up until this point.
  • The above problem is actually solvable with playlist folder structures. Unfortunately, these are only available on some of the IPod types (for example, the Classic and Nano, IIRC), but not on IPod Touches or IPhones :-(. Further, building these nested folder playlist structures is also a minor pain. I started writing a script to do it for my music collection, until I realized it didn’t work on my IPod Touch: ITunes transfers each folder to the IPod Touch as a flat playlist of all the songs in the playlists under it, though again, this is not a problem on some of the other IPod models. Unfortunately, if I was going to spend the money on an IPod, I would want it to be a PDA too, with much more functionality, which the IPod Touch satisfies and the others do not.

As previously mentioned, I might not be keeping the IPod Touch, as I cannot justify its cost mainly as an MP3 player when I’ve already had other solutions that are almost as good for a number of years. I was one of the first adopters of MP3 players (of the MP3s-on-CD variety) back in 1998, I believe, and they still work great. However, I would probably get an IPhone if I were able to use it on the Verizon network, because it combines all the features I like in the IPod Touch with a phone. I would love to be able to use its excellent web browser (as far as cell phone browsers go) anywhere, not just where an accessible WiFi network is handy. The cost of an IPhone is also more proportionate to what I’d like to spend, since I’d be getting a phone and a music player out of it. Unfortunately, even when unlocked, IPhones (and G1s) cannot work on Verizon like they can on other networks, because Verizon uses a different kind of carrier technology (CDMA instead of GSM). Alas :-\.


Oh, yes, one more thing I wanted to mention. Apple originally turned a blind eye to the IPhone unlocking market, because most of those phones were going overseas to markets untapped by Apple, which was good for business. However, when Apple started expanding into other countries and this practice no longer served their needs, they added a section to the AT&T contract you are forced to sign when buying the phone. It basically stipulates that if you cancel the AT&T contract (which incurs a fee after the first 30 days anyway), you have to return the IPhone too. This way Apple guarantees people can’t use the phone outside of AT&T.

JavaScript Prototyping Headaches
A spiffy language feature leading to a problem

JavaScript is a neat little scripting language and does the job it is intended for very well. The prototype system is very useful too, but it has one major drawback. First, however, a very quick primer on how objects are made in JavaScript and what prototyping is.


An object is made in JavaScript by calling a named function with the keyword “new”.
function FooBar(ExampleArgument)
{
	this.Member1=ExampleArgument;
	this.AnotherMember='Blah';
}
var MyObject=new FooBar(5);
This code creates a FooBar object in the variable MyObject with 2 members: Member1=5 and AnotherMember='Blah'.

Prototyping adds members to all objects of a certain type without having to add the member to each one manually. It also allows you to change the value of a member for all objects of a single type at once. For example (all examples continue from the previous ones):
FooBar.prototype.NewMember=7;
var SecondObject=new FooBar();
Now both MyObject and SecondObject have a member NewMember with value 7, which can be changed easily for both objects like this:
FooBar.prototype.NewMember=9;

The way to detect whether an object has a member is to use the in operator, and then, to determine whether that member comes from the prototype, the hasOwnProperty function is used. For example:

'NewMember' in MyObject;			//Returns true
MyObject.hasOwnProperty('NewMember');		//Returns false

'Member1' in MyObject;				//Returns true
MyObject.hasOwnProperty('Member1');		//Returns true

'UnknownMember' in MyObject;			//Returns false
MyObject.hasOwnProperty('UnknownMember');	//Returns false

Now, the problem starts coming into play when using for(x in y) loops.
for(var i in MyObject)
	console.log( i + '=' + MyObject[i].toString() ); //console.log is a function provided by FireBug for FireFox, and Google Chrome
This would output:
Member1=5
AnotherMember=Blah
NewMember=9

So if you wanted to do something with all members of an object and skip the prototype members, you would have to add a line of code to each for(x in y) loop, as follows:
for(var i in MyObject)
	if(MyObject.hasOwnProperty(i))
		console.log(i+'='+MyObject[i].toString());
This would output:
Member1=5
AnotherMember=Blah

This isn’t too bad if you are using prototyping yourself on your objects, but sometimes you might make objects that you wouldn’t expect to have prototypes. For good coding practice, you should really do the prototype check in every for(x in y) loop, because you can never assume that someone else will not add a prototype to an object type, even if your object type is private. This is especially true because all objects inherit from the actual Object object, including its prototypes. So if someone does the following, which is considered very bad practice, every for(x in y) loop will pick up this added member for all objects.

Object.prototype.GlobalMember=10;

You might ask why anyone would do this, but it could be useful in an instance like this...
Object.prototype.indexOf=function(Value)
{
	for(var i in this)
		if(this.hasOwnProperty(i) && this[i]===Value)
			return i;
	return undefined;
}
This function searches for the first member that contains the given value and returns the member’s name.
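Continuing from the earlier examples, hypothetical usage would look like this:
console.log(MyObject.indexOf(5));      //Logs "Member1"
console.log(MyObject.indexOf('Blah')); //Logs "AnotherMember"
console.log(MyObject.indexOf(123));    //Logs undefined - no direct member holds 123 (prototype members are skipped)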

It would be really nice if “for(x in y)” only returned non-prototype members, and there were another type of loop, like “for(x inall y)”, that also returned prototype members :-\.


This is especially important for Array objects. Arrays are like any other object, but they come built into the JavaScript language. For Arrays, it is most appropriate to use
for(var i=0;i<ArrayObject.length;i++)
instead of
for(var i in ArrayObject)
loops. Also, in my own code, I often add the following, because the “indexOf” function for Arrays is not available in IE, as it is not part of the ECMAScript standard. It is in Firefox, though... but I’m not sure that’s a good thing, since it’s not a standard.
//Array.indexOf prototype
if(Array.prototype.indexOf==undefined)
	Array.prototype.indexOf=function(SearchValue) //Fill in the function for browsers that lack it (IE)
	{
		for(var i=0;i<this.length;i++) //Scan the array in order
			if(this[i]==SearchValue)
				return i; //Found: return the first matching index
		return -1; //Not found
	};

I’m not going to go into how JavaScript stores the prototypes, or how to find all the prototype members of an object, as that is a bit beyond what I wanted to talk about in this post, and it’s pretty self-explanatory if you think about it.

Chrono Trigger DS
The best of the best, again

Ugh. It’s been a month today since my last post here. Things have just been way, way too busy! I’ll try to pick the content regularity back up; I should be able to handle at least a few weeks’ worth of semi-regular updates ^_^;.


I’ll keep today’s post short and simple :-).


Chrono Trigger for the Nintendo DS was released a few weeks ago, relatively shortly after the release of Final Fantasy IV for the DS. I should mention that Chrono Trigger is one of my all-time favorite games. I’ve played through it more times than I can count, and I was very happy to see a port to the DS. Yay :-).

It’s pretty much exactly the same as the original, unlike the 3D remakes that the Final Fantasy ports were. It has all the typical “extras” added to game ports these days, like a bestiary tracking the monsters you’ve fought, a list of items you’ve collected, game art, cutscene replays, a music jukebox, maps of all the levels, etc. It also has a few GUI updates, 2 new areas, a Pokémon-style fight-your-friend-over-the-DS monster battle system, and, last but definitely not least, a great new translation.

The new translation is probably the best thing about the port. Tom Slattery did a wonderful job on it, though in fairness to Ted Woolsey, he was (according to Wikipedia ^_^; ) only given 30 days to do the original translation. The new areas are all pretty lame :-\, but oh well. I still haven’t finished going through most of them, because they involve a lot of annoying back-and-forth between time periods and bad level design.


The main thing I wanted to mention was a single line of translation that really made me smile. If you take Ayla to Robo’s extra side quest, at one point she says “What you say?” ... Any of you nerds out there should know what that references :-).


Anywho, yeah, Chrono Trigger is awesome. And now back to your regularly scheduled, mostly technical posts... ^_^;

How to Tell if Your Cat is Plotting to Kill You
I love kitties! =^.^=

I have just now finished working a 16.5-hour shift for my current contract programming job, capping off 5 straight days of work averaging 13 hours per day, so I’m pretty fucking tired x.x;. I think I’ll be taking a bit of a break from posting, and possibly responsibility, for a little while... especially since next week = Thanksgiving + family :-).


Anywho, I was given this link earlier today, and it really made me laugh, so I thought I’d point everyone to it: How to Tell if Your Cat is Plotting to Kill You. Any cat owner/lover should get a good kick out of this ^_^.

The page is owned and was drawn by Matthew Inman, founder & designer of Mingle2, for said website. Also, check the page source for a cute Easter egg at the top of the HTML.

I have mirrored it below, since I never trust websites that I want to share to stay online. Curse you, Internet, with your servers going down and domains being lost!


More JavaScript Language Oddities
Make up your minds Standards Committees!

This is sort of a continuation of the parseInt in JavaScript post made a few months ago.


Another minor JavaScript oddity is that its base library has two String functions so similar that they can cause confusion for programmers. If a programmer only ever uses one of the two and then works with someone else’s code that uses the other, things can easily get messy. These two functions are substr and substring, which w3schools defines as follows:

Function Name | Parameters           | Description
substr        | StartIndex, Length   | Extracts a specified number of characters in a string, from a start index
substring     | StartIndex, EndIndex | Extracts the characters in a string between two specified indices

It is KIND of nice to have substring as a function for some purposes... but is it really so hard to do a...
String.substr(StartIndex, EndIndex - StartIndex)
?
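To illustrate how the two overlap, here is a quick hypothetical comparison:
var Text='abcdefg';
Text.substr(2, 3);    //Returns "cde" - 3 characters starting at index 2
Text.substring(2, 5); //Returns "cde" - the characters from index 2 up to, but not including, index 5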

I actually did something like this myself in my super awesome string library (which I have not yet released and do not know when I will...). I do not consider this hypocritical, though, because my string library has a “substr” like the JavaScript one, while the function that acts like JavaScript’s substring is called “mid”, after the function used in Visual Basic. I did this because I wanted the library to have “matching function names for many languages (PHP, JavaScript, VB, etc).” to make it easier on programmers already familiar with other libraries.

Chrome no longer doing separate processes
Google broke Chrome :-(

There were at least 3 really neat things about Google Chrome when it made its spectacular entrance onto the web browser market a few months ago that made it a really viable option compared to its competitors. These features were [“are”, actually; I’m writing this in the present tense, as they are still true]:

  1. It is fast, especially with JavaScript.
    • I have done speed tests of the JavaScript engines between browsers (which unfortunately I can’t post), and function calls, especially recursive ones, are incredibly fast in Chrome’s JavaScript engine compared to the other web browsers.
    • However, TraceMonkey, the new tracing JIT for Firefox’s SpiderMonkey JavaScript engine, seriously kicks all the other browsers in the butt in speed optimizations when it comes to loop iterations and some other areas. TraceMonkey is available in the newest non-stable builds of Firefox (v3.1), but is not turned on by default.
  2. Different tabs run in different processes, which was very heavily advertised during Chrome’s launch. This carries two great advantages.
    1. A locked or crashed tab/window (usually through JavaScript) won’t affect the other tabs/windows.
    2. Since each tab is a separate OS process, its work is also done on separate OS threads, which can be scheduled on separate logical processor cores (CPUs). This means browser tabs can run in parallel and not slow each other down (up to the number of logical CPUs you have).

    Unfortunately, this is not as completely true as is widely advertised. New processes are only opened when the user manually opens a new window or tab. If a new window or tab is opened by JavaScript or by clicking a link, it still runs in the same process!

    Google has a FAQ Entry on this as follows:

    16. How can my web page open a new tab in a separate process?

    Google Chrome has a multi-process architecture, meaning tabs can run in separate processes from each other, and from the main browser process. New tabs spawned from a web page, however, are usually opened in the same process, so that the original page can access the new tab using JavaScript.

    If you’d like a new tab to open in a separate process:

    • Open the new tab with about:blank as its target.
    • Set the newly opened tab’s opener variable to null, so that it can’t access the original page.
    • Redirect from about:blank to any URL on a different domain, port, or protocol than that of the page spawning the pop-up. For example, if the page spawning the pop-up is on http://www.example.com/:
      • a different domain would be http://www.example.org
      • a different port would be http://www.example.com:8080
      • a different protocol would be https://www.example.com

    Google Chrome will recognize these actions as a hint that the new and old pages should be isolated from each other, and will attempt to load the new page in a separate process.

    The following code snippet can be used to accomplish all of these steps:

    var w = window.open();
    w.opener = null;
    w.document.location = "http://different.example.com/index.html";
    			

    The only problem is... THIS NO LONGER WORKS! Google recently (within the last 7 days) broke this FAQ recommendation with an automatic update to Chrome, so new tabs that are not manually opened by the user cannot be forced into new processes, even with their little code snippet. Personally, I think this behavior is really lame; every tab should be able to open in a separate process every time, no matter what, and still be able to talk to the others through process message passing. It may slow things down a little, but it’s a much more powerful model, IMO. An option for this in window.open’s options parameter would be really nice...

  3. And of course, it’s Google, who, in general, “does no evil”. :-)
    • I can’t find the original article I was looking for on this “don’t be evil” topic :’( ... it basically said something to the effect that the “don’t be evil” motto only applies to business inside the USA, or something like that.
    • I have been a long time fan of Google though, and I still think that pretty much everything they’ve done, in general, has been for the good of everyone. There are always going to be blemishes on a company that size, and for how big they are and all they do, they’ve done a pretty damn good job, IMO. Just my two cents.
Erasing Website Cookies
A quick, useful code snippet, because doing this through normal browser means takes way too long
This erases all cookies on the current domain (in the “/” path)

JavaScript:
function ClearCookies() //Clear all the cookies on the current website
{
	var MyCookies=document.cookie; //Remember the original cookie string since it will be changing soon
	var StartAt=0; //The current string pointer in MyCookies
	do //Loop through all cookies
	{
		var CookieName=MyCookies.substring(StartAt, MyCookies.indexOf('=', StartAt)).replace(/^ /,''); //Get the next cookie name in the list, and strip off leading white space
		document.cookie=CookieName+"=;expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/"; //Erase the cookie
		StartAt=MyCookies.indexOf(';', StartAt)+1; //Move the string pointer to the end of the current cookie
	} while(StartAt!=0)
}

I went a little further with the script after finishing it, to add a bit of a visual aspect.
The following adds a textarea that displays the current cookies for the site, and also displays the cookie names as they are erased.
<input type=button value="Clear Cookies" onclick="ClearCookies()">
<input type=button value="View Cookies" onclick="ViewCookies()">
<textarea id=CookieBox style="width:100%;height:100%"></textarea>
<script type="text/javascript">
function ViewCookies() //Output the current cookies in the textbox
{
	document.getElementById('CookieBox').value=document.cookie.replace(/;/g,';\n\n');
}

function ClearCookies() //Clear all the cookies on the current website
{
	var CookieNames=[]; //Remember the cookie names as we erase them for later output
	var MyCookies=document.cookie; //Remember the original cookie string since it will be changing soon
	var StartAt=0; //The current string pointer in MyCookies
	do //Loop through all cookies
	{
		var CookieName=MyCookies.substring(StartAt, MyCookies.indexOf('=', StartAt)).replace(/^ /,''); //Get the next cookie name in the list, and strip off leading white space
		CookieNames.push(CookieName); //Remember the cookie name
		document.cookie=CookieName+"=;expires=Thu, 01-Jan-1970 00:00:01 GMT; path=/"; //Erase the cookie
		StartAt=MyCookies.indexOf(';', StartAt)+1; //Move the string pointer to the end of the current cookie
	} while(StartAt!=0)
	document.getElementById('CookieBox').value='Clearing: '+CookieNames.join("\nClearing: "); //Output the erased cookie names
}
</script>

Google Chrome - Bug?
And other browser layout bugs

To start off, sorry I haven’t been posting much the last couple of months. First, I got kind of burnt out from all the posting in August. More recently, however, I’ve been looking for a J-O-B, which has been taking a lot of my time. Now that I’ve found some work, I’m more in the mood to post again, yay. Hopefully, this coming month will be a bit more productive on the web site :-). Now on to the content.


Browser rendering [and other] bugs have been a bane of the web industry for years, particularly in the old days, when IE was especially non-standards-compliant and people had to add hacks to their pages to make them display properly. IE has gotten much better since then, but there are still lots of bugs in it, especially because Microsoft doesn’t want to break the old web sites that had to add hacks to work in the outdated versions of IE. Other modern browsers still have rendering problems too [see the acid tests], but again, these days it’s not so bad.


I just ran into one of these problems in a very unexpected place: Google Chrome. I had kind of ignored the browser’s launch, as I’m mostly happy with Firefox (a few major bugs have popped up in Firefox 3.0 that are a super annoyance, but I try to ignore them), but I needed to install Chrome recently. When I went to my web page in it, I noticed a major glitch in the primary layout, so I immediately researched it.


[Screenshot: what I wanted it to look like - rendered in Firefox v3.0.3]
[Screenshot: what it looks like in Chrome v0.2.149.30 - which is apparently correct according to the CSS guidelines]

So I researched what was causing the layout glitch, assuming it was my code, and discovered it is actually a rendering bug in Firefox and IE, not Chrome (I think)! Basically, DIVs with top margins transfer their margins to their parent DIVs, as is explained here:

Note that adjoining vertical margins are collapsed to use the maximum of the margin values. Horizontal margins are not collapsed.
The text there isn’t exactly clear-cut, but it seems to support my suggestion that Chrome has it right. Here is an example, which renders properly in Chrome but not in IE and Firefox.

<div style="background-color:blue;width:100px;height:100px;">
    <div style="margin-top:25px;width:25px;height:25px;background-color:green;"></div>
</div>

In the above example, the green box’s top should sit directly against the blue box, while the blue box picks up the collapsed margin and is pushed down, away from whatever element sits above it.


Honestly, I think this little margin-top caveat is quite silly and doesn’t make sense. Why collapse the margins, when it would make more sense to just follow the box model so the child has a margin against its parent? Go figure.

So to fix the problem, I ended up using “padding-top” on the parent instead of “margin-top” on the child. Blargh.
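Applied to the example above, the workaround looks roughly like this (note that, per the box model, the padding now counts toward the parent’s rendered height):

<div style="background-color:blue;width:100px;height:100px;padding-top:25px;">
    <div style="width:25px;height:25px;background-color:green;"></div>
</div>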



This isn’t the first bug I’ve discovered in Firefox, either (I usually submit them to Firefox’s Bugzilla).

At least one of the worst bugs I’ve submitted (which, I found out, had already been submitted in the past) has been fixed. “Address bar should show path/query %-unescaped so non-ASCII URLs are readable” was a major internationalization problem, which I believe was a deal breaker for anyone using Firefox in a language that isn’t English. Basically, any non-ASCII character in the address bar was escaped as %HEXVALUE instead of showing the actual character. Before Firefox got out an official fix, I had been working around this with a nifty Firefox add-on, Locationbar2, which I still use, as it has a lot of other nifty options.

One bug that I submitted almost 2 years ago and that has not yet been fixed (it has been around for almost 4 years) is “overflow:auto gets scrollbar on focused link outline”. I wrote up the following document on it when I submitted it to Mozilla:


I put this in an IFRAME because, for some reason, the bug didn’t show up when I inlined the HTML, go figure. The font size on the anchor link also seems to matter now... I do not recall it mattering before.

At least Firefox (and Chrome) are still way WAY more on the ball than IE.


Edit on 2009-07-26: The margin-top bug has been fixed in Firefox (I’m not sure which version it happened in, but I’m running version 3.0.12 currently).

WikiMedia is Pretty Nifty
When software projects are done right

I don’t have any in-depth information or interesting anecdotes for today’s post, but I really wanted to talk a little about MediaWiki. MediaWiki is the wiki software, written in PHP, that runs Wikipedia; it has been released by the Wikimedia Foundation (who run Wikipedia and her sister projects) as free and open source software.


MediaWiki is an incredibly powerful and robust system from a programming perspective, and spectacular for writing, editing, and organizing information from an editor’s perspective. The tagging/markup system is very well designed, easy to use, and easy to read when properly formatted.

The part that really caught my attention, however, is the documentation. I can say that, hands down, MediaWiki has the best documentation of any software package I have ever seen. They thoroughly document (link 2) everything in the software, from all needed perspectives, including using the software as a reader, writer, editor, programmer, moderator, and administrator.

I was particularly happy with the template system, all of the available extensions out there, and the functions that allow dynamic content and manipulation of the software (Tag Extensions, Parser Functions, Hooks, Special Pages, Skins, Magic Words).

The Cost of becoming a .COM Domain Registrar
How can this be financially feasible for anyone?!?

I’ve apparently had an incorrect view of the exact scheme and costs of domain registration. I had always assumed that to become a registrar (one of the companies normal people register domains through) for .COM domains, you just had to get accredited by ICANN and then pay $0.20 per domain. However, accreditation is an expensive and tough process, including (taken verbatim from the link):

  • US$2,500 non-refundable application fee, to be submitted with application.
  • US$4,000 yearly accreditation fee due upon approval and each year thereafter.
  • Variable fee (quarterly) billed once you begin registering domain names or the first full quarter following your accreditation approval, whichever occurs first. This fee represents a portion of ICANN’s operating costs and, because it is divided among all registrars, the amount varies from quarter to quarter. Recently this fee has ranged from US$1,200 to US$2,000 per quarter.
  • Transaction-based gTLD fee (quarterly). This fee is a flat fee (currently $0.20) charged for each new registration, renewal or transfer. This fee can be billed by the registrar separately on its invoice to the registrant, but is paid by the registrar to ICANN.
  • Please refer to http://www.icann.org/general/financial.html for the most recent ICANN Budget to find additional details about invoicing, including options for relief.
  • Please refer to http://www.icann.org/financials/payments.htm for instructions on how to submit payments to ICANN.

So I had thought that becoming an accredited .COM registrar would pay for itself in the first year if you had ~1,177 domains registered...
  • BASE FIRST YEAR FEE = $2,500 application + $4,000 yearly + ~$1,500 ICANN operating fee = $8,000
  • PER DOMAIN DIFFERENCE = $7.00 to register a domain at a good registrar - $0.20 ICANN fee = $6.80 savings per domain
  • TO BREAK EVEN = BASE FIRST YEAR FEE / PER DOMAIN DIFFERENCE = $8,000 / $6.80 = ~1,177 domains
but unfortunately, I was incorrect: you ALSO have to pay Verisign (who runs the .COM TLD) a hefty fee per domain.

So once you become an accredited ICANN registrar, you have to hook your system up to Verisign, who charges an additional $6.42 per domain (on top of the $0.20 ICANN fee). Even worse, they require you to pay all of their fees up front for the number of domains you plan to register on a yearly basis!

Taking these new findings into account, it would actually take ~21,053 domains (with PER DOMAIN DIFFERENCE adjusted to $7.00 - $0.20 - $6.42 = $0.38) to break even in the first year of being your own registrar (as opposed to going through another registrar). YIKES!

I've always personally recommended gkg.net as a registrar, but their registration prices recently took a major hike, like most registrars', due to Verisign raising its per-domain fee. I may have to reevaluate registrars at some point because of this.

Election Night on the Internet
The day that truly tests if your servers can handle the load

I’m not going to get into politics in this post (nor will I, generally, ever); I just wanted to point out a few things I saw on the Internet on election night that made me smile :-).


Wikipedia actually errored out when I tried to get to the Barack Obama article soon after it was announced that he won.
[Screenshot: Wikipedia down on the Barack Obama article]

I was also really surprised that Fox News had the following picture/announcement up mere minutes after all the other stations reported Obama had won (when it was still only projections). I would have thought Fox News would hold off on announcing it to their viewers until it was more certain...
[Screenshot: Fox News “Obama Wins” announcement]
1337 car
Taking pictures at 70mph isn't easy

I’m currently working 60-80 hours a week doing contract work, so I don’t have much free time, but I’ll see if I can’t get my posting schedule a little more regular this month.


I noticed this car a few days ago while going down the highway. You can’t really make it out, but the license plate says “L33T”, which made me smile :-). It’s not easy getting a good picture with a shitty cell phone camera (1.3 megapixels) while going 70 MPH down a highway!

L33T car license plate

I also encountered a BEAUTIFUL buck [deer], with 5ft+ high antlers branching out in the most stunning patterns, down in West Lake (south Austin) as I was pulling out of a parking lot, but I couldn’t get my camera out quickly enough for a picture before it disappeared back into the forest :-(. Alas. I might start doing more work in that area of town, however, so I may get another chance to see it later on!

Winamp Problems
Adding features also potentially adds bugs

I’ve been getting tired of Winamp (v5.112 specifically) randomly crashing for no apparent reason on my new home music server. I tried upgrading to the latest version, but it also crashes. The best solution at this point was to just downgrade to an old version, like v2.8, which I have been using for more years than I can remember on almost all of my machines.

Old versions of Winamp have a pretty low footprint and are great at what they are supposed to do: playing music. It’s gotten bloated nowadays, though, which often happens when great software has hit its peak and has nowhere to go. This is one reason to just keep using versions of software that you are used to and have no problems with, like using XP instead of Windows Vista, or older, pre-2003 versions of Microsoft Office. Newer does not always mean better!

Legacy Geocities Login Problems
Big corporations refusing to acknowledge that they have problems, let alone fix them

I have a friend with a legacy GeoCities (the MySpace of the 1990s for free web hosting) account, one from who knows how long before GeoCities was bought by Yahoo. The control panel (at geocities.yahoo.com/gcp) won’t allow logging in to his legacy account, because it gets stuck in an infinite redirect loop, redirecting right back to itself.

My guess is that the problem has to do with cookies (on GeoCities’ servers’ side, not the client’s!), but I didn’t dig that far, as I found a roundabout solution to his problem: after logging in, the user can go to http://geocities.yahoo.com/filemanager or http://geocities.yahoo.com/v/fm.html to manage their files. While the rest of the control panel is still not accessible, this was enough of a solution for him.

Reports are that Yahoo refuses to respond about this problem with their servers.

XML Problems in PHP
I hate debugging other peoples’ libraries :-\

We recently moved one of our important web server clients to a newly acquired server (our 12th server at ThePlanet [which used to be called EV1Servers, and before that RackShack], one of the largest, if not the largest, server-farm hosting companies in the States). A bad problem cropped up on his site in a PHP script (CaRP) that parses XML.

The problem was that whenever the XML was parsed and then returned, all XML entities (escaped XML characters like “&gt;”, “&lt;”, and “&quot;”) were removed/deleted. I figured the problem had to do with a bad library, as the code worked perfectly on our old server and the PHP settings on both were almost identical, but I wasn’t sure which one. After an hour or two of manipulating the code and debugging, I narrowed down which XML function calls had the problem and confirmed it was definitely not the scripts themselves. The following code demonstrates the problem.

$MyXMLData='<?xml version="1.0" encoding="iso-8859-1"?><description>&lt;img test=&quot;a&quot;</description>'; //Sample XML containing entities
$MyXml=xml_parser_create(strtoupper('ISO-8859-1')); //Create the expat parser
xml_parser_set_option($MyXml,XML_OPTION_TARGET_ENCODING,'ISO-8859-1');
xml_parse_into_struct($MyXml, $MyXMLData, $MyData); //Parse the document into an array of tag data
print htmlentities($MyData[0]['value']); //Print the parsed value of the <description> tag
On the server with the problem, the following was outputted:
img test=a
while it should have outputted the following:
<img test="a"

I went with a hunch at this point and figured that it might be the system’s libxml libraries, so I repointed them away from version 2.7.1, which appears to be buggy, to an older version that was also on the system, 2.6.32. And lo and behold, things were working again, yay :-D.

Technical data (this is a cPanel install): in “/opt/xml2/lib/”, delete the symbolic links “libxml2.so” and “libxml2.so.2” and recreate them as symbolic links pointing to “libxml2.so.2.6.32” instead of “libxml2.so.2.7.1”.
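In shell terms, the steps just described look something like this (a sketch using the paths given above):
cd /opt/xml2/lib
rm libxml2.so libxml2.so.2           # Remove the links pointing at the buggy 2.7.1
ln -s libxml2.so.2.6.32 libxml2.so   # Recreate them against the older library
ln -s libxml2.so.2.6.32 libxml2.so.2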

I Win!
^_^

Yay, I've finally won 100 games of FreeCell in a row :-D *happy*.


100 straight wins in FreeCell
Site Expansion Progress
200 pages and counting...

This officially marks the 200th page on this site, yay! The full list of everything can be found on the Site Map (not counting stuff in the Ragnarok, Hynes, or Archive lists for posts and updates). About half of the pages have come within the last 4 months, too. Just thought I’d mention it... it makes me happy that I’ve gone through and gotten so much up and organized :-).

From here on out, I’ll also try to make sure content goes up on the live website right after I finish creating it. I have a bad habit of letting things queue up on the local testbed web server I run on my laptop, where I create everything.

High Computer Load
I hate servers. So much... so, so much...
High Server Load

I think it was actually much higher than this, but it wouldn’t let me log in to find out! >:-( . I wish I could easily make SSH, and everything I do in it, take priority over other processes... but then again, I probably wouldn’t be able to do anything to fix the load when this happens anyway. *sighs*

I’ll explain more about “load” in an upcoming post.

End of Posting Spree
Yay, 32 posts in 31 days! :-D
I’m now going to seriously cut back on the posting schedule and make it a bit more erratic, as keeping up with writing this amount of content was way too hard for me x.x; .
Client Side Security Fallacies
Never rely solely on information you receive from untrusted sources

One of the most laughable aspects of client/server systems is client-side security access restrictions; that is, when credentials and actions are not checked and restricted on the server side of the equation, only on the client side, which can ALWAYS be bypassed.


To briefly explain why it is basically insane to trust a client computer: ANY multimedia, software, data, etc. that has touched a person’s computer is essentially now their property. Once something has been on or through a person’s computer, the user can make copies of it, modify it, and do whatever the heck they want with it. This is how the digital world works. There are ways to help stop copying and modification, like hashes and encryption, but most of the ways these things are implemented nowadays are quite fallible. There may be, for example, safeguards in place to only allow a user to use a piece of software on one certain computer or for a certain amount of time (DRM [Digital Rights Management]), but these methods are ALWAYS bypassable. The only true security comes from not letting information that people aren’t supposed to have access to cross through their computer, and from keeping track of all verifiable factual information on secure servers.

A long time ago, at an IGDA [International Game Developers Association] meeting (I only ever went to the one, unfortunately :-\), I learned an interesting truth from the lecturer that hadn’t occurred to me before: companies that make games and other software [usually] know it will sooner or later be pirated/cracked. The true intention of software DRM is to make cracking hard enough that the crackers give up, and to make it take long enough that people hopefully stop waiting for a free copy and go ahead and buy the product. By the time a piece of software is cracked (if it takes as long as they hope), the companies know the majority of the remaining holdouts usually wouldn’t have bought it anyway. Now that I’m done with the basic explanation of client-side insecurity, back to the real reason for this post.


While it is actually proper to program safeguards into client-side software, you can never rely on them for true security; security measures should always be duplicated on both the client and the server. There are two reasons, off the top of my head, for implementing security access restrictions on the client side. The first is to reduce strain on servers: there is no point in asking a server whether something is valid when the client can immediately confirm that it isn’t. The second reason is speed: it’s MUCH quicker if the client can detect a problem and instantly inform the user than to wait for a server to answer, and though that wait is usually imperceptible to the user, it can really add up.
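As a minimal sketch of that duplication (the form and field names here are hypothetical), the client-side half might look like the following, with the understanding that the server re-runs the same check on every submission:

function ValidateUsername(Name) //Client-side pre-check: instant feedback, no server strain
{
	if(Name.length<3 || Name.length>20)
	{
		alert('User names must be 3 to 20 characters long');
		return false; //Cancel the form submission
	}
	return true; //Looks fine locally, but the server must still repeat this exact check
}
//Hypothetical usage: <form action="signup.php" method="post" onsubmit="return ValidateUsername(this.UserName.value)">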

So I thought I’d give a couple of examples of this to help you understand where I’m coming from. This is a very big problem in the software industry; I find exploitable instances of this kind of thing on a very regular basis. However, I generally don’t take advantage of such holes, and I try to inform the companies/programmers if they’ll listen. The term for this is white hat hacking, as opposed to black hat.


First, a very basic example. Let’s say you have a folder on your website “/PersonalPictures” that you wanted to restrict access to with a password. The proper way to do it would be to restrict access to the whole folder and all files in it on the server side, requiring a password be sent to the server to view the contents of each file. This is normally done through Apache httpd (the most utilized web server software) with an “.htaccess” file and the mod_auth (authentication) module. The improper way to do it would be a page that forwarded to the “hidden” section with a JavaScript script like the following.

if(prompt('Please enter the password')=='SecretPassword')
	document.location.href='/PersonalPictures';

The problem with this code is twofold (besides the fact it pops up a request window :-) ). First, the password is exposed in plain text to the user. Fortunately, passwords are usually not as easy to find as this, but I have found passwords in web pages and Flash code before with some digging (yes, Flash files (and Java!) are 100% decompilable to their original source code, sans comments). The second problem is that once a person knows the URL “/PersonalPictures”, they can get back there and to all files inside it without the password, and can also give it out freely to others (no need to mention that the URL is written in plain text here too, same as the password). This specific problem with JavaScript was much more prevalent in the old days when people ran their web pages through free hosting sites like Geocities (now owned and operated by Yahoo), which didn’t allow for proper password protection.
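For contrast, the proper server side setup mentioned above is only a few lines. A minimal .htaccess sketch for Apache’s basic authentication would look something like this (the .htpasswd path is just a placeholder):

AuthType Basic
AuthName "Personal Pictures"
AuthUserFile /home/USERNAME/.htpasswd
Require valid-user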

This kind of problem is still around on the web, though it has morphed with the times into new forms. Many server side scripts I have found across the Internet assume their client side web pages can take care of security and skip the necessary checks in the server scripts. For example, very recently I was on a website that only allowed me to add a few items to a list. The way it worked was that there was a form with a textbox that you submitted every time you wanted to add an entry to the list. After submitting, the page reloaded with the updated list, and once you had added the maximum allowed number of items, the form to add more was simply gone. This is incredibly easy to bypass, however. The normal way would be to send modified packets directly to the server with whatever information you want in them. The easier method would be to make your own form submission page and just submit to the proper URL all you want (see the sketch below). The Firebug extension for Firefox makes this kind of thing INCREDIBLY easy, though. All that needs to be done is to add an attribute to the form so it sends each request to a new window (“<form action=... method=... target=_blank>”); the form is then never erased/overwritten and you can keep sending requests all you want. Using Firebug, you can also edit the values of hidden input boxes for this kind of thing.
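For example, a stand-alone resubmission page could be as simple as the following sketch (the URL and field names here are hypothetical; you’d copy the real ones out of the site’s own form):

<form action="http://THESITE.com/AddItem.php" method="post">
	<input type="text" name="NewItem">
	<input type="hidden" name="ListID" value="42">
	<input type="submit" value="Add">
</form>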

AJAX (Asynchronous JavaScript and XML - A tool used in web programming to send and receive data from a server without having to refresh a page) has often been lampooned as insecure for this kind of reason. In reality, the medium itself is not insecure at all; it’s just how people use it.


As a matter of fact, the majority of my best and most fun Ragnarok hacking was done with these methods. I just monitored the packets that came in and out of the system, reverse engineered how they were all structured, then made modifications and resent them myself to see what I could do. With this, I was able to do things like (These should be most of the exploits; listed in descending order of usefulness & severity):

  • Duplicate items
  • Crash the server (It was never fixed AFAIK, but I stopped playing 5+ years ago. I just put that it was fixed on my site so people wouldn’t look for it ^_^; )
  • Warp to any map from any warp location (warp locations are only supposed to link to 1 other map)
  • Spoof your name during chats (so you could pretend someone else was saying something - Ender’s game, anyone? ^_^)
  • Use certain skills of other classes (I have pictures up of my swordsman using merchant skills to run a selling shop)
  • Add skills points to an item on your skill tree that is not yet available (and use it immediately)
  • Warp back to save point without dying
  • Talk to NPCs on a map from any location on that map, and sometimes from other maps (great for selling items when in a dungeon)
  • Attack with weapons much quicker than was supposed to be allowed
  • Use certain skills on creatures from any location on a map no matter how far they are
  • Equip any item in any spot (so you could equip body armor on your head slot and get much more free armor defense points)
  • Run commands on your party/guild and in chat rooms as if you were the leader/admin
  • Roll back a character’s stats to when you logged on that session (part of the dupe hack)
  • Bypass text repetition, length, and curse filters
  • Find out user account names

The original list is here; it should contain most of what I found. I took it down very soon after putting it up (replacement here) because I didn’t want to explicitly screw the game over with people finding out about these hacks (I had a lot of bad encounters with the company that ran the game; they refused to acknowledge or fix existing bugs when I reported them). There were so many things the server didn’t check just because the client wasn’t allowed to do them naturally.


Here are some very old news stories I saved up for when I wrote about this subject:


Just because you don’t give someone a way to do something doesn’t mean they won’t find a way.



*A server is a computer you connect to and a client is the connecting computer. So all you people connecting to this website are clients connecting to my web server.
**“Cracked” usually means to make a piece of software usable when it is not supposed to be, bypassing the DRM.
Linux Runlevels
“Safe Mode” for Linux

I am still, very unfortunately, looking into the problem I talked about way back here :-( [not a lot, but it still persists]. This time I decided to try to boot the OS into a “Safe Mode” with nothing running that could hinder performance tests (like hundreds of HTTP and MySQL sessions). Fortunately, my friend who is a Linux server admin for a tech firm was able to point me in the right direction after my own research on the topic proved frustratingly fruitless.


Linux has “runlevels” it can run at, which are listed in “/etc/inittab” as follows:

# Default runlevel. The runlevels used by RHS are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)

So I needed to get into “Single user mode” to run the tests, which could be done two ways. Before I tell you how though, it is important to note that if you are trying to do something like this remotely, normal SSH/Telnet will not be accessible, so you will need either physical access to the computer, or something like a serial console connection, which can be routed through networks.

So the two ways are:
  • Through the “init” command. Running “init #” at the console, where # is the runlevel number, will bring you into that runlevel. However, this might not kill all currently running unneeded processes when going to a lower level, though it should get the majority of them, I believe.
  • Append “s” (for single user mode) to the grub configuration file (/boot/grub/grub.conf on my system) at the end of the line starting with “kernel”, then reboot (see the example lines below). I am told appending a runlevel number may also work.
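For example, a “kernel” line would go from something like the first line below to the second (the kernel version and root device here are made-up placeholders; use whatever your grub.conf already has):

kernel /vmlinuz-VERSION ro root=/dev/VolGroup00/LogVol00
kernel /vmlinuz-VERSION ro root=/dev/VolGroup00/LogVol00 s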
SpamSoap
Info on a spam filtering solution

I was long ago pointed to SpamSoap by a friend who helped lead the IT department of a rather large and prestigious law firm. It seems to be an excellent spam filtering solution, but can get to be rather expensive as it is a pay per month per mailbox program, kind of (you pay in groups, like 1-5, 6-10, ..., 201-250, etc). I wouldn’t mind too much trying out the filtering with Google’s domain email program, but Google has marked multiple legitimate emails as spam in my Gmail account in the past, and I don’t trust their cloud computing approach too much with my data.

I originally set up a SpamSoap account 2 to 3 years ago for a single client, and have more recently been setting it up for myself, family, and some other clients. The client that has been using it all that time has been very happy with it, and the only reason I didn’t start using it for myself and others back then was because it marked a legitimate email as spam for me, and diagnosing why didn’t get very far with their tech support. I have, however, done a lot more research into their system recently, asking their staff lots and lots of questions to understand the system more, and I believe I now know why that email was caught as spam. Unfortunately, their documentation is horrible and their site doesn’t really go into details at all, so information on how it all works and how to set some things up is not easy to come by. The problem, in my case, is that SpamSoap considers which server an email arrives from. Servers that SpamSoap receives a lot of spam from are marked as more likely to be sending spam, so unfortunately, forwarding emails from another address to an address on a domain filtered by SpamSoap is a bad idea, as the whole server that manages the forwarding domain then gets marked as having sent a spam message. This is only one of many spam determining metrics used, and, of course, it takes many spam messages to make a difference for a server, but if you are forwarding from an address that receives a lot, bad things happen :-). Anyways, here’s some of the information I gathered on how their system works, and other important tidbits, if anyone is interested in using them.


The way SpamSoap works is you pay for “user accounts”. Each user account has 1 white/black list associated with it (which isn’t technically needed, but helps things along), 1 quarantine area, and receives (if they choose to) daily quarantine reports to their master address. Each user account can have email aliases tied to it, but due to the quarantine area it’s important to separate users. Pricing is based upon the number of user accounts in tiers, like 1-5=$15/month, 6-10=$25/month, 11-20=$42/month, and so on.

The actual filtering is done by setting MX records for the domains to be filtered to SpamSoap, and SpamSoap actually just sets up a proxy connection between the sending server and your server for delivery. If a message is detected as spam during this process, the delivery attempt is canceled.
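As an illustration, the domain’s MX records end up pointing at the filtering service instead of directly at your own mail server, something like the following zone file lines (the hostnames here are hypothetical placeholders; the real ones come from SpamSoap’s setup instructions):

; Before: mail goes straight to your own server
example.com.	IN	MX	10	mail.example.com.
; After: mail is routed through the filtering proxy first
example.com.	IN	MX	10	mx1.filteringservice.example.
example.com.	IN	MX	20	mx2.filteringservice.example.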

There are 2 types of incoming spam that can be filtered in different ways by the system: high scoring (100% likelihood) spam, and medium scoring (>90% or something like that; an exact number is not obtainable) spam. With either of these you can choose to: tag the message with “[SPAM]” in the header, quarantine the message, deny delivery, or let it through. There are also filtering rules and actions you can set up based on other criteria, like: viruses, content (profanity, racial insensitivity, sexual overtones, etc), click protection, and attachments.

Domain grouping with aliases is a slightly more complicated topic. You can have as many domains as you want, and it does not affect pricing; only the number of user accounts does (or if you choose other options, listed here).

Basically, first, you have master domains. A master domain can have multiple alias domains tied to it. All email addresses with the same first section are aliases of each other in this setup. For example, if domain1.com is a master and domain2.com is an alias, then me@domain1.com and me@domain2.com are email aliases of each other no matter what. If you wanted to alias “myself” with that same user, then those 2 plus myself@domain1.com and myself@domain2.com would all be the same user. In this setup, if you wanted me@domain1.com and me@domain2.com as separate users, you would have to split up the domains so they are not aliased (in a group). You cannot, however, alias emails across domains that are not aliased; for example, if both domain1.com and domain2.com were master domains, you could not alias me@domain1.com and myself@domain2.com. These configuration issues really only tend to be problems with generic names like “info@” and “admin@”. For example, a problem would creep up if me@domain1.com wanted to alias myself@domain2.com but info@domain1.com and info@domain2.com needed to be separate user accounts. If this happened, domain1.com and domain2.com would both have to be their own master domains, myself & me would have to be separate user accounts, the white/black lists would need to be duplicated, and 2 quarantine reports would come in.


I would personally recommend that all normal user inboxes have high likelihood spam denied, medium likelihood spam quarantined, and anything with a virus denied with a return notification. Also, anyone who does not want to be filtered on a domain that runs through SpamSoap would need to be on one user account as aliases with the no filtering option set. The same goes for users who do not need a quarantine (freeloaders ^_^; ), in which case one user account could be set up for basic filtering w/o quarantine and lists.

Because of the no-forwarding problem stated above, all domains with emails that need filtering would have to be pushed through SpamSoap, and they could then be forwarded afterwards to the appropriate inbox from your own servers. So, in other words, domains that go through SpamSoap cannot be forwarded TO and filtered unless the domains forwarding to them are also set up with SpamSoap. The consequences otherwise are a higher likelihood of anything forwarded being counted as spam, and of that server being marked as a potential spammer.


SpamSoap also has separate reseller and partner programs for people that forward them business, but they would only be useful if one sent a lot of business their way, generating SpamSoap lots of revenue.


I hope that all made sense, it wasn’t easy to write out x.x; .

WinampRC
When the broad solution won’t cut it, get specific

I wrote earlier about my new entertainment center and how evil it has been. Unfortunately, things have only been getting worse. After trying to play music on it while torrenting or doing other things, I found out it can pretty much only do 1 task at a time, and barely, so I’ve decided to make it now only act as a music station and occasionally watch video through it when the video doesn’t require too much power. I even found an old 256MB stick of PC2700 RAM to put in it (yay for finding random antiquated computer parts around the house!), which did not help, as expected. This regrettably means I will have to keep my current home server at its job, which is a major power hog, and way too powerful for what it does, but ah well.

When listening to music I have the obvious need to easily pause playback, and the occasional need to skip songs I don’t feel like listening to ATM. I would normally use the multiple remote desktop hack for this, but the computer just can’t handle 2 XP sessions going at once. For this reason, Synergy (a great way to do KVM through software) would normally be the perfect fallback solution, except I’d rather not have to use my TV (which is the computer’s primary video output) just to control music on the surround sound system. That, and I’d rather not have to use the TV at all for the computer, because, as written before, I have to go through 5 minutes of hoops to get video working right on it. So the solution was to find a remote way to control Winamp, the only music player I’ve used since around ’98 :-).

After some searching, I found WinampRC, and it fits the remote control solution perfectly, especially as it is super lightweight! The only real problem I have with it is that its playlist editor is rather underdeveloped, and it’s hard to add music, especially in batches. Another minor problem is that there are no global keyboard shortcuts :-(, but I can fix this later with other software through macros. All in all though, I’m very happy with it :-).


[Edit on 2008-09-03 @ 7:34am]

Unfortunately, one other semi-major problem has crept up with the program, and it will be a hard one, if not impossible, to diagnose. Sometimes, a few seconds after switching to a new song, it automatically skips to the next song on the list. I can only assume this is because it has improperly measured audio playback times and thinks the current song should have already finished. This isn’t as bad as it could be though, and only happens occasionally, so I won’t be looking for another solution just yet.


[Edit on 2008-09-06 @ 4:30pm]

Ok, I’m just using a normal keyboard, with a PS/2 extension cord, hooked up to the computer to issue shortcuts ~.~ . At least I still don’t have to keep the TV on.

parseInt in JavaScript
Know your libraries!!!

A very important part of programming languages is the standard library that comes with them. PHP has one of the strongest base standard libraries I’ve ever seen. It’s also great to always be able to just throw out any function call in a script and not need to look up the library file that you need to include! Perl has one of the largest official library sets (not included by standard) that I know of, but I find it a pain always having to remember which libraries I have to include for all the different functions I need. Though this is probably just because I don’t use Perl that much, as I have most of the C standard include libraries memorized, heh.

To properly use any function from any library, it is important to know exactly how it is supposed to work and any idiosyncrasies. You can never know EXACTLY how a function works unless you have the source for it, but you can pretty much always guess the gist of the internals. This is one of the reasons I have always enjoyed writing my own Personal Libraries, besides the fact that I find it fun getting down in the nitty gritty of things. Not knowing the inner workings of a function is not really a problem when programming, as this is the whole point of encapsulation, and documentation is usually sufficient.

I ran into a problem with the parseInt (sister of parseFloat) JavaScript function a long ways back, however (this topic has been written down for years to talk about). JavaScript is kind of special in that it is a language you just kind of jump into, assuming you can quickly pick up everything, as there is very little to its base library. One would assume that the “parseInt” function would just turn anything given to it into an integer, so “parseInt('123')” would return “123” and “parseInt(1.4)” would return “1”, as expected. The gotcha comes in if you pass a string with a 0 before an integral number, in which case it assumes the number is in octal (base 8 math). I found this out by accident when parsing time strings, where minutes are always 2 digits with leading 0s. When “parseInt('09')” is called, it returns “0” because 9 is not part of base 8 math. Oops! parseInt stops at the first character it identifies that is not part of the base it is currently parsing in. Incidentally, parseInt will also parse hex[adecimal] (base 16) strings, as per standard C syntax; for example, “parseInt('0x10')” returns “16”. I would have just said standard hex syntax, but not all languages represent hex in that manner; for example, Visual Basic requires &H before a hex number instead, so “&H10” represents “16”.
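The standard fix for this gotcha is parseInt’s optional second parameter, the radix, which removes the guessing entirely:

parseInt('09');     //Returns 0  - the leading 0 triggers octal parsing, and 9 is not an octal digit
parseInt('09', 10); //Returns 9  - an explicit radix of 10 forces decimal parsing
parseInt('0x10');   //Returns 16 - hex is auto-detected from the 0x prefix
parseInt('10', 16); //Returns 16 - or hex can be forced via the radix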

Microsoft IIS Bug
Bad Programming: Only using file extensions as an indicator

According to a Microsoft KB article titled “Virtual directory names with executable extensions are not used correctly”, using a virtual folder ending in an executable extension (like .com, .exe, .dll, or .sh) under an IIS [Microsoft’s Internet Information Services server suite] web server makes the contents inside the folder unviewable. This behavior itself is kind of silly, as you’d assume a web server would always check to see whether something is a file or a folder first.

Unfortunately, this doesn’t apply to just virtual folders, but all folders under an IIS web server, as I found out a few years ago when I backed up a site that I knew would be taken down very soon (ironically, because the company [SysInternals] was being taken over by Microsoft) and mirrored it on my Home Server, which runs IIS.

The solution I used was to add a character (in my case an underscore “_”) to the end of all the directory names ending in “.com” and then doing a global regular expression replace through all files in the mirror to replace any occurrences of these directories.


Search For: “(DOMAIN1|DOMAIN2|DOMAIN3)([\\/])
Replace With: “$1_$2

I still plan on getting up some site mirrors of places that no longer exist and such for the miscellaneous section one of these days...

Custom Fonts in Web Browsers
Solutions for a strict medium

A very important part of the design world is fonts, but fonts are an unfortunately annoying part of web browser land. There are very few fonts that come by default with OSs, and even fewer default ones that match each other across all OSs, so your website won’t look the same across all platforms unless you use the right combinations. It’s pretty much guaranteed that if you want anything even remotely special in terms of a font somewhere on your website, you will be out of luck matching it across all platforms.

The commonplace solution for this is, of course, creating images for whenever you need special fonts displayed. While this is the most elegant solution, it is only appropriate for special circumstances, and not normal site content, as image file sizes can get ridiculous, and you lose plain text advantages like searchability and search engine recognition. Another solution is to ask the user to download the font, like here. While this is a valid solution, the vast majority of users would not download the font because, mostly, they don’t care enough, and secondly, people generally know not to download unfamiliar files on the Internet when they don’t have to, for security reasons.

This has actually been a problem for me recently as I realized some of the default fonts I use for my site, which have always come with Windows, do not have default equivalents that come with most Linux distributions, as I had assumed. That’s a topic for a different day though.


So a customer recently requested the ability to dynamically display some text in a certain font, and I told him there are 2 solutions. The first would be to use JavaScript to load translucent PNG images; the second would be to embed a Flash applet, as Flash can store font files internally for use. So here are instructions and examples of both:


JavaScript + PNG Translucency (alpha blending) Method
There are 2 ways to create the PNG translucency in Photoshop; one easier but less effective way that doesn’t maintain quality, and a slightly more complex path with better results.
  • To start off for both paths, a screenshot (ALT+PRINT SCREEN to take only the current window) will need to be taken of the font rendered in black against a white background. This can be done in your favorite word processor as long as it properly renders with translucency, or (for Windows) by just going to the font file in “c:\windows\fonts” and opening it, which uses “fontview.exe”.
  • After you have the screenshot, open a new file in Photoshop (File > New OR CTRL+N) and paste the screenshot into a new layer (Edit > Paste OR CTRL+V)
  • Delete the background layer, which requires the layer window to be open (Window > Layers OR F7 to toggle its display). Right click the text portion “Background” of the background layer, and choose “Delete Layer”.
  • Select the region that contains your font’s alphabet (M for selection tool) and crop it (Image > Crop).
  • You might want to zoom in at this point for easier viewing (CTRL++ for in, CTRL+- for out).
  • The easy way from there:
    • Deselect the area (Select > Deselect OR CTRL+D).
    • Select the Magic Wand tool (W), set Tolerance to 0, check Anti-Aliased, and uncheck Contiguous
    • Select a pure white pixel and then delete the selection (DELETE)
    • You now have a translucent image that you can save and use, but the translucency isn’t that of the original font, as that is not how the magic wand tool works.
    Example using “Aeolus True Type Font” (Set against a green background via HTML for example sake)
    Translucent Aeolus True Type Font Easy Method
  • The better way:
    • Add a mask to your current layer (Layer > Add Layer Mask > Reveal All)
    • Go to the channels window (Window > Channels to toggle its display, it should be in the same window as Layers, in a separate tab) and select either the red, green, or blue layer. It doesn’t matter which as they should all hold the exact same values (grayscale [white-black colors] have the same red, green, and blue values), so red channel (CTRL+1) is fine.
    • Copy the channel (CTRL+C) (the entire workspace should still be selected after the crop)
    • Select the mask channel (CTRL+\), and you also need to make it visible (toggle the little eyeball icon besides it)
    • Paste into the mask channel (CTRL+V), invert it (Image > Adjustments > Invert OR CTRL+I), and then make it invisible again (untoggle little eyeball icon besides it)
    • Reselect the RGB contents (CTRL+~) and flood fill it with black [or your color of choice]: Paint Bucket Tool (G), 255 tolerance, no antialias
    • You now have a translucent image of the font that you can save and use that has the original font quality. You can test it by adding a white layer below it.
    Example using “Aeolus True Type Font” (Set against a green background via HTML for example sake)
    Translucent Aeolus True Type Font Good Method
From there, the image file can be split up into individual images named “a.png”, “b.png”, etc, and a simple JavaScript string replace could be used to convert a string into the picture text, like “'MyString'.replace(/(.)/g, '<img src="$1.png">')”.
Example (this is produced by JavaScript):

Internet Explorer 6 also has the added problem of not supporting PNG translucency, so a hack is needed for it. Basically, an element (like a blank image) needs to have its filter style set like the following (a JavaScript DirectX hack...)
style.filter="progid:DXImageTransform.Microsoft.AlphaImageLoader(src='IMAGELOCATION', sizingMethod='scale')";


Flash Method
While this method is much quicker to complete and easier to pull off than the previous method, it is also more prone to problems and browser incompatibility. Flash and JavaScript never got along well enough in my book. Anywho, here’s the process. (Source file here)
  • In a new Flash document (v5.0+), create a text box with the following properties:
    • Type: “Dynamic Text”
    • var: MyText
    • Font: YOURFONTCHOICE
    • Embed (button): Select the set of characters the dynamic text box might display. The fewer glyphs you select, the smaller the output file will be. I included all alpha-numeric+punctuation in the below example (24.3KB).
  • That’s all you need for the Flash file, so all that’s left now is the JavaScript. The following function will set the text inside the movie for you. Also, you should give the embed (for normal browsers) and object (for IE) tags different “id”s. The wmode parameter is important here too, in that it makes the background invisible and the Flash applet more a part of the web page (not a “separate window”).
    <object width="300" height="40" id="CustomFontIE" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000">
    	<param name="movie" value="OtherContent/CustomFonts/CustomFont.swf">
    	<param name="wmode" value="transparent">
    	<embed src="OtherContent/CustomFonts/CustomFont.swf" wmode="transparent" width="300" height="40" id="CustomFont" type="application/x-shockwave-flash">
    </object>
    <script type="text/javascript">
    	var IsIE=(navigator.appName.indexOf('Microsoft')!=-1);
    	function SetFlashText(NewText) { document.getElementById('CustomFont'+(IsIE ? 'IE' : '')).SetVariable('MyText', NewText); }
    </script>
    		
Example: (Set against a green background via HTML for example sake)
Enter text here:
Flash applet:
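With the markup above in place, updating the applet’s displayed text is then just a single call, for example:

SetFlashText('The quick brown fox');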



Core Dump Files
Not all OSs crash in the same way :-)

If you ever find a file named “core.#” when running Linux, where # is replaced by a number, it means something crashed at some point. Most of the time, you will probably just want to delete the file, but sometimes you may wonder what crashed. To find out, you use gdb (the GNU debugger), a very powerful tool, to analyze the core dump file.

gdb --core=COREFILENAME

Near the very bottom of the blob of text outputted after running this command, you should see a line that says “Core was generated by `...'.”. This tells you the command line of the process that crashed. To exit gdb, enter “quit”. You can also use gdb to find out what actually happened and troubleshoot/debug the problem, but that’s a very long and complex topic.
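For example, a quick session might look like the following (the core file name and program path are hypothetical). Passing the program binary along with the core lets gdb resolve symbols, and “bt” prints the backtrace of the crash:

gdb --core=core.12345 /usr/local/bin/someprogram
(gdb) bt
(gdb) quit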


Recently, I started seeing hundreds of core dump files taking up gigabytes of space in “/usr/local/cpanel/whostmgr/docroot/” on several of our web servers. According to several online sources, it seems cPanel (web hosting made easy!) likes to dump many, if not all, of its programs' core files into this directory. In our case, it has been “dnsadmin” doing the crashing. We’ve been having some pretty major DNS problems lately, this time on the name server level, so I may have to rebuild our DNS cluster in the next few days. Joy.

Comparing Log Files
Slow news day...

So for reasons I’m not going to go into, today I had to compare some log files. I was tempted to write the code in C, just because I miss it so much these days x.x;, but laziness won out, especially as there weren’t that many log files and they weren’t that large, so I wrote it in PHP.

Nothing else to this post except the working code, which took me about 5 minutes to type out... The function goes through one directory and all of its subdirectories and checks all files against the same path in a second directory. If a file doesn’t exist in the second directory, or its contents don’t match the first file’s contents up to the first file’s length, a message is outputted.


//Start the log run against 2 root directories
TestLogs("/DIR1", "/DIR2");

function TestLogs($RootDir1, $RootDir2, $CurDir="")
{
	//Iterate through the first directory
	$Dir1=opendir("$RootDir1$CurDir");
	$SubDirs=Array(); //Holds subdirectories
	while(($File=readdir($Dir1))!==false) //Explicit false check so an entry named "0" does not end the loop
		if($File=="." || $File=="..") //Skip . and ..
			continue;
		else if(is_dir("$RootDir1$CurDir/$File")) //Do not try to compare directory entries
			$SubDirs[]=$File; //Remember subdirectories
		else if(!file_exists("$RootDir2$CurDir/$File"))
			print "File '$CurDir/$File' does not exist in second directory.<br>";
		else if(file_get_contents("$RootDir1$CurDir/$File")!=substr(file_get_contents("$RootDir2$CurDir/$File"),0,filesize("$RootDir1$CurDir/$File"))) //Both files exist, so compare them - if first file does not equal second file up to the same length, output error
			print "'$CurDir/$File' does not match.<br>";
	
	closedir($Dir1); //Done with this directory's handle
	
	//Run subdirectories recursively after the current directory's file-run so directories do not get split up
	foreach($SubDirs as $NewDir)
		TestLogs($RootDir1, $RootDir2, "$CurDir/$NewDir");
}
Regular Expression Examples
Finding multiple domains’ name servers

Today I thought I’d give a demonstration of the use of regular expressions [reference page here]. Regular expressions are basically a simplified scripting language for finding and replacing complex text strings, and they are implemented in much of today’s software that involves a lot of text editing. They are a fabulously handy tool for computer users and are especially useful for programmers. I believe RegExps actually originally gained their notoriety through the Perl programming language. I also recently heard that it is definite that the new version of C++ (C++0x) will have native library support for regular expressions, yay!

Since I posted yesterday on DNS stuff, and have the examples from it handy, I figured I’d use those :-).


Let’s say you had a group of .com domains and wanted to find out their name servers (I’ve had to do this when switching to new name servers to make sure all the domains we did not control at the registrar level had their name servers set to the new ones). For this example, we will use the following domains “castledragmire.com”, “riaboy.com”, “NonExistantDomainA.com”, and “dakusan.com”.

  • First, we’d need to have the list of the domains, for this example, one domain per line is used.
    castledragmire.com
    riaboy.com
    NonExistantDomainA.com
    dakusan.com
  • Next, we need to turn them into a bash (Linux) script to grab all the information we need.
    Replace: “^(.*)$
    With: “echo '!?$1?!'; host -t ns $1 a.gtld-servers.net | grep ' name server ';”
    Sample output: (The !? ?! stuff are markers for easier viewing and parsing)
    echo '!?castledragmire.com?!'; host -t ns castledragmire.com a.gtld-servers.net | grep ' name server ';
    echo '!?riaboy.com?!'; host -t ns riaboy.com a.gtld-servers.net | grep ' name server ';
    echo '!?NonExistantDomainA.com?!'; host -t ns NonExistantDomainA.com a.gtld-servers.net | grep ' name server ';
    echo '!?dakusan.com?!'; host -t ns dakusan.com a.gtld-servers.net | grep ' name server ';
  • Next, we run the script, and it would output the following:
    !?castledragmire.com?!
    castledragmire.com name server ns3.deltaarc.com.
    castledragmire.com name server ns4.deltaarc.com.
    !?riaboy.com?!
    riaboy.com name server ns3.deltaarc.com.
    riaboy.com name server ns4.deltaarc.com.
    !?NonExistantDomainA.com?!
    !?dakusan.com?!
    dakusan.com name server ns3.deltaarc.com.
    dakusan.com name server ns4.deltaarc.com.
  • Next, we would keep running the following regular expression until no more replacements are found.
    This would combine all domains with multiple name servers onto one line with name servers separated by spaces.
    Replace: “(.*?) name server (.*)\n\1 name server (.*)
    With: “$1 name server $2 $3
    It would output the following:
    !?castledragmire.com?!
    castledragmire.com name server ns3.deltaarc.com. ns4.deltaarc.com.
    !?riaboy.com?!
    riaboy.com name server ns3.deltaarc.com. ns4.deltaarc.com.
    !?NonExistantDomainA.com?!
    !?dakusan.com?!
    dakusan.com name server ns3.deltaarc.com. ns4.deltaarc.com.
  • The final regular expression would turn the output into a single line per domain, followed by its domain servers. The current extra line before the list of name servers is to help spot any domains that did not provide us with name servers.
    Replace: “!\?(.*?)\?!\n\1 name server (.*)
    With: “#$1 \t $2
    Which would output the final following data:
    #castledragmire.com ns3.deltaarc.com. ns4.deltaarc.com.
    #riaboy.com ns3.deltaarc.com. ns4.deltaarc.com.
    !?NonExistantDomainA.com?!
    #dakusan.com ns3.deltaarc.com. ns4.deltaarc.com.
    This data could be directly pasted into Excel, which would put the first column as domains and the second column as name servers.
Diagnosing DNS Problems
Digging until you find the root

Yesterday I wrote a bit about the DNS system being rather fussy, so I thought today I’d go a bit more into how DNS works, and some good tools for problem solving in this area.


First, some technical background on the subject is required.
  • A network is simply a group of computers hooked together to communicate with each other. In the old days, all networking was done through physical wires (called the medium), but nowadays much of it is done through wireless connections. Wired networking is still required for the fastest communications, and is especially important for major backbones (the super highly utilized lines that connect networks together across the world).
  • A LAN is a local network of all computers connected together in one physical location, whether it be a single room, a building, or a city. Technically, a LAN doesn’t have to be localized in one area, but it is preferred, and we will just assume it is so for argument’s sake :-).
  • A WAN is a Wide (Area) Network that connects multiple LANs together. This is what the Internet is.
  • The way one computer finds another computer on a network is through its IP Address [hereby referred to as IPs in this post only]. There are other protocols, but this (TCP/IP) is by far the most widely utilized and is the true backbone of the Internet. IPs are like a house’s address (123 Fake Street, Theoretical City, Made Up Country). To explain it in a very simplified manner (this isn’t even remotely accurate, as networking is a complicated topic, but this is a good generalization), IPs have 4 sections of numbers ranging from 0-255 (1 byte). For example, 67.45.32.28 is a (version 4) IP. Each number in that address is a broader location, so the “28” is like a street address, “32” is the street, “45” is the city, and “67” is the country. When you send a packet from your computer, it goes to your local (street) router which then passes it to the city router and so on until it reaches its destination. If you are in the same city as the final destination of the packet, then it wouldn’t have to go to the country level.
  • The final important part of networking (for this post) is the domain system (DNS) itself. A domain is a label for an IP Address, like calling “1600 Pennsylvania Avenue” as “The White House”. As an example, “www.castledragmire.com” just maps to my web server at “209.85.115.128” (this is the current IP, it will change if the site is ever moved to a new server).

Next is a brief lesson on how DNS itself works:
  • The root DNS servers (a.root-servers.net through m.root-servers.net) point to the servers that hold top-level-domain information (.com, .org, .net, .jp, etc)
    Examples of these servers are as follows:
    .au: ns1.audns.net.au
    .biz: E.GTLD.biz
    .ca: CA04.CIRA.ca
    .cn: A.DNS.cn
    .com & .net: A.GTLD-SERVERS.NET
    .de: Z.NIC.de
    .eu: U.NIC.eu
    .info: B9.INFO.AFILIAS-NST.ORG
    .org: TLD1.ULTRADNS.NET
    .tv: C5.NSTLD.COM
  • Next, these TLD name servers (like A.GTLD-SERVERS.NET through M.GTLD-SERVERS.NET for .com) hold two main pieces of information for ALL domains under their top-level-domain jurisdiction:
    • The registrar where the domain was registered
    • The name server(s) that are responsible for the domain
    Only registrars can submit changes to these TLD servers, so you have to go through your registrar to change the name server information.
  • The final lowest rung in the DNS hierarchy is name servers. Name servers hold all the actual addressing information for a domain and can be run by anyone. The 2 most important (or maybe relevant is a better word...) types of DNS records are:
    • A: There should be many of these, each pointing a domain or subdomain (castledragmire.com, www.castledragmire.com, info.castledragmire.com, ...) to a specific IP address (version 4)
    • SOA: Start of Authority - There is only one of these records per domain, and it specifies authoritative information including the primary name server, the domain administrator’s email, the domain serial number, and several timeout values relating to refreshing domain information.

Now that we have all the basics down, on to the actual reason for this post. It’s really a nuisance trying to explain to people why their domain isn’t working, or is pointing to the wrong place. So here’s why it happens!

Back in the old days, it often took days for DNS propagation to happen after you made changes at your registrar or elsewhere; fortunately, this problem is now a thing of the past. The reason it took so long is that ISPs and/or routers cached domain lookups and only refreshed them according to the metrics in the SOA record mentioned above, as they were supposed to. This was done for network speed reasons, as I believe older OSs might not have cached domains (wild speculation), and ISPs didn’t want to look up the address for a domain every time it was requested. Now, though, I rarely see caching on any level except at the local computer; not only does the OS cache domains, but some programs, like Firefox, cache them too.

So the answer for when a person is getting the wrong address for a domain, and you know it is set correctly, is usually to just reboot. Clearing the DNS cache works too (for the OS level), but explaining how to do that is harder than saying “just reboot” ^_^;.

To clear the DNS cache in XP, enter the following in the “run” menu or at the command prompt: “ipconfig /flushdns”. This does not ALWAYS work, but it usually does.


If your domain is still resolving to the wrong address when you ping it after your DNS cache is cleared, the next step is to see which name servers are being used for the information. You can do a whois on your domain to get the information directly from the registrar who controls the domain, but be careful where you do this, as you never know what people are doing with the information. For a quick and secure whois, you can use “whois” from your Linux command line, which I have patched through to a web script here. This script gives both normal and extended information, FYI.

Whois just tells you the name servers that you SHOULD be contacting; it doesn’t mean these are the ones you are actually asking, as the root DNS servers may not have updated the information yet. This is where our command line programs come into play.

In XP, you can use “nslookup -query=hinfo DOMAINNAME” and “nslookup -query=soa DOMAINNAME” to get a domain’s name servers, and then “nslookup NAMESERVER DOMAINNAME” to get the IP of the name server itself. For example: (important information in the following examples is bolded and in white)

C:\>nslookup -query=hinfo castledragmire.com
Server:  dns-redirect-lb-01.texas.rr.com
Address:  24.93.41.127

castledragmire.com
        primary name server = ns3.deltaarc.com
        responsible mail addr = admins.deltaarc.net
        serial  = 2007022713
        refresh = 14400 (4 hours)
        retry   = 7200 (2 hours)
        expire  = 3600000 (41 days 16 hours)
        default TTL = 86400 (1 day)

C:\>nslookup -query=soa castledragmire.com
Server:  dns-redirect-lb-01.texas.rr.com
Address:  24.93.41.127

Non-authoritative answer:
castledragmire.com
        primary name server = ns3.deltaarc.com
        responsible mail addr = admins.deltaarc.net
        serial  = 2007022713
        refresh = 14400 (4 hours)
        retry   = 7200 (2 hours)
        expire  = 3600000 (41 days 16 hours)
        default TTL = 86400 (1 day)

castledragmire.com      nameserver = ns4.deltaarc.com
castledragmire.com      nameserver = ns3.deltaarc.com
ns3.deltaarc.com        internet address = 216.127.92.71

C:\>nslookup ns3.deltaarc.com castledragmire.com
Server:  ev1s-209-85-115-128.theplanet.com
Address:  209.85.115.128

Name:    ns3.deltaarc.com
Address:  216.127.92.71

Nslookup is also available in Linux, but Linux has a better tool for this, as nslookup itself doesn’t always seem to give the correct answers, for some reason. So I recommend you use dig if you have it or Linux available to you. With dig, we just start at the root name servers and work our way down the hierarchy to the SOA name server to get the real information on where the domain is resolving to and why.

root@www [~]# dig @a.root-servers.net castledragmire.com

; <<>> DiG 9.2.4 <<>> @a.root-servers.net castledragmire.com
; (2 servers found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5587
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 14

;; QUESTION SECTION:
;castledragmire.com.            IN      A

;; AUTHORITY SECTION:
com.                    172800  IN      NS      H.GTLD-SERVERS.NET.
com.                    172800  IN      NS      I.GTLD-SERVERS.NET.
com.                    172800  IN      NS      J.GTLD-SERVERS.NET.
com.                    172800  IN      NS      K.GTLD-SERVERS.NET.
com.                    172800  IN      NS      L.GTLD-SERVERS.NET.
com.                    172800  IN      NS      M.GTLD-SERVERS.NET.
com.                    172800  IN      NS      A.GTLD-SERVERS.NET.
com.                    172800  IN      NS      B.GTLD-SERVERS.NET.
com.                    172800  IN      NS      C.GTLD-SERVERS.NET.
com.                    172800  IN      NS      D.GTLD-SERVERS.NET.
com.                    172800  IN      NS      E.GTLD-SERVERS.NET.
com.                    172800  IN      NS      F.GTLD-SERVERS.NET.
com.                    172800  IN      NS      G.GTLD-SERVERS.NET.

;; ADDITIONAL SECTION:
A.GTLD-SERVERS.NET.     172800  IN      A       192.5.6.30
A.GTLD-SERVERS.NET.     172800  IN      AAAA    2001:503:a83e::2:30
B.GTLD-SERVERS.NET.     172800  IN      A       192.33.14.30
B.GTLD-SERVERS.NET.     172800  IN      AAAA    2001:503:231d::2:30
C.GTLD-SERVERS.NET.     172800  IN      A       192.26.92.30
D.GTLD-SERVERS.NET.     172800  IN      A       192.31.80.30
E.GTLD-SERVERS.NET.     172800  IN      A       192.12.94.30
F.GTLD-SERVERS.NET.     172800  IN      A       192.35.51.30
G.GTLD-SERVERS.NET.     172800  IN      A       192.42.93.30
H.GTLD-SERVERS.NET.     172800  IN      A       192.54.112.30
I.GTLD-SERVERS.NET.     172800  IN      A       192.43.172.30
J.GTLD-SERVERS.NET.     172800  IN      A       192.48.79.30
K.GTLD-SERVERS.NET.     172800  IN      A       192.52.178.30
L.GTLD-SERVERS.NET.     172800  IN      A       192.41.162.30

;; Query time: 240 msec
;; SERVER: 198.41.0.4#53(198.41.0.4)
;; WHEN: Sat Aug 23 04:15:28 2008
;; MSG SIZE  rcvd: 508

root@www [~]# dig @a.gtld-servers.net castledragmire.com

; <<>> DiG 9.2.4 <<>> @a.gtld-servers.net castledragmire.com
; (2 servers found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35586
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;castledragmire.com.            IN      A

;; AUTHORITY SECTION:
castledragmire.com.     172800  IN      NS      ns3.deltaarc.com.
castledragmire.com.     172800  IN      NS      ns4.deltaarc.com.

;; ADDITIONAL SECTION:
ns3.deltaarc.com.       172800  IN      A       216.127.92.71
ns4.deltaarc.com.       172800  IN      A       209.85.115.181

;; Query time: 58 msec
;; SERVER: 192.5.6.30#53(192.5.6.30)
;; WHEN: Sat Aug 23 04:15:42 2008
;; MSG SIZE  rcvd: 113

root@www [~]# dig @ns3.deltaarc.com castledragmire.com

; <<>> DiG 9.2.4 <<>> @ns3.deltaarc.com castledragmire.com
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26198
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0

;; QUESTION SECTION:
;castledragmire.com.            IN      A

;; ANSWER SECTION:
castledragmire.com.     14400   IN      A       209.85.115.128

;; AUTHORITY SECTION:
castledragmire.com.     14400   IN      NS      ns4.deltaarc.com.
castledragmire.com.     14400   IN      NS      ns3.deltaarc.com.

;; Query time: 1 msec
;; SERVER: 216.127.92.71#53(216.127.92.71)
;; WHEN: Sat Aug 23 04:15:52 2008
;; MSG SIZE  rcvd: 97

Linux also has the “host” command, but I prefer and recommend “dig”.
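As a shortcut, dig can also walk this whole chain from the root for you with its “+trace” option, printing each delegation step along the way:

dig +trace castledragmire.com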


And that’s how you diagnose DNS problems! :-). For reference, two common DNS configuration problems are not having your SOA and NS records properly set for the domain on your name server.


I also went ahead and added dig to the “Useful Bash commands and scripts” post.

Windows Hosts File
When DNS decides to be finicky

Another of my favorite XP hacks is modifying domain addresses through XP’s Hosts file. You can remap where a domain points on your local computer by adding an IP address followed by a domain in the “c:\windows\system32\drivers\etc\hosts” file.
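For example, adding lines like the following would remap those domains on your machine only. The first points www.castledragmire.com at my web server’s IP (the same one shown in the DNS diagnostics post), and the second (a hypothetical entry) effectively blocks a domain by pointing it at the localhost loopback:

209.85.115.128	www.castledragmire.com
127.0.0.1	ads.annoyingsite.com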

Domain names are locally controlled, looked up, and cached on your computer at the OS level, so there are simple hacks like this for other OSs too.

I often utilize this solution as a server admin who controls a lot of domains (Over 100, and I control most of them at the registrar level too ^_^). The domain system itself across the web is incredibly fastidious and prone to problems if not perfectly configured, so this hack is a wonderful time saver and diagnostic tool until things resolve and work properly.

Computers are Evil
Setting up new computers can be quite the hassle

The new home server for the new entertainment center I recently set up has made itself out to be quite a nuisance. I am unsure as to whether I will keep using it or not, but fortunately, I have not yet taken down my old home server, as I wanted to do some break in testing on the new one first.

Setting up new computers is almost always a pain in the ass, what with installing and configuring all the software from scratch (which always includes a format and new OS), and making sure all the hardware works properly and finding drivers for it (sometimes when you don’t even have the proper information on what that hardware is). But sometimes, computers can go above and beyond the normal setup nuances and annoyances and be downright evil. I have long proclaimed to people that computers have personalities and minds of their own and they decide when and where they want to be accommodating or uncooperative. Besides all the normal computer setup problems (including not knowing what the hardware was and having to figure that out), this one also had a few more doozies.

The first big problem started with the fact that I wanted to use this computer for video output, and it does not have an AGP slot. As I contemplated in the previous post on this topic, I went ahead and bought a PCI Geforce 5200 for $27.79 including shipping. The card did not fit properly in the new case, so I had to unscrew a few things, which were fortunately designed for just that reason. Then the big problem came up in that video outputted from the s-video port on the card showed up on the TV at a 50% over zoom, so I couldn’t see half the screen. I couldn’t test the monitor output port either because it is DVI, and I have no DVI monitors, alas. After 2 or 3 hours of tinkering with it and throwing everything plus the kitchen sink at the problem, including trying a different s-video cable, I finally stumbled on the solution and got it working, yay. That is... until after I rebooted and it wasn’t working again x.x;. Another 20 or so more minutes of tinkering got it fixed again, and I was able to quickly hone down on a procedure to fix the problem on the next reboot, optimizing it with each successive reboot over the next few days. The procedure is as follows: (The TV over s-video starts as the primary monitor, and I have a second monitor connected to the VGA port to the onboard graphics card)

  • Open “Display Properties” [Right click Desktop > Properties] > Settings
  • Attach second monitor so I can see what I’m doing
  • Open NVidia Control Panel
  • Rotate screen to 90 degrees. It only wants to rotate the screen at 1024x768, which is too high a resolution for the TV, so it kicks the resolution down to 640x480 while rotating
  • Keep setting the screen to no rotation (0 degrees) until the scaling is correct [usually twice]. The NVidia control panel doesn’t want to allow going back to normal rotation now due to the 1024x768 required resolution thing, and will keep the setting set as 90 degrees, so the process can easily be repeated until it works.
  • Now that the screen is at the correct scale (at 640x480), all that’s left is to get the rotation back to normal. To do this, immediately after accepting the rotation process in the NVidia Control Panel, it has to be closed out (alt+f4) so that it saves the rotation setting at 0 degrees but doesn’t try to set it back after all the resolution changes.
  • Raise the resolution back to 800x600
  • Detach secondary monitor now that it is no longer needed

The screen still unfortunately has about 100-200 “pixels” (monitors don’t have pixels, technically) on the top and bottom of the screen that are unused, but eh, NBD. At least this graphics card lets me properly pan and scan (zoom/scale and move) the s-video output around, unlike my Geforce4 Ti 4600! The next problem with the video card is that some video outputted through it is just too slow. Most content is watchable, but on the rest, the choppiness makes it unbearable. The problem might just be that the PCI bus doesn’t have the required throughput, which is why most video cards are used over AGP (or nowadays PCI Express).

There are two final problems with it, one a possible deal killer, the other rather insignificant. The unimportant problem is that XP refuses to install updates; I believe this to be a problem with SP3. The final problem is that the computer seems to randomly completely freeze up every now and then for no particular reason, requiring a reboot. This has happened 2 or 3 times so far, so I’m waiting to see how often it happens, if any more. I know it’s not overheating, as I currently have the case open; and I see no blown capacitors... hmmmm...



<frustration>Computers!</frustration>
Games to relax
Quickie timekillers

Whenever I need to take a break from working to help clear my mind, there are a few types of “repetitive” or short games I enjoy to play.

One of these is Freecell, a solitaire game that comes with XP, and also came with some versions of Windows 98. I really enjoy it because it is a game of pure reason, with no random chance. You know where all the cards are from the beginning and every game is winnable (theoretically at least... I’ve heard there are 2 combinations of the million possibilities in the Windows version that are unwinnable). When I was playing it a lot, I used to easily be able to win dozens of games in a row in under 2 minutes per game. My goal for this game for a long time has been to win 100 games straight without a loss. I have so far clocked in at ~80 as a record IIRC. Always with the stupid mistakes!

Another fun game I discovered in a computer blow-off class my senior year of high school was Icy Tower. I just picked it back up a few weeks ago, and it’s horribly addicting! I really like it because it’s about 90% skill and 10% randomness. Games that require quick reflexes and sharp hand-eye coordination have always been one of my favorite genres, and Icy Tower is full of this. I’ve often found myself while playing the game wishing I could come up with a good idea like it, as programming something of its nature would be incredibly fun. I recently made a high score that I was pretty proud of until I noticed the world high score boards for the game, which are pretty insane (I am linking to a thread instead of the official high score board because the latter is badly programmed and incredibly slow). I can’t help but think a lot of those people cheated... but anywho, the game allows you to save replays of your games, and the file for my high score game is here, and I included a video of it below (more for demonstration of the game ^_^; ). Videos will be uploaded as soon as I get the video card replacement for my laptop, due in later this month, as my current one is failing, but you might as well download the game and play it some, and could watch the better rendered replay there anyways... not that anyone has any reason to watch it, but still XD.


Icy Tower Game in Fast Forward
VIDEO TO COME SOON

Icy Tower Game at Normal Speed
VIDEO TO COME SOON

Now back to work!!! ...
Ancient Software
a.k.a. Video Game Nostalgia Part 2

Oh, the memories of the good old days of gaming! When video games were few and far between, and could be made by one to a handful of people. Yesterday’s post [Video Game Nostalgia] touched on some old games I played when I was but a lad. I decided that for today I’d drag out a lot of the old stuff, see what I still had for curiosity’s sake, and take a picture :-).

All of the software packages are DOS applications (except the Windows upgrades, obviously, and Visual Basic); most everything says it’s for the “IBM/TANDY” :-).

On a silly side note, I had the bad habit of calling PCs (Personal Computers) “IBM Compatibles” (as opposed to Apples) until like 1998, heh.


Ancient Software
From left to right, top to bottom:
Some more really old software I found that I didn’t worry about taking pictures of:

And, Yes, I know I’m a packrat. I inherited it from my Dad :-).

Video Game Nostalgia
And Metal Gear Solid Problems

So a comic [Gunnerkrigg Court] that I enjoy and read daily [updates MWF] recently referenced Metal Gear Solid, which finally made me decide to play through the series.

For reference, whenever I bring up games from here on out, it’s usually to talk about encountered problems, which I will usually provide fixes for, or technical aspects of the game. I’m not qualified, or funny enough, to want to review games; and that is not the purpose of my postings here.


The first thing I wanted to mention is a fix for a graphical problem. As the game is rather “old” (released in 2000 for Windows), it can be incompatible with modern systems. One of the options it uses in hardware mode is 8-bit textures, which is no longer supported, though for the life of me I can’t see why a hack couldn’t be made in the video card drivers for this problem. Because of this, the game only allows you to run in software mode. After a lot of digging and searching, in which every place said the same thing (it’s not fixable), I finally found a hacked executable [Metal Gear Solid v1.0 [ENGLISH] No-CD/WinXP+Vista+GeForce+ATi Fixed EXE] made by a kind soul to fix the problem.

Another problem which really frustrated me was a “puzzle” in the game referring to looking for information on the “back of the CD case”. I had just received an “optical disk” in the game, however, it appeared to be a floppy disk and no matter what I did I couldn’t find the required information with the item. I figured it must have been a bug and finally gave in and looked it up online. It turns out they meant the actual CD case the game came in had a number [radio frequency] written on the back of it - “140.15”. I can only assume they did this as a means of “copy protection” to frustrate anyone who didn’t actually buy the game. Unfortunately, I acquired the game without a CD case so I was frustrated by this myself.


This kind of system reminded me of the very old days of gaming in which some games asked you to input a certain word from a certain paragraph on a certain page of the manual to enter the game, or asked questions with answers found in the manual. One of the games I had that did the former was Teenage Mutant Ninja Turtles [1989] for DOS. I have fond memories of playing this and a (monochrome? [green and black :-) ] IIRC?) version of Muppet Adventure: Chaos at the Carnival [1989] (Dear Thor! heh) [also a DOS game] as they were, IIRC, two of my first video games, though I got many others around that time. Both games also had NES ports released later.

My real favorite childhood games however, which are still both cult classics, were Doom, which got me into the design aspect of making games, and most importantly, ZZT, which is what really got me started on programming in 1991 at the age of 5. I still have the original floppy disks for ZZT too :-). ZZT was more scripting than programming though, and I didn’t start real programming until I got into QBasic in 1993. I might release some of my creations for these games one of these days for nostalgia’s sake ^_^;. I also remember thoroughly enjoying Star Trek: 25th Anniversary for DOS in 1992 :-). I was a nerd even as a kid! ^_^; This game also had copy protection I had forgotten about. As Wikipedia tells:

The game had a copy-protection system in that the player was forced to consult the game’s manual in order to find out which star system they were supposed to warp to on the navigation map. Warping to the wrong system would send them into either the Klingon or Romulan neutral zones, and initiate an extremely difficult battle that often ends with the destruction of the Enterprise.


[Edit 8/16/2008 @ 10:05pm] Pictures of some of this stuff can be found in tomorrow’s post, “Ancient Software”.
Language Optimization Techniques
A few tricks up the programmer’s sleeve

I’m gonna cheat today since it is really late and I’m exhausted; I spent a good amount of time organizing the 3D Engines update, which pushed me a bit behind. Instead of writing some more content, I’m just linking to the “Utilized Optimization Techniques” section of the 3D Engines project, which I put up today.

It describes 4 programming speed optimization tricks: local variable assignment, precalculating index lookups, pointer traversing/addition, and loop unrolling. This project post also goes into some differences between the used languages [Flash, C++, and Java], especially when dealing with speed.
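To give a rough idea of what those tricks look like in practice, here is a minimal C++ sketch I threw together for illustration (not code from the project itself), applying all 4 of them to the simple task of summing a 2D pixel buffer:

#include <cstdio>

// A 2D pixel buffer stored as one flat array.
struct Image { const int *pixels; int width, height; };

// Naive version: re-reads the struct members and recomputes the full
// 2D index (y * width + x) on every single iteration.
long SumNaive(const Image &img)
{
    long sum = 0;
    for (int y = 0; y < img.height; ++y)
        for (int x = 0; x < img.width; ++x)
            sum += img.pixels[y * img.width + x];
    return sum;
}

// Optimized version using all 4 tricks.
long SumOptimized(const Image &img)
{
    const int width = img.width, height = img.height; // 1) local variable assignment
    long sum = 0;
    for (int y = 0; y < height; ++y)
    {
        const int *p = img.pixels + y * width;  // 2) precalculated index lookup
        int x = 0;
        for (; x + 4 <= width; x += 4, p += 4)  // 4) loop unrolled 4x,
            sum += p[0] + p[1] + p[2] + p[3];   // 3) via pointer addition
        for (; x < width; ++x, ++p)             // leftover pixels (width not divisible by 4)
            sum += *p;
    }
    return sum;
}

int main()
{
    const int buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    const Image img = {buf, 4, 2};
    std::printf("%ld %ld\n", SumNaive(img), SumOptimized(img)); // both print 36
}

The usual caveat applies: modern compilers already perform some of these transformations (especially unrolling) on their own, so always profile before and after to make sure a trick is actually buying you anything.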

Multiple Windows XP Sessions
Making XP act like Windows Server

All of the Windows lines of OSs from XP through Windows Server 2003 (or 2005 or 2007?) are, to my knowledge and IMO, basically the exact same thing, with just some minor tweaks and extra software for the more expensive versions. My version of XP Professional even comes with IIS (Internet Information Services - Microsoft’s web/ftp/mail server suite). One of my favorite XP hacks adds on a desperately needed functionality found only in Windows Server editions, which is allowing multiple user sessions on a machine at once. This basically means allowing multiple people to log onto a machine at the same time through Remote Desktop (Microsoft’s built-in remote access system for Windows, similar in concept to VNC). I find the most useful function by far of this is the “Remote Control” feature, which allows a second logged in user to see exactly what is on the screen of another session, and if permissions are given, to take control of it. This is perfect for those people for whom you often have to troubleshoot computer problems, eliminating the need for a trip to their location or 3rd party software to view their computer.

This hack requires a few registry modifications, a group policy modification, and a DLL replacement. The DLL replacement was provided by Microsoft in early versions of XP SP2 when they were tinkering with allowing this feature in XP. I found the information for all this here a number of years ago and it has proven itself invaluable since. Unfortunately, this does not work on XP Home edition, just XP Professional. I tried adapting it once and wasted a lot of time :-\. The following is the text from where I got this hack.

Concurrent Remote Desktop Sessions in Windows XP SP2

I mentioned before that Windows XP does not allow concurrent sessions for its Remote Desktop feature. What this means is that if a user is logged on at the local console, a remote user has to kick him off (and ironically, this can be done even without his permission) before starting work on the box. This is irritating and removes much of the productivity that Remote Desktop brings to Windows. Read on to learn how to remove that limitation in Windows XP SP2

A much touted feature in SP2 (Service Pack 2) since then removed was the ability to do just this, have a user logged on locally while another connects to the terminal remotely. Microsoft however removed the feature in the final build. The reason probably is that the EULA (End User License Agreement) allows only a single user to use a computer at a time. This is (IMHO) a silly reason to curtail Remote Desktop’s functionality, so we’ll have a workaround.

Microsoft did try out the feature in earlier builds of Service Pack 2 and it is this that we’re going to exploit here. We’re going to replace termsrv.dll (The Terminal Server) with one from an earlier build (2055).

To get Concurrent Sessions in Remote Desktop working, follow the steps below exactly:

  1. Download the termsrv.zip file and extract it somewhere.
  2. Reboot into Safe Mode. This is necessary to remove Windows File Protection. [Dakusan: I use unlocker for this, which I install on all my machines as it always proves itself useful, and then usually have to do a “shutdown -a” from command line when XP notices the DLL changed.]
  3. Copy the termsrv.dll in the zip to %windir%\System32 and %windir%\ServicePackFiles\i386. If the second folder doesn’t exist, don’t copy it there. Delete termsrv.dll from the dllcache folder: %windir%\system32\dllcache
  4. Merge the contents of Concurrent Sessions SP2.reg file into the registry. [Dakusan: Just run the .reg file and tell XP to allow the action.]
  5. Make sure Fast User Switching is turned on. Go Control Panel -> User Accounts -> Change the way users log on or off and turn on Fast User Switching.
  6. Open up the Group Policy Editor: Start Menu > Run > ‘gpedit.msc’. Navigate to Computer Configuration > Administrative Templates > Windows Components > Terminal Services. Enable ‘Limit Number of Connections’ and set the number of connections to 3 (or more). This enables you to have more than one person remotely logged on.
  7. Now reboot back into normal Windows and try out whether Concurrent Sessions in Remote Desktop works. It should!

If anything goes wrong, the termsrv_sp2.dll is the original file you replaced. Just rename it to termsrv.dll, reboot into safe mode and copy it back.

The termsrv.dl_ file provided in the zip is for you slipstreamers out there. Just replace that file with the corresponding file in the Windows installation disks.



I have included an old copy of the above web page, from when I first started distributing this, with the information in the hack’s zip file I provide.

If you want to Remote Control another session, I think the user needs to be part of the “Administrators” group, and don’t forget to add any users that you want to be able to remotely log on to the “Remote Desktop Users” group.

This is all actually part of an “Enhanced Windows XP Install” document I made years ago that I never ended up releasing because I hadn’t finished cleaning it up. :-\ One of these days I’ll get it up here. Some of the information pertaining to this hack from that document is as follows:

  • Any computer techy out there that has tried to troubleshoot over the phone knows how much of a problem/pain in the anatomy it is, and for this reason, I install this hack, which makes it painless to automatically connect to a user’s computer through remote desktop, which can then be remotely viewed or controlled via their displayed console session.
  • I often use this hack myself when I am running computers without keyboards/mice, like my entertainment computer. For a permanent solution for something like this though, I recommend a KM (Keyboard/Mouse) solution like synergy, which allows manipulating one computer via a keyboard and mouse on another.
  • Your user account password must also not be blank. Blank passwords often cause problems with remote services.
  • The security risk of this is that a port is opened for someone to connect to, like telnet or SSH on Unix, which is a minimal risk unless someone has your username+password.
  • You have to have a second username to log into, which can be done under Control Panel > User Accounts, or Control Panel > Administrative Tools > Computer Management > System Tools > Local Users and Groups.
  • If you want the second user to be able to log in remotely, make sure to add them under Control Panel > System > Remote > Select Remote users, and also check “allow users to connect remotely to this computer”.
  • You also need to know the IP address of the user’s computer you want to connect to, and unfortunately, they are not always static. If you run into this, you may want to use a DDNS service like mine.
  • You may also run into the unfortunate circumstance of NAT/Firewalled networks, which is beyond the scope of this document. Long story short, you need to open up port 3389 in the firewall and forward it through the router/NAT (this is the port for both remote desktop and remote assistance).
  • You may also want to change the port number to something else so a port scanner will not pick it up. To connect to a different port, on the client computer, in remote desktop, you connect to COMPUTERIP:PORT like www.mycomputer.com:5050.
    • Registry Key: HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber - Set as your new port number (see the example .reg sketch after this list)
    • This requires a reboot to work.
    • Make sure you don’t provide a port that’s already used on the computer, and you probably shouldn’t make it a standard port either [21 [ftp], 25 [smtp], 80 [http], etc.]
  • You can also log into their current console session by going to the task manager (ctrl+shift+esc in full screen, or right click taskbar and go to “task manager”) > Users > Right click username > Remote control
    • This will ask the user at the computer if they want to accept this. To have it NOT ask them, do the following:
      • Start > Run > gpedit.msc [enter] > computer configuration > administrative templates > windows components > terminal services
      • Double click the option “Sets rules for remote control of terminal services user sessions”
      • Enable it, and for the “Options” setting, set “Full Control without user’s permission”
  • If the ability for you to access a client’s computer without their immediate permission or knowledge is too “dangerous” for their taste, you may suggest/use Remote Assistance, which is more troublesome, but much more “secure” sounding.
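As referenced in the port changing item above, here is a minimal sketch of what a .reg file for the port change could look like (my own illustration, not a file from the hack’s zip; note that REG_DWORD values are written in hexadecimal, so the example port 5050 becomes 13ba):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp]
"PortNumber"=dword:000013ba

After the reboot, the client side then just connects with the port appended, e.g. running “mstsc /v:www.mycomputer.com:5050” from Start > Run, or typing the host:port directly into the Remote Desktop client.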
Mudslinging Campaigns
Politics is a dirty business after all

I really really really really really really really really really really hate politics. I try to completely ignore that scene when possible as most politicians are just brown nosing liars who would disown their own mother if they thought it could help them. The thing I hate most about politics though is the attempt to discredit opponents. Sniping and mudslinging is such a cheap, low, and pointless practice, but it always seems to inevitably come to it. One instance of this: the crimes against humanity Bush has committed are so much more atrocious, yet it is inconceivable to me that Bill Clinton came so much closer to impeachment just because he wanted to hide his dirty sex life, which isn’t really anyone’s business but his family’s anyways. The Republican Party just happens to be really good at being loud, obnoxious, and forceful over any issue they can bring up, it seems to me.

One recent example of this that occurred to me was when a good friend of mine who is incredibly left wing recently sent me some “Bush is a dirty old man” news articles. While I am as much of a Bush hater as the next Democrat, and would have liked to have just agreed with her, I had to voice the flaw in the logic of the articles’ premise and say that it was a pretty silly thing to be paying attention to.

Another example of this came through in an email I received this morning from another very good friend of mine, who is strongly Republican and thinks if Obama is elected our economy will basically crash and all hell will break loose. It is as follows:

http://www.snopes.com/politics/obama/airplane.asp


WHAT A DISGRACE!!!  AND HE IS ALL AMERICAN????




http://www.msplinks.com/MDFodHRwOi8vczE4Ny5waG90b2J1Y2tldC5jb20vYWxidW1zL3gxNjYvRFVTSEtFV0lDSC9EQVgtMi8/YWN0aW9uPXZpZXcmY3VycmVudD1PYmFtYV9BZ2Fpbi5qcGc=
Obama The Patriot - Removes American Flag From His Plane

The Patriot Room
Posted on
Tuesday, July 22, 2008 8:11:07 AM by Bill Dupray

Barack Obama recently finished a $500,000 total overhaul of his 757. And as part of the new design, he decided to remove the American flag from the tail...

What American running for President of the United States would remove the symbol of his country? And worse, he replaced the flag with a symbol of himself... Obama is such a despicable human being.

http://www.msplinks.com/MDFodHRwOi8vdGlueXBpYy5jb20=

Please forward this if you're not ashamed of our country and our flag & if you think this is a disgrace. 
http://www.msplinks.com/MDFodHRwOi8vczE4Ny5waG90b2J1Y2tldC5jb20vYWxidW1zL3gxNjYvRFVTSEtFV0lDSC9EQVgtMi8/YWN0aW9uPXZpZXcmY3VycmVudD1PYmFtYS0xLmpwZw==
If you do not forward this to everyone you know nothing will happen.  If Obama is elected president of the United States we are in trouble.  If you think the Liberals can lead our country just look what Pelosi and Reid have accomplished for us.

God Bless America

http://www.snopes.com/politics/obama/airplane.asp



         In God We Trust


My first thought was to check Snopes, which I quickly noticed was already provided for me, how thoughtful!!!
So after reading it, I replied to him via email:
From the (snopes) article linked to by the email you sent me:
The replacement of North American’s commercial markings included the removal of two stylized U.S. flag images which are used in the airline’s logo (a registered trademark of that company). The North American flag/logo on the forward portion of the fuselage was removed and the one on the tail was replaced with the Obama "O" campaign logo, while traditional depictions of the U.S. flag adjacent to the plane’s registration numbers remain.

However, I was just looking at the new design of McCain’s “Straight Talk Express” bus, and all its large American flags are gone now, as is the American flag theme from when it was (Rudy’s?) bus. And while I can’t find enough good pictures to confirm the small details on the new design, it looks like it has removed ALL of its American flags.

You want to forward these updates to your buddies? :-p

And FYI, I know it sounds hypocritical including the McCain comment in there; I was trying to be ironic ^_^; .

I also just noticed from the Snopes article that McCain’s plane seems to lack any U.S. flags, and oddly, Bush's campaign plane kept the trademarked flag on the tail.

Also, on a side note, while I was searching through Flickr for images of the “Straight Talk Express” (after Google Image failed me somewhat), for some reason it brought me to this picture, which I thought was too adorable to pass up putting here ^_^; .
Jail to the Chief
New Entertainment Center
Missed a day of posting :'(

Doh, I wanted to try posting at least once a day all month and I missed out yesterday because I was just too busy with other stuff, alas.

I had lots of work, mostly house work, to catch up on, culminating in transporting and setting up a new entertainment system, wee. I had to wire the darn thing twice because the first setup really didn’t work well and would have been damaging to the cords; and it takes like an hour each time to do the wiring. Grrrr, oh well.

I originally had all my entertainment stuff set up in my living room, as I assumed that I was going to have guests and would want to watch stuff out there with them on the couch. This has unfortunately turned out to not be the case in this apartment, and I usually only end up going to the living room couch to snuggle with my cat, as she will only get close to me on the couch for some reason, while I have the TV on in the background while I work. Now I can have stuff running in the background whenever I want from my room, and can use the surround sound speakers to play my music, as I am getting tired of my crappy (though better than many) laptop speakers, and headphones are more cumbersome than not.

I bought the new parts of my entertainment system from some [married] friends who finished moving yesterday to a one bedroom in downtown Austin that couldn’t even remotely hold all their stuff, so they were desperately trying to get rid of a lot of it in a hurry. I got a 32" CRT TV [$50], the black shelving unit w/ glass enclosure [$50], and an older computer (512MB of RAM, 780mhz CPU, 80GB hard drive) [$20] to use as a new Home Server. All of these were at about 20% of cost, yay :-).

I really needed the new TV, as my current 32" TV is badly scratched up from being dropped on cement a few years back. I am leaving the old TV hooked up to my current Home Server, which I will now be using as just a multimedia station for that TV and a place for extra backup hard drives. I wanted the “new” computer as a new Home Server, as the current one uses way too much power for what it’s being used for, and is too loud to keep in my room. I’ll just be turning the old one on when I need it now instead, for backups and for watching videos that require more CPU power than the new computer can handle.

The new computer is quite tiny and has no AGP slots, just 2 PCI32 slots. I am therefore looking into getting a $25 (w/ shipping) e-GeForce 5200 PCI card for tv-out. I hope it fits in the small case, because if not, I will just have to leave the case cover off.


Click for full size (I love my new camera ^_^ )
Entertainment System
Blade Laws
No matter how much you want to be Hiro Protagonist

Speaking of swords, though I know most people would probably not agree with me, it’s always irked me that it is legal to carry around guns, but not swords. I guess swords would be a bit harder to conceal, so maybe that’s why, and not that I necessarily would ever carry one around, but still :-(. Guns produce one type of fear, but seeing a 3+ foot long blade that could easily hack off a limb produces a completely different kind of respect, heh. Even knives are subject to scrutiny, as any blade, by different state laws, is only legal to carry around if less than a certain length, usually under half a foot. Though it is very hard to find these laws and everyone says something different on them, there are consequences for trying to push it.

I have a friend who was once transporting a sword home after buying it, and by unlucky coincidence was pulled over for whatever reason, and he allowed the cops to search his vehicle. He was given a hefty fine and a misdemeanor for having the sword. More unfortunately, this ruined his future prospects for going into the intelligence sector for our government, as you can’t have any blotches on your record, and a misdemeanor is considered bad enough.

It is of course perfectly legal to have one in your car if transporting it [home] right after buying it. So make that your story if you ever find yourself in this unlikely circumstance, whether it is true or not :-). My friend lamented to me he wishes he knew this at that time.

Then again, there is a complete double standard on this subject, as anyone, I believe of any age, can walk out of Walmart with a 22" machete! Like this time in college... when a friend of mine got one... and oh Thor, the poor watermelons!

Skuld Kitty
Pets are the best!!!

Because I’m a super nerd, I thought I’d post pictures of my adorable kitty =^.^= . Her name is Skuld and I adopted her from a store on May the 11th of 2005 when she was 6 weeks old, so I just assume she shares my birthday, March the 28th :-).

She’s a tiny bit psychotic, but I love her to death nonetheless ^_^. She’s most recently become much more sociable, and she has always especially loved energetic playing; mostly wrestling with my hands and feet, and games of tag.

She has survived living with me at 6 locations so far with 5 moves!!! (Garland, North Dallas, Parent’s House [near-death appendix burst put me here for a while], Canada [1 month stay x.x; but the 26+ hour non stop drives there and back with Skuld were torturous], Parent’s House [very very short stay], Austin). She’s been a very good sport about it, and I think she loves the Austin place the most so far.

Mouse over the thumbnails for larger copies. (Video clips to be uploaded soon.)

Adoption 1 (2005-05-11) Adoption 2 (2005-05-11) Adoption 3 (2005-05-11)
(2005-05-11) These are all pictures of Skuld on the night I adopted her.
Adoption 4 (2005-05-11) The Box (2005-06-01) Where did Jamie go (2005-07-06)
(2005-05-11) Exploring on adoption night! (2005-06-01) Skuld is a few weeks older here, lounging in a box, as she has always loved to do. (2005-07-06) Skuld was much more daring in her youth and loved climbing onto very high things. She even went to the top of the TV on these shelves a few times which almost touched the ceiling. She is playing hide and seek with a friend of mine, Jamie, here.
Sleeping (2005-07-28) Stretching out (2006-02-16) Sleeping at the office (2006-02-21)
(2005-07-28) Just curled up on some blankets sleeping. (2006-02-16) Stretching out in crazy positions as usual. (2006-02-21) I often took Skuld to the Qrush office for the 6 or so months we had it, as she got very lonely at home and there weren’t many of us at the office. She’s sleeping on some office supplies here.
Grooming (2006-04-13) Stretching at Brads (2007-07-15) Someones watching her (2007-07-15)
(2006-04-13) Grooming herself in her favorite fluffy box bed. (2007-07-15) During the stay at my parents house, I did weekend Dungeons & Dragon games at a friend’s (Brad’s) house in North Dallas (a 30-40 minute drive away) once a month, and the cat demanded to come along. She’s just lounging here. (2007-07-15) Taken a minute or so after the last one, she noticed me taking her picture.
Eating at Brads (2007-07-15) Trip to Canada (2007-08-24) Just chilling (2008-08-02)
(2007-07-15) Eating later that day at Brad’s. (2007-08-24) Sleeping on the >24 hour trip back from Canada in a cat cube my mother bought for her. (2008-08-02) Taken about a week ago as I was testing out my new camera right before needing to use it at Angel Sword. She’s just chillin next to me in bed.
Olympic High Jumper (2005-10-18) What I have to come home to (2007-11-24)
(2005-10-18) Olympic High Jumper (Video) - My dad took this video of me playing with my cat. My mother is the one in the background. (2007-11-24) What I have to come home to (Video) - I was away from my place for about a week and my cat does not easily forgive. She meowed at me for hours on end like this.
Wedding Completed, Yay
Weddings (when participating in) are so much work...

It is now 4:44pm MST (5:44 CST, which is the time zone my website runs on) and I’m currently sitting in the Salt Lake City [Utah] Airport (SLC) with children crying all around me and businessmen talking loudly on their cells... people, wee. Thank Thor for headphones + music :-).

Unfortunately, this airport, like most other airports AFAIK, does not provide free public WiFi (Wireless Internet) (examples off the top of my head are Dallas/Fort Worth [DFW] and Austin [AUS]). Phoenix (PHX) did provide free WiFi though, which was really nice. It was (and is, as I am flying there in a few hours for a layover) probably one of the nicest airports I have been to in a while. It even had lots of rows of comfy seats with small (1 x .5 ft or so) tables between them, each with 2 electrical plugs, which was all very nicely designed and ergonomic too. Anywho, without access to the internet, I am stuck in 2D land (Snow Crash reference ^_^; ) and am withdrawn from the outside world.

You may notice however that this post is dated while I should be offline. Most of my work and entries for my site are done/posted on my laptop, which runs a local web server, and then I move over batches of changes at once when I am finished. I will probably stop this practice once my website is more complete, but at the moment, it works better, and assures I have local backups. This is why, if anyone noticed, 4 days of posts suddenly showed up yesterday :-). I’ve been on the move lately so I hadn’t had time to do a data transfer.

ANYWAYS, Luis’ wedding was really nice, though a ton of work. As one of the groomsmen I was trying to help out with stuff all day, and all the other groomsmen were helping lots too. The bridesmaids had it super easy! I was on my feet almost all day in a heavy suit running around taking pictures, videos, and fixing things, but it was worth it. I’m so happy for Luis, and glad everything went so well ^_^.

Unfortunately, I was not allowed to be there for the actual marriage part (called a Sealing) as it was a Mormon/LDS (Church of Latter Day Saints) ceremony, which takes place in one of their temples, which is sacred and only practicing Mormon adults who have been confirmed are allowed in. Oh well ^_^;. There were probably 50 or so people in the temple to witness the ceremony and 5-10 people waiting outside with me. There were a couple hundred people that showed up for the reception though; the hall was one large mass of people!

There aren’t many pictures I want to share publicly here from the event (this is not the place for their personal pictures :-) ), but I do have a couple; especially some I had taken of me in the suit I wore all day, as it’ll probably be a long time before anyone sees me in a formal suit again! ^_^; ... though it is nice to feel respectable sometimes and dress like that.

The weekend was well worth the expensive flight up here (and the suit), as myself and everyone else had tons of fun and I got to meet a lot of great new people, and see a lot of old friends I haven’t had the opportunity to hang with for a while. It seems everyone is moving in opposite directions around the country :-\ ... or all the Mormons are just conglomerating in the LDS capital of the world (Salt Lake City), but we won’t talk about that ^_^;.


The Happy Couple The Groom and I Me Dollar Dancing with the Bride
The Happy Couple The Groom and I Me Dollar Dancing with the Bride
Me (Jeffrey Riaboy)
Me (Jeffrey Riaboy)
An Agnostic’s Perspective
A lesson in logic
There’s a specific dialog that goes on in Eldest, the second book of the Inheritance Cycle (Eragon), about religion that I thought worded my general beliefs on the subject, though in a fantasy setting, quite well. I have included this verbatim below; it basically describes how agnostics reason. I particularly like the last paragraph, which basically says how non god-fearing people can be, from a certain perspective, in a way, on a higher moral ground, due to basing their actions on what is right because they want to help others, as opposed to fearing divine retribution. FYI, this doesn’t really contain any spoilers for the books. The following text is copyrighted by the author, Christopher Paolini.

Nine days later, Eragon presented himself to Oromis1 and said, “Master, it struck me last night that neither you nor the hundreds of elven scrolls I’ve read have mentioned your religion. What do elves believe?”


A long sigh was Oromis’s first answer. Then: “We believe that the world behaves according to certain inviolable rules and that, by persistent effort, we can discover those rules and use them to predict events when circumstances repeat.”


Eragon blinked. That did not tell him what he wanted to know. “But who, or what, do you worship?”


“Nothing.”


“You worship the concept of nothing?”


“No, Eragon. We do not worship at all.”


The thought was so alien, it took Eragon several moments to grasp what Oromis meant. The villagers of Carvahall2 lacked a single overriding doctrine, but they did share a collection of superstitions and rituals, most of which concerned warding off bad luck. During the course of his training, it had dawned upon Eragon that many of the phenomena that the villagers attributed to supernatural sources were in fact natural processes, such as when he learned in his meditations that maggots hatched from fly eggs instead of spontaneously arising from the dirt, as he had thought before. Nor did it make sense for him to put out an offering of food to keep sprites from turning the milk sour when he knew that sour milk was actually caused by a proliferation of tiny organisms in the liquid. Still, Eragon remained convinced that otherworldly forces influenced the world in mysterious ways, a belief that his exposure to the dwarves’ religion3 had bolstered. He said, “Where do you think the world came from, then, if it wasn’t created by the gods?”


“Which gods, Eragon?”


“Your gods, the dwarf gods, our gods... someone must have created it.”


Oromis raised an eyebrow. “I would not necessarily agree with you. But be that as it may, I cannot prove that gods do not exist. Nor can I prove that the world and everything in it was not created by an entity or entities in the distant past. But I can tell you that in the millennia we elves have studied nature, we have never witnessed an instance where the rules that govern the world have been broken. That is, we have never seen a miracle. Many events have defied our ability to explain, but we are convinced that we failed because we are still woefully ignorant about the universe and not because a deity altered the workings of nature.”


“A god wouldn’t have to alter nature to accomplish his will,” asserted Eragon. “He could do it within the system that already exists.... He could use magic to affect events.”


Oromis smiled. “Very true. But ask yourself this, Eragon: If gods exist, have they been good custodians of Alagaësia2? Death, sickness, poverty, tyranny, and countless other miseries stalk the land. If this is the handiwork of divine beings, then they are to be rebelled against and overthrown, not given obeisance, obedience, and reverence.”


“The dwarves believe3—”


“Exactly! The dwarves believe. When it comes to certain matters, they rely upon faith rather than reason. They have even been known to ignore proven facts that contradict their dogma.”


“Like what?” demanded Eragon.


“Dwarf priests use coral as proof that stone is alive and can grow, which also corroborates their story that Helzvog3 formed the race of dwarves out of granite. But we elves discovered that coral is actually an exoskeleton secreted by minuscule animals that live inside the coral. Any magician can sense the animals if he opens his mind. We explained this to the dwarves, but they refused to listen, saying that the life we felt resides in every kind of stone, although their priests are the only ones who are supposed to be able to detect the life in landlocked stones.”


For a long time, Eragon stared out the window, turning Oromis’s words over in his mind. “You don’t believe in an afterlife, then.”


“From what Glaedr said, you already knew that.”


“And you don’t put stock in gods.”


“We give credence only to that which we can prove exists. Since we cannot find evidence that gods, miracles, and other supernatural things are real, we do not trouble ourselves about them. If that were to change, if Helzvog were to reveal himself to us, then we would accept the new information and revise our position.”


“It seems a cold world without something... more.”


“On the contrary,” said Oromis, “it is a better world. A place where we are responsible for our own actions, where we can be kind to one another because we want to and because it is the right thing to do instead of being frightened into behaving by the threat of divine punishment. I won’t tell you what to believe, Eragon. It is far better to be taught to think critically and then be allowed to make your own decisions than to have someone else’s notions thrust upon you. You asked after our religion, and I have answered you true. Make of it what you will.”


1: Eragon is the protagonist of the book who is currently being tutored in magic by Oromis, an elf. The elves are an enlightened species that view the world as scientists.
2: Carvahall is the farming village Eragon grew up in, in the world of Alagaësia.
3: The dwarves have a typical polytheistic religion. In their case, they believe that they were created from stone by their god, Helzvog, and that coral, by growing, is proof that stone is alive.
Windows 98 for VMWare

I recently had to install Windows 98 through VMWare for some quick tests, and there were a few minor problems after the install that needed to be resolved. I thought I’d share the fixes here in case anyone ever needs them.

  • First, VMWare Tools needs to be installed to get video and some other drivers working.
  • Second, Windows 98 was really from before the time when network cards were used to connect to the internet, as broadband technology was rare and modems were the commonplace solution, so it doesn’t make this process easy. To connect through your VMWare bridge or NAT to the Internet (to use IE - FireFox [newer versions of it?] doesn’t work on Windows 98), the following must be done through the MSN Connection Wizard (this is mostly from memory).
    • Open "Connect to the internet" from the desktop
    • Click Next
    • Select Modem Manually [next]
    • Select any of the normal modems in the list on the right, like a generic 56,000 modem [OK]
    • Click Next
    • Click lan/manual
    • Connect using my local area network (LAN) [next]
    • Click Next
    • "No" to email [next]
    • Click Finish
  • Lastly, the default sound driver does not work, so you need to do the following [Information found here by googling]
    • Install the Creative Labs drivers for the PCI sound card
    • Add the following lines to your VMWare config (vmx) file (see the sketch after this list)
      • pciSound.DAC1InterruptsPerSec = 0
      • pciSound.DAC2InterruptsPerSec = 0
    • Optionally, for a better midi waveset, download Creative Labs’ 8MB GM/GS Waveset [version 5] and select it for use in the device’s properties by:
      • Right click my computer
      • Select properties
      • Select the Device Manager tab
      • Find the area for sound and go to “SB PCI(WDM)”
      • Go to the second tab
      • Change the Midi Synthesizer Waveset to the downloaded eapci8m.ecw
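For reference, here is roughly how the sound related chunk of the .vmx file might end up looking (just a sketch; the sound.present line is a typical entry from VMWare configs of that era rather than part of the original tip, and exact keys can vary by VMWare version):

sound.present = "TRUE"
pciSound.DAC1InterruptsPerSec = 0
pciSound.DAC2InterruptsPerSec = 0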
When your electronics know more than you
Sometimes technology knows best

I really love my GPS (Magellan Maestro 3100) system that I received last Christmas as a present from my mother. I use it whenever navigating to new places I have never been before and it is always pretty darn accurate.

So it worked as usual without a hitch, calculating the best route flawlessly, on my way to Angel Sword last Saturday. On the way back, however, it sent me on a different route, which ended up making the 50 minute trip take 10-20 minutes longer due to sticking me on backwater highway roads with only 1 lane, and getting stuck behind old geezers going below the speed limit. This made me really mad at the time, and rather confused as to why it chose a different "fastest" route back; that is, until I got to my destination and found out there was heavy construction on the highway going northbound (my way back) that had delayed one of my friends for 4 hours on his trip into Austin :-O. Glad I ended up following its directions after all and not going with my gut and taking the same route I took down there!

My fatal mistake was assuming the GPS calculated the routes itself instead of asking a central server elsewhere with traffic information. Now I know better!

New Harry Potter Canon Stuff
Nothing that interesting... but still...
I’ve been rereading one of the many Harry Potter books again, as I usually do when wanting to relax; it must be my millionth time through the series. I thought I’d check around and see if there was anything new, and apparently there are a few new things I didn’t know about.


The Tales of Beedle the Bard has a mass release coming on December 4th.


According to a note on JK Rowling’s website (Wizard of the Month Archive), quoted verbatim:
(1980 - )
The Boy Who Lived, only known survivor of the Avada Kedavra curse and conqueror of Lord Voldemort, also known as Tom Riddle. Harry Potter joined the reshuffled Auror Department under Kingsley Shacklebolt at age 17, rising to become Head of said department in 2007.


And finally, an 800 word "Harry Potter Prequel" by JKR written for some charity book by WaterStone, or something like that. I found the text for it here, and it is quoted below.

The speeding motorcycle took the sharp corner so fast in the darkness that both policemen in the pursuing car shouted ‘whoa!’ Sergeant Fisher slammed his large foot on the brake, thinking that the boy who was riding pillion was sure to be flung under his wheels; however, the motorbike made the turn without unseating either of its riders, and with a wink of its red tail light, vanished up the narrow side street.

‘We’ve got ’em now!’ cried PC Anderson excitedly. ‘That’s a dead end!’

Leaning hard on the steering wheel and crashing his gears, Fisher scraped half the paint off the flank of the car as he forced it up the alleyway in pursuit.

There in the headlights sat their quarry, stationary at last after a quarter of an hour’s chase. The two riders were trapped between a towering brick wall and the police car, which was now crashing towards them like some growling, luminous-eyed predator.

There was so little space between the car doors and the walls of the alley that Fisher and Anderson had difficulty extricating themselves from the vehicle. It injured their dignity to have to inch, crab-like, towards the miscreants. Fisher dragged his generous belly along the wall, tearing buttons off his shirt as he went, and finally snapping off the wing mirror with his backside.

‘Get off the bike!’ he bellowed at the smirking youths, who sat basking in the flashing blue light as though enjoying it.

They did as they were told. Finally pulling free from the broken wing mirror, Fisher glared at them. They seemed to be in their late teens. The one who had been driving had long black hair; his insolent good looks reminded Fisher unpleasantly of his daughter’s guitar-playing, layabout boyfriend. The second boy also had black hair, though his was short and stuck up in all directions; he wore glasses and a broad grin. Both were dressed in T-shirts emblazoned with a large golden bird; the emblem, no doubt, of some deafening, tuneless rock band.

‘No helmets!’ Fisher yelled, pointing from one uncovered head to the other. ‘Exceeding the speed limit by - by a considerable amount!’ (In fact, the speed registered had been greater than Fisher was prepared to accept that any motorcycle could travel.) ‘Failing to stop for the police!’

‘We’d have loved to stop for a chat,’ said the boy in glasses, ‘only we were trying -’

‘Don’t get smart - you two are in a heap of trouble!’ snarled Anderson. ‘Names!’

‘Names?’ repeated the long-haired driver. ‘Er - well, let’s see. There’s Wilberforce… Bathsheba… Elvendork…’

‘And what’s nice about that one is, you can use it for a boy or a girl,’ said the boy in glasses.

‘Oh, OUR names, did you mean?’ asked the first, as Anderson spluttered with rage. ‘You should’ve said! This here is James Potter, and I’m Sirius Black!’

‘Things’ll be seriously black for you in a minute, you cheeky little -’

But neither James nor Sirius was paying attention. They were suddenly as alert as gundogs, staring past Fisher and Anderson, over the roof of the police car, at the dark mouth of the alley. Then, with identical fluid movements, they reached into their back pockets.

For the space of a heartbeat both policemen imagined guns gleaming at them, but a second later they saw that the motorcyclists had drawn nothing more than -

‘Drumsticks?’ jeered Anderson. ‘Right pair of jokers, aren’t you? Right, we’re arresting you on a charge of -’

But Anderson never got to name the charge. James and Sirius had shouted something incomprehensible, and the beams from the headlights had moved.

The policemen wheeled around, then staggered backwards. Three men were flying - actually FLYING - up the alley on broomsticks - and at the same moment, the police car was rearing up on its back wheels.

Fisher’s knees buckled; he sat down hard; Anderson tripped over Fisher’s legs and fell on top of him, as FLUMP - BANG - CRUNCH - they heard the men on brooms slam into the upended car and fall, apparently insensible, to the ground, while broken bits of broomstick clattered down around them.

The motorbike had roared into life again. His mouth hanging open, Fisher mustered the strength to look back at the two teenagers.

‘Thanks very much!’ called Sirius over the throb of the engine. ‘We owe you one!’

‘Yeah, nice meeting you!’ said James. ‘And don’t forget: Elvendork! It’s unisex!’

There was an earth-shattering crash, and Fisher and Anderson threw their arms around each other in fright; their car had just fallen back to the ground. Now it was the motorcycle’s turn to rear. Before the policemen’s disbelieving eyes, it took off into the air: James and Sirius zoomed away into the night sky, their tail light twinkling behind them like a vanishing ruby.


On a slightly off-topic non-official tangent, I really love this picture! Wish I knew who the artist was, especially to give credit here. :-\
Harry Potter Studying Stylized
Angel Sword
“Swords are Cool!” - Internet Anonymous

So I went to an open house at Angel Sword yesterday, which produces the world’s best swords. I first fell in love with their work when I saw them well over 10 years ago at Scarborough Fair Renaissance Festival in Dallas, and have schemed about owning one ever since, which is not easy as their lowest series costs about $2,000. Of course, I could get smaller (and legal) blades from them like knives, but that’s just not the same or as fun ^_^; .


Their lowest line of swords, the Bright Knight series, holds at least the top two world records in ability from international competitions. I asked why they haven’t submitted their upper series lines (Angel Swords, Avatar series, etc) and the basic reply was “You don’t show your queens and kings if you can win with your jacks” :-). The master sword smith who started the company, and still does most all of the work on the swords, is Daniel Watson, who has been working on swords for well over 30 years. The reason his swords are the best is that he combines ultra-tech with old tech to produce the best results. He has over 14 patents on processes and technology for producing the swords, which he has been creating and refining over his lifetime, and which make it hard for anyone to (easily) catch up to his quality of work. The ultra-tech stuff includes cryogenics with quick freezing using liquid nitrogen and super heating, electromagnetic manipulating machines, and metallurgy, alongside good old-fashioned hammer techniques, including cold forging, which most sword smiths apparently ignore.


I unfortunately had to miss a wedding of a friend of my sister’s up in Dallas for this, but I thought this more important/pressing as they only have these open houses every few [2-3] years, and I have reason to believe this may have been the last one. It was supposed to start off at 9am. I got there 20 or so minutes early, well before anyone else, so I got some one-on-one time with Mr. Watson and another of his employees, Wolf (his real name ^_^). I checked out his large stock of swords in the show room for the first hour or two while everyone else was arriving, and we all chatted and had general fun. There are so many beautiful swords and blades of every style, size, and make; pure works of art!

After that we went up to the forge and he did a demo with running commentary of reworking a knife, sharpening and fixing it up using various types of buffers. That demonstration and everything else from the whole day was all quite fascinating. During this we also got a good list of accidents that have occurred in the shop; it was quite gruesome!!! :-D. After that we went back to the display room and had some more discussion.

Next was practical sword use ^_^. We all either borrowed swords, or those who already owned ones used them, and we went out to their large front yard area (they are out in the country) where Daniel went over how to properly make cuts with a sword; proper torque and body movements to get the most out of a swing and such. Then each person that showed up (we had 10-15 people) got one wet tatami mat to practice on, which, depending on your aim, one could get 5 to 10 solid cuts out of. A single tatami mat, which is what most of us used, is about equivalent to cutting someone through the neck and a shoulder, while a double tatami mat, of which they only had 1 made, is about equivalent to a solid cut through someone’s midsection. It was really fun ^_^.

We then broke for lunch with a lot more chatting about swords and many other interesting topics. The final part of the structured day was a talk, voted on by the group, about whichever sword creation process we most wanted to hear about. How he made his Avatar swords, and the difference between the lines, easily won out, as he had never released the information to anyone before (besides perhaps his apprentices and significant others). The reason he was now willing to tell us all about it was that his final batch of patents on the processes went through very recently, so it was safe to give it out. One of the guests videotaped the whole talk, and I’m hoping to get my hands on a copy of it to post here with his permission later. He told me he wanted to edit it a bit first and show it to Mr. Watson before he did anything with it or released it anywhere. The main gist is that the lower lines just use the electromagnetic manipulation and other ultra-tech to achieve their chemical bonding properties, so they can be cut to have much smaller angles on the edges without being too brittle, while the higher lines additionally have the hard muscle-and-sweat forging and pounding put into them.

I think he ended his talk a little early to hit the 4pm mark when everything was supposed to end. A few people left then, but I stayed around for over an hour to keep talking with the other participants. Everyone was really great and from many different walks of life and experiences, all brought together by a common interest. After a few more people left, Mr. Watson brought out many of his whiskeys to let us try out. That’s the business he’s trying to break into, away from swords, BTW: making whiskeys and rums and some other types of alcohol. I think I heard him mention to someone that some of the alcohols he was giving us were going for over $125 a shot!

I then left around 5pm, even though I wanted to stay longer, because I had another previous engagement I had promised to make... and then when I got there it didn’t even happen. I was quite perturbed about this, as they could have told me it wasn’t going to happen and I could have stayed at Angel Sword longer, but ah well.


I will probably be getting one of their blades very soon, as I just found out you can trade in their blades at no markdown to help get more expensive blades. So I can get a $2,000 one that I like and then have that around while I work up to a $6,000 blade I really like ^_^. They also have a guarantee against breakage, chips, or damage, which is really hard to inflict on these blades anyways, so they can be used all you want without worry. He even showed us a sword that had been shot due to an accidental misfire of a .22 gun from the room above the show room. The sheath it was in clearly showed where the bullet went through, but there wasn’t even a ding or a scratch on the sword! The only remaining evidence was some copper that came off the bullet and etched into the metal a bit :-).


And here are all the pictures with captions that I took for the day! :-D Mouse over the thumbnails for larger copies. (Video clips to be uploaded soon.)

Show Room - Northeast corner
Show Room - NorthEast Corner 1 Show Room - NorthEast Corner 2 Show Room - NorthEast Corner 3
A large assortment of different kinds of swords. Slightly left of the first picture, in the upper right corner are the few wakizashis on display. Directly above the first picture.
Show Room - NorthEast Corner 4
The left side of the last picture. These two were BEAUTIFUL.
Show Room - Southeast corner
Show Room - SouthEast Corner 1 Show Room - SouthEast Corner 2
More assortments of swords. There were still a lot of areas in the show room I didn't even get pictures of. The right side of the last picture.
Show Room - West side - Katanas and Knives
Show Room - West side 1 Show Room - West side 2 Show Room - West side 3
This whole wall was mostly katanas, with knives and other smaller blades in the glass display cases below them. Most of these swords were worth ~$3,500 a piece. Directly to the right of the last picture. Zoomed in from the last picture. The katana with the black and red hilt is the one I plan on getting soon, hopefully!
The Forge - Buffers and Sharpeners
Forge - Buffers 1 Forge - Buffers 2 Forge - Buffers Video
Dan was showing us here how he sharpens and fixes blades with the buffers. A zoomed out (and underexposed?) view of the buffer area. To the left is a larger belt buffer for larger blades. 3 short video clips spliced together of Mr. Watson working on a blade.
Forge - Cavalier Casting
This is the sign in the 2nd picture directly above. I believe they take it to shows as a decoration.
The Forge - The Kiln
Forge - Kiln 1 Forge - Kiln 2 Forge - Metal sheets
A normal heating kiln with tons of different types of hammers, tongs, and tools. A different angle of the first picture. This one’s a little hard to see... it’s a little better in my original large resolution copy. This is a shelf directly to the left of the kiln that contains racks of different types of metal sheets including (from top to bottom) steel?, brass, and copper.
The Forge - Mr. Watson hammering
Forge - Hammering 1 Forge - Hammering 2
Hammering on a piece of steel while describing the process of creating the different series of blades. Hopefully I can get a copy of the video someone took of this later.
The Forge - Electromagnetic Machine
Forge - Electromagnetic Machine 1 Forge - Electromagnetic Machine 2 Forge - Electromagnetic Machine 3
All 3 of these are pretty blurry, but this is the electromagnetic machine used to rearrange the molecules of the steel and do micro hammering.
The Forge - The (High Pressure?) Furnace
Forge - Furnace 1 Forge - Furnace 2
This is the furnace. I believe it to be super high pressure, but I know it heats steel to, IIRC, over 1400° Fahrenheit. A picture taken from the same spot as above, but shifted slightly to the left. The LOUD and powerful mechanical hammer is in the way.
The Forge - Liquid Nitrogen tanks for cryogenics
Forge - Liquid Nitrogen 1 Forge - Liquid Nitrogen 2
A close up of a liquid nitrogen tank. All 4 (or were there 5...?) tanks sitting next to each other.
The Forge - Unfinished blades
Forge - Unfinished Blades 1 Forge - Unfinished Blades 2
Blades that are being worked on. Mr. Watson is standing on the left side of the picture. He is currently standing right in front of the liquid nitrogen tanks from above.
The Forge - Other areas
Forge - Workbench Forge - Other
A workbench. This is directly to the right of the kiln, shown above. Another area of the forge with lots of other workstations and tools.
In the Yard - Group Photos
Yard - Group 1 Yard - Group 2
This was taken while we were waiting for Mr. Watson to come out and start demonstrating. This one has most of the group that was there. The guy on the left side in the red shirt is Ingrid's husband. Lucky guy :-).
In the Yard - Mr. Watson teaching and demonstrating
Yard - Daniel 1 Yard - Daniel 2 Yard - Daniel 3
Yard - Daniel 4 Yard - Daniel 5 Yard - Daniel 6
Using Angel (yes that's his real name... I think) as a dummy :-)
Yard - Daniel 7 Yard - Daniel 8 Yard - Daniel 9
Another "volunteer" 0:-)
Yard - Daniel 10 Yard - Daniel 11
In the Yard - Wolf cutting (video)
Yard - Wolf cut video
Wolf (real name) is one of Mr. Watson's employees. He did some of the first cuts, for demonstration purposes.
In the Yard - Angel cutting
Yard - Angel 1 Yard - Angel 2 Yard - Angel 3
Angel (yes that's his real name... I think) cutting. He seemed quite good. Probably the second most experienced guest there (maybe?), after Kendall.
In the Yard - Kendall cutting (videos)
Yard - Kendall video 1 Yard - Kendall video 2 Yard - Kendall video 3
Kendall cutting up his tatami mat. He is apparently quite adept with swords... has won some past sword competitions and is a martial arts instructor and such.
In the Yard - Ingrid cutting
Yard - Ingrid 1 Yard - Ingrid 2 Yard - Ingrid video
Girls like swords too!!! heh. You may have seen Ingrid in a few other pictures (see Group #1) with her (5 month old?) boy that she brought along. Her husband was there too; they are both great people. Video...
In the Office - Mythbusters Barrel
Office - Myth Busters Barrel
The final picture I took. This is the .30 caliber steel barrel used in Myth Busters that they kind of cut through in an episode.
Dakusan v0.5a
Everyone’s gotta start somewhere
This is a picture of me receiving my first real programming language, Visual Basic 4.0, for Chanukah [a Jewish holiday, like Christmas in terms of gifts] in 1995 [I’m the one holding it, age 11, 5th grade]. I had been using QBasic before that. I just thought the picture was too cute to pass up ^_^. The person next to me, Luis Merino, was my best friend during elementary and middle school, and is the reason why I just found this picture. I’m flying up for his wedding in Salt Lake City, in which I will be a groomsman, this coming Tuesday :-).
Visual Basic for 12th Chanukah
LinkedIn Policies Part 2
Tech Support Hell

Continued from Part 1. Once again, I received another notification of a friend joining from an email address I gave to the LinkedIn system. I contacted LinkedIn before writing the previous post on the topic with the following message:

For reference, your privacy policy states the following
Information about your Contacts
In order to invite others to connect with you directly in LinkedIn, you will enter their names and email addresses. This information will be used by LinkedIn to send your invitation including a message that you write. The names and email addresses of people that you invite will be used only to send your invitation and reminders.
I decided to search for accounts through your "Address Book Contacts" function, and manually entered my email contacts. I only used this function to find existing users, and not invite new ones. I expected the information to be immediately deleted from your servers, as it had no more use for the contacts I gave, but I found out today they were still there when one of said addresses was used to sign up for a new account and LinkedIn informed me of such. While this is a nice feature, it would have been appropriate to allow the user to opt out of having LinkedIn keep the emails for further use, and it is downright shady that the user is not informed at all that given email addresses are kept by LinkedIn on your servers.
And this is the non-auto-generated response I received back 2 days later:
Dear Jeffrey
We are aware of the issue you are currently experiencing and we are working diligently to resolve the issue. We appreciate your patience while this issue is being resolved.

I thought it obvious from this reply that they did not take what I said into consideration, and there is a high probability that they didn’t really even read it. I mentioned in the last post that this exact thing happened to my friend when he was trying to communicate with LinkedIn about errors in their site code. This kind of thing is typical of large corporations that receive a large volume of communications and do not have the staff to handle it. I consider this practice almost as bad as out-sourced tech support (usually to India), another pet peeve of mine, as communication is often hard and the tech support agents often don’t know what they are talking about... at least much more so than with first-tier tech support channels provided in-country ^_^; . I went ahead and contacted eTrust a few days ago in hopes of getting a more personal response from them.

Ready to get back to work
Reliving the good days of Final Fantasy

I got back from a couple day trip to Dallas last night. Man do I hate that drive, especially when it’s raining so hard you can barely see 6 feet in front of you, which seems to happen almost every time any of my friends or family make that drive (from Dallas to Austin or vice versa).

I just now beat Final Fantasy 4 DS too, yay. I was thoroughly happy with the remake they did of the game this time around, which had only one or two trifling annoyances of no real consequence; this is surprising for me, as I always seem to find heavy fault in everything remade that I held dear as a child. The new game plus feature, as far as I can see, is pretty worthless though, as all it leaves you with is the augments, which I didn’t even use anyways. The cut scenes were all excellent, especially the opening credits’ pre-rendered cinematics, which I have included below. Now all I really have to wait for is the Chrono Trigger remake they are doing for the DS!!! :-D

I also finished the Eragon books again over the weekend, so with all of that sidetracking stuff out of the way I will be getting back to regularly posting stuff here as promised.



Final Fantasy IV DS Opening High Quality
Stolen from YouTube, owned by SquareSoft


Final Fantasy IV DS Characters Art
Owned by SquareSoft, Image from GamesPress
Final Fantasy IV DS Characters Art
Distractions
Getting sidetracked too easily x.x;
Sorry for the lack of posts and updates recently. I’ve been involved lately with both playing FF4 on the DS and rereading Eragon, as the next book is about to come out. Among other things... I’ll start posting regularly again come the new month.
Malcolm in the Middle - Mental Math
Because math is amazing...

This is a clip from the TV show “Malcolm in the Middle” in which the protagonist, Malcolm, demonstrates his freakish numeric abilities for the Krelboyne [the advanced learning/gifted class] Circus to save the day (episode “Krelboyne Picnic” Season 1 Episode 8).

I encoded this video, apparently, in February of 2007 and do not recall why. It’s a fun little clip, and I recall that I could not find it online at the time for whatever reason, so instead of deleting it, I figured I’d put it here.


[flv 14.5MB] [Original avi 18.7MB]
Data Format Conversion
Moving from Point A to Point B

I am often asked to transfer data sets into MySQL databases, or other formats. In this case, I’ll use converting a Microsoft Excel file (without line breaks in the fields) to MySQL as an example. While there are many programs out there to do this kind of thing, this method doesn’t take too long and is a good example use of regular expressions.


First, select all the data in Excel (ctrl+a) and copy (ctrl+c) it to a text editor with regular expression support. I recommend EditPad Pro as a very versatile and powerful text editor.

Next, we need to turn each row into the format “('FIELD1','FIELD2','FIELD3',...),”. Four regular expressions are needed to format the data:

Search  Replace  Explanation
'       \'       Escape single quotes
\t      ','      Separate fields and quote as strings
^       ('       Start of row
$       '),      End of row
From there, there are only 2 more steps to complete the query.
  • Add the start of the query: “INSERT INTO TABLENAME VALUES”
  • End the query by changing the last row's comma “,” at the very end of the line to a semi-colon “;”.

For example:
a	b	c
d	e	f
g	h	i
would be converted to
INSERT INTO MyTable VALUES
('a','b','c'),
('d','e','f'),
('g','h','i');

Sometimes queries may get too long and you will need to separate them by performing the “2 more steps to complete the query” from above.
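Incidentally, if you do this conversion often, the editor steps can be scripted. Here is a quick hypothetical PHP sketch of the whole process (the function name and example table are made up for illustration):
<?
//Hypothetical sketch: apply the 4 regular expressions from the table above to
//tab-delimited data (no line breaks within fields), then finish the query.
function ExcelToInsert($Data, $TableName)
{
	$Data=rtrim($Data); //Drop any trailing line break from the pasted data
	$Data=str_replace("'", "\\'", $Data); //Escape single quotes
	$Data=str_replace("\t", "','", $Data); //Separate fields and quote as strings
	$Data=preg_replace('/^/m', "('", $Data); //Start of row
	$Data=preg_replace('/$/m', "'),", $Data); //End of row
	//The "2 more steps": add the start of the query and change the final comma to a semi-colon
	return "INSERT INTO $TableName VALUES\n".substr($Data, 0, -1).';';
}
print ExcelToInsert("a\tb\tc\nd\te\tf\ng\th\ti", 'MyTable'); //Produces the example output above
?>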


After doing one of these conversions recently, I was also asked to make the data searchable, so I made a very simple PHP script for this.

This script lets you search through all the fields and lists all matches. The fields are listed on the 2nd line in an array as "SQL_FieldName"=>"Viewable Name". If the “Viewable Name” contains a pound sign “#” it is matched exactly, otherwise, only part of the search string needs to be found.

<?
$Fields=Array('ClientNumber'=>'Client #', 'FirstName'=>'First Name', 'LastName'=>'Last Name', ...); //Field list
print '<form method=post action=index.php><table>'; //Form action needs to point to the current file
foreach($Fields as $Name => $Value) //Output search text boxes
	print "<tr><td>$Value</td><td><input name=\"$Name\" style='width:200px;' value=\"".
		(isset($_POST[$Name]) ? htmlentities($_POST[$Name], ENT_QUOTES) : '').'"></td></tr>';//Text boxes w/ POSTed values,if set
print '</table><input type=submit value=Search></form>';

if(!isset($_POST[key($Fields)])) //If search data has not been POSTed, stop here
	return;
	
$SearchArray=Array('1=1'); //Search parameters are stored here. 1=1 is passed in case no POSTed search parameters are ...
                           //... given, so there is at least 1 WHERE parameter; it gets optimized out by the MySQL preprocessor anyways.
foreach($Fields as $Name => $Value) //Check each POSTed search parameter
	if(trim($_POST[$Name])!='') //If the POSTed search parameter is empty, do not use it as a search parameter
	{
		$V=mysql_escape_string($_POST[$Name]); //Prepare for SQL insertion
		$SearchArray[]=$Name.(strpos($Value, '#')===FALSE ? " LIKE '%$V%'" : "='$V'"); //Pound sign in the Viewable Name=exact ...
			//... value, otherwise, just a partial match
	}
//Get data from MySQL
mysql_connect('SQL_HOST', 'SQL_USERNAME', 'SQL_PASSWORD');
mysql_select_db('SQL_DATABASE');
$q=mysql_query('SELECT * FROM TABLENAME WHERE '.implode(' AND ', $SearchArray));

//Output retrieved data
$i=0;
while($d=mysql_fetch_assoc($q)) //Iterate through found rows
{
	if(!($i++)) //If this is the first row found, output header
	{
		print '<table border=1 cellpadding=0 cellspacing=0><tr><td>Num</td>'; //Start table and output first column header (row #)
		foreach($Fields as $Name => $Value) //Output the rest of the column headers (Viewable Names)
			print "<td>$Value</td>";
		print '</tr>'; //Finish header row
	}
	print '<tr bgcolor='.($i&1 ? 'white' : 'gray')."><td>$i</td>"; //Start the data field's row. Row's colors are alternating white and gray.
	foreach($Fields as $Name => $Value) //Output row data
		print '<td>'.$d[$Name].'</td>';
	print '</tr>'; //End data row
}

print ($i==0 ? 'No records found.' : '</table>'); //If no records are found, output an error message, otherwise, end the data table
?>
C Jump Tables
The unfortunate reality of different feature sets in different language implementations

I was thinking earlier today how it would be neat for C/C++ to be able to get the address of a jump-to label to be used in jump tables, specifically, for an emulator. A number of seconds after I did a Google query, I found out it is possible in gcc (the open source native Linux compiler) through the “label value operator” “&&”. I am crushed that MSVC doesn’t have native support for such a concept :-(.

The reason it would be great for an emulator is for emulating the CPU, in which, usually, each first byte of a CPU instruction’s opcode [see ASM] gives what the instruction is supposed to do. An example to explain the usefulness of a jump table is as follows:

void DoOpcode(int OpcodeNumber, ...)
{
	void *Opcodes[]={&&ADD, &&SUB, &&JUMP, &&MUL}; //assuming ADD=opcode 0 and so forth
	goto *Opcodes[OpcodeNumber];
  	ADD:
		//...
	SUB:
		//...
	JUMP:
		//...
	MUL:
		//...
}

Of course, this could still be done with virtual functions, function pointers, or a switch statement, but those are theoretically much slower. Having them in separate functions would also remove the ability to share local variables.

Then again, theoretically, it wouldn’t be too bad to use the _fastcall function calling convention with function pointers, and modern compilers SHOULD translate switches into jump tables in an instance like this, but modern compilers are so obfuscated you never know what they are really doing.

It would probably be best to try and code such an instance so that all 3 methods (function pointers, switch statement, jump table) could be utilized through compiler definitions, and then profile for whichever method is fastest and supported.

//Define the switch for which type of opcode picker we want
#define UseSwitchStatement
//#define UseJumpTable
//#define UseFunctionPointers

//Defines for how each opcode picker acts
#if defined(UseSwitchStatement)
	#define OPCODE(o) case OP_##o:
#elif defined(UseJumpTable)
	#define OPCODE(o) o:
	#define GET_OPCODE(o) &&o
#elif defined(UseFunctionPointers)
	#define OPCODE(o) void Opcode_##o()
	#define GET_OPCODE(o) (void*)&Opcode_##o
	//The above GET_OPCODE is actually a problem since the opcode functions aren't listed until after their ...
	//address is requested, but there are a couple of ways around that I'm not going to worry about going into here.
#endif

enum {OP_ADD=0, OP_SUB}; //assuming ADD=opcode 0 and so forth
void DoOpcode(int OpcodeNumber, ...)
{
	#ifndef UseSwitchStatement //If using JumpTable or FunctionPointers we need an array of the opcode jump locations
		void *Opcodes[]={GET_OPCODE(ADD), GET_OPCODE(SUB)}; //assuming ADD=opcode 0 and so forth
	#endif
	#if defined(UseSwitchStatement)
		switch(OpcodeNumber) { //Normal switch statement
	#elif defined(UseJumpTable)
		goto *Opcodes[OpcodeNumber]; //Jump to the proper label
	#elif defined(UseFunctionPointers)
		((void(*)(void))Opcodes[OpcodeNumber])(); //Call the proper function
		} //End the current function
	#endif

	//For testing under "UseFunctionPointers" (see GET_OPCODE comment under "defined(UseFunctionPointers)")
	//put the following OPCODE sections directly above this "DoOpcode" function
	OPCODE(ADD)
	{
		//...
	}
	OPCODE(SUB)
	{
		//...
	}

	#ifdef UseSwitchStatement //End the switch statement
	}
	#endif

#ifndef UseFunctionPointers //End the function
}
#endif

After some tinkering, I did discover through assembly insertion it was possible to retrieve the offset of a label in MSVC, so with some more tinkering, it could be utilized, though it might be a bit messy.
void ExamplePointerRetrieval()
{
	void *LabelPointer;
	TheLabel:
	_asm mov LabelPointer, offset TheLabel
}
LinkedIn Policies
It's always a bit of a risk giving out email addresses

Since I just added my résumé which mentions my LinkedIn page, I thought I’d mention something else I just discovered about LinkedIn.


I would normally never give out any of my contacts’ email addresses to third parties under any circumstance, but I decided there was very little risk in doing so at LinkedIn because it is a widely used website with many users that is also eTrust certified. Unfortunately, I have also heard eTrust certification isn’t exactly hard to get and shouldn’t have too much stock put in it, but it is still something.

Anyways, after reading LinkedIn’s privacy policy, I decided it would be ok to list some of my email contacts to discover whether they also used LinkedIn. I, of course, added a dummy email address of mine to watch for spam or advertisements, and to date it has not received anything; though I’m sure any company that illegally released email addresses wouldn’t be stupid enough to let go of newly acquired addresses immediately, but then again, I always assume too much of people/corporations... but I digress. I have since discovered that they keep all the emails you give them, because one of the emails I gave was recently used to sign up for a new account and LinkedIn immediately informed me of this.

While this is a nice extension to the "find your contacts through their emails" function, LinkedIn really should have given me an option to opt out of this, or at the very least informed me that it was keeping the emails I gave it on record. Unfortunately, even if they do have a good privacy policy and abide by it, there is still the chance a rogue staff member could harvest the emails and sell them.


Oh, LinkedIn is also a very buggy system in and of itself. I very often get timeouts and many other errors along the lines of “The server cannot perform this operation at this time, please try again later”. A friend of mine has also been having trouble linking our profiles together for more than a week now, with no response to his email to them… besides an auto response that had absolutely nothing to do with the reported problem.

Outputting directory contents in PHP
Rebuilding the wheel
A friend just asked me to write a PHP function to list all the contents of a directory and its sub-directories.
Nothing special here... just a simple example piece of code and boredom...
function ListContents($DirName)
{
	print '<ul>';
	$dir=opendir($DirName);
	while($file=readdir($dir))
		if($file!='.' && $file!='..')
		{
			$FilePath="$DirName/$file";
			$IsDir=is_dir($FilePath);
			print "<li>$file [".($IsDir ? 'D' : number_format(filesize($FilePath), 0, '.', ',')).']';
			if($IsDir)
				ListContents($FilePath);
			print '</li>';
		}
	closedir($dir);
	print '</ul>';
}
It wouldn’t be a bad idea to turn off PHP’s “output buffering” and turn on “implicit flush” when running something like this for larger directories.
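For example, something like this (a hypothetical usage, assuming buffering is on by default) before calling the function:
while(ob_get_level()) //Turn off all active output buffers
	ob_end_flush();
ob_implicit_flush(true); //Flush automatically after every output call
ListContents('c:\\temp');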
Example output for “ListContents('c:\\temp');”:
  • A.BMP [230]
  • Dir1 [D]
    • codeblocks-1.0rc2_mingw.exe [13,597,181]
    • Dir1a [D]
      • DEBUGUI.C [25,546]
  • Dir2 [D]
    • Dir3 [D]
      • HW.C [12,009]
      • INIFILE.C [9,436]
    • NTDETECT.COM [47,564]


    I decided to make it a little nicer afterwards by bolding the directories, adding their total size, and changing sizes to a human readable format. This function is a lot more memory intensive because it holds data in strings instead of immediately outputting.
    function HumanReadableSize($Size)
    {
    	$MetricSizes=Array('Bytes', 'KB', 'MB', 'GB', 'TB');
    	for($SizeOn=0;$Size>=1024 && $SizeOn<count($MetricSizes)-1;$SizeOn++) //Loops until Size is < a binary thousand (1,024) or we have run out of listed Metric Sizes
    		$Size/=1024;
    	return preg_replace('/\\.?0+$/', '', number_format($Size, 2, '.', ',')).' '.$MetricSizes[$SizeOn]; //Forces to a maximum of 2 decimal places, adds comma at thousands place, appends metric size
    }
    
    function ListContents2($DirName, &$RetSize)
    {
    	$Output='<ul>';
    	$dir=opendir($DirName);
    	$TotalSize=0;
    	while($file=readdir($dir))
    		if($file!='.' && $file!='..')
    		{
    			$FilePath="$DirName/$file";
    			if(is_dir($FilePath)) //Is directory
    			{
    				$DirContents=ListContents2($FilePath, $DirSize);
    				$Output.="<li><b>$file</b> [".HumanReadableSize($DirSize)."]$DirContents</li>";
    				$TotalSize+=$DirSize;
    			}
    			else //Is file
    			{
    				$FileSize=filesize($FilePath);
    				$Output.="<li>$file [".HumanReadableSize($FileSize).']</li>';
    				$TotalSize+=$FileSize;
    			}
    		}
    	closedir($dir);
    	$RetSize=$TotalSize;
    	$Output.='</ul>';
    	return $Output;
    }
    
    Example output for “print ListContents2('c:\\temp', $Dummy);”:
    • A.BMP [230 Bytes]
    • Dir1 [12.99 MB]
      • codeblocks-1.0rc2_mingw.exe [12.97 MB]
      • Dir1a [24.95 KB]
        • DEBUGUI.C [24.95 KB]
    • Dir2 [0 Bytes]
      • Dir3 [20.94 KB]
        • HW.C [11.73 KB]
        • INIFILE.C [9.21 KB]
      • NTDETECT.COM [46.45 KB]


      The memory problem can be rectified through a little extra IO by calculating the size of a directory before its contents are listed, thereby not needing to keep everything in a string.
      function CalcDirSize($DirName)
      {
      	$dir=opendir($DirName);
      	$TotalSize=0;
      	while($file=readdir($dir))
      		if($file!='.' && $file!='..')
      			$TotalSize+=(is_dir($FilePath="$DirName/$file") ? CalcDirSize($FilePath) :  filesize($FilePath));
      	closedir($dir);
      	return $TotalSize;
      }
      
      function ListContents3($DirName)
      {
      	print '<ul>';
      	$dir=opendir($DirName);
      	while($file=readdir($dir))
      		if($file!='.' && $file!='..')
      		{
      			$FilePath="$DirName/$file";
      			$IsDir=is_dir($FilePath);
      			$FileSize=($IsDir ? CalcDirSize($FilePath) : filesize($FilePath));
      			print '<li>'.($IsDir ? '<b>' : '').$file.($IsDir ? '</b>' : '').' ['.HumanReadableSize($FileSize).']';
      			if($IsDir) //Is directory, so output its contents (its size was already counted via CalcDirSize)
      				ListContents3($FilePath);
      			print '</li>';
      		}
      	closedir($dir);
      	print '</ul>';
      }
      
      Example output for “ListContents3('c:\\temp');”:
      • A.BMP [230 Bytes]
      • Dir1 [12.99 MB]
        • codeblocks-1.0rc2_mingw.exe [12.97 MB]
        • Dir1a [24.95 KB]
          • DEBUGUI.C [24.95 KB]
      • Dir2 [0 Bytes]
        • Dir3 [20.94 KB]
          • HW.C [11.73 KB]
          • INIFILE.C [9.21 KB]
        • NTDETECT.COM [46.45 KB]


        Of course, after all this, my friend took the original advice I gave him before writing any of this code, which was that using bash commands might get him to his original goal much more easily.
        Truecrypt 6.0 fixes
        I was too quick to judge
        TrueCrypt 6.0 [latest version] came out today, and I was looking at the version history. I mention this because I wrote a post about TrueCrypt 5.0 (3 days after it was released, on February the 5th of this year) and the problems I was having with it. I was not aware that after I submitted the bugs to them, they fixed the 2 important ones I reported (See 5.0a history) 4 days after I wrote the post, which were:
        • On computers equipped with certain brands of audio cards, when performing the system encryption pretest or when the system partition/drive is encrypted, the sound card drivers failed to load. This will no longer occur. (Windows Vista/XP/2003)
        • It is possible to access mounted TrueCrypt volumes over a network. (Windows)
        I am quite impressed that they did this so quickly, and am sad I did not find out until now. They also fixed the other missing feature I reported to them within a month of that [version 5.1]:
        • Support for hibernation on computers where the system partition is encrypted (previous versions of TrueCrypt prevented the system from hibernating when the system partition was encrypted). (Windows Vista/XP/2008/2003)

        Also in the version history [5.1a], this little paragraph made me smile
        • [Update 2008-04-02: Although we have not filed any complaint with Microsoft yet, we were contacted (on March 27) by Scott Field, a lead Architect in the Windows Client Operating System Division at Microsoft, who stated that he would like to investigate our requirements and look at possible solutions. We responded on March 31 providing details of the issues and suggested solutions.]

        Other very important features they have added for version 6.0 that I am super happy about:
        • Hidden operating systems, which are done in a really good way.
        • Embedded backup header (located at the end of the volume)
        • Up to 20% faster resuming from hibernation when the system partition/drive is encrypted. (I have always been super frustrated by the super slow hibernation resume of my now abandoned partition encryption software suite, BestCrypt.)
        • Multithreading support (Faster parallel processing, yay)

        I did some speed tests of hibernation support in XP and got the following numbers: (Results are averages of at least 5 tests, in seconds)
        Test Setup                                          Hibernation  Wakeup
        VMWare* w/ no encryption                            ~5.0         ~6.1
        VMWare* w/ TrueCrypt 6.0 full drive encryption      ~7.5         ~11
        VMWare* w/ TrueCrypt 6.0 decoy & dummy encryption   ~7.3         ~13.2
        Laptop** w/ no encryption                           ~12.8        4.8
        Laptop** w/ BestCrypt Volume Encryption             ~92.1        ~16.6
        Laptop** w/ TrueCrypt 6.0 full drive encryption     ~12.5        ~13.9
        Laptop** w/ TrueCrypt 6.0 decoy & dummy encryption  -            -
        *VMWare was running with 256MB of RAM and 1 virtual CPU on Laptop**. VMWare results were not always stable due to other processes on the host machine, so I terminated the worst offenders
        **Laptop is a 2.4GHz Pentium Core Duo with 2GB RAM and a 60GB hard drive running at 7200RPM


        ANYWAYS... The hidden operating system feature really excited me. Unfortunately, the documentation on it is quite cryptic itself, so I thought I’d try explaining it myself.
        TrueCrypt hidden operating system diagram
        TrueCrypt hidden OS diagram taken from http://www.truecrypt.org/docs/hidden-operating-system.php on 7/5/2008 and belongs to TrueCrypt

        The decoy (first) partition holds a decoy OS and is accessible from the password prompt (password #3) at bootup. You should not have any sensitive data in it, and can give out its password if need be. TrueCrypt recommends using this decoy OS at least as much as the hidden OS so that if someone checks out the decoy, they are not suspicious of it. If the perpetrator is suspicious of the decoy due to non-use, the size of the partition, or just the fact that you have TrueCrypt installed, you may need to fall back on the second stage of security described in the next paragraph.

        The outer (second) partition holds some decoy files and a hidden volume inside of it. It is accessible by either the decoy or hidden OS by opening the partition through a normal TrueCrypt device mounting (password #1). It is recommended to give out its password only if you have already been forced to mount your decoy OS and the perpetrator suspects a secure partition as is explained in the above paragraph. If any data is written to it after creation, it can destroy information at random within the Hidden OS (see “Partition Sizes” at the bottom).

        The hidden partition holds its own OS and is hidden within the outer (second) partition. It is accessible from the password prompt (password #2) at bootup or by mounting the partition from TrueCrypt as a device when the decoy OS is open. The decoy partition/OS is NOT accessible while the hidden OS is open.


        Basic installation procedure:
        • Create a computer with 2 partitions. The second (outer) partition must be 5% larger than the first (decoy) for a FAT file system, or 110% (2.1x) larger for a NTFS file system (see “Partition Sizes” at the bottom). You might as well make the outer partition FAT since it won’t be used much, if at all, and this won’t affect the hidden partition.
        • Install your operating system on the first (decoy) partition with all of your applications and data that are not sensitive.
        • Run the TrueCrypt hidden install, this does the following:
          • Asks for outer volume password (Password #1). Creates and formats the second (outer) partition/volume.
          • Lets you copy some “sensitive looking” files to the outer partition. Nothing should ever be changed or added to the outer partition after this, see “Partition Sizes” at the bottom.
          • Asks for hidden volume password (Password #2). The hidden partition is created within the outer partition.
          • Asks for decoy volume password (Password #3).
          • Rescue disk is created
          • All data from the first (decoy) partition is copied to the hidden partition, and then all data from the first (decoy) partition is encrypted.

        And finally, things that bugged me, because I like to vent :-) :
        • Forced creation of a rescue disk on full volume encryption. Having the ISO file is more than enough since it can be copied to other hard drives, but TrueCrypt wanted proof that the rescue disc was actually burned, so I just mounted the ISO to a virtual drive.
        • No customized pre-boot screens. This isn’t important really, but I loved my hokey ASCII art ^_^;.
        • Partition sizes: The hidden OS partition will be the exact same size as the decoy and the outer partition must be at least 5% larger for FAT and 110% larger for NTFS than the decoy.

        Partition sizes:

        The hidden OS partition will be the exact same size as the decoy partition because they are originally duplicates of each other, including their original partition tables, which include the size of the partition.

        The outer (second) partition that holds the hidden partition must be at least 5% larger for FAT and 110% larger for NTFS than the decoy. The reason for this is the file contents tables. NTFS, unfortunately in this case, stores its file table in the middle of the partition. The outer partition’s file table does not, however, affect the hidden partition in any way.

        So, for example (these numbers are theoretical, I am not entirely sure if these are correct), if we have a 2GB decoy partition, the outer NTFS partition must be at least 4.2GB and the hidden partition will be 2GB. If we made the outer partition 6GB, then 0-3GB would be writable, 3.0GB-3.6GB would be used for the file table, 3.6GB-4.0GB would be writable, and 4.0GB-6.0GB would be used by the hidden operating system. So, theoretically, you could write 3.4GB to the outer volume before problems started occurring, but I wouldn’t trust NTFS to only write to the beginning of the drive.

        Firefox Extensions
        Creating this list took way too long x.x;
        So I jumped on the bandwagon and switched over to Firefox 3.0 when it came out a week or two ago, and was pleasantly surprised that, after some forced addon (they used to be called extensions) updates, everything worked brilliantly, including my favorite plugin, Firebug. I meant to write this post containing the addons I use and recommend a long time ago (once again, jumping on the bandwagon, as everyone else and their dog that has a blog has done this topic too...), but now is as good a time as ever, especially since there are some updates for Firefox’s new version.

        • Adblock plus:
          • Block unwanted ads, images, and other multimedia.
          • Notes: Had to upgrade to this from just “Adblock”.
        • Adblock Filterset.G Updater:
          • A good set of ads to block for Adblock.
          • Notes: This doesn’t seem to be updated much anymore, and I never checked to see if it works with Adblock Plus.
        • AI Roboform Toolbar for Firefox:
          • This is a software suite that allows you to store passwords and personal information in container files encrypted (with AES) against a master password, so it’s pretty darn secure. It interfaces well with both IE and Firefox, and really helps with filling out personal info on those long tedious credit card forms and such.
          • Notes: I just wish it worked better outside of web browsers in the Windows environment... maybe one day I’ll make something for that, it would be fun.
        • BugMeNot:
          • Bypass web registration by checking the bugmenot.com database for free user-provided accounts.
        • Cache View:
          • Allows you to go to a cache for the page you are currently on from one of the many caching services like Google Cache, Coral Cache, and archive.org’s Wayback Machine.
          • Notes: I modified this to allow you to open all cache sites at once and to work for Firefox 3... maybe one of these days I’ll release the additions.
        • Download Statusbar:
          • “View and manage downloads from a tidy statusbar”
        • Firebug:
          • Required for [web] programmers, and still very useful for [web] developers. Some main features include:
            • JavaScript console for debug output and real-time JavaScript injection
            • JavaScript debugging
            • Realtime HTML DOM view
            • Realtime editing of DOM object information and positioning
            • DOM object CSS styles and where they came from
            • Downloaded files with their acquisition time
          • Notes: This is by far my favorite Firefox extension.
        • FireFTP:
          • Fully featured FTP manager.
          • Notes: You’ll never need to find a standalone FTP manager again once you’ve got this great Firefox-integrated one.
        • Greasemonkey:
          • Insertion of JavaScript scripts on specified web pages.
        • Html Validator:
          • Realtime HTML validation of viewed web pages without having to go through w3c.org (web standards committee).
        • IE Tab:
          • “Embedding Internet Explorer in tabs of Mozilla/Firefox”
          • Notes: IE is sometimes a necessity when people refuse to conform to standards, and this lets developers make sure things look right in the (unfortunately) most used web browser.
        • keyconfig [functions for] [Original?]:
          • (Re)bind keyboard shortcuts in Firefox.
          • Notes: I heavily rely on this since I’m a bit of a shortcut nut.
        • Locationbar2:
          • Adds options to the location bar like:
            • Highlighting the domain
            • Go to parent directories of your current URL by clicking
            • Hide the protocol (ex: “http://”).
          • Notes: I originally used this because it fixed a major problem that plagued Firefox, and still plagues IE, in which the address bar shows escaped URLs (like “Firefox%20Extensions” instead of “Firefox Extensions”), so foreign URLs, which use lots of non-ASCII characters, were next to impossible to read. I submitted this to Mozilla a ways back, and fortunately it was fixed for Firefox 3. This, IMO, is one of the most important fixes in Firefox 3, and it wasn’t even really advertised.
        • OpenDownload:
          • “Allows you to open ANY file (executables, etc.) from the internet into the default program assigned by your operating system, without needing to save it first.”
          • Notes: This is not marked as compatible with Firefox 3, but works fine. Firefox has added an “applications” tab to its options dialog that kind of takes care of this, but this still does at least allow direct opening of all file extensions without also mapping them in Firefox.
        • Tab Mix Plus:
          • “Tab browsing with an added boost.”
          • Notes: This is becoming less needed with each Firefox version upgrade, but it still has a lot of options in it that make it worthwhile.
        • User Agent Switcher:
          • Switch the “User Agent” of Firefox to fool pages into thinking you are using a different web browser or crawler.
          • Notes: There are many uses for this, one being to see how pages change for web crawlers.
        • View Cookies:
          • “View cookies of the current web page.”
          • Notes: Firefox 3 has added a feature to make this no longer needed, but I still much prefer the way this extension handles cookie viewing.
        • Web Developer:
          • A plethora of very useful web developer tools.
        Other addons I no longer use but can still be useful
        • Answers:
          • Alt+Click on any word or term for quick info from answers.com.
        • ChatZilla:
          • An IRC (it’s a kind of chat room protocol) interface through Firefox.
          • Notes: I’m sure I’d use this a lot more... if I actually used IRC.
        • DownThemAll! & FlashGot:
          • Ability to download lots of content and/or links from web pages.
        • Morning Coffee:
          • “Keep track of daily routine websites and opens them in tabs.” You can set websites to load by individual day, weekday/weekend, or every day.
          • Notes: No longer really needed since RSS has become so commonplace.
        • Page Update Checker:
          • “Automatically checks to see if a web page has changed.”
          • Notes: No longer really needed since RSS has become so commonplace.
        • Referrer History:
          • Viewing how you browsed to pages through a referrer tree.
          • Notes: This is not compatible with Firefox 3, hasn’t been updated for ages, and is extremely slow as it uses a brute force method to build the referrer tree. I might see if I can find a better version of something like this (or make it) if the need ever arises again.
        • Torbutton:
          • Toggle completely anonymous web browsing at the push of a button.
          • Notes: I found using the Tor network way too slow, so I have since abandoned it for faster methods, which I will post about some day. Tor still remains an excellent “fool-proof” way to stay anonymous on the internet though.
        • VideoDownloader:
          • Download videos from many popular sites.
          • Notes: I prefer just using Firebug and a download manager now...
        Addons I no longer use and are (I think) pretty much obsolete as of Firefox 3
        • Enhanced History Manager:
          • Lots of neat history managing features...
          • Notes: This addon hasn’t been updated in a long time... I’m not sure if it works with Firefox 3. To be honest, I don’t even remember what it does completely.
        • Image Zoom:
          • “Adds zoom functionality for images...”
          • Notes: Firefox 3 now has full page zoom, as opposed to just text, so this is no longer really needed.

        And as a Bonus, MozBackup is a simple utility for creating backups of Mozilla products’ profiles.
        An easier way to exchange style sheets in HTML
        Simple JavaScripting

        I have seen rather complex code out there for style sheet swapping in web browsers through JavaScript, and just found out a much simpler way works.

        I could have sworn I tried to do real-time style sheet swapping a very long while back and none of my tests turned out satisfactorily, but a friend was just asking me about it, so I redid the tests, and it all worked out perfectly, in an incredibly easy fashion, in IE 6 & 7 and Firefox 2.5 & 3. All that needs to be done is swap the href of the link object pointing to the external style sheet file.

        <link href="OLDSTYLESHEET.css" rel=stylesheet type="text/css" id=MyScriptSheet>
        <input type=button onclick="document.getElementById('MyScriptSheet').href='NEWSTYLESHEET.css'">

        Adding style sheets by dynamically inserting HTML via JavaScript seemed to work just fine too.

        document.body.innerHTML+='<link href="NEWSTYLESHEET.css" rel=stylesheet type="text/css">';
        FoxPro Table Memo Corruption
        Data integrity loss is such a drag :-(

        My father’s optometric practice has been using an old DOS database called “Eyecare” since (I believe) the early 80’s. For many years, he has been programming a new, very customized database from scratch in Microsoft Access that is backwards compatible with “Eyecare”, which uses a minor variant of FoxPro databases. I’ve been helping him with minor things on it for a number of years, and more recently I’ve been giving a lot more help in getting it secured and migrated from Microsoft Access databases (.mdb) into MySQL.

        A recent problem cropped up in that one of the primary tables started crashing Microsoft Access when it was opened (through a FoxPro ODBC driver). Through some tinkering, he discovered that the memo file (.fpt) for the table was corrupted, as trying to view any memo fields is what crashed Access. He asked me to see if I could help in recovering the file, which fortunately I can do at my leisure, as he keeps paper backups of everything for just such circumstances. He keeps daily backups of everything too… but for some reason that’s not an option.


        I went about trying to recover it through the easiest means first, namely, trying to open and export the database through FoxPro, which only recovered 187 of the ~9000 memo records. Next, I tried finding a utility online that did the job, and the first one I found that I thought should work was called “FoxFix”, but it failed miserably. There are a number of other Shareware utilities I could try, but I decided to just see how hard it would be to fix myself first.


        I opened the memo file up in a HEX editor, and after some very quick perusing and calculations, it was quite easy to determine the format (as reflected in the recovery code at the bottom of this post): a 512-byte file header, followed by records aligned to 32-byte blocks, where each record starts with two big endian DWORDs, the record type (1=memo) and the data length, followed by the data itself, null-padded to the next block boundary.

        So I continued on the path of seeing what I could do to fix the file.
        • First, I had it jump to the header of each record and just get the record data length, and I very quickly found multiple invalid record lengths.
        • Next, I had it attempt to fix each of these by determining the real length of the memo by searching for the first null terminator (“\0”) character, but I quickly discovered an oddity. There are weird sections in many of the memo fields in the format BYTE{0,0,0,1,0,0,0,1,x}, which is 2 little endian DWORDS which equal 1, and a final byte character (usually 0).
        • I added to the algorithm to include these as part of a memo record, and many more original memo lengths then agreed with my calculated memo lengths.
        • The final thing I did was determine how many invalid (non-keyboard) characters there were in the memo data fields. There were ~3500 0x8D characters, which were almost always followed by 0xA, so I assume these were supposed to be line breaks (Windows line breaks are denoted by [0xD/carriage return/\r],[0xA/line feed/\n]). There were only 5 other invalid characters, so I just changed these to question marks ‘?’.

        Unfortunately, Microsoft Access still crashed when I tried to access the comments fields, so I will next try to just recover the data, tie it to its primary keys (which I will need to determine through the table file [.dbf]), and then rebuild the table. I should be making another post when I get around to doing this.


        The following code which “fixes” the table’s memo file took about 2 hours to code up.
        //Usually included in windows.h
        typedef unsigned long DWORD;
        typedef unsigned char BYTE;
        
        //Includes
        #include <iostream.h> //cout
        #include <stdio.h> //file io
        #include <conio.h> //getch
        #include <ctype.h> //isprint
        
        //Memo file structure
        #pragma warning(disable: 4200) //Remove zero-sized array warning
        const MemoFileHeadLength=512;
        const RecordBlockLength=32; //This is actually found in the header at (WORD*)(Start+6)
        struct MemoRecord //Full structure must be padded at end with \0 to RecordBlockLength
        {
        	DWORD Type; //Type in little endian, 1=Memo
        	DWORD Length; //Length in little endian
        	BYTE Data[0];
        };
        #pragma warning(default: 4200)
        
        //Input and output files
        const char *InFile="EXAM.Fpt.old", *OutFile="EXAM.Fpt";
        
        //Assembly functions
        __forceinline DWORD BSWAP(DWORD n) //Swaps endianness
        {
        	_asm mov eax,n
        	_asm bswap eax
        	_asm mov n, eax
        	return n;
        }
        
        //Main function
        void main()
        {
        	//Read in file
        	const FileSize=6966592; //This should actually be found when the file is opened...
        	FILE* MyFile=fopen(InFile, "rb");
        	BYTE *MyData=new BYTE[FileSize];
        	fread(MyData, FileSize, 1, MyFile);
        	fclose(MyFile);
        
        	//Start checking file integrity
        	DWORD FilePosition=MemoFileHeadLength; //Where we currently are in the file
        	DWORD RecordNum=0, BadRecords=0, BadBreaks=0, BadChars=0; //Data Counters
        	const DWORD OneInLE=0x01000000; //One in little endian
        	while(FilePosition<FileSize) //Loop until EOF
        	{
        		FilePosition+=sizeof(((MemoRecord*)NULL)->Type); //Advance past record type (1=memo)
        		DWORD CurRecordLength=BSWAP(*(DWORD*)(MyData+FilePosition)); //Pull in little endian record size
        		cout << "Record #" << RecordNum++ << " reports " << CurRecordLength << " characters long. (Starts at offset " << FilePosition << ")" << endl; //Output record information
        
        		//Determine actual record length
        		FilePosition+=sizeof(((MemoRecord*)NULL)->Length); //Advance past record length
        		DWORD RealRecordLength=0; //Actual record length
        		while(true)
        		{
        			for(;MyData[FilePosition+RealRecordLength]!=0 && FilePosition+RealRecordLength<FileSize;RealRecordLength++) //Loop until \0 is encountered
        			{
        #if 1 //**Check for valid characters might not be needed
        				if(!isprint(MyData[FilePosition+RealRecordLength])) //Makes sure all characters are valid
        					if(MyData[FilePosition+RealRecordLength]==0x8D) //**0x8D maybe should be in ValidCharacters string? - If 0x8D is encountered, replace with 0xD
        					{
        						MyData[FilePosition+RealRecordLength]=0x0D;
        						BadBreaks++;
        					}
        					else //Otherwise, replace with a "?"
        					{
        						MyData[FilePosition+RealRecordLength]='?';
        						BadChars++;
        					}
        #endif
        			}
        
        			//Check for inner record memo - I'm not really sure why these are here as they don't really fit into the proper memo record format.... Format is DWORD(1), DWORD(1), BYTE(0)
        			if(((MemoRecord*)(MyData+FilePosition+RealRecordLength))->Type==OneInLE && ((MemoRecord*)(MyData+FilePosition+RealRecordLength))->Length==OneInLE /*&& ((MemoRecord*)(MyData+FilePosition+RealRecordLength))->Data[0]==0*/) //**The last byte seems to be able to be anything, so I removed its check
        			{ //If inner record memo, current memo must continue
        				((MemoRecord*)(MyData+FilePosition+RealRecordLength))->Data[0]=0; //**This might need to be taken out - Force last byte back to 0
        				RealRecordLength+=sizeof(MemoRecord)+1;
        			}
        			else //Otherwise, current memo is finished
        				break;
        		}
        		if(RealRecordLength!=CurRecordLength) //If given length != found length
        		{
        			//Tell the user a bad record was found
        			cout << "   Real Length=" << RealRecordLength << endl;
        			CurRecordLength=RealRecordLength;
        			BadRecords++;
        			//getch();
        
        			//Update little endian bad record length
        			((MemoRecord*)(MyData+FilePosition-sizeof(MemoRecord)))->Length=BSWAP(RealRecordLength);
        		}
        
        		//Move to next record - Each record, including RecordLength is padded to RecordBlockLength
        		DWORD RealRecordSize=sizeof(MemoRecord)+CurRecordLength;
        		FilePosition+=CurRecordLength+(RealRecordSize%RecordBlockLength==0 ? 0 : RecordBlockLength-RealRecordSize%RecordBlockLength);
        	}
        
        	//Tell the user file statistics
        	cout << "Total bad records=" << BadRecords << endl << "Total bad breaks=" << BadBreaks << endl << "Total bad chars=" << BadChars << endl;
        
        	//Output fixed data to new file
        	MyFile=fopen(OutFile, "wb");
        	fwrite(MyData, FileSize, 1, MyFile);
        	fclose(MyFile);
        
        	//Cleanup and wait for user keystroke to end
        	delete[] MyData;
        	getch();
        }
        
        Inlining Executable Resources
        Do you suffer from OPC (Obsessive Perfection Complex)? If not, you aren’t an engineer :-)

        I am somewhat obsessive about file cleanliness, and like to have everything I do well organized with any superfluous files removed. This especially translates into my source code, and even more so for released source code.

        Before I zip up the source code for any project, I always remove the extraneous workspace compilation files. These usually include:

        • C/C++: Debug & Release directories, *.ncb, *.plg, *.opt, and *.aps
        • VB: *.vbw
        • .NET: *.suo, *.vbproj.user

        Unfortunately, a new offender surfaced in the form of the Hyrulean Productions icon and Signature File for about pages. I did not want to have every source release include those 2 extra files, so I did research into inlining them in the resource script (.rc) file. Resources are just data directly compiled into an executable, and the resource script tells the compiler which resources to include and how to compile them in. All my C projects include a resource script for at least the file version, author information, and Hyrulean Productions icon. Anyways, this turned out to be way more of a pain in the butt than intended.


        There are 2 ways to load “raw data” (not a standard format like an icon, bitmap, string table, version information, etc) into a resource script. The first way is through loading an external file:
        RESOURCEID RESOURCETYPE DISCARDABLE "ResourceFileName"
        for example:
        DAKSIG	SIG	DISCARDABLE	"Dakusan.sig"
        RESOURCEID and RESOURCETYPE are arbitrary and user defined; it should also be noted that they should usually be in caps, as the compilers often seem to be picky about case.

        The second way is through inlining the data:
        RESOURCEID	RESOURCETYPE
        BEGIN
        	DATA
        END
        for example:
        DakSig	Sig
        BEGIN
        	0x32DA,0x2ACF,0x0306,...
        END
        Getting the data in the right format for the resource script is a relatively simple task.
        • First, acquire the data as a straight HEX string (2 hex characters per byte). I suggest WinHex for this job.
          On a side note, I have been using WinHex for ages and highly recommend it. It’s one of the most well built and fully featured application suites I know of.
        • Lastly, convert the straight HEX DATA (“DA32CF2A0603...”) into an array of proper endian hex values (“0x32DA,0x2ACF,0x0306...”). This can be done with a global replace regular expression of “(..)(..)” to “0x$2$1,”. I recommend Editpad Pro for this kind of work, another of my favorite pieces of software. As a matter of fact, I am writing this post right now in it :-).

        Here is where the real caveats and problems start falling into place. First, I noticed the resource data was corrupt for a few bytes at a certain location. It turned out to be Visual Studio wanting line lengths in the resource file to be less than ~4175 characters, so I just added a line break at that point.
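        Incidentally, the hex conversion steps (and the line length workaround) can be scripted. Here is a hypothetical PHP sketch of my own (the function name and the 500-values-per-line choice are arbitrary) that produces the inline resource data directly from a file:
        <?
        //Hypothetical sketch: turn a binary file into an inline resource script
        //array of little endian hex values, with line breaks to stay under the
        //~4175 character line limit mentioned above.
        function BinToResourceData($FileName)
        {
        	$Hex=strtoupper(bin2hex(file_get_contents($FileName))); //"DA32CF2A0603..."
        	if(strlen($Hex)%4) //Pad odd-length data to a whole number of 16-bit values
        		$Hex.='00';
        	$Words=Array();
        	for($i=0;$i<strlen($Hex);$i+=4) //Swap each byte pair into a little endian word
        		$Words[]='0x'.substr($Hex, $i+2, 2).substr($Hex, $i, 2);
        	$Lines=Array();
        	for($i=0;$i<count($Words);$i+=500) //500 values is about 3,500 characters per line
        		$Lines[]=implode(',', array_slice($Words, $i, 500));
        	return implode(",\n", $Lines);
        }
        print "DakSig\tSig\nBEGIN\n".BinToResourceData('Dakusan.sig')."\nEND";
        ?>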

        This idea worked great for the about page signature, which needed to be raw data anyways, but encoding the icon this way turned out to be impossible :-(. Visual Studio apparently requires external files be loaded if you want to use a pre-defined binary resource type (ICON, BITMAP, etc). The simple solution would be to inline the icon as a user defined raw data type, but unfortunately, the Win32 icon loading API functions (LoadIcon, CreateIconFromResource, LoadImage, etc) only seemed to work with properly defined ICONs. I believe the problem here is that when the compiler loads in the icon to include in the executable, it reformats it somewhat, so I would need to know this format. Again, unfortunately, Win32 APIs failed me. FindResource/FindResourceEx wouldn’t let me load the data for ICON types for direct copying (or reverse engineering) :-(. At this point, it wouldn’t be worth my time to try and get the proper format just to inline my Hyrulean Productions icon into resource scripts. I may come back to it later if I’m ever really bored.


        This unfortunately brings back a lot of bad old memories regarding Win32 APIs. A lot of the Windows system is really outdated, not nearly robust enough, or just way too obfuscated, and has, and still does, cause me innumerable migraines trying to get things working with their system.

        As an example, I just added the first about page to a C project, and getting fonts working on the form was not only a multi-hour-long knockdown drag-out due to multiple issues, but I also ended up having to jury-rig the final solution in exasperation due to time constraints. I wanted the C about pages to match the VB ones exactly, but font size numbers just wouldn’t conform between the VB GUI designer and Windows GDI (the Windows graphics API), so I just put in arbitrary font size numbers that matched visually instead of trying to find the right conversion process, as the documented font size conversion process was not yielding proper results. This is the main reason VB (and maybe .NET) is far superior in my book when dealing with GUIs (for ease of use at least, not necessarily ability and power). I know there are libraries out there that supposedly solve this problem, but I have not yet found one that I am completely happy with, which is why I started my own fully fledged cross-operating-system GUI library a ways back, but it won’t be completed for a long time.

        Secure way of proving IP ownership
        Proving that you did what you say you did

        So I was thinking of a new project that might be fun, useful, and possibly even turn a little profit, but I was talked out of it by a friend due to the true complexity of the prospect past the programming part. The concept isn’t exactly new by a long shot, but my idea for the implementation is, at least I would like to think, novel.

        For a very long time, it has been important to be able to prove, without a doubt, that you have the oldest copy of some IP, establishing you as the original creator. The usual approach to this is storing copies of the IP at a secure location with the storage time recorded. This is, I am told, very often used in the film industry, as well as many others.

        The main downside to this for the subscriber, IMO, is having their IP, which may be confidential, stored by a third party, and entrusting their secrets to an outsider’s security. Of course, if these services are done properly and are ISO certified for non-breachable secure storage, this shouldn’t be a problem as they are probably more secure than anything the user has themselves. One would like to think, though, that entrusting your IP to no one but yourself is the most secure method.

        The out-of-house storage method may also require that there be records accessible by others telling that you stored your IP elsewhere, and that it exists, which you may not want known either. This is not always a problem though because some places allow completely anonymous storage.

        A large downside for the provider is having to supply and maintain the medium for the secure storage, whether it be vaults for physical property, or hard drives for virtual property.


        My solution to this problem, for virtual property anyways, is to not have the provider permanently store the user’s data at all, but provide a means by which the provider can authenticate a set of the user’s data as being unchanged since a certain date. This would be accomplished by hashing the user’s data against a random salt. The salt would be determined by the date and would only be known by the provider.


        This would work as follows:
        • Every day, the server would create a completely random salt string of a fixed length, probably 128 bits. This random salt would be the only thing the server would need to remember and keep secret. This process could also be done beforehand for many days or years.
        • As the user uploaded the data through a secure connection, the server would process it in chunks, probably 1MB at a time, through the hash function.
        • The hash function, probably a standard one like MD5, would be slightly modified to multiply the current hash on each block iteration against the daily random salt. The salt would also be updated per upload by a known variable, like multiplying the salt against the upload size, which would be known beforehand, or the exact time of upload.
        • A signature from a public-key certificate of a combined string of the time of upload and the hash would be calculated.
        • The user would be returned a confirmation string, which they would need to keep, that contained the time of upload and the signature.

        Whenever the user wanted to verify their data, they would just have to resend their data and the confirmation string, and the server would return if it is valid or not.
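        To make this concrete, here is a minimal hypothetical sketch of the server side in PHP; everything in it (GetDailySalt, the key file, the salt mixing) is made up for illustration. Note that, for simplicity, it folds the daily salt in between hash chunks rather than modifying the hash function’s internals as described above:
        <?
        function GetDailySalt($Date) //Returns the server's secret random salt for the given date
        {
        	return '0123456789ABCDEF'; //Dummy 128 bit value for illustration
        }
        function TimestampData($DataFileName, $PrivateKeyFileName)
        {
        	$UploadTime=time();
        	$Salt=GetDailySalt(date('Y-m-d', $UploadTime));
        	$Salt=md5($Salt.filesize($DataFileName)); //Vary the salt per upload by a known variable (the upload size)
        
        	$Hash=hash_init('md5'); //Hash the data in 1MB chunks, folding the salt in after each chunk
        	$File=fopen($DataFileName, 'rb');
        	while(!feof($File))
        		hash_update($Hash, fread($File, 1048576).$Salt);
        	fclose($File);
        	$Hash=hash_final($Hash);
        
        	//Sign "UploadTime|Hash" with the server's private key
        	openssl_sign($UploadTime.'|'.$Hash, $Signature, file_get_contents($PrivateKeyFileName));
        	return $UploadTime.'|'.base64_encode($Signature); //The confirmation string the user keeps
        }
        ?>
        Verification would then just recompute the hash from the resent data (the server remembers its daily salts) and check the signature from the confirmation string with openssl_verify.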

        I was thinking the service would be free for maybe 10MB a day. Different account tiers with appropriate fees would be available that would give the user 1 month of access and an amount of upload bandwidth credits, which would roll over each month. Unlimited verifications would also be allowed for account holders, though these uploads would still be applied towards the user’s credits. Verifications without an account would incur a nominal charge.

        The only thing keeping me from pursuing this idea is that, for it to be truly worth it to the end users, the processing site and salt tables would have to be secured and ISO certified as such, which would be a lot more costly and trouble than the initial return would justify, I believe, and I don’t have the money to invest in it right now.


        I do need to find one of these normal storage services for myself soon though. I’ll post again about this when I do.



        [edit on 6/15/08 @ 5:04pm]
        Well, this isn’t exactly the same thing, but a lot like it.
        http://www.e-timestamp.com
        Online credit card misinformation
        Check your gut suspicions before acting

        I was just doing my accounting and I noticed I had 3 double-charges on my Capital One credit card that all happened within a 2 day period. I found this to be very odd since I have never been double-charged on any of my credit cards since I started using them 10 years ago when I was 14.

        So I went ahead and submitted 2 charge disputes with Capital One, and a third with the other company I saw double-charged. I then finished my accounting, and noticed that the balance showing up on my Capital One did not include those 3 charges. I validated my suspicions by calling up their customer relations department (getting a lady in India) and confirming that the charges only show up once in my account.

        I then sent emails to rescind my previous queries into having the double-charges refunded, and also noted in the email to Capital One that their web system (or possibly statement system) has an error and needs to be fixed. The double-charges actually weren’t showing up on the same statements. They showed up once (for May 16th and 17th) on my last month’s statement, and then again (May 17th and 19th) on my current month’s statement. Go Figure.


        [Edit on 6/13/08] A few days ago, after an annoying downtime on the Capital One credit card site, I noticed they added a new feature that now shows your latest charges within a certain period of days (15, 30, etc) instead of just the current billing cycle. So I’m pretty sure the above problem was due to them implementing this new system without warning the user or having any indication of the system change in the interface. I do know how annoying change control is, and the problems that come along with implementing new features on websites, which may temporarily confuse users, but I’d expect better from a multinational corporation like this. Then again, this isn’t the first time this kind of thing has happened on their website, so I shouldn’t be surprised.
        Project About Pages
        Big things come in small packages
        About Window Concept

        I’ve been thinking for a while that I need to add “about windows” to the executables of all my applications with GUIs. So I first made a test design [left, psd file attached].

        Unfortunately, this requires at least 25KB for the background alone, which is larger than many of my project executables themselves. This is a problem for me, as I like keeping executables small and simple.

        [PNG Signature] I therefore decided to scratch the background and just go with normal “about windows” and my signature in a spiffy font [BlizzardD]: (white background added by web browser for visibility)
        The above PNG signature is only 1.66KB, so “yay”, right? Wrong :-(. It quickly occurred to me that XP does not natively support PNG.

        [GIF Signature] My next thought was “what about a GIF?” (GIF is the predecessor to PNG, also lossless): (1.82KB)
        I remembered that GIF files worked for VB, so I thought the native Windows API might support it too without adding in extra DLLs, but alas, I was wrong. This at least partially solved the problem for me in Visual Basic, but not fully, as GIF does not support translucency, only 1 color of transparency, so the picture would look horribly aliased (pixelated).

        The final solution I decided on is having a small translucency-mask and alpha-blending it and the primary signature color (RGB(6,121,6)) onto the about window’s background.
        [GIF Signature Mask] Since alpha-blending/translucency is an 8 bit value, a gray-scale (also 8 bits per pixel) image is perfect as a translucency-mask format for VB: (1.82KB, GIF)
        You may note that this GIF is the exact same size as the previous GIF, which makes sense as it is essentially the exact same picture, just with swapped color palettes.
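        For reference, the blend itself is just a per-channel linear interpolation between the signature color and the window background; a minimal sketch in C, using the same convention as the code later in this post (a white mask value of 255 means fully transparent):
        UCHAR BlendChannel(UCHAR Mask, UCHAR Fore, UCHAR Back) //Blend one color channel of the signature onto the background
        {
        	float Alpha=1.0f-Mask/255.0f; //255 (white) = fully transparent, 0 (black) = fully opaque signature color
        	return (UCHAR)(Fore*Alpha+Back*(1.0f-Alpha));
        }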

        The final hurdle is how to import the picture into C with as little space wasted as possible. The solution to this is to create an easily decompressible alpha-mask (alpha means translucency).
        [BMP Signature Mask] I started with the bitmap mask: (25.6KB, BMP)
        From there, I figured there would be 2 easy formats for compression that would take very little code to decompress:
        • Number of Transparent Pixels, Number of Foreground Pixels in a Row, List of Foreground Pixel Masks, REPEAT... (This is a form of “Run-length encoding”)
        • Start the whole image off as transparent, and then list each group of foreground pixels with: X Start Coordinate, Y Start Coordinate, Number of Pixels in a Row, List of Foreground Pixel Masks
        It also helped that there were only 16 different alpha-masks, not including the fully transparent mask, so each alpha-mask index fits within half a byte (4 bits). I only implemented the first option, because I’m pretty sure the second would be larger: an x/y location takes more bits than a transparent run length number. For example, with the run sizes chosen below (6-bit transparent runs, 3-bit translucent runs, and 4-bit indexes), a row of 10 transparent pixels followed by 2 translucent pixels costs just 6+3+2*4=17 bits.

        Other variants could be used too, like counting the background as a normal mask index and doing straight run-length encoding with indexes, but I knew this would make the file much larger for 2 reasons: it would add a 17th alpha-mask, which would push index sizes up to 5 bits, and background run lengths are much longer (6 bits in this case) than non-background runs (only 3 bits in this case), so every run would need the longer length field. Anyways, it ended up creating a 1,652 byte file :-).


        This could also very easily be edited to input/output 8-bit indexed bitmaps, or even full color bitmaps (with a max of 256 colors, or as many as you want with a few more code modifications). If you wanted to use this for normal pictures with a solid background instead of an alpha-mask, just know that “Transparent” means “Background” and “Translucent” means “Non-Background” in the code.

        GIF and PNG file formats actually use similar techniques, but including the code for their decoders would cause a lot more code bloat than I wanted, especially since they [theoretically] include many more compression techniques than just run-length encoding. Programming for specific cases will [theoretically] always be smaller and faster than programming for general cases. On a side note, from past research I’ve done on the JPEG format, along with programming my NES Emulator, Hynes, I found that they [JPEG & NES] share the same main graphical compression technique [grouping colors into blocks and only recording color variations].


        The following is the code to create the compressed alpha-mask stream: [Direct link to C file with all of the following code blocks]
        //** Double stars denotes changes for custom circumstance [The About Window Mask]
        #include <windows.h>
        #include <stdio.h>
        #include <stdlib.h> //malloc/free
        #include <string.h> //memset/memcpy
        #include <conio.h>
        
        //Our encoding functions
        int ErrorOut(char* Error, FILE* HandleToClose); //If an error occurs, output
        UINT Encode(UCHAR* Input, UCHAR* Output, UINT Width, UINT Height); //Encoding process
        UCHAR NumBitsRequired(UINT Num); //Tests how many bits are required to represent a number
        void WriteToBits(UCHAR* StartPointer, UINT BitStart, UINT Value); //Write Value to Bit# BitStart after StartPointer - Assumes more than 8 bits are never written
        
        //Program constants
        const UCHAR BytesPerPixel=3, TranspMask=255; //24 bits per pixel, and white = transparent background color
        
        //Encoding file header
        typedef struct
        {
        	USHORT DataSize; //Data size in bits - **Should be UINT
        	UCHAR Width, Height; //**Should be USHORTs
        	UCHAR TranspSize, TranslSize; //Largest number of bits required for a run length for Transp[arent] and Transl[ucent]
        	UCHAR NumIndexes, Indexes[0]; //Number and list of indexes
        } EncodedFileHeader;
        
        int main()
        {
        	UCHAR *InputBuffer, *OutputBuffer; //Where we will hold our input and output data
        	FILE *File; //Handle to current input or output file
        	UINT FileSize; //Holds input and output file sizes
        
        	//The bitmap headers tell us about its contents
        	BITMAPFILEHEADER BitmapFileHead;
        	BITMAPINFOHEADER BitmapHead;
        
        	//Read in bitmap header and confirm file type
        	File=fopen("AboutWindow-Mask.bmp", "rb"); //Normally you'd read in the filename from passed arguments (argv)
        	if(!File) //Confirm file open
        		return ErrorOut("Cannot open file for reading", NULL);
        	fread(&BitmapFileHead, sizeof(BITMAPFILEHEADER), 1, File);
        	if(BitmapFileHead.bfType!=*(WORD*)"BM" || BitmapFileHead.bfReserved1 || BitmapFileHead.bfReserved2) //Confirm we are opening a bitmap
        		return ErrorOut("Not a bitmap", File);
        
        	//Read in the rest of the data
        	fread(&BitmapHead, sizeof(BITMAPINFOHEADER), 1, File);
        	if(BitmapHead.biPlanes!=1 || BitmapHead.biBitCount!=24 || BitmapHead.biCompression!=BI_RGB) //Confirm bitmap type - this code would probably have been simpler if I did an 8 bit indexed file instead... oh well, NBD.  **It has also been programmed for easy transition to 8 bit indexed files via the "BytesPerPixel" constant.
        		return ErrorOut("Bitmap must be in 24 bit RGB format", File);
        	FileSize=BitmapFileHead.bfSize-sizeof(BITMAPINFOHEADER)-sizeof(BITMAPFILEHEADER); //Size of the data portion
        	InputBuffer=malloc(FileSize);
        	fread(InputBuffer, FileSize, 1, File);
        	fclose(File);
        
        	//Run Encode
        	OutputBuffer=malloc(FileSize); //We should only ever need at most FileSize space for output (output should always be smaller)
        	memset(OutputBuffer, 0, FileSize); //Needs to be zeroed out due to how writing of data file is non sequential
        	FileSize=Encode(InputBuffer, OutputBuffer, BitmapHead.biWidth, BitmapHead.biHeight); //Encode the file and get the output size
        
        	//Write encoded data out
        	File=fopen("Output.msk", "wb");
        	fwrite(OutputBuffer, FileSize, 1, File);
        	fclose(File);
        	printf("File %d written with %d bytes\n", 1, FileSize);
        
        	//Free up memory and wait for user input
        	free(InputBuffer);
        	free(OutputBuffer);
        	getch(); //Pause for user input
        	return 0;
        }
        
        int ErrorOut(char* Error, FILE* HandleToClose) //If an error occurs, output
        {
        	if(HandleToClose)
        		fclose(HandleToClose);
        	printf("%s\n", Error);
        	getch(); //Pause for user input
        	return 1;
        }
        
        UINT Encode(UCHAR* Input, UCHAR* Output, UINT Width, UINT Height) //Encoding process
        {
        	UCHAR Indexes[256], NumIndexes, IndexSize, RowPad; //The index re-mappings, number of indexes, number of bits an index takes in output data, padding at input row ends for windows bitmaps
        	USHORT TranspSize, TranslSize; //Largest number of bits required for a run length for Transp[arent] (zero) and Transl[ucent] (non zero) - should be UCHAR's, but these are used as explained under "CurTranspLen" below
        	UINT BitSize, x, y, ByteOn, NumPixels; //Current output size in bits, x/y coordinate counters, current byte location in Input, number of pixels in mask
        
        	//Calculate some stuff
        	NumPixels=Width*Height; //Number of pixels in mask
        	RowPad=4-(Width*BytesPerPixel%4); //Account for windows DWORD row padding - see declaration comment
        	RowPad=(RowPad==4 ? 0 : RowPad);
        
        	{ //Do a first pass to find number of different mask values, run lengths, and their encoded values
        		const UCHAR UnusedIndex=255; //In our index list, unused indexes are marked with this constant
        		USHORT CurTranspLen, CurTranslLen; //Keep track of the lengths of the current transparent & translucent runs - TranspSize and TranslSize are temporarily used to hold the maximum run lengths
        		//Zero out all index references and counters
        		memset(Indexes, UnusedIndex, 256);
        		NumIndexes=0;
        		TranspSize=TranslSize=CurTranspLen=CurTranslLen=0;
        		//Start gathering data
        		for(y=ByteOn=0;y<Height;y++) //Column
        		{
        			for(x=0;x<Width;x++,ByteOn+=BytesPerPixel) //Row
        			{
        				UCHAR CurMask=Input[ByteOn]; //Curent alpha mask
        				if(CurMask!=TranspMask) //Translucent value?
        				{
        					//Determine if index has been used yet
        					if(Indexes[CurMask]==UnusedIndex) //We only need to check 1 byte per pixel as they are all the same for gray-scale **This would need to change if using non 24-bit or non gray-scale
        					{
        						((EncodedFileHeader*)Output)->Indexes[NumIndexes]=CurMask; //Save mask number in the index header
        						Indexes[CurMask]=NumIndexes++; //Save index number to the mask
        					}
        
        					//Length of current transparent run
        					TranspSize=(CurTranspLen>TranspSize ? CurTranspLen : TranspSize); //Max(CurTranspLen, TranspSize)
        					CurTranspLen=0;
        
        					//Length of current translucent run
        					CurTranslLen++;
        				}
        				else //Transparent value?
        				{
        					//Length of current translucent run
        					TranslSize=(CurTranslLen>TranslSize ? CurTranslLen : TranslSize);  //Max(CurTranslLen, TranslSize)
        					CurTranslLen=0;
        
        					//Length of current transparent run
        					CurTranspLen++;
        				}
        			}
        
        			ByteOn+=RowPad; //Account for windows DWORD row padding
        		}
        		//Determine number of required bits per value
        		printf("Number of Indexes: %d\nLongest Transparent Run: %d\nLongest Translucent Run: %d\n", NumIndexes,
        			TranspSize=CurTranspLen>TranspSize ? CurTranspLen : TranspSize, //Max(CurTranspLen, TranspSize)
        			TranslSize=CurTranslLen>TranslSize ? CurTranslLen : TranslSize  //Max(CurTranslLen, TranslSize)
        			);
        		IndexSize=NumBitsRequired(NumIndexes);
        		TranspSize=NumBitsRequired(TranspSize); //**This is currently overwritten a few lines down
        		TranslSize=NumBitsRequired(TranslSize); //**This is currently overwritten a few lines down
        		printf("Bit Lengths of - Indexes, Trasparent Run Length, Translucent Run Length: %d, %d, %d\n", IndexSize, TranspSize, TranslSize);
        	}
        
        	//**Modify run sizes (custom) - this function could be run multiple times with different TranspSize and TranslSize until the best values are found - the best values would always be a weighted average
        	TranspSize=6;
        	TranslSize=3;
        
        	//Start processing data
        	BitSize=(sizeof(EncodedFileHeader)+NumIndexes)*8; //Skip the file+bitmap headers and measure in bits
        	x=ByteOn=0;
        	do
        	{
        		//Transparent run
        		UINT CurRun=0;
        		while(x<NumPixels && Input[ByteOn]==TranspMask && CurRun<(UINT)(1<<TranspSize)-1) //First check is for EOF (tested before reading Input to avoid reading past the buffer), final check caps run size to max bit length
        		{
        			x++;
        			CurRun++;
        			ByteOn+=BytesPerPixel;
        			if(x%Width==0) //Account for windows DWORD row padding
        				ByteOn+=RowPad;
        		}
        		WriteToBits(Output, BitSize, CurRun);
        		BitSize+=TranspSize;
        
        		//Translucent run
        		CurRun=0;
        		BitSize+=TranslSize; //Prepare to start writing masks first
        		while(x<NumPixels && Input[ByteOn]!=TranspMask && CurRun<(UINT)(1<<TranslSize)-1) //First check is for EOF, final check caps run size to max bit length
        		{
        			WriteToBits(Output, BitSize+CurRun*IndexSize, Indexes[Input[ByteOn]]);
        			x++;
        			CurRun++;
        			ByteOn+=BytesPerPixel;
        			if(x%Width==0) //Account for windows DWORD row padding
        				ByteOn+=RowPad;
        		}
        		WriteToBits(Output, BitSize-TranslSize, CurRun); //Write the mask before the indexes
        		BitSize+=CurRun*IndexSize;
        	} while(x<NumPixels);
        
        	{ //Output header
        		EncodedFileHeader *OutputHead;
        		OutputHead=(EncodedFileHeader*)Output;
        		OutputHead->DataSize=BitSize-(sizeof(EncodedFileHeader)+NumIndexes)*8; //Length of file in bits not including header
        		OutputHead->Width=Width;
        		OutputHead->Height=Height;
        		OutputHead->TranspSize=(UCHAR)TranspSize;
        		OutputHead->TranslSize=(UCHAR)TranslSize;
        		OutputHead->NumIndexes=NumIndexes;
        	}
        	return BitSize/8+(BitSize%8 ? 1 : 0); //Return entire length of file in bytes
        }
        
        UCHAR NumBitsRequired(UINT Num) //Tests how many bits are required to represent a number
        {
        	UCHAR RetNum;
        	_asm //Find the most significant bit
        	{
        		xor eax, eax //eax=0
        		bsr eax, Num //Find most significant bit in eax
        		mov RetNum, al
        	}
        	return RetNum+((UCHAR)(1<<RetNum)==Num ? 0 : 1); //Test if the most significant bit is the only one set, if not, at least 1 more bit is required
        }
        
        void WriteToBits(UCHAR* StartPointer, UINT BitStart, UINT Value) //Write Value to Bit# BitStart after StartPointer - Assumes more than 8 bits are never written
        {
        	*(WORD*)(&StartPointer[BitStart/8])|=Value<<(BitStart%8);
        }
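        
        A side note on NumBitsRequired: the inline assembly is MSVC/x86 specific. A portable replacement that computes the same result (for Num of at least 1) would be:
        UCHAR NumBitsRequiredPortable(UINT Num) //Portable version of NumBitsRequired - no inline assembly
        {
        	UCHAR Bits=0;
        	while((UINT)(1<<Bits)<Num) //Add bits until they can reach Num
        		Bits++;
        	return Bits;
        }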
        

        The code to decompress the alpha mask in C is as follows: (Shares some header information with above code)
        //Decode
        void Decode(UCHAR* Input, UCHAR* Output); //Decoding process
        UCHAR ReadBits(UCHAR* StartPointer, UINT BitStart, UCHAR BitSize); //Read value from Bit# BitStart after StartPointer - Assumes more than 8 bits are never read
        UCHAR NumBitsRequired(UINT Num); //Tests how many bits are required to represent a number --In Encoding Code--
        
        int main()
        {
        	//--Encoding Code--
        		UCHAR *InputBuffer, *OutputBuffer; //Where we will hold our input and output data
        		FILE *File; //Handle to current input or output file
        		UINT FileSize; //Holds input and output file sizes
        	
        		//The bitmap headers tell us about its contents
        		//Read in bitmap header and confirm file type
        		//Read in the rest of the data
        		//Run Encode
        		//Write encoded data out
        	//--END Encoding Code--
        
        	//Run Decode
        	UCHAR* O2=(BYTE*)malloc(BitmapFileHead.bfSize);
        	Decode(OutputBuffer, O2);
        
        /*	//If writing back out to a 24 bit windows bitmap, this adds the row padding back in
        	File=fopen("output.bmp", "wb");
        	fwrite(&BitmapFileHead, sizeof(BITMAPFILEHEADER), 1, File);
        	fwrite(&BitmapHead, sizeof(BITMAPINFOHEADER), 1, File);
        	fwrite(O2, BitmapFileHead.bfSize-sizeof(BITMAPINFOHEADER)-sizeof(BITMAPFILEHEADER), 1, File);*/
        
        	//Free up memory and wait for user input --In Encoding Code--
        	return 0;
        }
        
        //Decoding
        void Decode(UCHAR* Input, UCHAR* Output) //Decoding process
        {
        	EncodedFileHeader H=*(EncodedFileHeader*)Input; //Save header locally so we have quick memory lookups
        	UCHAR Indexes[256], IndexSize=NumBitsRequired(H.NumIndexes); //Save indexes locally so we have quick lookups, use 256 index array so we don't have to allocate memory
        	UINT BitOn=0; //Bit we are currently on in reading
        	memcpy(Indexes, ((EncodedFileHeader*)Input)->Indexes, 256); //Save the indexes
        	Input+=(sizeof(EncodedFileHeader)+H.NumIndexes); //Start reading input past the header
        
        	//Unroll/unencode all the pixels
        	do
        	{
        		UINT i, l; //index counter, length (transparent and then index)
        		//Transparent pixels
        		memset(Output, TranspMask, l=ReadBits(Input, BitOn, H.TranspSize)*BytesPerPixel);
        		Output+=l;
        
        		//Translucent pixels
        		l=ReadBits(Input, BitOn+=H.TranspSize, H.TranslSize);
        		BitOn+=H.TranslSize;
        		for(i=0;i<l;i++) //Write the gray-scale value out to the pixel's 3 bytes - the 3 chained assignments below could be their own loop, which would unroll itself anyways, but this way the ReadBits+index lookup is only done once - ** Would need a real loop if not using gray-scale or 24 bit output
        			Output[i*BytesPerPixel]=Output[i*BytesPerPixel+1]=Output[i*BytesPerPixel+2]=Indexes[ReadBits(Input, BitOn+i*IndexSize, IndexSize)];
        		Output+=l*BytesPerPixel;
        		BitOn+=l*IndexSize;
        	} while(BitOn<H.DataSize);
        
        /*	{ //If writing back out to a 24 bit windows bitmap, this adds the row padding back in
        		UINT i;
        		UCHAR RowPad=4-(H.Width*BytesPerPixel%4); //Account for windows DWORD row padding
        		RowPad=(RowPad==4 ? 0 : RowPad);
        		Output-=H.Width*H.Height*BytesPerPixel; //Restore original output pointer
        		for(i=H.Height;i>0;i--) //Go backwards so data doesn't overwrite itself
        			memcpy(Output+(H.Width*BytesPerPixel+RowPad)*i, Output+(H.Width*BytesPerPixel)*i, H.Width*BytesPerPixel);
        	}*/
        }
        
        UCHAR ReadBits(UCHAR* StartPointer, UINT BitStart, UCHAR BitSize) //Read value from Bit# BitStart after StartPointer - Assumes more than 8 bits are never read
        {
        	return (*(WORD*)&StartPointer[BitStart/8]>>BitStart%8)&((1<<BitSize)-1);
        }
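        
        As a quick sanity check of the bit-packing pair, a snippet like the following, dropped into a main() alongside the two functions, should print “5 9 3”:
        UCHAR Buffer[8]={0}; //Zeroed, since WriteToBits only ORs bits in
        WriteToBits(Buffer, 0, 5); //3 bit value at bit 0
        WriteToBits(Buffer, 3, 9); //4 bit value at bit 3
        WriteToBits(Buffer, 7, 3); //2 bit value straddling the first byte boundary
        printf("%d %d %d\n", ReadBits(Buffer, 0, 3), ReadBits(Buffer, 3, 4), ReadBits(Buffer, 7, 2));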
        

        Of course, I added some minor assembly and optimized the decoder code to get it from 335 to 266 bytes, which is only 69 bytes less :-\, but it’s something (measured using my Small project). There is no real reason to include it here, as it’s in many of my projects and the included C file for this post.

        And then some test code just for kicks...
        //Confirm Decoding
        BOOL CheckDecode(UCHAR* Input1, UCHAR* Input2, UINT Width, UINT Height); //Confirm Decoding
        
        //---- Put in main function above "//Free up memory and wait for user input" ----
        printf(CheckDecode(InputBuffer, O2, BitmapHead.biWidth, BitmapHead.biHeight) ? "good" : "bad");
        
        BOOL CheckDecode(UCHAR* Input1, UCHAR* Input2, UINT Width, UINT Height) //Confirm Decoding
        {
        	UINT x,y,i;
        	UCHAR RowPad=4-(Width*BytesPerPixel%4); //Account for windows DWORD row padding
        	RowPad=(RowPad==4 ? 0 : RowPad);
        
        	for(y=0;y<Height;y++)
        		for(x=0;x<Width;x++)
        			for(i=0;i<BytesPerPixel;i++)
        				if(Input1[y*(Width*BytesPerPixel+RowPad)+x*BytesPerPixel+i]!=Input2[y*(Width*BytesPerPixel)+x*BytesPerPixel+i])
        					return FALSE;
        	return TRUE;
        }
        

        From there, it just has to be loaded into a bit array for manipulation and set back to a bitmap device context, and it’s done!
        VB Code: (Add the signature GIF as a picture box where it is to show up and set its “Visible” property to “false” and “Appearance” to “flat”)
        'Swap in and out bits
        Private Declare Function GetDIBits Lib "gdi32" (ByVal aHDC As Long, ByVal hBitmap As Long, ByVal nStartScan As Long, ByVal nNumScans As Long, lpBits As Any, lpBI As BITMAPINFOHEADER, ByVal wUsage As Long) As Long
        Private Declare Function SetDIBitsToDevice Lib "gdi32" (ByVal hdc As Long, ByVal x As Long, ByVal y As Long, ByVal dx As Long, ByVal dy As Long, ByVal SrcX As Long, ByVal SrcY As Long, ByVal Scan As Long, ByVal NumScans As Long, Bits As Any, BitsInfo As BITMAPINFOHEADER, ByVal wUsage As Long) As Long
        Private Type RGBQUAD
        		b As Byte
        		g As Byte
        		r As Byte
        		Reserved As Byte
        End Type
        Private Type BITMAPINFOHEADER '40 bytes
        		biSize As Long
        		biWidth As Long
        		biHeight As Long
        		biPlanes As Integer
        		biBitCount As Integer
        		biCompression As Long
        		biSizeImage As Long
        		biXPelsPerMeter As Long
        		biYPelsPerMeter As Long
        		biClrUsed As Long
        		biClrImportant As Long
        End Type
        Private Const DIB_RGB_COLORS = 0 '  color table in RGBs
        
        'Prepare colors
        Private Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" (Destination As Any, Source As Any, ByVal Length As Long)
        Private Declare Function GetBkColor Lib "gdi32" (ByVal hdc As Long) As Long
        
        Public Sub DisplaySignature(ByRef TheForm As Form)
            'Read in Signature
            Dim BitmapLength As Long, OutBitmap() As RGBQUAD, BitInfo As BITMAPINFOHEADER, Signature As PictureBox
            Set Signature = TheForm.Signature
            BitmapLength = Signature.Width * Signature.Height
            ReDim OutBitmap(0 To BitmapLength - 1) As RGBQUAD
            With BitInfo
                    .biSize = 40
                    .biWidth = Signature.Width
                    .biHeight = -Signature.Height
                    .biPlanes = 1
                    .biBitCount = 32
                    .biCompression = 0 'BI_RGB
                    .biSizeImage = .biWidth * 4 * -.biHeight
            End With
            GetDIBits Signature.hdc, Signature.Image, 0, Signature.Height, OutBitmap(0), BitInfo, DIB_RGB_COLORS
            
            'Alpha blend signature
            Dim i As Long, Alpha As Double, BackColor As RGBQUAD, ForeColor As RGBQUAD, OBC As Long, OFC As Long
            OFC = &H67906
            OBC = GetBkColor(TheForm.hdc)
            CopyMemory BackColor, OBC, 4
            CopyMemory ForeColor, OFC, 4
            For i = 0 To BitmapLength - 1
                Alpha = 1 - (CDbl(OutBitmap(i).r) / 255)
                OutBitmap(i).r = ForeColor.r * Alpha + BackColor.r * (1 - Alpha)
                OutBitmap(i).g = ForeColor.g * Alpha + BackColor.g * (1 - Alpha)
                OutBitmap(i).b = ForeColor.b * Alpha + BackColor.b * (1 - Alpha)
            Next i
            
            SetDIBitsToDevice TheForm.hdc, Signature.Left, Signature.Top, Signature.Width, Signature.Height, 0, 0, 0, Signature.Height, OutBitmap(0), BitInfo, DIB_RGB_COLORS
            TheForm.Refresh
        End Sub
        

        C Code
        //Prepare to decode signature
        	//const UCHAR BytesPerPixel=4, TranspMask=255; //32 bits per pixel (for quicker copies and such - variable not used due to changing BYTE*s to DWORD*s), and white=transparent background color - also not used anymore since we directly write in the background color
        	//Load data from executable
        	HGLOBAL GetData=LoadResource(NULL, FindResource(NULL, "DakSig", "Sig")); //Load the resource from the executable
        	BYTE *Input=(BYTE*)LockResource(GetData); //Get the resource data
        
        	//Prepare header and decoding data
        	UINT BitOn=0; //Bit we are currently on in reading
        	EncodedFileHeader H=*(EncodedFileHeader*)Input; //Save header locally so we have quick memory lookups
        	DWORD *Output=Signature=new DWORD[H.Width*H.Height]; //Allocate signature memory
        
        	//Prepare the index colors
        	DWORD Indexes[17], IndexSize=NumBitsRequired(H.NumIndexes); //Save full color indexes locally so we have quick lookups, use 17 index array so we don't have to allocate memory (since we already know how many there will be), #16=transparent color
        	DWORD BackgroundColor=GetSysColor(COLOR_BTNFACE), FontColor=0x067906;
        	BYTE *BGC=(BYTE*)&BackgroundColor, *FC=(BYTE*)&FontColor;
        	for(UINT i=0;i<16;i++) //Alpha blend the indexes
        	{
        		float Alpha=((EncodedFileHeader*)Input)->Indexes[i] / 255.0f;
        		BYTE IndexColor[4];
        		for(int n=0;n<3;n++)
        			IndexColor[n]=(BYTE)(BGC[n]*Alpha + FC[n]*(1-Alpha));
        		//IndexColor[3]=0; //Don't really need to worry about the last byte as it is unused
        		Indexes[i]=*(DWORD*)IndexColor;
        	}
        	Indexes[16]=BackgroundColor; //Translucent background = window background color
        
        //Unroll/unencode all the pixels
        Input+=(sizeof(EncodedFileHeader)+H.NumIndexes); //Start reading input past the header
        do
        {
        	UINT l; //Length (transparent and then index)
        	//Transparent pixels
        	memsetd(Output, Indexes[16], l=ReadBits(Input, BitOn, H.TranspSize));
        	Output+=l;
        
        	//Translucent pixels
        	l=ReadBits(Input, BitOn+=H.TranspSize, H.TranslSize);
        	BitOn+=H.TranslSize;
        	for(i=0;i<l;i++) //Write the pre-blended full color value out to each pixel - a single DWORD write per pixel here, unlike the 3 byte writes in the gray-scale decoder above
        		Output[i]=Indexes[ReadBits(Input, BitOn+i*IndexSize, IndexSize)];
        	Output+=l;
        	BitOn+=l*IndexSize;
        } while(BitOn<H.DataSize);
        
        //Output the signature
        const BITMAPINFOHEADER MyBitmapInfo={sizeof(BITMAPINFOHEADER), 207, 42, 1, 32, BI_RGB, 0, 0, 0, 0, 0};
        SetDIBitsToDevice(MyDC, x, y, MyBitmapInfo.biWidth, MyBitmapInfo.biHeight, 0, 0, 0, MyBitmapInfo.biHeight, Signature, (BITMAPINFO*)&MyBitmapInfo, DIB_RGB_COLORS);
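        
        Note that memsetd is not a standard C function; it is just a DWORD-wide memset, which at its simplest would be:
        void memsetd(DWORD* Dest, DWORD Value, UINT Count) //Fill Count DWORDs at Dest with Value
        {
        	while(Count--)
        		*Dest++=Value;
        }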
        

        This all adds ~3.5KB to each VB project, and ~2KB to each C/CPP project. Some other recent additions to all project executables include the Hyrulean Productions icon (~1KB) and file version information (1-2KB). I know that a few KB doesn’t seem like much, but when executables are often around 10KB, it can almost double their size.

        While I’m on the topic of project sizes, I should note that I always compress their executables with UPX, a very nifty executable compressor. It would often be more prudent to use my Small project, but I don’t want to complicate my open-source code.


        One other possible solution I did not pursue would be to take the original font and create a subset font of it with only the letters (and font size?) I need, and see if that file would be smaller. I doubt it would have worked well though.
        Useful Bash commands and scripts
        Unix is so great
        First, to find out more about any bash command, use
        man COMMAND

        Now, a primer on the three most useful bash commands: (IMO)
        find:
        Find will search through a directory and its subdirectories for objects (files, directories, links, etc) satisfying its parameters.
        Parameters are written like a math query, with parentheses for order of operations (make sure to escape them with a “\”!), -a for boolean “and”, -o for boolean “or”, and ! for “not”. If neither -a nor -o is specified, -a is assumed.
        For example, to find all files that contain “conf” but do not contain “.bak” as the extension, OR are greater than 5MB:
        find -type f \( \( -name "*conf*" ! -name "*.bak" \) -o -size +5120k \)
        Some useful parameters include:
        • -maxdepth & -mindepth: only look through certain levels of subdirectories
        • -name: name of the object (-iname for case insensitive)
        • -regex: name of object matches regular expression
        • -size: size of object
        • -type: type of object (block special, character special, directory, named pipe, regular file, symbolic link, socket, etc)
        • -user & -group: object is owned by user/group
        • -exec: exec a command on found objects
        • -print0: output each object separated by a null terminator (great so other programs don’t get confused by whitespace characters)
        • -printf: output specified information on each found object (see man file)

        For any numeric parameter n, use:
        +n for greater than n
        -n for less than n
        n for exactly n

        For a complete reference, see your find’s man page.

        xargs:
        xargs passes piped arguments to another command as trailing arguments.
        For example, to list information on all files in a directory greater than 1MB: (Note this will not work with paths with spaces in them, use “find -print0” and “xargs -0” to fix this)
        find -size +1024k | xargs ls -l
        Some useful parameters include:
        • -0: piped arguments are separated by null terminators
        • -n: max arguments passed to each command
        • -i: replaces “{}” with the piped argument(s)

        So, for example, if you had 2 mirrored directories, and wanted to sync their modification timestamps:
        cd /ORIGINAL_DIRECTORY
        find -print0 | xargs -0 -i touch -m -r "{}" "/MIRROR_DIRECTORY/{}"

        For a complete reference, see your xargs’s man page.

        grep:
        GREP is used to search through data for plain text, regular expression, or other pattern matches. You can use it to search through both pipes and files.
        For example, to get your number of CPUs and their speeds:
        cat /proc/cpuinfo | grep MHz
        Some useful parameters include:
        • -E: use extended regular expressions
        • -P: use perl regular expressions
        • -l: output files with at least one match (-L for no matches)
        • -o: show only the matching part of the line
        • -r: recursively search through directories
        • -v: invert to only output non-matching lines
        • -Z: separates matches with null terminator

        So, for example, to list all files under your current directory that contain “foo1”, “foo2”, or “bar”, you would use:
        grep -rlE "foo(1|2)|bar"

        For a complete reference, see your grep’s man page.

        And now some useful commands and scripts:
        List size of subdirectories:
        du --max-depth=1
        The --max-depth parameter specifies how many sub levels to list.
        -h can be added for more human readable sizes.

        List number of files in each subdirectory*:
        #!/bin/bash
        export IFS=$'\n' #Forces only newlines to be considered argument separators
        for dir in `find -maxdepth 1 -type d`
        do
        	a=`find "$dir" -type f | wc -l`
        	if [ $a != "0" ]
        	then
        		echo $dir $a
        	fi
        done
        
        and to sort those results
        SCRIPTNAME | sort -n -k2

        List number of different file extensions in current directory and subdirectories:
        find -type f | grep -Eo "\.[^\.]+$" | sort | uniq -c | sort -nr

        Replace text in file(s):
        perl -i -pe 's/search1/replace1/g; s/search2/replace2/g' FILENAMES
        If you want to make pre-edit backups, include an extension after “-i” like “-i.orig”

        Perform operations in directories with too many files to pass as arguments: (in this example, remove all files from a directory 100 at a time instead of using “rm -f *”)
        find -type f | xargs -n100 rm -f

        Force kill all processes whose names contain a string:
        pkill -9 STRING
        (“killall -9 NAME” also works, but killall matches exact process names only; add -f to pkill to match against the full command line instead)

        Transfer MySQL databases between servers: (Works in Windows too)
        mysqldump -u LOCAL_USER_NAME -p LOCAL_DATABASE | mysql -u REMOTE_USER_NAME -p -D REMOTE_DATABASE -h REMOTE_SERVER_ADDRESS
        “-p” specifies a password is needed

        Some lesser known commands that are useful:
        screen: This opens up a virtual console session that can be disconnected and reconnected from without stopping the session. This is great when connecting to console through SSH so you don’t lose your progress if disconnected.
        htop: An updated version of top, which is a process information viewer.
        iotop: A process I/O (input/output - hard drive access) information viewer. Requires Python ≥ 2.5 and I/O accounting support compiled into the Linux kernel.
        dig: Domain information retrieval. See “Diagnosing DNS Problems” Post for more information.

        More to come later...

        *Anything starting with “#!/bin/bash” is intended to be put into a script.
        Zelda Treasure Flaws
        The only time when having too much money is a problem

        I had meant to write this post back when I beat “Zelda: Twilight Princess” a few days after it and the Nintendo Wii came out in 2006, but never got around to it, and the idea of writing about a game that came out long ago seemed rather antiquated. The initiative to write this post popped up again though as I just finished replaying “Zelda: Ocarina of Time” (N64).

        I have been a really big Zelda fan for a very long time, and have played most of the series. I got to a GameStop ~8 hours, IIRC, before they started preordering the Wii to make sure I could play Twilight Princess as soon as it came out, as I was very anxious to play it. It was a good thing I did too, because when the Wii actually came out, they were next to impossible to acquire. I knew of many people having to wait in lines well over 15 hours to get one soon after the release, and they were still rarities to attain well over a year later.

        While I really enjoyed Twilight Princess, I was very frustrated by a rupee and treasure problem. “Zelda” (NES) and “Link to the Past” (SNES) had it right. Whenever you found a secret in those games it was something really worth it, namely, a heart piece (increased your life meter), or often even a new item. Rupees (in game money) were hard earned through slaying enemies, only rarely given in bulk as prizes, and you almost always needed more. As I played through Twilight Princess, I was very frustrated that almost every secret I found, while hoping for something worth it like a heart piece, was almost always a mass of rupees. There were at least 50 chests I knew of by the end of the game filled with rupees that I couldn’t acquire because I was almost always maxed out on the amount I could carry. What’s even worse is that the game provided you a means to pretty much directly pinpoint where all heart pieces were. These problems pretty much ruined the enjoyment of the search for secret treasures in the game. You could easily be pointed directly to where all hearts were, new game items were only acquirable as primary dungeon treasures, and the plethora of rupees was next to worthless.

        So, as I was replaying Ocarina of Time, I realized how unnecessary rupees were in that game too. There are really only 2 places in the whole game you need rupees to buy important items, one of which is during your very first task within the first few minutes of the game. The only other use for rupees is a side quest to buy magic beans, which takes up a small chunk of your pocket change through the game, but besides that, there is no point to the money system in the game, as you never really need it for anything. What’s even more of a slap in the face is that one of the primary side quests in the game just rewards you with larger coin purses to carry more rupees, which, again, you will never even need.

        While these games are extremely fun, this game design flaw just irks me. Things like this will never stop me from playing new Zelda games, however, or even replaying the old ones from time to time, especially my by-far favorite, Link to the Past, as they are all excellent works. I would even call them pieces of art. Miyamoto forever :-).

        Gollum! Cloned?
        Subliminal theft?

        I just now finished watching Disney’s “The Black Cauldron”. While a rather poor example of a Disney animated film, there is one element that really caught me by surprise. One of the characters, Gurgi, acted, sounded, and moved just like Gollum from Peter Jackson’s rendition of Lord of the Rings. The way Gurgi talked, his inflections, his character’s nature and actions were all pretty much exactly how Gollum was portrayed. I’m not necessarily saying Gurgi was stolen from LoTR, or alternately that Jackson copied Gurgi, but they are a bit too eerily similar for me not to speculate.

        The Peter Pan Chronicles
        Good children stories can be fun no matter how old you are

        I’ve been on a bit of a Peter Pan kick lately. It all started with catching Hook, which I’ve always loved and enjoy watching from time to time, on the boob tube a few weeks ago. After finishing it, I remembered that I was given the original Peter Pan novel for Christmas when I was around 9 years of age or so, and I decided to pick it up on my next trip to my parents’ house in Dallas. I downloaded all the other official Peter Pan films in the meantime for a watch, as I had never seen them before.

        One of the main reasons for this was that I was also curious as to how the stories differed in the film versions from the original story, and from each other. I found out they all varied greatly, especially in tone from the novel, except for Hook, which got it perfect. I’m not going to go into a comparison of the stories here, as that is not really important. All I’d really like to mention about the movies is that Disney’s 2002 “Return to Neverland” was a rather poor rip-off of the Hook plot line, and I didn’t really find it worth it. Disney has really lost its flair since The Lion King, IMO. “Walt Disney’s Peter Pan” (February 5, 1953) and “Peter Pan” (2003), however, were both well worth it.

        The main difference I was referring to between most of the movies and the novel is the heavy presence of a dark and sinister theme in the original book. The Lost Boys were just as cutthroat as the pirates, as it mentioned their frequent battles and killing each other in cold blood, and it even mentioned something to the extent of Peter Pan “thinning out the ranks” of the Lost Boys when their numbers got too large, IIRC. The mermaids were silent killers when they got the chance, and there was also talk of “fairy orgies”. I thought this was all great for a children’s book, as it didn’t concentrate on these aspects, but they were there to give a proper setting. It was a very interesting and fun read, but a far cry from the brilliant status it has been given, IMO. Makes me wonder what all the people out there who complain about Harry Potter would say if they gave this one a read. Oh, and the only thing Tinkerbelle pretty much ever says throughout the book is “You ass” :-).

        Speaking of Harry Potter, it came as a bit of a shock to me seeing Maggie Smith, who plays Minerva McGonagall in the Harry Potter movies, playing as Granny Wendy in Hook. She did an amazing job at looking decrepit.


        One final non-related note… the brief overhead view of Neverland island shown in Hook really reminded me of my Eternal Realms map.

        Always Confirm Potentially Hazardous Actions
        Also treat what others tell you with discretion

        So I have been having major speed issues with one of our servers. After countless hours of diagnosis, I determined the bottleneck was always I/O (input/output, accessing the hard drive). For example, when running an MD5 hash on a 600MB file, the load would jump up to 31 (with only 4 logical CPUs) and it would take 5-10 minutes to complete. When performing the same test on the same machine on a second drive, it finished within seconds.

        Replacing the hard drive itself is a last resort for a live production server, and a friend suggested the drive controller could be the problem, so I confirmed that the drive controller for our server was not on-board (it was on its own card), and I attempted to convince the company hosting our server of the problem so they would replace the drive controller. I ran my own tests first with an iostat check while doing a read of the main hard drive (cat /dev/sda > /dev/null). This produced steadily worsening results the longer the test went on, and always much worse than our secondary drive. I passed these results on to the hosting company, and they replied that a “badblocks -vv” produced results that showed things looked fine.

        So I was about to go run his test to confirm his findings, but decided to check the parameters first, as I always like to do before running new Linux commands. Thank Thor I did. The admin had meant to write “badblocks -v” (verbose) and typoed with a double key stroke. The two v’s looked like a w due to the font, and had I run a “badblocks -w” (write-mode test), I would have wiped out the entire hard drive.

        Anyways, the test outputted the same basic results as my iostat test, with throughput very quickly decreasing from a remotely acceptable level to almost nil. Of course, the admin only took the best results of the test, ignoring the rest.
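        For reference, a crude C stand-in for these sequential-read tests (the device path is just an example, and reading raw devices requires root):
        //Sequential-read throughput tester along the lines of "cat /dev/sda > /dev/null",
        //printing a running MB/s figure every 100MB read
        #include <stdio.h>
        #include <time.h>
        
        int main(int argc, char** argv)
        {
        	char Buffer[1<<16]; //Read in 64KB chunks
        	FILE* Drive=fopen(argc>1 ? argv[1] : "/dev/sda", "rb"); //Device (or file) to test
        	unsigned long long Total=0, NextReport=100<<20; //Bytes read so far, next reporting point
        	time_t Start=time(NULL);
        	size_t Got;
        	if(!Drive)
        	{
        		perror("open");
        		return 1;
        	}
        	while((Got=fread(Buffer, 1, sizeof(Buffer), Drive))>0)
        	{
        		Total+=Got;
        		if(Total>=NextReport) //Report every 100MB
        		{
        			double Secs=difftime(time(NULL), Start);
        			if(Secs>0)
        				printf("%lluMB in %.0fs = %.1fMB/s\n", Total>>20, Secs, (double)(Total>>20)/Secs);
        			NextReport+=100<<20;
        		}
        	}
        	fclose(Drive);
        	return 0;
        }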

        I had them swap out the drive controller anyways, and it hasn’t fixed things, so a hard drive replace will probably be needed soon. This kind of problem would be trivial if I had access to the server and could just test the hardware myself, but that is a price to pay for proper security at a server farm.

        Kill Bill TV Edits
        Too hot for TV

        I made the mistake of trying to watch “Kill Bill”, one of my favorite series of movies, on cable tonight. After suffering through commercials and some horrible edits, I decided I’d acquire a normal movie copy later on. The edits that were made to the movie so it could air on TV had me cracking up though. One example was in the long-term care hospital the protagonist was staying at, with the character “Buck” who “liked to fuck”. He had the word “FUCK” tattooed across one of his hand’s knuckles, and his car was named and branded as the “Pussy Wagon”. Since this kind of thing was obviously too much for TV audiences, anytime the word “fuck” was said, it was dubbed over with the word “party”, and his branded car and keychain that said “Pussy Wagon” were overlaid on the screen as “Party Wagon”. It was terribly obtrusive and silly, but it had me laughing at least.

        Final Fantasy and Chrono Trigger Cut Scenes
        I have too much video game nostalgia

        I am a big fan of many SquareSoft games, namely, Final Fantasy 4 (US2), Final Fantasy 6 (US3), and Chrono Trigger. I played all of these on the Super Nintendo many many years ago, and still replay them from time to time through emulator.

        I recently recalled that re-releases of these games on the PlayStation consoles included cut scenes, so I decided to look them up. I figured these would be of use to anyone in my boat who is a fan of the old school games but never got to see these.

        I included the original links to these videos, which contain author credits, in the title. All videos were found on YouTube, and of course, owned by SquareSoft.

        Chrono trigger all cutscenes Final Fantasy 4 Ending
        Final Fantasy 5 Ending Final Fantasy 6 Ending
        Final Fantasy IV DS Trailer (Subtitled in English)
        (CAN'T WAIT FOR THIS!)
        Chrono Trigger Gonzales robot song from OVA
        (The rest of the OVA is worthless, do not watch it!)

        [Edit on 6/14/2008 @ 5:35PM]
        Final Fantasy IV DS US English subtitled Trailer
        (New trailer)
        Text Message Storage Limits
        We need open source cell phones

        So I’ve been rather perturbed for a very long time at the 50/50 inbox/outbox limit of stored SMS text messages on all LG cell phones.  Other phones have similar limits; a Samsung I have is limited to 100/50, and it just erases messages when an overflow occurs, as opposed to the nice prompts on my LG VX9800, with its QWERTY keyboard, which I love.

        I have done some minor hacking on cell phones and tinkered with the firmware, but without a proper emulator, I would never be able to find out where the 50 cap is set and make a hack so phones could store more.


        So today, I was at a Verizon store [unimportant ordeal here] because I got a little bit of water on my LG phone and it was having issues.  Immediately after the spill, it had a bunch of problems including the battery thinking it was always charging, buttons on the front side sending two different buttons when pressed, and some other buttons not working.  I immediately set to shaking it out at all angles to get most of the water out (which there wasn’t much to begin with...), and then I thoroughly blow dried every opening into the inside circuitry.  This fixed everything but the worst problem, signal dropping.  Basically, the phone would lose any connection it made after about 5 seconds, so I couldn’t really answer or makes calls.  Fortunately I was still able to send and receive SMS messages, but received ones didn’t signal the server they were received, and I kept receiving them over and over and over until a connection finally stayed open long enough to tell the server I got it.
        So I took it back to the store to see if they could fix it, and all they tried was updating the firmware... but they said I could trade it in for another phone for $50, which I figured from the beginning is what I would have to do, and was a good idea anyways because of this [temporarily down].
        So they realized they had no replacements in stock... or at the warehouse... for the VX9800 OR the VX9900, which they said they’d upgrade me to if they couldn’t find a VX9800, and which I wanted (yay).  So I was told to call back tomorrow and try again.  Bleh.  Anyways, while I was at the store, I found out why the storage limit exists.  Apparently, cell phones start slowing down considerably with too many stored SMSs.  I was told of a lady that had come in the previous week with 600+ stored messages; the phone took very long intervals to do anything, and clearing them out fixed it.

        I know that, on my phone at least, each SMS message is stored as a separate file, so my best guess as to the reason for this problem is that this creates too many entries in the file system for the phone to handle.  This seems like a rather silly and trivial problem to work around, but the cell phone manufacturers can get away with it, as they have no good competitors who fix problems like this.


        This is why we really need open-source cell phones.  There has been word of open-source phones in the works for years... but nothing too solid yet :-\.


        So ANYWAYS, I had already started taking a different approach in early January to fix the problem of backing up SMS messages without having to sync them to your computer, which is a rather obnoxious workaround.  I had been researching and planning to write a BREW application that extracts all SMS messages into a text file on your phone, so that you don’t have to worry about the limits and could download them to your computer whenever you wanted, with theoretically thousands of SMS messages archived on your phone.  Unfortunately, as usual, other things took over my time and the project was halted, but I will probably be getting back to it soon.
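        
        The heart of that tool would be simple; something like the following sketch, where EnumNextSmsPath is a made-up placeholder for however the phone actually exposes its per-message files (the real version would enumerate them through BREW’s IFileMgr interface):
        #include <stdio.h>
        
        const char* EnumNextSmsPath(void) //Hypothetical placeholder: returns the next stored SMS file path, or NULL when done
        {
        	static const char* Paths[]={"sms/inbox001.dat", "sms/inbox002.dat", NULL}; //Fake paths for illustration
        	static int On=0;
        	return Paths[On] ? Paths[On++] : NULL;
        }
        
        void ArchiveSms(const char* ArchivePath) //Append every stored message file to one archive file
        {
        	FILE* Archive=fopen(ArchivePath, "ab"); //Append so the archive keeps growing across runs
        	const char* SmsPath;
        	char Buffer[1024];
        	size_t Got;
        	if(!Archive)
        		return;
        	while((SmsPath=EnumNextSmsPath())!=NULL)
        	{
        		FILE* Sms=fopen(SmsPath, "rb");
        		if(!Sms)
        			continue;
        		while((Got=fread(Buffer, 1, sizeof(Buffer), Sms))>0) //Copy the raw message contents
        			fwrite(Buffer, 1, Got, Archive);
        		fputc('\n', Archive); //Separate messages
        		fclose(Sms);
        	}
        	fclose(Archive);
        }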

        Video driver woes
        TV output issues

        So I’ve recently switched over to an old Geforce4 Ti 4600 for TV output on my home server/TV station. Unfortunately, my TV needs output resizing (underscan) due to being dropped a long ways back during transport from a Halo game, and the CRT output is misaligned.

        If I recall, old Nvidia drivers allowed output resizing, but the latest available ones that work for my card (which are rather old themselves, as NVidia stops supporting old cards with the newer driver sets that have more options) only allow repositioning of the output signal, so part of the screen is cut off.

        The final solution was to tell VLC media player to output videos at 400:318 aspect ratio when in full screen to force a smaller width that I could then reposition to properly fit the screen. A rather inelegant solution, but it works. One of these days I’ll get myself a new TV :-).

        Truecrypt 5.0 tribulations
        Adopting programs at release is often a bad idea

        Just as is the case with Windows, where you never install before at least the first service pack is released, so it seems is the case with TrueCrypt.


        TrueCrypt is open source, which is a major plus, and in my opinion, the best solution for encrypting data.  In a nutshell, TrueCrypt allows the creation of encrypted “container files” that when mounted act as a hard drive partition, accessible through a password and/or a key file.  The encryption, security, and speed are all top notch and the program runs completely transparent to the user after volume mounting, so I would highly recommend the program to anyone that has anything at all to hide :-).

        It also has some other useful options like the ability to encrypt USB flash cards for opening at other locations without having TrueCrypt installed, and “hidden container files”, in which a second hidden volume is contained within the same container, unlockable by a separate password/key file, which is great for plausible deniability.  I have always been a fan of TrueCrypt since I first found and adopted it years ago, and would highly recommend it.


        Unfortunately, TrueCrypt 5.0, which was just released a few days ago, does not yet meet quality standards.  It does all the old stuff it used to of course, and adds some great new features, but the multiple bugs I have found are forcing me to revert to an older version of it, and back to other 3rd party applications I have been using for other types of encryption.


        The new feature, which I’ve been looking forward to for ages, is pre-boot authentication volume encryption, which basically means encrypting 100% of the hard drive (partition) that contains Windows (or another OS) on it, so you only have to put in your password during boot, and EVERYTHING is encrypted and safe, and impossible (by today’s standards) to access before the password is put in.  This is especially important for laptops due to the increased likelihood of them falling into others’ hands through loss or theft.  Unfortunately, full volume encryption has broken 2 things: the ability to put my laptop into hibernation (which was also a problem with other volume encryption programs I’ve tried in the past), and oddly enough, my audio drivers, so I have no sound XD.  So, I’m reverting back to BestCrypt Volume Encryption [v1.95.1], which I’ve also been using for quite a while, that does the same thing but allows hibernation.  My only beefs with it are that it’s closed source, something that isn’t usually a problem in my book but is in this case [security], and that hibernation is SLOW, probably due to the fact that it can no longer use DMA, since data needs to pass through the CPU for encryption.  Another, technically not so important, feature TrueCrypt doesn’t include yet, which most other volume encryption pre-boot authentication packages include, is customized boot password prompt screens.  I’ve included my incredibly dorky screens (for BestCrypt Volume Encryption) below :-D.

        The other thing that is broken, oddly enough, forcing me to revert to TrueCrypt 4.3a, is that I can’t mount containers over a network anymore through Windows File and Print Sharing :-\.  Ah well, hopefully they’ll get these things fixed soon enough.



        My boot password prompt, and no, I will not explain it, except that DarkSide was my previous computer handle a very good number of years ago.
        My Boot Prompt

        A boot prompt I made for a female friend, weeee, ASCII art ^_^;.
        Friend’s Boot Prompt

        And for reference, the ASCII chart.
        ASCII chart
        Note that when creating a screen for BestCrypt Volume Encryption, the characters 0x08 0x09 0x0A 0x0D are all invalid. The “&” is used to place the password prompt.

        One other volume encryption program I tried, which was just about as good, though I do not recall if it allowed hibernation, was DriveCrypt Plus Pack [v3.90G]. It also allowed bitmaps [pictures] for the boot password prompt screen.
        Internet Explorer Identity Crisis
        It just wants to think it’s Firefox
        Does anyone else find it odd that IE reports itself as ‘Mozilla’ if you access the navigator.appCodeName variable? You can test this out by putting the following in your browser as the URL: javascript:alert(navigator.appCodeName). You could also check out this script, where I noticed this, which reports all the information about you that can be found out through going to a web page, accessible via JavaScript/PHP.
        GTO (and other TV series)
        When too much of a good thing is a bad thing

        I have been a very long time fan of the anime series GTO (Great Teacher Onizuka), though I have only ever owned and seen the first 4 of its 10 DVDs.  The series is heavily geared towards adolescent males (shonen) and has its moments of immaturity, but it’s still a great romantic comedy, with the romantic part paling next to the comedy.


        So I very recently acquired the rest of the series, and really wish I had just left off at the fourth DVD (19th episode), where the series planning obviously ended. Up to that point, it was very strongly plot driven, with character development as the primary outlet.  It then turned into entirely filler content with a very loose and unrealistic plot.  The series was actually following the manga (comic) plot line through episode 14, when it bypassed the manga’s timeline.  But really, I couldn’t believe how everything past that point was just so much a waste of time.  How people can turn such things of beauty (not necessarily the series visually, but the storyline...) into utter rubbish so quickly always catches me off guard, though I know I should be used to it by now.


        Extending series past their originally planned plot lines and churning out utter crap is a very common problem among television shows, and especially in anime, as the Japanese have a way of carrying things on for way too long.  Think Mario, Zelda, Pokemon, and Power Rangers; those are just a few examples of long-standing Japanese IPs that actually made it to America.  Americans may have a way of milking things for all they are worth for profit, but the Japanese not only have extra profit as a driving force, but also incredibly obsessive fan bases (otaku) demanding more content.


        Some other examples of this I have to mention off the top of my head are:
        • Nadia - See previous post for more information
        • Kodomo no Omocha (Kodocha), a SUPER girly (shojo) anime and another of my favorite series, is 100% plot-driven excellence.  Up through episode 19, which I believe to be the true ending of Season 1, the multitudes of brilliantly interweaving story arcs are breathtaking and moving. From that point, it continued on for another 83 episodes (102 total), of which I have only seen through episode 44. While the general series worthiness seriously degrades at this turning point, it is still a lot of super-hyper-spastic fun.
        • Full Metal Alchemist, yet another of my favorite series, is actually an example of this problem NOT happening, though it happens in a different form.  The series has a strong plot-driven and well-organized vibe that makes me believe the original 51 episodes were mostly planned out from the start, but a few inconsistencies between early and late episodes make me not entirely sure. The problem comes in the form of the movie, which I felt was a complete waste of time to watch. I will expand upon this in the future.
        • The Simpsons, which really should have ended in season 3 (what I like to call “Classic Simpsons”), turned into utter babbling rubbish somewhere in seasons 7-10. It was initially a very intriguing show, with witty characters (yes, Homer was, in a manner, quite witty) and plots, but unfortunately, the series degraded by pushing the characters’ stereotypes way too far, making them boring, repetitive, and predictable, and repeating the same basic plots and jokes time and time again.
        • And finally, Stargate SG1, which needed to end in Season 7 when the Goa’uld were pretty much defeated, is still harboring a bastard child known as Stargate Atlantis. While the shows may still have some basic entertainment value, they are mere husks of their former glory.
        Windows 98
        Nostalgia mode
        So I just plopped in an old Win98 CD (in this case SP2) to grab the QBasic files off of it for the Languages and Libraries page.  I started browsing through the CD, and thought to myself “OMG... win98!”, heh. So I installed it, and wow, am I ever in super nostalgia mode.

        Things I now take for granted that were major pains in the pre-XP days (well, pre-NT-kernel days...):
        • Getting non-modem LAN connections on the internet: Win98 expected people to connect to the internet via phone modems, as broadband was still mostly unheard of then. The “Windows Connection Wizard” was a pain in the butt, and you had to know just the right place to go to get it to recognize a NIC as a valid connection to the internet.
        • Shutting down windows improperly: If you failed to turn off the computer through the proper “Shut Down” method, the FAT file systems did not have certain types of safeguards that NTFS does, and the computer was forced to do a ScanDisk on startup. A ScanDisk is also run the first time windows starts after install, and seeing this old piece of software really gave me a warm fuzzy feeling... or was it a feeling of utter nausea?
        • RAM allocation: The DOS-line kernels of windows never properly kept track of memory from applications, and memory leaks in applications STAYED memory leaks after the programs shut down, so RAM could very quickly get eaten up. Programs called “RAM scrubbers” were around to detect these memory leaks and free them.
        • Themes: Most people don’t know that windows themes actually originated with the Microsoft Plus! for Windows 95 software package (I could have sworn it was originally called Windows Plus!... need to find my original CD), which also first introduced the ever-popular and addicting Space Cadet Pinball (check the games folder that comes installed in XP). Most Plus! options were usually integrated straight into later Windows versions or updates. I have included below all the themes that came with Windows 98 SE for nostalgic value :-). Enjoy!

          Speaking of games, it seems 98SE also included FreeCell... I wasn’t aware it was that old. I think the “Best of Windows Entertainment Pack” (with “Chip’s Challenge”, “Golf”, “Rodent’s Revenge”, “Tetris”, “SkiFree”, and some other fun games) also originally came on the Plus! CDs, but am not sure of this. I believe the Best Pack also came with the CD packs bundled with new computers from Packard Bell (and maybe some other manufacturers) for 2 or 3 years in the mid 90s; those packs also included the first game of one of my most favorite game series ever, Journey Man, as well as Microsoft Encarta, Britannica, a cookbook CD, and a do-it-yourself book CD. Good times!!!
        • Calendar: The calendar only displayed 2 digits for the year instead of 4... does this mean Microsoft was expecting everyone to switch from 98 immediately when their next OS (Windows ME [heh] or 2K) came out? See “The Old New Thing” for another interesting problem of the windows calendar of old.
        Things that made me laugh:
        • The first question asked during install was “You have a drive over 512 MB in size, would you like to enable large disk support?”
        • All the 3d screensavers were OpenGL. Though DirectX was out at that point, it was still in a state of sheer-crappiness so Microsoft still used OpenGL, which it wouldn’t be caught dead using nowadays ^_^.
        • During install, there were lots of messages touting the operating system’s features, including “By converging real-time 2d and 3d graphics ... *MMX is a trademark of Intel Corporation”. It just made me smile knowing that MMX was once so new that Microsoft had to put a trademark notice like that.
        • Internet Explorer (5.0) started up at MSN.com already... which immediately crashed the browser! hehe
        • The windows update website informed me as follows: “Important: End of Support for Windows 98 and Windows ME
          Effective July 11, 2006, support for Windows 98, Windows 98 Second Edition and Windows ME (and their related components) will end. Updates for Windows 98 and Windows ME will be limited to those updates that currently appear on the Windows Update website.”
        Things that I miss:
        • The emotion behind the OS. For some reason, Windows 98 and 95 always had... a warmness to them that 2K/XP never had. I’m not sure why... but the newer operating systems always had such a stiff and corporate feeling to them.
        • Winipcfg! Now I am forced to go to the darn command prompt to do it via ipconfig (which was available back then too), which is a pain when you have too many NICs and it scrolls the console window, or when trying to help someone find their IP address or MAC address.
        • Restart in MS-DOS mode! Man do I ever miss that. Especially for playing original DOOM. Good ’ol 640k ^_^. The 3.x/95/98 kernels were really based upon DOS so it was valid to have a DOS only mode, but there’s nothing stopping them from including it on newer computers... well, except that DOS didn’t support NTFS, I guess... so it would be confusing. Ah well.
        • FAST load time. If I recall, Win98 always loaded leaps and bounds faster than XP... it probably has to do with drivers.


        Themes: (Owned by Microsoft?)
        Baseball, Dangerous Creatures, Inside Your Computer, Jungle, Leonardo da Vinci, More Windows, Mystery, Nature, Science, Space, Sports, The 60’s USA, The Golden Era, Travel, Underwater, Windows 98, Windows Default

        Baseball:
        Baseball Theme


        Dangerous Creatures:
        Dangerous Creatures Theme


        Inside Your Computer:
        Inside Your Computer Theme


        Jungle:
        Jungle Theme


        Leonardo da Vinci:
        Leonardo da Vinci Theme


        More Windows:
        More Windows Theme


        Mystery:
        Mystery Theme


        Nature:
        Nature Theme


        Science:
        Science Theme


        Space:
        Space Theme


        Sports:
        Sports Theme


        The 60’s USA:
        The 60’s USA Theme


        The Golden Era:
        The Golden Era Theme


        Travel:
        Travel Theme


        Underwater:
        Underwater Theme


        Windows 98:
        Windows 98 Theme


        Windows Default:
        Windows 98 Default Theme
        The god complex
        Doctors are people too

        Oops, life kind of hit me like a ton of bricks the last few days and I haven’t had time to get much done.  It didn’t help that I had a 72 hour straight run of wakefulness, then slept for about 24 hours straight :-).  *Shakes fist at certain medications*.  But now to continue on the second section of my previous medical post...


        Medical science has come a very long way in the last 100 years, making large, important jumps all the time, but there is still a very, very long way to go.  The “purpose” of the appendix was only “officially” found not too long ago, and if something that simple took that long to figure out...  But anyways, most of where we are in medicine still involves a lot of guessing and fuzzy logic.  While we do know many things for certain, diagnosing is still more often than not guesswork based on what the patient can describe.  Even when we know what the problem is, we often aren’t sure of the definite cause, and without that, we can only make educated guesses at how to treat it.  Sometimes we even have the knowledge to diagnose a problem, but doing so may be too expensive (in a time+effort vs. gains manner), or possibly too early in its developmental stages and not yet considered proper.  Then again, sometimes we do have the answers, but they are being withheld for “evil” purposes.  Anyways, I have 4 stories I’d like to share today on this topic to drive my point home.


        First, I’ll get my own story out of the way.   A couple of years back, my appendix burst; I assumed it was just my IBS, as stated in my previous post.   Two days afterwards, I went to the doctor, and we specifically said we wanted to rule out appendicitis as a cause, so they took an x-ray, which somehow turned up negative... so I was diagnosed with constipation, which my mother had often suggested it must be.

        So on the way out of the office, stepping out of the door, I stopped and asked the doctor if they could take a blood sample so I could see how my cholesterol was doing (I’ve been fighting high cholesterol for a long time; the medication I take for it works wonders), and they did.  So I took some laxatives, and 3 days later I was still in lots of pain, with lots of other problems.  The call from the doctor came in the middle of that Monday (having gone to the doctor mid-Friday), right before I was about to call them back, and I was instructed to go straight to the hospital, as my (white?) blood cell count was super high.  Thank Thor I asked them.

        So I go to the hospital, they do a few tests, one involving drinking a liter of a liquid that tasted like chalk beforehand, which I had to do once on a return visit too, and they come back and tell me my appendix had burst, and somehow, miraculously, I wasn’t dead due to a pocket forming and containing the toxin, and I was to go into surgery within hours.  Obviously, everything went relatively well, as I am still here.

        There was one really painful night, though, with a temperature so high that I was apparently hallucinating; I don’t remember it.  So I got out of the hospital after a week... and then immediately went back in that night due to a bacterial infection and was on antibiotics for another week.  At least I didn’t need morphine (ah gee...) that second week.

        On a more silly note, right before going into surgery, I jokingly asked my female surgeon how long it would take, as I had to log into my computer every (5?) hours for security or it would erase all my porn (or something like that).  Well, the poor naive doctor took it seriously, and literally turned as red as an apple, at which point I had to rescind my statement and explain I was just joking ^_^;.


        Second story is much more recent.  I can’t go into details, but a friend of mine was at the hospital with some stomach problems, and the doctors came back with congratulations, in that she was pregnant.  After finally convincing them that she could not possibly be pregnant and was pretty sure she wasn’t carrying the reincarnation of Jesus, they did more tests and found out it was a rather nasty cyst in her (uterus?); good job, doc(s)... so she had it removed.  Very soon after, when the bloodwork came back, they determined what type of cancer it was... so she’s been in very aggressive therapy since.


        The next story has been a long-time upset of mine.  A female cousin of mine, who has always been as sweet as can be, contracted Lyme disease.  This in and of itself wouldn’t normally have been a problem, except that she and her parents had to go doctor hopping for well over a year to finally get it properly diagnosed. By that advanced stage, it was too late to treat it properly with no aftereffects, so she has lost most of the last 5+ years of her life to the disease and the incredible lethargy and problems it causes.

        They have been trying many many ways to cure the problem, and are finally hopeful at a new possible solution they’ve found.  I hope to Thor it works out and she can start living her life to the fullest again; which actually parallels the next story quite well.


        I saved this one for last because it involves a celebrity :-).  Scott Adams, creator/artist of the Dilbert comic strip, had been afflicted for a few years with Spasmodic Dysphonia, which causes an inability to speak in certain situations.  After going through the prescribed medical procedure, which involves long needles several times per year for the rest of your life, he finally found a doctor with a very high success rate of curing the illness, and it worked for him too.

        Apparently, the pharmaceutical industry shuts out any info it can about the proper treatment, as it makes fistfuls of money peddling its very expensive temporary Botox treatments that often don’t work well, or at all.


        Long story short, our medical industry has a long way to go before I consider it a true science, the first step being saving it from the grip of the pharmaceutical giant.



        Scott Adams’ Blog Posts:
        Good News Day (October 24, 2006): Original Post, Archive

        Good News Day

        As regular readers of my blog know, I lost my voice about 18 months ago. Permanently. It’s something exotic called Spasmodic Dysphonia. Essentially a part of the brain that controls speech just shuts down in some people, usually after you strain your voice during a bout with allergies (in my case) or some other sort of normal laryngitis. It happens to people in my age bracket.

        I asked my doctor – a specialist for this condition – how many people have ever gotten better. Answer: zero. While there’s no cure, painful Botox injections through the front of the neck and into the vocal cords can stop the spasms for a few months. That weakens the muscles that otherwise spasm, but your voice is breathy and weak.

        The weirdest part of this phenomenon is that speech is processed in different parts of the brain depending on the context. So people with this problem can often sing but they can’t talk. In my case I could do my normal professional speaking to large crowds but I could barely whisper and grunt off stage. And most people with this condition report they have the most trouble talking on the telephone or when there is background noise. I can speak normally alone, but not around others. That makes it sound like a social anxiety problem, but it’s really just a different context, because I could easily sing to those same people.

        I stopped getting the Botox shots because although they allowed me to talk for a few weeks, my voice was too weak for public speaking. So at least until the fall speaking season ended, I chose to maximize my onstage voice at the expense of being able to speak in person.

        My family and friends have been great. They read my lips as best they can. They lean in to hear the whispers. They guess. They put up with my six tries to say one word. And my personality is completely altered. My normal wittiness becomes slow and deliberate. And often, when it takes effort to speak a word intelligibly, the wrong word comes out because too much of my focus is on the effort of talking instead of the thinking of what to say. So a lot of the things that came out of my mouth frankly made no sense.

        To state the obvious, much of life’s pleasure is diminished when you can’t speak. It has been tough.

        But have I mentioned I’m an optimist?

        Just because no one has ever gotten better from Spasmodic Dysphonia before doesn’t mean I can’t be the first. So every day for months and months I tried new tricks to regain my voice. I visualized speaking correctly and repeatedly told myself I could (affirmations). I used self hypnosis. I used voice therapy exercises. I spoke in higher pitches, or changing pitches. I observed when my voice worked best and when it was worst and looked for patterns. I tried speaking in foreign accents. I tried “singing” some words that were especially hard.

        My theory was that the part of my brain responsible for normal speech was still intact, but for some reason had become disconnected from the neural pathways to my vocal cords. (That’s consistent with any expert’s best guess of what’s happening with Spasmodic Dysphonia. It’s somewhat mysterious.) And so I reasoned that there was some way to remap that connection. All I needed to do was find the type of speaking or context most similar – but still different enough – from normal speech that still worked. Once I could speak in that slightly different context, I would continue to close the gap between the different-context speech and normal speech until my neural pathways remapped. Well, that was my theory. But I’m no brain surgeon.

        The day before yesterday, while helping on a homework assignment, I noticed I could speak perfectly in rhyme. Rhyme was a context I hadn’t considered. A poem isn’t singing and it isn’t regular talking. But for some reason the context is just different enough from normal speech that my brain handled it fine.

        Jack be nimble, Jack be quick.
        Jack jumped over the candlestick.

        I repeated it dozens of times, partly because I could. It was effortless, even though it was similar to regular speech. I enjoyed repeating it, hearing the sound of my own voice working almost flawlessly. I longed for that sound, and the memory of normal speech. Perhaps the rhyme took me back to my own childhood too. Or maybe it’s just plain catchy. I enjoyed repeating it more than I should have. Then something happened.

        My brain remapped.

        My speech returned.

        Not 100%, but close, like a car starting up on a cold winter night. And so I talked that night. A lot. And all the next day. A few times I felt my voice slipping away, so I repeated the nursery rhyme and tuned it back in. By the following night my voice was almost completely normal.

        When I say my brain remapped, that’s the best description I have. During the worst of my voice problems, I would know in advance that I couldn’t get a word out. It was if I could feel the lack of connection between my brain and my vocal cords. But suddenly, yesterday, I felt the connection again. It wasn’t just being able to speak, it was KNOWING how. The knowing returned.

        I still don’t know if this is permanent. But I do know that for one day I got to speak normally. And this is one of the happiest days of my life.

        But enough about me. Leave me a comment telling me the happiest moment of YOUR life. Keep it brief. Only good news today. I don’t want to hear anything else.



        Voice Update (January 14, 2007): Original Post, Archive

        Voice Update

        No jokes today on “serious Sunday.”

        Many of you asked about my voice. As I’ve explained in this blog, about two years ago I suddenly acquired a bizarre and exotic voice problem called a spasmodic dysphonia. I couldn’t speak for about 18 months unless I was on stage doing my public speaking, or alone, or singing. The rest of the time my vocal cords would clench and I could barely get out a word.

        Other people with this condition report the same bizarre symptoms. We can also often speak perfectly in funny British accents but not in our own voices. We can speak after we have laughed or yawned. Sometimes it helps to pinch our noses or cover our ears. I found I can talk okay if I stretch my head back and look at the ceiling or close my eyes. And we can all sing and hum just fine.

        It looks like a whacky mental problem, except that it comes on suddenly and everyone has a similar set of symptoms regardless of their psychological situation at the time. (It’s not as if we all have postpartum depression or just got back from war.)

        The only widely-recognized treatment involves regular Botox shots through the front of the neck and directly into the vocal cords. But because the Botox takes some time to reach full impact, then immediately starts to wear off, you only have your best voice about half of that time. And the shots themselves are no picnic. I was hoping for a better solution, especially since I couldn’t do my public speaking after Botox injections because it weakened my voice too much to project on stage.

        One day, long after the last Botox shot had worn off, I was repeating a nursery rhyme at home. I found that I could speak a poem fairly well even though I couldn’t speak a normal sentence. Suddenly something “clicked” in my brain and I could speak perfectly. Just like that. It was amazing.

        [Note: I doubt the choice of poem had anything to do with it, but it was Jack Be Nimble.]

        Many of you asked if it lasted. It did last, for several days. Then I got a cold, my throat got funky, I had to speak different because of the cold, and lost it. After the cold wore off, it took a few weeks to get back to my current “okay” voice.

        At the moment I can speak okay most of the time in quiet conversation. In other words, if there is no background noise, I can talk almost as if I never had the problem. That’s a HUGE improvement over the past.

        But I still can’t speak in noisy environments. That’s common with this condition, and it has nothing to do with the need to speak loudly to talk over the noise. It has something to do with the outside sound coming into my brain and somehow disabling my speech function. If I cover my ears, I can speak almost normally.

        Unfortunately for me, the world is a noisy place. So outside of conversations with my family at home, I still can’t have a normal conversation.

        Today I am flying to Los Angeles to spend a week with Dr. Morton Cooper. He claims to be able to cure this problem completely – in many if not most cases – using his own brand of intensive voice exercises and feedback. I’ve communicated directly with several people who say that he did indeed fix their voices. The medical community’s reaction to his decades of curing this problem is that they say each of his cures is really just a case of a person who was misdiagnosed in the first place, since spasmodic dysphonia is incurable BY DEFINITION. But many of his cures have involved patients referred by the top specialists in the field of spasmodic dysphonia. So if they are all misdiagnosed, that would be a story in itself. Maybe I’m lucky enough to be misdiagnosed too.

        I’m not sure how much blogging I will be able to do this week. I’ll let you know at the end of the week just how it went. It’s not a sudden cure, and would involve continued voice exercises to speak in the "correct" way, but I am told to expect significant progress after a week.

        Wish me luck.



        Voice Update [2] (January 21, 2007): Original Post, Archive

        Voice Update

        As regular readers know, about two years ago I lost my ability to speak. The problem is called spasmodic dysphonia (SD). This update is primarily for the benefit of the other people with SD. Many of you asked about my experience and for any advice. The rest of you will find this post too detailed. Feel free to skip it.

        First, some background.

        There are two types of spasmodic dysphonia.

        Adductor: The vocal cords clench when you try to speak, causing a strangled sound. (That is my type.)

        Abductor: The vocal cords open when you try to speak, causing a breathy whisper.

        You can get more complete information, including hearing voice clips, at the National Spasmodic Dysphonia Association (NSDA) web site: http://www.dysphonia.org/

        The NSDA site describes the two medical procedures that are recommended by medical doctors:

        1. Botox injections to the vocal cords, several times per year for the rest of your life.

        2. Surgery on the vocal cords – a process that only works sometimes and has the risks of surgery.

        What you won’t find at that site is information about Dr. Morton Cooper’s method of treating spasmodic dysphonia, using what he calls Direct Voice Rehabilitation. I just spent a week with Dr. Cooper. Dr. Cooper has been reporting “cures” of this condition for 35 years. He’s a Ph.D., not an MD, and possibly the most famous voice doctor in the world.

        According to Dr. Cooper, the NSDA receives funding from Allergan, the company that sells Botox. Dr. Cooper alleges, in his new self-published book, CURING HOPELESS VOICES, that Allergan’s deep pockets control the information about spasmodic dysphonia, ensuring that it is seen as a neurological condition with only one reliable treatment: Botox. I have no opinion on that. I’m just telling you what Dr. Cooper says.

        Botox shots are expensive. Your health insurance would cover it, but I heard estimates that averaged around $2,500 per shot. I believe it depends on the dose, and the dose varies for each individual. Each person receiving Botox for spasmodic dysphonia would need anywhere from 4 to 12 shots per year. Worldwide, Dr. Cooper estimates that millions of people have this condition. It’s big money. (The “official” estimates of people with SD are much lower. Dr. Cooper believes those estimates are way off.)

        I have no first-hand knowledge of Allergan’s motives or activities. I can tell you that Botox worked for me. But it only gave me a “good” voice about half of the time. Individual results vary widely. Even individual treatments vary widely. I think I had about 5 treatments. Two were great. Two were marginal. One didn’t seem to help much. And the shots themselves are highly unpleasant for some people (but not very painful).

        I’ve heard stories of people who feel entirely happy with Botox. For them, it’s a godsend. And I’ve heard stories of people who had okay results, like mine. Dr. Cooper says that people with the abductor type of dysphonia can be made worse by Botox. I know one person with the abductor type who lost his voice completely after Botox, but temporarily. Botox wears off on its own. It’s fairly safe in that sense.

        I can tell you that Dr. Cooper’s method worked for me, far better than Botox. (More on that later.) And you can see for yourself that the NSDA web site doesn’t mention Dr. Cooper’s methods as an option. It doesn’t even mention his methods as something that you should avoid. It’s conspicuous in its absence.

        Dr. Cooper claims that spasmodic dysphonia is not a neurological problem as is claimed by the medical community. He claims that it is caused by using the voice improperly until you essentially lose the ability to speak correctly. Most people (including me) get spasmodic dysphonia after a bout with some sort of routine throat problem such as allergies or bronchitis. The routine problem causes you to strain your voice. By the time the routine problem is cleared up, you’ve solidified your bad speaking habits and can’t find your way back. Dr. Cooper’s methods seek to teach you how to speak properly without any drugs or surgery.

        Some people get spasmodic dysphonia without any obvious trigger. In those cases, the cause might be misuse of the voice over a long period of time, or something yet undiscovered.

        Botox Versus Dr. Cooper
        -------------------------------

        Botox worked for me. It was almost impossible for me to have a conversation, or speak on the phone, until I got my first Botox shot.

        But I had some complaints with the Botox-for-life method:

        1. Botox made my voice functional, but not good. There was an unnatural breathiness to it, especially for the week or two after the shot. And the Botox wore off after several weeks, so there was always a period of poor voice until the next shot.

        2. It looked as if I would need up to ten shots per year. That’s ten half days from my life each year, because of travel time. And the dread of the shot itself was always with me.

        3. The shots aren’t physically painful in any meaningful way. But you do spend about a minute with a needle through the front of your throat, poking around for the right (two) places in the back of your throat. Your urges to cough and swallow are sometimes overwhelming, and that’s not something you want to do with a needle in your throat. (Other people – maybe most people – handle the shots without much problem.)

        4. I couldn’t do public speaking with my “Botox voice.” It was too weak to project on stage. People with spasmodic dysphonia can often sing and act and do public speaking without symptoms. That was my situation. Public speaking is a big part of my income.

        I used Botox to get through the “I do” part of my wedding in July of 2006. Then I took a break from it to see if I could make any gains without it. My voice worsened predictably as the last Botox shot wore off. But it stopped getting worse at a “sometimes okay, often bad” level that was still much better than the pre-Botox days.

        I could speak almost perfectly when alone. I could speak well enough on stage. I could sing. About half of the time I could speak okay on the phone. In quiet conversations I was okay most of the time. But I could barely speak at all if there was any background noise.

        Do you know how often you need to talk in the presence of background noise? It’s often. And it wasn’t just a case of trying to speak over the noise. There’s something mysterious about spasmodic dysphonia that shuts off your ability to speak if there is background noise.

        As I wrote in a previous post, one day I was practicing my speaking with a nursery rhyme at home. Something happened. My normal voice returned. It happened suddenly, and it stuck. The media picked up the story from my blog and suddenly it was national news.

        My voice stayed great until I caught a cold a few weeks later. The cold changed my speaking pattern, and I regressed. With practice, I brought it back to the point where I could have quiet conversations. But I was still bedeviled by background noise and sometimes the phone. Despite my lingering problems, it was still amazing that anyone with spasmodic dysphonia would have that much of a spontaneous recovery. I’ve yet to hear of another case. But it wasn’t good enough.

        After the media flurry, I got a message from Dr. Cooper. He listened to me on the phone, having an especially bad phone day, and he said he could help. I listened to his spiel, about how it’s not really a neurological problem, that he’s been curing it for years, and that the medical community is in the pocket of Allergan.

        Dr. Cooper is what can be described as a “character.” He’s 75, has a deep, wonderful voice, and gives every impression of being a crackpot conspiracy theorist. His price was $5K per week, and he reckoned from my phone voice that I needed at least a week of working with him, with a small group of other spasmodic dysphonia patients. Two weeks of work would be better. (The hardcore cases take a month.) I would have to fly to LA and live in a nearby hotel for a week. So it’s an expensive proposition unless you can get your insurance to pay for it. (Sometimes they do if you have a referral from a neurologist.)

        Needless to say, I was skeptical. Dr. Cooper sent me his DVD that shows patients before and after. I still wasn’t convinced. I asked for references. I spoke with a well-known celebrity who said Dr. Cooper helped him. I heard by e-mail from some other people who said Dr. Cooper helped them.

        You can see video of before and after patients on his web site at: http://www.voice-doctor.com/

        I figured, What the hell? I could afford it. I could find a week. If it didn’t work after a few days, I could go home.

        With Dr. Cooper’s permission, I will describe his theory and his treatment process as best I can.

        THEORY
        ------------

        People with spasmodic dysphonia (SD) can’t hear their own voices properly. Their hearing is fine in general. The only exception is their own voices. In particular, SD people think they are shouting when they speak in a normal voice. I confirmed that to be true with me. I needed three other patients, Dr. Cooper, a recording of me in conversation, and my mother on the telephone to tell me that I wasn’t shouting when I speak normally. It has something to do with the fact that I hear my own voice through the bones in my head. In a crowded restaurant, if I speak in a voice to be heard across the table, I am positive it can be heard across the entire restaurant.  Most SD patients have this illusion.

        People with SD speak too low in the throat, because society gives us the impression that a deep voice sounds better. Our deep voice becomes so much a part of our self image and identity that we resist speaking in the higher pitch that would allow us to speak perfectly. Moreover, SD people have a hugely difficult time maintaining speech at a high pitch because they can’t hear the difference between the higher and lower pitch. Again, this is not a general hearing problem, just a problem with hearing your own voice. I confirmed that to be true with me. When I think I am speaking like a little girl, it sounds normal when played back on a recording.

        (People with abductor SD are sometimes the opposite. They speak at too high a pitch and need to speak lower. That doesn’t seem to be a societal identity thing as much as a bad habit.)

        Since SD people can’t “hear” themselves properly, they can’t speak properly. It’s similar to the problem that deaf people have, but a different flavor. As a substitute for hearing yourself, Dr. Cooper’s voice rehabilitation therapy involves intensive practice until you can “feel” the right vibration in your face. You learn to recognize your correct voice by feel instead of sound.

        People with SD breathe “backwards” when they talk. Instead of exhaling normally while talking, our stomachs stiffen up and we stop breathing. That provides no “gas for the car” as Dr. Cooper is fond of saying. You can’t talk unless air is coming out of your lungs. I confirmed this to be true for all four patients in my group. Each of us essentially stopped breathing when we tried to talk.

        The breathing issue explains to me why people with SD can often sing, or in my case speak on stage. You naturally breathe differently in those situations.

        DR. COOPER’S METHOD
        ----------------------------------

        He calls it Direct Voice Rehabilitation. I thought it was a fancy marketing way of saying “speech therapy,” but over time I came to agree that it’s different enough to deserve its own name.

        Regular speech therapy – which I had already tried to some degree – uses some methods that Dr. Cooper regards as useless or even harmful. For example, a typical speech therapy exercise is to do the “glottal fry” in your throat, essentially a deep motorboat type of sound. Dr. Cooper teaches you to unlearn using that part of the throat for ANYTHING because that’s where the problem is.

        Regular speech therapy also teaches you to practice the sounds that give you trouble. Dr. Cooper’s method involves changing the pitch and breathing, and that automatically fixes your ability to say all sounds.

        To put it another way, regular speech therapy for SD involves practice speaking with the “wrong” part of your throat, according to Dr. Cooper. If true, this would explain why regular speech therapy is completely ineffective in treating SD.

        Dr. Cooper’s method involves these elements:

        1. Learning to breathe correctly while speaking
        2. Learning to speak at the right pitch
        3. Learning to work around your illusion of your own voice.
        4. Intense practice all day.

        While each of these things is individually easy, it’s surprisingly hard to learn how to breathe, hit the right pitch, and think at the same time. That’s why it takes anywhere from a week to a month of intense practice to get it.

        Compare it to learning tennis, where you have to keep your eye on the ball, use the right stroke, and have the right footwork. Individually, those processes are easy to learn. But it takes a long time to do them all correctly at the same time.

        NUTS AND BOLTS
        -------------------------

        I spent Monday through Friday, from 9 am to 2 pm at Dr. Cooper’s office. Lunchtime was also used for practicing as a group in a noisy restaurant environment. This level of intensity seemed important to me. For a solid week, I focused on speaking correctly all of the time. I doubt it would be as effective to spend the same amount of time in one hour increments, because you would slip into bad habits too quickly in between sessions.

        Dr. Cooper started by showing us how we were breathing incorrectly. I don’t think any of us believed it until we literally put hands on each others’ stomachs and observed. Sure enough, our stomachs didn’t collapse as we spoke. So we all learned to breathe right, first silently, then while humming, and allowing our stomachs to relax on the exhale.

        The first two days we spent a few hours in our own rooms humming into devices that showed our pitch. It’s easier to hum the right pitch than to speak it, for some reason. The point of the humming was to learn to “feel” the right pitch in the vibrations of our face. To find the right pitch, you hum the first bar of the “Happy Birthday” song. You can also find it by saying “mm-hmm” in the way you would say if agreeing with someone in a happy and upbeat way.

        The patients who had SD the longest literally couldn’t hum at first. But with lots of work, they started to get it.

        Dr. Cooper would pop in on each of us during practice and remind us of the basics. We’d try to talk, and he’d point out that our stomachs weren’t moving, or that our pitch was too low.

        Eventually I graduated to humming words at the right pitch. I didn’t say the words, just hummed them. Then I graduated to hum-talking. I would hum briefly and then pronounce a word at the same pitch, as in:

        mmm-cow
        mmm-horse
        mmm-chair

        We had frequent group meetings where Dr. Cooper used a 1960s vintage recorder to interview us and make us talk. This was an opportunity for us all to see each other’s progress and for him to reinforce the lessons and correct mistakes. And it was a confidence booster because any good sentences were met with group compliments. The confidence factor can’t be discounted. There is something about knowing you can do something that makes it easier to do. And the positive feedback made a huge difference. Likewise, seeing someone else’s progress made you realize that you could do the same.

        When SD people talk, they often drop words, like a bad cell phone connection. So if an SD patient tries to say, “The baby has a ball,” it might sound like “The b---y –as a –all.” Dr. Cooper had two tricks for fixing that, in addition to the breathing and higher pitch, which takes care of most of it.

        One trick is to up-talk the problem words, meaning to raise your pitch on the syllables you would normally drop your pitch on. In your head, it sounds wrong, but to others, it sounds about right. For example, with the word “baby” I would normally drop down in pitch from the first b to the second, and that would cause my problem. But if I speak it as though the entire word goes up in pitch, it comes out okay, as long as I also breathe correctly.

        Another trick is humming into the problem words as if you are thinking. So when I have trouble ordering a Diet Coke (Diet is hard to say), instead I can say, “I’ll have a mmm-Diet Coke.” It looks like I’m just pausing to think.

        Dr. Cooper invented what he calls the “C Spot” method for finding the right vocal pitch. You put two fingers on your stomach, just below the breastbone, and talk while pressing it quickly and repeatedly, like a fast Morse code operator. It sort of tickles, sort of relaxes you, sort of changes your breathing, and makes you sound like you are sitting on a washing machine, e.g. uh-uh-uh-uh. But it helps you find your right pitch.

        Dr. Cooper repeats himself a lot. (If any of his patients are reading this, they are laughing at my understatement.) At first it seems nutty. Eventually you realize that he’s using a Rasputin-like approach to drill these simple concepts into you via repetition. I can’t begin to tell you how many times he repeated the advice to speak higher and breathe right, each time as if it was the first.

        Eventually we patients were telling each other to keep our pitches up, or down. The peer influence and the continuous feedback were essential, as were the forays into the noisy real world to practice. Normal speech therapy won’t give you that.

        Toward the end of the week we were encouraged to make phone calls and practice on the phone. For people with SD, talking on the phone is virtually impossible. I could speak flawlessly on the phone by the end of the week.

        RESULTS
        -------------

        During my week, there were three other patients with SD in the group. Three of us had the adductor type and one had abductor. One patient had SD for 30 years, another for 18, one for 3 years, and I had it for 2. The patients who had it the longest were recommended for a one month stay, but only one could afford the time to do it.

        The patient with SD for 3 years had the abductor type and spoke in a high, garbled voice. His goal was to speak at a lower pitch, and by the end of the week he could do it, albeit with some concentration. It was a huge improvement.

        The patient with SD for 30 years learned to speak perfectly whenever she kept her pitch high. But after only one week of training, she couldn’t summon that pitch and keep it all the time. I would say she had a 25% improvement in a week. That tracked with Dr. Cooper’s expectations from the start.

        The patient with SD for 18 years could barely speak above a hoarse whisper at the beginning of the week. By the end of the week she could often produce normal words. I’d say she was at least 25% better. She could have benefited from another three weeks.

        I went from being unable to speak in noisy environments to being able to communicate fairly well as long as I keep my pitch high. And when I slip, I can identify exactly what I did wrong. I don’t know how to put a percentage improvement on my case, but the difference is life changing. I expect continued improvement with practice, now that I have the method down. I still have trouble judging my own volume and pitch from the sound, but I know what it “feels” like to do it right.

        Dr. Cooper claims decades of “cures” for allegedly incurable SD, and offers plenty of documentation to support the claim, including video of before-and-afters, and peer reviewed papers. I am not qualified to judge what is a cure and what is an improvement or a workaround. But from my experience, it produces results.

        If SD is a neurological problem, it’s hard to explain why people can recover just by talking differently. It’s also hard to understand how bronchitis causes that neurological problem in the first place. So while I am not qualified to judge Dr. Cooper’s theories, they do pass the sniff test with flying colors.

        And remember that nursery rhyme that seemed to help me the first time? Guess what pitch I repeated it in. It was higher than normal.

        I hope this information helps.

        Pointless Math
        Highway Hypnosis Boredom

        So I just now made the ~200 mile drive from Austin (my current residence) to Dallas (where I grew up), both in Texas of course, to take care of some stuff.  I’ll be driving back tonight, wee.  I have to say, that particular drive is one of the dullest in existence.  It’s not particularly long, traffic is normal, and nothing is special per se; there’s just nothing to look at the whole way, and large gaps of road with no stops in between.  At least in the desert or the middle states you hopefully have a little variety, or mountains, to look at.  I’ve made 24+ hour straight trips back and forth from Canada that I’ve loathed less :-).

        Anywho, whenever I’m on a car trip of more than 100 miles, my mind always turns to counting down miles and running simple arithmetic in my head to calculate how much longer it will take at my current speed to reach my destination, how much time I can cut off if I went faster, etc.  This time around my mind turned towards deriving some formulas.  This is not the first time this has happened either XD.  I have to occupy myself with something when there’s just music to listen to and nothing else to do!  Driving is basically a muscle reflex for me on these long drives.


        So there are 2 formulas that are useful for this situation.
        #1 How much faster you are traveling per minute at different speeds.
        #2 How much time you will save at different speeds.

        Variables:
        H=Higher Speed In MPH
        L=Lower Speed In MPH
        M=Number of miles to travel
        The following are basic proofs of how the formulas work.  God... I swore after I got out of geometry I’d never think about proofs again.
        The first one is very simple.
        Number of extra miles traveled per hour = (H-L)
        Number of extra miles traveled per minute = (H-L) mph / 60 minutes
        So, for example, if you increase your speed from 20 to 30, you are going 10 miles an hour faster, which is 1/6 of a mile a minute.
        The second one is slightly more difficult but much more useful.
        h = Time it takes in hours to travel M at H = M miles / H mph
        l = Time it takes in hours to travel M at L = M miles / L mph
        Time saved in hours by traveling at the higher speed = l-h
        (M/L)-(M/H) [Substituting variables]
        (MH-ML)/(HL) [Getting a common denominator]
        M*(H-L)/(HL) [Distributive property]
        So we can see that time saved, in hours, per mile is (H-L)/(H*L).  Just multiply that by M to get total time saved in hours.
        With this second formula, we can see that at higher speeds, the difference between the two speeds must increase geometrically to get the same amount of time savings (because the product H*L is a divisor, making the time saved per mile inversely proportional to it).
        For example:
        If H=20 mph and L=10mph
        Time saved = (20-10)/(20*10) = 10/200 = 1/20 of an hour saved per mile, or 3 minutes
        If H=30 mph and L=20mph
        Time saved = (30-20)/(30*20) = 10/600 = 1/60 of an hour saved per mile, or 1 minute
        If you wanted to save 3 minutes per mile when starting at 15 mph...
        (x-15)/(15x)=1/20
        x-15=15x/20
        -15=15x/20-x
        -15=-1/4x
        x=60 miles per hour
        If you wanted to save 3 minutes per mile when starting at 20 mph...
        (x-20)/(20x)=1/20
        -20=20x/20-x
        -20=0 ...
        Wait, what? ... oh right, you can’t save 3 minutes per mile when it only takes 3 minutes to go a mile @ 20 mph, hehe.

        And if you wanted to save 6 minutes starting at 20mph, you would have to go -20mph, which is kind of theoretically possible since physics has negative velocities... just not negative speeds >.>.  I’m sure all it would take is one point twenty one jiggawatts to achieve.
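
        If you would rather let the computer do the arithmetic, here are the two formulas as JavaScript functions (a minimal sketch; the function names and the sample numbers are my own):

        // Formula #1: extra miles traveled per minute at the higher speed.
        function extraMilesPerMinute(H, L) {
            return (H - L) / 60;
        }

        // Formula #2: total minutes saved over M miles by going H mph instead of L mph.
        function minutesSaved(H, L, M) {
            return M * (H - L) / (H * L) * 60; // (H-L)/(H*L) = hours saved per mile
        }

        extraMilesPerMinute(30, 20); // 1/6 of a mile per minute, matching the example above
        minutesSaved(75, 65, 200);   // ~24.6 minutes saved on a ~200 mile drive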


        If you’ve actually read this far without getting bored, I congratulate you :-).

        Even more sad is the last Dallas-Austin drive I made, in which I couldn’t remember the continually compounded interest formula and spent a good chunk of the time deriving it in my head (all I could remember was the needed variables: “pert” - principal, e (~2.718 - exp), rate, time).
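
        For the record, those variables spell out the continuously compounded interest formula A = Pe^(rt). A quick sketch of it (the function name and sample numbers are my own):

        // Continuously compounded interest: A = P * e^(r*t)
        // P = principal, r = annual interest rate, t = time in years.
        function continuousCompound(P, r, t) {
            return P * Math.exp(r * t);
        }

        continuousCompound(1000, 0.05, 10); // ~1648.72, from $1000 at 5% over 10 years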