
Slackware 64 Linux Install on UEFI

I recently tried to install Slackware 14.2 64-bit (Linux) onto a new mini PC I just bought. The new PC only supports UEFI, so I had major issues getting the darn setup on the install media to actually run. I never DID get the install media to boot properly on the system, so I used an alternative: while the Slackware install USB key was in, I also loaded up an Ubuntu live USB key. The following is what I used to run the Slackware setup from within Ubuntu.

#Login as root
#sudo su

#Settings
InstallDVDName=SlackDVD #This is whatever you named your slackware usb key

#/mnt will contain the new file system for running the setup
cd /mnt

#Extract the initrd.img from the slackware dvd into /mnt
cat /media/ubuntu/$InstallDVDName/isolinux/initrd.img | gzip -d | cpio -i

#Bind special linux directories into the /mnt folder
for i in proc sys dev tmp; do mount -o bind /$i ./$i; done

#Mount the cdrom folder into /mnt/cdrom
rm cdrom
mount -o bind /media/ubuntu/$InstallDVDName/ ./cdrom

#Set /mnt as our actual (ch)root
chroot .

#Run the slackware setup
usr/lib/setup/setup

#NOTE: When installing, your package source directory is at /cdrom/slackware64
Reading Mailchimp batch request results

It’s a bit of a pain reading results from batch requests to Mailchimp. Here is a quick and dirty bash script to get and pretty print the JSON output. It could be cleaned up a little, including combining some of the commands, but meh.


#Example variables
BATCHID=abc1234567;
APIKEY=abcdefg-us11; #The API key alone; its datacenter suffix matches APIURL below
APIURL=us11.api.mailchimp.com;

#Request the batch information from Mailchimp
curl --request GET --url "https://dummy:$APIKEY@$APIURL/3.0/batches/$BATCHID" 2> /dev/null | \

#Get the URL to the response
grep -oP '"response_body_url":"https:.*?"' | \
grep -oP 'https:[^"]*' | \

#Get the response
xargs wget -O - 2> /dev/null | \

#The response is a .tar.gz file with a single file in it. So get the contents of this file
tar -xzvO 2> /dev/null | \

#Pretty print the json of the full return and the “response” objects within
php -r '$Response=json_decode(file_get_contents("php://stdin"), true); foreach($Response as &$R) $R["response"]=json_decode($R["response"], true); print json_encode($Response, JSON_PRETTY_PRINT);'
Auto Locking Windows on Login

On my primary computer (whose hard drive is encrypted) I always have Windows auto logging in to help with the bootup time. However, my bootup time can be rather slow; and if I needed to have my computer booted but locked, I had to wait for the login to complete so I could lock the computer. This has become a nuisance lately when I need to get out of my house quickly in the morning.

For the solution, I created a Windows boot entry that auto locks the computer after logging the user in. This also requires a batch file, run for the user on startup, to detect when this boot entry was selected. Here are the steps to create this setup:


  1. Create the new boot entry: In the windows command line, run: bcdedit /copy {current} /d "Lock on Startup"
    This creates a new boot option, duplicated from your currently selected boot option, in the boot menu labeled “Lock on Startup”.
  2. (Optional) Change the bootup timeout: In the windows command line, run: bcdedit /timeout 5
    Where 5 is a 5 second timeout.
  3. Create a batch file to run on login: In your start menu’s startup folder, add a batch file. You can name it anything as long as the extension is “.bat”.
    Add the following to the file (the full file is sketched after this list): bcdedit /enum {current} | findstr /r /c:"description  *Lock on Startup" && rundll32.exe user32.dll,LockWorkStation
    Note that there are 2 spaces in the description search string to replicate the regular expression's 1-or-more quantifier “+”, since findstr only supports the 0-or-more quantifier “*”.
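
For reference, here is the complete startup batch file as a minimal sketch; it assumes the boot entry was named exactly "Lock on Startup" as in step 1:

@echo off
rem Lock the workstation only if the currently booted entry is the "Lock on Startup" one
bcdedit /enum {current} | findstr /r /c:"description  *Lock on Startup" && rundll32.exe user32.dll,LockWorkStation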
Opening IntelliJ via the Symfony ide setting
Nasty Escaping Problems

I wanted a simple setup in Symfony where the programmer could define their IDE in the parameters file. Sounds simple, right? Just add something like ide_url: 'phpstorm' to parameters.yml->parameters and ide: '%ide_url%' to config.yml->framework. And it worked great. However, my problem was much more convoluted.

I am actually running the Symfony server on another machine and am accessing the files via NFS on Windows. So, it would try to open PHPStorm with the incorrect path. Symfony suggests the solution to this is writing your own custom URL handler with %f and %l to fill in the filename and line, and using some weird formatting to do string replaces. So I wrote in 'idea://%%f:%%l&/PROJECT_PATH_ON_SERVER/>DRIVE_LETTER:/PATH_ON_WINDOWS/' (note the doubled percent signs for escaping) directly in the config.yml and that worked, kind of. The URL was perfect, but IntelliJ does not seem to register the idea:// protocol handler like PHPStorm theoretically does (according to some online threads) with phpstorm://. So I had to write my own solution.

This answer on stackoverflow has the answer on how to register a protocol handler in Windows. But the problem now was that the first parameter passed to IntelliJ started with the idea:// which broke the command-line file-open. So I ended up writing a script to fix this, which is at the bottom.

OK, so we’re almost there; I just had to paste the string I came up with back into the parameters.yml, right? I wish. While this was now working properly in a Symfony error page, a new problem arose. The Symfony bin/console debug:config framework command was failing with You have requested a non-existent parameter "f:". The darn thing was reading the unescaped string as 'idea://%f:%l&...' and it thought %f:% was supposed to be a variable. Sigh.

So the final part was to double escape the strings with 4 percent signs: 'idea://%%%%f:%%%%l&...'. Except now the URL on the error pages gave me idea://%THE_PATH:%THE_LINE_NUMBER. It was adding an extra percent sign before both values. This was simple to resolve in the script I wrote, so I was finally able to open scripts directly from the error page. Yay.



So here is the final set of data that has to be added to make this work:
Registry:
HKCR/idea/(default) = URL:idea Protocol
HKCR/idea/URL Protocol = ""
HKCR/idea/shell/open/command = "PATH_TO_PHP" -f "PATH_TO_SCRIPT" "%1" "%2" "%3" "%4" "%5" "%6" "%7" "%8" "%9"

parameters.yml:
parameters:
    ide_url: 'idea://%%%%f:%%%%l&/PROJECT_PATH_ON_SERVER/>DRIVE_LETTER:/PATH_ON_WINDOWS/'

config.yml:
framework:
    ide: '%ide_url%'

PHP_SCRIPT_FILE:
<?php
function DoOutput($S)
{
	//You might want to do something like output the error to a file or do an alert here
	print $S;
}

if(!isset($argv[1]))
	return DoOutput('File not given');
if(!preg_match('~^idea://(?:%25|%)?([a-z]:[/\\\\][^:]+):%?(\d+)/?$~i', $argv[1], $MatchData))
	return DoOutput('Invalid format: '.$argv[1]);

$FilePath=$MatchData[1];
if(!file_exists($FilePath))
	return DoOutput('Cannot find file: '.$FilePath);

$String='"C:\Program Files\JetBrains\IntelliJ IDEA 2018.1.6\bin\idea64.exe" --line '.$MatchData[2].' '.escapeshellarg($FilePath);
DoOutput($String);
shell_exec($String);
?>
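
As a convenience, those registry keys can also be applied with a .reg file. Here is a minimal sketch; the PHP and script paths are hypothetical placeholders you would replace with your own:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\idea]
@="URL:idea Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\idea\shell\open\command]
@="\"C:\\PHP\\php.exe\" -f \"C:\\Scripts\\OpenInIDEA.php\" \"%1\" \"%2\" \"%3\" \"%4\" \"%5\" \"%6\" \"%7\" \"%8\" \"%9\""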
Download all of an author’s fictionpress stories

I was surprised by my failure to find a script online to download all of an author’s stories from FictionPress or FanFiction.Net, so I threw together the below.

If you go to an author’s page in a browser (only tested in Chrome) it should have all of their stories, and you can run the following script in the console (F12) to grab them all. Their save name format is STORY_NAME_LINK_FORMAT - CHAPTER_NUMBER.html. It works as follows:

  1. Gathers all of the names, chapter 1 links, and chapter counts for each story.
  2. Converts this information into a list of links it needs to download. The links are formed by using the chapter 1 link, and just replacing the chapter number.
  3. It then downloads all of the links to your current browser’s download folder.

Do note that Chrome should prompt you to answer “This site is attempting to download multiple files”. So of course, say yes. The script is also designed to detect problems, which would happen if FictionPress changes their HTML formatting.

//Gather the story information
const Stories=[];
$('.mystories .stitle').each((Index, El) =>
	Stories[Index]={Link:$(El).attr('href'), Name:$(El).text()}
);
$('.mystories .xgray').each((Index, El) =>
	Stories[Index].NumChapters=/ - Chapters: (\d+) - /.exec($(El).text())[1]
);

//Get links to all stories
const LinkStart=document.location.protocol+'//'+document.location.host;
const AllLinks=[];
$.each(Stories, (_, Story) => {
	if(typeof(Story.NumChapters)!=='string' || !/^\d+$/.test(Story.NumChapters))
		return console.log('Bad number of chapters for: '+Story.Name);
	const StoryParts=/^\/s\/(\d+)\/1\/(.*)$/.exec(Story.Link);
	if(!StoryParts)
		return console.log('Bad link format for stories: '+Story.Name);
	for(let i=1; i<=Story.NumChapters; i++)
		AllLinks.push([LinkStart+'/s/'+StoryParts[1]+'/'+i+'/'+StoryParts[2], StoryParts[2]+' - '+i+'.html']);
});

//Download all the links
$.each(AllLinks, (_, LinkInfo) =>
	$('a').attr('download', LinkInfo[1]).attr('href', LinkInfo[0])[0].click()
);

Painless migration from PHP MySQL to MySQLi

The PHP MySQL extension was deprecated in favor of the MySQLi extension in PHP 5.5, and removed as of PHP 7.0. MySQLi was first referenced in PHP v5.0.0 beta 4 on 2004-02-12, with the first stable release in PHP 5.0.0 on 2004-07-13[1]. Before that, the PHP MySQL extension was by far the most popular way of interacting with MySQL on PHP, and still was for a very long time after. This website was opened only 2 years after the first stable release!


With the deprecation, problems have popped up on some websites I help host, many of these sites being very, very old. I needed a quick and dirty solution to monkey-patch these websites to use MySQLi without rewriting all their code. The obvious answer is to overwrite the functions with wrappers for MySQLi. The generally known way of doing this is with the Advanced PHP Debugger (APD). However, using this extension has a lot of requirements that are not appropriate for a production web server. Fortunately, another extension I recently learned of offers the renaming functionality: runkit. It was a super simple install for me.

  1. From the command line, run “pecl install runkit”
  2. Add “extension=runkit.so” and “runkit.internal_override=On” to the php.ini

Besides the ability to override these functions with wrappers, I also needed a way to make sure this file was always loaded before all other PHP files. The simple solution for that is adding “auto_prepend_file=/PATH/TO/FILE” to the “.user.ini” in the user’s root web directory.
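
For example, a hypothetical .user.ini line, where the path points at wherever you saved the override script below:

auto_prepend_file=/home/user/www/mysql_override.php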

The code for this script is as follows. It only contains a limited set of the MySQL functions, including some very esoteric ones that the web site used. This is not a foolproof script, but it gets the job done.


//Override the MySQL functions
foreach(Array(
    'connect', 'error', 'fetch_array', 'fetch_row', 'insert_id', 'num_fields', 'num_rows',
    'query', 'select_db', 'field_len', 'field_name', 'field_type', 'list_dbs', 'list_fields',
    'list_tables', 'tablename'
) as $FuncName)
    runkit_function_redefine("mysql_$FuncName", '',
        'return call_user_func_array("mysql_'.$FuncName.'_OVERRIDE", func_get_args());');

//If a connection is not explicitly passed to a mysql_ function, use the last created connection
global $SQLLink; //The remembered SQL Link
function GetConn($PassedConn)
{
    if(isset($PassedConn))
        return $PassedConn;
    global $SQLLink;
    return $SQLLink;
}

//Override functions
function mysql_connect_OVERRIDE($Host, $Username, $Password) {
    global $SQLLink;
    return $SQLLink=mysqli_connect($Host, $Username, $Password);
}
function mysql_error_OVERRIDE($SQLConn=NULL) {
    return mysqli_error(GetConn($SQLConn));
}
function mysql_fetch_array_OVERRIDE($Result, $ResultType=MYSQL_BOTH) {
    return mysqli_fetch_array($Result, $ResultType);
}
function mysql_fetch_row_OVERRIDE($Result) {
    return mysqli_fetch_row($Result);
}
function mysql_insert_id_OVERRIDE($SQLConn=NULL) {
    return mysqli_insert_id(GetConn($SQLConn));
}
function mysql_num_fields_OVERRIDE($Result) {
    return mysqli_num_fields($Result);
}
function mysql_num_rows_OVERRIDE($Result) {
    return mysqli_num_rows($Result);
}
function mysql_query_OVERRIDE($Query, $SQLConn=NULL) {
    return mysqli_query(GetConn($SQLConn), $Query);
}
function mysql_select_db_OVERRIDE($DBName, $SQLConn=NULL) {
    return mysqli_select_db(GetConn($SQLConn), $DBName);
}
function mysql_field_len_OVERRIDE($Result, $Offset) {
    $Fields=$Result->fetch_fields();
    return $Fields[$Offset]->length;
}
function mysql_field_name_OVERRIDE($Result, $Offset) {
    $Fields=$Result->fetch_fields();
    return $Fields[$Offset]->name;
}
function mysql_field_type_OVERRIDE($Result, $Offset) {
    $Fields=$Result->fetch_fields();
    return $Fields[$Offset]->type;
}
function mysql_list_dbs_OVERRIDE($SQLConn=NULL) {
    $Result=mysql_query('SHOW DATABASES', GetConn($SQLConn));
    $Tables=Array();
    while($Row=mysqli_fetch_assoc($Result))
        $Tables[]=$Row['Database'];
    return $Tables;
}
function mysql_list_fields_OVERRIDE($DBName, $TableName, $SQLConn=NULL) {
    $SQLConn=GetConn($SQLConn);
    $CurDB=mysql_fetch_array(mysql_query('SELECT Database()', $SQLConn));
    $CurDB=$CurDB[0];
    mysql_select_db($DBName, $SQLConn);
    $Result=mysql_query("SHOW COLUMNS FROM $TableName", $SQLConn);
    mysql_select_db($CurDB, $SQLConn);
    if(!$Result) {
        print 'Could not run query: '.mysql_error($SQLConn);
        return Array();
    }
    $Fields=Array();
    while($Row=mysqli_fetch_assoc($Result))
        $Fields[]=$Row['Field'];
    return $Fields;
}
function mysql_list_tables_OVERRIDE($DBName, $SQLConn=NULL) {
    $SQLConn=GetConn($SQLConn);
    $CurDB=mysql_fetch_array(mysql_query('SELECT Database()', $SQLConn));
    $CurDB=$CurDB[0];
    mysql_select_db($DBName, $SQLConn);
    $Result=mysql_query("SHOW TABLES", $SQLConn);
    mysql_select_db($CurDB, $SQLConn);
    if(!$Result) {
        print 'Could not run query: '.mysql_error($SQLConn);
        return Array();
    }
    $Tables=Array();
    while($Row=mysql_fetch_row($Result))
        $Tables[]=$Row[0];
    return $Tables;
}
function mysql_tablename_OVERRIDE($Result) {
    $Fields=$Result->fetch_fields();
    return $Fields[0]->table;
}

And here is some test code to confirm functionality:
global $MyConn, $TEST_Table;
$TEST_Server='localhost';
$TEST_UserName='...';
$TEST_Password='...';
$TEST_DB='...';
$TEST_Table='...';
function GetResult() {
    global $MyConn, $TEST_Table;
    return mysql_query('SELECT * FROM '.$TEST_Table.' LIMIT 1', $MyConn);
}
var_dump($MyConn=mysql_connect($TEST_Server, $TEST_UserName, $TEST_Password));
//Set $MyConn to NULL here if you want to test global $SQLLink functionality
var_dump(mysql_select_db($TEST_DB, $MyConn));
var_dump(mysql_query('SELECT * FROM INVALIDTABLE LIMIT 1', $MyConn));
var_dump(mysql_error($MyConn));
var_dump($Result=GetResult());
var_dump(mysql_fetch_array($Result));
$Result=GetResult(); var_dump(mysql_fetch_row($Result));
$Result=GetResult(); var_dump(mysql_num_fields($Result));
var_dump(mysql_num_rows($Result));
var_dump(mysql_field_len($Result, 0));
var_dump(mysql_field_name($Result, 0));
var_dump(mysql_field_type($Result, 0));
var_dump(mysql_tablename($Result));
var_dump(mysql_list_dbs($MyConn));
var_dump(mysql_list_fields($TEST_DB, $TEST_Table, $MyConn));
var_dump(mysql_list_tables($TEST_DB, $MyConn));
mysql_query('CREATE TEMPORARY TABLE mysqltest (i int auto_increment, primary key (i))', $MyConn);
mysql_query('INSERT INTO mysqltest VALUES ()', $MyConn);
mysql_query('INSERT INTO mysqltest VALUES ()', $MyConn);
var_dump(mysql_insert_id($MyConn));
mysql_query('DROP TEMPORARY TABLE mysqltest', $MyConn);
Weird filename encoding issues on windows

So somehow all of the file names in my Rammstein music directory, and some in my Daft Punk directory, had characters with diacritics replaced with an invalid character. I pasted one such filename into a hex editor to evaluate what the problem was. First, I should note that Windows encodes its filenames (and pretty much everything) in UTF-16. Everything else in the world (mostly) has settled on UTF-8, which is a much better encoding for many reasons. So during some file copy/conversion at some point in the directories’ lifetime, the file names had gone through some freakish double UTF-16/UTF-8 conversion, or something to that extent. I noticed that all I needed to do was replace the first 2 bytes of each diacritic character with a different byte, namely “EF 8x” to “Cx”, and the rest of the bytes for the character were fine. So if anyone ever needs it, here is the bash script.

LANG=;
IFS=$'\n'
for i in `find -type f | grep -P '\xEF[\x80-\x8F]'`; do
	FROM="$i";
	TO=$(echo "$i" | perl -pe 's/\xEF([\x80-\x8F])/pack("C", ord($1)+(0xC0-0x80))/e');
	echo Renaming "'$FROM'" to "'$TO'"
	mv "$FROM" "$TO"
done

I may need to expand the range beyond the x80-x8F range, but am unsure at this point. I only confirmed the range x82-x83.

Getting HTML from Simple Machine Forum (SMF) Posts

When I first created my website 10 years ago, from scratch, I did not want to deal with writing a comment system with HTML markup. And in those days, there weren’t plugins for everything like there are today. My solution was setting up a forum which would contain a topic for every Project, Update, and Post, and have my pages mirror the linked topic’s posts.

I had just put in a quick hack at the time in which the pulled SMF message’s body had links converted from bbcode (there might have been 1 other bbcode I also hooked). I had done this with regular expressions, which was a nasty hack.

So anywho, I finally got around to writing a script that converts SMF messages’ bbcode to HTML and caches it. You can download it here, or see the code below. The script is optimized so that it only ever needs to load SMF code when a post has not yet been cached. Caching happens during the initial loading of an SMF post within the script’s main function, and is discarded if the post is changed.

The script requires that you run the query on line #3 of itself in your SMF database. Directly after that are 3 variables you need to set. The script assumes you are already logged in to the appropriate user. To use it, call “GFTP\GetForumTopicPosts($ForumTopicID)”. I have the functions split up so you can do individual posts too if needed (requires a little extra code).


<?
//This SQL command must be run before using the script
//ALTER TABLE smf_messages ADD body_html text, ADD body_md5 char(32) DEFAULT NULL;

namespace GFTP;

//Forum database variables
global $ForumInfo;
$ForumInfo=Array(
    'DBName'=>'YourDatabase_smf',
    'Location'=>'/home/YourUser/www',
    'MessageTableName'=>'smf2_messages',
);

function GetForumTopicPosts($ForumTopicID)
{
    //Change to the forum database
    global $ForumInfo;
    $CurDB=mysql_fetch_row(mysql_query('SELECT database()'))[0];
    if($CurDB!=$ForumInfo['DBName'])
        mysql_select_db($ForumInfo['DBName']);
    $OldEncoding=SetEncoding(true);

    //Get the posts
    $PostsInfos=Array();
    $PostsQuery=mysql_query('SELECT '.implode(', ', PostFields())." FROM $ForumInfo[MessageTableName] WHERE id_topic='".intval($ForumTopicID).
        "' AND approved=1 ORDER BY id_msg ASC LIMIT 1, 9999999");
    if($PostsQuery) //If query failed, do not process
        while(($PostInfo=mysql_fetch_assoc($PostsQuery)) && ($PostsInfos[]=$PostInfo))
            if(md5($PostInfo['body'])!=$PostInfo['body_md5']) //If the body md5s do not match, get new value, otherwise, use cached value
                ProcessPost($PostsInfos[count($PostsInfos)-1]); //Process the latest post as a reference

    //Restore from the forum database
    if($CurDB!=$ForumInfo['DBName'])
        mysql_select_db($CurDB);
    SetEncoding(false, $OldEncoding);

    //Return the posts
    return $PostsInfos;
}

function ProcessPost(&$PostInfo) //PostInfo must have fields id_msg, body, body_md5, and body_html
{
    //Load SMF
    global $ForumInfo;
    if(!defined('SMF'))
    {
        global $context;
        require_once(rtrim($ForumInfo['Location'], DIRECTORY_SEPARATOR).DIRECTORY_SEPARATOR.'SSI.php');
        mysql_select_db($ForumInfo['DBName']);
        SetEncoding();
    }

    //Update the cached body_html field
    $ParsedCode=$PostInfo['body_html']=parse_bbc($PostInfo['body']);
    $EscapedHTMLBody=mysql_escape_string($ParsedCode);
    $BodyMD5=md5($PostInfo['body']);
    mysql_query("UPDATE $ForumInfo[MessageTableName] SET body_html='$EscapedHTMLBody', body_md5='$BodyMD5' WHERE id_msg=$PostInfo[id_msg]");
}

//The fields to select in the Post query
function PostFields() { return Array('id_msg', 'poster_time', 'id_member', 'subject', 'poster_name', 'body', 'body_md5', 'body_html'); }

//Swap character encodings. Needs to be set to utf8
function SetEncoding($GetOld=false, $NewSet=Array('utf8', 'utf8', 'utf8'))
{
    //Get the old charset if required
    $CharacterVariables=Array('character_set_client', 'character_set_results', 'character_set_connection');
    $OldSet=Array();
    if($GetOld)
    {
        //Fill in variables with default in case they are not found
        foreach($CharacterVariables as $Index => $Variable)
            $OldSet[$Variable]='utf8';

        //Query for the character sets and update the OldSet array
        $Query=mysql_query('SHOW VARIABLES LIKE "character_%"');
        while($VariableInfo=mysql_fetch_assoc($Query))
            if(isset($OldSet[$VariableInfo['Variable_name']]))
                $OldSet[$VariableInfo['Variable_name']]=$VariableInfo['Value'];

        $OldSet=array_values($OldSet); //Turn back into numerical array
    }

    //Change to the new database encoding
    $CompiledSets=Array();
    foreach($CharacterVariables as $Index => $Variable)
        $CompiledSets[$Index]=$CharacterVariables[$Index].'="'.mysql_escape_string($NewSet[$Index]).'"';
    mysql_query('SET '.implode(', ', $CompiledSets));

    //If requested, return the previous values
    return $OldSet;
}
?>
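
And for illustration, a minimal usage sketch; the include filename and the topic ID here are hypothetical:

<?
require_once('GetForumTopicPosts.php'); //Hypothetical filename for the script above
foreach(GFTP\GetForumTopicPosts(123) as $PostInfo) //123 is an example forum topic ID
	print '<div class="post">'.$PostInfo['body_html'].'</div>'; //body_html holds the cached HTML
?>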
PHPMyAdmin SQL Export: Key Position

After version 4.2.0.0 (2014-05-08) of phpMyAdmin, it stopped including tables’ keys inline within the CREATE TABLE statement, and instead opted to add all the table keys at the very end of the export file by modifying the tables. (See "rfe #1004 Create indexes at the end in SQL export".) This behavior has been annoying to many people, including myself, but I never noticed anyone mentioning a fix. I looked into the source and there is a very simple way to restore this behavior to what it originally was.


Edit the file “phpMyAdmin/libraries/plugins/export/ExportSql.class.php”. In it, the code block starting with the below line needs to be skipped
if (preg_match('@CONSTRAINT|KEY@', $create_query)) {
The easiest way to do this is changing that line to
if (false && preg_match('@CONSTRAINT|KEY@', $create_query)) {
Useful Exim Scripts
For fighting spam

In the course of my Linux administrative duties (on a cPanel server), I have created multiple scripts to help us out with Exim, our mail transfer agent. These are mostly used to help us fight spam, and determine who is spamming when it occurs.



This monitors the number of emails in the queue, and sends our admins an email when a limit (1000) is reached. It would need to be run on a schedule (via cron).
#!/bin/bash
export AdminEmailList="ADMIN EMAIL LIST SEPARATED BY COMMAS HERE"
export Num=`/usr/sbin/exim -bpc`
if [ $Num -gt 1000 ]; then
        echo "Too many emails! $Num" | /usr/sbin/sendmail -v "$AdminEmailList"
        #Here might be a good place to delete emails with “undeliverable” strings within them
        #Example (See the 3rd script): exim-delete-messages-with 'A message that you sent could not be delivered'
fi

This deletes any emails in the queue from or to a specified email address (first parameter). If the address is the recipient, the from must be "<>" (root)
#!/bin/bash
exiqgrep -ir $1 -f '<>' | xargs exim -Mrm
exiqgrep -if $1 | xargs exim -Mrm

This deletes any emails in the queue which contain a given string (first parameter)
#!/bin/bash
if [ "$1" == "" ]
then
  echo 'Cannot delete with empty string'
else
  grep -lir "$1" /var/spool/exim/input/ | sed -e 's/^.*\/\([a-zA-Z0-9-]*\)-[DH]$/\1/g' | xargs /usr/sbin/exim -Mrm
fi

Get a count of emails in the queue per sender (sender email address is supplied by sender and can be faked)
#!/bin/bash
exim -bp | grep -oP '<.*?>' | sort | uniq -c | sort -n

Get a count of emails in the queue per account (running this script can take a little while)
#!/bin/bash
exim -bp | grep -Po '(?<= )[-\w]+(?= <)' | xargs -n1 exim -Mvh | grep -ioP '(?<=auth_sender ).*$' | sort | uniq -c

Bonus: Force all non-specified accounts on Exim to use a certain IP address for sending. It would need to be run on a schedule (via cron).
#!/bin/bash
export IPAddress="YOUR ADDRESS HERE"
/usr/bin/perl -i -pe 's/\*:.*/*: '$IPAddress'/g' /etc/mailips
Renaming a series for Plex

I was recently trying to upload a TV series into Plex and was having a bit of a problem with the file naming. While I will leave the show nameless, let’s just say it has a magic dog.

Each of the files (generally) contained 2 episodes and were named S##-E##-E## (Season #, First Episode #, Second Episode #). Plex really didn’t like this, as for multi-episode files, it only supports the naming convention of first episode number THROUGH a second episode number. As an example S02-E05-E09 is considered episodes 5 through 9 of season 2. So I wrote a quick script to fix up the names of the files to consider each file only 1 episode (the first one), and then create a second symlinked file, pointing to the first episode, but named for the second episode.

So, for the above example, we would get 2 files with the exact same original filenames, except with the primary file having “S02E05,E09” in place of the episode number information, and the linked file having “S02E09-Link” in its place.


The following is the bash code for renaming/fixing a single episode file. It needs to be saved into a script file. This requires perl for regular expression renaming.


#Get the file path info and updated file name
FilePath=`echo "$1" | perl -pe 's/\/[^\/]*$//g'`
FileName=`echo "$1" | perl -pe 's/^.*\///g'`
UpdatedFileName=`echo "$FileName" | perl -pe 's/\b(S\d\d)-(E\d\d)-(E\d\d)\b/$1$2,$3/g'`

#If the file is not in the proper format, exit prematurely
if [ "$UpdatedFileName" == "$FileName" ]; then
    echo "Proper format not found: $FilePath/$FileName"
    exit 1
fi

#Rename the file
cd "$FilePath"
mv "$FileName" "$UpdatedFileName"

#Create a link to the file with the second episode name
NewLinkName=`echo "$FileName" | perl -pe 's/\b(S\d\d)-(E\d\d)-(E\d\d)\b/$1$3-Link/g'`
ln -s "$UpdatedFileName" "$NewLinkName"

If you save that to a file named “RenameShow.sh”, you would use it like “find /PATH/ -type f -print0 | xargs -0n 1 ./RenameShow.sh”. For Windows, make sure you use Windows symlinks with /H (to make them hard file links, as soft/symbolic link files really just don’t work in Windows).

Missing phar wrapper

Phar files are PHP’s way of distributing an entire PHP solution in a single package file. I recently had a problem on my Cygwin PHP server that said “Unable to find the wrapper "phar" - did you forget to enable it when you configured PHP?”. I couldn’t find any solution for this online, so I played with it a bit.

The quick and dirty solution I came up with is to include the phar file like any normal PHP file, which sets your current working directory inside of the phar file. After that, you can include files inside the phar and then change your directory back to where you started. Here is the code I used:

if(preg_match('/^(?:win|cygwin)/i', PHP_OS))
{
    $CWD=getcwd();
    require_once('Scripts/PHPExcel.phar');
    require_once('PHPExcel/IOFactory.php');
    chdir($CWD);
}
else
    require_once('phar://Scripts/PHPExcel.phar/PHPExcel/IOFactory.php');
Netflix Auto Continue Play

Here is a little Tampermonkey script for Chrome that automatically clicks the “Continue playing” button when Netflix pops it up and pauses the current stream.


// ==UserScript==
// @name         Netflix auto continue play
// @namespace    https://www.castledragmire.com/Posts/Netflix_Auto_Continue_Play
// @version      1.0
// @description  When netflix pops up the "Continue play" button, this script auto-selects "Continue" within 1 second
// @author       Dakusan
// @match        http://www.netflix.com/
// @grant        none
// ==/UserScript==

setInterval(function() {
    var TheElements=document.getElementsByClassName('continue-playing');
    for(var i=0;i<TheElements.length;i++)
        if(/\bbutton\b/.test(TheElements[i].className))
        {
            console.log('"Continue Playing" Clicked');
            TheElements[i].click();
        }
}, 1000);

Make sure to set the “User matches” in the settings page to include both “http://www.netflix.com/*” and “https://www.netflix.com/*”.

Facebook career resume upload+import erasing fix
Features causing unintended consequences

So I was applying for a job that requires using Facebook’s career section, and I found it immensely helpful that it automatically filled in most of the form (Skills, Education History, Experience, etc) via your profile, as that information can take well over half an hour to fill in every time. However, an uploaded resume is also required on the form, and when you upload the file, Facebook tries to import the contents from it, thereby deleting all information already filled in!

The following is the solution to this problem
  1. Make sure all information you have already filled in (the non automatic stuff) is saved elsewhere to fill in again. If this includes Skills, Education History or Experience, I highly recommend you fill this information directly into your Facebook profile. It can be made private if you do not wish it to be shared.
  2. Upload your resume, which will erase everything already on the forms.
  3. After you have uploaded your resume, run the following javascript (available via your browser’s javascript/developer console):
    document.getElementsByName('cand_resume_fbid')[0].value
    and store the result
  4. Do a complete/hard refresh (ctrl+f5) of the page. While this will erase the resume, it will again fill in all the other automatic information.
  5. Run the following javascript to restore the resume (replacing the ### with the value received from above):
    document.getElementsByName('cand_resume_fbid')[0].value='###';
    The change will not show on the page, but it works
Font size hack for Mac Web Browsers

Mac renders fonts differently than windows, which is a problem when trying to make cross-system compatible web pages. I've read in many places that using "em" instead of pixels fixes this problem, but sometimes using pixel based measurements is required for a project.

I found when testing a recent project's web page out on Apple's OSX that no matter what browser I used, the fonts were always 2px bigger than on windows, which threw off my layouts. So I used the simple JavaScript solution below (jQuery required). Note that this assumes all font sizes are in pixels. It might not hurt to add a check for that if you mix font size types.

$(document).ready(function() {
	if(/Macintosh/.test(navigator.userAgent))
		$.each(document.styleSheets, function(Indx, SS) {
			var rules=SS.cssRules || SS.rules;
			for(var i=0;i<rules.length;i++)
				if(rules[i].style && rules[i].style.fontSize!='')
					rules[i].style.fontSize=(parseInt(rules[i].style.fontSize, 10)-2)+'px';
		});
});
Cygwin SIGINT fix for golang

Cygwin has had a long-time problem that, depending on your configuration, may make you unable to send a SIGINT (interrupt signal via Ctrl+C) to native Windows command line executables. As a matter of fact, trying to do so may completely freeze up the console, requiring a process kill of the actual console, bash, and the executable you ran. This problem can crop up for many reasons, including the version of Cygwin you are running and your terminal emulator. I specifically installed mintty as my default Cygwin console to get rid of this problem a long time ago (among many other features it had), and now even it has this problem.

While my normal solution is to try and steer clear of native Windows command line executables in Cygwin, this is not always an option. Golang was also causing me this problem every time I ran a network server, which was especially problematic as I would have to ALSO manually kill the server process or it would continue to hold the network port so another test of the code could not use it. An example piece of code is as follows:

package main
import ( "net/http"; "fmt" )
func main() {
	var HR HandleRequest
	if err := http.ListenAndServe("127.0.0.1:81", HR); err!=nil {
		fmt.Println("Error starting server") }
}

//Handle a server request
type HandleRequest struct{}
func (HR HandleRequest) ServeHTTP(w http.ResponseWriter, req *http.Request) {
	fmt.Printf("Received connection from: %s\n", req.RemoteAddr)
}
---
go run example.go

The first solution I found to this problem, which is by far the best solution, was to build the executable and then run it, instead of just running it straight from go.
go build example.go && example.exe
However, as of this post, it seems to no longer work! The last time I tested it and confirmed it was working was about 3 months ago, so who knows what has changed since then.

The second solution is to just build in some method of killing the process that uses “os.Exit”. For example, the following will exit if the user types “exit”
func ListenForExitCommand() {
	for s:=""; s!="exit"; { //Listen for the user to type exit
		if _, err:=fmt.Scanln(&s); err!=nil {
			if err.Error()!="unexpected newline" {
				fmt.Println(err) }
		} else if s=="flush" || s=="exit" {
			//Clean up everything here
		}
	}
	fmt.Println("Exit received, closing process")
	os.Exit(1)
}
and then add the following at the top of the main function:
go ListenForExitCommand() //Listen for "exit" command
UTF8 fix for iOS mail
Thanks apple</sarcasm>

Nowadays, it just makes sense to do everything in utf8 from the start. Most everything of consequence now supports it, it keeps byte count down for most anyone using an alphabet that is primarily Latin, it is compatible with a lot of older software, and it can help lead to some useful shortcuts in programming. A very good number of libraries and languages even assume it as the default character encoding now.

It’s especially nice for html and web pages, so content can just be pasted in as-is, except of course for single and double quotes (which I always replace with ’ “ ” anyways) and < > &. However, I was recently thrown for a loop when my utf8 encoding was not being read correctly in Apple’s iOS mail client. After lots of testing and research, I came to the conclusion that it just plain doesn’t support utf-8 by default! I think I read somewhere you have to install foreign language packs to make it available, but that doesn’t help our normal latin-based-alphabet users.


The simple solution for diacritic (and other html encodable) characters is to just encode them as html: (only supported for php>=5.3.4)
function EncodeForIOS($String) //This assumes the following characters are already encoded (line 2): & < >
{
	$MailHTMLConvertArray=get_html_translation_table(HTML_ENTITIES, ENT_NOQUOTES, 'UTF-8');
	unset($MailHTMLConvertArray['&'], $MailHTMLConvertArray['<'], $MailHTMLConvertArray['>']);
	return strtr($String, $MailHTMLConvertArray);
}
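
A hypothetical usage example; note that per the comment above, & < > are assumed to be encoded already, so the &amp; passes through untouched:

print EncodeForIOS('Motörhead &amp; Mötley Crüe'); //Outputs: Mot&ouml;rhead &amp; M&ouml;tley Cr&uuml;e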
Windows “ln” (symbolic linking) support for Cygwin
Cygwin requires lots of tweaks to really be usable
The “ln -s” command in Cygwin creates a fake symbolic link only supported in Cygwin. I whipped up the following script to create a true windows symbolic link that is supported in both Windows and Cygwin (it shows up as a symlink in Cygwin).
TARGET=`echo $1 | perl -pe 's/\\//\\\\/g'`; #Determine the target from the first parameter (where we are linking to). Change forward slashes to back slashes
LINK=`echo $2 | perl -pe 's/\\//\\\\/g'` #Determine the link name from the second parameter (where the symlink is made). Change forward slashes to back slashes
cmd /c mklink $3 $4 $5 $6 "$LINK" "$TARGET" #Perform the windows mklink command with optional extra parameters
Note that for the link name, you have to include the full filename, not just specify a directory.
See here for information on mklink and its switches. Specifically:
  • /d : directory symlink (more hard)
  • /j : directory junction (more soft)
This handles the problem a little more directly than my other post on the topic ("Symlinks in a Windows programming environment").

[Edit on 2016-01-12 @ 12:34am]
And once again, I have a new version of the code. This version has the following advantages:
  • No longer limited to just 4 extra parameters (dynamic instead of static)
  • Can now just specify your directory as the link location, and the filename will be automatically filled in
  • Handles spaces better
Do note, if you want to link directly to a hard drive letter, you must use "c:/" instead of "/cygdrive/c/"

#Get the target and link
TARGET="$1"
shift
LINK="$1"
shift

#If the link is already a directory, append the filename to the end of it
if [ -d "$LINK" ]; then
    #Get the file/directory name without the rest of the path
    ELEMENT_NAME=`echo "$TARGET" | perl -pe 's/^.*?\/([^\/]+)\/?$/$1/'`

    #Append the file name to the target, making sure there is only 1 separating "/"
    LINK=`echo "$LINK" | perl -pe 's/^(.*?)\/?$/$1/'`
    LINK="$LINK"/"$ELEMENT_NAME"
fi

#Replace forward slashes with back slashes
TARGET=`echo $TARGET | perl -pe 's/\\//\\\\/g'`
LINK=`echo $LINK | perl -pe 's/\\//\\\\/g'`

#Perform the windows mklink command with optional extra parameters
cmd /c mklink "$@" "$LINK" "$TARGET"
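
Hypothetical usage, assuming the script is saved as "ln.sh": the target comes first, the link location second, and any extra mklink switches (like /d for a directory symlink) after those:

./ln.sh "c:/Users/Me/Music" "c:/MusicLink" /d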
Selectively skipping data in a cPanel backup
Using a hammer instead of a scalpel

I was having problems on one of our production Linux cPanel servers in which our backup drive was not able to hold all the data from our primary drive for both our daily and weekly backups. An easy hack to fix this is to mount any subfolders you wish to exclude (generally very large ones) as a readonly temp file system in the appropriate backup folder. With this method, you can selectively exclude individual directories to one or more of the daily/weekly/monthly backup folders.

The only downside to this method is that pkgacct (called by cpbackup) logs will throw readonly file system errors for each file that cannot be copied.


So, to have cPanel discard an individual directory during the backup, you need to do the following:
First, make sure the backup directory to exclude is created and empty by running:
rm -rf PATH;
mkdir -p PATH;
NOTE: BE CAREFUL WITH “rm -rf”, IT IS A DANGEROUS COMMAND

To manually mount the directory, run:
mount tmpfs PATH -t tmpfs -o defaults,ro
To permanently mount the directory (mount on boot), edit /etc/fstab and add the following line:
tmpfs PATH tmpfs defaults,ro 0 0
If you do the permanent fix, don’t forget to run “mount PATH” to have it mount it to the live system, since fstab will not mount all its listed file systems until the next boot.

An example of a PATH might be: /backup/cpbackup/weekly/dakusan/public_html/uploads

cPanel also recently added (experimental) hard linking for backups, which really helps out with space concerns, and makes the need for this hack a bit less pressing.

Automatic disconnect protection for SSH terminals
A simple solution for a simple problem

I got tired a long time ago of losing what I was currently working on in SSH sessions when they were dropped due to network connectivity issues. To combat this I have been using screen when running sessions that I absolutely cannot lose, but the problem still persists in other sessions, or when I forget to run it. The easy solution was to add screen to one of my bash init scripts (~/.bashrc [or ~/.bash_profile]) as follows:

alias autoscreen="screen -x -RR && exit"
if [[ "$TERM" == cygwin* || "$TERM" == xterm* ]]; then autoscreen; fi
This automatically makes the screen command run on bash user initialization, always connecting to the same session.

Edit on 2012-12-17 @ 7:00pm:
The last iteration was:
if [ $TERM == "xterm" ]; then screen -R pts-0.`hostname` && exit; fi
  • The main screen command is now an “alias” to help out with some bash parsing problems.
  • The resume parameters are now “-x -RR” which first attempts to multiplex a session, and if that fails, it creates a session. With multiplexing turned on, everyone uses the same screen session and can interact with each other, and you don’t have to worry about accidentally connecting to the wrong screen session or having new ones turned on. The only problem is sometimes you may accidentally step on other user’s toes :-)
  • The special screen session name was removed so it always starts with the default name (easier to manually interact with)
  • I added detection of multiple term names (cygwin and xterm), and added a wildcard at the end of each since there is often suffixes to these names. More term names can easily be added using this syntax.

Edit on 2010-12-30 @ 3:50am: I changed != "screen" to == "xterm" because otherwise scp and some other non-term programs were failing. You may have to use something other than “xterm” for your systems.


Edit on 2010-1-24 @ 2:00pm: I added the exit; so the terminal automatically exits when the screen session closes.

iGoogle Security Problems
For a company that stresses security...

I’ve recently been having problems using the Google Reader widget in iGoogle. Normally, when I clicked on an RSS Title, a “bubble” popped up with the post’s content. However recently when clicking on the titles, the original post’s source opened up in a new tab. I confirmed the settings for the widget were correct, so I tried to remember the last change I made in Firefox that could have triggered this problem, as it seems the problem was not widespread, and only occurred to a few other people with no solution found. I realized a little bit back that I had installed the HTTPS Everywhere Firefox plugin. As described on the EFF’s site “HTTPS Everywhere is a Firefox extension ... [that] encrypts your communications with a number of major websites”.

Once I disabled the plugin and found the problem went away, I started digging through Google’s JavaScript code with FireBug. It turns out the start of the problem was that the widgets in iGoogle are run in their own IFrames (which is a very secure way of doing a widget system like this). However, the Google Reader contents were being pulled in through HTTPS secure channels (as they should be, thanks to HTTPS Everywhere), while the iGoogle page itself was pulled in through a normal HTTP channel! Separate windows/frames/tabs cannot interact with each other through JavaScript if they are not part of the same domain and protocol (HTTP/HTTPS), to prevent cross-site scripting hacks.

I was wondering why HTTPS Everywhere was not running iGoogle through an HTTPS channel, so I tried it myself and found out Google automatically redirects HTTPS iGoogle requests to non secure HTTP channels! So much for having a proper security model in place...

So I did a lot more digging and modifying of Google’s code to see if I couldn’t find out exactly where the problem was occurring and if it couldn’t be fixed with a hack. It seems the code to handle the RSS Title clicking is injected during the “onload” event of the widget’s IFrame. I believe this was the code that was hitting the security privilege error to make things not work. I attempted to hijack the Google Reader widget’s onload function and add special privileges using “netscape.security.PrivilegeManager.enablePrivilege”, but it didn’t seem to help the problem. I think with some more prodding I could have gotten it working, but I didn’t want to waste any more time than I already had on the problem.

The code that would normally be loaded into the widget’s IFrame window hooks the “onclick” event of all RSS Title links to both perform the bubble action and cancel the normal “click” action. Since the normal click action for the anchor links was not being canceled, the browser action of following the link occurred. In this case, the links also had a “target” set to open a new window/tab.


There is however a “fix” for this problem, though I don’t find it ideal. If you edit the “extensions\https-everywhere@eff.org\chrome\content\rules\GoogleServices.xml” file in your Firefox profile directory (most likely at “C:\Users\USERNAME\AppData\Roaming\Mozilla\Firefox\Profiles\PROFILENAME\” if running Windows 7), you can comment out or delete the following rule so Google Reader is no longer run through secure HTTPS channels:

<rule from="^http://(www\.)?google\.com/reader/" 
to="https://www.google.com/reader/"/>

That being said, I’ve been having a plethora of problems with Facebook and HTTPS Everywhere too :-\ (which it actually mentions might happen in its options dialog). You’d think the largest sites on the Internet could figure out how to get their security right, but either they don’t care (the more likely option), or they don’t want the encryption overhead. Alas.

Visual Studio IDE Tab Order
Microsoft fails at usability

I’ve been really annoyed for a while by the unintuitive IDE tab ordering in Visual Studio 2005+. When you type [shift+]alt+tab, you don’t get the next/previous tab in the list, as would be the OBVIOUS way to do it (and probably how all other IDEs do it). No, it switches between tabs in an arbitrary hidden order related to the last access order of the tabs.

Searching the internet for a solution to this was pretty fruitless, so I tried to tackle the problem myself. I dug through all the possible structures I could find in the Visual Studio IDE macro explorer, and was unfortunately unable to find where the tab order was kept in a window pane (if it is even accessible to the user). I thought I had the solution at one point, but realized it also just switches tabs in the order they were originally opened :-(. This is the VB macro code I came up with to do at least that, which uses “DTE.ActiveWindow.Document.Collection” for the tab-open order.

	 Public Sub TabDirection(ByVal Direction As Integer)
		  'Find the index of the current tab
		  Dim i As Integer
		  Dim Index As Integer
		  Dim Count As Integer
		  Count = DTE.ActiveWindow.Document.Collection.Count
		  For i = 1 To Count
				If DTE.ActiveWindow.Document.Collection.Item(i).ActiveWindow.Equals(DTE.ActiveWindow) Then Index = i
		  Next i

		  'Determine the new index
		  Index = Index + Direction
		  If Index > Count Then Index = 1
		  If Index = 0 Then Index = Count

		  DTE.ActiveWindow.Document.Collection.Item(Index).Activate() 'Activate the next tab in the proper direction
	 End Sub

	 Public Sub TabForward()
		  TabDirection(1)
	 End Sub

	 Public Sub TabBackward()
		  TabDirection(-1)
	 End Sub
Live Streaming SHOUTcast through Flash
The one time I decide to look online before trying it out myself

A client of mine wanted their website to have an applet that played streaming music from a SHOUTcast server. The easy solution would have been to just embed a Windows Media Player applet into the page, but that would only work for IE.

I thoroughly searched the web and was unable to find a Flash applet (or other solution) that already did this (and actually worked). Most of the information I was finding was people having problems getting this kind of thing working in Flash with no answer provided. After giving up on finding a resolution online, I decided to load up Flash and see what I could find from some tinkering.

Quite frankly, I’m shocked people were having so many problems with this. I started an ActionScript 2.0 project and put in the following code, and it worked right away in Flash CS3 (v9.0) with no problem:

var URL="http://example.shoutcast.castledragmire.com:1234/" //The URL to the SHOUTcast server
var MySound:Sound=new Sound(this);
MySound.loadSound(URL,true);

Unfortunately, once I exported the Flash applet and loaded it up in my browsers, it was no longer working. After a few minutes of poking around, I had a hunch that the SHOUTcast host might be sending different data depending on the [Browser’s] User Agent. I changed Firefox’s User Agent to “Flash” through a Firefox add-on (User Agent Switcher), and it worked :-D.

Once again, unfortunately, this was not a viable solution because I couldn’t have every user who visited the client’s web page change their browser User Agent string :-). The quickest solution at this point to the problem was to just create a passthrough script that grabbed the live stream on their server and passed it to the client. The following is the PHP script I used for this:

$streamname='example.shoutcast.castledragmire.com';
$port      ='1234';
$path      ='/';

header('Content-type: audio/mpeg');
$sock=fsockopen($streamname,$port);
fputs($sock, "GET $path HTTP/1.0\r\n");
fputs($sock, "Host: $streamname\r\n");
fputs($sock, "User-Agent: WinampMPEG/2.8\r\n");
fputs($sock, "Accept: */*\r\n");
fputs($sock, "Connection: close\r\n\r\n");
fpassthru($sock);
fclose($sock);

The final two steps to get this working were:
  1. Setting the Flash Applet’s URL variable to the PHP file
  2. Turning off PHP output buffering for the file. This can only be done through Apache or the php.ini depending on the server setup. This is very important, as if it’s on, the data will never get sent to the user.

The only problem with this method is that it taxes the server that is passing the data through, especially since it uses PHP... This kind of thing could very easily be done in C though (as a matter of fact, I will be writing a post on something very close to that very soon).

Focus Follows Mouse
Active Window Tracking in Windows

A friend of mine who mainly works in Linux is always frustrated when he has to deal with working in Windows and doesn’t have FFM (Focus Follows Mouse). FFM means focusing a window when the mouse cursor moves over it (rollover), preferably without raising the window’s z-order position. I told him I’d throw together a program that did this, but my original approach was problematic.

My original approach was to use LowLevelMouseProc (WinAPI:SetWindowsHookEx), which is the same basic approach as my HalfKey project. Whenever a non-focused window (WinAPI:GetActiveWindow) is moused over (WinAPI:WindowFromPoint), it would activate it (gain the focus) (WinAPI:SetWindowLong/WinAPI:SetForegroundWindow). Unfortunately, as I found out, Windows is specifically designed to make an activated window go to the top of the z-order, and there is no way around this. The only solution I could find was recording the original position of the activated window (WinAPI:GetParent) and restoring it after the operation. This was however less than optimal :-(.

After some additional research on the topic, I found out the FFM functionality is actually built into Windows through a little known “System Parameter” (WinAPI:SystemParametersInfo). Microsoft calls it “Active Window Tracking”. Below is the code to the FocusFollowsMouse.exe (command line executable) to activate this system feature (the executable can be run without the command line and it will use the default options).


FocusFollowsMouse program information:
Turns on active window tracking in Windows
Parameters:
  • -h The help screen (cancels further actions)
  • -i installs new settings permanently
  • -d disables active window tracking
  • -r raise the window when focusing

//Toggles active window tracking, with options
#include <windows.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
	BOOL SendUpdate=SPIF_SENDCHANGE; //How to send the update to the system. Default is to only send to the system, but not make the change permanent
	BOOL TurnOn=TRUE; //By default we are turning on the option
	BOOL RaiseWindow=FALSE; //Whether to raise the window that is gaining the focus
	int i;

	//Read in options
	for(i=1;i<argc;i++)
	{
		if(argv[i][0]!='-') //Parameters must start with a -
			continue;
		switch(argv[i][1]) //Execute a valid action on the first character. Characters after it are ignored
		{
			case 'h': //Help  screen
				printf("%s", "Turns on active window tracking in Windows\n"
					"-h this help screen (cancels further actions)\n"
					"-i installs new settings permanently\n"
					"-d disables active window tracking\n"
					"-r raise the window when focusing\n");
					return 0; //Cancel further actions
			case 'i': //Install permanently
				SendUpdate|=SPIF_UPDATEINIFILE; //Writes the new system-wide parameter setting to the user profile.
				break;
			case 'd': //Turn off active window tracking
				TurnOn=FALSE;
				break;
			case 'r': //Raise the window
				RaiseWindow=TRUE;
				break;
		}
	}

	//Execute the system parameters change
	SystemParametersInfo(SPI_SETACTIVEWINDOWTRACKING, 0, (PVOID)TurnOn, SendUpdate);
	SystemParametersInfo(SPI_SETACTIVEWNDTRKZORDER, 0, (PVOID)RaiseWindow, SendUpdate);

	return 0;
}
Telnet Workaround
I hate not having root ^_^;

I recently had to do some work on a system where I was not allowed SSH/telnet access. Trying to do work strictly across ftp can take hours, especially when you have thousands of files to transfer, so I came up with a quick solution in PHP for simple command line access.

<form method=post action="exec.php">
	<table style="height:100%;width:100%">
		<tr><td height="10%">
			<textarea name=MyAction style="height:100%;width:100%"><?=(isset($_REQUEST['MyAction']) ? $_REQUEST['MyAction'] : '')?></textarea>
		</td></tr><tr><td height="90%">
			<textarea name=Output style="height:100%;width:100%"><?
if(isset($_REQUEST['MyAction']))
{
	$MyAction=preg_split("/\\r?\\n/", $_REQUEST['MyAction']);
	foreach($MyAction as $Action)
	{
		exec($Action, $MyOutput);
		print htmlentities(implode("\n", $MyOutput), ENT_QUOTES, 'ISO8859-1')."\n-----------------------\n";
	}
}
			?></textarea>
		</td></tr><tr><td height=1>
			<input type=submit>
		</td></tr>
	</table>
</form>

This code allows you to enter commands on separate lines in the top box, and after the form is submitted, the output of each command is entered into the bottom box separated by dashed lines.

Note that between each command the environment is reset, so commands like "cd" which change the current directory are not useable :-(. You must also change the line 'action="exec.php"' to reflect the name you give the file.


A more suitable solution would be possible through AJAX and a program that redirected console output from a persistent session, but this was just meant as quick fix :-).

Winamp Problems
Adding features also potentially adds bugs

I’ve been getting tired of Winamp (v5.112 specifically) randomly crashing for no specific reason on my new home music server. I tried upgrading to the latest version, but it also crashes. The best solution at this point was to just downgrade to an old version, like v2.8, which I have been using for more years than I can remember on most all of my machines.

Old versions of Winamp have a pretty low footprint and are great at what they are supposed to do: playing music. It’s gotten bloated nowadays though, which often happens when great software has hit its peak and has nowhere to go. This is one reason to just keep using versions of software that you are used to and have no problems with; like using XP instead of Windows Vista, or some older versions of Microsoft Office, pre-2003. Newer does not always mean better!

Legacy Geocities Login Problems
Big corporations refusing to acknowledge that they have problems, let alone fix them

I have a friend that has a legacy Geocities (the MySpace of the 1990s for free web hosting) account (one from who knows how long before GeoCities was bought by Yahoo). The control panel (at geocities.yahoo.com/gcp) won’t allow logging in to his legacy account because it gets stuck in an infinite redirect loop, redirecting right back to itself.

My guess is that the problem has to do with cookies (on Geocities’ servers’ side, not the client’s!), but I didn’t get that far, as I found a roundabout solution to his problem. After logging in, the user can go to http://geocities.yahoo.com/filemanager or http://geocities.yahoo.com/v/fm.html to manage their files. While the rest of the control panel is still not accessible, this was enough of a solution for him.

Reports are that Yahoo refuses to respond about this problem with their servers.

Microsoft IIS Bug
Bad Programming: Only using file extensions as an indicator

According to a Microsoft KB article titled “Virtual directory names with executable extensions are not used correctly”, using a virtual folder ending in an executable extension (like .com, .exe, .dll, or .sh) under the web server for IIS [Microsoft’s Internet Information Services server suite] makes the contents inside the folder unviewable. This behavior is kind of silly, as you’d assume a web server would always check whether something is a file or a folder first.

Unfortunately, this doesn’t apply just to virtual folders, but to all folders under an IIS web server, as I found out a few years ago when I backed up a site that I knew would be taken down very soon (ironically, because the company [SysInternals] was being taken over by Microsoft) and mirrored it on my Home Server, which runs IIS.

The solution I used was to add a character (in my case an underscore “_”) to the end of all the directory names ending in “.com”, and then to do a global regular expression replace through all files in the mirror to update any occurrences of these directories.


Search For: (DOMAIN1|DOMAIN2|DOMAIN3)([\\/])
Replace With: $1_$2
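For anyone wanting to script the whole thing, a rough equivalent with standard Unix tools might look like the sketch below (the mirror path is a placeholder, and www.sysinternals.com stands in for whatever domains your mirror uses):

MIRROR=/var/www/mirror   #placeholder path to the mirrored site

#Append "_" to every mirrored directory ending in ".com" (deepest first,
#so renaming a parent doesn't invalidate its children's paths)
find "$MIRROR" -depth -type d -name '*.com' -exec sh -c 'mv "$1" "$1_"' _ {} \;

#Rewrite references to those directories inside the mirrored files
find "$MIRROR" -type f -exec sed -i -E 's|(www\.sysinternals\.com)([\\/])|\1_\2|g' {} +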

I still plan on getting up some site mirrors of places that no longer exist and such for the miscellaneous section one of these days...

Windows Hosts File
When DNS decides to be finicky

Another of my favorite XP hacks is modifying domain addresses through XP’s Hosts file. You can remap where a domain points on your local computer by adding an IP address followed by a domain in the “c:\windows\system32\drivers\etc\hosts” file.
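For example (the addresses and domains here are placeholders), each line is just an IP address, whitespace, then one or more names:

#Point a domain at a local test server before its DNS change has propagated
192.168.1.50    www.example.com example.com
#Or null-route a domain entirely
127.0.0.1       ads.example.net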

Domain names are looked up and cached on your computer at the OS level (with the hosts file consulted before DNS), so there are simple hacks like this for other OSs too.

I often utilize this solution as a server admin who controls a lot of domains (over 100, and I control most of them at the registrar level too ^_^). The DNS system across the web is incredibly finicky and prone to problems if not perfectly configured, so this hack is a wonderful time saver and diagnostic tool until records resolve and work properly.

Computers are Evil
Setting up new computers can be quite the hassle

The new home server for the new entertainment center I recently set up has made itself out to be quite a nuisance. I am unsure as to whether I will keep using it or not, but fortunately, I have not yet taken down my old home server, as I wanted to do some break in testing on the new one first.

Setting up new computers is almost always a pain in the ass, what with installing and configuring all the software from scratch (which always includes a format and new OS), and making sure all the hardware works properly and finding drivers for it (sometimes when you don’t even have the proper information on what that hardware is). But sometimes, computers can go above and beyond the normal setup nuances and annoyances and be downright evil. I have long proclaimed to people that computers have personalities and minds of their own and they decide when and where they want to be accommodating or uncooperative. Besides all the normal computer setup problems (including not knowing what the hardware was and having to figure that out), this one also had a few more doozies.

The first big problem started with the fact that I wanted to use this computer for video output, and it does not have an AGP slot. As I contemplated in the previous post on this topic, I went ahead and bought a PCI GeForce 5200 for $27.79 including shipping. The card did not fit properly in the new case, so I had to unscrew a few things, which were fortunately designed for just that reason. Then the big problem came up: video outputted from the s-video port on the card showed up on the TV at a 50% over zoom, so I couldn’t see half the screen. I couldn’t test the monitor output port either, because it is DVI and I have no DVI monitors, alas. After 2 or 3 hours of tinkering with it and throwing everything plus the kitchen sink at the problem, including trying a different s-video cable, I finally stumbled on the solution and got it working, yay. That is... until after I rebooted and it wasn’t working again x.x;. Another 20 or so minutes of tinkering got it fixed again, and I was able to quickly home in on a procedure to fix the problem on the next reboot, optimizing it with each successive reboot over the next few days. The procedure is as follows: (The TV over s-video starts as the primary monitor, and I have a second monitor connected to the VGA port of the onboard graphics card)

  • Open “Display Properties” [Right click Desktop > Properties] > Settings
  • Attach second monitor so I can see what I’m doing
  • Open NVidia Control Panel
  • Rotate screen to 90 degrees. It only wants to rotate the screen at 1024x768, which is too high a resolution for the TV, so it kicks the resolution down to 640x480 while rotating
  • Keep setting the screen to no rotation (0 degrees) until the scaling is correct [usually twice]. The NVidia control panel doesn’t want to allow going back to normal rotation now due to the 1024x768 required resolution thing, and will keep the setting set as 90 degrees, so the process can easily be repeated until it works.
  • Now that the screen is at the correct scale (at 640x480), all that’s left is to get the rotation back to normal. To do this, immediately after accepting the rotation process in the NVidia Control Panel, it has to be closed out (alt+f4) so that it saves the rotation setting at 0 degrees but doesn’t try to set it back after all the resolution changes.
  • Raise the resolution back to 800x600
  • Detach secondary monitor now that it is no longer needed

The screen still unfortunately has about 100-200 “pixels” (monitors don’t have pixels, technically) on the top and bottom of the screen that are unused, but eh, NBD. At least this graphics card lets me properly pan and scan (zoom/scale and move) the s-video output around, unlike my GeForce4 Ti 4600! The next problem with the video card is that some video outputted from it is just too slow. Though most content is watchable, the choppiness makes the rest unbearable. The problem might just be that the PCI bus doesn’t have the required throughput, which is why most video cards use AGP (or nowadays PCI Express).

There are two more problems with it, one a possible deal killer, the other rather insignificant. The unimportant problem is that XP refuses to install updates; I believe this to be a problem with SP3. The final problem is that the computer seems to randomly completely freeze up every now and then for no particular reason, requiring a reboot. This has happened 2 or 3 times so far, so I’m waiting to see how often it happens, if any more. I know it’s not overheating, as I currently have the case open; and I see no blown capacitors... hmmmm...



<frustration>Computers!</frustration>
Video Game Nostalgia
And Metal Gear Solid Problems

So a comic [Gunnerkrigg Court] that I enjoy and read daily [updates MWF] recently referenced Metal Gear Solid, which finally made me decide to play through the series.

For reference, whenever I bring up games from here on out, it’s usually to talk about encountered problems, which I will usually provide fixes for, or technical aspects of the game. I’m not qualified, or funny enough, to want to review games; and that is not the purpose of my postings here.


The first thing I wanted to mention is a fix for a graphical problem. As the game is rather “old” (released in 2000 for Windows), it can be incompatible with modern systems. One of the options it uses in hardware mode is 8-bit textures, which are no longer supported, though for the life of me I can’t see why a workaround couldn’t be made in the video card drivers for this problem. Because of this, the game only allows you to run in software mode. After a lot of digging and searching, in which every place said the same thing (it’s not fixable), I finally found a hacked executable [Metal Gear Solid v1.0 [ENGLISH] No-CD/WinXP+Vista+GeForce+ATi Fixed EXE] made by a kind soul to fix the problem.

Another problem which really frustrated me was a “puzzle” in the game referring to looking for information on the “back of the CD case”. I had just received an “optical disk” in the game; however, it appeared to be a floppy disk, and no matter what I did I couldn’t find the required information with the item. I figured it must have been a bug and finally gave in and looked it up online. It turns out they meant that the actual CD case the game came in had a number [a radio frequency] written on the back of it - “140.15”. I can only assume they did this as a means of “copy protection” to frustrate anyone who didn’t actually buy the game. Unfortunately, I acquired the game without a CD case, so I was frustrated by this myself.


This kind of system reminded me of the very old days of gaming in which some games asked you to input a certain word from a certain paragraph on a certain page of the manual to enter the game, or asked questions with answers found in the manual. One of the games I had that did the former was Teenage Mutant Ninja Turtles [1989] for DOS. I have fond memories of playing this and a (monochrome? [green and black :-) ] IIRC?) version of Muppet Adventure: Chaos at the Carnival [1989] (Dear Thor! heh) [also a DOS game] as they were, IIRC, two of my first video games, though I got many others around that time. Both games had later released NES ports too.

My real favorite childhood games, however, which are still both cult classics, were Doom, which got me into the design aspect of making games, and most importantly, ZZT, which is what really got me started on programming in 1991 at the age of 5. I still have the original floppy disks for ZZT too :-). ZZT was more scripting than programming though, and I didn’t start real programming until I got into QBasic in 1993. I might release some of my creations for these games one of these days for nostalgia’s sake ^_^;. I also remember thoroughly enjoying Star Trek: 25th Anniversary for DOS in 1992 :-). I was a nerd even as a kid! ^_^; This game also had copy protection I had forgotten about. As Wikipedia tells:

The game had a copy-protection system in that the player was forced to consult the game’s manual in order to find out which star system they were supposed to warp to on the navigation map. Warping to the wrong system would send them into either the Klingon or Romulan neutral zones, and initiate an extremely difficult battle that often ends with the destruction of the Enterprise.


[Edit 8/16/2008 @ 10:05pm] Pictures of some of this stuff can be found in tomorrow’s post, “Ancient Software”.
Multiple Windows XP Sessions
Making XP act like Windows Server

All of the Windows OS lines from XP through Windows Server 2003 (or 2005 or 2007?) are, to my knowledge and IMO, basically the exact same thing, with just some minor tweaks and extra software for the more expensive versions. My version of XP Professional even comes with IIS (Internet Information Services - Microsoft’s web/ftp/mail server suite). One of my favorite XP hacks adds on a desperately needed piece of functionality found only in the Windows Server editions: allowing multiple user sessions on a machine at once. This basically means allowing multiple people to log onto a machine at the same time through Remote Desktop (Microsoft’s built-in remote access client, similar to VNC). I find the most useful function of this by far is the “Remote Control” feature, which allows a second logged-in user to see exactly what is on the screen of another session and, if permissions are given, to take control of it. This is perfect for those people you often have to troubleshoot computer problems for, eliminating the need for a trip to their location or 3rd party software to view their computer.

This hack requires a few registry modifications, a group policy modification, and a DLL replacement. The DLL replacement was provided by Microsoft in early versions of XP SP2, when they were tinkering with allowing this feature in XP. I found the information for all this here a number of years ago, and it has proven itself invaluable since. Unfortunately, this does not work on XP Home edition, just XP Professional; I tried adapting it once and wasted a lot of time :-\. The following is the text from where I got this hack.

Concurrent Remote Desktop Sessions in Windows XP SP2

I mentioned before that Windows XP does not allow concurrent sessions for its Remote Desktop feature. What this means is that if a user is logged on at the local console, a remote user has to kick him off (and ironically, this can be done even without his permission) before starting work on the box. This is irritating and removes much of the productivity that Remote Desktop brings to Windows. Read on to learn how to remove that limitation in Windows XP SP2.

A much-touted feature in SP2 (Service Pack 2) was the ability to do just this: have a user logged on locally while another connects to the terminal remotely. Microsoft, however, removed the feature in the final build. The reason probably is that the EULA (End User License Agreement) allows only a single user to use a computer at a time. This is (IMHO) a silly reason to curtail Remote Desktop’s functionality, so we’ll have a workaround.

Microsoft did try out the feature in earlier builds of Service Pack 2 and it is this that we’re going to exploit here. We’re going to replace termsrv.dll (The Terminal Server) with one from an earlier build (2055).

To get Concurrent Sessions in Remote Desktop working, follow the steps below exactly:

  1. Download the termsrv.zip file and extract it somewhere.
  2. Reboot into Safe Mode. This is necessary to remove Windows File Protection. [Dakusan: I use unlocker for this, which I install on all my machines as it always proves itself useful, and then usually have to do a “shutdown -a” from command line when XP notices the DLL changed.]
  3. Copy the termsrv.dll in the zip to %windir%\System32 and %windir%\ServicePackFiles\i386. If the second folder doesn’t exist, don’t copy it there. Delete termsrv.dll from the dllcache folder: %windir%\system32\dllcache. (These copy/delete steps are sketched as commands below the list.)
  4. Merge the contents of Concurrent Sessions SP2.reg file into the registry. [Dakusan: Just run the .reg file and tell XP to allow the action.]
  5. Make sure Fast User Switching is turned on. Go Control Panel -> User Accounts -> Change the way users log on or off and turn on Fast User Switching.
  6. Open up the Group Policy Editor: Start Menu > Run > ‘gpedit.msc’. Navigate to Computer Configuration > Administrative Templates > Windows Components > Terminal Services. Enable ‘Limit Number of Connections’ and set the number of connections to 3 (or more). This enables you to have more than one person remotely logged on.
  7. Now reboot back into normal Windows and try out whether Concurrent Sessions in Remote Desktop works. It should!
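For reference, here is step 3 as a quick command-line sketch (assuming the zip was extracted to C:\termsrv - adjust the path to wherever you put it):

copy /y C:\termsrv\termsrv.dll %windir%\System32\
if exist %windir%\ServicePackFiles\i386 copy /y C:\termsrv\termsrv.dll %windir%\ServicePackFiles\i386\
del %windir%\System32\dllcache\termsrv.dll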

If anything goes wrong, the termsrv_sp2.dll is the original file you replaced. Just rename it to termsrv.dll, reboot into safe mode and copy it back.

The termsrv.dl_ file provided in the zip is for you slipstreamers out there. Just swap it in for the corresponding file in the Windows installation disks.



I have included an old copy of the above web page, from when I first started distributing this, with the information in the hack’s zip file I provide.

If you want to Remote Control another session, I think the user needs to be part of the “Administrators” group, and don’t forget to add any users that you want to be able to remotely log on to the “Remote Desktop Users” group.

This is all actually part of an “Enhanced Windows XP Install” document I made years ago that I never ended up releasing because I hadn’t finished cleaning it up. :-\ One of these days I’ll get it up here. Some of the information pertaining to this hack from that document is as follows:

  • Any computer techy out there who has tried to troubleshoot over the phone knows how much of a problem/pain in the anatomy it is, and for this reason I install this hack, which makes it painless to automatically connect to a user’s computer through remote desktop, which can then be remotely viewed or controlled via their displayed console session.
  • I often use this hack myself when I am running computers without keyboards/mice, like my entertainment computer. For a permanent solution for something like this though, I recommend a KM (Keyboard/Mouse) solution like synergy, which allows manipulating one computer via a keyboard and mouse on another.
  • Your user account password must also not be blank. Blank passwords often cause problems with remote services.
  • The security risk of this is that a port is opened for someone to connect to, like telnet or SSH on Unix, which is a minimal risk unless someone has your username+password.
  • You have to have a second username to log into, which can be done under Control Panel > User Accounts, or Control panel > Administrative Tools > Computer Management > System Tools > Local users and Groups.
  • If you want the second user to be able to log in remotely, make sure to add them under Control Panel > System > Remote > Select Remote users, and also check “allow users to connect remotely to this computer”.
  • You also need to know the IP address of the user’s computer you want to connect to, and unfortunately, they are not always static. If you run into this, you may want to use a DDNS service like mine.
  • You may also run into the unfortunate circumstance of NAT/Firewalled networks, which is beyond the scope of this document. Long story short, you need to open up port 3389 in the firewall and forward it through the router/NAT (this is the port for both remote desktop and remote assistance).
  • You may also want to change the port number to something else so a port scanner will not pick it up. To connect on a different port, in Remote Desktop on the client computer you connect to COMPUTERIP:PORT, like www.mycomputer.com:5050.
    • Registry Key: HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber (a DWORD) - Set it to your new port number (see the sketch after this list)
    • This requires a reboot to work.
    • Make sure you don’t pick a port that’s already used on the computer, and you probably shouldn’t make it a standard port either [21 (ftp), 25 (smtp), 80 (http), etc]
  • You can also log into their current console session by going to the task manager (ctrl+shift+esc in full screen, or right click taskbar and go to “task manager”) > Users > Right click username > Remote control
    • This will ask the user at the computer if they want to accept this. To have it NOT ask them, do the following:
      • Start > Run > gpedit.msc [enter] > computer configuration > administrative templates > windows components > terminal services
      • Double click the option “Sets rules for remote control of terminal services user sessions”
      • Enable it, and for the “Options” setting, set “Full control without users permission”
  • If the ability for you to access a client’s computer without their immediate permission or knowledge is too “dangerous” for their taste, you may suggest/use Remote Assistance, which is more troublesome, but much more “secure” sounding.
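As referenced in the port-change note above, here is a minimal sketch of that registry change from the command line (5050 is just an example port number):

reg add "HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 5050 /f

Remember, per the list above, a reboot is required before the new port takes effect.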
Windows 98 for VMWare

I recently had to install Windows 98 through VMWare for some quick tests, and there were a few minor problems after the install that needed to be resolved. I thought I’d share them here if anyone ever needed them.

  • First, VMWare Tools needs to be installed to get video and some other drivers working.
  • Second, Windows 98 was really from before the time when network cards were used to connect to the internet, as broadband technology was rare and modems were the commonplace solution, so it doesn’t make this process easy. To connect to the Internet through your VMWare bridge or NAT (to use IE - FireFox [newer versions of it, at least?] doesn’t work on Windows 98), the following must be done through the MSN Connection Wizard (this is mostly from memory).
    • Open "Connect to the internet" from the desktop
    • Click Next
    • Select Modem Manually [next]
    • Select any of the normal modems in the list on the right, like a generic 56,000 modem [OK]
    • Click Next
    • Click lan/manual
    • Connect using my local area network (LAN) [next]
    • Click Next
    • "No" to email [next]
    • Click Finish
  • Lastly, the default sound driver does not work, so you need to do the following [Information found here by googling]
    • Install Creative Labs’ drivers for the PCI sound card
    • Add the following lines to your VMWare config (vmx) file
      • pciSound.DAC1InterruptsPerSec = 0
      • pciSound.DAC2InterruptsPerSec = 0
    • Optionally, for a better midi waveset, download Creative Labs’ 8MB GM/GS Waveset [version 5] and select it for use in the device’s properties by:
      • Right click my computer
      • Select properties
      • Select the Device Manager tab
      • Find the area for sound and go to “SB PCI(WDM)”
      • Go to the second tab
      • Change the Midi Synthesizer Waveset to the downloaded eapci8m.ecw
Video driver woes
TV output issues

So I’ve recently switched over to an old GeForce4 Ti 4600 for TV output on my home server/TV station. Unfortunately, my TV needs output resizing (underscan) due to being dropped a while back during transport from a Halo game, which left the CRT output misaligned.

If I recall correctly, old NVidia drivers allowed output resizing, but the latest drivers available for my card (which are rather old themselves, as NVidia stops supporting old cards with the newer driver sets that have more options) only allow repositioning of the output signal, so part of the screen is cut off.

The final solution was to tell VLC media player to output videos at a 400:318 aspect ratio when in full screen, forcing a smaller width that I could then reposition to properly fit the screen. A rather inelegant solution, but it works. One of these days I’ll get myself a new TV :-).
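For the curious, the same override can also be passed on the command line instead of being set in the preferences (a sketch; the filename is a placeholder, and I’m assuming the --aspect-ratio and --fullscreen switches present in VLC builds of that era):

vlc --aspect-ratio 400:318 --fullscreen MyVideo.avi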