Painless migration from PHP MySQL to MySQLi

The PHP MySQL extension is deprecated in favor of the MySQLi extension as of PHP 5.5, and is removed as of PHP 7.0. MySQLi was first referenced in PHP 5.0.0 beta 4 on 2004-02-12, with the first stable release in PHP 5.0.0 on 2004-07-13[1]. Before that, the PHP MySQL extension was by far the most popular way of interacting with MySQL from PHP, and it remained so for a very long time after. This website was opened only 2 years after that first stable release!


With the deprecation, problems have popped up on some of the websites I help host, many of which are very, very old. I needed a quick and dirty way to monkey-patch these websites to use MySQLi without rewriting all of their code. The obvious answer is to overwrite the mysql_* functions with wrappers around MySQLi. The generally known way of doing this is with the Advanced PHP Debugger (APD) extension; however, that extension has a lot of requirements that are not appropriate for a production web server. Fortunately, another extension I recently learned of, runkit, offers the renaming functionality, and it was a super simple install for me.

  1. From the command line, run “pecl install runkit”
  2. Add “extension=runkit.so” and “runkit.internal_override=On” to the php.ini

Besides the ability to override these functions with wrappers, I also needed a way to make sure this file was always loaded before all other PHP files. The simple solution for that is adding “auto_prepend_file=/PATH/TO/FILE” to the “.user.ini” in the user’s root web directory.
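For example, a minimal sketch of that setup (the shim filename and account paths here are placeholders; adjust them to the account in question):

echo 'auto_prepend_file=/home/USERNAME/mysql_to_mysqli.php' >> /home/USERNAME/public_html/.user.ini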

The code for this script is as follows. It only covers a limited set of the MySQL functions, including some very esoteric ones that the website in question used. It is not a foolproof script, but it gets the job done.


<?php
//Override the MySQL functions
foreach(Array(
    'connect', 'error', 'fetch_array', 'fetch_row', 'insert_id', 'num_fields', 'num_rows',
    'query', 'select_db', 'field_len', 'field_name', 'field_type', 'list_dbs', 'list_fields',
    'list_tables', 'tablename'
) as $FuncName)
    runkit_function_redefine("mysql_$FuncName", '',
        'return call_user_func_array("mysql_'.$FuncName.'_OVERRIDE", func_get_args());');

//If a connection is not explicitly passed to a mysql_ function, use the last created connection
global $SQLLink; //The remembered SQL Link
function GetConn($PassedConn)
{
    if(isset($PassedConn))
        return $PassedConn;
    global $SQLLink;
    return $SQLLink;
}

//Override functions
function mysql_connect_OVERRIDE($Host, $Username, $Password) {
    global $SQLLink;
    return $SQLLink=mysqli_connect($Host, $Username, $Password);
}
function mysql_error_OVERRIDE($SQLConn=NULL) {
    return mysqli_error(GetConn($SQLConn));
}
function mysql_fetch_array_OVERRIDE($Result, $ResultType=MYSQL_BOTH) {
    return mysqli_fetch_array($Result, $ResultType);
}
function mysql_fetch_row_OVERRIDE($Result) {
    return mysqli_fetch_row($Result);
}
function mysql_insert_id_OVERRIDE($SQLConn=NULL) {
    return mysqli_insert_id(GetConn($SQLConn));
}
function mysql_num_fields_OVERRIDE($Result) {
    return mysqli_num_fields($Result);
}
function mysql_num_rows_OVERRIDE($Result) {
    return mysqli_num_rows($Result);
}
function mysql_query_OVERRIDE($Query, $SQLConn=NULL) {
    return mysqli_query(GetConn($SQLConn), $Query);
}
function mysql_select_db_OVERRIDE($DBName, $SQLConn=NULL) {
    return mysqli_select_db(GetConn($SQLConn), $DBName);
}
function mysql_field_len_OVERRIDE($Result, $Offset) {
    $Fields=$Result->fetch_fields();
    return $Fields[$Offset]->length;
}
function mysql_field_name_OVERRIDE($Result, $Offset) {
    $Fields=$Result->fetch_fields();
    return $Fields[$Offset]->name;
}
function mysql_field_type_OVERRIDE($Result, $Offset) {
    $Fields=$Result->fetch_fields();
    return $Fields[$Offset]->type;
}
function mysql_list_dbs_OVERRIDE($SQLConn=NULL) {
    $Result=mysql_query('SHOW DATABASES', GetConn($SQLConn));
    $Tables=Array();
    while($Row=mysqli_fetch_assoc($Result))
        $Tables[]=$Row['Database'];
    return $Tables;
}
function mysql_list_fields_OVERRIDE($DBName, $TableName, $SQLConn=NULL) {
    $SQLConn=GetConn($SQLConn);
    $CurDB=mysql_fetch_array(mysql_query('SELECT Database()', $SQLConn));
    $CurDB=$CurDB[0];
    mysql_select_db($DBName, $SQLConn);
    $Result=mysql_query("SHOW COLUMNS FROM $TableName", $SQLConn);
    mysql_select_db($CurDB, $SQLConn);
    if(!$Result) {
        print 'Could not run query: '.mysql_error($SQLConn);
        return Array();
    }
    $Fields=Array();
    while($Row=mysqli_fetch_assoc($Result))
        $Fields[]=$Row['Field'];
    return $Fields;
}
function mysql_list_tables_OVERRIDE($DBName, $SQLConn=NULL) {
    $SQLConn=GetConn($SQLConn);
    $CurDB=mysql_fetch_array(mysql_query('SELECT Database()', $SQLConn));
    $CurDB=$CurDB[0];
    mysql_select_db($DBName, $SQLConn);
    $Result=mysql_query("SHOW TABLES", $SQLConn);
    mysql_select_db($CurDB, $SQLConn);
    if(!$Result) {
        print 'Could not run query: '.mysql_error($SQLConn);
        return Array();
    }
    $Tables=Array();
    while($Row=mysql_fetch_row($Result))
        $Tables[]=$Row[0];
    return $Tables;
}
function mysql_tablename_OVERRIDE($Result) {
    $Fields=$Result->fetch_fields();
    return $Fields[0]->table;
}

And here is some test code to confirm functionality:
global $MyConn, $TEST_Table;
$TEST_Server='localhost';
$TEST_UserName='...';
$TEST_Password='...';
$TEST_DB='...';
$TEST_Table='...';
function GetResult() {
    global $MyConn, $TEST_Table;
    return mysql_query('SELECT * FROM '.$TEST_Table.' LIMIT 1', $MyConn);
}
var_dump($MyConn=mysql_connect($TEST_Server, $TEST_UserName, $TEST_Password));
//Set $MyConn to NULL here if you want to test global $SQLLink functionality
var_dump(mysql_select_db($TEST_DB, $MyConn));
var_dump(mysql_query('SELECT * FROM INVALIDTABLE LIMIT 1', $MyConn));
var_dump(mysql_error($MyConn));
var_dump($Result=GetResult());
var_dump(mysql_fetch_array($Result));
$Result=GetResult(); var_dump(mysql_fetch_row($Result));
$Result=GetResult(); var_dump(mysql_num_fields($Result));
var_dump(mysql_num_rows($Result));
var_dump(mysql_field_len($Result, 0));
var_dump(mysql_field_name($Result, 0));
var_dump(mysql_field_type($Result, 0));
var_dump(mysql_tablename($Result));
var_dump(mysql_list_dbs($MyConn));
var_dump(mysql_list_fields($TEST_DB, $TEST_Table, $MyConn));
var_dump(mysql_list_tables($TEST_DB, $MyConn));
mysql_query('CREATE TEMPORARY TABLE mysqltest (i int auto_increment, primary key (i))', $MyConn);
mysql_query('INSERT INTO mysqltest VALUES ()', $MyConn);
mysql_query('INSERT INTO mysqltest VALUES ()', $MyConn);
var_dump(mysql_insert_id($MyConn));
mysql_query('DROP TEMPORARY TABLE mysqltest', $MyConn);
Backing up just the user settings in cPanel

One of the companies I work for recently moved one of our cPanel servers to a new colocation facility, still running cPanel. We decided to use a new backup solution called r1soft, which so far has been working spectacularly. I’d love to use it for my personal computers, but the licenses, which are geared towards enterprise business, are way too costly.

However, since r1soft only backs up files (at the incremental block level, yay), you can’t use it to restore a cPanel account; it can only restore things like the user’s home directory and SQL databases. Because of this, when we needed to restore an entire account today and found out there is no easy or quick way to do it, we were up a creek. The obvious future solution would be to use cPanel’s backup (or legacy backup) system, but unfortunately, you can’t easily set it to skip the user’s databases and home directory, which can be very large and are already taken care of by r1soft. I ended up adding the following script, run nightly via cron, to back up user account settings.

It saves each user’s settings, uncompressed, in their own directory named cpmove-USERNAME under the backup path. Doing it this way keeps r1soft’s incremental backups from having much extra work when anything changes. Make sure to change line 3 of the following script to the path where you want backups to go.

#!/bin/bash
#Create and move to backup directory
BACKUPDIR=/backup/userbackup
mkdir -p $BACKUPDIR #Make sure the directory exists
cd $BACKUPDIR

#Remove old backups
rm -rf cpmove-*

#Loop over accounts
for USER in `/usr/sbin/whmapi1 listaccts | grep -oP '(?<=user: )\w+$' | sort -u`; do
  #Backup the account
  /scripts/pkgacct --nocompress --skipbwdata --skiphomedir --skiplogs --skipmysql --skipmailman $USER ./

  #Extract from and remove the tar container file
  tar -xvf cpmove-$USER.tar
  rm -f cpmove-$USER.tar

  #Save MySQL user settings
  mysqldump --compact -fnt -w "User LIKE '$USER""_%'" mysql user db tables_priv columns_priv procs_priv proxies_priv \
  | perl -pe "s~('|NULL)\),\('~\1),\n('~ig" \
  > cpmove-$USER/mysql-users.sql
done;

This script skips a few backup items that should be noted. Mailman, logs, the home directory, and bandwidth data should all be easy 1:1 restores straight from r1soft; I excluded them because they can take up a lot of room, which we want r1soft to handle. The same goes for MySQL, except that your MySQL users are not backed up with your account, which is why I added the final section.

Do note that the line starting with “| perl” in the final section is optional; it is just there to put each insert row on its own line. One very minor warning: it would also match cases where the last field in MySQL’s user table ends in “NULL),​(”. That would only happen if someone who knew about this script were trying to be malicious, and even then, it couldn’t harm anything.

Bonus note: to restore a MySQL database that does not use a shared tablespace file (as InnoDB does by default), you can actually stop the MySQL server, copy the binary database files over, and start the server back up.
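A rough sketch of that idea (the service name and paths are assumptions, and it only applies to per-table storage engines such as MyISAM, not to InnoDB tables living in a shared ibdata file):

service mysql stop
cp -a /PATH/TO/RESTORED/DBNAME /var/lib/mysql/DBNAME
chown -R mysql:mysql /var/lib/mysql/DBNAME
service mysql start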

Blacklisting DNS Server on Amazon EC2

Amazon EC2 is a great resource for cheap virtual servers to do simple things, like DNS or (low-bandwidth) VPNs. I had the need this morning to set up a DNS server for a company that needed to blacklist a list of domains. The simplest way to do this is to edit every computer’s hosts file, but that method leaves a lot to be desired, namely blocking entire domains (as opposed to single subdomains) and deploying changes. Centralizing the blacklist in a single place makes changes immediate and, in the end, faster.

The following are the steps I used to set this up on an EC2 server. All command line instructions are followed by a single command you can run to execute the step. There is a full script below, at the end of the post, containing all steps from when you first login to SSH ("Login to root") to the end.


I am not going to go into the details of setting up an EC2 instance, as that information can be found elsewhere. I will also be skipping over some of the more obvious steps. Just create a default EC2 instance with the “Amazon Linux AMI”, and I will list all the changes that need to be made beyond that.

  • Creating the instance
    • For the first year, for the instance type, you might as well use a t2.micro, as it is free. After that, a t2.nano (which is a new lower level) currently at $56.94/year ($0.0065/Hour), should be fine.
    • After you select your instance type, click “Review and Launch” to launch the instance with all of the defaults.
    • After the confirmation screen, it will ask you to create a key pair. You can see other tutorials about this and how it enables you to log into your instance.
  • Edit the security group
    • Next, you need to edit the security group for your instance to allow incoming connections.
    • Go to “Instances” under the “Instances” group on the left menu, and click your instance.
    • In the bottom of the window, in the “Descriptions” tab, click the link next to “Security Groups”, which will bring you to the proper group in the security groups tab.
    • Right click it and “Edit inbound Rules”.
    • Make sure it has the following rules with Source=Anywhere: ALL ICMP [For pinging], SSH, HTTP, DNS (UDP), DNS (TCP)
  • Assign a permanent IP to your instance
    • To do this, click the “Elastic IPs” under “Network & Security” in the left menu.
    • Click “Allocate New Address”.
    • After creating it, right click the new address, then “Associate Address”, and assign it to your new instance.
  • You should probably set this IP up as an A record somewhere. I will refer to this IP as dns.yourdomain.com from now on.
  • Login to root
    • SSH into your instance as the ec2-user via “ssh ec2-user@dns.yourdomain.com”. If on Windows, you can also use PuTTY.
    • Sudo into root via “sudo su”.
  • Allow root login
    • At this point, I recommend setting it up so you can directly root into the server. Warning: some people consider this a security risk.
    • Copy your key pair(s) to the root user via “cat /home/ec2-user/.ssh/authorized_keys > /root/.ssh/authorized_keys”
    • Set SSHD to permit root logins by changing the PermitRootLogin variable to “yes” in /etc/ssh/sshd_config. A quick command to do this is “perl -pi -e 's/^\s*#?\s*PermitRootLogin.*$/PermitRootLogin yes/igm' /etc/ssh/sshd_config”, and then reload the SSHD config with “service sshd reload”. Make sure to attempt to directly log into SSH as root before exiting your current session to make sure you haven’t locked yourself out.
  • Install apache (the web server), bind/named (the DNS server), and PHP (a scripting language)
    • yum -y install bind httpd php
  • Start and set services to run at boot
    • service httpd start; service named start; chkconfig httpd on; chkconfig named on;
  • Set the DNS server to be usable by other computers
    • Edit /etc/named.conf and change the 2 following lines to have the value “any”: “listen-on port 53” and “allow-query”
    • perl -pi -e 's/^(\s*(?:listen-on port 53|allow-query)\s*{).*$/$1 any; };/igm' /etc/named.conf; service named reload;
  • Point the DNS server to the blacklist files
    • This is done by adding “include "/var/named/blacklisted.conf";” to /etc/named.conf
    • echo -ne '\ninclude "/var/named/blacklisted.conf";' >> /etc/named.conf
  • Create the blacklist domain list file
    • touch /var/named/blacklisted.conf
  • Create the blacklist zone file
    • Put the following into /var/named/blacklisted.db . Make sure to change dns.yourdomain.com to your domain (or otherwise, “localhost”), and 1.1.1.1 to dns.yourdomain.com’s (your server’s) IP address. Make sure to keep all periods intact.
      $TTL 14400
      @       IN SOA dns.yourdomain.com. dns.yourdomain.com ( 2003052800  86400  300  604800  3600 )
      @       IN      NS   dns.yourdomain.com.
      @       IN      A    1.1.1.1
      *       IN      A    1.1.1.1
    • The first 2 lines tell the server the domains belong to it. The 3rd line sets the base blacklisted domain to your server’s IP. The 4th line sets all subdomains of the blacklisted domain to your server’s IP.
    • This can be done via (Update the first line with your values)
      YOURDOMAIN="dns.yourdomain.com"; YOURIP="1.1.1.1";
      echo -ne "\$TTL 14400\n@       IN SOA $YOURDOMAIN. $YOURDOMAIN ( 2003052800  86400  300  604800  3600 )\n@       IN      NS   $YOURDOMAIN.\n@       IN      A    $YOURIP\n*       IN      A    $YOURIP" > /var/named/blacklisted.db;
  • Fix the permissions on the blacklist files
    • chgrp named /var/named/blacklisted.*; chmod 660 /var/named/blacklisted.*;
  • Set the server’s domain resolution name servers
    • The server always needs to look at itself before other DNS servers. To do this, comment out everything in /etc/resolv.conf and add to it “nameserver localhost”. This is not the best solution. I’ll find something better later.
    • perl -pi -e 's/^(?!;)/;/gm' /etc/resolv.conf; echo -ne '\nnameserver localhost' >> /etc/resolv.conf
  • Run a test
    • At this point, it’s a good idea to make sure the DNS server is working as intended. So first, we’ll add an example domain to the DNS server.
    • Add the following to /var/named/blacklisted.conf and restart named to get the server going with example.com: “zone "example.com" { type master; file "blacklisted.db"; };”
    • echo 'zone "example.com" { type master; file "blacklisted.db"; };' >> /var/named/blacklisted.conf; service named reload;
    • Ping “test.example.com” and make sure its IP is your server’s IP
    • Set your computer’s DNS to your server’s IP in your computer’s network settings, ping “test.example.com” from your computer, and make sure the returned IP is your server’s IP. If it works, you can restore your computer’s DNS settings.
  • Have the server return a message when a blacklisted domain is accessed
    • Add your message to /var/www/html
    • echo 'Domain is blocked' > /var/www/html/index.html
    • Set all URL paths to show the message by adding the following to the /var/www/html/.htaccess file
      RewriteEngine on
      RewriteCond %{REQUEST_URI} !index.html
      RewriteCond %{REQUEST_URI} !AddRules/
      RewriteRule ^(.*)$ /index.html [L]
    • echo -ne 'RewriteEngine on\nRewriteCond %{REQUEST_URI} !index.html\nRewriteCond %{REQUEST_URI} !AddRules/\nRewriteRule ^(.*)$ /index.html [L]' > /var/www/html/.htaccess
    • Turn on AllowOverride in /etc/httpd/conf/httpd.conf for the document directory (/var/www/html/) via “perl -0777 -pi -e 's~(<Directory "/var/www/html">.*?\n\s*AllowOverride).*?\n~$1 All~s' /etc/httpd/conf/httpd.conf”
    • Restart the web server gracefully via “service httpd graceful”
  • Create a script that allows apache to refresh the name server’s settings
    • Create a script at /var/www/html/AddRules/restart_named with “/sbin/service named reload” and set it to executable
    • mkdir /var/www/html/AddRules; echo '/sbin/service named reload' > /var/www/html/AddRules/restart_named; chmod 755 /var/www/html/AddRules/restart_named
    • Allow the apache user to run the script as root by adding to /etc/sudoers “apache ALL=(root) NOPASSWD: /var/www/html/AddRules/restart_named” and “Defaults!/var/www/html/AddRules/restart_named !requiretty”
    • echo -e 'apache ALL=(root) NOPASSWD:/var/www/html/AddRules/restart_named\nDefaults!/var/www/html/AddRules/restart_named !requiretty' >> /etc/sudoers
  • Create a script that allows the user to add, remove, and list the blacklisted domains
    • Add the following to /var/www/html/AddRules/index.php (a one-line command is not given for this step; you can use “nano” to create the file)
      <?php
      //Get old domains
      $BlockedFile='/var/named/blacklisted.conf';
      $CurrentZones=Array();
      foreach(explode("\n", file_get_contents($BlockedFile)) as $Line)
              if(preg_match('/^zone "([\w\._-]+)"/', $Line, $Results))
                      $CurrentZones[]=$Results[1];
      
      //List domains
      if(isset($_REQUEST['List']))
              return print implode('<br>', $CurrentZones);
      
      //Get new domains
      if(!isset($_REQUEST['Domains']))
              return print 'Missing Domains';
      $Domains=$_REQUEST['Domains'];
      if(!preg_match('/^[\w\._-]+(,[\w\._-]+)*$/uD', $Domains))
              return print 'Invalid domains string';
      $Domains=explode(',', $Domains);
      
      //Remove domains
      if(isset($_REQUEST['Remove']))
      {
              $CurrentZones=array_flip($CurrentZones);
              foreach($Domains as $Domain)
                      unset($CurrentZones[$Domain]);
              $FinalDomainList=array_keys($CurrentZones);
      }
      else //Combine domains
              $FinalDomainList=array_unique(array_merge($Domains, $CurrentZones));
      
      //Output to the file
      $FinalDomainData=Array();
      foreach($FinalDomainList as $Domain)
              $FinalDomainData[]=
                      "zone \"$Domain\" { type master; file \"blacklisted.db\"; };";
      file_put_contents($BlockedFile, implode("\n", $FinalDomainData));
      
      //Reload named
      print `sudo /var/www/html/AddRules/restart_named`;
      ?>
    • Add the “apache” user to the “named” group so the script can update the list of domains in /var/named/blacklisted.conf via “usermod -a -G named apache; service httpd graceful;”
  • Run the domain update script
    • To add a domain (separate by commas): http://dns.yourdomain.com/AddRules/?Domains=domain1.com,domain2.com
    • To remove a domain (add “Remove&” after the “?”): http://dns.yourdomain.com/AddRules/?Remove&Domains=domain1.com,domain2.com
    • To list the domains: http://dns.yourdomain.com/AddRules/?List
  • Password protect the domain update script
    • Add to AddRules/.htaccess the following
      AuthType Basic
      AuthName "Admins Only"
      AuthUserFile "/var/www/html/AddRules/.htpasswd"
      require valid-user
    • echo -ne 'AuthType Basic\nAuthName "Admins Only"\nAuthUserFile "/var/www/html/AddRules/.htpasswd"\nrequire valid-user' > /var/www/html/AddRules/.htaccess
    • Warning: Putting the password file in an HTTP-accessible directory is a security risk. I just did this for the sake of organization.
    • Create the user and password via “htpasswd -c /var/www/html/AddRules/.htpasswd USERNAME” and then entering the password at the prompt. (A couple of curl examples for calling the protected script follow this list.)
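For reference, hypothetical curl calls to the now password-protected update script (the username, password, and domains below are placeholders) would look like this:

curl -u USERNAME:PASSWORD 'http://dns.yourdomain.com/AddRules/?Domains=domain1.com,domain2.com'
curl -u USERNAME:PASSWORD 'http://dns.yourdomain.com/AddRules/?Remove&Domains=domain1.com'
curl -u USERNAME:PASSWORD 'http://dns.yourdomain.com/AddRules/?List'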


[Edit on 2016-01-30 @ noon]

To permanently set “localhost” as the resolver DNS, add “DNS1=localhost” to “/etc/sysconfig/network-scripts/ifcfg-eth0”. I have not yet confirmed this edit.
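If you want to try it, the equivalent one-liner (in the style of the other commands in this post, and just as unverified) would be:

echo -ne '\nDNS1=localhost' >> /etc/sysconfig/network-scripts/ifcfg-eth0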

Security Issue

Soon after setting up this DNS server, it started getting hit by a DNS amplification attack. As the server is being used as a client’s DNS server, turning off recursion is not an option. The best solution is to limit who can query the name server via an access list (usually a specific subnet), but that would very often not be an option either. The solution I currently have in place, which I have not actually verified works, is to add a forced-forward rule so that external requests only go to the name server provided by Amazon. To do this, get that name server’s IP from /etc/resolv.conf (it should be commented out from an earlier step), then add the following to your named.conf in the “options” section.

	forwarders {
		DNS_SERVER_IP;
	};
	forward only;

After I added this rule, external DNS requests stopped going through completely. To fix that, I set “dnssec-validation” to “no” in named.conf. Don’t forget to restart the service once you have made your changes.
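For reference, a scripted version of this whole fix in the same style as the rest of this post; it assumes the Amazon-provided resolver is the first commented “nameserver” line in /etc/resolv.conf and that “dnssec-validation” still sits inside the options block of the default named.conf:

UPSTREAM_DNS=`grep '^;nameserver' /etc/resolv.conf | head -n1 | grep -oP '[\d.]+$'`;
perl -pi -e "s/^(\s*)dnssec-validation\s+.*\$/\${1}dnssec-validation no;\n\${1}forwarders { $UPSTREAM_DNS; };\n\${1}forward only;/" /etc/named.conf;
service named reload;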

[End of edit]

Full serverside script
Make sure to run this as root (login as root or sudo it)

Download the script here. Make sure to chmod it and run it with sudo: “chmod +x dnsblacklist_install.sh; sudo ./dnsblacklist_install.sh;”

#User defined variables
VARIABLES_SET=0; #Set this to 1 to allow the script to run
YOUR_DOMAIN="localhost";
YOUR_IP="1.1.1.1";
BLOCKED_ERROR_MESSAGE="Domain is blocked";
ADDRULES_USERNAME="YourUserName";
ADDRULES_PASSWORD="YourPassword";

#Confirm script is ready to run
if [ $VARIABLES_SET != 1 ]; then
    echo 'Variables need to be set in the script';
    exit 1;
fi
if [ `whoami` != 'root' ]; then
    echo 'Must be root to run script. When running the script, add "sudo" before it to' \
        'run as root';
    exit 1;
fi

#Allow root login
cat /home/ec2-user/.ssh/authorized_keys > /root/.ssh/authorized_keys;
perl -pi -e 's/^\s*#?\s*PermitRootLogin.*$/PermitRootLogin yes/igm' /etc/ssh/sshd_config;
service sshd reload;

#Install services
yum -y install bind httpd php;
chkconfig httpd on;
chkconfig named on;
service httpd start;
service named start;

#Set the DNS server to be usable by other computers
perl -pi -e 's/^(\s*(?:listen-on port 53|allow-query)\s*{).*$/$1 any; };/igm' \
    /etc/named.conf;
service named reload;

#Create/link the blacklist files
echo -ne '\ninclude "/var/named/blacklisted.conf";' >> /etc/named.conf;
touch /var/named/blacklisted.conf;

#Create the blacklist zone file
echo -ne "\$TTL 14400
@       IN SOA $YOUR_DOMAIN. $YOUR_DOMAIN ( 2003052800  86400  300  604800  3600 )
@       IN      NS   $YOUR_DOMAIN.
@       IN      A    $YOUR_IP
*       IN      A    $YOUR_IP" > /var/named/blacklisted.db;

#Fix the permissions on the blacklist files
chgrp named /var/named/blacklisted.*;
chmod 660 /var/named/blacklisted.*;

#Set the server’s domain resolution name servers
perl -pi -e 's/^(?!;)/;/gm' /etc/resolv.conf;
echo -ne '\nnameserver localhost' >> /etc/resolv.conf;

#Run a test
echo 'zone "example.com" { type master; file "blacklisted.db"; };' >> \
    /var/named/blacklisted.conf;
service named reload;
FOUND_IP=`dig -t A example.com | grep -ioP "^example\.com\..*?"'in\s+a\s+[\d\.:]+' | \
     grep -oP '[\d\.:]+$'`;
if [ "$YOUR_IP" == "$FOUND_IP" ]
then
  echo 'Success: Example domain matches your given IP' > /dev/stderr;
else
  echo 'Warning: Example domain does not match your given IP' > /dev/stderr;
fi

#Have the server return a message when a blacklisted domain is accessed
echo "$BLOCKED_ERROR_MESSAGE" > /var/www/html/index.html;
perl -0777 -pi -e 's~(<Directory "/var/www/html">.*?\n\s*AllowOverride).*?\n~$1 All~s' \
     /etc/httpd/conf/httpd.conf;
echo -n 'RewriteEngine on
RewriteCond %{REQUEST_URI} !index.html
RewriteCond %{REQUEST_URI} !AddRules/
RewriteRule ^(.*)$ /index.html [L]' > /var/www/html/.htaccess;
service httpd graceful;

#Create a script that allows apache to refresh the name server’s settings
mkdir /var/www/html/AddRules;
echo '/sbin/service named reload' > /var/www/html/AddRules/restart_named;
chmod 755 /var/www/html/AddRules/restart_named;

echo 'apache ALL=(root) NOPASSWD:/var/www/html/AddRules/restart_named
Defaults!/var/www/html/AddRules/restart_named !requiretty' >> /etc/sudoers;

#Create a script that allows the user to add, remove, and list the blacklisted domains
echo -n $'<?php
//Get old domains
$BlockedFile=\'/var/named/blacklisted.conf\';
$CurrentZones=Array();
foreach(explode("\\n", file_get_contents($BlockedFile)) as $Line)
        if(preg_match(\'/^zone "([\\w\\._-]+)"/\', $Line, $Results))
                $CurrentZones[]=$Results[1];

//List domains
if(isset($_REQUEST[\'List\']))
        return print implode(\'<br>\', $CurrentZones);

//Get new domains
if(!isset($_REQUEST[\'Domains\']))
        return print \'Missing Domains\';
$Domains=$_REQUEST[\'Domains\'];
if(!preg_match(\'/^[\\w\\._-]+(,[\\w\\._-]+)*$/uD\', $Domains))
        return print \'Invalid domains string\';
$Domains=explode(\',\', $Domains);

//Remove domains
if(isset($_REQUEST[\'Remove\']))
{
        $CurrentZones=array_flip($CurrentZones);
        foreach($Domains as $Domain)
                unset($CurrentZones[$Domain]);
        $FinalDomainList=array_keys($CurrentZones);
}
else //Combine domains
        $FinalDomainList=array_unique(array_merge($Domains, $CurrentZones));

//Output to the file
$FinalDomainData=Array();
foreach($FinalDomainList as $Domain)
    $FinalDomainData[]="zone \\"$Domain\\" { type master; file \\"blacklisted.db\\"; };";
file_put_contents($BlockedFile, implode("\\n", $FinalDomainData));

//Reload named
print `sudo /var/www/html/AddRules/restart_named`;
?>' > /var/www/html/AddRules/index.php;

usermod -a -G named apache;
service httpd graceful;

#Password protect the domain update script
echo -n 'AuthType Basic
AuthName "Admins Only"
AuthUserFile "/var/www/html/AddRules/.htpasswd"
require valid-user' > /var/www/html/AddRules/.htaccess;

htpasswd -bc /var/www/html/AddRules/.htpasswd "$ADDRULES_USERNAME" "$ADDRULES_PASSWORD";

echo 'Script complete';
Syncing Amazon EC2 Instances

In continuation of yesterday’s post, in which I showed how to create Amazon AMIs to keep your newly created EC2 instances up to date, today I will cover syncing already-live instances from the master to the slaves. All of the below takes place on the master instance, and assumes all other instances are part of the slave group. You may have to add extra filters to the “aws” command below to only pull IPs from a certain group of instances.

Here is a simple bash script (hereafter referred to as “Propagate.sh”) which syncs /var/www/html/ to all of your slave instances. It uses the “aws” command line interface provided by Amazon, which comes by default with the Amazon Linux starter AMI.

#The first line of the script sets the master’s IP so that it does not sync with itself.
export LocalIP=Your_Master_IP_Here;

#Get the IPs of all slave instances
export NewIPs=`aws ec2 describe-instances | grep '"PrivateIpAddress"' | perl -i -pe 's/(^.*?: "|",?\s*?$)//gm' | sort -u | grep -v $LocalIP`

#Loop over all slave instances
for i in $NewIPs; do
        echo "Syncing to: $i";
        #Run an rsync from the master to the slave
        rsync -aP -e 'ssh -o StrictHostKeyChecking=no' /var/www/html/ root@$i:/var/www/html/;
done

You may also want to add “-o UserKnownHostsFile=/dev/null” to the SSH command (directly after “-o StrictHostKeyChecking=no”), as a second EC2 instance may end up having the same IP as a previously terminated instance. Another solution to that problem is syncing the “/etc/ssh/ssh_host_rsa_key*” from the master when an instance initializes, so all instances keep the same SSH fingerprint.
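If you go the host-key route, a hypothetical one-liner to run once when a new instance initializes (MASTER_IP is a placeholder for your master’s address) could be:

rsync -e 'ssh -o StrictHostKeyChecking=no' -a "root@MASTER_IP:/etc/ssh/ssh_host_rsa_key*" /etc/ssh/ && service sshd reload;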


To let other people manually execute this script, you can create a PHP file with the following in it. (Change /var/www/ in all below examples to where you place your Propagate.sh)

<? print nl2br(htmlentities(shell_exec('sudo /var/www/Propagate.sh 2>&1'))); ?>

If your Propagate.sh needs to be run as root, which it may if your PHP environment does not run as the user root (it usually runs as “apache”), then you need to make sure it CAN run as root without intervention. To do this, add the following to the /etc/sudoers file:
apache  ALL=(ALL)       NOPASSWD: /usr/bin/whoami, /var/www/Propagate.sh
Change the user from “apache” to the user PHP runs as (when running through apache).
I included “whoami” as a valid sudoer application for testing purposes.
Also, in the sudoers file, if “Defaults requiretty” is turned on, you will need to comment it out or turn it off.

While I did not mention it in yesterday's post, I thought I should at least mention it here: there are other ways to keep file systems in sync with each other. This approach is just a good use case for when you want to keep all instances as separate, independent entities. Another solution to many of the previously mentioned problems is Amazon's new EFS, which is currently still in preview mode.

Custom Initializations for Amazon AMIs

I was recently hired to move a client's site from our primary server in Houston to the Amazon cloud, as it was about to take a big hit in traffic. The normal setup for this kind of job is pretty straightforward: move the database over to RDS, and set up an AMI of an EC2 instance, a load balancer, and EC2 auto scaling. However, there were a couple of problems I needed to solve this time around for the instances launched via the auto scaler that I had not really needed to deal with before. These include syncing the SSH settings and current codebase from the primary instance, as opposed to recreating AMIs every time there was a change. So, long story short, here are the problems and solutions that need to be handled before the AMI image is created.


This all assumes you are running as root. Most of these commands should work on any Linux distribution that Amazon has default AMIs for, but some of these may only work in the Amazon and CentOS AMIs.


Pre-setup:
  • Your first instance that you are creating the AMI from should be a permanent instance. This is important for 2 reasons.
    1. When changing configurations for the auto scaler, if and when your instances are terminated and recreated, this instance will always be available to the load balancer, so there is no downtime.
    2. This instance can act as a central repository for other instances to sync from.
    So make sure this instance has an Elastic IP assigned to it. From here on out, we will refer to this instance as PrimaryInstance (you can set this in the hosts file, or change it in all the scripts to however you want to refer to your Elastic IP [most likely through a DNS domain]).
  • Create your ssh private key for the instances: (For all prompts, use default settings)
    ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
  • Make sure your current ssh authorized_keys contains your new ssh private key:
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  • Make sure your ssh known_hosts includes your primary instance, so all future ssh calls to it automatically accept it as a known host:
    ssh PrimaryInstance -o StrictHostKeyChecking=no
    You do not have to finish the login process. This just makes sure our primary instance will be recognized by other instances.
  • Turn on PermitRootLogin in /etc/ssh/sshd_config and reload the sshd config via “service sshd reload”
    I just recommend this because it makes life way, way easier. The scripts below assume that you did this.

Create a custom init file that runs on boot to take care of all the commands that need to be run.
#Create the script and make sure the full path (+all other root environment variables) is set when it is run
echo '#!/bin/bash -l' > /etc/rc.d/init.d/custom_init

#Set the script as executable
chmod +x /etc/rc.d/init.d/custom_init

#Executes it as one of the last scripts on run level 3 (Multi-user mode with networking)
ln -s ../init.d/custom_init /etc/rc.d/rc3.d/S99custom_init
All of the below commands in this post will go into this script.

Allow login via password authentication:
perl -i -pe 's/^PasswordAuthentication.*$/PasswordAuthentication yes/mg' /etc/ssh/sshd_config
service sshd reload
Notes:
You may not want to do this. It was just required by my client in this case.
This is required in the startup script because Amazon likes to mess with the sshd_config (and authorized_keys) in new instances it boots.

Sync SSH settings from the PrimaryInstance:
#Remove the known_hosts file, in case something on the PrimaryInstance has changed that would block ssh commands.
rm -f ~/.ssh/known_hosts

#Sync the SSH settings from the PrimaryInstance
rsync -e 'ssh -o StrictHostKeyChecking=no' -a root@PrimaryInstance:~/.ssh/ ~/.ssh/

Sync required files from the PrimaryInstance. In this case, the default web root folder:
rsync -at root@PrimaryInstance:/var/www/html/ /var/www/html/

That's it for the things that need to be configured/added to the instance. From there, create your AMI and launch configuration, and create/modify your auto scaling group and load balancer.


Also, one very important note about your load balancer: if you are mirroring its address on another domain, make sure to use a CNAME record, and not its IP in an A record, as the load balancer IP is subject to change.

Let’s Encrypt HTTPS Certificates

After a little over a year of waiting, Let’s Encrypt has finally opened its doors to the public! Let’s Encrypt is a free https certificate authority, with the goal of getting the entire web off of http (unencrypted) and on to https. I consider this a very important undertaking, as encryption is one of the best ways we can fight illegal government surveillance. The more out there that is encrypted, the harder it will be to spy on people.

I went ahead and got it up and running on 2 servers today, which was a bit of a pain in the butt. It [no longer] supports Python 2.6, and was also very unhappy with my CentOS 6.4 cPanel install. Also, when you first run the letsencrypt-auto executable script as instructed by the site, it opens up your package manager and immediately starts downloading LOTS of packages. I found this to be quite anti-social, especially as I had not seen anywhere, or been warned, that it would do this before I started the install, but oh well, it is convenient. The problem in cPanel was that a specific library, libffi, was causing problems during the install.


To fix the Python problem for all of my servers, I had to install Python 2.7 as an alternate Python install so it wouldn’t mess with any existing infrastructure using Python 2.6. After that, I also aliased “python” to “python2.7” so the local shell would pick up the correct version of Python.


As root in a clean directory:
wget https://www.python.org/ftp/python/2.7.8/Python-2.7.8.tgz
tar -xzvf Python-2.7.8.tgz
cd Python-2.7.8
./configure --prefix=/usr/local
make
make altinstall
alias python=python2.7

The cPanel lib problem was caused by libffi already being installed as 3.0.9-1.el5.rf, while yum wanted to install its devel package as version 3.0.5-3.2.el6.x86_64 (an older version), and it did not like running conflicting versions. All that was needed to fix the problem was to manually download and install the devel package matching the currently installed version.

wget http://pkgs.repoforge.org/libffi/libffi-devel-3.0.9-1.el5.rf.x86_64.rpm
rpm -ivh libffi-devel-3.0.9-1.el5.rf.x86_64.rpm

Unfortunately, the apache plugin was also not working, so I had to do a manual install with “certonly” and “--webroot”.
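The rough shape of that manual call, for reference (the domains and webroot path here are placeholders, not my actual values):

./letsencrypt-auto certonly --webroot -w /home/USERNAME/public_html -d yourdomain.com -d www.yourdomain.com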


And that was it; letsencrypt was ready to go and start signing my domains! You can check out my current certificate, issued today, that currently has 13 domains tied to it!

Useful Exim Scripts
For fighting spam

In the course of my Linux administrative duties (on a cPanel server), I have created multiple scripts to help us out with Exim, our mail transfer agent. These are mostly used to help us fight spam, and determine who is spamming when it occurs.



This monitors the number of emails in the queue, and sends our admins an email when a limit (1000) is reached. It would need to be run on a schedule (via cron).
#!/bin/bash
export AdminEmailList="ADMIN EMAIL LIST SEPARATED BY COMMAS HERE"
export Num=`/usr/sbin/exim -bpc`
if [ $Num -gt 1000 ]; then
        echo "Too many emails! $Num" | /usr/sbin/sendmail -v "$AdminEmailList"
        #Here might be a good place to delete emails with “undeliverable” strings within them
        #Example (See the 3rd script): exim-delete-messages-with 'A message that you sent could not be delivered'
fi
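A hypothetical crontab entry for that schedule (the script path is made up; adjust it to wherever you save the script):

#Check the mail queue every 5 minutes
*/5 * * * * /root/scripts/check_exim_queue.sh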

This deletes any emails in the queue from or to a specified email address (first parameter). If the address is the recipient, the sender must be "<>" (the null sender used by bounces)
#!/bin/bash
exiqgrep -ir $1 -f '<>' | xargs exim -Mrm
exiqgrep -if $1 | xargs exim -Mrm

This deletes any emails in the queue which contain a given string (first parameter)
#!/bin/bash
if [ "$1" == "" ]
then
  echo 'Cannot delete with empty string'
else
  grep -lir "$1" /var/spool/exim/input/ | sed -e 's/^.*\/\([a-zA-Z0-9-]*\)-[DH]$/\1/g' | xargs /usr/sbin/exim -Mrm
fi

Get a count of emails in the queue per sender (sender email address is supplied by sender and can be faked)
#!/bin/bash
exim -bp | grep -oP '<.*?>' | sort | uniq -c | sort -n

Get a count of emails in the queue per account (running this script can take a little while)
#!/bin/bash
exim -bp | grep -Po '(?<= )[-\w]+(?= <)' | xargs -n1 exim -Mvh | grep -ioP '(?<=auth_sender ).*$' | sort | uniq -c

Bonus: Force all non-specified accounts on Exim to use a certain IP address for sending. It would need to be run on a schedule (via cron).
#!/bin/bash
export IPAddress="YOUR ADDRESS HERE"
/usr/bin/perl -i -pe 's/\*:.*/*: '$IPAddress'/g' /etc/mailips
Cross site scripting solutions
When you are forced to break the security model

So I was recently hired to set up a go-between system that would allow two independent websites to directly communicate and transfer/copy data between each other via a web browser. This is obviously normally not possible due to browsers’ cross-site security restrictions (the same-origin policy), so I gave the client 2 possible solutions. Both of these solutions are written with the assumption that there is a go-between intermediary iframe/window, on a domain that they control, between the 2 independent site iframes/windows. This would also work fine for one site you control against a site you do not control.

  1. Tell the browser to ignore this security requirement:
    • For example, if you add “--disable-web-security” to the Chrome command line arguments, cross-site security checks will be removed. However, Chrome will prominently display on the very first tab (which can be closed) at the top of the browser: “You are using an unsupported command-line flag: --disable-web-security. Stability and security will suffer”. This can be scary to the user, and could also allow security breaches if the user utilizes that browser [session] for anything except the application page.
  2. The more appropriate way to do it, which requires a bit of work on the administrative end, is to have all 3 sites pretend to run off of the same domain. To do this:
    1. You must have a domain that you control, which we will call UnifyingDomain.com (This top level domain can contain subdomains)
    2. The 2 sites that YOU control would need a JavaScript line of “document.domain='UnifyingDomain.com';” somewhere in them. These 2 sites must also be run off of a subdomain of UnifyingDomain.com (which can also be done through Apache redirect directives).
    3. The site that you do not control would need to be forwarded through your UnifyingDomain.com (not a subdomain) via an apache permanent redirect.
      • This may not work, if their site programmer is dumb and does not use proper relative links for everything (absolute links are the devil :-) ). If this is the case:
        • You can use an [HTTP] proxy to pull in their site through your domain (in which case, if you wanted, you could also inject the “document.domain” line); a rough Apache sketch of this follows the list below.
        • You can use the domain that you do not control as the top level UnifyingDomain.com, and add rules into your computer’s hostname files to redirect its subdomains to your IPs.
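As a rough illustration of that proxy option (all names below are placeholders, and this assumes Apache’s mod_proxy is loaded), a virtual host along these lines could pull the uncontrolled site in through your UnifyingDomain.com:

<VirtualHost *:80>
    ServerName UnifyingDomain.com
    ProxyPreserveHost Off
    ProxyPass / http://www.site-you-do-not-control.com/
    ProxyPassReverse / http://www.site-you-do-not-control.com/
</VirtualHost>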

This project is why I ended up making my HTTP Forwarders client in go (coming soon).

MySQL replication ring status reporting script

I just threw together a quick script to report the status of a MySQL replication ring. While a replication ring has been the only real multi-master MySQL replication solution (with the ability for nodes to go down without majorly breaking things) until recently, I have read that MariaDB (still not MySQL proper) now allows a slave to have multiple masters, meaning many replication topologies are now possible (star, mesh, etc). This script could easily be adapted for those circumstances too.

This script will report all the variables from “SHOW MASTER STATUS” and “SHOW SLAVE STATUS” from all servers in your replication ring, in a unified table. It also includes a “Pretty Status” row that lets you quickly see how things look. The possibilities for this row are:

  • Bad state: ...
    This shows if the Slave_IO_State is not “Waiting for master to send event”
  • Cannot determine master’s real position
    This shows if the Position variable on the master could not be read
  • On old master file
    This shows if the slave’s “Master_Log_File” variable does not equal the master’s “File” variable
  • Bytes behind: xxx
    This shows if none of the above errors occurred. It subtracts the slave’s “Read_Master_Log_Pos” from the master’s “Position”. This should generally be at or around 0. A negative value essentially means 0 (this should only happen between the last and first server).

The “Seconds_Behind_Master” variable can also be useful for determining the replication ring’s current replication status.

The code is below the example. The entire source file can also be found here. The 3 variables that need to be configured are at the top of the file. It assumes that all servers are accessible via the single given username and password.


Example:
Master
    Server Name                   | EXAMPLE1.MYDOMAIN.COM            | EXAMPLE2
    File                          | mysql-bin.000003                 | mysql-bin.000011
    Position                      | 25249746                         | 3215834
    Binlog_Do_DB                  | example_data,devexample_data     | example_data,devexample_data
    Binlog_Ignore_DB              |                                  |
Slave
    Pretty Status                 | Bytes behind: 0                  | Bytes behind: 0
    Slave_IO_State                | Waiting for master to send event | Waiting for master to send event
    Master_Host                   | EXAMPLE2                         | EXAMPLE1.MYDOMAIN.COM
    Master_User                   | example_slave                    | example_slave
    Master_Port                   | 3306                             | 3306
    Connect_Retry                 | 60                               | 60
    Master_Log_File               | mysql-bin.000011                 | mysql-bin.000003
    Read_Master_Log_Pos           | 3215834                          | 25249746
    Relay_Log_File                | www-relay-bin.070901             | www-relay-bin.071683
    Relay_Log_Pos                 | 252                              | 252
    Relay_Master_Log_File         | mysql-bin.000011                 | mysql-bin.000003
    Slave_IO_Running              | Yes                              | Yes
    Slave_SQL_Running             | Yes                              | Yes
    Replicate_Do_DB               | example_data,devexample_data     | example_data,devexample_data
    Replicate_Ignore_DB           |                                  |
    Replicate_Do_Table            |                                  |
    Replicate_Ignore_Table        |                                  |
    Replicate_Wild_Do_Table       |                                  |
    Replicate_Wild_Ignore_Table   |                                  |
    Last_Errno                    | 0                                | 0
    Last_Error                    |                                  |
    Skip_Counter                  | 0                                | 0
    Exec_Master_Log_Pos           | 3215834                          | 25249746
    Relay_Log_Space               | 552                              | 552
    Until_Condition               | None                             | None
    Until_Log_File                |                                  |
    Until_Log_Pos                 | 0                                | 0
    Master_SSL_Allowed            | No                               | No
    Master_SSL_CA_File            |                                  |
    Master_SSL_CA_Path            |                                  |
    Master_SSL_Cert               |                                  |
    Master_SSL_Cipher             |                                  |
    Master_SSL_Key                |                                  |
    Seconds_Behind_Master         | 0                                | 0
    Master_SSL_Verify_Server_Cert | No                               | No
    Last_IO_Errno                 | 0                                | 0
    Last_IO_Error                 |                                  |
    Last_SQL_Errno                | 0                                | 0
    Last_SQL_Error                |                                  |
    Replicate_Ignore_Server_Ids   |                                  | Not given
    Master_Server_Id              | 2                                | Not given


Code:

<?
//Configurations
$Servers=Array('SERVER1.YOURDOMAIN.COM', 'SERVER2.YOURDOMAIN.COM'); //List of host names to access mysql servers on. This must be in the order of the replication ring.
$SlaveUserName='SLAVE_RING_USERNAME'; //This assumes all servers are accessible via this username with the same password
$SlavePassword='SLAVE_RING_PASSWORD';

//Get the info for each server
$ServersInfo=Array(); //SERVER_NAME=>Array('Master'=>Array(Col1=>Val1, ...), 'Slave'=>Array(Col1=>Val1, ...)
$ColsNames=Array('Master'=>Array('Server Name'=>0), 'Slave'=>Array('Pretty Status'=>0)); //The column names for the 2 (master and slave) queries. Custom column names are also added here
$CustomFieldNames=array_merge($ColsNames['Master'], $ColsNames['Slave']); //Store the custom column names so they are not HTML escaped later
foreach($Servers as $ServerName)
{
    //Connect to the server
    $Link=@new mysqli($ServerName, $SlaveUserName, $SlavePassword);
    if($Link->connect_error)
        die(EHTML("Connection error to $ServerName server: $Link->connect_error"));

    //Get the replication status info from the server
    $MyServerInfo=$ServersInfo[$ServerName]=Array(
        'Master'=>$Link->Query('SHOW MASTER STATUS')->fetch_array(MYSQLI_ASSOC),
        'Slave'=>$Link->Query('SHOW SLAVE STATUS')->fetch_array(MYSQLI_ASSOC)
    );
    mysqli_close($Link); //Close the connection

    //Gather the column names
    foreach($ColsNames as $ColType => &$ColNames)
        foreach($MyServerInfo[$ColType] as $ColName => $Dummy)
            $ColNames[$ColName]=0;
}
unset($ColNames);

//Gather the pretty statuses
foreach($Servers as $Index => $ServerName)
{
    //Determine the pretty status
    $SlaveInfo=$ServersInfo[$ServerName]['Slave'];
    $MasterInfo=$ServersInfo[$Servers[($Index+1)%count($Servers)]]['Master'];
    if($SlaveInfo['Slave_IO_State']!='Waiting for master to send event')
        $PrettyStatus='Bad state: '.EHTML($SlaveInfo['Slave_IO_State']);
    else if(!isset($MasterInfo['Position']))
        $PrettyStatus='Cannot determine master’s real position';
    else if($SlaveInfo['Master_Log_File']!=$MasterInfo['File'])
        $PrettyStatus='On old master file';
    else
        $PrettyStatus='Bytes behind: '.($MasterInfo['Position']-$SlaveInfo['Read_Master_Log_Pos']);

    //Add the server name and pretty status to the output columns
    $ServersInfo[$ServerName]['Master']['Server Name']='<div class=ServerName>'.EHTML($ServerName).'</div>';
    $ServersInfo[$ServerName]['Slave']['Pretty Status']='<div class=PrettyStatus>'.EHTML($PrettyStatus).'</div>';
}

//Output the document
function EHTML($S) { return htmlspecialchars($S, ENT_QUOTES, 'UTF-8'); } //Escape HTML
?>
<!DOCTYPE html>
<html>
<head>
    <title>Replication Status</title>
    <meta charset="UTF-8">
    <style>
        table { border-collapse:collapse; }
        table tr>* { border:1px solid black; padding:3px; }
        th { text-align:left; font-weight:bold; }
        .ReplicationDirectionType { font-weight:bold; text-align:center; color:blue; }
        .ServerName { font-weight:bold; text-align:center; color:red; }
        .PrettyStatus { font-weight:bold; color:red; }
        .NotGiven { font-weight:bold; }
    </style>
</head>
<body><table>
<?
//Output the final table
foreach($ColsNames as $Type => $ColNames) //Process by direction type (Master/Slave) then columns
{
    print '<tr><td colspan='.(count($Servers)+1).' class=ReplicationDirectionType>'.$Type.'</td></tr>'; //Replication direction (Master/Server) type title column
    foreach($ColNames as $ColName => $Dummy) //Process each column name individually
    {
        print '<tr><th>'.EHTML($ColName).'</th>'; //Column name
        $IsHTMLColumn=isset($CustomFieldNames[$ColName]); //Do not escape HTML on custom fields
        foreach($ServersInfo as $ServerInfo) //Output the column for each server
            if($IsHTMLColumn) //Do not escape HTML on custom fields
                print '<td>'.$ServerInfo[$Type][$ColName].'</td>';
            else //If not a custom field, output the escaped HTML of the value. If the column does not exist for this server (different mysql versions), output "Not given"
                print '<td>'.(isset($ServerInfo[$Type][$ColName]) ? EHTML($ServerInfo[$Type][$ColName]) : '<div class=NotGiven>Not given</div>').'</td>';
        print '</tr>';
    }
}
?>
</table></body>
</html>

One final note: when running this script, you might need to make sure none of the listed server host names resolves to localhost (127.x.x.x), as MySQL may then use the local socket instead of TCP, which may not work with users who only have REPLICATION permissions and a wildcard host.

Results from my first high-load scalable system
Putting the cloud on a scale so it’s not so heavy

I’ve wanted to create and test a large-scale application for a very long time but never really had the chance until recently. The Vintage Experience project I did earlier this year finally gave me the opportunity. As one of many parts of the project, I was tasked with creating a voting system that could handle 1 million votes via a web page in a 30 second time span. The final system was deployed successfully without any problems for Gala Artis 2013 (a French Canadian artist/TV awards show). The following are the results of my implementation and testing.

The main front-end was done via a static HTML page (smart-phone optimized) that was hosted by Amazon S3, where handling 33k requests/second is a drop in the bucket. All voting requests were done via AJAX from this web page to backend servers hosted by Amazon EC2.

The backend was programmed in GoLang as a simple web server, optimized for memory and speed, which spawned a new goroutine for each incoming request. The request returned a message to the user stating either the error message, or success if the vote was added to the database. Each server held a single persistent MySQL connection to an Amazon RDS “Large DB Instance” with the minimum IOPS (1000). Votes from a server were sent to the database in batches once a second, or earlier if 10,000 votes had been received.

The servers were Amazon “M1 Standard Extra Large” (m1.xlarge) instances running Linux, of which there were 6 total, handling vote requests delegated by round-robin DNS on Amazon’s Route 53. During stress testing, each server was found to be able to handle 6800 requests/second while the load stayed under 3, so there was probably another bottleneck. Running the same tests using php(sapi)+apache(fork), only 4500 requests/second could be executed, with a 16+ load.

On the servers, I found it necessary to set the following sysctl setting to increase performance: “net.core.somaxconn=1024”. The following commands need to be run to apply it:

sysctl 'net.core.somaxconn=1024' #Store for the current computer session
echo 'net.core.somaxconn=1024' >> /etc/sysctl.conf #Set after a reboot

Stress test client instances were also run on Amazon as m1.xlarge instances, and were found to be able to push 5000-6000 requests/second. The GoLang test clients spawned 200 requests at a time and waited for them to finish before sending the next batch. The client system needed the following sysctl settings for optimal performance:

net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_fin_timeout=30
net.ipv4.ip_local_port_range="15000 65534"
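These can be applied the same way as the server-side setting above; for the current session, for example (append them to /etc/sysctl.conf to persist across reboots):

sysctl 'net.ipv4.tcp_tw_recycle=1' 'net.ipv4.tcp_tw_reuse=1' 'net.ipv4.tcp_fin_timeout=30'
sysctl 'net.ipv4.ip_local_port_range=15000 65534' #Quoting keeps the range as a single value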
My MySQL Replication Ring Configuration

As of 2010, if you wanted to set up a MySQL replication configuration with multiple servers that could all take updates and send them to the other servers, a replication ring was the only solution (in which every server has a master and a slave in a ring configuration). While there are newer (probably better) solutions as of late, including MariaDB’s multi-source replication and tungsten-replicator (which I was referred to in late April and have not yet tried), I still find a replication ring to be an easy to use solution in some circumstances. However, there are some major disadvantages, including:

  • If one node in the ring goes down, the entire ring stops replicating at that point until the node is brought back up
  • If a node in the ring goes down and has corrupted or incomplete data, say, from a power outage, the entire ring may have to be painstakingly resynced and rebuilt.
Anywho, the following is my basic MySQL configuration for setting up a replication ring, which needs to be put on every server node in the ring: (See the MySQL docs for more information on each configuration)
[mysqld]
#---GENERAL REPLICATION VARIABLES--- (These should never change)
log_bin=mysql-bin.log                           #Turn on the binary log, which is used to hold queries that are propagated to other machines
slave-skip-errors               = 1062          #Do not stop replication if a duplicate key is found (which shouldn’t happen anyways). You may want to turn this off to verify integrity, but then your replication ring will stop if a duplicate key is found
#master-connect-retry           = 5             #How often to keep trying to reconnect to the master node of the current machine after a connection is lost. This has been removed as of MySQL 5.5, and needs to instead be included in the “CHANGE MASTER” command
sync_binlog                     = 100           #After how many queries, sync to the binlog
slave_net_timeout               = 120           #The number of seconds to wait for more data from a master/slave before restarting the connection
max_binlog_size                 = 1000M         #The maximum size the binlog can get before creating a new binlog
log-slave-updates                               #Log slave updates to the binary log. If there are only 2 nodes in the ring, this is not required
slave_exec_mode                 = IDEMPOTENT    #Suppression of duplicate-key and no-key-found errors
replicate-same-server-id        = 0             #Skip running SQL commands received via replication that were generated by the current server node

#---INDEPENDENT REPLICATION VARIABLES--- (These should change per setup)
binlog-do-db                   = DATABASE_NAME  #Only add statements from this database to the binlog. You can have multiple of these configurations
replicate-do-db                = DATABASE_NAME  #Only read statements from this database during replication. You can have multiple of these configurations
auto_increment_increment        = 2             #After every auto-increment, add this to its number. Set this to the number of nodes. This helps assure no duplicate IDs

#---SERVER CONFIGURATION--- (These numbers should match)
server-id                       = 1             #The current node number in the ring. Every node needs to have its own incremental server number starting at 1
auto_increment_offset           = 1             #What to start auto-increment numbers at for this server. Set this to the server-id. This helps assure no duplicate IDs
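One related note on the master-connect-retry comment above: as of MySQL 5.5, the retry interval is instead given when pointing each node at its master, roughly like this (the host, credentials, and log coordinates are placeholders; run it on every node against the next node in the ring):

mysql -e "CHANGE MASTER TO MASTER_HOST='NEXT_NODE_IN_RING', MASTER_USER='REPLICATION_USER', MASTER_PASSWORD='REPLICATION_PASSWORD', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4, MASTER_CONNECT_RETRY=5; START SLAVE;"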
Cygwin SIGINT fix for golang

Cygwin has had a long-time problem that, depending on your configuration, may make you unable to send a SIGINT (interrupt signal via Ctrl+C) to native Windows command line executables. As a matter of fact, trying to do so may completely freeze up the console, requiring a process kill of the actual console, bash, and the executable you ran. This problem can crop up for many reasons, including the version of Cygwin you are running and your terminal emulator. I specifically installed mintty as my default Cygwin console a long time ago to get rid of this problem (among many other features it had), and now even it has this problem.

While my normal solution is to try and steer clear of native Windows command line executables in Cygwin, this is not always an option. Golang was also causing me this problem every time I ran a network server, which was especially problematic as I would ALSO have to manually kill the server process, or it would continue to hold the network port so another test of the code could not use it. An example piece of code is as follows:

package main
import ( "net/http"; "fmt" )
func main() {
	var HR HandleRequest
	if err := http.ListenAndServe("127.0.0.1:81", HR); err!=nil {
		fmt.Println("Error starting server") }
}

//Handle a server request
type HandleRequest struct{}
func (HR HandleRequest) ServeHTTP(w http.ResponseWriter, req *http.Request) {
	fmt.Printf("Received connection from: %s\n", req.RemoteAddr)
}
---
go run example.go

The first solution I found to this problem, which is by far the best solution, was to build the executable and then run it, instead of just running it straight from go.
go build example.go && example.exe
However, as of this post, it seems to no longer work! The last time I tested it and confirmed it was working was about 3 months ago, so who knows what has changed since then.

The second solution is to just build in some method of killing the process that uses “os.Exit”. For example, the following will exit if the user types “exit”
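//Note: "os" will also need to be added to the import list for the os.Exit call below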
func ListenForExitCommand() {
	for s:=""; s!="exit"; { //Listen for the user to type exit
		if _, err:=fmt.Scanln(&s); err!=nil {
			if err.Error()!="unexpected newline" {
				fmt.Println(err) }
		} else if s=="flush" || s=="exit" {
			//Clean up everything here
		}
	}
	fmt.Println("Exit received, closing process")
	os.Exit(1)
}
and then add the following at the top of the main function:
go ListenForExitCommand() //Listen for "exit" command
Selectively skipping data in a cPanel backup
Using a hammer instead of a scalpel

I was having problems on one of our production Linux cPanel servers in which our backup drive was not able to hold all the data from our primary drive for both our daily and weekly backups. An easy hack to fix this is to mount any subfolders you wish to exclude (generally very large ones) as a readonly temp file system in the appropriate backup folder. With this method, you can selectively exclude individual directories to one or more of the daily/weekly/monthly backup folders.

The only downside to this method is that pkgacct (called by cpbackup) logs will throw readonly file system errors for each file that cannot be copied.


So, to have cPanel discard an individual directory during the backup, you need to do the following:
First, make sure the backup directory to exclude is created and empty by running:
rm -rf PATH;
mkdir -p PATH;
NOTE: BE CAREFUL WITH “rm -rf”, IT IS A DANGEROUS COMMAND

To manually mount the directory, run:
mount tmpfs PATH -t tmpfs -o defaults,ro
To permanently mount the directory (mount on boot), edit /etc/fstab and add the following line:
tmpfs PATH tmpfs defaults,ro 0 0
If you do the permanent fix, don’t forget to run “mount PATH” to mount it on the live system, since fstab entries are only mounted automatically at boot.

An example of a PATH might be: /backup/cpbackup/weekly/dakusan/public_html/uploads
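Putting it all together with that example path, the whole exclusion might look like this (just a sketch; substitute whichever directory you actually want to exclude, and again, be careful with “rm -rf”):
rm -rf /backup/cpbackup/weekly/dakusan/public_html/uploads
mkdir -p /backup/cpbackup/weekly/dakusan/public_html/uploads
mount tmpfs /backup/cpbackup/weekly/dakusan/public_html/uploads -t tmpfs -o defaults,ro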

cPanel also recently added (experimental) hard linking for backups, which really helps out with space concerns and lessens the need for this trick a bit.

OpenVPN Authentication and Gateway Configuration
Securing oneself is a never ending battle

For a number of years now when on insecure network connections I have been routing my computer to the Internet through secure tunnels and VPNs, but I’ve been interested in trying out different types of VPN software lately so I can more easily help secure friends who ask for it. The main requirements would be ease of installation and enabling, which partly means no extra software for them to install.

Unfortunately, Windows 7 and Android (and probably most other software) only support PPTP and L2TP/IPSEC out of the box. While these protocols are good for what they do, everything I have read says OpenVPN is superior to them as a protocol. I was very frustrated to find out how little support OpenVPN actually has today as a standard in the industry, which is to say, you have to use third party clients and it is rarely, if ever, included by default in OSes. The OpenVPN client and server aren’t exactly the easiest to set up either for novices.


So on to the real point of this post. The sample client and server configurations for OpenVPN were set up just how I needed them except they did not include two important options for me: User authentication and full client Internet forwarding/tunneling/gateway routing. Here is how to enable both.


Routing all client traffic (including web-traffic) through the VPN:
  • Add the following options to the server configuration file:
    • push "redirect-gateway def1" #Tells the client to use the server as its default gateway
    • push "dhcp-option DNS 10.8.0.1" #Tells the client to use the server as its DNS Server (DNS Server's IP address dependent on configuration)
  • Run the following commands in bash:
    • iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE #This command assumes that the VPN subnet is 10.8.0.0/24 (taken from the server directive in the OpenVPN server configuration) and that the local ethernet interface is eth0.
    • echo '1' > /proc/sys/net/ipv4/ip_forward #Enable IP Forwarding (This one is not mentioned in the OpenVPN howto)
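If you want the IP forwarding setting to survive a reboot (also not mentioned in the OpenVPN howto), the usual approach is to put it in /etc/sysctl.conf and reload, along these lines:
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf #Make IP forwarding persistent across reboots
sysctl -p #Apply the settings in /etc/sysctl.conf to the running system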

Adding user authentication (Using alternative authentication methods)

To set up username/password authentication on the server, an authorization script is needed that receives the username/password and returns whether the login information was successful (0) or failed (1). The steps to set up this process are as follows:

  • Add the following options to the server configuration file:
    • auth-user-pass-verify verify.php via-env #The third argument (method) specifies whether to send the username and password through either a temporary file (via-file) or environment variables (via-env)
    • script-security 3 system #Allows OpenVPN to run user scripts and executables and send password authentication information through environment variables. While "system" is deprecated, I had to use it or external commands like ifconfig and route were failing with "failed: could not execute external program"
  • Add the following options to the client configuration file:
    • auth-user-pass #Request user credentials to log in
  • The final step is to create the verify.php (see auth-user-pass-verify configuration above) script which returns whether it was successful, and also outputs its success to stdout, which is added to the OpenVPN log file.
    #!/usr/bin/php -q
    <?
    //Configuration
    $ValidUserFile='users.txt'; //This file must be in htpasswd SHA1 format (htpasswd -s)
    $Method='via-env'; //via-file or via-env (see auth-user-pass-verify configuration above for more information)
    
    //Get the login info
    if($Method=='via-file') //via-file method
    {
    	$LoginInfoFile=trim(file_get_contents('php://stdin')); //Get the file that contains the passed login info from stdin
    	$LoginInfo=file_get_contents($LoginInfoFile); //Get the passed login info
    	file_put_contents($LoginInfoFile, str_repeat('x', strlen($LoginInfo))); //Shred the login info file
    	$LoginInfo=explode("\n", $LoginInfo); //Split into [Username, Password]
    	$UserName=$LoginInfo[0];
    	$Password=$LoginInfo[1];
    }
    else //via-env method
    {
    	$UserName=$_ENV['username'];
    	$Password=$_ENV['password'];
    }
    
    //Test the login info against the valid user file
    $UserLine="$UserName:{SHA}".base64_encode(sha1($Password, TRUE)); //Compile what the user line should look like
    foreach(file($ValidUserFile, FILE_IGNORE_NEW_LINES) as $Line) //Attempt to match against each line in the file
    	if($UserLine==$Line) //If credentials match, return success
    	{
    		print "Logged in: $UserName\n";
    		exit(0);
    	}
    
    //Return failure
    print "NOT Logged in: $UserName\n";
    exit(1);
    ?>
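For reference, the users.txt file mentioned above can be created with Apache’s htpasswd utility. A quick sketch (USERNAME and PASSWORD are placeholders, and “-c” should only be used the first time since it creates/overwrites the file):
htpasswd -bcs users.txt USERNAME PASSWORD #-b takes the password from the command line, -c creates the file, -s stores the password as SHA1 (the format the script above expects)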
    		
Auto initializing SSH Remote Tunnel
SSH is so incredibly useful

I often find SSH tunnels absolutely indispensable in my line of work for multiple reasons including secure proxies (tunneling) over insecure connections and connecting computers and programs together over difficult network setups involving NATs and firewalls.

One such example of this I ran into recently is that I have a server machine (hereafter called “the client”) that I wanted to make sure I have access to no matter where it is. For this I created an auto initializing SSH remote port tunnel to a server with a static IP Address (hereafter called “the proxy server”) which attempts to keep itself open when there are problems.

The first step of this was to create the following bash script on the client that utilizes the OpenSSH’s client to connect to an OpenSSH server on the proxy server for tunneling:

#!/bin/bash
for ((;;)) #Infinite loop to keep the tunnel open
do
	ssh USERNAME@PROXYSERVER -o ExitOnForwardFailure=yes -o TCPKeepAlive=yes -o ServerAliveCountMax=2 -o ServerAliveInterval=10 -N -R PROXYPORT:localhost:22 &>> TUNNEL_LOG #Create the SSH tunnel
	echo "Restarting: " `date` >> TUNNEL_LOG #Write to the log file "TUNNEL_LOG" whenever a restart is attempted
	sleep 1 #Wait 1 second in between connection attempts
done
The parts of the command that create the SSH tunnel are as follows:
  • “ssh”: The OpenSSH client application
  • “USERNAME@PROXYSERVER”: The proxy server and username on said server to connect to
  • “-o ExitOnForwardFailure=yes”: Automatically terminate the SSH session if the remote port forward fails
  • “-o TCPKeepAlive=yes -o ServerAliveCountMax=2 -o ServerAliveInterval=10”: Make sure the SSH connection is working, and if not, reinitialize it. The connection fails if server keepalive packets, which are sent every 10 seconds, are not received twice in a row, or if TCP protocol keepalive fails
  • “-N”: “Do not execute a remote command. This is useful for just forwarding ports” (This means no interactive shell is run)
  • “-R PROXYPORT:localhost:22”: Establish a port of PROXYPORT on the proxy server that sends all data to port 22 (ssh) on the client (localhost)
  • “&>> TUNNEL_LOG”: Send all output from both stdout and stderr to the log file “TUNNEL_LOG”

For this to work, you will need to set up public key authentication between the client and utilized user on the proxy server. To do this, you have to run “ssh-keygen” on the client. When it has finished, you must copy the contents of your public key file (most likely at “~/.ssh/id_rsa.pub”) to the “~/.ssh/authorized_keys” file on the user account you are logging into on the proxy server. You will also have to log into this account at least once from the client so the proxy server’s information is in the client’s “known_hosts” file.
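A minimal sketch of that key setup, run from the client (ssh-copy-id ships with most OpenSSH installs; if it is not available, just append the contents of id_rsa.pub to the proxy server’s authorized_keys by hand):
ssh-keygen #Generate the key pair on the client (an empty passphrase is needed for unattended reconnects)
ssh-copy-id USERNAME@PROXYSERVER #Append the client's public key to ~/.ssh/authorized_keys on the proxy server
ssh USERNAME@PROXYSERVER #Log in once so the proxy server gets added to the client's known_hosts file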

For security reasons you may want the user on the proxy server to have a nologin shell set since it is only being used for tunneling, and the above mentioned public key will allow access without a password.

For network access reasons, it might also be worth it to have the proxy port and the ssh port on the proxy server set to commonly accessible ports on all network setups (that a firewall wouldn’t block). Or you may NOT want to have it on common ports for other security reasons :-).

If you want the proxy port on the proxy server accessible from other computers (not only the proxy server), you will have to turn on “GatewayPorts” (set to “yes”) in the proxy server’s sshd config, most likely located at “/etc/ssh/sshd_config”.
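For reference, that is just a one line addition to the sshd config (sshd then needs a reload/restart for it to take effect; the exact command for that depends on your distribution):
GatewayPorts yes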


The next step is to create a daemon that calls this script. The normal method for this in Linux would be to use inittab. This can be a bit cumbersome though with Linux distributions that use upstart, like Ubuntu, so I just cheated and created the following script to initialize a daemon through rc.d:

#!/bin/bash
echo -e '#!/bin/bash\nfor ((;;))\ndo\n  ssh USERNAME@PROXYSERVER -o TCPKeepAlive=yes -o ExitOnForwardFailure=yes -o ServerAliveCountMax=2 -o ServerAliveInterval=10 -N -R PROXYPORT:localhost:22 &>> TUNNEL_LOG\n  echo "Restarting: " `date` >> TUNNEL_LOG\n  sleep 1\ndone' > TUNNEL_SCRIPT_PATH #This creates the above script
echo -e '#!/bin/bash\ncd TUNNEL_SCRIPT_DIRECTORY\n./TUNNEL_SCRIPT_EXECUTABLE &' > /etc/init.d/TUNNEL_SCRIPT_SERVICE_NAME #This creates the init.d daemon script. I have the script set the working path before running the executable so the log file stays in the same directory
chmod u+x TUNNEL_SCRIPT_PATH /etc/init.d/TUNNEL_SCRIPT_SERVICE_NAME #Set all scripts as executable
update-rc.d TUNNEL_SCRIPT_SERVICE_NAME defaults #Set the run levels at which the script runs
cPanel Hard Linked VirtFS Causing Quota Problems
Quota problems are annoying and expensive to debug :-\

For a really long time I’ve seen many cPanel servers have improper space reporting and quota problems due to the VirtFS system, which is used for the “jailshell” login for users (if a user logs into SSH [or presumably Telnet] who has their shell set to “/usr/local/cpanel/bin/jailshell” in “/etc/passwd”). Whenever a user logs into their jailshell, it creates a virtual directory structure to chroot them into, and “hard links” their home directory inside this virtfs directory, which doubles their apparent hard disk usage from their home directory (does not include their sql files, for example). Sometimes, the virtfs directory is not unmounted (presumably if the user does not log out correctly, which I have not confirmed), so the doubled hard disk usage is permanent, causing them to go over their quota if a full quota update is run.

I had given this up as a lost cause a while back, because all the information I could find on the Internet said to just leave it alone and not worry about it, and that it couldn’t be fixed. I picked the problem back up today when one of our servers decided to do a quota check update and a bunch of accounts went over. It seems a script was added recently at “/scripts/clear_orphaned_virtfs_mounts” that fixes the problem :-) (not sure when it was added, but the creation, modification, and access times for the files on all 3 of my cPanel servers show as today...). Surprisingly, I could not find this file documented or mentioned anywhere on the Internet, and there are still no mentions anywhere of how to fix this problem.


So the simplest way to fix the problem is to run
/scripts/clear_orphaned_virtfs_mounts --clearall
I did some digging and found that that script calls the function “clean_user_virtfs” in “/scripts/cPScript/Filesys/Virtfs”, so you can just clear one user directory “properly” by running
cd /scripts
perl -e 'use cPScript::Filesys::Virtfs();cPScript::Filesys::Virtfs::clean_user_virtfs("ACCOUNTNAME");'
All this script really does is unmount the home directory inside the jailed file system (so the hard link is done through a “mount” that doesn’t show up in “df”... interesting). So the problem can also be fixed by
umount -f /home/virtfs/USERNAME/home/USERNAME

After this is done, a simple
/scripts/fixquotas
can be used to set the users’ quotas back to what they should be. Do note that this operation goes through every file on the file system to count users’ disk usages.
Telnet Workaround
I hate not having root ^_^;

I recently had to do some work on a system where I was not allowed SSH/telnet access. Trying to do work strictly across ftp can take hours, especially when you have thousands of files to transfer, so I came up with a quick solution in PHP for simple command line access.

<form method=post action="exec.php">
	<table style="height:100%;width:100%">
		<tr><td height="10%">
			<textarea name=MyAction style="height:100%;width:100%"><?=(isset($_REQUEST['MyAction']) ? $_REQUEST['MyAction'] : '')?></textarea>
		</td></tr><tr><td height="90%">
			<textarea name=Output style="height:100%;width:100%"><?
if(isset($_REQUEST['MyAction']))
{
	$MyAction=preg_split("/\\r?\\n/", $_REQUEST['MyAction']);
	foreach($MyAction as $Action)
	{
		exec($Action, $MyOutput);
		print htmlentities(implode("\n", $MyOutput), ENT_QUOTES, 'ISO8859-1')."\n-----------------------\n";
	}
}
			?></textarea>
		</td></tr><tr><td height=1>
			<input type=submit>
		</td></tr>
	</table>
</form>

This code allows you to enter commands on separate lines in the top box, and after the form is submitted, the output of each command is entered into the bottom box separated by dashed lines.

Note that between each command the environment is reset, so commands like "cd" which change the current directory are not useable :-(. You must also change the line 'action="exec.php"' to reflect the name you give the file.
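One partial workaround, since each command gets its own shell, is to chain everything that needs a shared working directory onto a single line with “&&”. For example (the path is just illustrative):
cd public_html && ls -la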


A more suitable solution would be possible through AJAX and a program that redirected console output from a persistent session, but this was just meant as quick fix :-).

Election Night on the Internet
The day that truly tests if your servers can handle the load

I’m not going to get into politics in this post (nor will I generally ever); I just wanted to point out a few things that I saw on the Internet on election night that made me smile :-) .


Wikipedia actually errored out when I tried to get to Barack Obama soon after it was announced he won.
Wikipedia Down on Barack Obama

I was also really surprised that Fox News had up the following picture/announcement mere minutes after all the other stations reported Obama had won (when it was still only projections). I would have thought Fox News would hold out on announcing it to their viewers until it was more sure...
Fox News Obama Wins
Legacy Geocities Login Problems
Big corporations refusing to acknowledge that they have problems, let alone fix them

I have a friend that has a legacy Geocities (the MySpace of the 1990s for free web hosting) account (one from who knows how long before GeoCities was bought by Yahoo). The control panel (at geocities.yahoo.com/gcp) won’t allow logging in to his legacy account because it gets stuck in an infinite redirect loop, redirecting right back to itself.

My guess is that the problem has to do with cookies (on Geocities’ servers’ side, not the client’s!), but I didn’t get that far, as I found a roundabout solution to his problem. After logging in, the user can go to http://geocities.yahoo.com/filemanager or http://geocities.yahoo.com/v/fm.html to manage their files. While the rest of the control panel is still not accessible, this was enough of a solution for him.

Reports are that Yahoo refuses to respond about this problem with their servers.

XML Problems in PHP
I hate debugging other peoples’ libraries :-\

We recently moved one of our important web server clients to a newly acquired server (our 12th server at ThePlanet [used to be called EV1Servers, and before that RackShack], one of the largest, if not the largest, server-farm hosting companies in the States). A bad problem cropped up on his site in the form of a PHP script (CaRP) that deals with parsing XML.

The problem was that whenever the XML was parsed and then returned, all XML entities (escaped XML characters like “&gt;” “&lt;” and “&quot;”) were removed/deleted. I figured the problem had to do with a bad library, as the code worked perfectly on our old server, and the PHP settings on both were almost identical, but I wasn’t sure which one. After an hour or two of manipulating the code and debugging, I narrowed it down to which XML function calls had the problem, and that it was definitely not the scripts themselves. The following code demonstrates the problem.

$MyXMLData='<?xml version="1.0" encoding="iso-8859-1"?><description>&lt;img test=&quot;a&quot;</description>';
$MyXml=xml_parser_create(strtoupper('ISO-8859-1'));
xml_parser_set_option($MyXml,XML_OPTION_TARGET_ENCODING,'ISO-8859-1');
xml_parse_into_struct($MyXml, $MyXMLData, $MyData);
print htmlentities($MyData[0]['value']);
On the server with the problem, the following was outputted:
img test=a
while it should have outputted the following:
<img test="a"

I went with a hunch at this point and figured that it might be the system’s LibXML libraries, so I repointed them away from version 2.7.1, which appears to be buggy, to an older version that was also on the system, 2.6.32. And lo and behold, things were working again, yay :-D.

Technical data (This is a cPanel install): In “/opt/xml2/lib/” delete the symbolic links “libxml2.so” & “libxml2.so.2” and redirect them as symbolic links to “libxml2.so.2.6.32” instead of “libxml2.so.2.7.1”.

High Computer Load
I hate servers. So much... so, so much...
High Server Load

I think it was actually much higher than this, but it wouldn’t let me log in to find out! >:-( . Wish I could easily make SSH and everything I do in it have priority over other processes... but then again I probably wouldn’t be able to do anything to fix the load when this sometimes happens anyways. *sighs*

I’ll explain more about “load” in an upcoming post.

Client Side Security Fallacies
Never rely solely on information you receive from untrusted sources

One of the most laughable aspects of client/server* systems is client side based security access restrictions. What I mean by this is when credentials and actions are not checked and restricted on the server side of the equation, only on the client side, which can ALWAYS be bypassed.


To briefly explain why it is basically insane to trust a client computer; ANY multimedia, software, data, etc that has touched a person’s computer is essentially now their property. Once something has been on or through a person’s computer, the user can make copies, modify it, and do whatever the heck they want with it. This is how the digital world works. There are ways to help stop copying and modification, like hashes and encryption, but most of the ways in which things are implemented nowadays are quite fallible. There may be, for example, safeguards in place to only allow a user to use a piece of software on one certain computer or for a certain amount of time (DRM [Digital Rights Management]), but these methods are ALWAYS bypassable. The only true security comes by not letting information which people aren’t supposed to have access to cross through their computer, and keeping track of all verifiable factual information on secure servers. A long time ago at an IGDA [International Game Developers Association] meeting (I only ever went to the one unfortunately :-\), I learned an interesting truth that hadn’t occurred to me before from the lecturer. That is, that companies that make games and other software [usually] know it will sooner or later be pirated/cracked**. The true intention of software DRM is to make it hard enough to crack to discourage the crackers into giving up, and to make it take long enough so that hopefully people stop waiting for a free copy and go ahead and buy it. By the time a piece of software is cracked (if it takes as long as they hope), the companies know the majority of the remainder of the people usually wouldn’t have bought it anyways. Now I’m done with the basic explanation of client side insecurities, back to the real reason for this post.


While it is actually proper to program safeguards into client side software, you can never rely on it for true security. Security measures should always be duplicated in both client and server software. There are two reasons off the top of my head for implementing security access restrictions into the client side of software. The first is to help remove strain on servers. There is no point in asking a server if something is valid when the client can immediately confirm that it isn’t. The second reason is for speed. It’s MUCH quicker if a client can detect a problem and instantly inform the user than having to wait for a server to answer; though this time is usually imperceptible to the user, it can really add up.

So I thought I’d give a couple of examples of this to help you understand more where I’m coming from. This is a very big problem in the software industry. I find exploitable instances of this kind of thing on a very regular basis. However, I generally don’t take advantage of such holes, and try to inform the companies/programmers if they’ll listen. The term for this is white hat hacking, as opposed to black hat.


First, a very basic example. Let’s say you have a folder on your website “/PersonalPictures” that you wanted to restrict access to with a password. The proper way to do it would be to restrict access to the whole folder and all files in it on the server side, requiring a password be sent to the server to view the contents of each file. This is normally done through Apache httpd (the most utilized web server software) with an “.htaccess” file and the mod_auth (authentication) module. The improper way to do it would be a page that forwarded to the “hidden” section with a JavaScript script like the following.

if(prompt('Please enter the password')=='SecretPassword')
	document.location.href='/PersonalPictures';

The problem with this code is twofold (besides the fact it pops up a request window :-) ). First, the password is exposed in plain text to the user. Fortunately, passwords are usually not as easy to find as this, but I have found passwords in web pages and Flash code before with some digging (yes, Flash files (and Java!) are 100% decompilable to their original source code, sans comments). The second problem is that once the person goes to the URL “/PersonalPictures”, they can get back there and to all files inside it without the password, and also give it freely to others (no need to mention the fact that the URL is written in plain text here, as it’s the same as with the password). This specific problem with JavaScript was much more prevalent in the old days when people ran their web pages through free hosting sites like Geocities (now owned and operated by Yahoo) which didn’t allow for proper password protection.

This kind of problem is still around on the web, though it morphed with the times into a new form. Many server side scripts I have found across the Internet assume their client side web pages can take care of security and ignore the necessary checks in the server scripts. For example, very recently I was on a website that only allowed me to add a few items to a list. The way it was done is that there was a form with a textbox that you submitted every time you wanted to add an entry to the list. After submitting, the page was reloaded with the updated list. After you added the maximum allowed number of items to the list, when the page refreshed, the form to add more was gone. This is incredibly easy to bypass however. The normal way to do this would be to just send the modified packets directly to the server with whatever information you want in it. The easier method would be to make your own form submission page and just submit to the proper URL all you want. The Firebug extension for Firefox however makes this kind of thing INCREDIBLY easy. All that needs to be done is to add an attribute to the form to send the requests to a new window “<form action=... method=... target=_blank>”, so the form is never erased/overwritten and you can keep sending requests all you want. Using Firebug, you can also edit the values of hidden input boxes for this kind of thing.

AJAX (Asynchronous JavaScript and XML - A tool used in web programming to send and receive data from a server without having to refresh a page) has often been lampooned as insecure for this kind of reason. In reality, the medium itself is not insecure at all; it’s just how people use it.


As a matter of fact, the majority of my best and most fun Ragnarok hacking was done with these methods. I just monitored the packets that came in and out of the system, reverse engineered how they were all structured, then made modifications and resent them myself to see what I could do. With this, I was able to do things like (These should be most of the exploits; listed in descending order of usefulness & severity):

  • Duplicate items
  • Crash the server (It was never fixed AFAIK, but I stopped playing 5+ years ago. I just put that it was fixed on my site so people wouldn’t look for it ^_^; )
  • Warp to any map from any warp location (warp locations are only supposed to link to 1 other map)
  • Spoof your name during chats (so you could pretend someone else was saying something - Ender’s game, anyone? ^_^)
  • Use certain skills of other classes (I have pictures up of my swordsman using merchant skills to house a selling shop)
  • Add skills points to an item on your skill tree that is not yet available (and use it immediately)
  • Warp back to save point without dying
  • Talk to NPCs on a map from any location on that map, and sometimes from other maps (great for selling items when in a dungeon)
  • Attack with weapons much quicker than was supposed to be allowed
  • Use certain skills on creatures from any location on a map no matter how far they are
  • Equip any item in any spot (so you could equip body armor on your head slot and get much more free armor defense points)
  • Run commands on your party/guild and in chat rooms as if you were the leader/admin
  • Roll back a character’s stats to when you logged on that session (part of the dupe hack)
  • Bypass text repetition, length, and curse filters
  • Find out user account names

The original list is here; it should contain most of what I found. I took it down very soon after putting it up (replacement here) because I didn’t want to explicitly screw the game over with people finding out about these hacks (I had a lot of bad encounters with the company that ran the game, they refused to acknowledge or fix existing bugs when I reported them). There were so many things the server didn’t check just because the client wasn’t allowed to do them naturally.


Here are some very old news stories I saved up for when I wrote about this subject:


Just because you don’t give someone a way to do something doesn’t mean they won’t find a way.



*A server is a computer you connect to and a client is the connecting computer. So all you people connecting to this website are clients connecting to my web server.
**“Cracked” usually means to make a piece of software usable when it is not supposed to be, bypassing the DRM
Linux Runlevels
“Safe Mode” for Linux

I am still, very unfortunately, looking into the problem I talked about way back here :-( [not a lot, but it still persists]. This time I decided to try and boot the OS into a “Safe Mode” with nothing running that could hinder performance tests (like hundreds of HTTP and MySQL sessions). Fortunately, my friend who is a Linux server admin for a tech firm was able to point me in the right direction after my own research into the topic was proving frustratingly fruitless.


Linux has “runlevels” it can run at, which are listed in “/etc/inittab” as follows:

# Default runlevel. The runlevels used by RHS are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)

So I needed to get into “Single user mode” to run the tests, which could be done two ways. Before I tell you how though, it is important to note that if you are trying to do something like this remotely, normal SSH/Telnet will not be accessible, so you will need either physical access to the computer, or something like a serial console connection, which can be routed through networks.

So the two ways are:
  • Through the “init” command. Running “init #” at the console, where # is the runlevel number, will bring you into that runlevel. However, this might not kill all currently unneeded running processes when going to a lower level, but it should get the majority of them, I believe.
  • Append “s” (for single user mode) to the grub configuration file (/boot/grub/grub.conf on my system) at the end of the line starting with “kernel”, then reboot. I am told appending a runlevel number may also work.
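As a sketch of the second method, the modified line in grub.conf might end up looking something like the following, where everything except the trailing “s” is made up for illustration and should be left as whatever your kernel line already says:
kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/VolGroup00/LogVol00 s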
SpamSoap
Info on a spam filtering solution

I was long ago pointed to SpamSoap by a friend who helped lead the IT department of a rather large and prestigious law firm. It seems to be an excellent spam filtering solution, but can get to be rather expensive as it is a pay per month per mailbox program, kind of (you pay in groups, like 1-5, 6-10, ..., 201-250, etc). I wouldn’t mind too much trying out the filtering with Google’s domain email program, but Google has marked multiple legitimate emails as spam in my Gmail account in the past, and I don’t trust their cloud computing approach too much with my data.

I originally set up a SpamSoap account 2 to 3 years ago for a single client, and have more recently been setting it up for myself, family, and some other clients. The client that has been using it for that time has been very happy with it, and the only reason I didn’t start using it for myself and others at that time was because it marked a legitimate email as spam for me, and trying to diagnose why with their tech support didn’t get very far. I have however done a lot more research into their system recently, asking their staff lots and lots of questions to understand the system more, and believe I know why it was caught as spam. Unfortunately, their documentation is horrible and their site doesn’t really go into details at all, so information on how it all works and how to set some things up is not easy to come by. The problem, in my case, is that SpamSoap takes into account which server an email arrives from. Servers that SpamSoap receives a lot of spam from are marked as more likely to be sending spam, so unfortunately, forwarding emails from another address to an address on a domain filtered by SpamSoap is a bad idea, as the whole server that manages the forwarding domain then gets marked as having sent a spam message. However, this is only one of many spam-determining metrics used, and, of course, it takes many spam messages to make a difference for a server, but if you are forwarding from an address that receives a lot, bad things happen :-). Anyways, here’s some of the information I gathered on how their system works, and other important tidbits, if anyone is interested in using them.


The way SpamSoap works is you pay for “user accounts”. Each user account has 1 white/black list associated with it (which isn’t technically needed, but helps things along), 1 quarantine area, and receives (if they choose to) daily quarantine reports to their master address. Each user account can have email aliases tied to it, but due to the quarantine area it’s important to separate users. Pricing is based upon number of user accounts in tiers, like 1-5=$15/month, 6-10=$25/month, 11-20=$42/month, and so on.

The actual filtering is done by setting MX records for the domains to be filtered to SpamSoap, and SpamSoap actually just sets up a proxy connection between the sending server and your server for delivery. If a message is detected as spam during this process, the delivery attempt is canceled.

There are 2 types of incoming spam that can be filtered in different ways by the system; high scoring (100% likelihood) spam, and medium scoring (>90% or something like that, an exact number is not obtainable) spam. With either of these you can choose to either: Tag the message with “[SPAM]” in the header, quarantine the message, deny delivery, or let it through. There are also filtering rules and actions you can set up based on other criteria, like: viruses, content (profanity, racial insensitivity, sexual overtones, etc), click protection, and attachments.

Domain grouping with aliases is a slightly more complicated topic. You can have as many domains as you want, and it does not affect pricing; only the number of user accounts does (or if you choose other options, listed here).

Basically, first, you have master domains. A master domain can have multiple alias domains tied to it. All email addresses with the same first section (the part before the @) are aliases of each other in this setup. For example, if domain1.com is a master, and domain2.com is an alias, then me@domain1.com and me@domain2.com are email aliases of each other no matter what. If you wanted to alias “myself” with that same user, then those 2 plus myself@domain1.com and myself@domain2.com would all be the same user. In this setup, if you wanted me@domain1.com and me@domain2.com as separate users, you would have to split up the domains to not be aliased (in a group). You cannot however alias emails across domains that are not aliased, so for example, if both domain1.com and domain2.com were master domains, you could not alias me@domain1.com and myself@domain2.com. These configuration issues really only tend to be problems with generic names like “info@” and “admin@”. For example, a problem would creep up if me@domain1.com wanted to alias myself@domain2.com but info@domain1.com and info@domain2.com needed to be separate user accounts. If this happened, domain1.com and domain2.com would have to both be their own master domains, myself&me would have to be separate user accounts, the white/black lists would need to be duplicated, and 2 quarantine reports would come in.


I would personally recommend for all normal user inboxes to have high likelihood spam denied, medium likelihood spam quarantined, and anything with a virus denied with a return notification. Also, anyone that wanted to not be filtered on a domain that runs through SpamSoap would need to be on one user account as aliases with the no filtering option set. The same goes for users who do not need a quarantine (freeloaders ^_^; ), in which case one user account could be set up for basic filtering w/o quarantine and lists.

Because of the no forwarding problem as stated above, all domains with emails that need to be filtered would have to be pushed through SpamSoap, and then they could be forwarded afterwards to the appropriate inbox from your own servers. So, in other words, domains that go through SpamSoap cannot be forwarded TO and filtered unless the domains that are forwarding to it are also set up with SpamSoap. The consequences of such are a higher likelihood of anything being forwarded being counted as spam and that server being marked as a potential spammer.


SpamSoap also has separate reseller and partner programs for people that forward them business, but they would only be useful if one sent a lot of business their way, generating SpamSoap lots of revenue.


I hope that all made sense, it wasn’t easy to write out x.x; .

WinampRC
When the broad solution won’t cut it, get specific

I wrote earlier about my new entertainment center and how evil it has been. Unfortunately, things have only been getting worse. After trying to play music on it while torrenting or doing other things, I found out it can pretty much only do 1 task at a time, and barely, so I’ve decided to make it now only act as a music station and occasionally watch video through it when the video doesn’t require too much power. I even found an old 256MB stick of PC2700 RAM to put in it (yay for finding random antiquated computer parts around the house!), which did not help, as expected. This regrettably means I will have to keep my current home server at its job, which is a major power hog, and way too powerful for what it does, but ah well.

When listening to music I have the obvious need to easily pause playback, and the occasional need to skip songs I don’t feel like listening to ATM. I would normally use the multiple remote desktop hack for this, but the computer just can’t handle 2 XP sessions going at once. For this reason, Synergy (a great way to do KVM through software) would normally be the perfect fallback solution, except I’d rather not have to use my TV (which is the computer’s primary video output) just to control music on the surround sound system. That, and I’d rather not have to use the TV at all for the computer, because, as written before, I have to go through 5 minutes of hoops to get video working right on it. So the solution was to find a remote way to control Winamp, the only music player I’ve used since around ’98 :-).

After some searching, I found WinampRC, and it fits the remote control solution perfectly, especially as it is super lightweight! The only real problem I have with it is that its playlist editor is rather underdeveloped, and it’s hard to add music, especially in batches. Another minor problem is that there are no global keyboard shortcuts :-(, but I can fix this later with other software through macros. All in all though, I’m very happy with it :-).


[Edit on 2008-09-03 @ 7:34am]

Unfortunately, one other semi-major problem has crept up with the program, and it will be a hard one, if not impossible, to diagnose. Sometimes a few seconds after switching over to a new song, it automatically skips to the next song on the list. I can only assume this is because it has improperly measured audio playback times and thinks the previous song finished after it already did. This isn’t as bad as it could be though, and is only occasional, so I won’t be looking for another solution just yet.


[Edit on 2008-09-06 @ 4:30pm]

Ok, just using a normal keyboard, with a PS/2 extension cord, hooked up to the computer to issue shortcuts ~.~ . At least I don’t have to keep the TV on still.

Microsoft IIS Bug
Bad Programming: Only using file extensions as an indicator

According to a Microsoft KB article titled “Virtual directory names with executable extensions are not used correctly”, using a virtual folder ending in an executable extension (like .com, .exe, .dll, or .sh) under the web server for IIS [Microsoft’s Internet information services server suite] makes the contents inside the folder unviewable. This behavior itself is kind of silly, as you’d assume a web server would always check to see if something was a file or folder first.

Unfortunately, this doesn’t apply to just virtual folders, but all folders under an IIS web server, as I found out a few years ago when I backed up a site that I knew would be taken down very soon (ironically, because the company [SysInternals] was being taken over by Microsoft) and mirrored it on my Home Server, which runs IIS.

The solution I used was to add a character (in my case an underscore “_”) to the end of all the directory names ending in “.com” and then doing a global regular expression replace through all files in the mirror to replace any occurrences of these directories.


Search For: “(DOMAIN1|DOMAIN2|DOMAIN3)([\\/])”
Replace With: “$1_/$2”

I still plan on getting up some site mirrors of places that no longer exist and such for the miscellaneous section one of these days...

Core Dump Files
Not all OSs crash in the same way :-)

If you ever find a file named “core.#” when running Linux, where # is replaced by a number, it means something crashed at some point. Most of the time, you will probably just want to delete the file, but sometimes you may wonder what crashed. To do this, you use gdb (The GNU debugger), a very powerful tool, to analyze the core dump file.

gdb --core=COREFILENAME

Near the very bottom of the blob of outputted text after running this command, you should see a line that says “Core was generated by `...'.”. This tells you the command line of what crashed. To exit gdb, enter “quit”. You can also use gdb to find out what actually happened and troubleshoot/debug the problem, but that’s a very long and complex topic.


Recently, I started seeing hundreds of core dump files taking up gigabytes of space showing up in “/usr/local/cpanel/whostmgr/docroot/” on several of our web servers. According to several online sources, it seems cPanel (web hosting made easy!) likes to dump many, if not all, of its programs' core files into this directory. In our case, it has been “dnsadmin” doing the crashing. We’ve been having some pretty major DNS problems lately, these being on the name server level, so I may have to rebuild our DNS cluster in the next few days. Joy.
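If you just want to find these core files and clear them out, something like the following should do it (a sketch; run the first line to review what matches before deleting anything):
find /usr/local/cpanel/whostmgr/docroot/ -name 'core.*' -print #List the core dump files first
find /usr/local/cpanel/whostmgr/docroot/ -name 'core.*' -delete #Then remove them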

Diagnosing DNS Problems
Digging until you find the root

Yesterday I wrote a bit about the DNS system being rather fussy, so I thought today I’d go a bit more into how DNS works, and some good tools for problem solving in this area.


First, some technical background on the subject is required.
  • A network is simply a group of computers hooked together to communicate with each other. In the old days, all networking was done through physical wires (called the medium), but nowadays much of it is done through wireless connections. Wired networking is still required for the fastest communications, and is especially important for major backbones (the super highly utilized lines that connect networks together across the world).
  • A LAN is a local network of all computers connected together in one physical location, whether it be a single room, a building, or a city. Technically, a LAN doesn’t have to be localized in one area, but it is preferred, and we will just assume it is so for argument’s sake :-).
  • A WAN is a Wide (Area) Network that connects multiple LANs together. This is what the Internet is.
  • The way one computer finds another computer on a network is through its IP Address [hereby referred to as IPs in this post only]. There are other protocols, but this (TCP/IP) is by far the most widely utilized and is the true backbone of the Internet. IPs are like a house’s address (123 Fake Street, Theoretical City, Made Up Country). To explain it in a very simplified manner (this isn’t even remotely accurate, as networking is a complicated topic, but this is a good generalization), IPs have 4 sections of numbers ranging from 0-255 (1 byte). For example, 67.45.32.28 is a (version 4) IP. Each number in that address is a broader location, so the “28” is like a street address, “32” is the street, “45” is the city, and “67” is the country. When you send a packet from your computer, it goes to your local (street) router which then passes it to the city router and so on until it reaches its destination. If you are in the same city as the final destination of the packet, then it wouldn’t have to go to the country level.
  • The final important part of networking (for this post) is the domain name system (DNS) itself. A domain is a label for an IP Address, like calling “1600 Pennsylvania Avenue” “The White House”. As an example, “www.castledragmire.com” just maps to my web server at “209.85.115.128” (this is the current IP, it will change if the site is ever moved to a new server).

Next is a brief lesson on how DNS itself works:
  • The root DNS servers (a.root-servers.net through m.root-servers.net) point to the servers that hold top-level-domain information (.com, .org., .net, .jp, etc)
    Examples of these servers are as follows:
    .au - ns1.audns.net.au
    .biz - E.GTLD.biz
    .ca - CA04.CIRA.ca
    .cn - A.DNS.cn
    .com & .net - A.GTLD-SERVERS.NET
    .de - Z.NIC.de
    .eu - U.NIC.eu
    .info - B9.INFO.AFILIAS-NST.ORG
    .org - TLD1.ULTRADNS.NET
    .tv - C5.NSTLD.COM
  • Next, these top-level-domain name servers (like A.GTLD-SERVERS.NET through M.GTLD-SERVERS.NET for .com) hold two main pieces of information for ALL domains under their jurisdiction:
    • The registrar where the domain was registered
    • The name server(s) that are responsible for the domain
    Only registrars can talk to these servers, so you have to go through your registrar to change the name server information.
  • The final lowest rung in the DNS hierarchy is name servers. Name servers hold all the actual addressing information for a domain and can be run by anyone. The 2 most important (or maybe relevant is a better word...) types of DNS records are:
    • A: There should be many of these, each pointing a domain or subdomain (castledragmire.com, www.castledragmire.com, info.castledragmire.com, ...) to a specific IP address (version 4)
    • SOA: Start of Authority - There is only one of these records per domain, and it specifies authoritative information including the primary name server, the domain administrator’s email, the domain serial number, and several timeout values relating to refreshing domain information.

Now that we have all the basics down, on to the actual reason for this post. It’s really a nuisance trying to explain to people why their domain isn’t working, or is pointing to the wrong place. So here’s why it happens!

Back in the old days, it often took days for DNS propagation to happen after you made changes at your registrar or elsewhere, but fortunately, this problem is a thing of the past. The reason for this is that ISPs and/or routers cached domain lookups and only refreshed them according to the metrics in the SOA record mentioned above, as they were supposed to. This was done for network speed reasons, as I believe older OSs might not have cached domains (wild speculation), and ISPs didn’t want to look up the address for a domain every time it was requested. Now, though, I rarely see caching on any level except at the local computer; not only on the OS level, but even some programs cache domains, like Firefox.

So the answer for when a person is getting the wrong address for a domain, and you know it is set correctly, is usually to just reboot. Clearing the DNS cache works too (for the OS level), but explaining how to do that is harder than saying “just reboot” ^_^;.

To clear the DNS cache in XP, enter the following into your “run” menu or in the command prompt: “ipconfig /flushdns”. This does not ALWAYS work, but it should work.


If your domain is still resolving to the wrong address when you ping it after your DNS cache is cleared, the next step is to see what name servers are being used for the information. You can do a whois on your domain to get the information directly from the registrar who controls the domain, but be careful where you do this as you never know what people are doing with the information. For a quick and secure whois, you can use “whois” from your Linux command line, which I have patched through to a web script here. This script gives both normal and extended information, FYI.

Whois just tells you the name servers that you SHOULD be contacting; it doesn’t mean these are the ones you are actually asking, as the root DNS servers may not have updated the information yet. This is where our command line programs come into play.

In XP, you can use “nslookup -query=hinfo DOMAINNAME” and “nslookup -query=soa DOMAINNAME” to get a domain’s name servers, and then “nslookup NAMESERVER DOMAINNAME” to get the IP the name server points to. For example: (Important information in the following examples is bolded and in white)

C:\>nslookup -query=hinfo castledragmire.com
Server:  dns-redirect-lb-01.texas.rr.com
Address:  24.93.41.127

castledragmire.com
        primary name server = ns3.deltaarc.com
        responsible mail addr = admins.deltaarc.net
        serial  = 2007022713
        refresh = 14400 (4 hours)
        retry   = 7200 (2 hours)
        expire  = 3600000 (41 days 16 hours)
        default TTL = 86400 (1 day)

C:\>nslookup -query=soa castledragmire.com
Server:  dns-redirect-lb-01.texas.rr.com
Address:  24.93.41.127

Non-authoritative answer:
castledragmire.com
        primary name server = ns3.deltaarc.com
        responsible mail addr = admins.deltaarc.net
        serial  = 2007022713
        refresh = 14400 (4 hours)
        retry   = 7200 (2 hours)
        expire  = 3600000 (41 days 16 hours)
        default TTL = 86400 (1 day)

castledragmire.com      nameserver = ns4.deltaarc.com
castledragmire.com      nameserver = ns3.deltaarc.com
ns3.deltaarc.com        internet address = 216.127.92.71

C:\>nslookup ns3.deltaarc.com castledragmire.com
Server:  ev1s-209-85-115-128.theplanet.com
Address:  209.85.115.128

Name:    ns3.deltaarc.com
Address:  216.127.92.71

Nslookup is also available in Linux, but Linux has a better tool for this, as nslookup itself doesn’t always seem to give the correct answers, for some reason. So I recommend you use dig if you have it or Linux available to you. So with dig, we just start at the root name servers and work our way up to the SOA name server to get the real information of where the domain is resolving to and why.

root@www [~]# dig @a.root-servers.net castledragmire.com

; <<>> DiG 9.2.4 <<>> @a.root-servers.net castledragmire.com
; (2 servers found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 5587
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 13, ADDITIONAL: 14

;; QUESTION SECTION:
;castledragmire.com.            IN      A

;; AUTHORITY SECTION:
com.                    172800  IN      NS      H.GTLD-SERVERS.NET.
com.                    172800  IN      NS      I.GTLD-SERVERS.NET.
com.                    172800  IN      NS      J.GTLD-SERVERS.NET.
com.                    172800  IN      NS      K.GTLD-SERVERS.NET.
com.                    172800  IN      NS      L.GTLD-SERVERS.NET.
com.                    172800  IN      NS      M.GTLD-SERVERS.NET.
com.                    172800  IN      NS      A.GTLD-SERVERS.NET.
com.                    172800  IN      NS      B.GTLD-SERVERS.NET.
com.                    172800  IN      NS      C.GTLD-SERVERS.NET.
com.                    172800  IN      NS      D.GTLD-SERVERS.NET.
com.                    172800  IN      NS      E.GTLD-SERVERS.NET.
com.                    172800  IN      NS      F.GTLD-SERVERS.NET.
com.                    172800  IN      NS      G.GTLD-SERVERS.NET.

;; ADDITIONAL SECTION:
A.GTLD-SERVERS.NET.     172800  IN      A       192.5.6.30
A.GTLD-SERVERS.NET.     172800  IN      AAAA    2001:503:a83e::2:30
B.GTLD-SERVERS.NET.     172800  IN      A       192.33.14.30
B.GTLD-SERVERS.NET.     172800  IN      AAAA    2001:503:231d::2:30
C.GTLD-SERVERS.NET.     172800  IN      A       192.26.92.30
D.GTLD-SERVERS.NET.     172800  IN      A       192.31.80.30
E.GTLD-SERVERS.NET.     172800  IN      A       192.12.94.30
F.GTLD-SERVERS.NET.     172800  IN      A       192.35.51.30
G.GTLD-SERVERS.NET.     172800  IN      A       192.42.93.30
H.GTLD-SERVERS.NET.     172800  IN      A       192.54.112.30
I.GTLD-SERVERS.NET.     172800  IN      A       192.43.172.30
J.GTLD-SERVERS.NET.     172800  IN      A       192.48.79.30
K.GTLD-SERVERS.NET.     172800  IN      A       192.52.178.30
L.GTLD-SERVERS.NET.     172800  IN      A       192.41.162.30

;; Query time: 240 msec
;; SERVER: 198.41.0.4#53(198.41.0.4)
;; WHEN: Sat Aug 23 04:15:28 2008
;; MSG SIZE  rcvd: 508

root@www [~]# dig @a.gtld-servers.net castledragmire.com

; <<>> DiG 9.2.4 <<>> @a.gtld-servers.net castledragmire.com
; (2 servers found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35586
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;castledragmire.com.            IN      A

;; AUTHORITY SECTION:
castledragmire.com.     172800  IN      NS      ns3.deltaarc.com.
castledragmire.com.     172800  IN      NS      ns4.deltaarc.com.

;; ADDITIONAL SECTION:
ns3.deltaarc.com.       172800  IN      A       216.127.92.71
ns4.deltaarc.com.       172800  IN      A       209.85.115.181

;; Query time: 58 msec
;; SERVER: 192.5.6.30#53(192.5.6.30)
;; WHEN: Sat Aug 23 04:15:42 2008
;; MSG SIZE  rcvd: 113

root@www [~]# dig @ns3.deltaarc.com castledragmire.com

; <<>> DiG 9.2.4 <<>> @ns3.deltaarc.com castledragmire.com
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 26198
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0

;; QUESTION SECTION:
;castledragmire.com.            IN      A

;; ANSWER SECTION:
castledragmire.com.     14400   IN      A       209.85.115.128

;; AUTHORITY SECTION:
castledragmire.com.     14400   IN      NS      ns4.deltaarc.com.
castledragmire.com.     14400   IN      NS      ns3.deltaarc.com.

;; Query time: 1 msec
;; SERVER: 216.127.92.71#53(216.127.92.71)
;; WHEN: Sat Aug 23 04:15:52 2008
;; MSG SIZE  rcvd: 97

Linux also has the “host” command, but I prefer and recommend “dig”.
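For example, the equivalent of that last dig query with host would look something like:
host castledragmire.com ns3.deltaarc.com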


And that’s how you diagnose DNS problems! :-). For reference, two common DNS configuration problems are not having your SOA and NS records properly set for the domain on your name server.


I also went ahead and added dig to the “Useful Bash commands and scripts” post.

Computers are Evil
Setting up new computers can be quite the hassle

The new home server for the new entertainment center I recently set up has made itself out to be quite a nuisance. I am unsure as to whether I will keep using it or not, but fortunately, I have not yet taken down my old home server, as I wanted to do some break in testing on the new one first.

Setting up new computers is almost always a pain in the ass, what with installing and configuring all the software from scratch (which always includes a format and new OS), and making sure all the hardware works properly and finding drivers for it (sometimes when you don’t even have the proper information on what that hardware is). But sometimes, computers can go above and beyond the normal setup nuances and annoyances and be downright evil. I have long proclaimed to people that computers have personalities and minds of their own and they decide when and where they want to be accommodating or uncooperative. Besides all the normal computer setup problems (including not knowing what the hardware was and having to figure that out), this one also had a few more doozies.

The first big problem started with the fact that I wanted to use this computer for video output, and it does not have an AGP slot. As I contemplated in the previous post on this topic, I went ahead and bought a PCI GeForce 5200 for $27.79 including shipping. The card did not fit properly in the new case, so I had to unscrew a few things, which were fortunately designed for just that reason. Then the big problem came up: video output from the s-video port on the card showed up on the TV zoomed in by about 50%, so I couldn’t see half the screen. I couldn’t test the monitor output port either because it is DVI, and I have no DVI monitors, alas. After 2 or 3 hours of tinkering with it and throwing everything plus the kitchen sink at the problem, including trying a different s-video cable, I finally stumbled on the solution and got it working, yay. That is... until after I rebooted and it wasn’t working again x.x;. Another 20 or so minutes of tinkering got it fixed again, and I was able to quickly home in on a procedure to fix the problem on the next reboot, optimizing it with each successive reboot over the next few days. The procedure is as follows: (the TV over s-video starts as the primary monitor, and I have a second monitor connected to the VGA port of the onboard graphics card)

  • Open “Display Properties” [Right click Desktop > Properties] > Settings
  • Attach second monitor so I can see what I’m doing
  • Open NVidia Control Panel
  • Rotate screen to 90 degrees. It only wants to rotate the screen at 1024x768, which is too high a resolution for the TV, so it kicks the resolution down to 640x480 while rotating
  • Keep setting the screen to no rotation (0 degrees) until the scaling is correct [usually twice]. The NVidia control panel doesn’t want to allow going back to normal rotation now due to the 1024x768 required resolution thing, and will keep the setting set as 90 degrees, so the process can easily be repeated until it works.
  • Now that the screen is at the correct scale (at 640x480), all that’s left is to get the rotation back to normal. To do this, immediately after accepting the rotation process in the NVidia Control Panel, it has to be closed out (alt+f4) so that it saves the rotation setting at 0 degrees but doesn’t try to set it back after all the resolution changes.
  • Raise the resolution back to 800x600
  • Detach secondary monitor now that it is no longer needed

The screen still unfortunately has about 100-200 “pixels” (CRTs don’t technically have pixels) of unused space at the top and bottom, but eh, NBD. At least this graphics card lets me properly pan and scan (zoom/scale and move) the s-video output around, unlike my GeForce4 Ti 4600! The next problem with the video card is that some video output from it is just too slow; most content is watchable, but for the rest the choppiness makes it unbearable. The problem might just be that the PCI bus doesn’t have the required throughput, which is why most video cards run over AGP (or nowadays PCI Express).
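
For some rough numbers on that theory (theoretical peaks, and plain PCI bandwidth is shared among every device on the bus): 32-bit/33MHz PCI tops out around 133MB/s, while AGP 8x is roughly 2.1GB/s. Uncompressed 640x480 32-bit video at 30fps already works out to about 35MB/s (640*480*4*30 bytes), so between bus sharing and overhead, choppy playback over PCI wouldn’t be surprising.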

There are two more problems with it, one a possible deal killer, the other rather insignificant. The unimportant problem is that XP refuses to install updates; I believe this to be a problem with SP3. The possible deal killer is that the computer seems to randomly completely freeze up every now and then for no particular reason, requiring a reboot. This has happened 2 or 3 times so far, so I’m waiting to see how often it happens, if at all anymore. I know it’s not overheating as I currently have the case open; and I see no blown capacitors... hmmmm...



<frustration>Computers!</frustration>
Multiple Windows XP Sessions
Making XP act like Windows Server

All of the Windows lines of OSs from XP through Windows Server 2003 (and 2003 R2) are, to my knowledge and IMO, basically the exact same thing, with just some minor tweaks and extra software for the more expensive versions. My version of XP Professional even comes with IIS (Internet Information Services - Microsoft’s web/ftp/mail server suite). One of my favorite XP hacks adds on a desperately needed piece of functionality found only in the Windows Server editions: allowing multiple user sessions on a machine at once. This basically means allowing multiple people to log onto a machine at the same time through Remote Desktop (Microsoft’s built-in Windows remote access system, similar in purpose to VNC). I find the most useful function of this by far is the “Remote Control” feature, which allows a second logged-in user to see exactly what is on the screen of another session and, if permissions are given, to take control of it. This is perfect for those people whom you often have to troubleshoot computer problems for, eliminating the need for a trip to their location or 3rd party software to view their computer.

This hack requires a few registry modifications, a group policy modification, and a DLL replacement. The DLL replacement was provided by Microsoft in early versions of XP SP2 when they were tinkering with allowing this feature in XP. I found the information for all this here a number of years ago and it has proven itself invaluable since. Unfortunately, this does not work on XP Home edition, just XP Professional. I tried adapting it once and wasted a lot of time :-\. The following is the text from where I got this hack.

Concurrent Remote Desktop Sessions in Windows XP SP2

I mentioned before that Windows XP does not allow concurrent sessions for its Remote Desktop feature. What this means is that if a user is logged on at the local console, a remote user has to kick him off (and ironically, this can be done even without his permission) before starting work on the box. This is irritating and removes much of the productivity that Remote Desktop brings to Windows. Read on to learn how to remove that limitation in Windows XP SP2.

A much-touted feature in SP2 (Service Pack 2) was the ability to do just this: have a user logged on locally while another connects to the terminal remotely. Microsoft, however, removed the feature in the final build. The reason probably is that the EULA (End User License Agreement) allows only a single user to use a computer at a time. This is (IMHO) a silly reason to curtail Remote Desktop’s functionality, so we’ll use a workaround.

Microsoft did try out the feature in earlier builds of Service Pack 2 and it is this that we’re going to exploit here. We’re going to replace termsrv.dll (The Terminal Server) with one from an earlier build (2055).

To get Concurrent Sessions in Remote Desktop working, follow the steps below exactly:

  1. Download the termsrv.zip file and extract it somewhere.
  2. Reboot into Safe Mode. This is necessary to remove Windows File Protection. [Dakusan: I use unlocker for this, which I install on all my machines as it always proves itself useful, and then usually have to do a “shutdown -a” from command line when XP notices the DLL changed.]
  3. Copy the termsrv.dll in the zip to %windir%\System32 and %windir%\ServicePackFiles\i386. If the second folder doesn’t exist, don’t copy it there. Delete termsrv.dll from the dllcache folder: %windir%\system32\dllcache
  4. Merge the contents of Concurrent Sessions SP2.reg file into the registry. [Dakusan: Just run the .reg file and tell XP to allow the action.]
  5. Make sure Fast User Switching is turned on. Go Control Panel -> User Accounts -> Change the way users log on or off and turn on Fast User Switching.
  6. Open up the Group Policy Editor: Start Menu > Run > ‘gpedit.msc’. Navigate to Computer Configuration > Administrative Templates > Windows Components > Terminal Services. Enable ‘Limit Number of Connections’ and set the number of connections to 3 (or more). This enables you to have more than one person remotely logged on.
  7. Now reboot back into normal Windows and try out whether Concurrent Sessions in Remote Desktop works. It should!

If anything goes wrong, the termsrv_sp2.dll is the original file you replaced. Just rename it to termsrv.dll, reboot into safe mode and copy it back.

The termsrv.dl_ file provided in the zip is for you slipstreamers out there. Just replace the corresponding file in the Windows installation disks with it.



I have included an old copy of the above web page, from when I first started distributing this, along with the hack’s files in the zip file I provide.

If you want to Remote Control another session, I think the user needs to be part of the “Administrators” group, and don’t forget to add any users that you want to be able to remotely log on to the “Remote Desktop Users” group.

This is all actually part of an “Enhanced Windows XP Install” document I made years ago that I never ended up releasing because I hadn’t finished cleaning it up. :-\ One of these days I’ll get it up here. Some of the information pertaining to this hack from that document is as follows:

  • Any computer techy out there that has tried to troubleshoot over the phone knows how much of a problem/pain in the anatomy it is, and for this reason, I install this hack, which makes it painless to automatically connect to a user’s computer through Remote Desktop, where their displayed console session can then be remotely viewed or controlled.
  • I often use this hack myself when I am running computers without keyboards/mice, like my entertainment computer. For a permanent solution for something like this though, I recommend a KM (Keyboard/Mouse) solution like synergy, which allows manipulating one computer via a keyboard and mouse on another.
  • Your user account password must also not be blank. Blank passwords often cause problems with remote services.
  • The security risk for this is that a port is opened for someone to connect to, like telnet or SSH on Unix, which is a minimal risk unless someone has your username+password.
  • You have to have a second username to log into, which can be done under Control Panel > User Accounts, or Control Panel > Administrative Tools > Computer Management > System Tools > Local Users and Groups.
  • If you want the second user to be able to log in remotely, make sure to add them under Control Panel > System > Remote > Select Remote users, and also check “allow users to connect remotely to this computer”.
  • You also need to know the IP address of the user’s computer you want to connect to, and unfortunately, they are not always static. If you run into this, you may want to use a DDNS service like mine.
  • You may also run into the unfortunate circumstance of NAT/Firewalled networks, which is beyond the scope of this document. Long story short, you need to open up port 3389 in the firewall and forward it through the router/NAT (this is the port for both remote desktop and remote assistance).
  • You may also want to change the port number to something else so a port scanner will not pick it up. To connect to a different port, on the client computer, in remote desktop, you connect to COMPUTERIP:PORT like www.mycomputer.com:5050.
    • Registry Key: HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber - Set as your new port number (see the example command after this list)
    • This requires a reboot to work.
    • Make sure you don’t provide a port that’s already used on the computer, and you probably shouldn’t make it a standard port either (21 [ftp], 25 [smtp], 80 [http], etc.)
  • You can also log into their current console session by going to the task manager (ctrl+shift+esc in full screen, or right click taskbar and go to “task manager”) > Users > Right click username > Remote control
    • This will ask the user at the computer if they want to accept this. To have it NOT ask them, do the following:
      • Start > Run > gpedit.msc [enter] > computer configuration > administrative templates > windows components > terminal services
      • Double click the option “Sets rules for remote control of terminal services user sessions”
      • Enable it, and for the “Options” setting, set “Full Control without user’s permission”
  • If the ability for you to access a client’s computer without their immediate permission or knowledge is too “dangerous” for their taste, you may suggest/use Remote Assistance, which is more troublesome, but much more “secure” sounding.
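
As a final aside on the port-change tweak from the list above, the same registry edit can be made from the command line instead of through regedit. This is just a sketch (5050 is only an example port, and the reboot is still required afterwards):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 5050 /f
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber

The second command just reads the value back so you can confirm the change took before rebooting.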
New Entertainment Center
Missed a day of posting :'(

Doh, I wanted to try posting at least once a day all month and I missed out yesterday because I was just too busy with other stuff, alas.

I had lots of work, mostly house work, to catch up on, culminating in transporting and setting up a new entertainment system, wee. I had to wire the darn thing twice because the first setup really didn’t work well and would have been damaging to the cords; and it takes like an hour each time to do the wiring. Grrrr, oh well.

I originally had all my entertainment stuff set up in my living room, as I assumed that I was going to have guests and would want to watch stuff out there with them on the couch. This has unfortunately turned out not to be the case in this apartment, and I usually only end up going to the living room couch to snuggle with my cat, as she will only get close to me on the couch for some reason, while I have the TV on in the background as I work. Now I can have stuff running in the background whenever I want from my room, and can use the surround sound speakers to play my music, as I am getting tired of my crappy (though better than many) laptop speakers, and headphones are more cumbersome than not.

I bought the new parts of my entertainment system from some [married] friends who finished moving yesterday to a one bedroom in downtown Austin that couldn’t even remotely hold all their stuff, so they were desperately trying to get rid of a lot of it in a hurry. I got a 32" CRT TV [$50], the black shelving unit w/ glass enclosure [$50], and an older computer (512MB of RAM, 780MHz CPU, 80GB hard drive) [$20] to use as a new Home Server. All of these were at about 20% of cost, yay :-).

I really needed the new TV, as my current 32" TV is badly scratched up from being dropped on cement a few years back. I am leaving the old TV hooked up to my current Home Server, which I will now be using as just a multimedia station for that TV and a place for extra backup hard drives. I wanted the “new” computer as a new Home Server, as the current one uses way too much power for what it’s being used for, and is too loud to keep in my room. I’ll just be turning the old one on when I need it now instead, for backups and for watching videos that require more CPU power than the new computer can handle.

The new computer is quite tiny and has no AGP slots, just 2 PCI32 slots. I am therefore looking into getting a $25 (w/ shipping) e-GeForce 5200 PCI card for tv-out. I hope it fits in the small case, because if not, I will just have to leave the case cover off.


Click for full size (I love my new camera ^_^ )
Entertainment System
Always Confirm Potentially Hazardous Actions
Also treat what others tell you with discretion

So I have been having major speed issues with one of our servers. After countless hours of diagnosis, I determined the bottleneck was always I/O (input/output, accessing the hard drive). For example, when running an MD5 hash on a 600MB file, load would jump up to 31 with 4 logical CPUs and it would take 5-10 minutes to complete. When performing the same test on the same machine on a second drive, it finished within seconds.
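
The test itself was nothing fancy; roughly the following (the file path is just a placeholder, and on a healthy drive the hash should finish in seconds rather than minutes):

time md5sum /path/to/600MB.file    # placeholder path; any large file on the suspect drive works
uptime                             # run in a second terminal while the hash runs to watch the load average climb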

Replacing the hard drive itself is a last resort for a live production server, and a friend suggested the drive controller could be the problem, so I confirmed that the drive controller for our server was not on-board (it is on its own card), and I attempted to convince the company hosting our server of the problem so they would replace the drive controller. I ran my own tests first with an iostat check while doing a read of the main hard drive (cat /dev/sda > /dev/null). This produced steadily worsening results the longer the test went on, and always much worse than our secondary drive. I passed these results on to the hosting company, and they replied that a “badblocks -vv” produced results that showed things looked fine.
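
For reference, that check was roughly the following (iostat comes from the sysstat package; the 5-second interval is arbitrary):

iostat -x 5                 # in one terminal: extended per-device stats every 5 seconds
cat /dev/sda > /dev/null    # in another: a long sequential read of the suspect drive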

So I was about to go run his test to confirm his findings, but decided to check the parameters first, as I always like to do before running new Linux commands. Thank Thor I did. The admin had meant to write “badblocks -v” (verbose) and typoed with a double keystroke. The two v’s looked like a w due to the font, and had I run a “badblocks -w” (write-mode test), I would have wiped out the entire hard drive.
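
For anyone unfamiliar with badblocks, the difference between the two invocations (flag meanings per its man page) is worth spelling out; always double-check which one you are about to run:

badblocks -v /dev/sda    # default read-only test, verbose; does not modify data
badblocks -w /dev/sda    # write-mode test; overwrites every block on the device, destroying all data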

Anyways, the test output the same basic results as my iostat test, with throughput very quickly decreasing from a remotely acceptable level to almost nil. Of course, the admin had only looked at the best results of the test, ignoring the rest.

I had them swap out the drive controller anyways, and it hasn’t fixed things, so a hard drive replacement will probably be needed soon. This kind of problem would be trivial if I had access to the server and could just test the hardware myself, but that is the price you pay for proper security at a server farm.