Mounting a TrueCrypt system dd image in Linux

The following is a tutorial on mounting a dd image of a TrueCrypt system-level-encrypted volume. This tutorial was tested and written against Ubuntu 16.04.2 LTS.


Trying to mount your loopback device with losetup or mount doesn’t quite work. If you try, you’ll get an error like the following:

No such file or directory:
/sys/block/loop2/loop2p2/start
VeraCrypt::File::Open:276


Instead, use sudo kpartx -va IMAGE_FILENAME. This will give you something like the following:

add map loop2p1 (253:0): 0 204800 linear 7:2 2048
add map loop2p2 (253:1): 0 976564224 linear 7:2 206848
This shows you the partitions in the image and the device-mapper nodes they are mapped to. In my case, loop2 and loop2p2, which I will continue using for the rest of this tutorial.
So this gives you the following devices:
  • /dev/loop2: The primary loopback device
  • /dev/mapper/loop2p*: The partition loopback devices (in my case, loop2p1 and loop2p2)


If you attempt to mount loop2p2 as a system partition with TrueCrypt or VeraCrypt, you will get the error “Partition device required”, no matter what password you give.
To fix this, we need to get loop2p2 to show up in /dev and make an edit to the VeraCrypt source code.


You can run the following command to see the loopback partition devices and their sizes. This is where I am pulling loop2p2 from.

lsblk /dev/loop2
This will give the following:
NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop2       7:2    0 465.8G  1 loop 
├─loop2p2 253:1    0 465.7G  1 part 
└─loop2p1 253:0    0   100M  1 part 
	

Run the following command to create /dev/loop2p* block devices:

sudo partx -a /dev/loop2
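
To confirm the partition block devices were created, you can list them (using my loop2 example):

ls -l /dev/loop2p*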


Run the following commands to download and compile VeraCrypt:

sudo apt-get install git yasm libfuse-dev libwxgtk3.0-dev #yasm requires universe repository
git clone https://github.com/veracrypt/VeraCrypt
cd VeraCrypt/src
nano Platform/Unix/FilesystemPath.cpp #You can use the editor of your choice for this

In Platform/Unix/FilesystemPath.cpp make the following change:
After the following 2 lines of code in the FilesystemPath::ToHostDriveOfPartition() function, currently Line 78:
#ifdef TC_LINUX
        path = StringConverter::StripTrailingNumber (StringConverter::ToSingle (Path));
Add the following:
        //For loop devices, also strip the trailing 'p' left by StripTrailingNumber, so /dev/loop2p2 maps back to /dev/loop2
        string pathStr = StringConverter::ToSingle (path);
        size_t t = pathStr.find("loop");
        if(t != string::npos)
            path = pathStr.substr (0, pathStr.size() - 1);
Then continue to run the following commands to finish up:
make
Main/veracrypt -m system,ro,nokernelcrypto -tc /dev/loop2p2 YOUR_MOUNT_LOCATION


VeraCrypt parameter information:

  • If you don’t include the “nokernelcrypto” option, you will get the following error:
    device-mapper: reload ioctl on veracrypt1 failed: Device or resource busy
    Command failed
    
  • The “ro” option mounts the volume read-only
  • “-tc” tells VeraCrypt the volume was created with TrueCrypt (not VeraCrypt)

  • Doing this in Windows is a lot easier. You just need to use a program called Arsenal Image Mounter to mount the drive, and then mount the partition in TrueCrypt (or VeraCrypt).
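
When you are done with the volume, a minimal teardown sketch (assuming the same loop2 device names as above, run from the VeraCrypt src directory):

Main/veracrypt -d /dev/loop2p2   #Dismount the volume
sudo partx -d /dev/loop2         #Remove the /dev/loop2p* block devices
sudo kpartx -d IMAGE_FILENAME    #Remove the /dev/mapper/loop2p* mappings and free the loop device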

Slackware 64 Linux Install on UEFI

I recently tried to install Slackware64 14.2 (Linux) onto a new mini PC I just bought. The new PC only supports UEFI, so I had major issues getting the darn setup on the install media to actually run. I never DID actually get the install CD to boot properly on the system, so I used an alternative. While the Slackware install USB key was plugged in, I also inserted and booted an Ubuntu live USB key. The following is what I used to run the Slackware setup from within Ubuntu.

#Login as root
#sudo su

#Settings
InstallDVDName=SlackDVD #This is whatever you named your slackware usb key

#/mnt will contain the new file system for running the setup
cd /mnt

#Extract the initrd.img from the slackware dvd into /mnt
cat /media/ubuntu/$InstallDVDName/isolinux/initrd.img | gzip -d | cpio -i

#Bind special linux directories into the /mnt folder
for i in proc sys dev tmp; do mount -o bind /$i ./$i; done

#Mount the cdrom folder into /mnt/cdrom
rm cdrom
mount -o bind /media/ubuntu/$InstallDVDName/ ./cdrom

#Set /mnt as our actual (ch)root
chroot .

#Run the slackware setup
usr/lib/setup/setup

#NOTE: When installing, your package source directory is at /cdrom/slackware64
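
#After the setup finishes, a cleanup sketch for the bind mounts (assuming the same paths as above):
#Leave the chroot with "exit", then from /mnt run:
umount ./cdrom
for i in proc sys dev tmp; do umount ./$i; done
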
Blacklisting DNS Server on Amazon EC2

Amazon EC2 is a great resource for cheap virtual servers to do simple things, like DNS or (low-bandwidth) VPNs. I had the need this morning to set up a DNS server for a company which needed to blacklist a list of domains. The simplest way to do this is by editing every computer’s hosts file, but that method leaves a lot to be desired: it can’t block entire domains (only individual hostnames), and deploying changes to every machine is a pain. Centralizing the blacklist on a single DNS server makes changes immediate and, in the end, faster.

The following are the steps I used to set this up on an EC2 server. All command line instructions are followed by a single command you can run to execute the step. There is a full script below, at the end of the post, containing all the steps from when you first log in over SSH ("Login to root") to the end.


I am not going to go into the details of setting up an EC2 instance, as that information can be found elsewhere. I will also be skipping over some of the more obvious steps. Just create a default EC2 instance with the “Amazon Linux AMI”, and I will list all the changes that need to be made beyond that.

  • Creating the instance
    • For the first year, for the instance type, you might as well use a t2.micro, as it is free. After that, a t2.nano (a newer, lower tier), currently $56.94/year ($0.0065/hour), should be fine.
    • After you select your instance type, click “Review and Launch” to launch the instance with all of the defaults.
    • After the confirmation screen, it will ask you to create a key pair. You can see other tutorials about this and how it enables you to log into your instance.
  • Edit the security group
    • Next, you need to edit the security group for your instance to allow incoming connections.
    • Go to “Instances” under the “Instances” group on the left menu, and click your instance.
    • In the bottom of the window, in the “Descriptions” tab, click the link next to “Security Groups”, which will bring you to the proper group in the security groups tab.
    • Right click it and “Edit inbound Rules”.
    • Make sure it has the following rules with Source=Anywhere: ALL ICMP [For pinging], SSH, HTTP, DNS (UDP), DNS (TCP)
  • Assign a permanent IP to your instance
    • To do this, click the “Elastic IPs” under “Network & Security” in the left menu.
    • Click “Allocate New Address”.
    • After creating it, right click the new address, then “Associate Address”, and assign it to your new instance.
  • You should probably set this IP up as an A record somewhere. I will refer to this IP as dns.yourdomain.com from now on.
  • Login to root
    • SSH into your instance as the ec2-user via “ssh ec2-user@dns.yourdomain.com”. If on Windows, you can also use PuTTY.
    • Sudo into root via “sudo su”.
  • Allow root login
    • At this point, I recommend setting things up so you can log directly into the server as root. Warning: some people consider this a security risk.
    • Copy your key pair(s) to the root user via “cat /home/ec2-user/.ssh/authorized_keys > /root/.ssh/authorized_keys”
    • Set SSHD to permit root logins by changing the PermitRootLogin variable to “yes” in /etc/ssh/sshd_config. A quick command to do this is “perl -pi -e 's/^\s*#?\s*PermitRootLogin.*$/PermitRootLogin yes/igm' /etc/ssh/sshd_config”, and then reload the SSHD config with “service sshd reload”. Make sure to attempt to directly log into SSH as root before exiting your current session to make sure you haven’t locked yourself out.
  • Install apache (the web server), bind/named (the DNS server), and PHP (a scripting language)
    • yum -y install bind httpd php
  • Start and set services to run at boot
    • service httpd start; service named start; chkconfig httpd on; chkconfig named on;
  • Set the DNS server to be usable by other computers
    • Edit /etc/named.conf and change the 2 following lines to have the value “any”: “listen-on port 53” and “allow-query”
    • perl -pi -e 's/^(\s*(?:listen-on port 53|allow-query)\s*{).*$/$1 any; };/igm' /etc/named.conf; service named reload;
  • Point the DNS server to the blacklist files
    • This is done by adding “include "/var/named/blacklisted.conf";” to /etc/named.conf
    • echo -ne '\ninclude "/var/named/blacklisted.conf";' >> /etc/named.conf
  • Create the blacklist domain list file
    • touch /var/named/blacklisted.conf
  • Create the blacklist zone file
    • Put the following into /var/named/blacklisted.db . Make sure to change dns.yourdomain.com to your domain (or otherwise, “localhost”), and 1.1.1.1 to dns.yourdomain.com’s (your server’s) IP address. Make sure to keep all periods intact.
      $TTL 14400
      @       IN SOA dns.yourdomain.com. dns.yourdomain.com ( 2003052800  86400  300  604800  3600 )
      @       IN      NS   dns.yourdomain.com.
      @       IN      A    1.1.1.1
      *       IN      A    1.1.1.1
    • The first 2 lines tell the server the domains belong to it. The 3rd line sets the base blacklisted domain to your server’s IP. The 4th line sets all subdomains of the blacklisted domain to your server’s IP.
    • This can be done via (Update the first line with your values)
      YOURDOMAIN="dns.yourdomain.com"; YOURIP="1.1.1.1";
      echo -ne "\$TTL 14400\n@       IN SOA $YOURDOMAIN. $YOURDOMAIN ( 2003052800  86400  300  604800  3600 )\n@       IN      NS   $YOURDOMAIN.\n@       IN      A    $YOURIP\n*       IN      A    $YOURIP" > /var/named/blacklisted.db;
  • Fix the permissions on the blacklist files
    • chgrp named /var/named/blacklisted.*; chmod 660 /var/named/blacklisted.*;
  • Set the server’s domain resolution name servers
    • The server always needs to look at itself before other DNS servers. To do this, comment out everything in /etc/resolv.conf and add to it “nameserver localhost”. This is not the best solution. I’ll find something better later.
    • perl -pi -e 's/^(?!;)/;/gm' /etc/resolv.conf; echo -ne '\nnameserver localhost' >> /etc/resolv.conf
  • Run a test
    • At this point, it’s a good idea to make sure the DNS server is working as intended. So first, we’ll add an example domain to the DNS server.
    • Add the following to /var/named/blacklisted.conf and restart named to get the server going with example.com: “zone "example.com" { type master; file "blacklisted.db"; };”
    • echo 'zone "example.com" { type master; file "blacklisted.db"; };' >> /var/named/blacklisted.conf; service named reload;
    • Ping “test.example.com” and make sure its IP is your server’s IP (a quick dig-based check is sketched just after this list)
    • Set your computer’s DNS to your server’s IP in your computer’s network settings, ping “test.example.com” from your computer, and make sure the returned IP is your server’s IP. If it works, you can restore your computer’s DNS settings.
  • Have the server return a message when a blacklisted domain is accessed
    • Add your message to /var/www/html
    • echo 'Domain is blocked' > /var/www/html/index.html
    • Set all URL paths to show the message by adding the following to the /var/www/html/.htaccess file
      RewriteEngine on
      RewriteCond %{REQUEST_URI} !index.html
      RewriteCond %{REQUEST_URI} !AddRules/
      RewriteRule ^(.*)$ /index.html [L]
    • echo -ne 'RewriteEngine on\nRewriteCond %{REQUEST_URI} !index.html\nRewriteCond %{REQUEST_URI} !AddRules/\nRewriteRule ^(.*)$ /index.html [L]' > /var/www/html/.htaccess
    • Turn on AllowOverride in /etc/httpd/conf/httpd.conf for the document directory (/var/www/html/) via “perl -0777 -pi -e 's~(<Directory "/var/www/html">.*?\n\s*AllowOverride).*?\n~$1 All~s' /etc/httpd/conf/httpd.conf”
    • Restart apache via “service httpd graceful”
  • Create a script that allows apache to refresh the name server’s settings
    • Create a script at /var/www/html/AddRules/restart_named with “/sbin/service named reload” and set it to executable
    • mkdir /var/www/html/AddRules; echo '/sbin/service named reload' > /var/www/html/AddRules/restart_named; chmod 755 /var/www/html/AddRules/restart_named
    • Allow the user to run the script as root by adding to /etc/sudoers “apache ALL=(root) NOPASSWD: /var/www/html/AddRules/restart_named” and “Defaults!/var/www/html/AddRules/restart_named !requiretty”
    • echo -e 'apache ALL=(root) NOPASSWD:/var/www/html/AddRules/restart_named\nDefaults!/var/www/html/AddRules/restart_named !requiretty' >> /etc/sudoers
  • Create a script that allows the user to add, remove, and list the blacklisted domains
    • Add the following to /var/www/html/AddRules/index.php (one line command not given. You can use “nano” to create it)
      <?php
      //Get old domains
      $BlockedFile='/var/named/blacklisted.conf';
      $CurrentZones=Array();
      foreach(explode("\n", file_get_contents($BlockedFile)) as $Line)
              if(preg_match('/^zone "([\w\._-]+)"/', $Line, $Results))
                      $CurrentZones[]=$Results[1];
      
      //List domains
      if(isset($_REQUEST['List']))
              return print implode('<br>', $CurrentZones);
      
      //Get new domains
      if(!isset($_REQUEST['Domains']))
              return print 'Missing Domains';
      $Domains=$_REQUEST['Domains'];
      if(!preg_match('/^[\w\._-]+(,[\w\._-]+)*$/uD', $Domains))
              return print 'Invalid domains string';
      $Domains=explode(',', $Domains);
      
      //Remove domains
      if(isset($_REQUEST['Remove']))
      {
              $CurrentZones=array_flip($CurrentZones);
              foreach($Domains as $Domain)
                      unset($CurrentZones[$Domain]);
              $FinalDomainList=array_keys($CurrentZones);
      }
      else //Combine domains
              $FinalDomainList=array_unique(array_merge($Domains, $CurrentZones));
      
      //Output to the file
      $FinalDomainData=Array();
      foreach($FinalDomainList as $Domain)
              $FinalDomainData[]=
                      "zone \"$Domain\" { type master; file \"blacklisted.db\"; };";
      file_put_contents($BlockedFile, implode("\n", $FinalDomainData));
      
      //Reload named
      print `sudo /var/www/html/AddRules/restart_named`;
      ?>
    • Add the “apache” user to the “named” group so the script can update the list of domains in /var/named/blacklisted.conf via “usermod -a -G named apache; service httpd graceful;”
  • Run the domain update script
    • To add a domain (separate by commas): http://dns.yourdomain.com/AddRules/?Domains=domain1.com,domain2.com
    • To remove a domain (add “Remove&” after the “?”): http://dns.yourdomain.com/AddRules/?Remove&Domains=domain1.com,domain2.com
    • To list the domains: http://dns.yourdomain.com/AddRules/?List
  • Password protect the domain update script
    • Add to AddRules/.htaccess the following
      AuthType Basic
      AuthName "Admins Only"
      AuthUserFile "/var/www/html/AddRules/.htpasswd"
      require valid-user
    • echo -ne 'AuthType Basic\nAuthName "Admins Only"\nAuthUserFile "/var/www/html/AddRules/.htpasswd"\nrequire valid-user' > /var/www/html/AddRules/.htaccess
    • Warning: Putting the password file in an http accessible directory is a security risk. I just did this for sake of organization.
    • Create the user and password via “htpasswd -c /var/www/html/AddRules/.htpasswd USERNAME” and then entering the password when prompted (the full script below adds -b to supply the password on the command line)
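
As a quick verification (the dig-based check referenced in the “Run a test” step above), assuming example.com has been added to blacklisted.conf and 1.1.1.1 is your server’s IP:

dig @localhost test.example.com A +short        #Should print 1.1.1.1
dig @dns.yourdomain.com example.com A +short    #The same check from any outside machine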


[Edit on 2016-01-30 @ noon]

To permanently set “localhost” as the resolver DNS, add “DNS1=localhost” to “/etc/sysconfig/network-scripts/ifcfg-eth0”. I have not yet confirmed this edit.

Security Issue

Soon after setting up this DNS server, it started getting hit by a DNS amplification attack. As the server is being used as a client’s DNS server, turning off recursion is not an option. The best solution is to limit who can query the name server via an access list (usually a specific subnet), but that would very often not be an option either. The solution I currently have in place, which I have not yet verified actually works, is to add a forced-forward rule which only makes external requests to the name server provided by Amazon. To do this, get the name server’s IP from /etc/resolv.conf (it should be commented out from an earlier step), and then add the following to your named.conf in the “options” section.

	forwarders {
		DNS_SERVER_IP;
	};
	forward only;

After I added this rule, external DNS requests stopped going through completely. To fix this, I turned “dnssec-validation” to “no” in the named.conf. Don’t forget to restart the service once you have made your changes.
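
Put together, the relevant part of the options block ends up looking roughly like this (a sketch; DNS_SERVER_IP is the Amazon-provided resolver taken from /etc/resolv.conf):

	options {
		dnssec-validation no;
		forwarders {
			DNS_SERVER_IP;
		};
		forward only;
	};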

[End of edit]

Full serverside script
Make sure to run this as root (login as root or sudo it)

Download the script here. Make sure to chmod and sudo it when running: “chmod +x dnsblacklist_install.sh; sudo ./dnsblacklist_install.sh;”

#User defined variables
VARIABLES_SET=0; #Set this to 1 to allow the script to run
YOUR_DOMAIN="localhost";
YOUR_IP="1.1.1.1";
BLOCKED_ERROR_MESSAGE="Domain is blocked";
ADDRULES_USERNAME="YourUserName";
ADDRULES_PASSWORD="YourPassword";

#Confirm script is ready to run
if [ $VARIABLES_SET != 1 ]; then
    echo 'Variables need to be set in the script';
    exit 1;
fi
if [ `whoami` != 'root' ]; then
    echo 'Must be root to run script. When running the script, add "sudo" before it to' \
        'run as root';
    exit 1;
fi

#Allow root login
cat /home/ec2-user/.ssh/authorized_keys > /root/.ssh/authorized_keys;
perl -pi -e 's/^\s*#?\s*PermitRootLogin.*$/PermitRootLogin yes/igm' /etc/ssh/sshd_config;
service sshd reload;

#Install services
yum -y install bind httpd php;
chkconfig httpd on;
chkconfig named on;
service httpd start;
service named start;

#Set the DNS server to be usable by other computers
perl -pi -e 's/^(\s*(?:listen-on port 53|allow-query)\s*{).*$/$1 any; };/igm' \
    /etc/named.conf;
service named reload;

#Create/link the blacklist files
echo -ne '\ninclude "/var/named/blacklisted.conf";' >> /etc/named.conf;
touch /var/named/blacklisted.conf;

#Create the blacklist zone file
echo -ne "\$TTL 14400
@       IN SOA $YOUR_DOMAIN. $YOUR_DOMAIN ( 2003052800  86400  300  604800  3600 )
@       IN      NS   $YOUR_DOMAIN.
@       IN      A    $YOUR_IP
*       IN      A    $YOUR_IP" > /var/named/blacklisted.db;

#Fix the permissions on the blacklist files
chgrp named /var/named/blacklisted.*;
chmod 660 /var/named/blacklisted.*;

#Set the server’s domain resolution name servers
perl -pi -e 's/^(?!;)/;/gm' /etc/resolv.conf;
echo -ne '\nnameserver localhost' >> /etc/resolv.conf;

#Run a test
echo 'zone "example.com" { type master; file "blacklisted.db"; };' >> \
    /var/named/blacklisted.conf;
service named reload;
FOUND_IP=`dig -t A example.com | grep -ioP "^example\.com\..*?"'in\s+a\s+[\d\.:]+' | \
     grep -oP '[\d\.:]+$'`;
if [ "$YOUR_IP" == "$FOUND_IP" ]
then
  echo 'Success: Example domain matches your given IP' > /dev/stderr;
else
  echo 'Warning: Example domain does not match your given IP' > /dev/stderr;
fi

#Have the server return a message when a blacklisted domain is accessed
echo "$BLOCKED_ERROR_MESSAGE" > /var/www/html/index.html;
perl -0777 -pi -e 's~(<Directory "/var/www/html">.*?\n\s*AllowOverride).*?\n~$1 All~s' \
     /etc/httpd/conf/httpd.conf;
echo -n 'RewriteEngine on
RewriteCond %{REQUEST_URI} !index.html
RewriteCond %{REQUEST_URI} !AddRules/
RewriteRule ^(.*)$ /index.html [L]' > /var/www/html/.htaccess;
service httpd graceful;

#Create a script that allows apache to refresh the name server’s settings
mkdir /var/www/html/AddRules;
echo '/sbin/service named reload' > /var/www/html/AddRules/restart_named;
chmod 755 /var/www/html/AddRules/restart_named;

echo 'apache ALL=(root) NOPASSWD:/var/www/html/AddRules/restart_named
Defaults!/var/www/html/AddRules/restart_named !requiretty' >> /etc/sudoers;

#Create a script that allows the user to add, remove, and list the blacklisted domains
echo -n $'<?php
//Get old domains
$BlockedFile=\'/var/named/blacklisted.conf\';
$CurrentZones=Array();
foreach(explode("\\n", file_get_contents($BlockedFile)) as $Line)
        if(preg_match(\'/^zone "([\\w\\._-]+)"/\', $Line, $Results))
                $CurrentZones[]=$Results[1];

//List domains
if(isset($_REQUEST[\'List\']))
        return print implode(\'<br>\', $CurrentZones);

//Get new domains
if(!isset($_REQUEST[\'Domains\']))
        return print \'Missing Domains\';
$Domains=$_REQUEST[\'Domains\'];
if(!preg_match(\'/^[\\w\\._-]+(,[\\w\\._-]+)*$/uD\', $Domains))
        return print \'Invalid domains string\';
$Domains=explode(\',\', $Domains);

//Remove domains
if(isset($_REQUEST[\'Remove\']))
{
        $CurrentZones=array_flip($CurrentZones);
        foreach($Domains as $Domain)
                unset($CurrentZones[$Domain]);
        $FinalDomainList=array_keys($CurrentZones);
}
else //Combine domains
        $FinalDomainList=array_unique(array_merge($Domains, $CurrentZones));

//Output to the file
$FinalDomainData=Array();
foreach($FinalDomainList as $Domain)
    $FinalDomainData[]="zone \\"$Domain\\" { type master; file \\"blacklisted.db\\"; };";
file_put_contents($BlockedFile, implode("\\n", $FinalDomainData));

//Reload named
print `sudo /var/www/html/AddRules/restart_named`;
?>' > /var/www/html/AddRules/index.php;

usermod -a -G named apache;
service httpd graceful;

#Password protect the domain update script
echo -n 'AuthType Basic
AuthName "Admins Only"
AuthUserFile "/var/www/html/AddRules/.htpasswd"
require valid-user' > /var/www/html/AddRules/.htaccess;

htpasswd -bc /var/www/html/AddRules/.htpasswd "$ADDRULES_USERNAME" "$ADDRULES_PASSWORD";

echo 'Script complete';
UTF8 BOM
When a good idea is still considered too much by some

While UTF-8 has almost universally been accepted as the de-facto standard for Unicode character encoding in most non-Windows systems (mmmmmm Plan 9 ^_^), the BOM (Byte Order Marker) still has large adoption problems. While I have been allowing my text editors to add the UTF8 BOM to the beginning of all my text files for years, I have finally decided to rescind this practice for compatibility reasons.

While the UTF8 BOM is useful so that editors know for sure what the character encoding of a file is and don’t have to guess, it is not really supported, for various reasons, in Unixland. Having to code solutions around this was becoming cumbersome. Programs like vi and pico/nano seem to ignore a file’s character encoding anyway and adopt the character encoding of the current terminal session.

The main culprit I was running into this problem with is PHP. The funny thing about it is that I had a solution for it working properly in Linux, but not Windows :-).

Web browsers do not expect to receive the BOM marker at the beginning of files, and if they encounter it, may have serious problems. For example, in a certain browser (*cough*IE*cough*) having a BOM on a file will cause the browser to not properly read the DOCTYPE, which can cause all sorts of nasty compatibility issues.

Something in my LAMP setup on my cPanel systems was removing the initial BOM at the beginning of outputted PHP contents, but through some preliminary research I could not find out why this was not occurring in Windows. However, both systems were receiving multiple BOMs at the beginning of the output due to PHP’s include/require functions not stripping the BOM from those included files. My solution to this was a simple overload of these include functions as follows (only required when called from any directly opened [non-included] PHP file):

<?
/*Safe include/require functions that make sure UTF8 BOM is not output
Use like: eval(safe_INCLUDETYPE($INCLUDE_FILE_NAME));
where INCLUDETYPE is one of the following: include, require, include_once, require_once
An eval statement is used to maintain current scope
*/

//The different include type functions
function safe_include($FileName)	{ return real_safe_include($FileName, 'include'); }
function safe_require($FileName)	{ return real_safe_include($FileName, 'require'); }
function safe_include_once($FileName)	{ return real_safe_include($FileName, 'include_once'); }
function safe_require_once($FileName)	{ return real_safe_include($FileName, 'require_once'); }

//Start the processing and return the eval statement
function real_safe_include($FileName, $IncludeType)
{
	ob_start();
	return "$IncludeType('".strtr($FileName, Array("\\"=>"\\\\", "'"=>"\\'"))."'); safe_output_handler();";
}

//Do the actual processing and return the include data
function safe_output_handler()
{
	$Output=ob_get_clean();
	while(substr($Output, 0, 3)=="\xEF\xBB\xBF") //Remove all instances of the UTF8 BOM at the beginning of the output
		$Output=substr($Output, 3);
	print $Output;
}
?>

I would have liked to use PHP’s output_handler ini setting to catch even the root file’s BOM and not require include function overloads, but, as php.net puts it, “Only built-in functions can be used with this directive. For user defined functions, use ob_start().”.

As a bonus, the following bash command can be used to find all PHP files in the current directory tree with a UTF8 BOM:

grep -rlP "^\xef\xbb\xbf" . | grep -iP "\.php\$"

[Edit on 2015-11-27]
Better UTF8 BOM file find code (Cygwin compatible):
 find . -name '*.php' -print0 | xargs -0 -n1000 grep -l $'^\xef\xbb\xbf'
And to remove the BOMs (Cygwin compatible):
find . -name '*.php' -print0 | xargs -0 -n1000 grep -l $'^\xef\xbb\xbf' | xargs -i perl -i.bak -pe 'BEGIN{ @d=@ARGV } s/^\xef\xbb\xbf//; END{ unlink map "$_$^I", @d }' "{}"
Simpler remove BOMs (not Cygwin/Windows compatible):
find . -name '*.php' -print0 | xargs -0 -n1000 grep -l $'^\xef\xbb\xbf' | xargs -i perl -i -pe 's/^\xef\xbb\xbf//' "{}"
High Computer Load
I hate servers. So much... so, so much...
High Server Load

I think it was actually much higher than this, but it wouldn’t let me log in to find out! >:-( . I wish I could easily make SSH and everything I do in it take priority over other processes... but then again, I probably wouldn’t be able to do anything to fix the load when this sometimes happens anyway. *sighs*

I’ll explain more about “load” in an upcoming post.

Linux Runlevels
“Safe Mode” for Linux

I am still, very unfortunately, looking into the problem I talked about way back here :-( [not a lot, but it still persists]. This time I decided to try booting the OS into a “Safe Mode” with nothing running that could hinder performance tests (like hundreds of HTTP and MySQL sessions). Fortunately, my friend, who is a Linux server admin for a tech firm, was able to point me in the right direction after my own research on the topic was proving frustratingly fruitless.


Linux has “runlevels” it can run at, which are listed in “/etc/inittab” as follows:

# Default runlevel. The runlevels used by RHS are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)

So I needed to get into “Single user mode” to run the tests, which could be done two ways. Before I tell you how though, it is important to note that if you are trying to do something like this remotely, normal SSH/Telnet will not be accessible, so you will need either physical access to the computer, or something like a serial console connection, which can be routed through networks.

So the two ways are:
  • Through the “init” command. Running “init #” at the console, where # is the runlevel number, will bring you into that runlevel (see the example after this list). However, this might not kill all of the currently running but unneeded processes when going to a lower level, though it should get the majority of them, I believe.
  • Append “s” (for single user mode) to the grub configuration file (/boot/grub/grub.conf on my system) at the end of the line starting with “kernel”, then reboot. I am told appending a runlevel number may also work.
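
For example, a minimal sketch of the first method, run from the physical or serial console of a sysvinit-based system:

#Show the previous and current runlevel
runlevel
#Drop to single user mode (SSH and most services will stop)
init 1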
Core Dump Files
Not all OSs crash in the same way :-)

If you ever find a file named “core.#” when running Linux, where # is replaced by a number, it means something crashed at some point. Most of the time, you will probably just want to delete the file, but sometimes you may wonder what crashed. To do this, you use gdb (the GNU debugger), a very powerful tool, to analyze the core dump file.

gdb --core=COREFILENAME

Near the very bottom of the blob of outputted text after running this command, you should see a line that says “Core was generated by `...'.”. This tells you the command line of what crashed. To exit gdb, enter “quit”. You can also use gdb to find out what actually happened and troubleshoot/debug the problem, but that’s a very long and complex topic.
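
If you still have the binary that crashed, you can go one step further and get a backtrace non-interactively. A quick sketch (the program name and core file number are hypothetical):

gdb --batch -ex bt ./my_program --core=core.1234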


Recently, I started seeing hundreds of core dump files taking up gigabytes of space in “/usr/local/cpanel/whostmgr/docroot/” on several of our web servers. According to several online sources, it seems cPanel (web hosting made easy!) likes to dump many, if not all, of its programs' core files into this directory. In our case, it has been “dnsadmin” doing the crashing. We’ve been having some pretty major DNS problems lately, this time at the name server level, so I may have to rebuild our DNS cluster in the next few days. Joy.

Useful Bash commands and scripts
Unix is so great
First, to find out more about any bash command, use
man COMMAND

Now, a primer on the three most useful bash commands: (IMO)
find:
Find will search through a directory and its subdirectories for objects (files, directories, links, etc) satisfying its parameters.
Parameters are written like a math query, with parentheses for order of operations (make sure to escape them with a “\”!), -a for boolean “and”, -o for boolean “or”, and ! for “not”. If neither -a nor -o is specified, -a is assumed.
For example, to find all files that contain “conf” but do not contain “.bak” as the extension, OR are greater than 5MB:
find -type f \( \( -name "*conf*" ! -name "*.bak" \) -o -size +5120k \)
Some useful parameters include:
  • -maxdepth & -mindepth: only look through certain levels of subdirectories
  • -name: name of the object (-iname for case insensitive)
  • -regex: name of object matches regular expression
  • -size: size of object
  • -type: type of object (block special, character special, directory, named pipe, regular file, symbolic link, socket, etc)
  • -user & -group: object is owned by user/group
  • -exec: exec a command on found objects
  • -print0: output each object separated by a null terminator (great so other programs don’t get confused from white space characters)
  • -printf: output specified information on each found object (see man file)

For any numeric value n in these parameters, use:
+n for greater than n
-n for less than n
n for exactly n

For a complete reference, see your find’s man page.
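
As a small combined sketch of the -printf, -size, and -exec parameters listed above (the *.log file name is just an example):

#Print the size (in bytes) and path of each .log file, then gzip any over 1MB
find . -type f -name "*.log" -printf "%s %p\n"
find . -type f -name "*.log" -size +1024k -exec gzip {} \;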

xargs:
xargs passes piped arguments to another command as trailing arguments.
For example, to list information on all files in a directory greater than 1MB: (Note this will not work with paths with spaces in them, use “find -print0” and “xargs -0” to fix this)
find -size +1024k | xargs ls -l
Some useful parameters include:
  • -0: piped arguments are separated by null terminators
  • -n: max arguments passed to each command
  • -i: replaces “{}” with the piped argument(s)

So, for example, if you had 2 mirrored directories, and wanted to sync their modification timestamps:
cd /ORIGINAL_DIRECTORY
find -print0 | xargs -0 -i touch -m -r "{}" "/MIRROR_DIRECTORY/{}"

For a complete reference, see your xargs’s man page.

grep:
GREP is used to search through data for plain text, regular expression, or other pattern matches. You can use it to search through both pipes and files.
For example, to get your number of CPUs and their speeds:
cat /proc/cpuinfo | grep MHz
Some useful parameters include:
  • -E: use extended regular expressions
  • -P: use Perl regular expressions
  • -l: output files with at least one match (-L for no matches)
  • -o: show only the matching part of the line
  • -r: recursively search through directories
  • -v: invert to only output non-matching lines
  • -Z: separates matches with null terminator

So, for example, to list all files under your current directory that contain “foo1”, “foo2”, or “bar”, you would use:
grep -rlE "foo(1|2)|bar"

For a complete reference, see your grep’s man page.

And now some useful commands and scripts:
List size of subdirectories:
du --max-depth=1
The --max-depth parameter specifies how many sub levels to list.
-h can be added for more human readable sizes.

List number of files in each subdirectory*:
#!/bin/bash
export IFS=$'\n' #Forces only newlines to be considered argument separators
for dir in `find . -maxdepth 1 -type d`
do
	a=`find "$dir" -type f | wc -l`;
	if [ $a != "0" ]
	then
		echo "$dir" $a
	fi
done
and to sort those results
SCRIPTNAME | sort -n -k2

List number of different file extensions in current directory and subdirectories:
find -type f | grep -Eo "\.[^\.]+$" | sort | uniq -c | sort -nr

Replace text in file(s):
perl -i -pe 's/search1/replace1/g; s/search2/replace2/g' FILENAMES
If you want to make pre-edit backups, include an extension after “-i” like “-i.orig”

Perform operations in directories with too many files to pass as arguments: (in this example, remove all files from a directory 100 at a time instead of using “rm -f *”)
find -type f | xargs -n100 rm -f

Force kill all processes with a given name:
killall -9 STRING

Transfer MySQL databases between servers: (Works in Windows too)
mysqldump -u LOCAL_USER_NAME -p LOCAL_DATABASE | mysql -u REMOTE_USER_NAME -p -D REMOTE_DATABASE -h REMOTE_SERVER_ADDRESS
“-p” specifies a password is needed

Some lesser known commands that are useful:
screen: This opens a virtual console session that can be detached from and later reattached to without stopping the session (see the short example after this list). This is great when connecting to a console through SSH, so you don’t lose your progress if disconnected.
htop: An updated version of top, which is a process information viewer.
iotop: A process I/O (input/output - hard drive access) information viewer. Requires Python ≥ 2.5 and I/O accounting support compiled into the Linux kernel.
dig: Domain information retrieval. See “Diagnosing DNS Problems” Post for more information.
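
A minimal screen sketch (the session name “work” is just an example):

screen -S work   #Start a new named session
#Press Ctrl-a then d to detach, leaving everything inside running
screen -ls       #List running sessions
screen -r work   #Reattach to the session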

More to come later...

*Anything starting with “#!/bin/bash” is intended to be put into a script.
Always Confirm Potentially Hazardous Actions
Also treat what others tell you with discretion

So I have been having major speed issues with one of our servers. After countless hours of diagnosis, I determined the bottleneck was always I/O (input/output, accessing the hard drive). For example, when running an MD5 hash on a 600MB file, the load would jump up to 31 (with 4 logical CPUs) and the hash would take 5-10 minutes to complete. When performing the same test on a second drive in the same machine, it finished within seconds.

Replacing the hard drive itself is a last resort for a live production server, and a friend suggested the drive controller could be the problem, so I confirmed that the drive controller for our server was not on-board (it was on its own card), and I attempted to convince the company hosting our server of the problem so they would replace the drive controller. I ran my own tests first with an iostat check while doing a read of the main hard drive (cat /dev/sda > /dev/null). This produced steadily worsening results the longer the test went on, and always much worse results than our secondary drive. I passed these results on to the hosting company, and they replied that a “badblocks -vv” produced results that showed things looked fine.
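
For reference, the test looked roughly like this (a sketch; sda is the main drive, and iostat comes from the sysstat package):

#In one terminal, sequentially read the raw device
cat /dev/sda > /dev/null
#In another terminal, watch the drive's throughput and utilization every 5 seconds
iostat -dx /dev/sda 5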

So I was about to go run his test to confirm his findings, but decided to check the parameters first, as I always like to do before running new Linux commands. Thank Thor I did. The admin had meant to write “badblocks -v” (verbose) and typoed with a double keystroke. The two v’s looked like a w due to the font, and had I run a “badblocks -w” (write-mode test), I would have wiped out the entire hard drive.

Anyways, the test output the same basic results as my iostat test, with throughput very quickly decreasing from a remotely acceptable level to almost nil. Of course, the admin had only looked at the best results of the test, ignoring the rest.

I had them swap out the drive controller anyways, and it hasn’t fixed things, so a hard drive replacement will probably be needed soon. This kind of problem would be trivial if I had access to the server and could just test the hardware myself, but that is the price to pay for proper security at a server farm.