
Syncing Amazon EC2 Instances

In continuation of yesterday’s post, in which I showed how to create Amazon AMIs to keep your newly created EC2 instances up to date, today I will cover syncing already-live instances from the master to slaves. All of the below takes place on the master instance, and assumes all other instances are part of the slave group. You may have to use extra filters on the below “aws” command to only pull IPs from a certain group of instances.
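
For example, if your slave instances all share an EC2 tag, a filter along these lines should narrow the query to just that group (the tag name and value here are made up for illustration):

aws ec2 describe-instances --filters "Name=tag:Group,Values=slaves" "Name=instance-state-name,Values=running"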

Here is a simple bash script (hereby referred to as “Propagate.sh”) which syncs /var/www/html/ to all of your slave instances. It uses the “aws” command line interface provided by Amazon, which comes default with the Amazon Linux starter AMI.

#The first line of the script sets the master’s IP, so the master does not sync with itself.
export LocalIP=Your_Master_IP_Here;

#Get the IPs of all slave instances
export NewIPs=`aws ec2 describe-instances | grep '"PrivateIpAddress"' | perl -pe 's/(^.*?: "|",?\s*?$)//gm' | sort -u | grep -v "$LocalIP"`

#Loop over all slave instances
for i in $NewIPs; do
        echo "Syncing to: $i";
        #Run an rsync from the master to the slave
        rsync -aP -e 'ssh -o StrictHostKeyChecking=no' /var/www/html/ root@$i:/var/www/html/;
done

You may also want to add “-o UserKnownHostsFile=/dev/null” to the SSH command (directly after “-o StrictHostKeyChecking=no”), as a second EC2 instance may end up having the same IP as a previously terminated instance. Another solution to that problem is syncing the “/etc/ssh/ssh_host_rsa_key*” from the master when an instance initializes, so all instances keep the same SSH fingerprint.
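
If you go the host-key route, a minimal sketch of that initialization step (assuming root SSH access to the master, with the master's address left as a placeholder) could look like:

#Pull the master's SSH host keys so this instance presents the same fingerprint, then restart sshd
rsync -e 'ssh -o StrictHostKeyChecking=no' -a root@Your_Master_IP_Here:/etc/ssh/ssh_host_rsa_key* /etc/ssh/
service sshd restart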


To let other people manually execute this script, you can create a PHP file with the following in it. (Change /var/www/ in all below examples to where you place your Propagate.sh)

<? print nl2br(htmlentities(shell_exec('sudo /var/www/Propagate.sh 2>&1'))); ?>

If your Propagate.sh needs to be run as root, which it may if your PHP environment does not run as the root user (it usually runs as “apache”), then you need to make sure it CAN run as root without intervention. To do this, add the following to the /etc/sudoers file:
apache  ALL=(ALL)       NOPASSWD: /usr/bin/whoami, /var/www/Propagate.sh
Change the user from “apache” to the user which PHP runs as (when running through Apache).
I included “whoami” as a valid sudoer application for testing purposes.
Also, in the sudoers file, if “Defaults requiretty” is turned on, you will need to comment it out or turn it off.
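
To verify the sudoers entry works without a password prompt, a quick test like the following (run as root) should print “root”:

sudo -u apache sudo /usr/bin/whoami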

While I did not mention it in yesterday's post, I should at least mention here that there are other ways to keep file systems in sync with each other. This approach is just a good use case for when you want to keep all instances as separate, independent entities. Another solution to many of the previously mentioned problems is Amazon's new EFS, which is currently still in preview mode.

Custom Initializations for Amazon AMIs

I was recently hired to move a client's site from our primary server in Houston to the Amazon cloud, as it was about to take a big hit in traffic. The normal setup for this kind of job is pretty straightforward: move the database over to RDS, and set up an AMI of an EC2 instance, a load balancer, and EC2 auto scaling. However, there were a couple of problems I needed to solve this time around for the instances launched via the auto scaler that I had not really needed to deal with before. These included syncing the SSH settings and current codebase from the primary instance, as opposed to recreating AMIs every time there was a change. So, long story short, here are the problems and solutions that need to be addressed before the AMI image is created.


This all assumes you are running as root. Most of these commands should work on any Linux distribution that Amazon has default AMIs for, but some of these may only work in the Amazon and CentOS AMIs.


Pre-setup:
  • Your first instance that you are creating the AMI from should be a permanent instance. This is important for 2 reasons.
    1. When changing configurations for the auto scaler, if and when your instances are terminated and recreated, this instance will always be available on the load balancer, so there is no downtime.
    2. This instance can act as a central repository for other instances to sync from.
    So make sure this instance has an elastic IP assigned to it. From here on out, we will refer to this instance as PrimaryInstance (you can set this name in the hosts file, or change it in all scripts to however you want to refer to your elastic IP [most likely through a DNS domain]).
  • Create your ssh key pair for the instances: (For all prompts, use default settings)
    ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
  • Make sure your current ssh authorized_keys contains your new ssh public key:
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  • Make sure your ssh known_hosts includes your primary instance, so all future ssh calls to it automatically accept it as a known host:
    ssh PrimaryInstance -o StrictHostKeyChecking=no
    You do not have to finish the login process. This just makes sure our primary instance will be recognized by other instances.
  • Turn on PermitRootLogin in /etc/ssh/sshd_config and reload the sshd config (service sshd reload).
    I just recommend this because it makes life way, way easier. The scripts below assume that you did this.
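
A minimal sketch of that last pre-setup step, using the same perl one-liner style as the PasswordAuthentication change further below:

#Enable root login over SSH and reload the sshd config
perl -i -pe 's/^#?PermitRootLogin.*$/PermitRootLogin yes/mg' /etc/ssh/sshd_config
service sshd reload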

Create a custom init file that runs on boot to take care of all the commands that need to be run.
#Create the script and make sure the full path (+all other root environment variables) are set when it is run
echo '#!/bin/bash -l' > /etc/rc.d/init.d/custom_init

#Set the script as executable
chmod +x /etc/rc.d/init.d/custom_init

#Executes it as one of the last scripts on run level 3 (Multi-user mode with networking)
ln -s ../init.d/custom_init /etc/rc.d/rc3.d/S99custom_init
All of the below commands in this post will go into this script.

Allow login via password authentication:
perl -i -pe 's/^PasswordAuthentication.*$/PasswordAuthentication yes/mg' /etc/ssh/sshd_config
service sshd reload
Notes:
You may not want to do this. It was just required by my client in this case.
This is required in the startup script because Amazon likes to mess with the sshd_config (and authorized_keys) in new instances it boots.

Sync SSH settings from the PrimaryInstance:
#Remove the known_hosts file, in case something on the PrimaryInstance has changed that would block ssh commands.
rm -f ~/.ssh/known_hosts

#Sync the SSH settings from the PrimaryInstance
rsync -e 'ssh -o StrictHostKeyChecking=no' -a root@PrimaryInstance:~/.ssh/ ~/.ssh/

Sync required files from the PrimaryInstance. In this case, the default web root folder:
rsync -at root@PrimaryInstance:/var/www/html/ /var/www/html/
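
Put together, the finished /etc/rc.d/init.d/custom_init ends up looking roughly like this (adjust the paths and the PrimaryInstance name to your setup):

#!/bin/bash -l
#Allow login via password authentication
perl -i -pe 's/^PasswordAuthentication.*$/PasswordAuthentication yes/mg' /etc/ssh/sshd_config
service sshd reload

#Sync SSH settings from the PrimaryInstance
rm -f ~/.ssh/known_hosts
rsync -e 'ssh -o StrictHostKeyChecking=no' -a root@PrimaryInstance:~/.ssh/ ~/.ssh/

#Sync the web root from the PrimaryInstance
rsync -at root@PrimaryInstance:/var/www/html/ /var/www/html/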

That's it for the things that need to be configured/added to the instance. From there, create your AMI and launch configuration, and create/modify your auto scaling group and load balancer.


Also, as a very important note about your load balancer: if you are mirroring it on another domain, make sure to use a CNAME record pointing at the load balancer's hostname, and not its IP in an A record, as the load balancer IP is subject to change.
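
For example, in a BIND-style zone file the record would look something like the following (the load balancer hostname below is made up):

;Point the mirrored domain at the load balancer's hostname rather than an IP
www.example.com.    IN    CNAME    my-load-balancer-1234567890.us-east-1.elb.amazonaws.com.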

Let’s Encrypt HTTPS Certificates

After a little over a year of waiting, Let’s Encrypt has finally opened its doors to the public! Let’s Encrypt is a free https certificate authority, with the goal of getting the entire web off of http (unencrypted) and on to https. I consider this a very important undertaking, as encryption is one of the best ways we can fight illegal government surveillance. The more out there that is encrypted, the harder it will be to spy on people.

I went ahead and got it up and running on 2 servers today, which was a bit of a pain in the butt. It [no longer] supports Python 2.6, and was also very unhappy with my CentOS 6.4 cPanel install. Also, when you first run the letsencrypt-auto executable script as instructed by the site, it opens up your package manager and immediately starts downloading LOTS of packages. I found this to be quite anti-social, especially as I had not seen any warning that it would do this before I started the install, but oh well. It is convenient. The problem in cPanel was that a specific library, libffi, was causing problems during the install.


To fix the Python problem for all of my servers, I had to install Python 2.7 as an alt Python install so it wouldn’t mess with any existing infrastructure using Python 2.6. After that, I also set the current alias of “python” to “python2.7” so the local shell would pick up on the correct version of Python.


As root in a clean directory:
wget https://www.python.org/ftp/python/2.7.8/Python-2.7.8.tgz
tar -xzvf Python-2.7.8.tgz
cd Python-2.7.8
./configure --prefix=/usr/local
make
make altinstall
alias python=python2.7

The cPanel lib problem was caused by libffi already being installed as 3.0.9-1.el5.rf, but yum wanted to install its devel package as version 3.0.5-3.2.el6.x86_64 (an older version). It did not like running conflicting versions. All that was needed to fix the problem was to manually download and install the same devel version as the current live version.

wget http://pkgs.repoforge.org/libffi/libffi-devel-3.0.9-1.el5.rf.x86_64.rpm
rpm -ivh libffi-devel-3.0.9-1.el5.rf.x86_64.rpm

Unfortunately, the apache plugin was also not working, so I had to do a manual install with “certonly” and “--webroot”.
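
The manual run ends up looking something like the following (the domains and webroot path are examples):

./letsencrypt-auto certonly --webroot -w /var/www/html -d example.com -d www.example.com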


And that was it; letsencrypt was ready to go and start signing my domains! You can check out my current certificate, issued today, that currently has 13 domains tied to it!

PHPMyAdmin SQL Export: Key Position

After version 4.2.0.0 (2014-05-08) of phpMyAdmin, it stopped including a table’s keys inline within the CREATE TABLE statement, and instead opted to add all the table keys at the very end of the export file by altering the tables. (See “rfe #1004 Create indexes at the end in SQL export”.) This behavior has been annoying to many people, including myself, but I never noticed anyone mentioning a fix. I looked into the source and there is a very simple way to restore this behavior to what it originally was.


Edit the file “phpMyAdmin/libraries/plugins/export/ExportSql.class.php”. In it, the code block starting with the below line needs to be skipped
if (preg_match('@CONSTRAINT|KEY@', $create_query)) {
The easiest way to do this is changing that line to
if (false && preg_match('@CONSTRAINT|KEY@', $create_query)) {
AutoHotKey Scripts

In lieu of using my own custom C++ background services to take care of hot key tasks in Windows, I started using AutoHotKey a while back. While it’s not perfect, and it is missing a lot of Win32 API functionality, I am still able to mostly accomplish what I want in it. I was thinking I should add some of the simple scripts I use here.


Center a string within padding characters and output as key-strokes
Example:
  • PadText = ~*
  • Length = 43
  • Text = Example Text
  • Result = ~*~*~*~*~*~*~*~*Example Text~*~*~*~*~*~*~*~
;Get the last values
IniPath=%A_ScriptDir%\AutoHotKey.ini
IniRead,PadText,%IniPath%,CenterString,PadText,-
IniRead,NewLength,%IniPath%,CenterString,NewLength,10
IniRead,TheString,%IniPath%,CenterString,TheString,The String

;Get the input
InputBox,PadText,Center String,Pad Character,,,,,,,,%PadText%
InputBox,NewLength,Center String,New Length,,,,,,,,%NewLength%
InputBox,TheString,Center String,String To Center,,,,,,,,%TheString%

;Cancel on blank pad or invalid number
if StrLen(PadText)==0
{
	MsgBox,Pad text cannot be blank
	return
}
if NewLength is not integer
{
	MsgBox,New length must be an integer
	return
}

;Save the last values
IniWrite,%PadText%,%IniPath%,CenterString,PadText
IniWrite,%NewLength%,%IniPath%,CenterString,NewLength
IniWrite,%TheString%,%IniPath%,CenterString,TheString

;Initial padding
PadStrLen:=StrLen(PadText)
PadLen:=NewLength-StrLen(TheString)
NewString:=""
Loop
{
	if StrLen(NewString)>=Ceil(PadLen/2)
		break
	NewString.=PadText
}

;Truncate initial padding to at least half
NewString:=Substr(NewString, 1, Ceil(PadLen/2))

;Add the string
NewString.=TheString

;Final padding
Loop
{
	if StrLen(NewString)>=NewLength
		break
	NewString.=PadText
}

;Truncate to proper length
NewString:=Substr(NewString, 1, NewLength)

;Output as key-strokes
Sleep,100
Send %NewString%
return

Format rich clipboard text to plain text
clipboard = %clipboard%
return

Force window to borderless full screen
Description: This takes the active window, removes all window dressing (titlebar, borders, etc), sets its resolution as 1920x1080, and positions the window at 0x0. In other words, this makes your current window take up the entirety of your primary monitor (assuming it has a resolution of 1920x1080).
WinGetActiveTitle, WinTitle
WinSet, Style, -0xC40000, %WinTitle%
WinMove, %WinTitle%, , 0, 0, 1920, 1080
return

Continually press key on current window
Description: Saves the currently active window (by its title) and focused control object within the window; asks the user for a keypress interval and the key to press; starts to continually press the requested key at the requested interval in the original control (or top level window if an active control is not found); stops via the F11 key.
Note: I had created this to help me get through the LISA intro multiple times.
;Get the current window and control
WinGetActiveTitle, TheTitle
ControlGetFocus FocusedControl, %TheTitle%
if(ErrorLevel)
	FocusedControl=ahk_parent

;Get the pause interval
InputBox,IntervalTime,Starting script with window '%TheTitle%',Enter pause interval in milliseconds. After submitted`, hold down the key to repeat,,,,,,,,200
if(ErrorLevel || IntervalTime=="") ;Cancel action if blank or cancelled
	return
IntervalTime := IntervalTime+0

;Get the key to keep pressing - Unfortunately, there is no other way I can find to get the currently pressed keycode besides polling all 255 of them
Sleep 500 ;Barrier to make sure one of the initialization keys is not grabbed
Loop {
	TestKey := 0
	Loop {
		SetFormat, INTEGER, H
		HexTextKey := TestKey
		SetFormat, INTEGER, D
		VirtKey = % "vk" . SubStr(HexTextKey, 3)
		if(GetKeyState(VirtKey)=1 || TestKey>255)
			break
		TestKey:=TestKey+1
	}
	if(TestKey<=255)
		break
	Sleep 500
}
VirtKey := GetKeyName(VirtKey)

;If a direction key, remap to the actual key
if(TestKey>=0x25 && TestKey<=0x28)
	VirtKey := SubStr(VirtKey, 7)

;Let the user know their key
MsgBox Received key: '%VirtKey%'. You may now let go of the key. Hold F11 to stop the script.

;Continually send the key at the requested interval
KeyDelay:=10
SetKeyDelay, %KeyDelay% ;Interval between up/down keys
IntervalTime-=%KeyDelay%
Loop {
	;Press the key
	ControlSend, %FocusedControl%, {%VirtKey% Up}{%VirtKey% Down}, %TheTitle%

	;Check for the cancel key
	if(GetKeyState("F11"))
		break

	;Wait the requested interval to press the key again
	Sleep, %IntervalTime%
}

;Let the user know the script has ended
MsgBox Ending script with window '%TheTitle%'
return
LISA game difficulty level save hack

I recently bought the game LISA on Steam, and its humor approach is fascinating. Unfortunately, this approach involves being incredibly vague, or outright obtuse, at telling you what is going on, or what is going to happen if you do something. The very first choice you have in the game is whether to choose “Pain” mode or “Normal” mode. It doesn’t tell you anything beyond that. Unfortunately, I interpreted this as “Normal” and “Easy”, and so I chose the former, “Pain” mode. One of the “features” of pain mode is that you can only use save points once, and there are only 36 of them in the game, spread very far apart. Once I was a few hours into the game, I realized how much of a bother this was going to be, especially because it meant I had to play in possibly multi-hour chunks, not knowing when I would get to stop. I didn’t feel like replaying up until that point, so I decided to do some save game file hacking, as that is part of the fun for me.

DO NOTE, this method involves deleting some of the data in the save file, specifically a bunch of boolean flags, which might cause some events in the save to be “forgotten”, so they will reoccur. At the point of the game I was at, the few deleted flag actions that I encountered didn’t affect anything big or of importance. One example of this is that the long-winded character repeats his final soliloquy when you enter his map.


So, to switch from “Pain” mode to “Normal” mode in the save file, do the following:
  1. Your save files are located at %STEAM_FOLDER%/steamapps/common/LISA/Save##.rvdata2
  2. Backup the specific save file you want to edit, just in case.
  3. Open that save file in a hex editor. You might need to be in steam offline mode for the edit to stick.
  4. Search for “@data[”. Immediately following it are the hex characters “02 02 02”. Delete them and, in their place, add the hex character 0x73 (“s”).
  5. Following the “s” character that you just added are 514 bytes that are either “0”, “T”, or “F”, and then a colon (“:”)
  6. Keep the first 110 of these bytes, and then delete everything up to the colon (which should be 404 bytes).
  7. Save the file, and that should be it!
Useful Exim Scripts
For fighting spam

In the course of my Linux administrative duties (on a cPanel server), I have created multiple scripts to help us out with Exim, our mail transfer agent. These are mostly used to help us fight spam, and determine who is spamming when it occurs.



This monitors the number of emails in the queue, and sends our admins an email when a limit (1000) is reached. It would need to be run on a schedule (via cron).
#!/bin/bash
export AdminEmailList="ADMIN EMAIL LIST SEPARATED BY COMMAS HERE"
export Num=`/usr/sbin/exim -bpc`
if [ $Num -gt 1000 ]; then
        echo "Too many emails! $Num" | /usr/sbin/sendmail -v "$AdminEmailList"
        #Here might be a good place to delete emails with “undeliverable” strings within them
        #Example (See the 3rd script): exim-delete-messages-with 'A message that you sent could not be delivered'
fi
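
A crontab entry for it might look like this (the script path and 15 minute interval are just examples):

#Check the exim queue size every 15 minutes
*/15 * * * * /root/scripts/check_exim_queue.sh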

This deletes any emails in the queue from or to a specified email address (first parameter). If the address is the recipient, the sender must be "<>" (the null sender, used by bounce messages)
#!/bin/bash
exiqgrep -ir $1 -f '<>' | xargs exim -Mrm
exiqgrep -if $1 | xargs exim -Mrm

This deletes any emails in the queue which contain a given string (first parameter)
#!/bin/bash
if [ "$1" == "" ]
then
  echo 'Cannot delete with empty string'
else
  grep -lir "$1" /var/spool/exim/input/ | sed -e 's/^.*\/\([a-zA-Z0-9-]*\)-[DH]$/\1/g' | xargs /usr/sbin/exim -Mrm
fi

Get a count of emails in the queue per sender (sender email address is supplied by sender and can be faked)
#!/bin/bash
exim -bp | grep -oP '<.*?>' | sort | uniq -c | sort -n

Get a count of emails in the queue per account (running this script can take a little while)
#!/bin/bash
exim -bp | grep -Po '(?<= )[-\w]+(?= <)' | xargs -n1 exim -Mvh | grep -ioP '(?<=auth_sender ).*$' | sort | uniq -c

Bonus: Force all non-specified accounts on Exim to use a certain IP address for sending. It would need to be run on a schedule (via cron).
#!/bin/bash
export IPAddress="YOUR ADDRESS HERE"
/usr/bin/perl -i -pe 's/\*:.*/*: '$IPAddress'/g' /etc/mailips
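
For reference, after that script runs, the wildcard line in /etc/mailips ends up looking something like the following (the IP is an example):

*: 192.0.2.10
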
Optimization gone bad
Or, the case of the Android app out-of-order calls

On Android, there is a primary thread which runs all UI stuff. If a GUI operation is run in a different thread, it just won't work, and may throw an error. If you block this thread with too much processing... well... bad things happen. Due to this design, you have to push all UI operations to this main thread, for example by posting Runnables to a Handler attached to the main Looper.

Runnables pushed to this thread are always run in FIFO execution order, which is a useful guarantee for programming.

So I decided to get smart and create the following function to add asynchronous calls that needed to be run on the primary thread. It takes a Runnable and either runs it immediately, if already on the Primary thread, or otherwise adds it to the Primary Thread’s queue.

//Run a function on primary thread
public static void RunOnPrimary(Runnable R)
{
    //Start commenting here so that items are always added to the queue, forcing in-order processing
    if(Looper.myLooper()==Looper.getMainLooper())
        R.run();
    else
    //End commenting here
        new Handler(Looper.getMainLooper()).post(R);
}

I was getting weird behaviors though in a part of the project where some actions pulled in from JavaScript were happening ahead of actions that had been queued before them. After the normal one-by-one debugging steps to figure it out, I realized that MAYBE some of the JavaScript calls were, for some bizarre reason, already running on the primary thread. In this case they would run immediately, before the queued items started coming in. This turned out to be the case, so I ended up having to comment out the first 3 lines after the function’s first comment (if/R.run/else), and it worked great.

I found it kind of irritating that I had to add actions to the queue when they could have been run immediately on the current thread, but oh well, I didn’t really have a choice if I wanted to make sure everything is always run in order across the system.

Renaming a series for Plex

I was recently trying to upload a TV series into Plex and was having a bit of a problem with the file naming. While I will leave the show nameless, let’s just say it has a magic dog.

Each of the files (generally) contained 2 episodes and were named S##-E##-E## (Season #, First Episode #, Second Episode #). Plex really didn’t like this, as for multi-episode files, it only supports the naming convention of first episode number THROUGH a second episode number. As an example S02-E05-E09 is considered episodes 5 through 9 of season 2. So I wrote a quick script to fix up the names of the files to consider each file only 1 episode (the first one), and then create a second symlinked file, pointing to the first episode, but named for the second episode.

So, for the above example, we would get 2 files with the exact same original filenames, except with the primary file having “S02E05,E09” in place of the episode number information, and the linked file having “S02E09-Link” in its place.


The following is the bash code for renaming/fixing a single episode file. It needs to be saved into a script file. This requires perl for regular expression renaming.


#!/bin/bash

#Get the file path info and updated file name
FilePath=`echo "$1" | perl -pe 's/\/[^\/]*$//g'`
FileName=`echo "$1" | perl -pe 's/^.*\///g'`
UpdatedFileName=`echo "$FileName" | perl -pe 's/\b(S\d\d)-(E\d\d)-(E\d\d)\b/$1$2,$3/g'`

#If the file is not in the proper format, exit prematurely
if [ "$UpdatedFileName" == "$FileName" ]; then
    echo "Proper format not found: $FilePath/$FileName"
    exit 1
fi

#Rename the file
cd "$FilePath"
mv "$FileName" "$UpdatedFileName"

#Create a link to the file with the second episode name
NewLinkName=`echo "$FileName" | perl -pe 's/\b(S\d\d)-(E\d\d)-(E\d\d)\b/$1$3-Link/g'`
ln -s "$UpdatedFileName" "$NewLinkName"

If you save that to a file named “RenameShow.sh”, you would use it like “find /PATH/ -type f -print0 | xargs -0n 1 ./RenameShow.sh”. For Windows, make sure you use Windows symlinks with /H (to make them hard file links, as soft/symbolic link files really just don’t work in Windows).

Sending URLs as a file in an HTML form using AJAX
It is common knowledge that you can use the FormData class to send a file via AJAX as follows:
var DataToSend=new FormData();
DataToSend.append(PostVariableName, VariableData); //Send a normal variable
DataToSend.append(PostFileVariableName, FileElement.files[0], PostFileName); //Send a file
var xhr=new XMLHttpRequest();
xhr.open("POST", YOUR_URL, true);
xhr.send(DataToSend);

Something that is much less known, which doesn't have any really good full-process examples online (that I could find), is sending a URL's file as the posted file.
This is doable by downloading the file as a Blob, and then directly passing that blob to the FormData. The 3rd parameter to the FormData.append should be the file name.

The following code demonstrates downloading the file. I did not worry about adding error checking.
function DownloadFile(
    FileURL,     //http://...
    Callback,    //The function to call back when the file download is complete. It receives the file Blob.
    ContentType) //The output Content-Type for the file. Example=image/jpeg
{
    var Req=new XMLHttpRequest();
    Req.responseType='arraybuffer';
    Req.onload=function() {
        Callback(new Blob([this.response], {type:ContentType}));
    };
    Req.open("GET", FileURL, true);
    Req.send();
}

And the following code demonstrates submitting that file
//User Variables
var DownloadURL="https://www.castledragmire.com/layout/PopupBG.png";
var PostURL="https://www.castledragmire.com/ProjectContent/WebScripts/Default_PHP_Variables.php";
var PostFileVariableName="MyFile";
var OutputFileName="Example.jpg";
//End of User Variables

DownloadFile(DownloadURL, function(DownloadedFileBlob) {
    //Get the data to send
    var Data=new FormData();
    Data.append(PostFileVariableName, DownloadedFileBlob, OutputFileName);

    //Function to run on completion
    var CompleteFunction=function(ReturnData) {
        //Add your code in this function to handle the ajax result
        var ReturnText=(ReturnData.responseText ? ReturnData : this).responseText;
        console.log(ReturnText);
    }

    //Normal AJAX example
    var Req=new XMLHttpRequest();
    Req.onload=CompleteFunction; //You can also use "onreadystatechange", which is required for some older browsers
    Req.open("POST", PostURL, true);
    Req.send(Data);

    //jQuery example
    $.ajax({type:'POST', url:PostURL, data:Data, contentType:false, processData:false, cache:false, complete:CompleteFunction});
});

Unfortunately, due to cross-site security restrictions (the same-origin policy), you can generally only use AJAX to query URLs on the same domain. I use my Cross site scripting solutions and HTTP Forwarders for this. Stackoverflow also has a good thread about it.

Missing phar wrapper

Phar files are PHP’s way of distributing an entire PHP solution in a single package file. I recently hit an error on my Cygwin PHP server that said “Unable to find the wrapper "phar" - did you forget to enable it when you configured PHP?”. I couldn’t find any solution for this online, so I played with it a bit.

The quick and dirty solution I came up with is to include the phar file like any normal PHP file, which sets your current working directory inside of the phar file. After that, you can include files inside the phar and then change your directory back to where you started. Here is the code I used:

if(preg_match('/^(?:win|cygwin)/i', PHP_OS)) //The phar wrapper is missing on this Windows/Cygwin PHP build
{
    $CWD=getcwd(); //Remember where we started
    require_once('Scripts/PHPExcel.phar'); //Including the phar directly puts the current working directory inside the phar
    require_once('PHPExcel/IOFactory.php'); //So this include now resolves to a file inside the phar
    chdir($CWD); //Restore the original working directory
}
else //The phar wrapper is available, so use it normally
    require_once('phar://Scripts/PHPExcel.phar/PHPExcel/IOFactory.php');
Cross site scripting solutions
When you are forced to break the security model

So I was recently hired to set up a go-between system that would allow two independent websites to directly communicate and transfer/copy data between each other via a web browser. This is obviously normally not possible due to cross-site browser security restrictions (the same-origin policy), so I gave the client 2 possible solutions. Both of these solutions are written with the assumption that there is a go-between intermediary iframe/window, on a domain that they control, between the 2 independent site iframes/windows. This would also work fine for one site you control against a site you do not control.

  1. Tell the browser to ignore this security requirement:
    • For example, if you add to the chrome command line arguments “--disable-web-security”, cross-site security checks will be removed. However, chrome will prominently display on the very first tab (which can be closed) at the top of the browser “You are using an unsupported command-line flag: --disable-web-security. Stability and security will suffer”. This can be scary to the user, and could also allow security breaches if the user utilizes that browser [session] for anything except the application page.
  2. The more appropriate way to do it, which requires a bit of work on the administrative end, is having all 3 sites pretend to run off of the same domain. To do this:
    1. You must have a domain that you control, which we will call UnifyingDomain.com (This top level domain can contain subdomains)
    2. The 2 sites that YOU control would need a JavaScript line of “document.domain='UnifyingDomain.com';” somewhere in them. These 2 sites must also be run off of a subdomain of UnifyingDomain.com (which can also be done through apache redirect directives).
    3. The site that you do not control would need to be forwarded through your UnifyingDomain.com (not a subdomain) via an apache permanent redirect.
      • This may not work, if their site programmer is dumb and does not use proper relative links for everything (absolute links are the devil :-) ). If this is the case:
        • You can use a [http] proxy to pull in their site through your domain (in which case, if you wanted, you could inject a “domain=”)
        • You can use the domain that you do not control as the top level UnifyingDomain.com, and add rules into your computer’s hostname files to redirect its subdomains to your IPs.

This project is why I ended up making my HTTP Forwarders client in go (coming soon).

Debugging AJAX in Chrome

It has always really bugged me that in Chrome, when you want to view the response and form data for an AJAX request listed in the console, you have to go through multiple annoying clicks to view these two pieces of data, which are also on separate tabs. There is a great Chrome extension though called AJAX-Debugger that gets you all the info you need on the console. However, it also suffered from the having-to-click-through problem for the request data (5th object deep in an object nest), and it did not support JSONP. I’ve gone ahead and fixed these 2 problems :-).


Now to get around to making the other Chrome plugin I’ve been needing for a while ... (Automatic devtool window focus when focusing its parent window)


[Edit on 2015-08-20 @ 8:10am] I added another patch to the Chrome extension to atomically run the group calls (otherwise, they sometimes showed out of order).

Also, the auto focusing thing is not possible as a pure extension due to chrome API inadequacies. While it would be simple to implement using an interval poll via something like Auto Hot Key, I really hate [the hack of] making things constantly poll to watch for something. I’m thinking of a hybrid chrome extension+AHK script as a solution.

Netflix Auto Continue Play

Here is a little Tampermonkey script for Chrome that automatically clicks the “Continue playing” button when Netflix pops it up and pauses the current stream.


// ==UserScript==
// @name         Netflix auto continue play
// @namespace    https://www.castledragmire.com/Posts/Netflix_Auto_Continue_Play
// @version      1.0
// @description  When netflix pops up the "Continue play" button, this script auto-selects "Continue" within 1 second
// @author       Dakusan
// @match        http://www.netflix.com/
// @grant        none
// ==/UserScript==

setInterval(function() {
    var TheElements=document.getElementsByClassName('continue-playing');
    for(var i=0;i<TheElements.length;i++)
        if(/\bbutton\b/.test(TheElements[i].className))
        {
            console.log('"Continue Playing" Clicked');
            TheElements[i].click();
        }
}, 1000);

Make sure to set the “User matches” in the settings page to include both “http://www.netflix.com/*” and “https://www.netflix.com/*”.

MySQL replication ring status reporting script

I just threw together a quick script to report status on a MySQL replication ring. Until recently, replication rings have been the only real multi-master MySQL replication solution (with the ability for nodes to go down without majorly breaking things). However, I have read that MariaDB (though still not MySQL) now allows a slave to have multiple masters, meaning many more replication topologies are now possible (star, mesh, etc). This script could easily be adapted for those circumstances too.

This script will report all the variables from “SHOW MASTER STATUS” and “SHOW SLAVE STATUS” from all servers in your replication ring, in a unified table. It also includes a “Pretty Status” row that lets you quickly see how things look. The possibilities for this row are:

  • Bad state: ...
    This shows if the Slave_IO_State is not “Waiting for master to send event”
  • Cannot determine master’s real position
    This shows if the Position variable on the master could not be read
  • On old master file
    This shows if the slave’s “Master_Log_File” variable does not equal the master’s “File” variable
  • Bytes behind: xxx
    This shows if none of the above errors occurred. It subtracts the master’s “Position” from the slave’s “Read_Master_Log_Pos”. This should generally be at or around 0. A negative value essentially means 0 (this should only happen between the last and first server).

The “Seconds_Behind_Master” variable can also be useful for determining the replication ring’s current replication status.
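
For a quick manual check of the same variables on a single server, something like this works (the host and username are placeholders matching the configuration below):

mysql -h SERVER1.YOURDOMAIN.COM -u SLAVE_RING_USERNAME -p -e 'SHOW MASTER STATUS\G SHOW SLAVE STATUS\G'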

The code is below the example. The entire source file can also be found here. The 3 variables that need to be configured are at the top of the file. It assumes that all servers are accessible via the single given username and password.


Example:
Master
Server Name | EXAMPLE1.MYDOMAIN.COM | EXAMPLE2
File | mysql-bin.000003 | mysql-bin.000011
Position | 25249746 | 3215834
Binlog_Do_DB | example_data,devexample_data | example_data,devexample_data
Binlog_Ignore_DB | |
Slave
Pretty Status | Bytes behind: 0 | Bytes behind: 0
Slave_IO_State | Waiting for master to send event | Waiting for master to send event
Master_Host | EXAMPLE2 | EXAMPLE1.MYDOMAIN.COM
Master_User | example_slave | example_slave
Master_Port | 3306 | 3306
Connect_Retry | 60 | 60
Master_Log_File | mysql-bin.000011 | mysql-bin.000003
Read_Master_Log_Pos | 3215834 | 25249746
Relay_Log_File | www-relay-bin.070901 | www-relay-bin.071683
Relay_Log_Pos | 252 | 252
Relay_Master_Log_File | mysql-bin.000011 | mysql-bin.000003
Slave_IO_Running | Yes | Yes
Slave_SQL_Running | Yes | Yes
Replicate_Do_DB | example_data,devexample_data | example_data,devexample_data
Replicate_Ignore_DB | |
Replicate_Do_Table | |
Replicate_Ignore_Table | |
Replicate_Wild_Do_Table | |
Replicate_Wild_Ignore_Table | |
Last_Errno | 0 | 0
Last_Error | |
Skip_Counter | 0 | 0
Exec_Master_Log_Pos | 3215834 | 25249746
Relay_Log_Space | 552 | 552
Until_Condition | None | None
Until_Log_File | |
Until_Log_Pos | 0 | 0
Master_SSL_Allowed | No | No
Master_SSL_CA_File | |
Master_SSL_CA_Path | |
Master_SSL_Cert | |
Master_SSL_Cipher | |
Master_SSL_Key | |
Seconds_Behind_Master | 0 | 0
Master_SSL_Verify_Server_Cert | No | No
Last_IO_Errno | 0 | 0
Last_IO_Error | |
Last_SQL_Errno | 0 | 0
Last_SQL_Error | |
Replicate_Ignore_Server_Ids | | Not given
Master_Server_Id | 2 | Not given


Code:

<?
//Configurations
$Servers=Array('SERVER1.YOURDOMAIN.COM', 'SERVER2.YOURDOMAIN.COM'); //List of host names to access mysql servers on. This must be in the order of the replication ring.
$SlaveUserName='SLAVE_RING_USERNAME'; //This assumes all servers are accessible via this username with the same password
$SlavePassword='SLAVE_RING_PASSWORD';

//Get the info for each server
$ServersInfo=Array(); //SERVER_NAME=>Array('Master'=>Array(Col1=>Val1, ...), 'Slave'=>Array(Col1=>Val1, ...)
$ColsNames=Array('Master'=>Array('Server Name'=>0), 'Slave'=>Array('Pretty Status'=>0)); //The column names for the 2 (master and slave) queries. Custom column names are also added here
$CustomFieldNames=array_merge($ColsNames['Master'], $ColsNames['Slave']); //Store the custom column names so they are not HTML escaped later
foreach($Servers as $ServerName)
{
    //Connect to the server
    $Link=@new mysqli($ServerName, $SlaveUserName, $SlavePassword);
    if($Link->connect_error)
        die(EHTML("Connection error to $ServerName server: $Link->connect_error"));

    //Get the replication status info from the server
    $MyServerInfo=$ServersInfo[$ServerName]=Array(
        'Master'=>$Link->Query('SHOW MASTER STATUS')->fetch_array(MYSQLI_ASSOC),
        'Slave'=>$Link->Query('SHOW SLAVE STATUS')->fetch_array(MYSQLI_ASSOC)
    );
    mysqli_close($Link); //Close the connection

    //Gather the column names
    foreach($ColsNames as $ColType => &$ColNames)
        foreach($MyServerInfo[$ColType] as $ColName => $Dummy)
            $ColNames[$ColName]=0;
}
unset($ColNames);

//Gather the pretty statuses
foreach($Servers as $Index => $ServerName)
{
    //Determine the pretty status
    $SlaveInfo=$ServersInfo[$ServerName]['Slave'];
    $MasterInfo=$ServersInfo[$Servers[($Index+1)%count($Servers)]]['Master'];
    if($SlaveInfo['Slave_IO_State']!='Waiting for master to send event')
        $PrettyStatus='Bad state: '.EHTML($SlaveInfo['Slave_IO_State']);
    else if(!isset($MasterInfo['Position']))
        $PrettyStatus='Cannot determine master’s real position';
    else if($SlaveInfo['Master_Log_File']!=$MasterInfo['File'])
        $PrettyStatus='On old master file';
    else
        $PrettyStatus='Bytes behind: '.($MasterInfo['Position']-$SlaveInfo['Read_Master_Log_Pos']);

    //Add the server name and pretty status to the output columns
    $ServersInfo[$ServerName]['Master']['Server Name']='<div class=ServerName>'.EHTML($ServerName).'</div>';
    $ServersInfo[$ServerName]['Slave']['Pretty Status']='<div class=PrettyStatus>'.EHTML($PrettyStatus).'</div>';
}

//Output the document
function EHTML($S) { return htmlspecialchars($S, ENT_QUOTES, 'UTF-8'); } //Escape HTML
?>
<!DOCTYPE html>
<html>
<head>
    <title>Replication Status</title>
    <meta charset="UTF-8">
    <style>
        table { border-collapse:collapse; }
        table tr>* { border:1px solid black; padding:3px; }
        th { text-align:left; font-weight:bold; }
        .ReplicationDirectionType { font-weight:bold; text-align:center; color:blue; }
        .ServerName { font-weight:bold; text-align:center; color:red; }
        .PrettyStatus { font-weight:bold; color:red; }
        .NotGiven { font-weight:bold; }
    </style>
</head>
<body><table>
<?
//Output the final table
foreach($ColsNames as $Type => $ColNames) //Process by direction type (Master/Slave) then columns
{
    print '<tr><td colspan='.(count($Servers)+1).' class=ReplicationDirectionType>'.$Type.'</td></tr>'; //Replication direction (Master/Server) type title column
    foreach($ColNames as $ColName => $Dummy) //Process each column name individually
    {
        print '<tr><th>'.EHTML($ColName).'</th>'; //Column name
        $IsHTMLColumn=isset($CustomFieldNames[$ColName]); //Do not escape HTML on custom fields
        foreach($ServersInfo as $ServerInfo) //Output the column for each server
            if($IsHTMLColumn) //Do not escape HTML on custom fields
                print '<td>'.$ServerInfo[$Type][$ColName].'</td>';
            else //If not a custom field, output the escaped HTML of the value. If the column does not exist for this server (different mysql versions), output "Not given"
                print '<td>'.(isset($ServerInfo[$Type][$ColName]) ? EHTML($ServerInfo[$Type][$ColName]) : '<div class=NotGiven>Not given</div>').'</td>';
        print '</tr>';
    }
}
?>
</table></body>
</html>

One final note: when having this script run, you might need to make sure none of the listed server names resolves to localhost (127.x.x.x), as MySQL may then use the local socket instead of TCP, which may not work with users who only have REPLICATION permissions and a wildcard host.

Cygwin install

Since I’m doing the new install thing, I figured I’d record some of my setup. So here is my cygwin install.

  • Internet utilities: wget, curl, ping, openssh, openssl-devel, mysql, nc
  • Program compilation stuff: gcc-g++, autoconf, pkg-config, automake
  • Programming languages: perl, python
  • Other utilities for programming: git, sqlite3
  • Text editor: nano

Mintty [non-default] options:
  • Text->Locale=en_US UTF-8
  • Mouse->Copy on select=Off
  • Mouse->Clicks place command line cursor=On

I also haven’t decided whether I will do apache or PHP via cygwin or windows installs yet. May do a post about that later.

Hardware performance speed tests

So I got a new computer back in April and have finally gotten around to doing some speed tests to see how different applications and settings affect performance/hard drive read speed.


The following is the (relevant) computer hardware configuration:
  • Motherboard: MSI Z87-GD65
  • CPU: Intel Core i7-4770K Haswell 3.5GHz
  • GPU: GIGABYTE GV-N770OC-4GD GeForce GTX 770 4GB
  • RAM: Crucial Ballistix Tactical 2*8GB
  • 2*Solid state drives (SSD): Crucial M500 480GB SATA 2.5" 7mm
  • 7200RPM hard drive (HDD): Seagate Barracuda 3TB ST3000DM001
  • Power Supply: RAIDMAX HYBRID 2 RX-730SS 730W
  • CPU Water Cooler: CORSAIR H100i
  • Case Fans: 2*Cooler Master MegaFlow 200, 200mm case fan

Test setup:

I started with a completely clean install of Windows 7 Ultimate N x64 to gather these numbers.

The first column is the boot time, from the time the "Starting Windows" animation starts to when the user login screen shows up, so the BIOS is not included. I used a stopwatch to get these boot numbers (in seconds), so they are not particularly accurate.

The second and third columns are the time (in seconds) to run a "time md5sum" on cygwin64 on a 1.39GB file (1,503,196,839 bytes), on the solid state (SSD) and 7200RPM (HDD) drives respectively. They are taken immediately after boot so caching and other applications using resources are not variables. I generally did not worry about running the tests multiple times and taking lowest case numbers. The shown millisecond fluctuations are within margin of error for software measurements due to context switches.


Results:

Boot times are affected at multiple steps, as seen below, but not too badly. The only thing that affected the md5sum was adding the hardware mirror raid on the SSDs, which dropped the time of the md5 by half. So overall, antivirus and system encryption did not have any noticeable effect on the computer's performance (at least regarding IO on a single file and number crunching).


Numbers:
What was added | Boot | SSD | HDD | Notes
Initial installation | 4 | - | - |
NIC Drivers and Cygwin | 7 | 4.664 | 8.393 | I'm not sure why the boot time jumped so much at this point. The initial number might have been a fluke.
All Windows updates + drivers + 6 monitors | 14 | 4.618 | 8.393 | The boot time jumped up a lot due to having to load all the monitors
Raid 1 mirror [Windows] on SSDs + no page file | 17 | 4.618 | 8.393 | This was removed once I realized Truecrypt could not be used on a dynamic disk (Windows software) RAID
Raid 1 mirror [hardware] on SSDs + no page file | 17 | 2.246 | 8.408 |
Truecrypt System Volume Encryption (SSD Raid Only) | 17-18 | 2.278 | 8.424 |
Antivirus | 18 | 2.324 | 8.408 | Kaspersky 2014